Trading earnings surprises with Estimize data

Update 6/23/2015: Please see https://www.quantopian.com/posts/earnings-drift-with-estimize for the newest version of this algorithm.

The folks at Quantopian were kind enough to give me a sneak preview of the latest data-set they're making available for backtesting: crowd-sourced earnings and revenue estimates from Estimize.

This backtest shows an example of trading earnings surprises for Netflix (NFLX) over the last 5 quarters using Estimize's crowd-sourced EPS estimates. The strategy is simple: when there is an earnings surprise that passes some set threshold (I started with +10%), buy and hold for 15 trading days. I've exposed as global variables both the THRESHOLD used to decide what size of earnings surprise to trade and the HOLD_PERIOD that says how long to hang on to the position.

This algo is set up to trade a single stock (like NFLX), a manually selected list of stocks, or a set_universe basket of stocks based on trailing dollar volume. I'd like to extend this proof of concept to do a market-neutral quantile analysis, and also to test the predictive power of the Estimize estimates by trading the day BEFORE the report on the % difference between the Estimize mean and the Wall Street mean (another field accessible in the Estimize test dataset). Currently I am defining the 'surprise' as the % difference between the Estimize mean and the actual reported EPS number.
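As a concrete sketch of that surprise definition (plain Python, outside the Quantopian API; the function name is mine):

```python
def eps_surprise(estimize_mean, actual_eps):
    """Surprise as used in this post: the % difference between the
    Estimize crowd mean and the actual reported EPS, signed so that
    a beat (actual above the crowd mean) is positive."""
    return (actual_eps - estimize_mean) / abs(estimize_mean)

# a 0.55 actual vs. a 0.50 crowd mean is roughly a +10% surprise,
# which would clear the +10% THRESHOLD used below
signal = eps_surprise(0.50, 0.55)
```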

Please clone this algo! I know that this algorithm can use a lot of improvement. I'd love to get some help exploring this data further.

Code note: to get at this dataset you'll need to use the new custom fetch_estimize method. Quantopian tells me the full documentation will come out in a day or two.

[Backtest attached: Clone Algorithm — 305 clones]
# This algo ingests a dataset of Estimize (www.estimize.com) quarterly earnings estimates from a Quantopian 
# repository and implements a simple earnings surprise trading strategy based on divergence
# between the Estimize mean EPS estimate and the company's reported actual EPS value. 

# Long-only Earnings Surprise Strategy: 

# Each day scan the investible universe for companies with earnings surprises > THRESHOLD
# -- Buy a fixed position in all companies with a qualifying event
# -- Hold for a predefined HOLD_PERIOD, then exit the position completely

# "estimize.eps.surprise" is a derived value computed inside Quantopian's fetch_estimize method, defined as:
# estimize.eps.surprise = (Estimize mean - 'Wallstreet' estimate ) / ('Wallstreet' estimate)

#import required libraries:
import pandas   
import collections
import datetime
import pytz

# define the desired holding period in trading days:
HOLD_PERIOD = 15
# threshold for categorizing an event as an earnings surprise (e.g. 0.05 = +5% surprise):
THRESHOLD = 0.1
#Initialize the timezone:
UTC = pytz.timezone('UTC')
    
def initialize(context):
    # fetch Estimize data dump using a customized variant of Fetcher built explicitly for this dataset:
    fetch_estimize()

    # initialize a list of stocks to trade; demo test set == (AAPL, AA, F, MSFT, NFLX):
    # context.stocks = [sid(24), sid(2), sid(2673), sid(5061), sid(23709)]
    context.stocks = [sid(23709)]
    # set_universe(universe.DollarVolumeUniverse(97, 99))

    # set max and min position sizes:
    context.max_notional = 500000.0
    context.tolerance = context.max_notional / 10
    context.min_notional = -1 * context.max_notional
    context.invested = 0

    # initialize a dict to track the holding period of each investment
    context.hold_tracker = collections.defaultdict(dict)

        
def handle_data(context, data):

    for stock in data:

        if 'estimize.eps.surprise' in data[stock] and 'price' in data[stock]:
            # pull out the Estimize surprise field that was fetched - this is our buy signal
            eps_surprise = data[stock]['estimize.eps.surprise']

            # record this variable for visual inspection (right now this will only plot the
            # surprise for the last stock looped over)
            record(eps_surprise=100 * eps_surprise)

            # pull out the most recent report date -
            # for this algo only trade when today == most recent report date
            most_recent_rpt_dt = data[stock]['reports_at']
            most_recent_rpt_dt = most_recent_rpt_dt.replace(tzinfo=UTC)  # make tz aware

            today = data[stock].datetime

            price = data[stock].price
            notional = context.portfolio.positions[stock].amount * price

            if notional > context.tolerance:
                context.invested = 1
            else:
                context.invested = 0

            # check if there are any open positions that have reached their exit trigger:
            if stock in context.hold_tracker:
                tracker_info = context.hold_tracker[stock]
                if tracker_info['holding']:
                    if tracker_info['days_held'] == HOLD_PERIOD:
                        shares = round(notional / price)
                        order(stock, -shares)
                        log.info('\n holding period expired, selling %s shares' % shares)
                        del context.hold_tracker[stock]
                    else:
                        tracker_info['days_held'] += 1

            # BUY if a stock passes the eps_surprise THRESHOLD and start the days_held tracker
            if eps_surprise > THRESHOLD \
                    and today == most_recent_rpt_dt \
                    and (context.max_notional - notional) > context.tolerance \
                    and context.invested == 0:

                shares = round((context.max_notional - notional) / price)

                order(stock, shares)
                log.info('\n %s' % stock)
                log.info('\n buy %s shares' % shares)
                context.hold_tracker[stock] = {
                    'holding': True,
                    'days_held': 0
                }

            # SELL position if a stock drops below the eps_surprise THRESHOLD and notional > 0
            elif eps_surprise < THRESHOLD and notional > 0:
                shares = round(notional / price)  # round to whole shares before ordering
                order(stock, -shares)
                log.info('\n failed to meet threshold, selling %s shares' % shares)
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

38 responses

That's very interesting. It looks like the Estimize data goes back to January 2012. Is that correct?

I wonder if we could get historical reported earnings as well for the entire Quantopian dataset. That would let us calculate P/E ratios.

Are there any other fundamentals from financial reporting that people would like to have?

That's right, Dennis, Estimize data is for 2012. It's a new company and they are working on their 6th quarter of data, I think.

The P/E ratio of various stocks is available through Quandl (example), but the point is well taken. We need to add more data and make it more easily accessible. I think this work with Fetcher, and then Estimize, is just the beginning. We also have a couple more neat ones coming out later this week.


That's great.

Can you give an example of how I can use Quandl and set_universe together?

If anyone is looking for system ideas, there have been some papers published about the dispersion of analyst estimates being a predictor of future stock returns and/or stock volatility.

Hello Jessica,

Without working through your algorithm in detail, I'm trying to get a feel for your approach. I've attached a backtest highlighting the most recent jump in the NFLX price, after an earnings announcement. It appears that the NFLX price jumps overnight, between April 22 & April 23, stepping from a close of $174.4 to an open of $216.25. So the price jump is instantaneous, on a minute level.

Even if there is an "earnings surprise" I don't understand how to take advantage of the information. Are you assuming that if NFLX beats the Estimize estimate by a significant margin, there will be price appreciation over a longer time scale (i.e. the price versus time curve will trend positive after the earnings surprise)?

As a side note to the Quantopian folks, it'd be nice if we could plot down to the minute level.

Grant

[Backtest attached: Clone Algorithm — 7 clones]
import numpy as np

# globals for get_data batch transform decorator
R_P = 0  # refresh period
W_L = 1 # window length

def initialize(context):
    
    context.stocks = [sid(23709)]

def handle_data(context, data):
    
    d = get_data(data, context.stocks)
    
    if d is None: 
        return
    
    current_price = data[context.stocks[0]].price
    
    timestamp = data[context.stocks[0]].datetime
    
    print ' ----------------------------'
    print timestamp
    print ' ----------------------------'
    
    print np.flipud(d)

    record(NFLX = current_price)
    
@batch_transform(refresh_period=R_P, window_length=W_L) # set globals R_P & W_L above
def get_data(datapanel,sids):
    return datapanel['price'].as_matrix(sids)

Great question Grant - there are lots of different strategies for trading earnings events. There is a market anomaly - well documented over the last 20 years or so in the finance literature - called 'post-earnings announcement drift'. So while you'll see the lion's share of price adjustment to an earnings surprise within the first day (or the first second of trading, or pre-market, etc.), you will often also see a continued trending of price in the direction of the earnings surprise for days or even weeks following the event. Of course this is at odds with efficient market theory - but it's easily observable. This algo as implemented above waits and trades the day AFTER the earnings surprise happens - and uses the % difference between the Estimize number and the actual reported value.

The other thing you can do is try to predict the earnings surprise and trade ahead of the event. This is the next thing I'd like to try with the Estimize data, and the current fetch_estimize method will actually already return the signal value you need - I think it's labeled estimize.delta. That is the % difference between the Estimize mean and the Wall Street number, and it's available to you at 4pm the day before report. There may be some date handling Quantopian needs to open up to enable this, and I will post an update to this thread when I do get that predictive version working.
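For anyone who wants to prototype that pre-report idea offline, here's a minimal sketch of the signal in plain Python (the function name and the 5% threshold are mine, not part of fetch_estimize):

```python
def pre_report_signal(estimize_mean, wallstreet_mean, threshold=0.05):
    """estimize.delta as described above: the % difference between the
    Estimize mean and the Wall Street mean.  Go long into the report when
    the crowd is meaningfully above the Street, short when meaningfully
    below, otherwise stand aside."""
    delta = (estimize_mean - wallstreet_mean) / abs(wallstreet_mean)
    if delta > threshold:
        return 1    # long into the report
    if delta < -threshold:
        return -1   # short into the report
    return 0        # no edge; stay flat
```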

I ran a backtest with the Estimize dataset in "minute" mode. The new Estimize information did not come in at 4pm but rather was available at the start of the next day. Maybe that is expected.

Also, there seems to be a problem with the date stamps. It looks like when there is new Estimize information the datetime gets clobbered by 'report_at'. As a result all of these timestamps report as midnight incorrectly:

get_datetime()
data[stock]['datetime']
data[stock]['dt']

And happen to match the value in data[stock]['report_at'].

In between Estimize data events (e.g. that don't have new Estimize information) all the timestamps appear to work normally and report the correct date and time.

Dennis, unfortunately Fetcher doesn't work very well when the data source and the algorithm are not at the same frequency - that is, minutely data in fetcher doesn't run well in a daily algo, and vice versa. Running daily fetcher in minutely mode algo can definitely work, but it has a few loose ends.

The root of the problem is that the daily data from fetcher gets placed at midnight UTC, but there are no trade events that happen at that time.

Jessica seemed to indicate that this data could be placed at 4pm. Can the Fetcher handle sparse data? E.g. an update at exactly 4pm but no other minute of the day?

Yes, Fetcher can do sparse data.

I think technically you'd want to do it at 9:30AM, because that's when the announced EPS information is first available and tradeable.

Right now, the fetch_estimize is collapsing all EPS announcements to be at midnight, which works well for daily data. To do what you're trying to do, we need to collapse the EPS announcements to 9:30AM. That's definitely possible using a custom pre-func, but I haven't yet written the documentation explaining how to do that ;) It's on my to-do list for tomorrow.
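Until that documentation lands, the idea can be sketched in plain Python (a real pre_func would apply this to every row of the fetched data; the 9:30 choice follows Dan's comment, and the function name is mine):

```python
from datetime import datetime, time

def collapse_to_open(ts):
    """Re-stamp a midnight-stamped EPS event at 9:30 AM the same day, so
    the event coincides with the first tradeable minute bar instead of a
    nonexistent midnight bar."""
    if ts.time() == time(0, 0):
        return ts.replace(hour=9, minute=30)
    return ts  # already intraday; leave it alone

# a midnight report stamp moves to the open of the same day
collapse_to_open(datetime(2013, 1, 25))  # -> 2013-01-25 09:30
```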

But if the EPS data is actually available at 4pm we should have a choice to make trading decisions right away. For Daily mode this is extra important since we only have one chance to put in orders for the next day.

You mean you want to trade when the actual earnings are released by the company? I don't think that's always 4pm, and I doubt you can get that information from estimize in a timely manner...

@Dennis - I actually have a query out to the Estimize folks on how best to handle their timestamps. Under the hood, the raw dataset has a millisecond timestamp for every estimate created. Estimize does lock down new estimates at 4pm on the day before report - but if you want to try to execute a trade on the day before report, what you'd really want is granular control over rolling up the individual estimates so you know you don't have look-ahead. I'm picturing something like: I want to take the Estimize mean in a 'point in time' fashion as it was at, say, noon, or 3:30pm, or whatever, and assume I can get that day's close price.

That said - I will let Dan speak to how the trade execution actually works in Quantopian's backtester. I believe if I place an order on day t=0 I get execution on day t=1, so you might actually need to either work around that or be more conservative and take the Estimize mean as of 9am, as Dan suggests, or earlier.

@Simon - the 4pm cutoff is when Estimize stops letting users log new/revised estimates prior to a company's report. I'm verifying that I have that correct, but that was my understanding from Leigh when we talked in NY the other night.

I guess I'm just trying to understand how the data is coming in.

For Daily mode I wanted to make sure the 'midnight' information was available in the daily event of the same date. That seems to be the case so there isn't any undue delay for making a trade.

However, I think I had things a bit backwards with regards to Minute mode. I just re-ran a test and sure enough there is a separate 'midnight' event that happens after 4pm and before 9:31am.

But the result seems to be that the EPS announcement for January 25 (end of day) is available before the start of trading which would be about 8 hours prior to the actual announcement. Is that what's happening? If so it tears a huge hole in the space-time continuum :)

I think to maintain both Daily and Minute mode accuracy the data event should be time-stamped 23:59 on January 25 instead of 00:00.
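Dennis's proposed fix can be sketched the same way (again, a pre_func would map this over the fetched rows; the function name is mine):

```python
from datetime import datetime

def stamp_end_of_day(report_dt):
    """Move a midnight-stamped EPS event to 23:59 on its report date, so a
    minute-mode backtest cannot see the number before it was actually
    announced -- closing the look-ahead hole described above."""
    return report_dt.replace(hour=23, minute=59)

# the January 25 announcement becomes visible only after that day's close
stamp_end_of_day(datetime(2013, 1, 25))  # -> 2013-01-25 23:59
```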

Ah yes, external data time-stamped too early will lead many people to systems that perform admirably in back-tests. :)

Hello Jessica & all,

I had the thought that the Estimize data might not be necessary to profit from drift after the price has jumped overnight. Might it work just as well to buy NFLX if the price jumps X% from one trading day to the next? Would the price experience the same upward drift regardless of the cause of the jump?

Grant

If anyone is interested in other ideas around trading strategies to build on top of the Estimize data here are a few below. There are also a lot of other ideas we've thrown around at Estimize, as well as some that our current quant clients who buy our API have told us they are looking into. Please feel free to respond here or hit me at [email protected].

  • We did some modeling around what happens when a company reports their EPS or Revenue in between a higher Estimize consensus number and a lower Wall Street consensus number. For example, Estimize consensus of .60, Wall Street consensus of .50, company reports .55. On the open of the following trading day we sold short the stock and closed the position at the close of that trading day. We also hedged each position with an equal dollar value of $SPY so that we were market neutral. To make this even more robust you may want to hedge with the sector ETF of the stock. We found that on average, each trade produced 1% of alpha, which is a lot. I would love to see someone recreate that strategy within Quantopian. This strategy may have some serious liquidity constraints though.

  • It turns out that Estimize data is more valuable leading up into the report than it is coming out of the report. One strategy you may want to test is to be long or short the stock somewhere between 1 and 10 trading days before the report depending on the delta between the Wall Street consensus and the Estimize consensus. It may also be interesting to look at the magnitude of that delta to determine position sizing, or just try and find where along that scale you can generate more alpha.

  • When I was a quant we ran a strategy that looked at earnings and price momentum on more of an intermediate time frame. I would look for candidates that have positive EPS and Revenue growth, above 20%, which have beat their Wall Street consensus numbers in the past 2-3 quarters, and which have a large delta between the Wall Street and Estimize consensus numbers this quarter. These stocks should show outsized momentum in the positive direction and can be traded on the 3-9 month timeframe. The Estimize dataset may be a little short to test this strategy effectively but it's definitely worth a shot.

Leigh Drogen
Founder and CEO of Estimize, also quant nerd who loves Quantopian
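Leigh's first setup can be expressed as a simple screen (sketch only; the function names and the equal-dollar SPY hedge sizing are my additions):

```python
def between_consensus_short(estimize_mean, street_mean, actual_eps):
    """True when the company reports between a higher Estimize consensus
    and a lower Wall Street consensus -- the short-at-next-open,
    cover-at-that-close setup described above."""
    return street_mean < actual_eps < estimize_mean

def spy_hedge_shares(stock_shares, stock_price, spy_price):
    """Equal-dollar market-neutral hedge: offset the stock position with
    the same dollar value of SPY in the other direction."""
    return round(stock_shares * stock_price / spy_price)
```

With Leigh's example numbers, an Estimize consensus of .60, a Street consensus of .50, and a reported .55 triggers the short.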

Great ideas, Leigh. Thanks for posting.

I think the current Quantopian dataset for Estimize is lacking some features. It only seems to have events on the earnings announcement date.

I'd love to see the delta leading up to the earnings announcement so I could try your 2nd and 3rd idea.

Folks,

Here's a quick and dirty algorithm that just uses the overnight price jump of NFLX to decide if it should be bought and held for 15 days. If the price jumps by more than 10%, I buy an even, minimum lot of 100 shares and hold for 15 days, after which I sell the lot. During the 15-day hold period, I do not allow additional buying. I tweaked the initial capital so that no borrowing from the Bank of Quantopian would be required. The commissions are set to $1 per trade, which I assume is realistic with a Quantopian/Interactive Brokers account.

It is not clear if the gains are due to actual post-earnings drift, or if the price is just appreciating due to other causes during the 15-day holding periods.

Grant

[Backtest attached: Clone Algorithm — 619 clones]
hold_period = 15

def initialize(context):
    
    context.stocks = [sid(23709)]
    context.previous_price = 1
    
    context.initialize = True
    context.event_day = 0
    context.new_day = False
    context.day_counter = 0
    context.bought = False
    context.day_submitted = 0
    
    set_commission(commission.PerTrade(cost=1.0))
  
def handle_data(context, data):
    
    current_price = data[context.stocks[0]].price
    event_day = data[context.stocks[0]].datetime.day
    
    if context.initialize:
        context.event_day = event_day
        context.initialize = False 
    
    if event_day != context.event_day:
        context.new_day = True
    else:
        context.new_day = False
        
    if context.new_day:
        context.day_counter = context.day_counter + 1
        price_diff = 100*(current_price/context.previous_price-1)
        record(price_diff = price_diff)
        
        if (price_diff > 10) and not(context.bought):
            order(context.stocks[0],100)
            context.bought = True
            print context.bought
            context.day_submitted = context.day_counter
        elif context.day_counter-context.day_submitted == hold_period and context.bought:
            order(context.stocks[0],-100)
            context.bought = False
            print context.bought
    
    context.previous_price = current_price
    context.event_day = event_day

Great idea Grant. This brings up another strategy that should be explored....

When companies report earnings that significantly beat estimates, and gap up the following day more than 5%, it is extremely important to watch the price action during that day for clues as to the post earnings drift over the next few weeks. In my experience, when a stock gaps up significantly after the beat and is sold throughout the day, closing well below its opening price, it means that institutions were taking advantage of the markup to distribute stock, not accumulate more. This is problematic for future higher prices as the stock no longer has institutional support at those levels. We saw this happen with Pandora on Friday. This usually signals the conclusion of a specific trade that funds have put on, and a tough quarter at least for the stock ahead.

But when a stock gaps up significantly and closes significantly above where it opened, it signals that something fundamental has caught the funds off guard and they are willing to buy the stock at even higher prices because they believe it is just the beginning of the move. When I ran Surfview Capital we always looked for this price action as an indication of future higher prices on the intermediate-term time frame.

I would love to see someone write an algo that took advantage of the post earnings drift from stocks that opened >5% higher and closed >2% above their open. You should also couple this with a beat of their earnings estimates. The optimal trading time if I had to guess would be 20 trading days. And I would make it market neutral, so hedge with a short $SPY position.
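A sketch of that screen in plain Python (the thresholds come straight from Leigh's post; the function name is mine):

```python
def gap_drift_candidate(prev_close, day_open, day_close,
                        gap_pct=5.0, strength_pct=2.0):
    """True when the stock gaps up more than gap_pct over the prior close
    AND closes more than strength_pct above its own open -- the
    accumulation pattern, as opposed to a gap that gets sold all day."""
    gap = 100.0 * (day_open / prev_close - 1.0)
    strength = 100.0 * (day_close / day_open - 1.0)
    return gap > gap_pct and strength > strength_pct
```

For example, a stock that closed at 100, opened at 106 (+6% gap), and closed at 109 (about +2.8% above its open) qualifies; the same gap sold down to 103 does not.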

@Dennis C

You can build a time series of Estimize estimates as all individual estimates are included and timestamped. If you wanted to see the delta between Estimize and Wall Street let's say 10 days before the report you should have no problem doing that.

Thanks Leigh,

I'll have another, more careful read-through of your recommendations above and get back to you with some questions. Some thought needs to be put into the coding, so that it is cleanly scalable to multiple securities.

Grant

Hello Leigh,

Some questions:

  1. To be specific, when you say "opened >5% higher," do you mean compared to the closing price of the prior trading day? Or should the baseline be computed using the prior price history (e.g. a moving average)? And should the closing price of the first minute of the new trading day be used? Or would it be better to use the data over many minutes of the new day (e.g. the first 30 minutes of trading)?
  2. Regarding the test for "closed >2% above their open," it implies that an order would not be placed until the following day (the day after the price jump is detected). Correct?
  3. You say that I "should also couple this with a beat of their earnings estimates." How would you recommend incorporating this information into the algorithm?
  4. Instead of hedging with a short SPY position, would it be equivalent to buy SH (ProShares Short S&P 500)?
  5. Regarding market neutrality, it seems that I'd need an estimate for the influence of the overall market on the individual security price, right? What is the recipe for determining the amount of the hedge? For example, if I buy 100 shares of NFLX, how many shares of SPY should I short to make it a market-neutral position?

Grant

  1. yes, compared to closing price of prior day. Closing price of the first minute.

  2. I would place the order for market on close if the stock closes >2% above its opening price, don't wait until the next day to execute. If you can't do that, execute the order at 3:58 PM.

  3. This one is a little more fuzzy.

  4. I made a grammatical error in my post, you should be hedging with a LONG SPY position against your short stock position.

  5. The most simplistic way to be market neutral is to hedge with the same dollar amount in the other direction. This doesn't account for beta though. If you wanted to do that you would compute the beta of your long portfolio at any time and need to hedge with the same amount of beta. Getting even more detailed, you could hedge with a similar dollar value of the sector ETF for each position.
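Leigh's answer to #5 reduces to a one-line sizing rule; a sketch (hypothetical function name; beta=1.0 gives the simple dollar-neutral case):

```python
def hedge_shares(stock_shares, stock_price, hedge_price, beta=1.0):
    """Number of hedge-instrument shares to hold in the opposite direction
    of the stock position.  beta=1.0 is the plain equal-dollar hedge;
    passing the stock's beta gives the beta-neutral version."""
    return round(beta * stock_shares * stock_price / hedge_price)
```

So 100 shares of a $200 stock hedged with a $150 instrument needs about 133 hedge shares dollar-neutral, or 200 shares if the stock's beta is 1.5.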

Thanks Leigh,

I think Quantopian has added some order types, perhaps including market-on-close--I'll need to investigate. In any case, getting an order filled prior to close can be kludged.

Regarding the market-neutral constraint, as I understand, you are saying that if the stock price is expected to drift upward due to an earnings jump, then a long position would be established, offset by a short position in SPY. Alternatively, if the stock price is expected to drift downward, then the stock would be shorted, with an offset long position in SPY. Correct? As I see it, if the market-neutral constraint is applied perfectly, then any gains should be due to the effect of the earnings announcement alone, and not due to the effect of the overall market on the stock. Is my thinking correct?

Grant

Right, you want to isolate the earnings drift alpha factor. You can calculate the beta of the stock to some hedge instrument (sector etf?) using the rolling variance and covariance as I did in my hedged volatility risk premium system (no guarantees I did it right!), or with a rolling linear regression of the stock vs index returns.

The reason you want to isolate the alpha is that once you have a bunch of systems with isolated and uncorrelated alpha returns, you can stack them and get the benefits of diversification among them, the central limit theorem applies, etc. If they all have residual beta, then you can't (as easily) combine them.

Thanks Simon,

If you have an example (e.g. your "hedged volatility risk premium system") please point me to it.

Grant

import numpy
import pandas

@batch_transform(window_length=21, refresh_period=1)
def get_beta(datapanel, spy_sid, other_sid):
    spy = datapanel['close_price'][spy_sid]
    log_spy = numpy.log(spy)
    ret_spy = log_spy.diff()
    other = datapanel['close_price'][other_sid]
    log_other = numpy.log(other)
    ret_other = log_other.diff()

    n = 5
    V_M = pandas.stats.moments.rolling_var(ret_spy, n)
    last_V_M = V_M[-1:].ix[0]
    # maybe there's a better covariance estimator too?
    Cov_A = pandas.stats.moments.rolling_cov(ret_other, ret_spy, n)
    last_Cov_A = Cov_A[-1:].ix[0]
    unadj_Beta = last_Cov_A / last_V_M
    alpha = 0.0
    # could this alpha be dynamically tuned to adjust to the
    # strategy's ex post beta?
    adj_Beta = alpha + (1. - alpha) * unadj_Beta
    return adj_Beta

From the final backtest posted on https://www.quantopian.com/posts/system-based-on-easy-volatility-investing-by-tony-cooper-at-double-digit-numerics

Note that this is only doing a 5-day variance/covariance estimate of the beta; you probably want something much longer, and if I had to do it again, I would probably do a rolling linear regression, which seems more robust but is probably equivalent.
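For reference, the regression alternative reduces to a least-squares slope, which over the same window is algebraically the same cov/var ratio; a sketch with numpy (my function name, not from the algo above):

```python
import numpy as np

def regression_beta(stock_returns, index_returns):
    """Beta as the least-squares slope of stock returns regressed on index
    returns; equal to cov(stock, index) / var(index) over the window."""
    x = np.asarray(index_returns, dtype=float)
    y = np.asarray(stock_returns, dtype=float)
    x_c = x - x.mean()
    return float(np.dot(x_c, y - y.mean()) / np.dot(x_c, x_c))
```

A stock whose returns are exactly twice the index's comes out with a beta of 2.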

Hello Leigh & Simon,

When I look at NFLX versus SPY here, it's hard to imagine that a "market neutral" hedge could be concocted for NFLX based on SPY. It appears to me that NFLX is on its own planet. Or am I missing something?

As a counter-example, looking at WMT & SPY here it seems reasonable that a significant component of WMT's price variation is due to the overall market.

Grant

Well, true. :) You want something that is very similar to your instrument, but that didn't just release its earnings.
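One mechanical way to pick such a hedge is to rank candidate instruments by trailing return correlation with the traded name and exclude anything that just reported. A sketch on synthetic data (the tickers and the `just_reported` set are made up for illustration, not a recommendation):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# hypothetical daily returns: a shared market/sector component plus idiosyncratic noise
common = rng.normal(0, 0.01, 252)
rets = pd.DataFrame({
    "NFLX": common + rng.normal(0, 0.01, 252),
    "AMZN": common + rng.normal(0, 0.005, 252),
    "DIS":  common + rng.normal(0, 0.02, 252),
})

just_reported = {"DIS"}  # names to exclude because they just released earnings

# correlation of every candidate with the traded name
corr = rets.corr()["NFLX"].drop("NFLX")
candidates = corr.drop(labels=list(just_reported), errors="ignore")
hedge = candidates.idxmax()  # most correlated surviving candidate
print(hedge)
```

A sector ETF works the same way and sidesteps the single-name earnings problem entirely, at the cost of a looser fit to the instrument you're hedging.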

Here's my overnight jump detector algorithm run on CREE instead of NFLX. I also changed the buy criterion to > 5% price jump overnight.

--Grant

hold_period = 15

def initialize(context):
    
    # context.stocks = [sid(23709)] # NFLX
    context.stocks = [sid(8459)] # CREE
    
    context.previous_price = 1
    
    context.initialize = True
    context.event_day = 0
    context.new_day = False
    context.day_counter = 0
    context.bought = False
    context.day_submitted = 0
    context.shares = 0
    
    set_commission(commission.PerTrade(cost=1.0))
  
def handle_data(context, data):
    
    current_price = data[context.stocks[0]].price
    event_day = data[context.stocks[0]].datetime.day
    
    # prime the previous-day tracker on the very first bar
    if context.initialize:
        context.event_day = event_day
        context.initialize = False 
    
    # a change in calendar day marks the first bar of a new trading day
    if event_day != context.event_day:
        context.new_day = True
    else:
        context.new_day = False
        
    if context.new_day:
        context.day_counter = context.day_counter + 1
        # overnight jump: % change from yesterday's last price to today's first
        price_diff = 100*(current_price/context.previous_price-1)
        record(price_diff = price_diff)
        record(current_price = current_price)
        
        if (price_diff > 5) and not(context.bought):
            # jump exceeds 5%: enter and remember which day we entered
            order(context.stocks[0],100)
            context.bought = True
            print context.bought
            context.day_submitted = context.day_counter
        elif context.day_counter-context.day_submitted == hold_period and context.bought:
            # exit after the fixed holding period
            order(context.stocks[0],-100)
            context.bought = False
            print context.bought
    
    context.previous_price = current_price
    context.event_day = event_day
    
    

Estimize's data looks mighty exciting. If anyone is interested, I can share with you some Quantpedia data on earnings anomalies, which might help fill some of the historical gaps.

I've also looked into some robust historical fundamental data sources:
- I/B/E/S, now owned by Reuters, seems to be the authoritative source on estimates data, going back 40+ years (though I dread to think of its price). Top academic institutions might have access.
- Tradestation's fundamental data is so-so; you have to do some magic in EasyLanguage to extract anything useful.
- Zacks Research Wizard has good data, but their UI is rather rigid. I would love to hack into their database (but advise against this for obvious ethical reasons). The price tag is manageable for the retail investor if the earnings anomaly is indeed robust. Their database updates weekly and allows for simple backtesting.
- Portfolio123 looks promising, but I haven't tried it yet. Pricing is similar to Zacks RW.

I'm also looking for feedback on an earnings-related blog post that I just wrote. Comments are encouraged!

It seems fetching Estimize data no longer works. Is there currently a way to backtest using Estimize data?

Hi Barry,

The updated version can be found here: https://www.quantopian.com/posts/earnings-drift-with-estimize with a bigger universe size and accompanying research notebook here: https://www.quantopian.com/posts/research-stepping-through-crowdsourced-earnings-data-with-estimize


Hi Seong,

When I try to run the backtest in the link you provided it gives this error:

AttributeError: 'int' object has no attribute 'symbol'
... USER ALGORITHM:262, in my_universeGo to IDE
symbols = [s.symbol for s in sids]
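For what it's worth, that traceback suggests the list being iterated contains raw integer sids rather than Security objects carrying a `.symbol` attribute. A generic defensive pattern for this kind of mixed list (the `Security` class and `lookup` mapping here are stand-ins for illustration, not the actual fix Quantopian posted):

```python
class Security(object):
    """Minimal stand-in for an object that exposes a .symbol attribute."""
    def __init__(self, symbol):
        self.symbol = symbol

def to_symbols(items, lookup):
    """Return ticker symbols, tolerating raw integer ids mixed with objects."""
    out = []
    for s in items:
        if isinstance(s, int):
            out.append(lookup[s])  # map a raw integer id to its ticker
        else:
            out.append(s.symbol)   # object already carries the ticker
    return out

# hypothetical id-to-ticker mapping matching the sids used earlier in the thread
lookup = {23709: "NFLX", 8459: "CREE"}
print(to_symbols([Security("SPY"), 23709, 8459], lookup))  # → ['SPY', 'NFLX', 'CREE']
```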

Hi Barry, I posted a fix here: https://www.quantopian.com/posts/earnings-drift-with-estimize.

Thanks!