This is an updated, Pipeline version of David Edwards' Long/Short Cross-Sectional Momentum strategy. If you aren't familiar with long/short strategies, I would highly recommend going over the full lecture before exploring this algorithm. The basics of the strategy are as follows:
> It looks at an N day window of M day returns on a basket of large cap stocks, then the cross-sectional average for each day is subtracted out. It then uses the average of the result as a ranking for the universe, long the top and short the bottom in equal amounts.
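The ranking described above can be sketched in a few lines of pandas. This is a minimal illustration on a made-up price matrix (the tickers, prices, and window sizes here are hypothetical, not the algorithm's actual universe or parameters):

```python
import numpy as np
import pandas as pd

# Toy close prices: 6 days x 3 hypothetical tickers.
prices = pd.DataFrame({
    "AAA": [10.0, 10.2, 10.5, 10.4, 10.9, 11.2],  # trending up
    "BBB": [20.0, 19.8, 19.5, 19.9, 19.4, 19.0],  # trending down
    "CCC": [15.0, 15.1, 15.0, 15.2, 15.1, 15.3],  # roughly flat
})

M = 2  # M-day returns (illustrative; the algorithm uses a longer window)

# M-day log returns; the first M rows are NaN and get dropped.
log_prices = np.log(prices)
R = (log_prices - log_prices.shift(M)).dropna()

# Subtract the cross-sectional (per-day) average out of each data point,
# then average over the remaining window to get one score per stock.
demeaned = R.sub(R.mean(axis=1), axis=0)
scores = demeaned.mean()

# Long the top-ranked names, short the bottom-ranked ones.
print(scores.sort_values(ascending=False))
```

After demeaning, each day's scores sum to zero across the universe, so the final ranking measures performance relative to the basket rather than absolute returns.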
Since this algorithm now uses Pipeline, it's able to screen a much larger universe of securities on a daily basis. By doing so, I saw performance improvements compared to the original, which only used the top 500 securities by market cap.
Here is the full list of changes:
- This algorithm uses Pipeline to calculate the cross-sectional average looking at the top 1,000 market cap securities
- Capital base is 1,000,000 versus 100,000
- There is a minimum liquidity floor: average dollar volume greater than $10,000,000 over the past 30 trading days
- Trading frequency is set to twice a month (at the start and middle of each month)
- Stocks with earnings announcements are excluded from the trading universe using EventVestor’s Earnings Calendar data feed to reduce volatility.
- Stocks with news sentiment that contradicts the main Returns factor are excluded from the universe using Accern's Alphaone data feed.
- The drawdown and volatility were low enough that I felt comfortable increasing the leverage to 2.0
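The long/short bucket selection and sentiment exclusion described above can be sketched outside of Quantopian with plain pandas. The DataFrame below uses randomly generated stand-ins for the pipeline's `returns`, `article_sentiment`, and `impact_score` columns (the thresholds mirror the algorithm; the data is fabricated for illustration):

```python
import numpy as np
import pandas as pd

# Fabricated stand-in for the daily pipeline output.
rng = np.random.default_rng(0)
n = 200
results = pd.DataFrame({
    "returns": rng.normal(size=n),                     # momentum score
    "article_sentiment": rng.uniform(-1, 1, size=n),   # stand-in for Accern field
    "impact_score": rng.uniform(0, 100, size=n),       # stand-in for Accern field
})

# Bottom/top 5% of the momentum score become short/long candidates.
lower, upper = results["returns"].quantile([0.05, 0.95])
shorts = results[results["returns"] <= lower]
longs = results[results["returns"] >= upper]

# Drop candidates whose news sentiment contradicts the trade direction:
# strongly positive sentiment knocks out shorts, strongly negative
# sentiment knocks out longs.
shorts = shorts[shorts["article_sentiment"] * shorts["impact_score"] < 5]
longs = longs[longs["article_sentiment"] * longs["impact_score"] > -5]

print(len(shorts), "shorts,", len(longs), "longs")
```

Because the sentiment filter only removes names, the buckets can end up with different sizes; the algorithm compensates by dividing each side's leverage by its own bucket size when placing orders.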
As David mentions in the original post, this algorithm can serve as a pretty good template to begin layering on top of. I expect to be posting a few variations (and encourage you to do the same), so follow this thread for more!
""" It looks at an N day window of M day returns on a basket of large cap stocks, then the cross-sectional average for each day is subtracted out. I then use the average of the result as a ranking for the universe, long the top and short the bottom in equal amounts. """ import numpy as np import pandas as pd import datetime from quantopian.algorithm import attach_pipeline, pipeline_output from quantopian.pipeline import Pipeline from quantopian.pipeline.data import morningstar from quantopian.pipeline.data.builtin import USEquityPricing from quantopian.pipeline.factors import CustomFactor, AverageDollarVolume # Both Free & Paid versions will be accessed through the same # namespace # The sample data feed is available from 01 Jan 2007 - 24 Mar 2014 # The full data feed is available for $5/mo at: # https://www.quantopian.com/data/eventvestor/earnings_calendar from quantopian.pipeline.factors.benzinga import BusinessDaysUntilNextEarnings, BusinessDaysSincePreviousEarnings # For use in your algorithms # Using the full paid dataset in your pipeline algo # from quantopian.pipeline.data.accern import alphaone # Using the free sample in your pipeline algo # 26 Aug 2012 - 30 Mar 2014 from quantopian.pipeline.data.accern import alphaone_free as alphaone class Returns(CustomFactor): inputs = [USEquityPricing.close] window_length = 300 def compute(self, today, assets, out, prices): # Getting the range of indexes so we can reindex later index = range(0, len(prices)) # Calculated a shifted return prices = pd.DataFrame(np.log(prices)).dropna(axis=1) R = prices - prices.shift(50) R = R[np.isfinite(R[R.columns])].fillna(0) # Subtracts the cross-sectional average out of each data point on each day. ranks = (R.T - R.T.mean()).T.mean() # Fill in nan values so we can drop them later ranks = ranks.reindex(index, fill_value=np.nan) out[:] = np.array(ranks) def make_pipeline(): """ Create and return our pipeline. 
We break this piece of logic out into its own function to make it easier to test and modify in isolation. In particular, this function can be copy/pasted into research and run by itself. """ pipe = Pipeline() returns = Returns() pipe.add(returns, "returns") pipe.add(alphaone.article_sentiment.latest, "article_sentiment") pipe.add(alphaone.impact_score.latest, "impact_score") # EarningsCalendar.X is the actual date of the announcement # E.g. 9/12/2015 # pipe.add(EarningsCalendar.next_announcement.latest, 'next') # pipe.add(EarningsCalendar.previous_announcement.latest, 'prev') # BusinessDaysX is the integer days until or after the closest # announcement. So if AAPL had an earnings announcement yesterday, # prev_earnings would be 1. If it's the day of, it will be 0. # For BusinessDaysUntilNextEarnings(), it is common that the value # is NaaN because we typically don't know the precise date of an # earnings announcement until about 15 days before ne = BusinessDaysUntilNextEarnings() pe = BusinessDaysSincePreviousEarnings() # The number of days before/after an announcement that you want to # avoid an earnings for. avoid_earnings_days = 15 # Create and apply a filter representing the top 1000 equities by MarketCap # every day. 
mkt_cap = morningstar.valuation.market_cap.latest mkt_cap_top = mkt_cap.top(1000) # Liquidity floor dollar_volume = AverageDollarVolume(window_length=30) # Set our screens pipe.set_screen((ne.isnan() | (ne > avoid_earnings_days) | (pe > avoid_earnings_days)) & (dollar_volume > 10**7) & mkt_cap_top) return pipe def initialize(context): # Create our pipeline attach_pipeline(make_pipeline(), name='scores') # Set our leverage variables context.longleverage = 1.0 context.shortleverage = -1.0 # Trade biweekly schedule_function(trade, date_rule=date_rules.month_start(), time_rule=time_rules.market_open(minutes=20)) schedule_function(trade, date_rule=date_rules.month_start(days_offset=15), time_rule=time_rules.market_open(minutes=20)) def trade(context, data): # Order our securities and exit any stocks that are not a part # of our daily universe for stock in context.shorts.index: if stock in data: order_target_percent(stock, context.shortleverage/ len(context.shorts)) for stock in context.longs.index: if stock in data: order_target_percent(stock, context.longleverage / len(context.longs)) for stock in context.portfolio.positions: if stock not in context.longs.index and \ stock not in context.shorts.index: order_target(stock, 0) def before_trading_start(context, data): # Get the top 7% and bottom 7% of performers and set them as our # longs and shorts results = pipeline_output('scores').dropna() lower, upper = results['returns'].quantile([.05, .95]) context.shorts = results[results['returns'] <= lower] context.shorts = context.shorts[context.shorts['article_sentiment']*context.shorts['impact_score'] < 5] context.longs = results[results['returns'] >= upper] context.longs = context.longs[context.longs['article_sentiment']*context.longs['article_sentiment'] > -5] update_universe(context.longs.index | context.shorts.index) def handle_data(context, data): # Record our leverage and exposure leverage=context.account.leverage exposure=context.account.net_leverage 
record(leverage=leverage, exposure=exposure)