Long-only value momentum strategy with trend filter - first algo

I have been on Quantopian for a while and really enjoy the community. This is my first algo: a momentum strategy (buy winners, sell losers). I am trying to summarise my recent research here.

Some key items in this algo (see the sketch after this list):
1. Ranking based on three factors: Size: market cap > $2B; Value: low ev_to_ebitda; Momentum: past 2-to-12-month cumulative return
2. Trend filter: only open trades when the current SPY price is above its 200-day moving average (mavg(200)); when it drops below, close all positions and switch to TLT (a Treasury ETF)
3. Trade: long the top 30 stocks in the ranking
4. Rebalance: monthly (first day of each month)
5. Weight: equal weight
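
As a quick illustration of rules 1, 3 and 5, here is a minimal standalone sketch using plain pandas outside the Quantopian API; the prices DataFrame is hypothetical (rows = dates, columns = tickers). The trend filter of rule 2 is sketched separately further down the thread.

import pandas as pd

def mom_2_12(prices):
    # Past 2-to-12-month cumulative return: price 21 trading days ago
    # versus price 252 trading days ago (the most recent month is skipped).
    return prices.iloc[-21] / prices.iloc[-252] - 1.0

def top30_equal_weight(prices, n=30, gross=0.95):
    # Rank by momentum, keep the top n, and assign equal weights
    # with a small cash buffer.
    ranks = mom_2_12(prices).dropna().sort_values(ascending=False)
    top = ranks.index[:n]
    return pd.Series(gross / n, index=top)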

Other factors I want to investigate:
1. Ranking logic (tricky): size (large, mid, small), value (book_to_market, ebit_to_ev, P/E, etc.), momentum (trend signal, past 3-month adjusted log-return slope from Stocks on the Move, etc.), other (low beta, low volatility, etc.)
2. Weight: EW, VW, ERC, MVO (see the weighting sketch after this list)
3. Rebalance frequency: weekly, monthly, quarterly? (monthly seems reasonable considering commissions and momentum continuation)
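
For reference, here is a minimal sketch of two of these weighting schemes: equal weight, and a volatility-based approximation of equal risk contribution (the inverse-variance weighting used in create_weigth below). The rets DataFrame of daily simple returns is hypothetical.

import pandas as pd

def equal_weight(stocks, gross=0.95):
    # EW: the same weight for every selected stock.
    return pd.Series(gross / len(stocks), index=stocks)

def inverse_variance_weight(rets, gross=0.95):
    # Approximate ERC (exact only for uncorrelated assets): weight each
    # stock by 1/variance, normalized to the target gross exposure.
    iv = 1.0 / rets.std() ** 2
    return gross * iv / iv.sum()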

Ideas come from several papers (among others):
1. Value and Momentum Everywhere, Clifford Asness et al., 2013.
http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf
2. Stocks on the Move: Beating the Market with Hedge Fund Momentum Strategies, Andreas F. Clenow, 2015.
http://www.stocksonthemove.net/book-images/#prettyPhoto

The code implementation is based on several previous Quantopian posts (among others):
1. Equity Long-Short by Simon Thornington
https://www.quantopian.com/posts/equity-long-short
2. EV/EBITDA value, then momentum by Johnny Wu
https://www.quantopian.com/posts/ev-slash-ebitda-value-then-momentum
3. Risk Budgeting to improve performance
https://www.quantopian.com/posts/risk-budgeting-to-improve-performance

I will go through some backtests and share some thoughts. Since this is my first algo, please advise if something is wrong.

[Backtest attached]


# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.

"""
Long term Value Momentum strategy

This algorithm screens all stocks and ranks stocks based on three factors:

Size: market capital > $2B
Value: high ebit_to_ev 
Momentum: Pass 2 to 12 month cumulative return, past 63 days log return slope

other factors:
Volatility: 30 day high - low range < 15%    (not use)
filter: current < mvag(200) clean positions and stich to TLT (check) 

Rebalance: 
monthly (first day of each month)

Weight:
MVO: Mean variance optimization weight
EW: Equal weight (check)
VW: Value weight
ERC: Equal risk contributiaion: Risk parity, based on volatility

"""
from operator import itemgetter
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import datetime
import math
import talib

def initialize(context):
    context.alloc = pd.Series()
    context.score = pd.Series()
    context.benchmark = sid(8554)             # SPY: SPDR S&P 500 ETF Trust
    context.TLT = sid(23921)                  # TLT: iShares 20+ Year Treasury Bond ETF
    context.last_month = -1 
    set_commission(commission.PerShare(cost = 0.13, min_trade_cost = 1.3))   # can be changed; modeled on IB's US commission schedule
    context.leverage = 1.0  #leverage = 1
    schedule_function(rebalance, 
                      date_rules.month_start(), 
                      time_rules.market_close(minutes=1))                   #1 minute before market close
#
# Symbol selection for value momentum
# - Only query once a month (for backtest performance)
# - Query fundamentals database for largest companies above market cap limit
# - Ensure data available for all active positions
#

def add_ebit_ev(df):
    ev = df['enterprise_value']
    ev[ev < 0.0] = 1.0                       # clamp negative enterprise values (avoids sign flips in ebit/EV)
    df['enterprise_value'] = ev
    df['ebit_ev'] = df['ebit'] / df['enterprise_value']
    return df

def add_book_market(df):
    # book-to-market = book value per share * shares outstanding / market cap
    df['book_to_market'] = df['book_value_per_share'] * (df['shares_outstanding'] * 1.0) / (df['market_cap'] * 1.0)
    return df

def before_trading_start(context):
    #only query database at the beginning of the month
    month = get_datetime().month
    if context.last_month == month:#
        return
    context.last_month = month
    
    fundamentals_df = get_fundamentals(
        query(
            # put your query in here by typing "fundamentals."
            fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.book_value_yield,
            fundamentals.asset_classification.morningstar_sector_code, 
            fundamentals.valuation.enterprise_value, 
            fundamentals.income_statement.ebit,          
            fundamentals.income_statement.ebitda)
        .filter(fundamentals.valuation.market_cap > 2e9)
        .filter(fundamentals.valuation_ratios.ev_to_ebitda > 0)
        .filter(fundamentals.valuation.enterprise_value > 0)
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207)
        .filter(fundamentals.valuation.shares_outstanding != None)
        .order_by(fundamentals.valuation_ratios.ev_to_ebitda.asc())
        .limit(100)).T
    
    context.universe = np.union1d(fundamentals_df.index.values, [context.benchmark])
    # Keep only the stocks that fit the criteria
    context.stocks = [stock for stock in fundamentals_df]
    # Update context.fundamental_df with the securities that we need
    context.fundamentals_df = fundamentals_df[context.stocks]
    update_universe(context.universe)
     
# Allocation 
# - Retrieve the acquirer's multiple for all tracked stocks
# - Rank all tracked stocks based on score
# - Slope of the past ~3 months (63 trading days) of log returns, weighted by the R^2 of the linear fit
# - Select the top 30 stocks in the ranking
# - Risk parity weight: each stock contributes equal risk to the portfolio

def rebalance(context, data):

    SPY_50_mavg = data[context.benchmark].mavg(50)
    SPY_200_mavg = data[context.benchmark].mavg(200)
    SPY_current = data[context.benchmark].price
    portfolio_positions = context.portfolio.portfolio_value 
    portfolio_cash = context.portfolio.cash
        
    #update long list and short list
    indicator = "MOM2_12_cul_return"
    #indicator = "90_return_slope"
    context.longlist, context.shortlist = gennerate_MomList(context, data, indicator)
   
    #Weighting
    #method = 'ERC'            #equal risk contribution
    method = 'EW'              #equal weight
    context.alloc = create_weigth(context, data, method) 
    print context.alloc
    #if (SPY_50_mavg < SPY_200_mavg):
    #    w_short = 0.99/len(context.shortlist)                #equal weight size for long position
    print "reb long %d symbols" % len(context.longlist)
    #print "reb short %d symbols" % len(context.shortlist)
 
    # With trend filter
    if SPY_current > SPY_200_mavg and portfolio_cash != 0: 
        for stock in context.portfolio.positions:
        #if stock not in context.shortlist:
               if stock not in context.longlist:
                    order_target(stock, 0)    
        #Order
        for stock in context.longlist:
               if stock in data:
                    order_target_percent(stock, context.alloc[stock])
                    
    elif (SPY_current < SPY_200_mavg) and portfolio_positions !=0:
        for stock in context.portfolio.positions:
               if stock in data and stock != context.TLT:
                    order_target_percent(stock, 0)
        order_target_percent(context.TLT, 0.95)
    
             
def gennerate_MomList(context, data, indicator):
    if indicator == "MOM2_12_cul_return":
        df_252_prices = history(252,'1d','price')
        #Calculate the past MOM2-12 cumulative return; skip the most recent month to avoid the 1-month reversal in stock returns
        MomList = []
        for stock in df_252_prices:
            MOM2_12_return = (df_252_prices[stock][-21] - df_252_prices[stock][0])/df_252_prices[stock][0]
            if np.isnan(MOM2_12_return): 
                pass
            else:
                MomList.append([stock, MOM2_12_return])
        
        #sort cumulative return, and get high momentum and low momentum list    
        MomList_high = sorted(MomList, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
        
        MomList_low = sorted(MomList, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0])
        
    elif indicator == "90_return_slope":
        MomList2 = []
        h_63 = history(64, "1d", 'price')
        h_63_high = history(63, "1d", 'high')
        h_63_low = history(63, "1d", 'low')
        max_diff = (h_63_high - h_63_low) / h_63_low
        h_63_simple_return = h_63.pct_change()
        h_63_log_return = np.log(h_63/h_63.shift(1))
        
        x_index = pd.Series(range(63))+1
   
        for stock in h_63_log_return:
            slope, intercept, r_value, p_value, std_err = stats.linregress(x_index, h_63_log_return[stock][1:])
            score = slope * np.sqrt(252) * r_value**2   # slope scaled by a constant, weighted by the fit's R^2 (the constant does not change the ranking)
            if score == score:                          # NaN check: NaN != NaN
                MomList2.append([stock, score])
 
        #sort in descending order, the momentum score    
        MomList_high = sorted(MomList2, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
            
        MomList_low = sorted(MomList2, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0]) 
            
    #context.alloc = create_weigth(context, data, 'EW') 
    return Momlonglist, Momshortlist
         

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    record(leverage = context.account.leverage)
    record(exposure = context.account.net_leverage)
   
def create_weigth(context, data, method):
    weight = pd.Series()
    h_63 = history(64, "1d", 'price')
    h_63_simple_return = h_63.pct_change()
    
    if method == 'ERC':
        rets_sort = pd.DataFrame()
        for sec in context.longlist:
            rets_sort = rets_sort.append(h_63_simple_return[sec], ignore_index=False)
        std = rets_sort.T.std()
        weight = pd.Series(np.power(std,-2)/sum(np.power(std,-2)), index=std.index).fillna(0)
      
    elif method == 'EW':
        weight = pd.Series(0.95/len(context.longlist), index = context.longlist)
            
    return weight 

Here is an initial look at the performance of a momentum algo (I just ran a random configuration).
Portfolio return is highly correlated with the benchmark. It suffered in the 2008-2009 crisis, in mid-2010, and in 2011. It shows relatively high returns in bull markets, but smaller losses in bear markets compared to the benchmark.
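
For anyone who wants to quantify that, the correlation and beta against the benchmark can be checked from two daily return series (a minimal sketch; algo_rets and spy_rets are hypothetical pandas Series aligned on dates):

def corr_and_beta(algo_rets, spy_rets):
    # Correlation and regression beta of the algo's daily returns
    # against the benchmark's daily returns.
    corr = algo_rets.corr(spy_rets)
    beta = algo_rets.cov(spy_rets) / spy_rets.var()
    return corr, beta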

[Backtest attached]
# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.

"""
Long term Value Momentum strategy

This algorithm screens all stocks and ranks stocks based on three factors:

Size: market capital > $2B
Value: low ev_to_ebitda 
Momentum: Pass 12 month cumulative return

other factors:
Volatility: 30 day high - low range < 15%
Bull&bear: mvag(50)<mvag(200) (bear)

Rebalance: 
monthly (first day of each month)

Weight:
MVO: Mean variance optimization weight
EW: Equal weight
VW: Value weight
ERC: Equal risk contributiaion: Risk parity, based on volatility

"""
from operator import itemgetter
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import datetime
import math
import talib

def initialize(context):
    context.alloc = pd.Series()
    context.score = pd.Series()
    context.benchmark = sid(8554)             # SPY: SPDR S&P 500 ETF Trust
    context.last_month = -1 
    set_commission(commission.PerShare(cost = 0.03, min_trade_cost = 1))   #can be changed
    context.leverage = 1.0  #leverage = 1
    schedule_function(rebalance, 
                      date_rules.month_start(), 
                      time_rules.market_close(minutes=1))                   #1 minute before market close
    #schedule_function(bookkeeping)
#
# Symbol selection for value momentum
# - Only query once a month (for backtest performance)
# - Query fundamentals database for largest companies above market cap limit
# - Ensure data available for all active positions
#

def add_ebit_ev(df):
    ev = df['enterprise_value']
    ev[ev < 0.0] = 1.0
    df['enterprise_value'] = ev
    df['ebit_ev'] = df['ebit'] / df['enterprise_value']
    return df

def before_trading_start(context):
    #only query database at the beginning of the month
    month = get_datetime().month
    if context.last_month == month:#
        return
    context.last_month = month

    fundamentals_df = get_fundamentals(
        query(fundamentals.valuation.market_cap,
              fundamentals.valuation.shares_outstanding,
              fundamentals.valuation_ratios.book_value_yield,
              fundamentals.income_statement.ebit,
              fundamentals.income_statement.ebit_as_of,
              fundamentals.valuation.enterprise_value,
              fundamentals.valuation.enterprise_value_as_of,
              fundamentals.share_class_reference.symbol,
              fundamentals.company_reference.standard_name,
              fundamentals.operation_ratios.total_debt_equity_ratio
              )
        .filter(fundamentals.operation_ratios.total_debt_equity_ratio != None)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None) # require sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(((fundamentals.valuation.market_cap*1.0) / (fundamentals.valuation.shares_outstanding*1.0)) > 1.0)  # stock price > $1
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap > 2000000000) # cap > $2B
        .filter(~fundamentals.company_reference.standard_name.contains(' LP')) # exclude LPs
        .filter(~fundamentals.company_reference.standard_name.contains(' L P'))
        .filter(~fundamentals.company_reference.standard_name.contains(' L.P'))
        .filter(fundamentals.balance_sheet.limited_partnership == None) # exclude LPs
        .order_by(fundamentals.valuation.market_cap.desc()) 
        .offset(0)
        .limit(500) 
        ).T
    fundamentals_df = add_ebit_ev(fundamentals_df)
   # update stock universe
   # context.size_small = df_500.sort(['market_cap'], ascending = True)[0:100]               #Top 100 small cap
   # context.size_large = df_500.sort(['market_cap'], ascending = False)[0:100]              #Top 100 large cap 
    context.size_small = fundamentals_df[-150:]
    context.size_large = fundamentals_df[0:150]
    context.value_under = fundamentals_df.sort(['book_value_yield'], ascending = False)[0:100]       #Top 100 undervalued (high book value yield)
    context.value_over = fundamentals_df.sort(['book_value_yield'], ascending = True)[0:100]         #Top 100 overvalued (low book value yield)
    context.value_better = fundamentals_df.sort(['ebit_ev'], ascending = False)[0:100]               #Top 100 ebit_ev
    context.value_worse = fundamentals_df.sort(['ebit_ev'], ascending = True)[0:100]                 #Bottom 100 ebit_ev
  
    # Update stocks universe
    context.stocks = [stock for stock in fundamentals_df]    
    context.fundamentals_df = fundamentals_df[context.stocks]
    context.universe = np.union1d(context.value_better.index.values, [context.benchmark])
    context.stocks = [stock for stock in context.universe]    
    update_universe(context.universe)
    
# Allocation 
# - Retrieve the acquirer's multiple for all tracked stocks
# - Rank all tracked stocks based on score
# - Slope of the past ~3 months (63 trading days) of log returns, weighted by the R^2 of the linear fit
# - Select the top 30 stocks in the ranking
# - Risk parity weight: each stock contributes equal risk to the portfolio

def rebalance(context, data):

    SPY_50_mavg = data[context.benchmark].mavg(50)
    SPY_200_mavg = data[context.benchmark].mavg(200)
    SPY_current = data[context.benchmark].price
    portfolio_positions = context.portfolio.portfolio_value 
    portfolio_cash = context.portfolio.cash
        
    #update long list and short list
    #indicator = "MOM2_12_cul_return"
    indicator = "90_return_slope"
    context.longlist, context.shortlist = gennerate_MomList(context, data, indicator)
   
    #Weighting
    #method = 'ERC'            #equal risk contribution
    method = 'EW'              #equal weight
    context.alloc = create_weigth(context, data, method) 
    print context.alloc
    #if (SPY_50_mavg < SPY_200_mavg):
    #    w_short = 0.99/len(context.shortlist)                #equal weight size for long position
    print "reb long %d symbols" % len(context.longlist)
    #print "reb short %d symbols" % len(context.shortlist)
  
    # No filter
    for stock in context.portfolio.positions:
          #if stock not in context.shortlist:
          if stock not in context.longlist:
               order_target(stock, 0)
          #Order
    for stock in context.longlist:
          if stock in data:
               order_target_percent(stock, context.alloc[stock])
 

    # With trend filter
#    if (SPY_50_mavg < SPY_200_mavg) and portfolio_positions !=0:
#        for stock in context.portfolio.positions:
#               if stock in data:
#                    order_target_percent(stock, 0)
#    elif portfolio_cash != 0: 
#        for stock in context.portfolio.positions:
#        #if stock not in context.shortlist:
#               if stock not in context.longlist:
#                    order_target(stock, 0)    
        #Order
#        for stock in context.longlist:
#               if stock in data:
#                    order_target_percent(stock, context.alloc[stock])
             
def gennerate_MomList(context, data, indicator):
    if indicator == "MOM2_12_cul_return":
        df_252_prices = history(252,'1d','price')
        #Calculate the past MOM2-12 cumulative return; skip the most recent month to avoid the 1-month reversal in stock returns
        MomList = []
        for stock in df_252_prices:
            MOM2_12_return = (df_252_prices[stock][-21] - df_252_prices[stock][0])/df_252_prices[stock][0]
            if np.isnan(MOM2_12_return): 
                pass
            else:
                MomList.append([stock, MOM2_12_return])
        
        #sort cumulative return, and get high momentum and low momentum list    
        MomList_high = sorted(MomList, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
        
        MomList_low = sorted(MomList, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0])
        
    elif indicator == "90_return_slope":
        MomList2 = []
        h_63 = history(64, "1d", 'price')
        h_63_high = history(63, "1d", 'high')
        h_63_low = history(63, "1d", 'low')
        max_diff = (h_63_high - h_63_low) / h_63_low
        h_63_simple_return = h_63.pct_change()
        h_63_log_return = np.log(h_63/h_63.shift(1))
        
        x_index = pd.Series(range(63))+1
   
        for stock in h_63_log_return:
            slope, intercept, r_value, p_value, std_err = stats.linregress(x_index, h_63_log_return[stock][1:])
            score = slope * np.sqrt(252) * r_value**2   # slope scaled by a constant, weighted by the fit's R^2 (the constant does not change the ranking)
            if score == score:                          # NaN check: NaN != NaN
                MomList2.append([stock, score])
 
        #sort in descending order, the momentum score    
        MomList_high = sorted(MomList2, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
            
        MomList_low = sorted(MomList2, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0]) 
            
    #context.alloc = create_weigth(context, data, 'EW') 
    return Momlonglist, Momshortlist
         

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    record(leverage = context.account.leverage)
    record(exposure = context.account.net_leverage)
   
def create_weigth(context, data, method):
    weight = pd.Series()
    h_63 = history(64, "1d", 'price')
    h_63_simple_return = h_63.pct_change()
    
    if method == 'ERC':
        # Inverse-variance weights: an approximation of equal risk contribution
        rets_sort = pd.DataFrame()
        for sec in context.longlist:
            rets_sort = rets_sort.append(h_63_simple_return[sec], ignore_index=False)
        std = rets_sort.T.std()
        weight = pd.Series(np.power(std,-2)/sum(np.power(std,-2)), index=std.index).fillna(0)
      
    elif method == 'EW':
        weight = pd.Series(0.95/len(context.longlist), index = context.longlist).fillna(0) 
            
    return weight 

Investigating the ranking logic is tricky. This is how hedge funds make money, and they want to keep it secret. I found an interesting article about Cliff Asness (AQR)'s momentum fund that shows how they construct it. The logic is also described in Asness's paper (Value and Momentum Everywhere). The value indicator they use is the BE/ME ratio, and the momentum indicator is the past 2-to-12-month cumulative return.
http://fortune.com/2011/12/19/cliff-asness-a-hedge-fund-genius-goes-retail/

However, how to choose the value indicator is tricky. Other ratios such as EBITDA/EV, EBIT/EV, P/E, etc. can also be used to compare and rank companies, and are often used by this community to choose the stock universe.

The version below ranks the initial universe by market cap and the EBIT/EV ratio; the other rules (momentum, weighting) are the same, but the algo's performance is a little different.
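
For illustration, here is a minimal standalone sketch of how a few common value ratios could be computed and ranked; the df DataFrame and its column names are hypothetical and only mirror the Morningstar field names used in the algo.

import pandas as pd

def value_ranks(df):
    # df: hypothetical DataFrame with columns ebit, enterprise_value,
    # book_value_per_share, shares_outstanding, market_cap, net_income.
    scores = pd.DataFrame(index=df.index)
    scores['ebit_ev'] = df['ebit'] / df['enterprise_value']
    scores['book_to_market'] = df['book_value_per_share'] * df['shares_outstanding'] / df['market_cap']
    scores['earnings_yield'] = df['net_income'] / df['market_cap']   # inverse of P/E
    # rank 1 = cheapest on each measure; NaNs are left unranked
    return scores.rank(ascending=False)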

[Backtest attached]


# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.

"""
Long term Value Momentum strategy

This algorithm screens all stocks and ranks stocks based on three factors:

Size: market capital > $2B
Value: high bbok_to_market ratio
Momentum: Pass 2 to 12 month cumulative return, past 63 days log return slope

other factors:
Volatility: 30 day high - low range < 15%    (not use)
filter: current < mvag(200) clean positions

Rebalance: 
monthly (first day of each month)

Weight:
MVO: Mean variance optimization weight
EW: Equal weight (check)
VW: Value weight
ERC: Equal risk contributiaion: Risk parity, based on volatility

"""
from operator import itemgetter
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import datetime
import math
import talib

def initialize(context):
    context.alloc = pd.Series()
    context.score = pd.Series()
    context.benchmark = sid(8554)             # SPY: SPDR S&P 500 ETF Trust
    context.TLT = sid(23921)                  # TLT: iShares 20+ Year Treasury Bond ETF
    context.last_month = -1 
    set_commission(commission.PerShare(cost = 0.03, min_trade_cost = 1.))   #can be changed for commission  IBs US API policy
    context.leverage = 1.0  #leverage = 1
    schedule_function(rebalance, 
                      date_rules.month_start(), 
                      time_rules.market_close(minutes=1))                   #1 minute before market close
#
# Symbol selection for value momentum
# - Only query once a month (for backtest performance)
# - Query fundamentals database for largest companies above market cap limit
# - Ensure data available for all active positions
#

def add_ebit_ev(df):
    ev = df['enterprise_value']
    ev[ev < 0.0] = 1.0
    df['enterprise_value'] = ev
    df['ebit_ev'] = df['ebit'] / df['enterprise_value']
    return df

def add_book_market(df):
    # book-to-market = book value per share * shares outstanding / market cap
    df['book_to_market'] = df['book_value_per_share'] * (df['shares_outstanding'] * 1.0) / (df['market_cap'] * 1.0)
    return df

def before_trading_start(context):
    #only query database at the beginning of the month
    month = get_datetime().month
    if context.last_month == month:#
        return
    context.last_month = month
    
    fundamentals_df = get_fundamentals(
        query(fundamentals.valuation.market_cap,
              fundamentals.valuation.shares_outstanding,
              fundamentals.valuation_ratios.book_value_per_share,
              fundamentals.valuation_ratios.ev_to_ebitda,
              fundamentals.income_statement.ebit,
              fundamentals.income_statement.ebit_as_of,
              fundamentals.valuation.enterprise_value,
              fundamentals.valuation.enterprise_value_as_of,
              fundamentals.share_class_reference.symbol,
              )
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        #.filter(fundamentals.valuation_ratios.ev_to_ebitda > 0)
        #.filter(fundamentals.valuation.enterprise_value > 0)
        .filter(fundamentals.valuation.market_cap > 2000000000) # cap > $2B
        .order_by(fundamentals.valuation.market_cap.desc())
        #.order_by(fundamentals.valuation_ratios.ev_to_ebitda.asc())
        .limit(500) 
        ).T
    
    #Add ebit_ev and book_to_market columns
    fundamentals_df = add_ebit_ev(fundamentals_df)
    fundamentals_df = add_book_market(fundamentals_df)
    #Size buckets (fundamentals_df is sorted by market cap, descending)
    context.size_small = fundamentals_df[-100:]                #Bottom 100 by market cap
    context.size_large = fundamentals_df[0:100]                #Top 100 by market cap
    
    context.value_under = fundamentals_df.sort(['book_to_market'], ascending = False)[0:100]       #Top 100 undervalued (high B/M)
    context.value_over = fundamentals_df.sort(['book_to_market'], ascending = True)[0:100]         #Top 100 overvalued (low B/M)
    
    context.value_better = fundamentals_df.sort(['ebit_ev'], ascending = False)[0:100]    #Top 100  
    context.value_worse = fundamentals_df.sort(['ebit_ev'], ascending = True)[0:100]      #Bottom 100 
  
    # Update stocks universe
    context.stocks = [stock for stock in fundamentals_df]    
    context.fundamentals_df = fundamentals_df[context.stocks]
    
    context.universe = np.union1d(context.value_better.index.values, [context.benchmark])     #choose high ebit_ev stocks for the stock universe
    context.stocks = [stock for stock in context.universe]
    update_universe(context.universe)
     
def rebalance(context, data):

    SPY_50_mavg = data[context.benchmark].mavg(50)
    SPY_200_mavg = data[context.benchmark].mavg(200)
    SPY_current = data[context.benchmark].price
    portfolio_positions = context.portfolio.portfolio_value 
    portfolio_cash = context.portfolio.cash
        
    #update long list and short list
    indicator = "MOM2_12_cul_return"
    #indicator = "90_return_slope"
    context.longlist, context.shortlist = gennerate_MomList(context, data, indicator)
   
    #Weighting
    #method = 'ERC'            #equal risk contribution
    method = 'EW'              #equal weight
    context.alloc = create_weigth(context, data, method) 
    print context.alloc
    #if (SPY_50_mavg < SPY_200_mavg):
    #    w_short = 0.99/len(context.shortlist)                #equal weight size for long position
    print "reb long %d symbols" % len(context.longlist)
    #print "reb short %d symbols" % len(context.shortlist)
  
    # No filter
    for stock in context.portfolio.positions:
          #if stock not in context.shortlist:
          if stock not in context.longlist:
               order_target(stock, 0)
          #Order
    for stock in context.longlist:
          if stock in data:
               order_target_percent(stock, context.alloc[stock])
 

    # With trend filter
"""    if SPY_current > SPY_200_mavg and portfolio_cash != 0: 
        for stock in context.portfolio.positions:
        #if stock not in context.shortlist:
               if stock not in context.longlist:
                    order_target(stock, 0)    
        #Order
        for stock in context.longlist:
               if stock in data:
                    order_target_percent(stock, context.alloc[stock])
                    
    elif (SPY_current < SPY_200_mavg) and portfolio_positions !=0:
        for stock in context.portfolio.positions:
               if stock in data:
                    order_target_percent(stock, 0)
        order_target_percent(context.TLT, 0.95)"""
    
def gennerate_MomList(context, data, indicator):
    if indicator == "MOM2_12_cul_return":
        df_252_prices = history(252,'1d','price')
        #Calculate the past MOM2-12 cumulative return; skip the most recent month to avoid the 1-month reversal in stock returns
        MomList = []
        for stock in df_252_prices:
            MOM2_12_return = (df_252_prices[stock][-21] - df_252_prices[stock][0])/df_252_prices[stock][0]
            if np.isnan(MOM2_12_return): 
                pass
            else:
                MomList.append([stock, MOM2_12_return])
        
        #sort cumulative return, and get high momentum and low momentum list    
        MomList_high = sorted(MomList, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
        
        MomList_low = sorted(MomList, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0])
        
    elif indicator == "90_return_slope":
        MomList2 = []
        h_63 = history(64, "1d", 'price')
        h_63_high = history(63, "1d", 'high')
        h_63_low = history(63, "1d", 'low')
        max_diff = (h_63_high - h_63_low) / h_63_low
        h_63_simple_return = h_63.pct_change()
        h_63_log_return = np.log(h_63/h_63.shift(1))
        
        x_index = pd.Series(range(63))+1
   
        for stock in h_63_log_return:
            slope, intercept, r_value, p_value, std_err = stats.linregress(x_index, h_63_log_return[stock][1:])
            score = slope * np.sqrt(252) * r_value**2   # slope scaled by a constant, weighted by the fit's R^2 (the constant does not change the ranking)
            if score == score:                          # NaN check: NaN != NaN
                MomList2.append([stock, score])
 
        #sort in descending order, the momentum score    
        MomList_high = sorted(MomList2, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
            
        MomList_low = sorted(MomList2, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0]) 
            
    #context.alloc = create_weigth(context, data, 'EW') 
    return Momlonglist, Momshortlist
         

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    record(leverage = context.account.leverage)
    record(exposure = context.account.net_leverage)
   
def create_weigth(context, data, method):
    weight = pd.Series()
    h_63 = history(64, "1d", 'price')
    h_63_simple_return = h_63.pct_change()
    
    if method == 'ERC':
        rets_sort = pd.DataFrame()
        for sec in context.longlist:
            rets_sort = rets_sort.append(h_63_simple_return[sec], ignore_index=False)
        std = rets_sort.T.std()
        weight = pd.Series(np.power(std,-2)/sum(np.power(std,-2)), index=std.index).fillna(0)
      
    elif method == 'EW':
        weight = pd.Series(0.95/len(context.longlist), index = context.longlist)
            
    return weight 

In the previous backtests, momentum is the past 2-to-12-month cumulative return (skipping the most recent month to avoid the 1-month reversal in stock returns), which is the standard in the momentum literature. However, other momentum measures exist. In the book Stocks on the Move, momentum is an adjusted regression slope (the slope of a linear regression over the past 3 months of log returns, multiplied by R^2). This backtest shows that idea.
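
Here is a minimal standalone sketch of that adjusted-slope score, mirroring the "90_return_slope" branch in the code below. (Note: the book regresses on log prices, while the code here regresses on daily log returns; the sketch follows the code.) The prices DataFrame is hypothetical.

import numpy as np
import pandas as pd
from scipy import stats

def adjusted_slope(prices, window=63):
    # Slope of a linear fit to daily log returns over `window` days,
    # weighted by R^2 so that choppy trends score lower.
    log_rets = np.log(prices / prices.shift(1)).iloc[-window:]
    x = np.arange(1, window + 1)
    scores = {}
    for stock in log_rets.columns:
        slope, _, r_value, _, _ = stats.linregress(x, log_rets[stock].values)
        scores[stock] = slope * np.sqrt(252) * r_value ** 2
    return pd.Series(scores).dropna()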

[Backtest attached]


# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.

"""
Long term Value Momentum strategy

This algorithm screens all stocks and ranks stocks based on three factors:

Size: market capital > $2B
Value: high bbok_to_market ratio
Momentum: Pass 2 to 12 month cumulative return, past 63 days log return slope

other factors:
Volatility: 30 day high - low range < 15%    (not use)
filter: current < mvag(200) clean positions

Rebalance: 
monthly (first day of each month)

Weight:
MVO: Mean variance optimization weight
EW: Equal weight (check)
VW: Value weight
ERC: Equal risk contributiaion: Risk parity, based on volatility

"""
from operator import itemgetter
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import datetime
import math
import talib

def initialize(context):
    context.alloc = pd.Series()
    context.score = pd.Series()
    context.benchmark = sid(8554)             # SPY: SPDR S&P 500 ETF Trust
    context.TLT = sid(23921)                  # TLT: iShares 20+ Year Treasury Bond ETF
    context.last_month = -1 
    set_commission(commission.PerShare(cost = 0.03, min_trade_cost = 1.))   #can be changed for commission  IBs US API policy
    context.leverage = 1.0  #leverage = 1
    schedule_function(rebalance, 
                      date_rules.month_start(), 
                      time_rules.market_close(minutes=1))                   #1 minute before market close
#
# Symbol selection for value momentum
# - Only query once a month (for backtest performance)
# - Query fundamentals database for largest companies above market cap limit
# - Ensure data available for all active positions
#

def add_ebit_ev(df):
    ev = df['enterprise_value']
    ev[ev < 0.0] = 1.0
    df['enterprise_value'] = ev
    df['ebit_ev'] = df['ebit'] / df['enterprise_value']
    return df

def add_book_market(df):
    # book-to-market = book value per share * shares outstanding / market cap
    df['book_to_market'] = df['book_value_per_share'] * (df['shares_outstanding'] * 1.0) / (df['market_cap'] * 1.0)
    return df

def before_trading_start(context):
    #only query database at the beginning of the month
    month = get_datetime().month
    if context.last_month == month:#
        return
    context.last_month = month
    
    fundamentals_df = get_fundamentals(
        query(fundamentals.valuation.market_cap,
              fundamentals.valuation.shares_outstanding,
              fundamentals.valuation_ratios.book_value_per_share,
              fundamentals.valuation_ratios.ev_to_ebitda,
              fundamentals.income_statement.ebit,
              fundamentals.income_statement.ebit_as_of,
              fundamentals.valuation.enterprise_value,
              fundamentals.valuation.enterprise_value_as_of,
              fundamentals.share_class_reference.symbol,
              )
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        #.filter(fundamentals.valuation_ratios.ev_to_ebitda > 0)
        #.filter(fundamentals.valuation.enterprise_value > 0)
        .filter(fundamentals.valuation.market_cap > 2000000000) # cap > $2B
        .order_by(fundamentals.valuation.market_cap.desc())
        #.order_by(fundamentals.valuation_ratios.ev_to_ebitda.asc())
        .limit(500) 
        ).T
    
    #Add ebit_ev and book_to_market columns
    fundamentals_df = add_ebit_ev(fundamentals_df)
    fundamentals_df = add_book_market(fundamentals_df)
    #Size buckets (fundamentals_df is sorted by market cap, descending)
    context.size_small = fundamentals_df[-100:]                #Bottom 100 by market cap
    context.size_large = fundamentals_df[0:100]                #Top 100 by market cap
    
    context.value_under = fundamentals_df.sort(['book_to_market'], ascending = False)[0:100]       #Top 100 undervalued (high B/M)
    context.value_over = fundamentals_df.sort(['book_to_market'], ascending = True)[0:100]         #Top 100 overvalued (low B/M)
    
    context.value_better = fundamentals_df.sort(['ebit_ev'], ascending = False)[0:100]    #Top 100  
    context.value_worse = fundamentals_df.sort(['ebit_ev'], ascending = True)[0:100]      #Bottom 100 
  
    # Update stocks universe
    context.stocks = [stock for stock in fundamentals_df]    
    context.fundamentals_df = fundamentals_df[context.stocks]
    
    context.universe = np.union1d(context.value_better.index.values, [context.benchmark])     #choose high ebit_ev stocks for the stock universe
    context.stocks = [stock for stock in context.universe]
    update_universe(context.universe)
     
def rebalance(context, data):

    SPY_50_mavg = data[context.benchmark].mavg(50)
    SPY_200_mavg = data[context.benchmark].mavg(200)
    SPY_current = data[context.benchmark].price
    portfolio_positions = context.portfolio.portfolio_value 
    portfolio_cash = context.portfolio.cash
        
    #update long list and short list
    #indicator = "MOM2_12_cul_return"
    indicator = "90_return_slope"
    context.longlist, context.shortlist = gennerate_MomList(context, data, indicator)
   
    #Weighting
    #method = 'ERC'            #equal risk contribution
    method = 'EW'              #equal weight
    context.alloc = create_weigth(context, data, method) 
    print context.alloc
    #if (SPY_50_mavg < SPY_200_mavg):
    #    w_short = 0.99/len(context.shortlist)                #equal weight size for long position
    print "reb long %d symbols" % len(context.longlist)
    #print "reb short %d symbols" % len(context.shortlist)
  
    # No filter
    for stock in context.portfolio.positions:
          #if stock not in context.shortlist:
          if stock not in context.longlist:
               order_target(stock, 0)
          #Order
    for stock in context.longlist:
          if stock in data:
               order_target_percent(stock, context.alloc[stock])
 

    # With trend filter
"""    if SPY_current > SPY_200_mavg and portfolio_cash != 0: 
        for stock in context.portfolio.positions:
        #if stock not in context.shortlist:
               if stock not in context.longlist:
                    order_target(stock, 0)    
        #Order
        for stock in context.longlist:
               if stock in data:
                    order_target_percent(stock, context.alloc[stock])
                    
    elif (SPY_current < SPY_200_mavg) and portfolio_positions !=0:
        for stock in context.portfolio.positions:
               if stock in data:
                    order_target_percent(stock, 0)
        order_target_percent(context.TLT, 0.95)"""
    
def gennerate_MomList(context, data, indicator):
    if indicator == "MOM2_12_cul_return":
        df_252_prices = history(252,'1d','price')
        #Calculate the past MOM2-12 cumulative return; skip the most recent month to avoid the 1-month reversal in stock returns
        MomList = []
        for stock in df_252_prices:
            MOM2_12_return = (df_252_prices[stock][-21] - df_252_prices[stock][0])/df_252_prices[stock][0]
            if np.isnan(MOM2_12_return): 
                pass
            else:
                MomList.append([stock, MOM2_12_return])
        
        #sort cumulative return, and get high momentum and low momentum list    
        MomList_high = sorted(MomList, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
        
        MomList_low = sorted(MomList, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0])
        
    elif indicator == "90_return_slope":
        MomList2 = []
        h_63 = history(64, "1d", 'price')
        h_63_high = history(63, "1d", 'high')
        h_63_low = history(63, "1d", 'low')
        max_diff = (h_63_high - h_63_low) / h_63_low
        h_63_simple_return = h_63.pct_change()
        h_63_log_return = np.log(h_63/h_63.shift(1))
        
        x_index = pd.Series(range(63))+1
   
        for stock in h_63_log_return:
            slope, intercept, r_value, p_value, std_err = stats.linregress(x_index, h_63_log_return[stock][1:])
            score = slope * np.sqrt(252) * r_value**2   # slope scaled by a constant, weighted by the fit's R^2 (the constant does not change the ranking)
            if score == score:                          # NaN check: NaN != NaN
                MomList2.append([stock, score])
 
        #sort in descending order, the momentum score    
        MomList_high = sorted(MomList2, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
            
        MomList_low = sorted(MomList2, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0]) 
            
    #context.alloc = create_weigth(context, data, 'EW') 
    return Momlonglist, Momshortlist
         

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    record(leverage = context.account.leverage)
    record(exposure = context.account.net_leverage)
   
def create_weigth(context, data, method):
    weight = pd.Series()
    h_63 = history(64, "1d", 'price')
    h_63_simple_return = h_63.pct_change()
    
    if method == 'ERC':
        rets_sort = pd.DataFrame()
        for sec in context.longlist:
            rets_sort = rets_sort.append(h_63_simple_return[sec], ignore_index=False)
        std = rets_sort.T.std()
        weight = pd.Series(np.power(std,-2)/sum(np.power(std,-2)), index=std.index).fillna(0)
      
    elif method == 'EW':
        weight = pd.Series(0.95/len(context.longlist), index = context.longlist)
            
    return weight 

Other factors such as rebalance frequency and the weighting criterion should also be considered. Since a momentum strategy tends to hold a relatively large number of stocks and rebalances regularly, the commission cost is not low. AQR estimated that their commission cost could be about 0.7% of the portfolio per year.
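
As a rough back-of-the-envelope check (my own illustrative numbers, not AQR's):

portfolio = 1e6          # $1M portfolio value
avg_price = 50.0         # assumed average share price
turnover = 0.30          # fraction of portfolio traded per rebalance (buys + sells)
cost_per_share = 0.03    # per-share commission, as in set_commission above
rebalances = 12          # monthly

shares_per_rebalance = portfolio * turnover / avg_price          # 6,000 shares
annual_cost = shares_per_rebalance * cost_per_share * rebalances # $2,160
annual_drag = annual_cost / portfolio                            # ~0.22% per year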

All the backtest results show that the momentum strategy somewhat mimics the performance of the index. It had a nightmare during the 2008-2009 crisis and in mid-2010 and 2011. Is there a way to prevent that? If some signal fires, we can sell all positions and hold cash, or switch to a relatively low-risk asset such as Treasuries. That is where a trend filter comes in, such as the very popular 200-day moving average (other crossover indicators may work as well). The backtest below holds cash whenever the index (SPY) current price is below its mavg(200).

However, the relatively higher return does not by itself mean the filter is better; the excess return comes from limiting the huge drawdowns in the periods mentioned above. And no investor wants to pay you cash for sitting idle, which is what motivates the first backtest above: switching to a less risky asset (a bond fund) when a bear market arrives.
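
The regime decision itself is simple; a minimal standalone sketch, assuming spy is a hypothetical pandas Series of daily SPY closes:

def regime(spy, tlt_available=True):
    # Compare the latest close to the 200-day simple moving average.
    above = spy.iloc[-1] > spy.rolling(200).mean().iloc[-1]
    if above:
        return 'stocks'   # bull regime: hold the momentum basket
    return 'TLT' if tlt_available else 'cash'   # bear regime: bonds or cash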

[Backtest attached]
# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.

"""
Long term Value Momentum strategy

This algorithm screens all stocks and ranks stocks based on three factors:

Size: market capital > $2B
Value: low ev_to_ebitda 
Momentum: Pass 12 month cumulative return

other factors:
Volatility: 30 day high - low range < 15%
Bull&bear: mvag(50)<mvag(200) (bear)

Rebalance: 
monthly (first day of each month)

Weight:
MVO: Mean variance optimization weight
EW: Equal weight
VW: Value weight
ERC: Equal risk contributiaion: Risk parity, based on volatility

"""
from operator import itemgetter
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import datetime
import math
import talib

def initialize(context):
    context.alloc = pd.Series()
    context.score = pd.Series()
    context.benchmark = sid(8554)             # SPY: SPDR S&P 500 ETF Trust
    context.last_month = -1 
    #set_commission(commission.PerShare(cost = 0.13, min_trade_cost = 1.3))   #can be changed for commission  IBs US API policy
    context.leverage = 1.0  #leverage = 1
    schedule_function(rebalance, 
                      date_rules.month_start(), 
                      time_rules.market_close(minutes=1))                   #1 minute before market close
    #schedule_function(bookkeeping)
#
# Symbol selection for value momentum
# - Only query once a month (for backtest performance)
# - Query fundamentals database for largest companies above market cap limit
# - Ensure data available for all active positions
#

def add_ebit_ev(df):
    ev = df['enterprise_value']
    ev[ev < 0.0] = 1.0
    df['enterprise_value'] = ev
    df['ebit_ev'] = df['ebit'] / df['enterprise_value']
    return df

def before_trading_start(context):
    #only query database at the beginning of the month
    month = get_datetime().month
    if context.last_month == month:#
        return
    context.last_month = month

    fundamentals_df = get_fundamentals(
        query(fundamentals.valuation.market_cap,
              fundamentals.valuation.shares_outstanding,
              fundamentals.valuation_ratios.book_value_yield,
              fundamentals.income_statement.ebit,
              fundamentals.income_statement.ebit_as_of,
              fundamentals.valuation.enterprise_value,
              fundamentals.valuation.enterprise_value_as_of,
              fundamentals.share_class_reference.symbol,
              )
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap > 2000000000) # cap > $2B
        .order_by(fundamentals.valuation.market_cap.desc()) 
        .offset(0)
        .limit(500) 
        ).T
    fundamentals_df = add_ebit_ev(fundamentals_df)
   # update stock universe
   # context.size_small = df_500.sort(['market_cap'], ascending = True)[0:100]               #Top 100 small cap
   # context.size_large = df_500.sort(['market_cap'], ascending = False)[0:100]              #Top 100 large cap 
    context.size_small = fundamentals_df[-150:]
    context.size_large = fundamentals_df[0:150]
    context.value_under = fundamentals_df.sort(['book_value_yield'], ascending = False)[0:100]       #Top 100 undervalued (high book value yield)
    context.value_over = fundamentals_df.sort(['book_value_yield'], ascending = True)[0:100]         #Top 100 overvalued (low book value yield)
    context.value_better = fundamentals_df.sort(['ebit_ev'], ascending = False)[0:100]
    context.value_worse = fundamentals_df.sort(['ebit_ev'], ascending = True)[0:100]
  
    # Update stocks universe
    context.stocks = [stock for stock in fundamentals_df]    
    context.fundamentals_df = fundamentals_df[context.stocks]
    context.universe = np.union1d(context.value_better.index.values, [context.benchmark])
    context.stocks = [stock for stock in context.universe]    
    update_universe(context.universe)
    
# Allocation 
# - Retrieve the acquirer's multiple for all tracked stocks
# - Rank all tracked stocks based on score
# - Slope of the past ~3 months (63 trading days) of log returns, weighted by the R^2 of the linear fit
# - Select the top 30 stocks in the ranking
# - Risk parity weight: each stock contributes equal risk to the portfolio

def rebalance(context, data):

    SPY_50_mavg = data[context.benchmark].mavg(50)
    SPY_200_mavg = data[context.benchmark].mavg(200)
    SPY_current = data[context.benchmark].price
    portfolio_positions = context.portfolio.portfolio_value 
    portfolio_cash = context.portfolio.cash
        
    #update long list and short list
    indicator = "MOM2_12_cul_return"
    #indicator = "90_return_slope"
    context.longlist, context.shortlist = gennerate_MomList(context, data, indicator)
   
    #Weighting
    #method = 'ERC'            #equal risk contribution
    method = 'EW'              #equal weight
    context.alloc = create_weigth(context, data, method) 
    print context.alloc
    #if (SPY_50_mavg < SPY_200_mavg):
    #    w_short = 0.99/len(context.shortlist)                #equal weight size for long position
    print "reb long %d symbols" % len(context.longlist)
    #print "reb short %d symbols" % len(context.shortlist)
  
    # No filter
#    for stock in context.portfolio.positions:
          #if stock not in context.shortlist:
#          if stock not in context.longlist:
#               order_target(stock, 0)
          #Order
#    for stock in context.longlist:
#          if stock in data:
#               order_target_percent(stock, context.alloc[stock])
 

    # With trend filter
    if SPY_current > SPY_200_mavg and portfolio_cash != 0: 
        for stock in context.portfolio.positions:
        #if stock not in context.shortlist:
               if stock not in context.longlist:
                    order_target(stock, 0)    
        #Order
        for stock in context.longlist:
               if stock in data:
                    order_target_percent(stock, context.alloc[stock])
                    
    elif (SPY_current < SPY_200_mavg) and portfolio_positions !=0:
        for stock in context.portfolio.positions:
               if stock in data:
                    order_target_percent(stock, 0)
    
             
def gennerate_MomList(context, data, indicator):
    if indicator == "MOM2_12_cul_return":
        df_252_prices = history(252,'1d','price')
        #Calculate the past MOM2-12 cumulative return; skip the most recent month to avoid the 1-month reversal in stock returns
        MomList = []
        for stock in df_252_prices:
            MOM2_12_return = (df_252_prices[stock][-21] - df_252_prices[stock][0])/df_252_prices[stock][0]
            if np.isnan(MOM2_12_return): 
                pass
            else:
                MomList.append([stock, MOM2_12_return])
        
        #sort cumulative return, and get high momentum and low momentum list    
        MomList_high = sorted(MomList, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
        
        MomList_low = sorted(MomList, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0])
        
    elif indicator == "90_return_slope":
        MomList2 = []
        h_63 = history(64, "1d", 'price')
        h_63_high = history(63, "1d", 'high')
        h_63_low = history(63, "1d", 'low')
        max_diff = (h_63_high - h_63_low) / h_63_low
        h_63_simple_return = h_63.pct_change()
        h_63_log_return = np.log(h_63/h_63.shift(1))
        
        x_index = pd.Series(range(63))+1
   
        for stock in h_63_log_return:
            slope, intercept, r_value, p_value, std_err = stats.linregress(x_index, h_63_log_return[stock][1:])
            score = slope * np.sqrt(252) * r_value**2   # slope scaled by a constant, weighted by the fit's R^2 (the constant does not change the ranking)
            if score == score:                          # NaN check: NaN != NaN
                MomList2.append([stock, score])
 
        #sort in descending order, the momentum score    
        MomList_high = sorted(MomList2, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
            
        MomList_low = sorted(MomList2, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0]) 
            
    #context.alloc = create_weigth(context, data, 'EW') 
    return Momlonglist, Momshortlist
         

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    record(leverage = context.account.leverage)
    record(exposure = context.account.net_leverage)
   
def create_weigth(context, data, method):
    weight = pd.Series()
    h_63 = history(64, "1d", 'price')
    h_63_simple_return = h_63.pct_change()
    
    if method == 'ERC':
        rets_sort = pd.DataFrame()
        for sec in context.longlist:
            rets_sort = rets_sort.append(h_63_simple_return[sec], ignore_index=False)
        std = rets_sort.T.std()
        weight = pd.Series(np.power(std,-2)/sum(np.power(std,-2)), index=std.index).fillna(0)
      
    elif method == 'EW':
        weight = pd.Series(0.95/len(context.longlist), index = context.longlist).fillna(0) 
            
    return weight 

Momentum strategy has been well investigated in the academic world. In a long run, it can generally beat the market. However, due to its mimic characteristics, it may suffer huge loss during distressed time. Sometime, it may fail to capture the trend.

Even in normal periods it is tricky: the ranking logic, the type of momentum signal, and the weighting scheme can all affect performance.
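To make the ranking-logic point concrete, here is a standalone sketch (made-up tickers and numbers, not code from the algo) of one common way to combine a value signal and a momentum signal: rank each factor separately and sort on the average rank, so neither factor's scale dominates.

import pandas as pd

# Hypothetical factor values for three made-up tickers.
ebit_ev = pd.Series({'AAA': 0.15, 'BBB': 0.08, 'CCC': 0.12})    # value: higher = cheaper
mom2_12 = pd.Series({'AAA': 0.30, 'BBB': 0.45, 'CCC': -0.05})   # momentum: higher = stronger

# Rank each factor separately (1 = best) and average the ranks.
value_rank = ebit_ev.rank(ascending=False)
mom_rank = mom2_12.rank(ascending=False)
combined = (value_rank + mom_rank) / 2.0

# Long the best combined ranks (top 30 in the real algo; top 2 here).
longlist = combined.sort_values().index[:2].tolist()   # -> ['AAA', 'BBB']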

This research does not aim to find the best-performing algo, but to map out the workflow, its limits, and ways to improve it. More details can be investigated in the future.

Cool, thanks for sharing. You clearly looked at a variety of different concepts. It's also good to see the impact of the different parts of the strategy in the individual backtests.

May I make a suggestion for a slight improvement? Your algo's leverage keeps creeping up slowly over time. This is a notorious issue when working with the fundamentals data. We had a recent discussion about this, including code to solve the problem: https://www.quantopian.com/posts/value-template-long-only-w-slash-trend-filter. The algo there isn't as fancy as yours, but I think the "ignore_obsolete" part of the code and garyha's code for tracking metrics could be useful.
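For reference, a minimal sketch of the underlying idea (my own reconstruction; the linked post has the actual "ignore_obsolete" code): keep every currently held position in the updated universe, so holdings that drop out of the fundamentals query can still be priced and closed instead of lingering and pushing leverage up.

# Sketch, inside before_trading_start after querying fundamentals_df:
held = [stock for stock in context.portfolio.positions]
context.universe = np.union1d(fundamentals_df.index.values,
                              held + [context.benchmark, context.TLT])
update_universe(context.universe)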

@Origin, thanks for your suggestion. I also noticed the leverage issue; your link really helps. Another issue: my algo rebalances on the first business day of each month, but sometimes it doesn't finish the rebalance that day and keeps buying or selling shares over the following couple of days. In addition, some tickers include "_A", "_B", etc., and I can't filter them out even though I set up rules to do so. Any ideas? Thanks.

Filtering the stock universe down to a clean, fully tradable set is surprisingly difficult. The _A/_B issue can be solved by filtering on "fundamentals.share_class_reference.is_primary_share". I then usually prevent the delayed rebalance by first filtering stocks on dollar volume, i.e. mean daily_volume * price * some fractional constant >= planned purchase size. Also, canceling all open orders at the end of the trading day avoids stragglers in a brute-force way, but potentially leaves money sitting around.
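Sketched in code (assuming the same old Quantopian API as the algo above; the 0.05 fraction and the 5-minute offset are placeholders, not tested values):

# 1) Keep only primary share classes in the fundamentals query, dropping
#    the duplicate "_A"/"_B" listings:
#      .filter(fundamentals.share_class_reference.is_primary_share == True)

# 2) Liquidity pre-filter: only trade names whose average dollar volume
#    comfortably covers the planned purchase.
def liquid_enough(mean_daily_volume, price, planned_purchase):
    return mean_daily_volume * price * 0.05 >= planned_purchase

# 3) Brute-force straggler cleanup: cancel anything still open near the close.
def cancel_all_open_orders(context, data):
    for stock, orders in get_open_orders().iteritems():
        for open_order in orders:
            cancel_order(open_order)

# In initialize():
#   schedule_function(cancel_all_open_orders,
#                     date_rules.every_day(),
#                     time_rules.market_close(minutes=5))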

Here is the original algorithm run in a minute-level backtest; I changed the number of securities to 50 so as not to hit the timeout exception.



# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.

"""
Long-term Value Momentum strategy

This algorithm screens all stocks and ranks them on three factors:

Size: market cap > $2B
Value: high ebit_to_ev
Momentum: past 2 to 12 month cumulative return, past 63 days log return slope

other factors:
Volatility: 30 day high - low range < 15%    (not used)
filter: if current price < mavg(200), clean positions and switch to TLT (check)

Rebalance:
monthly (first day of each month)

Weight:
MVO: mean-variance optimization weight
EW: equal weight (check)
VW: value weight
ERC: equal risk contribution: risk parity, based on volatility

"""
from operator import itemgetter
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import datetime
import math
import talib

def initialize(context):
    context.alloc = pd.Series()
    context.score = pd.Series()
    context.benchmark = sid(8554)             #SPY: SPDR S&P 500 ETF Trust
    context.TLT = sid(23921)                  #TLT: iShares 20+ Year Treasury Bond ETF
    context.last_month = -1 
    set_commission(commission.PerShare(cost = 0.13, min_trade_cost = 1.3))   #approximates IB's US per-share commissions; adjust as needed
    context.leverage = 1.0  #target leverage = 1
    schedule_function(rebalance, 
                      date_rules.month_start(), 
                      time_rules.market_close(minutes=1))                   #1 minute before the close on the first trading day of each month
#
# Symbol selection for value momentum
# - Only query once a month (for backtest performance)
# - Query fundamentals database for largest companies above market cap limit
# - Ensure data available for all active positions
#

def add_ebit_ev(df):
    #replace negative enterprise values with 1.0 to avoid sign flips in EBIT/EV
    ev = df['enterprise_value']
    ev[ev < 0.0] = 1.0
    df['enterprise_value'] = ev
    df['ebit_ev'] = df['ebit'] / df['enterprise_value']
    return df

def add_book_market(df):
    #book-to-market = total book equity / market cap
    df['book_to_market'] = df['book_value_per_share'] * (df['shares_outstanding']*1.0) / (df['market_cap']*1.0)
    return df

def before_trading_start(context):
    #only query database at the beginning of the month
    month = get_datetime().month
    if context.last_month == month:
        return
    context.last_month = month
    
    fundamentals_df = get_fundamentals(
        query(
            fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.book_value_yield,
            fundamentals.asset_classification.morningstar_sector_code, 
            fundamentals.valuation.enterprise_value, 
            fundamentals.income_statement.ebit,          
            fundamentals.income_statement.ebitda)
        .filter(fundamentals.valuation.market_cap > 2e9)
        .filter(fundamentals.valuation_ratios.ev_to_ebitda > 0)
        .filter(fundamentals.valuation.enterprise_value > 0)
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103)   #exclude Financial Services
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207)   #exclude Utilities
        .filter(fundamentals.valuation.shares_outstanding != None)
        .order_by(fundamentals.valuation_ratios.ev_to_ebitda.asc())
        .limit(50)).T
    
    context.universe = np.union1d(fundamentals_df.index.values, [context.benchmark])
    # Keep only the stocks that fit the criteria
    context.stocks = [stock for stock in fundamentals_df]
    # Update context.fundamental_df with the securities that we need
    context.fundamentals_df = fundamentals_df[context.stocks]
    update_universe(context.universe)
     
# Allocation 
# - Retrieve the acquirer's multiple (EBIT/EV) for all tracked stocks
# - Rank all tracked stocks based on score
# - Slope of the last 63 days' log returns, weighted by the R^2 of the linear fit
# - Select the top 30 stocks in the ranking
# - Risk parity weight: each stock contributes equal risk to the portfolio

def rebalance(context, data):

    SPY_50_mavg = data[context.benchmark].mavg(50)
    SPY_200_mavg = data[context.benchmark].mavg(200)
    SPY_current = data[context.benchmark].price
    portfolio_positions = context.portfolio.portfolio_value   #total portfolio value (despite the name)
    portfolio_cash = context.portfolio.cash
        
    #update long list and short list
    indicator = "MOM2_12_cul_return"
    #indicator = "90_return_slope"
    context.longlist, context.shortlist = gennerate_MomList(context, data, indicator)
   
    #Weighting
    #method = 'ERC'            #equal risk contribution
    method = 'EW'              #equal weight
    context.alloc = create_weigth(context, data, method) 
    print context.alloc
    #if (SPY_50_mavg < SPY_200_mavg):
    #    w_short = 0.99/len(context.shortlist)                #equal weight size for long position
    print "reb long %d symbols" % len(context.longlist)
    #print "reb short %d symbols" % len(context.shortlist)
 
    # With trend filter
    if SPY_current > SPY_200_mavg and portfolio_cash != 0: 
        for stock in context.portfolio.positions:
        #if stock not in context.shortlist:
            if stock not in context.longlist:
                order_target(stock, 0)    
        #Order
        for stock in context.longlist:
            if stock in data:
                order_target_percent(stock, context.alloc[stock])
                    
    elif (SPY_current < SPY_200_mavg) and portfolio_positions != 0:
        for stock in context.portfolio.positions:
            if stock in data and stock != context.TLT:
                order_target_percent(stock, 0)
        order_target_percent(context.TLT, 0.95)   #park 95% in Treasuries while below the 200-day average
    
             
def gennerate_MomList(context, data, indicator):
    if indicator == "MOM2_12_cul_return":
        df_252_prices = history(252,'1d','price')
        #Past MOM2-12 cumulative return: skip the most recent month (21 trading
        #days) to avoid the 1-month reversal in stock returns
        MomList = []
        for stock in df_252_prices:
            MOM2_12_return = (df_252_prices[stock][-21] - df_252_prices[stock][0])/df_252_prices[stock][0]
            if not np.isnan(MOM2_12_return):
                MomList.append([stock, MOM2_12_return])
        
        #sort cumulative return, and get high momentum and low momentum list    
        MomList_high = sorted(MomList, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
        
        MomList_low = sorted(MomList, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0])
        
    elif indicator == "90_return_slope":
        MomList2 = []
        h_63 = history(64, "1d", 'price')
        h_63_high = history(63, "1d", 'high')
        h_63_low = history(63, "1d", 'low')
        #the high-low range and simple returns relate to the (unused) volatility
        #filter from the docstring; only the log returns are used below
        max_diff = (h_63_high - h_63_low) / h_63_low
        h_63_simple_return = h_63.pct_change()
        h_63_log_return = np.log(h_63/h_63.shift(1))
        
        x_index = pd.Series(range(63))+1
   
        for stock in h_63_log_return:
            slope, intercept, r_value, p_value, std_err = stats.linregress(x_index, h_63_log_return[stock][1:])
            #annualize the slope and weight it by R^2 to penalize noisy fits
            score = slope * np.sqrt(252) * r_value**2
            if score == score:        #NaN != NaN, so this drops NaN scores
                MomList2.append([stock, score])
 
        #sort the momentum scores in descending order
        MomList_high = sorted(MomList2, key=itemgetter(1), reverse=True)[0:30]
        Momlonglist = []
        for stock in MomList_high:
            Momlonglist.append(stock[0])
            
        MomList_low = sorted(MomList2, key=itemgetter(1), reverse=False)[0:30]
        Momshortlist = []
        for stock in MomList_low:
            Momshortlist.append(stock[0]) 
            
    #context.alloc = create_weigth(context, data, 'EW') 
    return Momlonglist, Momshortlist
         

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    record(leverage = context.account.leverage)
    record(exposure = context.account.net_leverage)
   
def create_weigth(context, data, method):
    weight = pd.Series()
    h_63 = history(64, "1d", 'price')
    h_63_simple_return = h_63.pct_change()
    
    if method == 'ERC':
        #inverse-variance weights: an approximation of equal risk
        #contribution that ignores correlations between the stocks
        rets_sort = pd.DataFrame()
        for sec in context.longlist:
            rets_sort = rets_sort.append(h_63_simple_return[sec], ignore_index=False)
        std = rets_sort.T.std()
        weight = pd.Series(np.power(std,-2)/sum(np.power(std,-2)), index=std.index).fillna(0)
      
    elif method == 'EW':
        weight = pd.Series(0.95/len(context.longlist), index = context.longlist).fillna(0)   #0.95 leaves a 5% cash buffer
            
    return weight 