A weekly view of "A simple momentum rotation system for stocks"

Anthony Garner started a nice thread [here] about monthly momentum based rotation among stocks.
Back tests by several members show that the strategy can work very well, but its returns are very sensitive (~10x) to the starting day.

I've started this new thread to discuss the behavior of Garner's general strategy with weekly rotation and several other modifications that were mostly presented in the original thread. The weekly vs monthly behavior is different enough that a new thread is warranted.

The results presented below are for weekly rotation with the following rules applied to stock selection
a) are in the top 3000 by market cap
b) have net gain over the past 252 days
c) have positive cash flow
d) have an average daily dollar volume of at least $500k over the past 60 days

As with the monthly version, the strategy switches between stocks and bonds based on a fast vs slow SMA of a proxy index. In this case VTI is used instead of SPY because the strategy considers the top 3000 stocks and often invests outside of the SP500. I also disabled the stop loss and profit taking sections of Garner's code, as they should not have much effect over a one-week holding period.

offset= 0      total return 1422%    Alpha 1.03    Sharpe 4.34    Max DD 36%    Vol 0.25
offset= 1      total return 1366%    Alpha 0.98    Sharpe 4.17    Max DD 36%    Vol 0.25    [back test attached]
offset= 2      total return 2193%    Alpha 1.63    Sharpe 6.96    Max DD 32%    Vol 0.24
offset= 3      total return 1543%    Alpha 1.13    Sharpe 4.81    Max DD 32%    Vol 0.24
offset= 4      total return 1794%    Alpha 1.32    Sharpe 5.65    Max DD 26%    Vol 0.24

This is certainly a more consistent set of results. Perhaps others can recommend how to further improve behavior (max drawdown, length of drawdown periods, ...). I also tried to remove dead code and provide some comments within the code as to intended behavior. At the top of the file are notes related to some quick-look trade results. Hope this is helpful.

[edit: for poor grammar and to replace the back test with one whose summary results are consistent with this post. The original post had a back test with the correct result, but it was run before I added the day_offset summary to my notes in the file header.]
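For anyone wanting to reproduce the day_offset sweep outside Quantopian: the offset simply shifts which weekday the weekly rebalance fires on. Below is my own minimal stand-in for `date_rules.week_start(days_offset=n)`; it approximates "n-th trading day of the week" as Monday plus the offset and ignores market holidays, so it is only a sketch of the real trading calendar.

```python
import datetime as dt

def rebalance_days(start, end, days_offset=0):
    """Yield the dates a weekly rebalance with the given offset would fire.

    Approximation: days_offset=0 -> Monday, 1 -> Tuesday, ... 4 -> Friday.
    Real schedulers skip holidays; this sketch does not.
    """
    day = start
    while day <= end:
        if day.weekday() == days_offset:  # 0=Mon ... 4=Fri
            yield day
        day += dt.timedelta(days=1)

# Example: the Wednesdays (offset 2) of January 2003
jan = list(rebalance_days(dt.date(2003, 1, 1), dt.date(2003, 1, 31), days_offset=2))
print(jan)  # Jan 1, 8, 15, 22, 29 of 2003
```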
"""
Adapted from "A simple momentum rotation system for stocks"
https://www.quantopian.com/posts/a-simple-momentum-rotation-system-for-stocks

PF 2016_0807: The unmodified performance of this algorithm is remarkable from 1/4/03 to 11/30/15
Total Returns 1287%    Benchmark 192.5%    Max Drawdown 50.4%
Alpha 0.87    Beta 0.85    Sharpe 3.25    Volatility 0.30

Method outline is:
Buy and hold best 10 of 3000 stocks each month
During the month sell big losers (stop loss) and big winners (profit taking)
Selection considers
- Four momentum factors over 20, 60, 125 and 252 days
- Efficiency threshold = 0.031 based on 252 day return vs sum of daily High minus Low

**PF 2016_0807: Comments relative to the monthly version of the strategy

PF 2016_0807: The original post successfully outlines a method of use to the community.
There was no attempt to make this a production-ready algorithm, so some problems and uncertain features exist.
A few of these impacted my ability to understand what was happening, so I tried to resolve them (perhaps only to myself):
1) although this is nominally a long-only algo, the daily rebalance can result in shorting
2) liquidity problems exist even when starting with only $100k. Leverage is roughly 35% to 110%. The number of assets is roughly 4 to 15 vs the defined top-10.
3) the algorithm lacks any logic to exit stocks during prolonged drawdown periods. Real investors would have exited a few times from 2003 through 2015.
4) the utility of the efficiency factor test is not clear. The threshold of 0.031 appears to be oddly low as efficiency could easily be much larger.
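To make the "oddly low threshold" point concrete, here is a sketch of the efficiency factor as I read its description above: the 252-day net price move divided by the sum of each day's High minus Low. The function name and the example data are my own, not from Garner's code.

```python
import numpy as np

def efficiency(close, high, low):
    """Net move over the window divided by the total daily-range 'travel'."""
    net_move = close[-1] - close[0]
    travel = np.sum(high - low)  # sum of daily High minus Low
    return net_move / travel

# Example: a stock drifting up 1 point per day with a 2-point daily range.
days = 252
close = np.arange(100.0, 100.0 + days)
high = close + 1.0
low = close - 1.0
# net move = 251, travel = 252 * 2 = 504 -> efficiency ~ 0.5,
# well above the 0.031 threshold, illustrating why that cutoff looks low
print(efficiency(close, high, low))
```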

PF 2016_0807: Below is a summary of what I did to resolve/improve issues 1-3 and my finding that the utility of the efficiency function can be had more simply by requiring the 252-day return (factor_4) to be > 0.0.
I tried to leave the rest of the algorithm as is. Performance is evaluated over the same 1/4/2003 to 11/30/2015 period as the original posting. Another tester might investigate other interesting features of Garner's algorithm (ranking periods, profit taking logic, ...)

PF 2016_0807: Shorting issue (resolved in one change)
I modified the code to issue sell orders for obsolete positions before issuing buy orders for new positions.
This has resolved the problem and improved overall return as the stocks being shorted were probably not good shorting candidates.

PF 2016_0807: Liquidity problems (resolved in three changes)
Leverage often exceeds 1.0 due to an inability to sell obsolete positions in a single trading session.
Leverage 1: Add a function to daily rebalance to continue sales of these positions
This did drive the leverage down to 1.0 quickly in all but a few cases.
As expected the total return also dropped as the average leverage was reduced and more trade fees were paid.
Total Returns 1163%    Benchmark 192.5%    Max Drawdown    52.1%
Alpha    0.78    Beta    0.88    Sharpe    2.85    Volatility    0.31

PF 2016_0807: Leverage 2: Add Average Daily Dollar Volume (ADDV) as a filter factor.
Consider only stocks with ADDV > $500k over the past 20 days.
This nearly eliminated the need to sell obsolete stocks over multiple days until the portfolio size got much bigger (~$500k).
This did improve overall returns
Total Returns 1314%    Benchmark 192.5%    Max Drawdown    48.1%
Alpha    0.88    Beta    0.89    Sharpe    3.17    Volatility    0.31
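For clarity, the quantity being screened here can be sketched as a plain function (my own helper for illustration, not the Quantopian `AverageDollarVolume` factor itself):

```python
import numpy as np

def avg_daily_dollar_volume(close, volume):
    """Mean of daily close * daily share volume over the lookback window."""
    return np.mean(np.asarray(close) * np.asarray(volume))

# A $10 stock trading 100k shares/day averages $1M/day, so it passes a $500k screen
print(avg_daily_dollar_volume([10.0] * 20, [100_000] * 20))
```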

PF 2016_0807: Leverage 3: Allow the number of equities to increase with portfolio value
Try context.holdings = max(10, int(portfolio_value/30e3))
As expected this reduced volatility. It also had some benefit to overall return
Total Returns 1356%    Benchmark 192.5%    Max Drawdown    48.5%
Alpha    0.91    Beta    0.91    Sharpe    3.72    Volatility    0.28
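The sizing rule above can be sketched as a standalone function (names are mine): one $30k slot per holding, never fewer than 10 positions.

```python
def target_holdings(portfolio_value, slot=30e3, min_holdings=10):
    """Number of equities to hold: grow with portfolio value, floor at 10."""
    return max(min_holdings, int(portfolio_value / slot))

print(target_holdings(100e3))  # 10 (minimum applies below $300k)
print(target_holdings(600e3))  # 20
print(target_holdings(1.5e6))  # 50
```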

PF 2016_0807: Drawdown protection (improved to acceptable level)
Add a simple drawdown protection based on simple moving averages of SPY
If SPY_SMA_fast < SPY_SMA_slow, then go to cash; else use the algorithm
Fast period should be on the order of the shortest momentum filter (20 days)
Since an SMA filter responds more slowly than an EMA, a period of less than 20 days is desired.
Slow period should be several multiples of the fast period, but not slower than the overall algo.
The geometric average of the four periods (20,60,125,252) is 78 days
A 15/80 day test provided good drawdown reduction (26% vs 48%) with about 10% loss in total return
15/80 Cash  Total return 1204%    Alpha 0.85    Sharpe 4.00    Max DD 26%
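The 15/80 test can be sketched outside Quantopian as follows (my own stand-in; in the algorithm it is applied to 80 days of daily prices of the proxy fund):

```python
import numpy as np

def buy_stocks_signal(prices, fast=15, slow=80):
    """True -> hold stocks; False -> exit to cash/bonds (safe set)."""
    prices = np.asarray(prices, dtype=float)
    return prices[-fast:].mean() > prices[-slow:].mean()

up = np.linspace(90, 110, 80)    # steady uptrend: recent mean above long mean
down = np.linspace(110, 90, 80)  # steady downtrend: recent mean below long mean
print(buy_stocks_signal(up), buy_stocks_signal(down))  # True False
```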

PF 2016_0807: Most asset allocation models would exit to bonds vs cash, so that was tried as well
Bond set = [TLT, IEF, AGG]
15/80 Bonds   Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
This is a nice result. A somewhat better result might be had by allowing rotation between stocks, bonds, cash, or some combination of stocks/bonds, but that is beyond my current purpose.

PF 2016_0807: What is effect of the ADDV limit?
Varying the ADDV limit, with $30k per holding and $100k initial investment.
Exiting to bonds when indicated by 15/80 SMA test
$0.2M:  Total return 1810%    Alpha 1.34    Sharpe 6.15    Max DD 20%
$0.5M:  Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
$1.5M:  Total return 1546%    Alpha 1.13    Sharpe 5.00    Max DD 20%

PF 2016_0807: What is the effect of the efficiency threshold?
I tried several values as shown below. Any limit > 0.0 has a good result until some point above 0.5.
Garner's 0.031 recommendation for his top-10 algorithm looks good.
My finding is for a variable and larger set of equities (10 to 60 in any trial).
PF 2016_0807: Intermediate is the return reported for the week of 1/3/2010 (near the midpoint)
Limit 0.0      total return 1815%    intermediate 848%    Sharpe 6.15
Limit 0.031    total return 1790%    intermediate 836%    Sharpe 6.07
Limit 0.1      total return 1786%    intermediate 818%    Sharpe 6.05
Limit 0.2      total return 1784%    intermediate 813%    Sharpe 6.03
Limit 0.4      total return 1799%    intermediate 791%    Sharpe 6.09
Limit 0.5      total return 1764%    intermediate 809%    Sharpe 5.97
Limit 0.7      total return 1550%    intermediate 739%    Sharpe 5.20
==> might as well use a limit of 0.0
==> This is equivalent to stating factor_4 > 1.0, which is easier to implement.

PF 2016_0809: Thomas Chang published a more compact implementation of the four-factor ranking.
I'll probably use this in a future version of this strategy.

PF 2016_0809: However, problems remain.
Most notably, there is a very large sensitivity to starting date.
Garner made several posts showing this wildly variable performance.
Over the span of 1/4/2003 to 11/30/2015 the total return can be as little as 200% with no improvement in volatility or drawdown vs SP500 buy-and-hold.
Starting date sensitivity is a common problem in asset rotation strategies, but this one is particularly sensitive.

**PF 2016_0809: End of comments relative to the monthly version of the strategy

**PF 2016_0814: Start of comments relative to the weekly version of the strategy

PF 2016_0814: Potential remedies for rotation method and timing
a) rebalance more frequently, perhaps weekly
b) initiate multiple overlapping positions (invest weekly and hold monthly)
c) consider using a small cap proxy for the entry/exit test.
PF 2016_0814: Potential remedies for asset selection
a) investigate whether some very simple fundamentals screening could reduce the likelihood of buying troublesome stocks
b) investigate different momentum models (slope, percent below high)

PF 2016_0814: Rebalance weekly
Below are returns as a function of days_offset
offset= 0      total return  719%    Alpha 0.48    Sharpe 3.90    Max DD 47%  Vol 0.28
offset= 1      total return  317%    Alpha 0.17    Sharpe 0.73    Max DD 52%  Vol 0.31
offset= 2      total return 1142%    Alpha 0.48    Sharpe 3.14    Max DD 29%  Vol 0.28
offset= 3      total return 1308%    Alpha 0.94    Sharpe 3.72    Max DD 29%  Vol 0.27
offset= 4      total return 1762%    Alpha 1.29    Sharpe 5.35    Max DD 33%  Vol 0.25

PF 2016_0814: Rebalance weekly and check that free cash flow is positive
Rationale: a quick look at stocks selected during periods of poor algorithm performance showed poor fundamentals.
Positive FCF might be one of the simplest tests for "minimally acceptable" fundamentals.
See returns as a function of days_offset
offset= 0      total return 1024%    Alpha 0.79    Sharpe 3.29    Max DD 44%  Vol 0.25
offset= 1      total return 1201%    Alpha 0.86    Sharpe 3.70    Max DD 34%  Vol 0.25
offset= 2      total return 1912%    Alpha 1.41    Sharpe 6.16    Max DD 26%  Vol 0.24
offset= 3      total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
offset= 4      total return 1533%    Alpha 1.12    Sharpe 5.03    Max DD 28%  Vol 0.23
==> This is more consistent with regard to return, and volatility is improved somewhat, but max DD is still too high.
PF 2016_0814: Effect of proxy choice
Rationale: the strategy considers the top 3000 stocks, so a broader proxy should be used.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days), here are results for some broad market candidates:
SPY (SP500)         total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
IWV (Russell 3000)  total return 1320%    Alpha 0.95    Sharpe 4.13    Max DD 30%  Vol 0.24
VTI (all cap)       total return 1312%    Alpha 0.95    Sharpe 4.10    Max DD 30%  Vol 0.24
Unfortunately I can't find an equal-weighted fund that dates from 2003.
==> As expected, a broad index (IWV or VTI) may be better

PF 2016_0814: Revisiting liquidity
I'm still encountering some liquidity problems. Several times per year a stock will take several days to sell out of the position.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days, no filtering for price_vs_max), here are results for some pairings of ADDV periods and values:
20d/500k    total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
60d/500k    total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%  Vol 0.24
==> use the 60 day test

PF 2016_0814: Effect of limiting performance vs recent maximum
Rationale: some stocks may experience a very large recent spike in price, then enter a period of decline.
Although in decline, the large price jump keeps the stock in our selection set.
Implement a simple filter:
max_N = max close in past N days
price_vs_max = close[-1]/max_N
if price_vs_max > threshold then the stock is OK to use
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days), here are results for some pairings of N and threshold:
20d/0.0     total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%  Vol 0.24
20d/0.85    total return 1116%    Alpha 0.80    Sharpe 3.48    Max DD 26%  Vol 0.24
60d/0.7     total return 1182%    Alpha 0.85    Sharpe 3.71    Max DD 28%  Vol 0.24
60d/0.85    total return 1147%    Alpha 0.82    Sharpe 3.59    Max DD 28%  Vol 0.24
==> This seems unlikely to be a beneficial test

PF 2016_0814: Investors often chase the shiny object.
Define augmented momentum to provide a bonus for the best single day in the period:
best = np.nanmax(np.diff(close,axis=0),axis=0)
out[:] = (close[-1]/close[0]) + (best/close[0])

Augmented momentum for Factor_1, simple_momentum for Factor_2, 3, 4
offset= 0      total return 1102%    Alpha 0.78    Sharpe 3.30    Max DD 47%  Vol 0.25
offset= 1      total return 1167%    Alpha 0.83    Sharpe 3.55    Max DD 36%  Vol 0.25
offset= 2      total return 2565%    Alpha 1.92    Sharpe 8.42    Max DD 25%  Vol 0.23
offset= 3      total return 1247%    Alpha 0.90    Sharpe 3.87    Max DD 28%  Vol 0.24
offset= 4      total return 1481%    Alpha 1.08    Sharpe 4.65    Max DD 31%  Vol 0.24

Augmented momentum for Factor_1,2,3,4
offset= 0      total return 1392%    Alpha 1.01    Sharpe 4.27    Max DD 44%  Vol 0.25
offset= 2      total return 2756%    Alpha 2.07    Sharpe 9.13    Max DD 29%  Vol 0.23
offset= 3      total return 1702%    Alpha 1.25    Sharpe 5.50    Max DD 28%  Vol 0.24
==> This is interesting, but using it feels like data fitting, so I won't

PF 2016_0814: Can we improve max drawdown by adjusting the stop loss parameter?
Garner's original strategy used a 75% stop loss limit.
This is probably a good value for monthly rebalance, but a tighter limit might make sense for weekly rebalancing.
Check this vs a middling scenario (weekly with positive FCF, offset = 0 days, no filtering for price_vs_max, simple_momentum model, 60 day ADDV > $500k).
Here are results for various stop loss limits
0%   total return 1263%    Alpha 0.91    Sharpe 3.93    Max DD 31%    Vol 0.24
60%   total return 1231%    Alpha 0.88    Sharpe 3.84    Max DD 31%    Vol 0.24
75%   total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%    Vol 0.24
80%   total return 1095%    Alpha 0.78    Sharpe 3.20    Max DD 30%    Vol 0.24
85%   total return 1036%    Alpha 0.73    Sharpe 3.32    Max DD 30%    Vol 0.24
90%   total return  787%    Alpha 0.55    Sharpe 2.56    Max DD 29%    Vol 0.23
==> This result surprises me. It must be that a significant fraction of the stocks that lose 25% during the week later recover some of this loss.
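For clarity, the trailing-stop mechanics being tuned here can be sketched as follows (a simplification of the daily_rebalance logic; the function name is mine). The stop price ratchets up to stop_pct times the latest price and triggers a sell once the price falls below it.

```python
def update_stop(stop_price, price, stop_pct=0.75):
    """Return the updated (ratcheted) stop price and whether to sell."""
    stop_price = max(stop_price, stop_pct * price)  # stop only moves up
    sell = price < stop_price
    return stop_price, sell

s, sell = update_stop(0.0, 100.0)  # stop ratchets to 75, no sell
print(s, sell)                     # 75.0 False
s, sell = update_stop(s, 70.0)     # price has dropped below the stop
print(s, sell)                     # 75.0 True
```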

PF 2016_0814: Weekly rebalance baseline
Let's put together some of the apparently better ideas
1. Rebalance weekly vs monthly
2. Decide whether to be in stocks or bonds (safe) based on fast vs slow SMA of VTI (All cap index)
3. Only consider stocks that
a) are in the top 3000 by market cap
b) have net gain over the past 252 days
c) have positive cash flow
d) have an average daily dollar volume of at least $500k over the past 60 days
4. Select the top N of these stocks based on combined ranking over 20, 60, 125, 252 days
5. Set the value N to be portfolio value divided by $30k with a minimum of 10 stocks
6. Define a safe set of bonds to hold when not in stocks
7. Disable Garner's stop loss and profit taking as these don't benefit weekly strategy

offset= 0      total return 1422%    Alpha 1.03    Sharpe 4.34    Max DD 36%  Vol 0.25
offset= 1      total return 1366%    Alpha 0.98    Sharpe 4.17    Max DD 36%  Vol 0.25
offset= 2      total return 2193%    Alpha 1.63    Sharpe 6.96    Max DD 32%  Vol 0.24
offset= 3      total return 1543%    Alpha 1.13    Sharpe 4.81    Max DD 32%  Vol 0.24
offset= 4      total return 1794%    Alpha 1.32    Sharpe 5.65    Max DD 26%  Vol 0.24

PF 2016_0814: Still have liquidity problems, especially with low share price stocks.
Try filtering those

Using the offset = 0d case above
Price > $0     total return 1422%    Alpha 1.03    Sharpe 4.34    Max DD 36%  Vol 0.25
Price > $3     total return 1343%    Alpha 0.97    Sharpe 4.10    Max DD 36%  Vol 0.25
Price > $5     total return 1069%    Alpha 0.75    Sharpe 3.22    Max DD 33%  Vol 0.25

The progression from $0 to $3 to $5 did reduce the number of partial order messages, but also degraded returns for the case of days_offset=0.

Checking the result for all five day_offset cases and Price > $3:
offset= 0      total return 1343%    Alpha 0.97    Sharpe 4.10    Max DD 36%  Vol 0.25
offset= 1      total return 1368%    Alpha 0.99    Sharpe 4.17    Max DD 36%  Vol 0.25
offset= 2      total return 2292%    Alpha 1.71    Sharpe 7.36    Max DD 32%  Vol 0.24
offset= 3      total return 1514%    Alpha 1.10    Sharpe 4.72    Max DD 32%  Vol 0.24
offset= 4      total return 1609%    Alpha 1.17    Sharpe 5.03    Max DD 28%  Vol 0.24
==> This slight overall reduction is OK, but I'll continue to investigate liquidity fixes.

**PF: End of comments relative to the weekly version of the strategy

**PF: Parking lot of things to check later. List in no particular order.
- how to reduce drawdown spans (strategy can result in ~3y periods with no net gain)
- how to avoid occasional liquidity (partial order) problems
- evaluating possibility of nonuniform weighting
- implementing overlapping holding periods (maybe order every 2 days and hold for 10)
- eliminating use of the built-in market_cap() method that is not supported in live trading
- evaluating results in a tear sheet
- evaluating results with the AlphaLens tool
- how to safely use leverage > 1.0 (see Guy Fleury posts)
- picking a better safe set (little thought went into this one)
- checking momentum of each safe asset before purchase (...
  in or cash for each)
- exploring alternative entry/exit logic (vs the simple fast/slow SMA)

**PF: that is the parking lot for now
"""

#
# import methods and data
#
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import AverageDollarVolume
import numpy as np
from collections import defaultdict

#
# define custom classes
#
class simple_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1
    def compute(self, today, assets, out, close):
        out[:] = close[-1]/close[0]

class augmented_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1
    def compute(self, today, assets, out, close):
        best = np.nanmax(np.diff(close,axis=0),axis=0)
        out[:] = (close[-1]/close[0]) + (best/close[0])

class price_vs_max(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 252
    def compute(self, today, assets, out, close):
        out[:] = close[-1]/np.nanmax(close, axis=0)

class market_cap(CustomFactor):
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding]
    window_length = 1
    def compute(self, today, assets, out, close, shares):
        out[:] = close[-1] * shares[-1]

class get_fcf_per_share(CustomFactor):
    inputs = [morningstar.valuation_ratios.fcf_per_share]
    window_length = 1
    def compute(self, today, assets, out, fcf_per_share):
        out[:] = fcf_per_share

class get_last_close(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1
    def compute(self, today, assets, out, close):
        out[:] = close[-1]

def initialize(context):
    #
    # schedule methods
    #
    schedule_function(func=periodic_rebalance,
                      date_rule=date_rules.week_start(days_offset=1),
                      time_rule=time_rules.market_open(),
                      half_days=True)
    schedule_function(func=daily_rebalance,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(hours=1))
    #
    # set portfolio parameters
    #
    set_do_not_order_list(security_lists.leveraged_etf_list)
    context.acc_leverage = 1.00
    context.min_holdings = 10
    #
    # set profit taking and stop loss parameters
    #
    context.profit_taking_factor = 0.01
    context.profit_taking_target = 10.0  # set much larger than 1.0 to disable
    context.profit_target = {}
    context.profit_taken = {}
    context.stop_pct = 0.0  # set to 0.0 to disable
    context.stop_price = defaultdict(lambda: 0)
    #
    # Set commission model to be used
    #
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1.00))
    #
    # Define safe set (of bonds)
    #
    context.safe = [
        sid(23870),  # IEF
        sid(23921),  # TLT
        sid(25485)   # AGG
    ]
    #
    # Define proxy to be used as proxy for overall stock behavior
    # set default position to be in safe set (context.buy_stocks = False)
    #
    context.canary = sid(22739)
    context.buy_stocks = False
    #
    # Establish pipeline
    #
    pipe = Pipeline()
    attach_pipeline(pipe, 'ranked_stocks')
    #
    # Define the four momentum factors used in ranking stocks
    #
    factor1 = simple_momentum(window_length=20)
    pipe.add(factor1, 'factor_1')
    factor2 = simple_momentum(window_length=60)
    pipe.add(factor2, 'factor_2')
    factor3 = simple_momentum(window_length=125)
    pipe.add(factor3, 'factor_3')
    factor4 = simple_momentum(window_length=252)
    pipe.add(factor4, 'factor_4')
    #
    # Define other factors that may be used in stock screening
    #
    factor5 = get_fcf_per_share()
    pipe.add(factor5, 'factor_5')
    factor6 = AverageDollarVolume(window_length=60)
    pipe.add(factor6, 'factor_6')
    factor7 = get_last_close()
    pipe.add(factor7, 'factor_7')

    factor_4_filter = factor4 > 1.0    # only consider stocks with positive 1y growth
    factor_5_filter = factor5 > 0.0    # only consider stocks with positive FCF
    factor_6_filter = factor6 > 0.5e6  # only consider stocks trading > $500k per day
    # factor_7_filter = factor7 > 3.00  # only consider stocks that close above this value
    #
    # Establish screen used to establish candidate stock list
    #
    mkt_screen = market_cap()
    stocks = mkt_screen.top(3000)
    total_filter = (stocks
                    & factor_4_filter
                    & factor_5_filter
                    & factor_6_filter)

    pipe.set_screen(total_filter)
    #
    # Establish ranked stock list
    # (rank each factor so that the highest momentum gets the lowest rank,
    #  then average; the ranking lines were lost in transcription and are restored here)
    #
    factor1_rank = factor1.rank(mask=total_filter, ascending=False)
    factor2_rank = factor2.rank(mask=total_filter, ascending=False)
    factor3_rank = factor3.rank(mask=total_filter, ascending=False)
    factor4_rank = factor4.rank(mask=total_filter, ascending=False)
    combo_raw = (factor1_rank+factor2_rank+factor3_rank+factor4_rank)/4
    pipe.add(combo_raw, 'combo_raw')
    pipe.add(combo_raw.rank(mask=total_filter), 'combo_rank')

def before_trading_start(context, data):
    #
    # Calculate maximum number of stocks to buy
    #
    n_30 = int(context.portfolio.portfolio_value/30e3)
    context.holdings = max(context.min_holdings, n_30)
    #
    # Screen to find the current top stocks
    #
    context.output = pipeline_output('ranked_stocks')
    ranked_stocks = context.output
    context.stock_factors = ranked_stocks.sort(['combo_rank'], ascending=True).iloc[:context.holdings]
    context.stock_list = context.stock_factors.index
    #
    # Use fast/slow SMA test of proxy to determine whether to be in stocks vs safe
    #
    Canary = data.history(context.canary, 'price', 80, '1d')
    Canary_fast = Canary[-15:].mean()
    Canary_slow = Canary.mean()
    context.buy_stocks = Canary_fast > Canary_slow  # re-evaluated daily; False -> safe set

def daily_rebalance(context, data):
    #
    # Do daily maintenance
    #    a) sell obsolete positions
    #    b) implement stop loss
    #    c) implement profit taking
    #    d) record values for backtest display
    #
    #
    # Sell any holdings that are not in context.this_periods_list
    #
    for stock in context.portfolio.positions:
        if stock not in context.this_periods_list:
            order_target(stock, 0)
    #
    # update stop loss limits and sell any stocks that are below their limits
    #
    for stock in context.portfolio.positions:
        price = data.current(stock, 'price')
        context.stop_price[stock] = max(context.stop_price[stock],
                                        context.stop_pct * price)
        if price < context.stop_price[stock]:
            order_target(stock, 0)
            context.stop_price[stock] = 0
            log.info("%s stop loss" % stock)
    #
    # Profit take if profit target is met
    # Skip this for safe set assets
    #
    takes = 0
    for stock in context.portfolio.positions:
        if stock not in context.safe:
            if data.can_trade(stock) and data.current(stock, 'close') > context.profit_target[stock]:
                context.profit_target[stock] = data.current(stock, 'close')*1.25
                profit_taking_amount = context.portfolio.positions[stock].amount * context.profit_taking_factor
                takes += 1
                log.info(profit_taking_amount)
                order_target(stock, profit_taking_amount)
    #
    # Record parameters
    #
    n100 = len(context.output)/100
    record(leverage=context.account.leverage,
           positions=len(context.portfolio.positions),
           t=takes,
           candidates=n100)

def periodic_rebalance(context, data):
    #
    # rebalance portfolio based on most recent context.buy_stocks signal
    #
    if context.buy_stocks:
        #
        # rebalance portfolio in stocks
        #
        context.this_periods_list = context.stock_list
        #
        # sell any holdings not in this period's stock list
        #
        for stock in context.portfolio.positions:
            if stock not in context.this_periods_list:
                order_target(stock, 0)
        #
        # equally weight portfolio over assets that can trade
        # don't buy stock if its 20d momentum (Factor_1) is not positive
        # set profit_target threshold based on recent close
        #
        weight = context.acc_leverage / len(context.stock_list)
        p_tgt = context.profit_taking_target
        for stock in context.stock_list:
            if stock in security_lists.leveraged_etf_list:
                continue
            if data.can_trade(stock) and context.stock_factors.factor_1[stock] > 1:
                order_target_percent(stock, weight)
                context.profit_target[stock] = data.current(stock, 'close')*p_tgt
    #
    # otherwise put portfolio into safe set
    #
    else:
        context.this_periods_list = context.safe
        #
        # sell any holdings not in safe set
        #
        for stock in context.portfolio.positions:
            if stock not in context.safe:
                order_target(stock, 0)
        #
        # equally weight portfolio over safe assets that can trade
        #
        n = 0
        for stock in context.safe:
            if data.can_trade(stock):
                n += 1
        if n > 0:
            weight = 1.0/n
            for stock in context.safe:
                if data.can_trade(stock):
                    order_target_percent(stock, weight)
45 responses

As a consistency check, I used the morningstar.valuation.market_cap instead of the product of USEquityPricing.close and morningstar.valuation.shares_outstanding in the class market_cap(CustomFactor); that is,

class market_cap(CustomFactor):
    inputs = [morningstar.valuation.market_cap]
    window_length = 1
    def compute(self, today, assets, out, mcap):
        out[:] = mcap[-1]



The resulting backtest has a return of 639% instead of 1366%, a surprising drop of over 50%.

Peter, great job, thanks so much. From now on, my mods for this strategy will be based on your version of the SMRS. Thanks again for providing a nice example of aesthetic coding in Q. I'm sure it will help me, and others.

We should all agree that the SMRS is a trend following system. We should also agree that it tries to take positions in the highest momentum stocks out of the 3,000 highest market caps in its universe.

Momentum is set simply as Δp over its lookback periods. We would also agree that none of us will know which stock will be traded, in what quantity, at what time, or at what price. All we do know is that, because we are sorting on momentum and taking the top of the list, there will always be one — a top of the list, that is. We also all know that none of us can cheat, doctor the data, use only stocks that work, peek, or substitute the data in any way, so what we see from a backtest is just the output of the trading script. If I run Peter's program, I will get the same answer as everybody else who runs it.

The SPY, or VTI, SMA crossover serves as a decision surrogate to open or close a trading window. In itself, it is maybe not enough to outperform the market.

Under the open window condition, the system tries to extract a profit from a sea of variance by observing that a trend is more likely to continue than to reverse on a daily basis. And if the trading window is open, it says that, on average, the majority of stocks have an upward slope. So, there won't be a lack of potential candidates to trade with.

Because you are playing in a positive window of opportunities, you should have, on average, the ability to capture some of the average upward drift. And since you are ready to exit all trades as soon as the trading window closes, leverage becomes an option. A kind of conviction level in what you think your trading strategy can do. You know the outcome will be positive, and you are ready to back up your conviction by using leverage.

You know that going forward all the numbers will be different, all the trades will be different. Changing a single number of that program can change its outcome. But, overall, the trading strategy will behave the same, meaning that it will ride the wave while the trading window is open and only then. A kind of underlying guarantee that you will survive major drawdowns simply because you will be in cash, or cash equivalents, when they happen, as they will.

So, it is not lunacy to exploit with conviction what your trading strategy is definitely doing.

The strategy only rides upward slopes of variance where the majority of your bets should be positive and therefore generate the sought-after profits. It's like skimming off the top. So why not leverage? Or do some have no conviction in what they do? The business is to take the risk somebody else is not ready to take at the very moment that you perceive an expected positive window of opportunity is open. On average, what could go wrong?

So, yes, I find riding the upside of the variance wave more than a reasonable bet; it should turn out to be positive in the long run every time. And for me, not using leverage on this kind of trading strategy would be like saying that I have no conviction in it at all, and I might then use words like: it might be profitable. And might does not have the same meaning as will.

Note that this version of Peter's code generates a 22.44% CAGR for the period. That is a very good number. I do like the other metrics.

The first modification I had to do was add leverage. I like it at 85%; it leaves some room for when the limit is slightly exceeded. Now, doing this will most probably change most of the numbers on every trade. So what? When a trading strategy does thousands and thousands of trades, you don't look at one in particular; you view them as a group, and it is the final output that matters.

The output of just changing that leverage number on Peter's version of the program resulted in the following:

Peter's Program with 85% leverage

The no leverage scenario, Peter's original version gave:
total return 1365.8% Alpha 0.99 Beta 0.41 Sharpe 4.17 Max DD 36% Vol 0.25

Same program with 1.85 leverage:
total return 5860.9% Alpha 4.43 Beta 0.78 Sharpe 11.69 Max DD 48% Vol 0.39

So, yes, more volatility, higher drawdown, but also higher Sharpe, higher alpha, and higher trading activity. And still a beta below 1.00 as if underexposed to the market, taking less risk.

What should have been reasonable to expect was 1.85 * Σ(H.*ΔP), or a 2526.7% total return. Yet the use of leverage increased the performance level to 5860.9%, which is a 37% CAGR for the period — increasing the output not by 1.85x but by 4.29x. There is a reason for this: the modifications that Peter brought to this trading script opened up that door.
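A toy calculation illustrates why leverage applied to each period's return scales the final return by far more than the leverage factor: leverage multiplies every period's return before the returns compound. The return stream below is made up purely for the arithmetic; it is not the backtest's actual stream.

```python
def total_return(period_returns, leverage=1.0):
    """Compound a sequence of per-period returns at a given leverage."""
    equity = 1.0
    for r in period_returns:
        equity *= 1.0 + leverage * r
    return equity - 1.0

# ~13 years of 52 weeks at a small, constant, hypothetical weekly edge
rets = [0.004] * (52 * 13)

base = total_return(rets)        # unleveraged compounded return
lev = total_return(rets, 1.85)   # same stream at 1.85x leverage

# The leveraged result is many times (not merely 1.85x) the unleveraged one
print(base, lev, lev / base)
```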

I don't think that if a trading strategy can support leverage as this one can, that it is necessarily a bad idea to use leverage. There is no lunacy here, only conviction.

When changing program parameters, you are technically affecting the underlying trading philosophy, or maybe more appropriately, the trading strategy's behavior.

For instance, changing the profit_taking_factor from 0.01 to 0.15 has no effect whatsoever. The output is the same as the last chart in the previous post. Therefore, one should question its utility.

Changing the order of appearance of the context.safe group should be viewed as trivial. There are two tests to be made here; the first, for the order change, would show no impact, since the assets are ranked. So I took the easy solution, which was to remove IEF from the list. The reason: it has a lower average daily volume, which forces more small trades. It is as if it were more of a burden to the system. A simple test shows the impact, so here it is:

Peter's Program : 85% leverage : no IEF

SMRS : Leverage 85% : no IEF
total return 6003.7% Alpha 4.54 Beta 0.77 Sharpe 11.99 Max DD 48% Vol 0.39

That decision generated $143,000 more in profits, in a way confirming that IEF was indeed more expensive to carry due to its lower daily volume. And since IEF and TLT are the cash equivalents, there is no harm there. But for $140k, why not drop IEF? It is a move that improved the CAGR level to 37.3%.

Another way to impact the trading strategy is to allocate trades differently. A small change in how one looks at the problem can change the outcome. Variable n_30 had its denominator increased to 49e3. This increases the number of stocks to be traded once the equity exceeds about $550k, and it is what lets the number of stocks traded grow in step with equity. Technically, you are just increasing the bet size. It does not change your total equity; it only changes how much of the cash can be allocated at any one time to a trade. You still start with only 10 positions. The output of what could be considered a trivial change gives:
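That allocation rule can be sketched as follows (the function and argument names here are illustrative, not Peter's actual code; only the 49e3 stake and the 10-position floor come from the post):

```python
def n_positions(equity, stake=49e3, minimum=10):
    """Number of concurrent positions: grows with equity, with a floor of 10.

    `stake` is the per-position dollar allocation discussed in the post
    (49e3 after the change, 30e3 in Peter's original version)."""
    return max(minimum, int(equity / stake))

# With a $100k start the strategy holds the 10-stock minimum; once equity
# exceeds 11 * $49k (about $540k) the count starts scaling with equity.
print(n_positions(100e3))   # 10
print(n_positions(540e3))   # 11
print(n_positions(980e3))   # 20
```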

Peter's Program : 85% leverage : no IEF : 49k allocation

SMRS : Leverage 85% : no IEF : up to 49k bets
total return 7257.6% Alpha 5.51 Beta 0.78 Sharpe 13.58 Max DD 49% Vol 0.41
An administrative trading decision that adds $1.25M to the previous test and pushes the CAGR level over the whole trading interval to 39.3%. And we can say that we did not add much in the risk department to do this. To get more, you have to do more. You have to fight for every alpha point.

Guy, thanks for the posts. Your observations demand some thoughtful consideration that I won't have time for until next weekend, or later. I'll remark on two topics:

a) I picked the $30k max holding size in an attempt to reduce liquidity problems with lightly traded stocks. If you successfully apply leverage, then your portfolio could get so large that there won't be enough desirable $30k bets, and strategy performance will degrade with increasing portfolio size. I'm sure you'll find a balance between allowing small stocks into your universe and not starving your algorithm as the portfolio value gets large. I think this problem is commonly encountered by active fund managers: if they are successful, their portfolios increase until liquidity drives the portfolio toward the market as a whole.

b) Yes, the TLT/IEF/AGG safe set is not optimal with regard to maximizing overall return. I gave that safe set very little thought other than picking common bond funds that were available over the backtest period. Given the short rebalancing period, you could try other assets with low or negative correlation with stocks, like gold, and introduce a short-term momentum test before buying any member of a broader safe set. Safe set selection is in my parking lot for this strategy.

Peter, yes, I agree. Note that, technically, for me, you have already solved most of problem #1 in your code. As the equity grows, more and more stocks are added while staying with $49k bets. It will take some time for it to reach anything close to half of the available stock universe. But I have a solution to improve on that too. The cash equivalents are just sideline measures, protection against bigger drawdowns.
And they do provide sufficient protection to consider leveraging.

Anthony, what can I say. We've had this discussion a few years back. But here it goes again, so others can see the difference in viewpoints. You are playing what I call a linear game in a CAGR game. Your method of play will ensure that your performance level degrades as time goes by. To me, it is very simple. I look at the equation Σ(H.*ΔP), and it says that whatever I do trading, it will ultimately tend to market averages if I play for a long enough period of time. This payoff matrix summarizes all the trading activity done using a strategy H applied over a price difference matrix (ΔP = p(out) - p(in)). It gives the big number on the charts: total profits as a percentage of initial capital.

Now, you play a game that translates to Σ(H.*ΔP). You could play a game where your bet size increases, but no. And therefore, all you will get is Σ(H.*ΔP) and what it implies. So there is no surprise that you find it hard to outperform the index; you don't even play, or use trading methods, that could help you outperform over the long term. Even playing a linear game you could be entitled to Σ(H(1+at).*ΔP), where you reinvest generated profits and end up with a higher long-term return, but again, no.

When I design trading strategies, I'm always looking for ways to reach Σ(H(1+g)^t.*ΔP). And that is a CAGR game. As long as you stay in the linear realm, I can accept everything you say, as long as it is within the confines of your present understanding of the game. Except I find that at times you are not that consistent with the mathematics of your game, as if your viewpoint needs to make room for the kitchen sink. Everything you see in all the tests I've already presented is an attempt at reaching a higher g, as in Σ(H(1+g)^t.*ΔP), not at going for Σ(H(1+at).*ΔP) or Σ(H.*ΔP). But until you can see the difference in the approach taken, you will keep shooting yourself in the foot.
I know those are harsh words, so please don't be offended by my views, as I'm not by yours. I've taken a program YOU wrote, one you graciously made public, in the hope that someone would find a helpful solution to help you achieve more in CAGR terms. Peter made some modifications to your program that enabled CAGR thinking. It comes naturally to me. It would have taken me weeks to achieve the level of proficiency needed to write Peter's program, since I'm just getting reacquainted with Q. But I do see what his code can do.

Using Peter's version of your program, I can do everything that was presented. I have not changed a single line of code. All that was done was changing parameter assumptions, and those have been given as the tests progressed. Each chart adds more CAGR value, not by restructuring the program, but by looking at the existing code structure and finding the points of inconsistency with the code provided by Peter. You could do more simply because the code allows it. It is playing my kind of game, a CAGR one. And I did set the decision surrogate points to do what I have in mind.

Anthony, I would simply say: geez, it is your program, not one that I don't want to share as in the last time we had this conversation, but your program slightly modified by Peter. But what mods! Essentially, two lines of code that open up a world of possibilities. Sorry you don't see it. To me, it is so simple. It is all in the expression Σ(H(1+g)^t.*ΔP). You think what you design is a limit. I agree. The code Peter added, however, opens the door to my kind of game and its possibilities. So, if you don't mind, I have other numbers to change in that program to push it to even higher performance levels.

I'm a bit suspicious of the market cap factor. If all I change is the universe to be the top 3000 by AverageDollarVolume, the performance is pretty horrible: 134% total return, less than SPY. That doesn't seem right to me.
Is there that much magic in the largest companies versus the most traded companies? The only change I made from Peter's algorithm is as follows:

    #mkt_screen = market_cap()
    #stocks = mkt_screen.top(3000)
    adv = AverageDollarVolume(window_length=252)
    stocks = adv.top(3000)

Anthony, you designed a trading strategy that is made to be out of the market when, in the short term, average prices decline a little. A simple moving average crossover of an index is sufficient to put you back in cash or cash equivalents. Why the fear of a 50% drop in the market if you will be in cash if it ever happens? When you speak of a 50% market drop over the next week or month, can I say I see it as somewhat of an exaggeration, a black swan syndrome? On this, I'll sell you a very massive put option on that 50% drop any day of the week. And your option against such a black swan event is just that: buy puts as protection, should you think it is worth it.

Furthermore, you start with 10 positions in the market. To have your 50% drop would require that all 10 among the highest market cap stocks fall by 50%, or that they drop by 50% on average. Again, the probabilities of that are very slim. You will be in cash long before it happens. Look at how your strategy behaved during the financial crisis. The strategy allows you to be in the market only on an upswing, only in the highest market cap stocks that are trending the most to the upside. It's like riding the upswing of a sine function, taking the trading window only when the wave is going up at each cycle; you are in cash at all other times. To me, it is as if you don't understand the program you wrote, and even less the real power behind Peter's modifications.

@ Anthony -- Do you think the effect is real? Or is there perhaps a hidden look-ahead bias in the existing market cap factor, e.g. are we somehow buying companies in the backtest that become large companies?
I know there is the warning about using that factor in live trading, and I think there has been discussion about that error before, but I haven't kept up with that discussion.

To get more, you have to do more. The trading strategy operates on the idea that one can cut out a trading window of variable size and undetermined timing or variance where it is estimated, or hoped, that in general prices are rising. It tackles only this upside; when it sees a decline in general prices, it runs away and hides. It switches to the sidelines, liquidates its stock inventory, buys bonds or equivalents, and waits for the next window of opportunity to be declared.

The ability to expand this trading window, or to select as much as possible of these upswings, can be another way of improving the performance level. Things like opening the trading window earlier to profit more from the upcoming upswing, or closing it later to avoid some of the whipsaw switching, could provide more gains. These two end limits can be changed with ease since they depend on the lookback periods attributed to canary.fast (-15) and canary.slow (80). Changing to canary.fast (-13) and canary.slow (100) could be considered a trivial modification. It widens the trading window at both end points: you react slightly earlier (6.5 days lag) and tolerate a bit more of a retracement. Such a modification resulted in the following:

Peter's Program : 85% leverage : no IEF : 49k allocation : windows -13 and 100
http://alphapowertrading.com/images/divers/SMRS_Peter_orig_wk_Lev_185_no_IEF_49stake_w13_100.png

SMRS : Trade windows end points: -13 and 100
total return 8184.9% Alpha 6.23 Beta 0.76 Sharpe 15.73 Max DD 46% Vol 0.40

A move that added $887k to the bottom line and increased the CAGR level to 40.6%.

None of these changes alter the trading program itself. The code is the same, but it surely behaves differently. The numbers selected are not the outcome of the search for optimal figures, but simply from the understanding of what the program does. It's not even an attempt to search for its limits. All the tests presented to date were done once and only once. And I knew before the test that their outcome would be positive. Estimates could have been done on a napkin.

All these tests end November 30, 2015. So one can have the equivalent of a 9-month walk-forward test on any of them simply by changing the end date to the last trading day. It will all be data that these strategies have never seen, with no way of knowing what they will be fed. It is an easy way to show that these modifications won't break down during those added months.

Using shorter windows reduces trading opportunities and adds frictional costs, since you will be liquidating the portfolio and going to cash equivalents more often, often for no good reason at all, just because a little dip in price frightened you.

Peter's Program : 85% leverage : no IEF : 49k allocation : windows -13 and 60

SMRS : Trade windows end points: -13 and 60
total return 6431.6% Alpha 4.87 Beta 0.78 Sharpe 16.09 Max DD 52% Vol 0.40

Still a CAGR of 38%. Same volatility, a bit more drawdown, and a lower alpha. But mostly a cost of $1.7M for thinking that a shorter trading window would make you more sensitive to price changes. It is a costly option to pay for one's sensibility. You would have done better using a longer open window of opportunity.

Anthony, do you think I am not aware of all the problems associated with backtesting? You see, you accept your program as doing what it does. You understand it simply because you programmed it to do what you want it to do. The program does not know better: it simply executes, if there are no software bugs. As for logical bugs, the program will execute them, no problem. As for misinterpretations of what you think it should do, it will again just execute the code. No interrogation, no "let me see if I can do it better," no judgment on the inputs or outputs. It just executes the code as is.

I easily accept the output of your program, and if I used your program I would expect about the same results you do. I can read code. I can understand the tasks undertaken. And in a backtest, the program will leave traces of what it did. In a trading program, the traces can be summarized in a single expression, total return, or in a big file with all the trades executed by the program. And this will reveal how it managed the portfolio it was given. Its goal was to achieve positive returns if it could at all: Σ(H.*ΔP) > 0.

I had no problem accepting your program for what it is. Geez, it is a program, code, a series of trading procedures. And when run, the program will do its job. Whether we like the output or not, that was it: it was the program's responsibility to generate Σ(H.*ΔP) > 0, and it was the programmer who had to provide the trading procedures, and he should stand behind his code. We can complain, but for what? We can rerun the program, but it will give back the same answer. It's just a toaster.
Now, when somebody else changes part of your program, it becomes a different tool than the one you designed. And let me tell you that the modifications brought to your program by Peter make it a totally different animal than the one you designed. I can easily see why. To me, it's like having been served a solution on a silver platter. It would have taken me weeks to get there, and here, out of the blue, it is offered graciously. And now I should not use what it is capable of doing? I should restrain its abilities? Well, not for me, thank you. You don't like using leverage: the answer is simple, don't use any. You don't like to trade more than 10 stocks: go ahead, I have no problem with that. You want to play with $10k: go ahead as well. You want to play linear: I have no problem with that either.

But on my side, it is only natural to use Peter's version of the program, for the simple reason that it does what I want it to do. I didn't even add a single line of code, just changed initial parameter values. So yes, I accept Peter's modifications. Yes, it makes for a great program, more capable than your version by a wide, very wide margin, to the point that Peter's version is a different program in its own right, since I'm making it do things that you cannot see in your own program.

Again, my thanks goes to Peter for saving me all that time and effort. To say it was delivered in such a neat and stylish package.

I think this scheme works by selecting good companies rather than by market timing. Many of the selection factors are long term, six months to a year. Good companies have good year-to-year performance and the algorithm selects them, so it works. When I think of momentum I think of something more timing related: give the price a "push" and it will coast past its true value and oscillate around it (an underdamped second-order response, in physics terms).
The latter should work for any stock, but that does not seem to be the case. I have not seen mean reversion, which also works on this concept, work well either.

What concerns me is that momentum (or trend following) seems to have stopped working over the past couple of years. Is this just bad luck, or has something fundamental changed in the market so that it will never work again?

I once used my margin to buy a new house. Then paid it back when I sold the old one. That's as far as I will go with leverage.

Anthony, and so it ends the same way as a few years back. Why learn something different when it is not your way?

You act as if whatever anybody else does is wrong. Or that they are on a crusade or something. That they most certainly have not done their homework, or don't understand how to calculate even the outcome of a trade.

You can't see what is in front of your eyes, can you? Unfortunately, I can't make you see what you don't want to see. And I am not in the education business.

You have Peter's code. It does exactly what it is programmed to do, nothing more. There is no magic, no mathematical twist that nobody could understand. It does not rely on any esoteric schemes. The code structure is very similar to your own program, after all, it was built over it. And yet, you don't seem to understand what it really does or can do.

For instance, I find the trading strategy kind of wasteful, almost to a fault. But I have not analyzed the total output; it makes so many trades. As time advances, it will make more and more trades based on its trading procedures. It gets to a point where it makes so many trades that it is hard to know what is really happening. The frictional costs are high, and yet it still makes money. Here is a snapshot of the transaction details of my last test:

It might have started with $100k, but it does not stay at that level. In that chart, you have the last 17 trading days out of 3,366: how much was bought, how much was sold, and the number of trades for each day. You have days with over 20,000 trades, buying and selling millions of dollars' worth of shares.

Here is my take: the trading strategy makes its money the old-fashioned way, on average a few bucks at a time, in a sea of variance, profiting from the general upward drift while a trading window is open. This appears sufficient to make a profit over extended periods of time (12.9 years in this case) due to its increasing bet size. At least, that is what the tests have shown. That you would not like your trading strategy to do such things is your choice, and understandable. I respect that; to each his own. However, I do like the output of the program, even if I might find it wasteful of resources, which, by the way, is also a problem your own strategy suffers from. For me, it only represents an area where the program could be improved. However, it will take me more time to get to that level of coding. All I can say is: what has been provided is the output of that program with different settings for some of its parameters. Not even a single line of code logic was changed, and look at the wide range of performance levels. I simply want to know what is under the hood. I find it a fascinating program.

Changing nothing but the starting date, the amount, and REMOVING the commission and slippage......
"""
Adapted from "A simple momentum rotation system for stocks"
https://www.quantopian.com/posts/a-simple-momentum-rotation-system-for-stocks

PF 2016_0807: The unmodified performance of this algorithm is remarkable from 1/4/03 to 11/30/15
Total Returns 1287%    Benchmark 192.5%    Max Drawdown 50.4%
Alpha 0.87    Beta 0.85    Sharpe 3.25    Volatility 0.30

Method outline is:
Buy and hold best 10 of 3000 stocks each month
During the month sell big losers (stop loss) and big winners (profit taking)
Selection considers
- Four momentum factors over 20, 60, 125 and 252 days
- Efficiency threshold = 0.031 based on 252 day return vs sum of daily High minus Low

**PF 2016_0807: Comments relative to the monthly version of the strategy
PF 2016_0807: The original post successfully outlines a method of use to the community.
There was no attempt to make this a production-ready algorithm, so some problems and uncertain features exist.
A few of these impacted my ability to understand what was happening, so I tried to resolve them (perhaps only to myself):
1) although this is nominally a long-only algo, the daily rebalance can result in shorting
2) liquidity problems even when starting with only $100k. Leverage is roughly 35% to 110%. The number of assets is roughly 4 to 15 vs the defined top 10.
3) the algorithm lacks any logic to exit stocks during prolonged drawdown periods. Real investors would have exited a few times from 2003 through 2015.
4) the utility of the efficiency factor test is not clear. The threshold of 0.031 appears to be oddly low as efficiency could easily be much larger.

PF 2016_0807: Below is a summary of what I did to resolve/improve issues 1-3 and my finding that the utility of the efficiency function can be had more simply by requiring the 252-day return (factor_4) to be > 0.0.
I tried to leave the rest of the algorithm as is. Performance is evaluated over the same 1/4/2003 to 11/30/2015 period as the original posting. Another tester might investigate other interesting features of Garner's algorithm (ranking periods, profit taking logic, ...)

PF 2016_0807: Shorting issue (resolved in one change)
I modified the code to issue sell orders for obsolete positions before issuing buy orders for new positions.
This has resolved the problem and improved overall return as the stocks being shorted were probably not good shorting candidates.
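The ordering fix can be sketched as follows (a minimal illustration with hypothetical helper names; the actual algorithm issues Quantopian order_target calls rather than returning a plan):

```python
def rebalance_orders(current, target):
    """Build an order plan that issues sells for obsolete positions
    before any buys, so freed cash covers the new purchases and
    unintended shorting is avoided.

    `current` and `target` map symbol -> desired dollar exposure."""
    sells = [(sym, 0) for sym in current if sym not in target]
    buys = [(sym, amt) for sym, amt in target.items()]
    return sells + buys  # sells first, then buys

plan = rebalance_orders({"AAA": 1, "BBB": 1}, {"BBB": 30e3, "CCC": 30e3})
print(plan)  # [('AAA', 0), ('BBB', 30000.0), ('CCC', 30000.0)]
```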

PF 2016_0807: Liquidity problems (resolved in three changes)
Leverage often exceeds 1.0 due to an inability to sell obsolete positions in a single trading session.
Leverage 1: Add a function to daily rebalance to continue sales of these positions
This did drive the leverage down to 1.0 quickly in all but a few cases.
As expected the total return also dropped as the average leverage was reduced and more trade fees were paid.
Total Returns 1163%    Benchmark 192.5%    Max Drawdown    52.1%
Alpha    0.78    Beta    0.88    Sharpe    2.85    Volatility    0.31

PF 2016_0807: Leverage 2: Add Average Daily Dollar Volume (ADDV) as a filter factor.
Consider only stocks with ADDV > $500k over the past 20 days.
This nearly eliminated the need to sell obsolete stocks over multiple days until the portfolio size got much bigger (~$500k).
This did improve overall returns
Total Returns 1314%    Benchmark 192.5%    Max Drawdown    48.1%
Alpha    0.88    Beta    0.89    Sharpe    3.17    Volatility    0.31
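The ADDV screen can be sketched in plain numpy (the Quantopian version uses the built-in AverageDollarVolume pipeline factor; this standalone version just shows the idea):

```python
import numpy as np

def passes_addv(prices, volumes, window=20, min_addv=500e3):
    """Average Daily Dollar Volume screen over the last `window` days."""
    dollar_volume = np.asarray(prices[-window:]) * np.asarray(volumes[-window:])
    return dollar_volume.mean() >= min_addv

# A $10 stock trading 60k shares/day averages $600k ADDV and passes;
# at 40k shares/day it averages $400k and is screened out.
print(passes_addv([10.0] * 20, [60_000] * 20))  # True
print(passes_addv([10.0] * 20, [40_000] * 20))  # False
```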

PF 2016_0807: Leverage 3: Allow the number of equities to increase with portfolio value
Try context.holdings = max(10, int( portfolio_value/30e3 ))
As expected this reduced volatility. It also had some benefit to overall return
Total Returns 1356%    Benchmark 192.5%    Max Drawdown    48.5%
Alpha    0.91    Beta    0.91    Sharpe    3.72    Volatility    0.28

PF 2016_0807: Drawdown protection (improved to acceptable level)
Add a simple drawdown protection based on simple moving averages of SPY
If SPY_SMA_fast < SPY_SMA_slow, then go to cash; else use the algorithm
Fast period should be on the order of the shortest momentum filter (20 days)
Since an SMA filter is slower than an EMA, a period of less than 20 days is desired.
Slow period should be several multiples of the fast period, but not slower than the overall algo.
The geometric average of the four periods (20,60,125,252) is 78 days
A 15/80 day test provided good drawdown reduction (26% vs 48%) with about 10% loss in total return
15/80 Cash  Total return 1204%    Alpha 0.85    Sharpe 4.00    Max DD 26%
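A minimal sketch of the 15/80 regime test described above, assuming a plain array of daily proxy closes rather than the Quantopian history API:

```python
import numpy as np

def in_stocks(proxy_closes, fast=15, slow=80):
    """Risk-on test: fast SMA of the proxy index above its slow SMA.

    `proxy_closes` would be SPY (or VTI) daily closes with at least
    `slow` observations."""
    closes = np.asarray(proxy_closes, dtype=float)
    sma_fast = closes[-fast:].mean()
    sma_slow = closes[-slow:].mean()
    return sma_fast > sma_slow

# A steadily rising series keeps the fast average above the slow one.
rising = np.linspace(100, 120, 80)
falling = rising[::-1]
print(in_stocks(rising))   # True  -> hold stocks
print(in_stocks(falling))  # False -> go to cash/bonds
```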

PF 2016_0807: Most asset allocation models would exit to bonds vs cash, so that was tried as well
Bond set = [TLT, IEF, AGG]
15/80 Bonds   Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
This is a nice result. A somewhat better result might be had by allowing rotation between stocks, bonds, cash, or some combination of stocks/bonds, but that is beyond my current purpose.

PF 2016_0807: What is the effect of the ADDV limit?
ADDV limit: $30k per holding and $100k initial investment.
Exiting to bonds when indicated by 15/80 SMA test
$0.2M:  Total return 1810%    Alpha 1.34    Sharpe 6.15    Max DD 20%
$0.5M:  Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
$1.5M:  Total return 1546%    Alpha 1.13    Sharpe 5.00    Max DD 20%

PF 2016_0807: What is the effect of the efficiency threshold? I tried several values as shown below.
Any limit > 0.0 has a good result until some point above 0.5.
Garner's 0.031 recommendation for his top-10 algorithm looks good.
My finding is for a variable and larger set of equities (10 to 60 in any trial).
PF 2016_0807: Intermediate is the return reported for the week of 1/3/2010 (near the midpoint)
Limit 0.0      total return 1815%    intermediate 848%    Sharpe 6.15
Limit 0.031    total return 1790%    intermediate 836%    Sharpe 6.07
Limit 0.1      total return 1786%    intermediate 818%    Sharpe 6.05
Limit 0.2      total return 1784%    intermediate 813%    Sharpe 6.03
Limit 0.4      total return 1799%    intermediate 791%    Sharpe 6.09
Limit 0.5      total return 1764%    intermediate 809%    Sharpe 5.97
Limit 0.7      total return 1550%    intermediate 739%    Sharpe 5.20
==> might as well use a limit of 0.0
==> This is equivalent to stating factor_4 > 1.0, which is easier to implement.

PF 2016_0809: Thomas Chang published a more compact implementation of the four-factor ranking.
I'll probably use this in a future version of this strategy.
PF 2016_0809: However, problems remain.
Most notably there is a very large sensitivity to starting date.
Garner made several posts showing this wildly variable performance.
Over the span of 1/4/2003 to 11/30/2015 the total return can be as little as 200% with no improvement in volatility or drawdown vs SP500 buy-and-hold.
Starting date sensitivity is a common problem in asset rotation strategies, but this one is particularly sensitive.

**PF 2016_0809: End of comments relative to the monthly version of the strategy

**PF 2016_0814: Start of comments relative to the weekly version of the strategy
PF 2016_0814: Potential remedies for rotation method and timing
a) rebalance more frequently, perhaps weekly
b) initiate multiple overlapping positions (invest weekly and hold monthly)
c) consider using a small cap proxy for the entry/exit test.
PF 2016_0814: Potential remedies for asset selection
a) investigate whether some very simple fundamentals screening could reduce the likelihood of buying troublesome stocks
b) investigate different momentum models (slope, percent below high)

PF 2016_0814: Rebalance weekly
Below are returns as a function of days_offset
offset= 0      total return  719%    Alpha 0.48    Sharpe 3.90    Max DD 47%    Vol 0.28
offset= 1      total return  317%    Alpha 0.17    Sharpe 0.73    Max DD 52%    Vol 0.31
offset= 2      total return 1142%    Alpha 0.48    Sharpe 3.14    Max DD 29%    Vol 0.28
offset= 3      total return 1308%    Alpha 0.94    Sharpe 3.72    Max DD 29%    Vol 0.27
offset= 4      total return 1762%    Alpha 1.29    Sharpe 5.35    Max DD 33%    Vol 0.25

PF 2016_0814: Rebalance weekly and check that free cash flow is positive
Rationale: a quick look at stocks selected during periods of poor algorithm performance showed poor fundamentals.
Positive FCF might be one of the simplest tests for "minimally acceptable" fundamentals.
See returns as a function of days_offset
offset= 0      total return 1024%    Alpha 0.79    Sharpe 3.29    Max DD 44%    Vol 0.25
offset= 1      total return 1201%    Alpha 0.86    Sharpe 3.70    Max DD 34%    Vol 0.25
offset= 2      total return 1912%    Alpha 1.41    Sharpe 6.16    Max DD 26%    Vol 0.24
offset= 3      total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%    Vol 0.24
offset= 4      total return 1533%    Alpha 1.12    Sharpe 5.03    Max DD 28%    Vol 0.23
==> This is more consistent with regard to return, and volatility is improved somewhat, but max DD is still too high.
PF 2016_0814: Effect of proxy choice
Rationale: the strategy considers the top 3000 stocks, so a broader proxy should be used.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days)
Here are results for some broad market candidates:
SPY (SP500)           total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%    Vol 0.24
IWV (Russell 3000)    total return 1320%    Alpha 0.95    Sharpe 4.13    Max DD 30%    Vol 0.24
VTI (all cap)         total return 1312%    Alpha 0.95    Sharpe 4.10    Max DD 30%    Vol 0.24
Unfortunately I can't find an equal-weighted fund with data back to 2003.
==> As expected, a broad index (IWV or VTI) may be better

PF 2016_0814: Revisiting liquidity
I'm still encountering some liquidity problems. Several times per year it will take several days to sell a stock position.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days, no filtering for price_vs_max)
Here are results for some pairings of ADDV periods and values
20d/500k    total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%    Vol 0.24
60d/500k    total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%    Vol 0.24
==> use the 60 day test

PF 2016_0814: Effect of limiting performance vs recent maximum
Rationale: some stocks may experience a very large recent spike in price, then enter a period of decline.
Although in decline, the large price jump keeps the stock in our selection set.
Implement a simple filter:
max_N = max close in past N days
price_vs_max = close[0]/max_N
if price_vs_max > threshold then stock is OK to use
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days)
Here are results for some pairings of N and threshold
20d/0.0     total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%    Vol 0.24
20d/0.85    total return 1116%    Alpha 0.80    Sharpe 3.48    Max DD 26%    Vol 0.24
60d/0.7     total return 1182%    Alpha 0.85    Sharpe 3.71    Max DD 28%    Vol 0.24
60d/0.85    total return 1147%    Alpha 0.82    Sharpe 3.59    Max DD 28%    Vol 0.24
==> This seems unlikely to be a beneficial test

PF 2016_0814: Investors often chase the shiny object.
Define augmented momentum to provide a bonus for the best single day in the period
best = np.nanmax(np.diff(close,axis=0),axis=0)
out[:] = (close[-1]/close[0]) + (best/close[0])
Augmented momentum for Factor_1, simple_momentum for Factor_2, 3, 4
offset= 0      total return 1102%    Alpha 0.78    Sharpe 3.30    Max DD 47%    Vol 0.25
offset= 1      total return 1167%    Alpha 0.83    Sharpe 3.55    Max DD 36%    Vol 0.25
offset= 2      total return 2565%    Alpha 1.92    Sharpe 8.42    Max DD 25%    Vol 0.23
offset= 3      total return 1247%    Alpha 0.90    Sharpe 3.87    Max DD 28%    Vol 0.24
offset= 4      total return 1481%    Alpha 1.08    Sharpe 4.65    Max DD 31%    Vol 0.24
Augmented momentum for Factor_1,2,3,4
offset= 0      total return 1392%    Alpha 1.01    Sharpe 4.27    Max DD 44%    Vol 0.25
offset= 2      total return 2756%    Alpha 2.07    Sharpe 9.13    Max DD 29%    Vol 0.23
offset= 3      total return 1702%    Alpha 1.25    Sharpe 5.50    Max DD 28%    Vol 0.24
==> This is interesting, but using it feels like data fitting, so I won't

PF 2016_0814: Can we improve max drawdown by adjusting the stop loss parameter?
Garner's original strategy used a 75% stop loss limit.
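The augmented-momentum snippet above can be wrapped as a self-contained function for reference (a sketch; in the algorithm the two lines run inside a pipeline CustomFactor where `close` is a days-by-stocks window):

```python
import numpy as np

def augmented_momentum(close):
    """Simple momentum (last close over first close) plus a bonus for the
    best single-day gain in the window, vectorized over columns
    (one column per stock), mirroring the snippet in the notes."""
    close = np.asarray(close, dtype=float)
    best = np.nanmax(np.diff(close, axis=0), axis=0)  # best 1-day move
    return close[-1] / close[0] + best / close[0]

# Two stocks over 4 days: a steady riser vs one with a big single-day jump.
close = np.array([[100.0, 100.0],
                  [101.0, 100.0],
                  [102.0, 110.0],
                  [103.0, 110.0]])
print(augmented_momentum(close))  # approximately [1.04, 1.20]
```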
This is probably a good value for monthly rebalancing, but a tighter limit might make sense for weekly rebalancing.
Check this vs a middling scenario (weekly with positive FCF, offset = 0 days, no filtering for price_vs_max, simple_momentum model, 60 day ADDV > $500k)
Here are results for various stop loss limits
0%   total return 1263%    Alpha 0.91    Sharpe 3.93    Max DD 31%    Vol 0.24
60%   total return 1231%    Alpha 0.88    Sharpe 3.84    Max DD 31%    Vol 0.24
75%   total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%    Vol 0.24
80%   total return 1095%    Alpha 0.78    Sharpe 3.20    Max DD 30%    Vol 0.24
85%   total return 1036%    Alpha 0.73    Sharpe 3.32    Max DD 30%    Vol 0.24
90%   total return  787%    Alpha 0.55    Sharpe 2.56    Max DD 29%    Vol 0.23
==> This result surprises me. It must be that a significant fraction of the stocks that lose 25% during the week later recover some of this loss.
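The stop loss being varied here is a ratchet, as in the daily_rebalance code: the stored stop price only ever rises with price, and a sale triggers when price drops below it. A minimal standalone sketch of that logic (hypothetical prices, 75% limit as in Garner's default):

```python
def update_stop(stop_price, price, stop_pct):
    # ratchet the stop up with price; return (new_stop, triggered)
    new_stop = max(stop_price, stop_pct * price)
    return new_stop, price < new_stop

# walk one position through a few days with a 75% stop
stop, hits = 0.0, []
for price in [100.0, 110.0, 90.0, 80.0]:
    stop, triggered = update_stop(stop, price, 0.75)
    hits.append(triggered)
```

After the 110 print the stop sticks at 82.50, so the drop to 80 triggers a sale even though 80 is above 75% of the entry price.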

PF 2016_0814: Weekly rebalance baseline
Let's put together some of the apparently better ideas
1. Rebalance weekly vs monthly
2. Decide whether to be in stocks or bonds (safe) based on fast vs slow SMA of VTI (All cap index)
3. Only consider stocks that
a) are in the top 3000 by market cap
b) have net gain over the past 252 days
c) have positive cash flow
d) have an average daily dollar volume of at least $500k over the past 60 days
4. Select the top N of these stocks based on combined ranking over 20, 60, 125, 252 days
5. Set the value N to be portfolio value divided by $30k with a minimum of 10 stocks
6. Define a safe set of bonds to hold when not in stocks
7. Disable Garner's stop loss and profit taking as these don't benefit the weekly strategy

offset= 0      total return 1585%    Alpha 1.16    Sharpe 5.45    Max DD 27%  Vol 0.22
offset= 1      total return  914%    Alpha 0.64    Sharpe 2.92    Max DD 36%  Vol 0.23
offset= 2      total return 1587%    Alpha 1.16    Sharpe 5.38    Max DD 20%  Vol 0.23
offset= 3      total return 1874%    Alpha 1.39    Sharpe 6.56    Max DD 20%  Vol 0.22
offset= 4      total return 1529%    Alpha 1.12    Sharpe 5.33    Max DD 26%  Vol 0.22
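Item 2's fast/slow SMA test can be sketched standalone (plain numpy on hypothetical proxy prices; 15-day fast vs 80-day slow as used in the code):

```python
import numpy as np

def buy_stocks_signal(prices, fast=15, slow=80):
    # True when the fast SMA of the proxy is above the slow SMA
    window = np.asarray(prices[-slow:], dtype=float)
    return window[-fast:].mean() > window.mean()

# a steadily rising proxy keeps us in stocks; a falling one sends us to the safe set
rising = np.linspace(100, 120, 80)
falling = np.linspace(120, 100, 80)
```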

**PF: End of comments relative to the weekly version of the strategy
"""
#
# import methods and data
#
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import AverageDollarVolume
import numpy as np
from collections import defaultdict

#
# define custom classes
#
class simple_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, close):
        out[:] = close[-1]/close[0]

class augmented_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, close):
        best = np.nanmax(np.diff(close,axis=0),axis=0)
        out[:] = (close[-1]/close[0]) + (best/close[0])

class price_vs_max(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 252

    def compute(self, today, assets, out, close):
        out[:] = close[-1]/np.nanmax(close, axis=0)

class market_cap(CustomFactor):
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding]
    window_length = 1

    def compute(self, today, assets, out, close, shares):
        out[:] = close[-1] * shares[-1]

class get_fcf_per_share(CustomFactor):
    inputs = [morningstar.valuation_ratios.fcf_per_share]
    window_length = 1

    def compute(self, today, assets, out, fcf_per_share):
        out[:] = fcf_per_share

def initialize(context):
    #
    # schedule methods
    #
    schedule_function(func=periodic_rebalance,
                      date_rule=date_rules.week_start(days_offset=1),
                      time_rule=time_rules.market_open(), half_days=True)
    schedule_function(func=daily_rebalance,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(hours=1))
    #
    # set portfolio parameters
    #
    set_do_not_order_list(security_lists.leveraged_etf_list)
    context.acc_leverage = 1.00
    context.min_holdings = 10
    #
    # set profit taking and stop loss parameters
    #
    context.profit_taking_factor = 0.01
    context.profit_taking_target = 10.0 #set much larger than 1.0 to disable
    context.profit_target={}
    context.profit_taken={}
    context.stop_pct = 0.0    # set to 0.0 to disable
    context.stop_price = defaultdict(lambda:0)
    #
    # Set commission model to be used
    #
    #
    # Define safe set (of bonds)
    #
    context.safe = [
        sid(23870), #IEF
        sid(23921), #TLT
        sid(25485)  #AGG
    ]
    #
    # Define proxy to be used as proxy for overall stock behavior
    # set default position to be in safe set (context.buy_stocks = False)
    #
    context.canary = sid(22739)
    context.buy_stocks = False
    context.this_periods_list = context.safe  # added guard: daily_rebalance can run before the first weekly rebalance
    #
    # Establish pipeline
    #
    pipe = Pipeline()
    attach_pipeline(pipe, 'ranked_stocks')
    #
    # Define the four momentum factors used in ranking stocks
    #
    factor1 = simple_momentum(window_length=20)
    pipe.add(factor1, 'factor_1')   # added: factor_1 is read in periodic_rebalance
    factor2 = simple_momentum(window_length=60)
    factor3 = simple_momentum(window_length=125)
    factor4 = simple_momentum(window_length=252)
    #
    # Define other factors that are used in stock screening
    #
    factor5 = get_fcf_per_share()
    factor6 = AverageDollarVolume(window_length=60)
    factor7 = price_vs_max(window_length=20)

    factor_4_filter = factor4 > 1.0   # only consider stocks with positive 1y growth
    factor_5_filter = factor5 > 0.0   # only consider stocks with positive FCF
    factor_6_filter = factor6 > 0.5e6 # only consider stocks trading >$500k per day
    factor_7_filter = factor7 > 0.85  # only consider stocks trading near their high
    #
    # Establish screen used to establish candidate stock list
    #
    mkt_screen = market_cap()
    stocks = mkt_screen.top(3000)
    total_filter = (stocks
                    & factor_4_filter
                    & factor_5_filter
                    & factor_6_filter)
    pipe.set_screen(total_filter)
    #
    # Establish ranked stock list
    #
    factor1_rank = factor1.rank(mask=total_filter, ascending=False)
    pipe.add(factor1_rank, 'f1_rank')
    factor2_rank = factor2.rank(mask=total_filter, ascending=False)
    pipe.add(factor2_rank, 'f2_rank')
    factor3_rank = factor3.rank(mask=total_filter, ascending=False)
    pipe.add(factor3_rank, 'f3_rank')
    factor4_rank = factor4.rank(mask=total_filter, ascending=False)
    pipe.add(factor4_rank, 'f4_rank')
    combo_raw = (factor1_rank+factor2_rank+factor3_rank+factor4_rank)/4
    pipe.add(combo_raw, 'combo_raw')
    pipe.add(combo_raw.rank(mask=total_filter), 'combo_rank')

def before_trading_start(context, data):
    #
    # Calculate maximum number of stocks to buy
    #
    n_30 = int(context.portfolio.portfolio_value/30e3)
    context.holdings = max(context.min_holdings, n_30)
    #
    # Screen to find the current top stocks
    #
    context.output = pipeline_output('ranked_stocks')
    ranked_stocks = context.output
    context.stock_factors = ranked_stocks.sort(['combo_rank'], ascending=True).iloc[:context.holdings]
    context.stock_list = context.stock_factors.index
    #
    # Use fast/slow SMA test of proxy to determine whether to be in stocks vs safe
    #
    Canary = data.history(context.canary, 'price', 80, '1d')
    Canary_fast = Canary[-15:].mean()
    Canary_slow = Canary.mean()
    context.buy_stocks = False
    if Canary_fast > Canary_slow:
        context.buy_stocks = True

def daily_rebalance(context, data):
    #
    # Do daily maintenance
    #    a) sell obsolete positions
    #    b) implement stop loss
    #    c) implement profit taking
    #    d) record values for backtest display
    #
    #
    # Sell any holdings that are not in context.this_periods_list
    #
    for stock in context.portfolio.positions:
        if data.can_trade(stock):
            if stock not in context.this_periods_list:
                order_target(stock, 0)
    #
    # update stop loss limits and sell any stocks that are below their limits
    #
    for stock in context.portfolio.positions:
        if data.can_trade(stock):
            price = data.current(stock, 'price')
            context.stop_price[stock] = max(context.stop_price[stock],
                                            context.stop_pct * price)
            if price < context.stop_price[stock]:
                order_target(stock, 0)
                context.stop_price[stock] = 0
                log.info("%s stop loss"%stock)
    #
    # Profit take if profit target is met
    # Skip this for safe set assets
    #
    takes = 0
    for stock in context.portfolio.positions:
        if stock not in context.safe:
            if data.can_trade(stock) and data.current(stock, 'close') > context.profit_target[stock]:
                context.profit_target[stock] = data.current(stock, 'close')*1.25
                profit_taking_amount = context.portfolio.positions[stock].amount * context.profit_taking_factor
                takes += 1
                log.info(profit_taking_amount)
                order_target(stock, profit_taking_amount)
    #
    # Record parameters
    #
    n100 = len(context.output)/100
    record(leverage=context.account.leverage,
           positions=len(context.portfolio.positions),
           t=takes,
           candidates=n100)

def periodic_rebalance(context,data):
    #
    # rebalance portfolio based on most recent context.buy_stocks signal
    #
    # rebalance portfolio in stocks
    #
    if context.buy_stocks:
        context.this_periods_list = context.stock_list
        #
        # sell any holdings not in this period's stock list
        #
        for stock in context.portfolio.positions:
            if data.can_trade(stock):
                if stock not in context.this_periods_list:
                    order_target(stock, 0)
        #
        # equally weight portfolio over assets that can trade
        # don't buy stock if its 20d momentum (Factor_1) is not positive
        # set profit_target threshold based on recent close
        #
        weight = context.acc_leverage / len(context.stock_list)
        p_tgt = context.profit_taking_target
        for stock in context.stock_list:
            if stock in security_lists.leveraged_etf_list:
                continue
            if data.can_trade(stock) and context.stock_factors.factor_1[stock] > 1:
                order_target_percent(stock, weight)
                context.profit_target[stock] = data.current(stock, 'close')*p_tgt
    #
    # otherwise put portfolio into safe set
    #
    else:
        context.this_periods_list = context.safe
        #
        # sell any holdings not in safe set
        #
        for stock in context.portfolio.positions:
            if data.can_trade(stock):
                if stock not in context.safe:
                    order_target(stock, 0)
        #
        # equally weight portfolio over safe assets that can trade
        #
        n = 0
        for stock in context.safe:
            if data.can_trade(stock):
                n += 1
        if n > 0:
            weight = 1.0/n
            for stock in context.safe:
                if data.can_trade(stock):
                    order_target_percent(stock, weight)

My attempt at plugging in a minimum variance optimization with this algo.

"""
Adapted from "A simple momentum rotation system for stocks"
https://www.quantopian.com/posts/a-simple-momentum-rotation-system-for-stocks

PF 2016_0807: The unmodified performance of this algorithm is remarkable from 1/4/03 to 11/30/15
Total Returns 1287%    Benchmark 192.5%    Max Drawdown 50.4%
Alpha 0.87    Beta 0.85    Sharpe 3.25    Volatility 0.30

Method outline is:
Buy and hold best 10 of 3000 stocks each month
During the month sell big losers (stop loss) and big winners (profit taking)
Selection considers
- Four momentum factors over 20, 60, 125 and 252 days
- Efficiency threshold = 0.031 based on 252 day return vs sum of daily High minus Low

**PF 2016_0807: Comments relative to the monthly version of the strategy

PF 2016_0807: The original post successfully outlines a method of use to the community.
There was no attempt to make this a production-ready algorithm, so some problems and uncertain features exist.
A few of these impacted my ability to understand what was happening so I tried to resolve them (perhaps only to myself):
1) although this is nominally a long-only algo the daily rebalance can result in shorting
2) liquidity problems even when starting with only $100k. Leverage is roughly 35% to 110%. The number of assets is roughly 4 to 15 vs the defined top-10.
3) the algorithm lacks any logic to exit stocks during prolonged drawdown periods. Real investors would have exited a few times from 2003 through 2015.
4) the utility of the efficiency factor test is not clear. The threshold of 0.031 appears to be oddly low as efficiency could easily be much larger.
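For reference, a sketch of the efficiency measure mentioned in 4), assuming it is the window's net price move divided by the summed daily High minus Low ranges (my reading of the description above, not a verified reproduction of Garner's code):

```python
import numpy as np

def efficiency(close, high, low):
    # net move over the window relative to the total daily ranges traversed
    return (close[-1] - close[0]) / np.sum(high - low)

# a stock that grinds up 5 points while ranging 2 points per day
close = np.array([100.0, 101.0, 103.0, 105.0])
high  = close + 1.0
low   = close - 1.0
eff = efficiency(close, high, low)
```

Under this reading even a modestly trending stock scores well above 0.031, consistent with the observation that the threshold looks oddly low.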

PF 2016_0807: Below is a summary of what I did to resolve/improve issues 1-3 and my finding that the utility of the efficiency function can be had more simply by requiring the 252-day return (factor_4) to be > 0.0.
I tried to leave the rest of the algorithm as is. Performance is evaluated over the same 1/4/2003 to 11/30/2015 period as the original posting. Another tester might investigate other interesting features of Garner's algorithm (ranking periods, profit taking logic, ...)

PF 2016_0807: Shorting issue (resolved in one change)
I modified the code to issue sell orders for obsolete positions before issuing buy orders for new positions.
This has resolved the problem and improved overall return as the stocks being shorted were probably not good shorting candidates.

PF 2016_0807: Liquidity problems (resolved in three changes)
Leverage often exceeds 1.0 due to an inability to sell obsolete positions in a single trading session.
Leverage 1: Add a function to daily rebalance to continue sales of these positions
This did drive the leverage down to 1.0 quickly in all but a few cases.
As expected the total return also dropped as the average leverage was reduced and more trade fees were paid.
Total Returns 1163%    Benchmark 192.5%    Max Drawdown    52.1%
Alpha    0.78    Beta    0.88    Sharpe    2.85    Volatility    0.31

PF 2016_0807: Leverage 2: Add Average Daily Dollar Volume (ADDV) as a filter factor.
Consider only stocks with ADDV > $500k over the past 20 days.
This nearly eliminated the need to sell obsolete stocks on multiple days until portfolio size grew much larger (~$500k).
This did improve overall returns
Total Returns 1314%    Benchmark 192.5%    Max Drawdown    48.1%
Alpha    0.88    Beta    0.89    Sharpe    3.17    Volatility    0.31

PF 2016_0807: Leverage 3: Allow the number of equities to increase with portfolio value
Try context.holdings = max(10, int(portfolio_value/30e3))
As expected this reduced volatility. It also had some benefit to overall return
Total Returns 1356%    Benchmark 192.5%    Max Drawdown    48.5%
Alpha    0.91    Beta    0.91    Sharpe    3.72    Volatility    0.28

PF 2016_0807: Drawdown protection (improved to acceptable level)
Add a simple drawdown protection based on simple moving averages of SPY
If SPY_SMA_fast < SPY_SMA_slow, then go to cash; else use the algorithm
Fast period should be on the order of the shortest momentum filter (20 days)
Since an SMA filter responds more slowly than an EMA, a fast period somewhat less than 20 days is desired.
Slow period should be several multiples of the fast period, but not slower than the overall algo.
The geometric average of the four periods (20,60,125,252) is 78 days
A 15/80 day test provided good drawdown reduction (26% vs 48%) with about 10% loss in total return
15/80 Cash  Total return 1204%    Alpha 0.85    Sharpe 4.00    Max DD 26%
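The 78-day figure quoted above is just the geometric mean of the four ranking periods:

```python
import numpy as np

periods = np.array([20.0, 60.0, 125.0, 252.0])
geo = np.prod(periods) ** (1.0 / len(periods))   # roughly 78 days
```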

PF 2016_0807: Most asset allocation models would exit to bonds vs cash, so that was tried as well
Bond set = [TLT, IEF, AGG]
15/80 Bonds   Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
This is a nice result. A somewhat better result might be had by allowing rotation between stocks, bonds, cash, or some combination of stocks/bonds, but that is beyond my current purpose.

PF 2016_0807: What is effect of the ADDV limit?
Vary the ADDV limit with $30k per holding and $100k initial investment.
Exiting to bonds when indicated by 15/80 SMA test
$0.2M:  Total return 1810%    Alpha 1.34    Sharpe 6.15    Max DD 20%
$0.5M:  Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
$1.5M:  Total return 1546%    Alpha 1.13    Sharpe 5.00    Max DD 20%

PF 2016_0807: What is the effect of the efficiency threshold? I tried several values as shown below.
Any limit > 0.0 has a good result until some point above 0.5.
Garner's 0.031 recommendation for his top 10 algorithm looks good.
My finding is for a variable and larger set of equities (10 to 60 in any trial).
PF 2016_0807: Intermediate is the return reported for week of 1/3/2010 (near midpoint)
Limit 0.0      total return 1815%    intermediate 848%    Sharpe 6.15
Limit 0.031    total return 1790%    intermediate 836%    Sharpe 6.07
Limit 0.1      total return 1786%    intermediate 818%    Sharpe 6.05
Limit 0.2      total return 1784%    intermediate 813%    Sharpe 6.03
Limit 0.4      total return 1799%    intermediate 791%    Sharpe 6.09
Limit 0.5      total return 1764%    intermediate 809%    Sharpe 5.97
Limit 0.7      total return 1550%    intermediate 739%    Sharpe 5.20
==> might as well use a limit of 0.0
==> This is equivalent to stating factor_4 > 1.0 which is easier to implement.

PF 2016_0809: Thomas Chang published a more compact implementation of the four factor ranking.
I'll probably use this in a future version of this strategy.

PF 2016_0809: However problems remain.
Most notably there is a very large sensitivity to starting date.
Garner made several posts showing this wildly variable performance.
Over the span of 1/4/2003 to 11/30/2015 the total return can be as little as 200% with no improvement in volatility or drawdown vs SP500 buy-and-hold.
Starting date sensitivity is a common problem in asset rotation strategies, but this one is particularly sensitive.

**PF 2016_0809: End of comments relative to the monthly version of the strategy

**PF 2016_0814: Start of comments relative to the weekly version of the strategy

PF 2016_0814: Potential remedies for rotation method and timing
a) rebalance more frequently, perhaps weekly
b) initiate multiple overlapping positions (invest weekly and hold monthly)
c) consider using a small cap proxy for the entry/exit test.
PF 2016_0814: Potential remedies for asset selection
a) investigate whether some very simple fundamentals screening could reduce the likelihood of buying troublesome stocks
b) investigate different momentum models (slope, percent below high)

PF 2016_0814: Rebalance weekly
Below are returns as a function of days_offset
offset= 0      total return  719%    Alpha 0.48    Sharpe 3.90    Max DD 47%  Vol 0.28
offset= 1      total return  317%    Alpha 0.17    Sharpe 0.73    Max DD 52%  Vol 0.31
offset= 2      total return 1142%    Alpha 0.48    Sharpe 3.14    Max DD 29%  Vol 0.28
offset= 3      total return 1308%    Alpha 0.94    Sharpe 3.72    Max DD 29%  Vol 0.27
offset= 4      total return 1762%    Alpha 1.29    Sharpe 5.35    Max DD 33%  Vol 0.25

PF 2016_0814: Rebalance weekly and check that free cash flow is positive
Rationale: a quick look at stocks selected during periods of poor algorithm performance showed poor fundamentals.
Positive FCF might be one of the simplest tests for "minimally acceptable" fundamentals.
See returns as a function of days_offset
offset= 0      total return 1024%    Alpha 0.79    Sharpe 3.29    Max DD 44%  Vol 0.25
offset= 1      total return 1201%    Alpha 0.86    Sharpe 3.70    Max DD 34%  Vol 0.25
offset= 2      total return 1912%    Alpha 1.41    Sharpe 6.16    Max DD 26%  Vol 0.24
offset= 3      total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
offset= 4      total return 1533%    Alpha 1.12    Sharpe 5.03    Max DD 28%  Vol 0.23
==> This is more consistent with regard to return and volatility is improved somewhat, but max DD is still too high.
PF 2016_0814: Effect of proxy choice
Rationale: Strategy considers top 3000 stocks, so a broader proxy should be used.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days)
Here are results for some broad market candidates:
SPY (SP500)         total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
IWV (Russell 3000)  total return 1320%    Alpha 0.95    Sharpe 4.13    Max DD 30%  Vol 0.24
VTI (all cap)       total return 1312%    Alpha 0.95    Sharpe 4.10    Max DD 30%  Vol 0.24
Unfortunately I can't find an equal weighted fund with history back to 2003.
==> As expected a broad index (IWV or VTI) may be better

PF 2016_0814: Revisiting liquidity
I'm still encountering some liquidity problems. Several times per year a position will take several days to sell.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days, no filtering for price_vs_max)
Here are results for some pairings of ADDV periods and values
20d/500k    total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
60d/500k    total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%  Vol 0.24
==> use the 60 day test

PF 2016_0814: Effect of limiting performance vs recent maximum
Rationale: Some stocks may experience a very large recent spike in price then enter a period of decline.
Although in decline, the large price jump keeps the stock in our selection set.
Implement a simple filter
max_N = max close in past N days
price_vs_max = close[-1]/max_N
if price_vs_max > threshold then stock is OK to use
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days)
Here are results for some pairings of N and threshold
20d/0.0     total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%    Vol 0.24
20d/0.85    total return 1116%    Alpha 0.80    Sharpe 3.48    Max DD 26%    Vol 0.24
60d/0.7     total return 1182%    Alpha 0.85    Sharpe 3.71    Max DD 28%    Vol 0.24
60d/0.85    total return 1147%    Alpha 0.82    Sharpe 3.59    Max DD 28%    Vol 0.24
==> This seems unlikely to be a beneficial test

PF 2016_0814: Investors often chase the shiny object.
Define augmented momentum to provide a bonus for the best single day in the period
best = np.nanmax(np.diff(close,axis=0),axis=0)
out[:] = (close[-1]/close[0]) + (best/close[0])
Augmented momentum for Factor_1, simple_momentum for Factor_2, 3, 4
offset= 0      total return 1102%    Alpha 0.78    Sharpe 3.30    Max DD 47%  Vol 0.25
offset= 1      total return 1167%    Alpha 0.83    Sharpe 3.55    Max DD 36%  Vol 0.25
offset= 2      total return 2565%    Alpha 1.92    Sharpe 8.42    Max DD 25%  Vol 0.23
offset= 3      total return 1247%    Alpha 0.90    Sharpe 3.87    Max DD 28%  Vol 0.24
offset= 4      total return 1481%    Alpha 1.08    Sharpe 4.65    Max DD 31%  Vol 0.24
Augmented momentum for Factor_1,2,3,4
offset= 0      total return 1392%    Alpha 1.01    Sharpe 4.27    Max DD 44%  Vol 0.25
offset= 2      total return 2756%    Alpha 2.07    Sharpe 9.13    Max DD 29%  Vol 0.23
offset= 3      total return 1702%    Alpha 1.25    Sharpe 5.50    Max DD 28%  Vol 0.24
==> This is interesting, but using it feels like data fitting so I won't

PF 2016_0814: Can we improve max drawdown by adjusting the stop loss parameter?
Garner's original strategy used a 75% stop loss limit.
This is probably a good value for monthly rebalance, but a tighter limit might make sense for weekly rebalancing.
Check this vs a middling scenario (weekly with positive FCF, offset = 0 days, no filtering for price_vs_max, simple_momentum model, 60 day ADDV > $500k)
Here are results for various stop loss limits
0%   total return 1263%    Alpha 0.91    Sharpe 3.93    Max DD 31%    Vol 0.24
60%   total return 1231%    Alpha 0.88    Sharpe 3.84    Max DD 31%    Vol 0.24
75%   total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%    Vol 0.24
80%   total return 1095%    Alpha 0.78    Sharpe 3.20    Max DD 30%    Vol 0.24
85%   total return 1036%    Alpha 0.73    Sharpe 3.32    Max DD 30%    Vol 0.24
90%   total return  787%    Alpha 0.55    Sharpe 2.56    Max DD 29%    Vol 0.23
==> This result surprises me. It must be that a significant fraction of the stocks that lose 25% during the week later recover some of this loss.

PF 2016_0814: Weekly rebalance baseline
Let's put together some of the apparently better ideas
1. Rebalance weekly vs monthly
2. Decide whether to be in stocks or bonds (safe) based on fast vs slow SMA of VTI (All cap index)
3. Only consider stocks that
a) are in the top 3000 by market cap
b) have net gain over the past 252 days
c) have positive cash flow
d) have an average daily dollar volume of at least $500k over the past 60 days
4. Select the top N of these stocks based on combined ranking over 20, 60, 125, 252 days
5. Set the value N to be portfolio value divided by $30k with a minimum of 10 stocks
6. Define a safe set of bonds to hold when not in stocks
7. Disable Garner's stop loss and profit taking as these don't benefit the weekly strategy

offset= 0      total return 1422%    Alpha 1.03    Sharpe 4.34    Max DD 36%  Vol 0.25
offset= 1      total return 1366%    Alpha 0.98    Sharpe 4.17    Max DD 36%  Vol 0.25
offset= 2      total return 2193%    Alpha 1.63    Sharpe 6.96    Max DD 32%  Vol 0.24
offset= 3      total return 1543%    Alpha 1.13    Sharpe 4.81    Max DD 32%  Vol 0.24
offset= 4      total return 1794%    Alpha 1.32    Sharpe 5.65    Max DD 26%  Vol 0.24
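Step 4's combined ranking can be sketched with plain numpy (hypothetical momentum scores; a double argsort turns scores into ranks with 1 = strongest):

```python
import numpy as np

# rows = assets, columns = 20/60/125/252-day momentum scores (made-up values)
scores = np.array([
    [1.20, 1.15, 1.30, 1.40],   # strong at every horizon
    [1.05, 1.00, 0.98, 1.10],
    [1.10, 1.20, 1.10, 1.25],
])

# rank each horizon so that rank 1 = highest score
ranks = np.argsort(np.argsort(-scores, axis=0), axis=0) + 1
combo = ranks.mean(axis=1)    # lower combined rank = better
order = np.argsort(combo)     # best asset first
```

The first asset wins three of the four horizons, so it tops the combined ranking even without winning them all.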

PF 2016_0814: Still have liquidity problems, especially with low share price stocks.
Try filtering those

Using the offset = 0d case above
Price > $0    total return 1422%    Alpha 1.03    Sharpe 4.34    Max DD 36%  Vol 0.25
Price > $3    total return 1343%    Alpha 0.97    Sharpe 4.10    Max DD 36%  Vol 0.25
Price > $5    total return 1069%    Alpha 0.75    Sharpe 3.22    Max DD 33%  Vol 0.25
The progression from $0 to $3 to $5 did reduce the number of partial order messages, but also degraded returns for the case of days_offset=0.

Checking the result for all five day_offset cases and Price > $3:
offset= 0      total return 1343%    Alpha 0.97    Sharpe 4.10    Max DD 36%  Vol 0.25
offset= 1      total return 1368%    Alpha 0.99    Sharpe 4.17    Max DD 36%  Vol 0.25
offset= 2      total return 2292%    Alpha 1.71    Sharpe 7.36    Max DD 32%  Vol 0.24
offset= 3      total return 1514%    Alpha 1.10    Sharpe 4.72    Max DD 32%  Vol 0.24
offset= 4      total return 1609%    Alpha 1.17    Sharpe 5.03    Max DD 28%  Vol 0.24
==> This slight overall reduction is OK, but I'll continue to investigate liquidity fixes.

**PF: End of comments relative to the weekly version of the strategy

**PF: Parking lot of things to check later. List in no particular order.
- how to reduce drawdown spans (strategy can result in ~3y periods with no net gain)
- how to avoid occasional liquidity (partial order) problems
- evaluating possibility of nonuniform weighting
- implementing overlapping holding periods (maybe order every 2 days and hold for 10)
- eliminating use of the built-in market_cap() method that is not supported in live trading
- evaluating results in a tear sheet
- evaluating results with the AlphaLens tool
- how to safely use leverage > 1.0 (see Guy Fleury posts)
- picking a better safe set (little thought went into this one)
- checking momentum of each safe asset before purchase (... in or cash for each)
- exploring alternative entry/exit logic (vs the simple fast/slow SMA)
**PF: that is the parking lot for now
"""
#
# import methods and data
#
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import AverageDollarVolume
import numpy as np
from collections import defaultdict
from cvxopt import matrix, solvers
from sklearn.covariance import OAS

#
# define custom classes
#
class simple_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, close):
        # momentum scaled by volatility (differs from the original version)
        out[:] = (close[-1] / close[0]) / np.std(close, axis=0)

class augmented_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, close):
        best = np.nanmax(np.diff(close,axis=0),axis=0)
        out[:] = (close[-1]/close[0]) + (best/close[0])

class price_vs_max(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 252

    def compute(self, today, assets, out, close):
        out[:] = close[-1]/np.nanmax(close, axis=0)

class market_cap(CustomFactor):
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding]
    window_length = 1

    def compute(self, today, assets, out, close, shares):
        out[:] = close[-1] * shares[-1]

class get_fcf_per_share(CustomFactor):
    # NOTE: despite the name, this version reads ev_to_ebitda rather than
    # fcf_per_share (differs from the original algorithm)
    inputs = [morningstar.valuation_ratios.ev_to_ebitda]
    window_length = 1

    def compute(self, today, assets, out, fcf_per_share):
        out[:] = fcf_per_share

class get_last_close(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, close):
        out[:] = close[-1]

def initialize(context):
    #
    # schedule methods
    #
    schedule_function(func=periodic_rebalance,
                      date_rule=date_rules.week_start(days_offset=1),
                      time_rule=time_rules.market_open(), half_days=True)
    schedule_function(func=daily_rebalance,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(hours=1))
    #
    # set portfolio parameters
    #
    set_do_not_order_list(security_lists.leveraged_etf_list)
    context.acc_leverage = 1.00
    context.min_holdings = 10
    #
    # set profit taking and stop loss parameters
    #
    context.profit_taking_factor = 0.01
    context.profit_taking_target = 10.0 #set much larger than 1.0 to disable
    context.profit_target={}
    context.profit_taken={}
    context.stop_pct = 0.0    # set to 0.0 to disable
    context.stop_price = defaultdict(lambda:0)
    #
    # Set commission model to be used
    #
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1.00))
    #
    # Define safe set (of bonds)
    #
    context.safe = [
        sid(23870), #IEF
        sid(23921), #TLT
        sid(25485)  #AGG
    ]
    #
    # Define proxy to be used as proxy for overall stock behavior
    # set default position to be in safe set (context.buy_stocks = False)
    #
    context.canary = sid(22739)
    context.buy_stocks = False
    context.this_periods_list = context.safe  # added guard: daily_rebalance can run before the first weekly rebalance
    #
    # Establish pipeline
    #
    pipe = Pipeline()
    attach_pipeline(pipe, 'ranked_stocks')
    #
    # Define the four momentum factors used in ranking stocks
    #
    factor1 = simple_momentum(window_length=20)
    pipe.add(factor1, 'factor_1')
    factor2 = simple_momentum(window_length=60)
    pipe.add(factor2, 'factor_2')
    factor3 = simple_momentum(window_length=125)
    pipe.add(factor3, 'factor_3')
    factor4 = simple_momentum(window_length=252)
    pipe.add(factor4, 'factor_4')
    #
    # Define other factors that may be used in stock screening
    #
    factor5 = get_fcf_per_share()
    pipe.add(factor5, 'factor_5')
    factor6 = AverageDollarVolume(window_length=60)
    pipe.add(factor6, 'factor_6')
    factor7 = get_last_close()
    pipe.add(factor7, 'factor_7')

    factor_4_filter = factor4 > 0.0   # only consider stocks with positive 1y growth
    factor_5_filter = factor5 > 0.0   # only consider stocks with positive FCF
    factor_6_filter = factor6 > 0.5e6 # only consider stocks trading >$500k per day
    #    factor_7_filter = factor7 > 3.00  # only consider stocks that close above this value
    #
    # Establish screen used to establish candidate stock list
    #
    mkt_screen = market_cap()
    stocks = mkt_screen.top(3000)
    total_filter = (stocks
                    & factor_4_filter
                    & factor_5_filter
                    & factor_6_filter)

    pipe.set_screen(total_filter)
    #
    # Establish ranked stock list
    #
    # added: these rank definitions were missing from the as-posted listing;
    # ascending=True makes a larger combo_rank mean stronger momentum, which is
    # why before_trading_start sorts combo_rank descending
    factor1_rank = factor1.rank(mask=total_filter, ascending=True)
    factor2_rank = factor2.rank(mask=total_filter, ascending=True)
    factor3_rank = factor3.rank(mask=total_filter, ascending=True)
    factor4_rank = factor4.rank(mask=total_filter, ascending=True)
    combo_raw = (factor1_rank+factor2_rank+factor3_rank+factor4_rank)/4
    pipe.add(combo_raw, 'combo_raw')
    pipe.add(combo_raw.rank(mask=total_filter), 'combo_rank')

def before_trading_start(context, data):
    #
    # Calculate maximum number of stocks to buy
    #
    n_30 = int(context.portfolio.portfolio_value/30e3)
    context.holdings = max(context.min_holdings, n_30)
    #
    # Screen to find the current top stocks
    #
    context.output = pipeline_output('ranked_stocks')
    ranked_stocks = context.output
    context.stock_factors = ranked_stocks.sort(['combo_rank'], ascending=False).iloc[:context.holdings]
    context.stock_list = context.stock_factors.index
    #
    # Use fast/slow SMA test of proxy to determine whether to be in stocks vs safe
    #
    Canary = data.history(context.canary, 'price', 80, '1d')
    Canary_fast = Canary[-15:].mean()
    Canary_slow = Canary.mean()
    context.buy_stocks = False  # added: reset daily so the signal can turn off again
    if Canary_fast > Canary_slow: context.buy_stocks = True

def daily_rebalance(context, data):
    #
    # Do daily maintenance
    #    a) sell obsolete positions
    #    b) implement stop loss
    #    c) implement profit taking
    #    d) record values for backtest display
    #
    #
    # Sell any holdings that are not in context.this_periods_list
    #
    for stock in context.portfolio.positions:
        if stock not in context.this_periods_list:
            order_target(stock, 0)
    #
    # update stop loss limits and sell any stocks that are below their limits
    #
    for stock in context.portfolio.positions:
        price = data.current(stock, 'price')
        context.stop_price[stock] = max(context.stop_price[stock],
                                        context.stop_pct * price)
        if price < context.stop_price[stock]:
            order_target(stock, 0)
            context.stop_price[stock] = 0
            log.info("%s stop loss"%stock)
    #
    # Profit take if profit target is met
    # Skip this for safe set assets
    #
    takes = 0
    for stock in context.portfolio.positions:
        if stock not in context.safe:
            if data.current(stock, 'close') > context.profit_target[stock]:
                context.profit_target[stock] = data.current(stock, 'close')*1.25
                profit_taking_amount = context.portfolio.positions[stock].amount * context.profit_taking_factor
                takes += 1
                log.info(profit_taking_amount)
                order_target(stock, profit_taking_amount)
    #
    # Record parameters
    #
    n100 = len(context.output)/100
    record(leverage=context.account.leverage,
           positions=len(context.portfolio.positions),
           t=takes,
           candidates=n100)

def periodic_rebalance(context, data):
    #
    # Rebalance portfolio based on the most recent context.buy_stocks signal
    #
    if context.buy_stocks:
        #
        # Rebalance portfolio into stocks
        #
        context.this_periods_list = context.stock_list
        #
        # Sell any holdings not in this period's stock list
        #
        for stock in context.portfolio.positions:
            if stock not in context.this_periods_list:
                order_target(stock, 0)
        #
        # Weight portfolio (minimum variance) over assets that can trade
        # Don't buy a stock if its 20d momentum (factor_1) is not positive
        # Set the profit_target threshold based on the recent close
        #
        stocks = []
        for stock in context.stock_list:
            if (stock not in security_lists.leveraged_etf_list
                    and data.can_trade(stock)
                    and context.stock_factors.factor_1[stock] > 0):
                stocks.append(stock)

        prices = data.history(stocks, "price", 90, "1d")
        returns = prices.pct_change().dropna().values
        cov = OAS().fit(returns).covariance_
        weight = getweights(cov)
        p_tgt = context.profit_taking_target
        for i, stock in enumerate(stocks):
            order_target_percent(stock, weight[i])
            context.profit_target[stock] = data.current(stock, 'close') * p_tgt
    #
    # Otherwise put the portfolio into the safe set
    #
    else:
        context.this_periods_list = context.safe
        #
        # Sell any holdings not in the safe set
        #
        for stock in context.portfolio.positions:
            if stock not in context.safe:
                order_target(stock, 0)
        #
        # Equally weight the portfolio over safe assets that can trade
        #
        n = 0
        for stock in context.safe:
            if data.can_trade(stock):
                n += 1
        if n > 0:
            weight = 1.0 / n
            for stock in context.safe:
                if data.can_trade(stock):
                    order_target_percent(stock, weight)

def getweights(cov):
    # Long-only, fully invested minimum-variance weights via quadratic
    # programming (requires numpy as np and cvxopt's matrix and solvers)
    (n, n) = np.shape(cov)
    P = matrix(cov)                      # objective: minimize w' P w
    q = matrix([0.] * n, (n, 1))
    G = matrix(-np.eye(n))               # -w <= 0, i.e. long only
    h = matrix([0.] * n, (n, 1))
    A = matrix([1.] * n, (1, n))         # weights sum to 1
    b = matrix([1.], (1, 1))
    res = solvers.qp(P=P, q=q, G=G, h=h, A=A, b=b)
    return res['x']
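As a cross-check on `getweights`: whenever the long-only constraint is not binding (for instance, with any diagonal covariance matrix), the QP solution coincides with the closed-form fully invested minimum-variance weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A numpy-only sketch, independent of cvxopt; the toy covariance here is illustrative:

```python
import numpy as np

def min_var_weights(cov):
    """Closed-form minimum-variance weights (sum to 1, no sign constraint).

    Matches the getweights() QP above whenever the long-only constraint
    is not binding, e.g. for any diagonal covariance matrix.
    """
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / w.sum()

# Toy 3-asset diagonal covariance: the least volatile asset gets the most weight.
cov = np.diag([0.04, 0.09, 0.16])
w = min_var_weights(cov)   # approximately [0.590, 0.262, 0.148]
```

The QP form is still needed in the algorithm itself, because with correlated assets the unconstrained closed-form solution can go short.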

Anthony, you are making my point. I don't think one can win with your version of the program. Also, while you look at shorter time intervals, my interest is in longer ones, to see whether the program can stand the test of time.

Here is the story. When I came back to Q, I was looking for tools to help me answer the question posed by the following graph:

It says that wherever you are in time, stock performance in terms of CAGR will be fanned out as in the above log chart. Now, between the two extremes, put in 3,000 such lines to represent your stock universe. The end distribution will be roughly bell-shaped, with the mean near or below the 10% CAGR line (the secular trend), but most assuredly positive. Going forward, you have absolutely nothing that can tell you the end ranking of these lines. Meaning you still don't know the future, but it will be shaped as in the chart.

So, my first step was to get reacquainted with the program syntax, structures and functions. The easy way was to jump right in. At the time, your trading strategy happened to be at the top of the list in the forum. I was looking for something that had a program framework: some kind of stock selection, use of functions, buy and sell orders, and some kind of excuse to get in and out of positions. How lucky can you get? The first program I opened, and voilà. It had all those things; it even had an efficiency rating (e_r) that could be used to select only stocks that lived above the red dashed line in my chart.

If you play a long-only strategy, it is much better to pick stocks going up than going down.

So, I estimated that it was a good place to start. After a few tests, I knew the program had too many weaknesses to ever be used as is. Its usefulness was also limited as to its performance expectancy. I certainly wanted more than that.

But, it was a code example. I could relearn how to program in Q, and I had a framework in hand. I could start to structure other programs on the same logical template. And all I saw in the strategy was this template, a relearning tool.

However, things changed when Peter made his modifications to your program, totally changing the orientation one could give it. All the tests I've provided based on Peter's version have been done without changing a single line of code. That does put emphasis on the initial values of the parameters, having shown such a wide range of outputs following minimal changes to the program: a number here or there.

First, what Peter showed was that the efficiency ratio (e_r) was worth, let's say it, zero.

The only reason that could have saved your program had no impact whatsoever on the outcome. Next, for your “protection” you had put in stop losses. But there again, Peter demonstrated that they had no value, and that if you did not use any, you would get about the same results. So, another procedure in your program to throw away. What is left, and where I also agree with Peter's conclusion, is the simple test: close[-1] - close[0] > 0. Meaning that if the last price is above the zero line in the above chart, you have a stock that showed some positive drift. Now, I cannot say that this is some kind of innovative thinking. It has been around for I don't know how long: the scholars of the sixties, or even before that, Bachelier in 1900, could have said that a stock having gone up the year before was no guarantee that it would go up the next.
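For what it's worth, the surviving selection test, positive drift over the lookback, is one line on its own. A toy list-based sketch, not the Quantopian pipeline version, and with explicit indexing rather than the close[0]/close[-1] convention used in the posts:

```python
def positive_drift(closes, lookback=252):
    """True if the latest close is above the close `lookback` bars ago."""
    return closes[-1] - closes[-lookback] > 0

up = [1.0 + 0.001 * i for i in range(300)]   # steadily rising series
down = list(reversed(up))                    # the same series falling
```

In the pipeline the test is expressed over the whole universe at once, but per stock the logic is exactly this comparison.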

Nonetheless, Peter's program with its modifications opened up possibilities. I will still need to fix some of its weaknesses and adapt it to my purpose, but it does provide the building blocks for some interesting strategy design.

Peter's version of this trading strategy feeds on its equity line. Anything that can add cash throughout the trading interval can and will be used. This is where Peter's modifications really come into play. They enable full use of available equity and adapt to the ups and downs of portfolio value.

The objective remains to feed the beast. It will take every dollar that is made available. Two lines of code...

I returned the tradable window settings to (-13) and (100). Naturally, running that test again would give the same answer as already presented. What I changed this time was to also allow leverage during the drive to safety, when all shares are liquidated and cash equivalents are used. I set this leverage on the bonds at 1.80. The impact should be almost insignificant since the rate of return on these is relatively low, but it nonetheless puts more cash into the account, which can be used in the next upswing. And this effect snowballs with time, to the point where the modification produced the following chart:

Peter's Program : 85% leverage stocks, 80% leverage ETFs : no IEF : 49k allocation : windows -13 and 100

SMRS : Trade windows end points: -13 and 100 : ETF leveraging
total return 10624.1% Alpha 8.13 Beta 0.68 Sharpe 20.29 Max DD 41% Vol 0.41

This added $2.48M to the account, pushing the CAGR to 43.5% over the entire 12.9-year trading interval. At the same time, one can notice that while volatility remained about the same, the overall drawdown went down a little. Where we see improvements is in the alpha, beta and Sharpe. The beta still indicates that this portfolio varies less than the market as a whole. In all, it is 55.3 times better than the market average. That translates to a lot of alpha.

Of note, none of the program's logic or structure has been changed. It is the same program that Peter so graciously provided. The only things that changed were number settings: making one thing equal to x rather than y, with the sole purpose of generating more cash or equity as the trading strategy progresses in time. It might not know where the market is going, but it will take the ride.

The last post showed that you could get impressive results. However, Peter did raise a question: what about when liquidity problems arise and there are not enough candidates? A partial solution is to simply increase the stock universe and allow more candidates to pass through. Again, just two numbers to change. One, the size of the stock universe, which was raised to 3,200. A relatively small increment which technically should have a minor impact, but ends up having one. And two, allow factor_7 > 0.80. This will take the top 20% of the highest caps, giving the program access to more potential candidates and potential trades.
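As a side note, the CAGR figures can be cross-checked from total return and interval length. Taking the quoted 10,624.1% total return over the 12.9-year interval at face value, the compounding works out to roughly 43.7% per year, consistent with the quoted 43.5% given rounding of the inputs:

```python
def cagr(total_return_pct, years):
    """Compound annual growth rate from a cumulative percentage return."""
    growth = 1.0 + total_return_pct / 100.0   # final value / initial value
    return growth ** (1.0 / years) - 1.0

rate = cagr(10624.1, 12.9)   # about 0.437, i.e. roughly 43.7% per year
```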
And again, the way to show this is with another test:

Peter's Program : 85% leverage stocks, 80% leverage ETFs : no IEF : 49k allocation : windows -13 and 100 : 3200 stock universe : top 20% candidates
http://alphapowertrading.com/images/divers/SMRS_Peter_orig_wk_Lev_185_no_IEF_49stake_w13_100_bp_c200_80pc.png

SMRS : Trade windows end points: -13 and 100 : ETF leveraging : more candidates
total return 12584.1% Alpha 9.65 Beta 0.67 Sharpe 24.99 Max DD 44% Vol 0.40

This partial solution improved the performance level, raising the CAGR to 45.4% and generating $1.96M more in profits. There was practically no impact on the volatility, drawdown or beta. However, the alpha and Sharpe improved, indicating that increasing the population size simply led to more trading activity while the trading windows were open for business.

Which says, again, that the concept of riding the upswing of a wave with a long-only trading strategy is not so bad after all. The above produced 65.4 times what the market average was able to provide over the same trading interval.

Not a line of code was added to this program. Only its decision surrogate points were altered, in order to express a different long-term vision for where the trading strategy should go.

Hello All,

First, thank you to all of you for sharing your hard work. I am new to algo trading and have learned a lot from these posts.

I tried to load Peter's algo above on an IB demo account. It fails to load because of lines 286 and 293. I have not come up with a solution yet. Any ideas?

Thanks again!

For all the tests I've presented to date, none required a change in the code itself, only some parameters to orient the strategy in a desired direction, evidently with a slightly more aggressive stance. But if you look at their respective betas, apparently not by that much, since those portfolios still varied less than the market.

Ever since Peter Falter put out his modifications to the original version of this program, I have not used anything else. That program contained the elements I could work with. And to top it off, Peter delivered the program in impeccable form, even documented with his own analysis and test results.

When you look at it, a program is just that: a program. We can all design them differently. I've read somewhere that there could be some 400,000 such trading scripts on Q alone. And we don't see many of them. So, I'd also like to express my thanks to Anthony Garner for having made the original version of his program available. Without it, Peter would not have made the modifications he did. So, again, thanks to both.

With Peter's version of the program, since I have not changed a single line that would affect its logical structure, I have to conclude that the setting of parameter values is the only reason the output varied by so much. In a sense, these parameters were directing the orientation in which the strategy could evolve over time. What I call pivotal decision surrogates: values that can have an impact on a trading strategy just by being there with their defaults.

It's like in the last post. No code logic was changed. You increase the stock universe from 3,000 to 3,200. What impact could that have when you are looking for only the top 10? Well, statistically, it does have one. It is sufficient to change the composition of the portfolio, not only on the first day, but for the duration of the test. And this small incremental change, which should have no bearing on the performance result, does.

It was the same for another criterion: the list of stocks admissible for selection. Instead of using the top 15% of the ranked selection, it was increased to 20%. Another move which should intuitively have no impact, since you will be taking only the top 10 stocks. And yet, it did. The statistical reasoning is that increasing the selectable population to 3,200 and the selectable group to 20% simply allows more trade opportunities. And the program is designed to take advantage of them, admitting candidates that would not have been considered otherwise, marginally widening its search for profits.
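The composition effect is easy to demonstrate with a deterministic toy example (all numbers illustrative, not from the backtests): give one stock just outside the original 3,000-cap cutoff the highest momentum, and the top-10 lists produced by the two pool sizes differ.

```python
# Toy universe: 3,200 "stocks" indexed by cap rank (0 = largest cap).
# Momentum weakly decays with cap rank, except that cap-rank 3100
# is deliberately the strongest stock in the universe.
n_universe = 3200
momentum = {i: float(-i) for i in range(n_universe)}
momentum[3100] = 1e6                       # high-momentum smaller cap

def top10(pool_size):
    pool = range(pool_size)                # "top-N by market cap" screen
    return set(sorted(pool, key=lambda i: momentum[i], reverse=True)[:10])

picks_3000 = top10(3000)   # the 3,000-cap screen excludes stock 3100
picks_3200 = top10(3200)   # the 3,200-cap screen includes it
```

One admitted stock is enough to change the portfolio on day one, and every rebalance thereafter compounds the divergence.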

Why all this? What I see is that it is not just the program one has to look at. A debugged program is just the beginning. One has to see what the code is really trying to do, and why changes to such basic and ordinary parameters can adapt it to what you have in mind.

Peter presented a problem: the lack of selectable candidates as time progressed. An easy, simple, and immediate solution (among others) was to increase the sampling size of selectable candidates, which increases tradable opportunities. No code required, but still building on the trading strategy's inner potential. And it was enough to add $1.96M more in profits on the initial $100k stake. In all, something that could be viewed at most as a kind of administrative procedure: increase the bank of potential candidates by 7%. A procedure that could even have been issued from outside the program, as some kind of directive for going forward.

This trading strategy has even more hidden potential, but unlocking it will require changing its trading philosophy.

Some have expressed more than just doubts that Peter's version of the original program could produce anything near some of the performance charts I've presented here.

To make their point, they presented their case using Peter's version as well, but with their own parameter settings, showing emphatically that in no way could anyone produce anything really worthwhile with that program. Look a few posts back and you will see some examples.

My observation is that the performance results obtained are in line with their parameter settings. The program did, in fact, do the job it was asked to do. So, on that basis, I agree with their appraisal: with those settings, that program cannot perform.

Why is it that if I use the original version of this program, or Peter's last modified version, the test results are acceptable, while if I change some of the same program's parameters (read Peter's version), it immediately becomes hogwash, untrustworthy, or utter nonsense, as if it were impossible for that program ever, under any circumstances, to produce better numbers than those they displayed?

If I used the same version they did, I would obtain exactly the same results. Nothing better or worse.

But this does not say that Peter's version is worthless, as some tried to demonstrate. Only that the way it has been used does not enable Peter's program to extract any significant value.

The version I use has different settings: not a change in trading logic, but in the values serving as variables that partially control the trading strategy's behavior.

There are reasons why our respective versions of Peter's program, including the original program, behave differently: the procedures embedded within the program's code, and their respective default values.

So, here is my analysis of Peter's version used by one presenter's set of parameter settings.

But first, I'd like to note that I agree with Peter's findings: the stop loss functions in the original version were practically irrelevant, since a stop could rarely be executed; and the efficiency rating (e_r) had at most a minimal impact, since it could simply be replaced by close[-1] > close[0], meaning that if the latest price was higher than a year ago, it was practically an equivalent selection criterion (see the comments in Peter's program file).

This took out the idea that the original program was protected by its stop loss functions, or that its “efficient” stock selection procedure had any benefit beyond: hey, that stock rose last year.

This reduced the original trading strategy to a flight to safety switcher based on a moving average crossover of a market index. It might be able to make some money, but...

Before going any further, I would like to add to Peter's observations. Using a 1% profit exit is by no means too greedy; it is rather timid, in fact. However, wishing for a 1,000% profit target on a daily basis might be pushing it! I hope some see that that high mark was not reached by any trade in any of the stocks composing their 10-stock portfolio over the entire trading interval. So, technically, no value generation there either. This puts the entire profit burden on either the 1% profit expectancy or simply the luck of the draw.

What was left of the original program had absolutely no teeth. It is surprising that it even managed to make a 5.0% CAGR over the period. But note that even in that state, the program could still catch some of the underlying upward drift while a trading window was open for business. As if the market provided a consolation prize just for participating.

When I said that some played it linear, I was trying to illustrate that the portfolio inventory never exceeded 10 stocks (a constant is a straight line). At no point in their test did they hold more than 10 positions, each close to $1,000 and never exceeding $2,000. Based on my napkin calculations, it would take some 71 years before being able to add a single stock to the list, pushing the stock inventory to 11. Moreover, my estimate might turn out to be too optimistic. 71 years is far away (2087!).
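The 71-year napkin figure follows from the program's own sizing rule, `n_30 = int(portfolio_value/30e3)`. Assuming `context.min_holdings` is 10, as the 10-stock portfolios in these posts suggest, an 11th position requires the account to reach $330k, and growing $10k to $330k at the quoted 5% CAGR takes just under 72 years:

```python
import math

capital = 10_000.0        # the criticized test's starting stake
target = 11 * 30_000.0    # n_30 first reaches 11 at a $330k portfolio value
cagr = 0.05               # roughly what the stripped-down program achieved

# Solve capital * (1 + cagr) ** years = target for years
years = math.log(target / capital) / math.log(1.0 + cagr)   # about 71.7 years
```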

The only way one could make money was if the upward drift was significant for the 10 selected stocks during open trade-window seasons. But then, that too was ruined by incessantly switching from stocks to bonds to a new list of stocks. I counted 18 such round trips, each time paying commissions in and out on the whole lot, when at most, based on the index's moves, one might have needed to do so 3 or 4 times. But that is of secondary significance, just as commissions were disabled early in their program but reinstated later in the code.

What really breaks the version used, apart from the other issues which crippled the strategy's potential anyway, is the use of only $10k in capital. It showed no respect for the trading strategy. In fact, it completely paralyzed and curtailed any potential it might have had. The strategy, as demonstrated before, is not that scalable, at least not that way.

Sure, one wanted to make the point clear to this old funny chap. A point which, technically, I consider not even to be one, since having deliberately rendered whatever was in that trading strategy totally ineffective, one then comments that there was nothing there. I totally agree: their version of this program produces nothing of significance because they forced it, they programmed it, to do so. So, for me, as long as some use that strategy the way they do, I can make this statement: they will continue to get the same kind of results. That is my assessment. And I won't even use the words: I might be wrong.

Oh, as an added note. The performance charts some displayed using their version of the program, just as mine, are all generated by Quantopian software in the cloud. So, when I see one of your backtests, I can accept it as is. Maybe some could use that program differently... and give it a chance to show its tremendous potential, because Peter's version has a lot going for it. Simply look at what I could do with it, and it can do even better.

Here is a simplified model of the SMRS where I've idealized market swings. The yellow-green line plays the role of SPY. The green sections are for when trading windows are open for business, and the yellow ones for when one would be in bonds or cash equivalents. At line crossings, one switches from one to the other.

# SMRS
http://alphapowertrading.com/images/divers/SMRS_Idealized.png

There does not seem to be any structural flaw; it all appears reasonable and logical.
After all, it is a simple decision surrogate responding to an index moving-average crossover. All that is asked of a stock is to be among the highest-capitalization stocks and at a higher price than a year ago, while the fast moving average is above the slow one (green line segments). One can buy at any time the green light is on.

The first flaw I see in such a design is that it guarantees losses at the trade-closing window with its exit-all-stocks order. Some view it as a trailing stop loss, but it is more. All the positions in all the stocks that were bought above this exit point will have to declare a loss, all of them. And this will occur at every market cycle. Much of the benefit of the upswing is lost, since all those losses have to be paid for. Depending on where you put this exit-all point, the overstay bill will be commensurate. There is also all the time wasted waiting for that exit-all crossing, which comes in more often than it needs to.

One advantage, however, is that you will not be in stocks under a yellow line, when stock prices, on average, are falling. There are still drawdowns, but no major portfolio bear market, no big drawdowns, no 50%-drop-in-a-week kind of thing. You simply won't be in the market when they come, and as such it represents an interesting protective measure. The method will spare you that big drop, whether you like it or not. You don't expect much during the bond periods, but at least this might preserve accumulated capital to a certain degree. There was no need to implement a stop-loss policy on a stock-by-stock basis; the exit-all-longs at the crossing acts as a supersized portfolio-level trailing stop loss.

While in stocks, you are bound to catch the upward drift, since in general stock prices go up when the average is going up. You are bound almost by default to make some profits in the process. However, you have a reduced market exposure, being in the market only during the green lines.
Your added return will have to compensate for the reduced exposure by working harder, especially if part of that exposure is not to your advantage. There was no need to buy anything above the close-all trading-window crossing price. Maybe this is where the second flaw resides. All those trades done above the liquidate-all crossing were taken while the market was presenting the greatest number of opportunities. Meaning those trades were taken when the supply of candidates was at its highest, as were their respective prices. The strategy is buying at all the tops it could find, or was allowed to trade. At or near the top, the strategy has a built-in breakout system and then hopes to make a buck on that. May I burst that balloon.

At the bottom of the price curve, where one would be interested in buying, the method offers the least number of potential candidates (most stocks are falling). Your selection criterion requires that stocks rise for a year before considering them: p(t) – p(t-252) > 0. So, from the trade-open window you have few available candidates, and as the market's average price rises, more candidates emerge, offering more and more possibilities. Except that from an unspecified point (you will only know it after the fact), all the trades you take will be at a loss, even if you don't know it at the time. Some could wonder why that system is not making that much money. May I say, it is actually designed to shoot itself in the foot. Keep some of the underlying principles where you can find some benefit, and then trade this strategy differently. For starters, do make some structural changes.

"All the positions in all the stocks that were bought above this exit point will have to declare a loss, all of them. And this will occur at every market cycle."

To combat this, I am working on implementing a support and resistance indicator to open and close the window. Any ideas would be greatly appreciated!
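One simple, common stand-in for support and resistance is a Donchian-style channel: open the trading window when the index closes above its trailing N-day high (resistance broken), and close it when the index closes below its trailing N-day low (support broken). A minimal, framework-agnostic sketch assuming a plain list of closing prices; the lookback values are illustrative:

```python
def channel_signal(prices, lookback=20):
    """Return a list of booleans: True means the trading window is open.

    Opens when price breaks the prior `lookback`-bar high (resistance),
    closes when it breaks the prior `lookback`-bar low (support).
    """
    open_window = False
    signal = []
    for i, p in enumerate(prices):
        if i >= lookback:
            window = prices[i - lookback:i]        # prior bars only
            if p > max(window):
                open_window = True                 # resistance broken: enter
            elif p < min(window):
                open_window = False                # support broken: exit
        signal.append(open_window)
    return signal

# Rising series then a collapse: the window opens on the breakout
# and closes on the breakdown.
prices = list(range(1, 30)) + [5]
sig = channel_signal(prices, lookback=5)
```

Wired into the strategy, something like this could replace the SMA crossover as the green/yellow switch, though the lookback would need tuning against whipsaws.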
Guy, PLEASE attach your backtest so we can all follow along, and thanks for your insight!

Terry, I have not started doing structural changes to this program. I wanted first to get a good image of what and where I would like to change things in code, since up to now all modifications have been made to the default parameter values that were already given. These value modifications nonetheless forced the program to behave differently, and based on the test results, just playing with the parameters provided a wide range of outputs. Even with those modifications, the program had to contend with the aforementioned weaknesses and still come out ahead. So, the first task was to identify the weaknesses, and then either correct them, minimize them, or ignore the strategy. However, I think there is a framework in there that could be usable.

From the chart presented in the previous post, the objective is to transform that chart to operate differently. The chart below depicts what could be the ideal configuration (more or less). Like any model, it is an idealization. The backdrop is the same as the previous chart, with the yellow line serving the role of the SPY. The SMA is of little use in this case. The green and red segments are considered acceptable trading zones, serving as open-for-trading windows. What is left of the visible yellow line is viewed as non-trading zones: necessary barriers for prices to cross to ensure a Δp > 0. Based on the model, this could be achievable for all the trades, should the trading activity be limited to their respective zones. The main idea is to gain the ability to average in and out of positions within their respective open-for-trading windows. The yellow dots represent incremental buys, while the white ones are for the scaling out of the portfolio's existing inventory.

# SMRS – NEW Simplified Model
http://alphapowertrading.com/images/divers/BuyZones_vo1.png

On such a schema, all trades can turn out to be positive.
Sure, the price could go higher after having sold all the inventory. You would be totally in cash, or cash equivalents, at what could be considered a cycle top. But wasn't that the main objective of this SMRS trading strategy, as for many others: to switch to cash when general prices decline? This design would greatly alleviate some of the problems presented by the SMRS. It would open up the initial price-rise segment at the bottom of the curve. Maybe most importantly, it would stop buying all the way to the top, only to then declare all the trades taken above the stop loss as losses by executing its global stop loss at the SMA crossing. The above chart also helps explain some of the transformations I've made to the program (see previous posts). Such procedures have been discussed in my book on trading mechanics.

Hello All, I have been trying to improve this algo. As Guy points out, the system is buying past the stop loss and experiences big drawdowns until the SMA crosses. To close the trading window sooner, my thought is to add a trailing stop on the entire account of, say, 10%. Is there a way to store the highest value of the entire portfolio and trigger a trailing stop off that value? A rough example would be: context.portfolio.portfolio_value_current < context.portfolio.portfolio_value_highest * .9. Thanks in advance for your help!

I think the sensitivity to start date comes from the fact that the momentum calculation takes the price from a single day in the past. A stock can have a single-day dip or peak between two consecutive dates, for example a 20% difference between close[-120] and close[-121]. If you run two simulations where one hits close[-120] and the other, shifted by one day, hits close[-121], the resulting momentum will change a lot, the ranking will change a lot, and the selection can also change a lot, giving high variance in returns.
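On the account-wide trailing stop asked about above: the attributes in the rough example (`portfolio_value_current`, `portfolio_value_highest`) are hypothetical; only `context.portfolio.portfolio_value` exists, so the high-water mark has to be stored yourself. A framework-agnostic sketch of the bookkeeping, using the poster's 10% figure:

```python
class TrailingStop:
    """Track a portfolio high-water mark and flag a trailing-stop breach."""

    def __init__(self, stop_pct=0.10):
        self.stop_pct = stop_pct
        self.high_water = 0.0

    def breached(self, portfolio_value):
        # Update the high-water mark first, then test the drawdown from it.
        self.high_water = max(self.high_water, portfolio_value)
        return portfolio_value < self.high_water * (1.0 - self.stop_pct)

ts = TrailingStop(0.10)
hit = [ts.breached(v) for v in (100_000, 110_000, 105_000, 98_000)]
# 98,000 is more than 10% below the 110,000 high-water mark, so only
# the last check trips the stop.
```

In the algorithm, one could call this once a day from `daily_rebalance` with `context.portfolio.portfolio_value` and liquidate into the safe set when it returns True.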
I would like to run the algorithm on a smoothed version of past prices. For example, instead of picking close[-120], we could pick SMA(7)[-120]: not the single-day price but a 7-day moving average of the price. I feel this should lower the variance for a 7-day shift in start date... I haven't had the time to test this, however...

It has been a hot minute without any updates. What's the news?

Yesterday, I re-read all the comments made in this thread. I would not change a single word of what I have said. There was no "maybe it could have been something else." Notwithstanding, it could have been better written. Initially, I thought here was a place where people could exchange ideas, look for new ones, improve on the old, evaluate and compare trading techniques. Things that could benefit everyone. Also, mostly a place to learn new stuff. I stopped posting in this thread; I felt the “exchange” was going nowhere.

The original rotation program by Anthony had absolutely nothing going for it. Did it need improvements? Definitely. Its stock selection process used the same techniques as other strategies I've seen on Q, so nothing original there. It had an efficiency-ratio ranking system to select the best tradable stocks. Interesting. But there, Peter showed that it was totally ineffective, and I concurred. You could replace the whole process with zero with no impact at all, no real change in performance. The strategy had a defined stop loss, except that no trade was ever executed on that premise. The same for its coded-in-stone profit target; there again, not a single trade ever reached it. Technically, anything that is never executed in a trading program is just redundant code, carry-on baggage. The whole program could be reduced to one effective line of code: close[-1] / close[0], using its 252-trading-day lookback period. Trading on “news” that was at least 6 months old.
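The SMA-smoothed anchor proposed a few posts up (a 7-day average ending at close[-120] instead of the single close) can be prototyped offline. A sketch assuming a plain Python list of daily closes; names and numbers are illustrative:

```python
def smoothed_momentum(prices, lookback=120, smooth=7):
    """Latest close divided by a `smooth`-day SMA ending `lookback` bars back.

    Replaces the single anchor price close[-lookback] with the mean of the
    `smooth` closes ending there, so a one-day spike at the anchor no
    longer whipsaws the ranking when the start date shifts by a day.
    """
    end = len(prices) - lookback + 1      # slice end just past the anchor close
    anchor = prices[end - smooth:end]     # the `smooth` closes ending at the anchor
    return prices[-1] / (sum(anchor) / smooth)

flat = [100.0] * 130
spiked = list(flat)
spiked[10] = 120.0                        # one-day 20% spike exactly at close[-120]

raw = spiked[-1] / spiked[-120]           # plain momentum: badly distorted
smoothed = smoothed_momentum(spiked)      # barely moved by the spike
```

Here the spike moves plain momentum by roughly 17% but the smoothed version by under 3%, which is exactly the start-date variance reduction being hoped for.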
Trying to trade something that might not have been good enough even to make a new high for over a year. He could have used the expression close[-1] - close[0] > 0; it would have done the same thing. At least those that were already going up were also part of the selectable group. It is only after Peter made his modifications to the original code that the trading strategy could gain value, not before.

I think Anthony demonstrated pretty well that his trading strategy, as he designed it, had no teeth, no traction whatsoever. In fact, he himself made every effort to destroy what little was left of his program, a useful step in the strategy development process; probably everyone should do that too. Before putting something live, we need to be sure that a program has some merit; we should know its limits, we should know what it does, and have some justification for how the strategy behaves.

It's like the efficiency-ratio rating used in Anthony's program. I've seen it used elsewhere as well. And here, it was easily shown to be worthless. That is good information to have. You stop wasting time on its code, retesting a concept that brings absolutely nothing worthwhile. On the other hand, because it had no impact, it could stay in the program and “pretend” it was doing something beneficial when in fact it was just more redundant code.

Any stock trading strategy's payoff matrix can be summarized using 3 metric numbers that can be seen as variables in your program. You scale any of them up, and the output goes up. That is what was being shown with my default-parameter changes. It is up to the strategy designer to find ways to push these variables higher. Only 3 numbers. How hard can it get? You have them in every simulation you make. This is explained in another thread on the payoff matrix. But then, nobody seems to read that anyway, or comment for that matter.

After my last post, I made another modification to the parameters: one other number affecting all 3 mentioned.
It's a simulation I interrupted even before it was half finished. I immediately made copies of the program on my machines, then deleted the trading strategy on Quantopian, and have not touched it since. Don't worry, I made hard copies too. That change in variable totally changed the very nature of the program, and since it did increase the performance level a lot, I wasn't about to add more fuel to the fire, since I would not share that single number either, or show partial test results. Well, let's say not yet.

Pravin's MVO algo throws the error below; any idea how to fix it?

    There was a runtime error.
    KeyError: Equity(46631, symbol=u'GOOG', asset_name=u'Alphabet Inc. Cl C', exchange=u'NASDAQ',
        start_date=Timestamp('2014-03-26 00:00:00+0000', tz='UTC'),
        end_date=Timestamp('2016-10-25 00:00:00+0000', tz='UTC'),
        first_traded=None,
        auto_close_date=Timestamp('2016-10-28 00:00:00+0000', tz='UTC'),
        exchange_full=u'NASDAQ GLOBAL SELECT MARKET')

Volker, like you, I am not that proficient in Python, but I too can read code. Your question was: "OK, can some kind soul tell me the logic of this strategy? It seems too good to be true." First, the trading strategy you want to study is Peter's version, the one at the top of the page. Before porting it to Wealth-Lab, I would consider cleaning up the code a bit. Here is some of the program's logic.

The stock selection is simple. It takes the top 10 momentum plays of the 3,000 highest-capitalization stocks of its 8,000+ stock universe. That's okay: if they got that big over the years, they must have gone up in price, and your present bet is that they will continue to do so. A reasonable bet that can apply in the future, being based on fundamental data. It also eliminates most of what could be considered lower-valuation stocks (read: penny stocks, and stocks going bankrupt). All good points. Some of the program's code or "features" should be ignored, being totally redundant code.
For instance, the whole efficiency-ranking (e_r) system: that code has no value. It can be replaced by zero, as was demonstrated by Peter, and he is right. The higher profit target is never executed. The programmed stop loss is never executed. The profit target that seems to be effective is set at 1.00%. You know that if you did that using Wealth-Lab, you would get miserable results; well, let's say not extraordinary results, just to be polite.

When you break it all down, you are left with a breakout system coupled with a trend-following system where the only rules of entry are that the stock has a higher price than a year ago, and the price is above its slow moving average, controlled by the moving average on the surrogate index (SPY). The crossover of the moving average on the SPY enables the total switching from stocks to bonds and back to stocks again. Look at the idealized SMRS chart posted above, or view its copy: http://alphapowertrading.com/images/divers/SMRS_Idealized.png

Peter's version of this program will work going forward, even in Wealth-Lab. The stock selection process will stand. And betting that a stock can go up if it is higher than a year ago provides more than enough candidates to trade, since this trading strategy only wanted 10 stocks. So, all this will work going forward.

Now, the "too good to be true" part. That must refer to the modifications I made to that program. Well, there is nothing extraordinary in those modifications, and nothing that the strategy would not withstand. So, you should get similar results porting these modifications to Wealth-Lab code too. The numbers I changed had an impact on the number of trades taken, increased the net profit per trade, increased the trading unit, and increased the number of stocks to trade, with the whole thing given leverage (85%). So, yes, it would produce much more than the original version. And yes, it will be executable going forward. If someone doesn't like leverage, they simply don't use it.
I've provided some background explanation of why it will work in a series of notebooks and HTML files in the following thread: https://www.quantopian.com/posts/the-payoff-matrix

Note that there are deficiencies in the original code, not all repaired by Peter's modifications, that I did not even address. I find that the program is very wasteful of capital resources, doing a lot of unnecessary trades producing absolutely nothing, and even worse, taking a lot of unnecessary losses too, while missing trade opportunities by the boatload. And yet, you could still manage to increase its profitability, even with all its deficiencies. It is worth cleaning up the code. It is worth making the changes I made to the program, even if they are minor (I just changed a few numbers). It is worth eliminating the weaknesses and concentrating on its strengths. You should have no problem porting the code to Wealth-Lab. All my best.

Guy, thank you for the additional information. Actually my "too good to be true" was already aimed at the original post and the several variations. From my experience, results change a lot if tested on real, clean data. For example, I am not sure if the data used here at Quantopian to avoid survivorship bias is the right approach.

1. Is there any way I can look at the data?
2. Has anyone tested it on just the NASDAQ 100 symbols?
3. For any symbol other than the NQ100 and SP500, you will have a hard time getting the execution price you wish to see.

Believe me Guy, I have a big problem porting the code to Wealth-Lab. :) You, on the other hand, are close to being a genius, so I am counting on you. I would truly appreciate it if you could give me and my team a head start. I think it would be interesting to compare results, especially since the expectation would be that the results somehow match. BTW, I will be in Las Vegas for the Traders Expo. Anyone from this group there? I would be interested to meet you all.
VK

You would almost certainly have had the co-operation of a number of people here if that silly old windbag had not hijacked these threads and used them to spread his asinine, nonsensical, idiotic views. See his other mad thread, which no one has bothered to respond to except a couple of people, heavily tongue in cheek. You don't test it out on just the Nasdaq 100 symbols unless you want a curve-fit abomination. You test it out here on Q using their data, which includes de-listed stocks. Or you subscribe to data which includes de-listed stocks and use your own backtesting software. Q's data is "real" and it is reasonably clean, better than you are likely to afford on your own. If you listen to that old madman, you have only yourself to blame for the inevitable calamity.

Hi "Big Mouth", I am happy for any cooperation, regardless of where it is coming from. ;) In regards to my personal background, you do not have to worry about me. I am the co-founder of Wealth-Lab, truly the first website that allowed you to perform backtesting on the web. If I say "too good to be true", I mean it. I have seen systems like that on the Wealth-Lab site. That is why I am doubting the Q approach of how it handles the data. I have shown in some articles that a "random buy" of symbols beats the market. I would be happy to reproduce the results on the current NQ100. However, Wealth-Lab Developer allows me to backtest on real data, using only the symbols that are in an index at the time a signal is generated. Again, in some articles I have provided evidence of how different backtesting results are if you consider the real symbols in an index. Anyway, I look forward to being corrected and to seeing the system as it is succeed on a portfolio of stocks from different indices (tested by me with Wealth-Lab Developer). I look forward to any help. VK

Hi Volker, as "Big Mouth" (a.k.a. Anthony) has said, data on Quantopian includes delisted, merged, and bankrupted symbols. So, survivorship bias is not really an issue.
Commissions and slippage are charged by default. However, as you have seen, everybody is cordial and ready to offer help, and a warm welcome. A really inviting place... everybody so thoughtful, so considerate, so kind...

A difference you will notice, compared to Wealth-Lab, is that only a fraction of available volume can be purchased or sold at any one time, in order to minimize price impact. It is an attempt to mimic real trading, where volume could disappear before you get there, or where you could otherwise be taking more than what was available. I like that; it makes winning more difficult, but also closer to reality. I find the data clean enough. The stock selection process can also mimic reality with its access to fundamental data (no look-ahead, no peeking allowed).

If you take Anthony's program, you won't go far. It's a linear program. He has shown numerous times that his strategy will not outperform. A one-year breakout is not necessarily what you would call a "trading system"; it lacks imagination, like many you can find on Wealth-Lab. Here, trade decisions are being made 6 months after the whole market has turned around. It will save you from big drawdowns, but you will pay for it in CAGR terms.

Use Peter's version; it is a program with potential. My advice: don't cripple it like Anthony does by putting it on a $10k or 10-stock diet. The program is made to handle bigger portfolios by design. Think initial capital of $500k and up. But even there, Peter's version has not addressed some of the limitations and redundant code of the original program, so you will have to work on that too. The original program is very wasteful of capital resources. You already have on Wealth-Lab many rotation trading scripts with the same theme. Over the years, you have seen them come and go, just as I have, some with the same "too good to be true" stuff. But do investigate this one. I would like you to see, on your own, why this one is different.
Because after that, once you have caught what makes it tick, you will start incorporating into your own programs what is implied in the principles used in this trading strategy. It is like liberating it from its shackles. I could do it, just as you can, with just a few numbers. But even there, you will want more, and you could do that too. I think you will find many ways to improve on the code. With your experience, you could also prototype it faster in Wealth-Lab code. BTW, I have not seen a strategy like Peter's on Wealth-Lab; well, let's say, not used the way I use it, to be more precise. No matter what "Big Mouth" might say, investigate Peter's version. And for "Big Mouth": I've put up sufficient data, test results, explanations, and mathematical formulas for you to hang and nail me. There are enough equal signs on the table, in plain sight, for you to prove they are wrong. So, try.

Hi Guy,

At least replies are very fast here. It doesn't give me time to get away from my desk. ;) To clarify again:

1. Within Wealth-Lab Developer I have the option to test on real, super clean data and only on the symbols that are in the index at the time of the signals. I wouldn't even consider the ones that dropped out. Mergers, delistings, and bankruptcies are all considered.
2. I also have the option to use only x percent of daily volume for any position. It is a click of the mouse.
3. I can even use "Worst Case Scenario". Another click of the mouse and I will get the worst trades from the bunch.
4. You know that slippage and commission are easy settings, or can be programmed.

I wonder what Q has done to clean the data of false opens, highs, and lows. It took us years to clean those on the SP100 and NQ100. In many strategies that makes a difference of 30% more or less profit. Most of all, many smaller symbols will not be able to make the trades you see in backtesting.
You would be surprised how many times one of my systems hits the low of the day and gets partial fills or doesn't get executed at all. In backtesting (not using our data) you would think you got executed. So data does matter! So all this is taken care of. I am not here to "hang and nail" anyone; I am just trying to see if it works under realistic circumstances and rigid testing methods. I believe the intentions here are good, and maybe there is something that I have overlooked about rotation systems. Always willing to learn and improve results. ;)

Anthony, after having seen what you wrote about me on your website, let me doubt that. I find the same words being used here. At least you deleted that post. Then, let me reiterate for "Big Mouth": I've put up sufficient data, test results, explanations, and mathematical formulas for anyone to hang and nail me. There are enough equal signs on the table, in plain sight, for anyone to prove they are wrong. So, there is an open invitation there.

Anthony, if this is the reply to my post, then I am totally lost!!! I am just trying to understand the logic of your strategy so that I can check it with my own data. I did not accuse you or anyone of anything, did I??? I did not even think or talk about you when I replied to the posts. All I wanted was the logic of the strategy in plain English. That's it. Since you all programmed it and did variations of it, I thought it might have been easy to write it down.

Volker, those are all good features to have. They enable developing a trading strategy under adverse conditions, stuff that could happen going forward. Not surprisingly, I like adverse conditions and harsh trading environments when backtesting. I am not that much of a fan of having super clean data, for the simple reason that it won't be clean going forward either. For the same reason, I don't minimize outliers, rare events, or whatever might happen.
Stuff like trading halts, non-availability of shorts, no significant tradable volume on the bid or ask... Notwithstanding, the following link points to an explanation of what was done with Peter's version of the program. You should find it easy to see what I did to that trading script. Nothing out of the ordinary. But what I did increased the 3 portfolio-level metrics that defined that trading strategy, as it would any other. Doing so increased performance 17-fold without changing a line of code, except some numbers, some constants. Which also means there is more room for improvement. See the latest notebook (to be added soon) in the Payoff Matrix thread: https://www.quantopian.com/posts/the-payoff-matrix, or read the HTML file at: http://alphapowertrading.com/index.php/papers/226-a-tradable-plan. I think it will answer some of your questions, and even some from other people who might have followed this discussion, this exploration of doing things differently. Nothing is set in stone; objectives change, and methods of doing things can change too.

I want to try this strategy with my IB live account. Is there a solution for this error? Thanks.

    NotAllowedInLiveWarning: The fundamentals attribute valuation_ratios.fcf_per_share
    is not yet allowed in broker-backed live trading

Also, I made the following changes to make the code compatible with live trading:
- cleaned a few deprecation warnings
- changed the market_cap class as per Gp lars' suggestion

    class market_cap(CustomFactor):
        inputs = [morningstar.valuation.market_cap]
        window_length = 1
        def compute(self, today, assets, out, mcap):
            out[:] = mcap[-1]
"""
Adapted from "A simple momentum rotation system for stocks"
https://www.quantopian.com/posts/a-simple-momentum-rotation-system-for-stocks

PF 2016_0807: The unmodified performance of this algorithm is remarkable from 1/4/03 to 11/30/15
Total Returns 1287%    Benchmark 192.5%    Max Drawdown 50.4%
Alpha 0.87    Beta 0.85    Sharpe 3.25    Volatility 0.30

Method outline is:
Buy and hold the best 10 of 3000 stocks each month
During the month sell big losers (stop loss) and big winners (profit taking)
Selection considers
- Four momentum factors over 20, 60, 125 and 252 days
- Efficiency threshold = 0.031 based on 252 day return vs sum of daily High minus Low

**PF 2016_0807: Comments relative to the monthly version of the strategy

PF 2016_0807: The original post successfully outlines a method of use to the community.
There was no attempt to make this a production-ready algorithm, so some problems and uncertain features exist.
A few of these impacted my ability to understand what was happening, so I tried to resolve them (perhaps only to myself):
1) although this is nominally a long-only algo, the daily rebalance can result in shorting
2) liquidity problems exist even when starting with only $100k. Leverage is roughly 35% to 110%. The number of assets is roughly 4 to 15 vs the defined top-10.
3) the algorithm lacks any logic to exit stocks during prolonged drawdown periods. Real investors would have exited a few times from 2003 through 2015.
4) the utility of the efficiency factor test is not clear. The threshold of 0.031 appears to be oddly low as efficiency could easily be much larger.
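For reference, the efficiency test described above (252-day return vs the sum of daily High minus Low) can be sketched as follows. This is one plausible reading of the description, not the algorithm's exact code; the function name and arguments are illustrative:

```python
import numpy as np

def efficiency_ratio(close, high, low, period=252):
    """Net price change over `period` days divided by the summed daily
    High-Low ranges over the same span.

    A value near 1 means price moved in nearly a straight line; a value
    near 0 means most of the daily range was noise. The original algo
    screened candidates at a threshold of 0.031.
    """
    net_move = abs(close[-1] - close[-period])
    total_range = np.sum(high[-period:] - low[-period:])
    return net_move / total_range

# A smooth ramp with small daily ranges is highly "efficient"
c = np.linspace(100.0, 150.0, 253)
print(efficiency_ratio(c, c + 0.1, c - 0.1))
```

Seen this way, the note that 0.031 "appears to be oddly low" makes sense: almost any stock with a meaningful net move clears such a small threshold.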

PF 2016_0807: Below is a summary of what I did to resolve/improve issues 1-3 and my finding that the utility of the efficiency function can be had more simply by requiring a positive 252-day return (i.e., factor_4 > 1.0).
I tried to leave the rest of the algorithm as is. Performance is evaluated over the same 1/4/2003 to 11/30/2015 period as the original posting. Another tester might investigate other interesting features of Garner's algorithm (ranking periods, profit taking logic, ...)

PF 2016_0807: Shorting issue (resolved in one change)
I modified the code to issue sell orders for obsolete positions before issuing buy orders for new positions.
This has resolved the problem and improved overall return as the stocks being shorted were probably not good shorting candidates.
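The sells-before-buys fix can be illustrated independently of the trading framework. A hypothetical sketch (not the actual Quantopian order code): partition the rebalance into exits and entries, and submit all exits before any entries:

```python
def rebalance_orders(current, target):
    """Split a rebalance into sells-first, then buys.

    current: set of symbols currently held
    target:  set of symbols wanted after the rebalance
    Returns (to_sell, to_buy). Issuing every sell before any buy avoids
    the accidental shorts seen when buys execute while stale positions
    are still being closed.
    """
    to_sell = sorted(current - target)  # obsolete positions: close these first
    to_buy = sorted(target - current)   # new positions: open only afterwards
    return to_sell, to_buy

sells, buys = rebalance_orders({"AAPL", "XOM", "GE"}, {"AAPL", "MSFT"})
print(sells, buys)  # ['GE', 'XOM'] ['MSFT']
```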

PF 2016_0807: Liquidity problems (resolved in three changes)
Leverage often exceeds 1.0 due to an inability to sell obsolete positions in a single trading session.
Leverage 1: Add a function to daily rebalance to continue sales of these positions
This did drive the leverage down to 1.0 quickly in all but a few cases.
As expected the total return also dropped as the average leverage was reduced and more trade fees were paid.
Total Returns 1163%    Benchmark 192.5%    Max Drawdown    52.1%
Alpha    0.78    Beta    0.88    Sharpe    2.85    Volatility    0.31

PF 2016_0807: Leverage 2: Add Average Daily Dollar Volume (ADDV) as a filter factor.
Consider only stocks with ADDV > $500k over the past 20 days.
This nearly eliminated the need to sell obsolete stocks on multiple days until portfolio size got much bigger (~$500k).
This did improve overall returns
Total Returns 1314%    Benchmark 192.5%    Max Drawdown    48.1%
Alpha    0.88    Beta    0.89    Sharpe    3.17    Volatility    0.31
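The ADDV screen amounts to the mean of close times volume over the window. A minimal sketch with hypothetical names, assuming daily close and volume arrays:

```python
import numpy as np

def passes_addv(close, volume, window=20, min_dollars=500e3):
    """Average Daily Dollar Volume screen: mean(close * volume) over the
    trailing window must exceed min_dollars."""
    addv = np.mean(close[-window:] * volume[-window:])
    return addv > min_dollars

# A $10 stock trading 100k shares/day has ADDV = $1M and passes
c = np.full(60, 10.0)
v = np.full(60, 100_000.0)
print(passes_addv(c, v))  # True
```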

PF 2016_0807: Leverage 3: Allow the number of equities to increase with portfolio value
Try context.holdings = max(10, int(portfolio_value/30e3))
As expected this reduced volatility. It also had some benefit to overall return
Total Returns 1356%    Benchmark 192.5%    Max Drawdown    48.5%
Alpha    0.91    Beta    0.91    Sharpe    3.72    Volatility    0.28
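The holdings rule above is a one-liner; this sketch just restates max(10, int(portfolio_value/30e3)) with illustrative names and parameters:

```python
def target_holdings(portfolio_value, per_stock=30e3, minimum=10):
    """Number of positions grows with capital: one slot per $30k of
    portfolio value, with a floor of 10 stocks."""
    return max(minimum, int(portfolio_value / per_stock))

print(target_holdings(100e3))  # 10 (the floor applies below $300k)
print(target_holdings(600e3))  # 20
```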

PF 2016_0807: Drawdown protection (improved to acceptable level)
Add a simple drawdown protection based on simple moving averages of SPY
If SPY_SMA_fast < SPY_SMA_slow, then go to cash; else use the algorithm
Fast period should be on the order of the shortest momentum filter (20 days)
Since an SMA filter is slower than an EMA, a period of less than 20 days is desired.
Slow period should be several multiples of the fast period, but not slower than the overall algo.
The geometric average of the four periods (20,60,125,252) is 78 days
A 15/80 day test provided good drawdown reduction (26% vs 48%) with about 10% loss in total return
15/80 Cash  Total return 1204%    Alpha 0.85    Sharpe 4.00    Max DD 26%
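The fast/slow SMA regime test can be sketched as follows; a minimal illustration with assumed names, not the strategy's exact implementation:

```python
import numpy as np

def in_stocks(proxy_close, fast=15, slow=80):
    """Regime test on the proxy index (SPY here, VTI later): hold stocks
    only while the fast SMA is above the slow SMA; otherwise exit to
    cash (or bonds, per the next section)."""
    sma_fast = np.mean(proxy_close[-fast:])
    sma_slow = np.mean(proxy_close[-slow:])
    return sma_fast > sma_slow

# Rising market: fast SMA sits above slow SMA, so stay in stocks
print(in_stocks(np.linspace(100.0, 130.0, 120)))  # True
# Falling market: fast SMA drops below slow SMA, so exit
print(in_stocks(np.linspace(130.0, 100.0, 120)))  # False
```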

PF 2016_0807: Most asset allocation models would exit to bonds vs cash, so that was tried as well
Bond set = [TLT, IEF, AGG]
15/80 Bonds   Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
This is a nice result. A somewhat better result might be had by allowing rotation between stocks, bonds, cash, or some combination of stocks/bonds, but that is beyond my current purpose.

PF 2016_0807: What is effect of the ADDV limit?
ADDV limit, $30k per holding, and $100k initial investment.
Exiting to bonds when indicated by 15/80 SMA test
$0.2M:  Total return 1810%    Alpha 1.34    Sharpe 6.15    Max DD 20%
$0.5M:  Total return 1790%    Alpha 1.32    Sharpe 6.07    Max DD 20%
$1.5M:  Total return 1546%    Alpha 1.13    Sharpe 5.00    Max DD 20%

PF 2016_0807: What is the effect of the efficiency threshold? I tried several values as shown below.
Any limit > 0.0 has a good result until some point above 0.5.
Garner's 0.031 recommendation for his top-10 algorithm looks good.
My finding is for a variable and larger set of equities (10 to 60 in any trial).
PF 2016_0807: Intermediate is the return reported for the week of 1/3/2010 (near midpoint)
Limit 0.0      total return 1815%    intermediate 848%    Sharpe 6.15
Limit 0.031    total return 1790%    intermediate 836%    Sharpe 6.07
Limit 0.1      total return 1786%    intermediate 818%    Sharpe 6.05
Limit 0.2      total return 1784%    intermediate 813%    Sharpe 6.03
Limit 0.4      total return 1799%    intermediate 791%    Sharpe 6.09
Limit 0.5      total return 1764%    intermediate 809%    Sharpe 5.97
Limit 0.7      total return 1550%    intermediate 739%    Sharpe 5.20
==> might as well use a limit of 0.0
==> This is equivalent to stating factor_4 > 1.0, which is easier to implement.

PF 2016_0809: Thomas Chang published a more compact implementation of the four-factor ranking.
I'll probably use this in a future version of this strategy.

PF 2016_0809: However, problems remain.
Most notably, there is a very large sensitivity to starting date.
Garner made several posts showing this wildly variable performance.
Over the span of 1/4/2003 to 11/30/2015 the total return can be as little as 200%
with no improvement in volatility or drawdown vs SP500 buy-and-hold.
Starting date sensitivity is a common problem in asset rotation strategies, but this one is particularly sensitive.

**PF 2016_0809: End of comments relative to the monthly version of the strategy

**PF 2016_0814: Start of comments relative to the weekly version of the strategy

PF 2016_0814: Potential remedies for rotation method and timing
a) rebalance more frequently, perhaps weekly
b) initiate multiple overlapping positions (invest weekly and hold monthly)
c) consider using a small cap proxy for the entry/exit test.
PF 2016_0814: Potential remedies for asset selection
a) investigate whether some very simple fundamentals screening could reduce the likelihood of buying troublesome stocks
b) investigate different momentum models (slope, percent below high)

PF 2016_0814: Rebalance weekly
Below are returns as a function of days_offset
offset= 0      total return  719%    Alpha 0.48    Sharpe 3.90    Max DD 47%  Vol 0.28
offset= 1      total return  317%    Alpha 0.17    Sharpe 0.73    Max DD 52%  Vol 0.31
offset= 2      total return 1142%    Alpha 0.48    Sharpe 3.14    Max DD 29%  Vol 0.28
offset= 3      total return 1308%    Alpha 0.94    Sharpe 3.72    Max DD 29%  Vol 0.27
offset= 4      total return 1762%    Alpha 1.29    Sharpe 5.35    Max DD 33%  Vol 0.25

PF 2016_0814: Rebalance weekly and check that free cash flow is positive
Rationale: a quick look at stocks selected during periods of poor algorithm performance showed poor fundamentals.
Positive FCF might be one of the simplest tests for "minimally acceptable" fundamentals.
See returns as a function of days_offset
offset= 0      total return 1024%    Alpha 0.79    Sharpe 3.29    Max DD 44%  Vol 0.25
offset= 1      total return 1201%    Alpha 0.86    Sharpe 3.70    Max DD 34%  Vol 0.25
offset= 2      total return 1912%    Alpha 1.41    Sharpe 6.16    Max DD 26%  Vol 0.24
offset= 3      total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
offset= 4      total return 1533%    Alpha 1.12    Sharpe 5.03    Max DD 28%  Vol 0.23
==> This is more consistent with regard to return, and volatility is improved somewhat, but max DD is still too high.
PF 2016_0814: Effect of proxy choice
Rationale: the strategy considers the top 3000 stocks, so a broader proxy should be used.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days)
Here are results for some broad market candidates:
SPY (SP500)          total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
IWV (Russell 3000)   total return 1320%    Alpha 0.95    Sharpe 4.13    Max DD 30%  Vol 0.24
VTI (all cap)        total return 1312%    Alpha 0.95    Sharpe 4.10    Max DD 30%  Vol 0.24
Unfortunately I can't find an equal-weighted fund that goes back to 2003.
==> As expected, a broad index (IWV or VTI) may be better

PF 2016_0814: Revisiting liquidity
I'm still encountering some liquidity problems. Several times per year a stock will take several days to sell the position.
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days, no filtering for price_vs_max)
Here are results for some pairings of ADDV periods and values
20d/500k   total return 1033%    Alpha 0.73    Sharpe 3.20    Max DD 30%  Vol 0.24
60d/500k   total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%  Vol 0.24
==> use the 60 day test

PF 2016_0814: Effect of limiting performance vs recent maximum
Rationale: Some stocks may experience a very large recent spike in price, then enter a period of decline.
Although in decline, the large price jump keeps the stock in our selection set.
Implement a simple filter
max_N = max close in past N days
price_vs_max = close[0]/max_N
if price_vs_max > threshold then stock is OK to use
Using a middling SPY scenario (weekly with positive FCF and offset = 3 days)
Here are results for some pairings of N and threshold
20d/0.0    total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%  Vol 0.24
20d/0.85   total return 1116%    Alpha 0.80    Sharpe 3.48    Max DD 26%  Vol 0.24
60d/0.7    total return 1182%    Alpha 0.85    Sharpe 3.71    Max DD 28%  Vol 0.24
60d/0.85   total return 1147%    Alpha 0.82    Sharpe 3.59    Max DD 28%  Vol 0.24
==> This seems unlikely to be a beneficial test

PF 2016_0814: Investors often chase the shiny object.
Define augmented momentum to provide a bonus for the best single day in the period
best = np.nanmax(np.diff(close,axis=0),axis=0)
out[:] = (close[-1]/close[0]) + (best/close[0])
Augmented momentum for Factor_1, simple_momentum for Factor_2, 3, 4
offset= 0      total return 1102%    Alpha 0.78    Sharpe 3.30    Max DD 47%  Vol 0.25
offset= 1      total return 1167%    Alpha 0.83    Sharpe 3.55    Max DD 36%  Vol 0.25
offset= 2      total return 2565%    Alpha 1.92    Sharpe 8.42    Max DD 25%  Vol 0.23
offset= 3      total return 1247%    Alpha 0.90    Sharpe 3.87    Max DD 28%  Vol 0.24
offset= 4      total return 1481%    Alpha 1.08    Sharpe 4.65    Max DD 31%  Vol 0.24
Augmented momentum for Factor_1,2,3,4
offset= 0      total return 1392%    Alpha 1.01    Sharpe 4.27    Max DD 44%  Vol 0.25
offset= 2      total return 2756%    Alpha 2.07    Sharpe 9.13    Max DD 29%  Vol 0.23
offset= 3      total return 1702%    Alpha 1.25    Sharpe 5.50    Max DD 28%  Vol 0.24
==> This is interesting, but using it feels like data fitting, so I won't

PF 2016_0814: Can we improve max drawdown by adjusting the stop loss parameter?
Garner's original strategy used a 75% stop loss limit.
This is probably a good value for monthly rebalance, but a tighter limit might make sense for weekly rebalancing.
Check this vs a middling scenario (weekly with positive FCF, offset = 0 days, no filtering for price_vs_max, simple_momentum model, 60 day ADDV > $500k)
Here are results for various stop loss limits
0%   total return 1263%    Alpha 0.91    Sharpe 3.93    Max DD 31%    Vol 0.24
60%   total return 1231%    Alpha 0.88    Sharpe 3.84    Max DD 31%    Vol 0.24
75%   total return 1177%    Alpha 0.84    Sharpe 3.69    Max DD 31%    Vol 0.24
80%   total return 1095%    Alpha 0.78    Sharpe 3.20    Max DD 30%    Vol 0.24
85%   total return 1036%    Alpha 0.73    Sharpe 3.32    Max DD 30%    Vol 0.24
90%   total return  787%    Alpha 0.55    Sharpe 2.56    Max DD 29%    Vol 0.23
==> This result surprises me. It must be that a significant fraction of the stocks that lose 25% during the week later recover some of this loss.
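For clarity, here is a sketch of how such a stop-loss limit is commonly interpreted (consistent with the note in the code that stop_pct = 0.0 disables the stop). The function is illustrative, not the algorithm's actual stop logic:

```python
def hit_stop(entry_price, current_price, stop_pct):
    """Stop-loss test: with stop_pct = 0.75, sell once price falls below
    75% of entry (i.e., a 25% loss). stop_pct = 0.0 disables the stop."""
    return stop_pct > 0.0 and current_price < stop_pct * entry_price

print(hit_stop(100.0, 70.0, 0.75))  # True: a 30% loss breaches a 75% stop
print(hit_stop(100.0, 80.0, 0.75))  # False: a 20% loss does not
print(hit_stop(100.0, 10.0, 0.0))   # False: stop disabled
```

Under this reading, the table says that tightening the stop (raising stop_pct toward 0.90) sells recoverable dips and hurts total return.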

PF 2016_0814: Weekly rebalance baseline
Let's put together some of the apparently better ideas
1. Rebalance weekly vs monthly
2. Decide whether to be in stocks or bonds (safe) based on fast vs slow SMA of VTI (All cap index)
3. Only consider stocks that
a) are in the top 3000 by market cap
b) have net gain over the past 252 days
c) have positive cash flow
d) have an average daily dollar volume of at least $500k over the past 60 days
4. Select the top N of these stocks based on combined ranking over 20, 60, 125, 252 days
5. Set the value N to be portfolio value divided by $30k with a minimum of 10 stocks
6. Define a safe set of bonds to hold when not in stocks
7. Disable Garner's stop loss and profit taking, as these don't benefit a weekly strategy
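The screening and selection steps above could be prototyped outside the pipeline API, e.g., on a pandas DataFrame of per-stock factor values. A hypothetical sketch (the column names and helper are made up for illustration):

```python
import pandas as pd

def weekly_candidates(factors, top_cap=3000, top_n=10):
    """Apply the baseline screen to a DataFrame with one row per stock and
    columns: market_cap, ret_252 (close[-1]/close[0]), fcf_per_share,
    addv_60, rank_score. Returns the top-N survivors by combined rank."""
    f = factors.nlargest(top_cap, "market_cap")  # a) top 3000 by market cap
    f = f[f["ret_252"] > 1.0]                    # b) net gain over 252 days
    f = f[f["fcf_per_share"] > 0.0]              # c) positive free cash flow
    f = f[f["addv_60"] > 500e3]                  # d) ADDV > $500k over 60 days
    return f.nlargest(top_n, "rank_score")       # top N by combined ranking

# Tiny illustration with three stocks; C has a net loss and is dropped
data = pd.DataFrame({
    "market_cap":    [5e9, 2e9, 1e9],
    "ret_252":       [1.4, 1.2, 0.8],
    "fcf_per_share": [2.0, 0.5, 1.0],
    "addv_60":       [2e6, 8e5, 9e5],
    "rank_score":    [3.9, 3.1, 2.5],
}, index=["A", "B", "C"])
print(list(weekly_candidates(data, top_cap=3, top_n=2).index))  # ['A', 'B']
```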

offset= 0      total return 1422%    Alpha 1.03    Sharpe 4.34    Max DD 36%  Vol 0.25
offset= 1      total return 1366%    Alpha 0.98    Sharpe 4.17    Max DD 36%  Vol 0.25
offset= 2      total return 2193%    Alpha 1.63    Sharpe 6.96    Max DD 32%  Vol 0.24
offset= 3      total return 1543%    Alpha 1.13    Sharpe 4.81    Max DD 32%  Vol 0.24
offset= 4      total return 1794%    Alpha 1.32    Sharpe 5.65    Max DD 26%  Vol 0.24

PF 2016_0814: Still have liquidity problems, especially with low share price stocks.
Try filtering those

Using the offset = 0d case above
Price > $0     total return 1422%    Alpha 1.03    Sharpe 4.34    Max DD 36%  Vol 0.25  ???
Price > $3     total return 1343%    Alpha 0.97    Sharpe 4.10    Max DD 36%  Vol 0.25
Price > $5     total return 1069%    Alpha 0.75    Sharpe 3.22    Max DD 33%  Vol 0.25
The progression from $0 to $3 to $5 did reduce the number of partial order messages, but also degraded returns for the case of days_offset=0.

Checking the result for all five day_offset cases and Price > $3:
offset= 0      total return 1343%    Alpha 0.97    Sharpe 4.10    Max DD 36%  Vol 0.25
offset= 1      total return 1368%    Alpha 0.99    Sharpe 4.17    Max DD 36%  Vol 0.25
offset= 2      total return 2292%    Alpha 1.71    Sharpe 7.36    Max DD 32%  Vol 0.24
offset= 3      total return 1514%    Alpha 1.10    Sharpe 4.72    Max DD 32%  Vol 0.24
offset= 4      total return 1609%    Alpha 1.17    Sharpe 5.03    Max DD 28%  Vol 0.24
==> This slight overall reduction is OK, but I'll continue to investigate liquidity fixes.

**PF: End of comments relative to the weekly version of the strategy

**PF: Parking lot of things to check later. List in no particular order.
- how to reduce drawdown spans (strategy can result in ~3y periods with no net gain)
- how to avoid occasional liquidity (partial order) problems
- evaluating possibility of nonuniform weighting
- implementing overlapping holding periods (maybe order every 2 days and hold for 10)
- eliminating use of the built-in market_cap() method that is not supported in live trading
- evaluating results in a tear sheet
- evaluating results with the AlphaLens tool
- how to safely use leverage > 1.0 (see Guy Fleury posts)
- picking a better safe set (little thought went into this one)
- checking momentum of each safe asset before purchase (... in or cash for each)
- exploring alternative entry/exit logic (vs the simple fast/slow SMA)
**PF: that is the parking lot for now
"""

#
# import methods and data
#
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import AverageDollarVolume
import numpy as np
from collections import defaultdict

#
# define custom classes
#
class simple_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1
    def compute(self, today, assets, out, close):
        out[:] = close[-1]/close[0]

class augmented_momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1
    def compute(self, today, assets, out, close):
        best = np.nanmax(np.diff(close, axis=0), axis=0)
        out[:] = (close[-1]/close[0]) + (best/close[0])

class price_vs_max(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 252
    def compute(self, today, assets, out, close):
        out[:] = close[-1]/np.nanmax(close, axis=0)

class market_cap(CustomFactor):
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding]  # not allowed in live trading
    window_length = 1
    def compute(self, today, assets, out, close, shares):
        out[:] = close[-1] * shares[-1]

class get_fcf_per_share(CustomFactor):
    inputs = [morningstar.valuation_ratios.fcf_per_share]  # not allowed in live trading
    window_length = 1
    def compute(self, today, assets, out, fcf_per_share):
        out[:] = fcf_per_share

class get_last_close(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 1
    def compute(self, today, assets, out, close):
        out[:] = close[-1]

def initialize(context):
    #
    # schedule methods
    #
    schedule_function(func=periodic_rebalance,
                      date_rule=date_rules.week_start(days_offset=1),
                      time_rule=time_rules.market_open(),
                      half_days=True)
    schedule_function(func=daily_rebalance,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(hours=1))
    #
    # set portfolio parameters
    #
    set_do_not_order_list(security_lists.leveraged_etf_list)
    context.acc_leverage = 1.00
    context.min_holdings = 10
    #
    # set profit taking and stop loss parameters
    #
    context.profit_taking_factor = 0.01
    context.profit_taking_target = 10.0  # set much larger than 1.0 to disable
    context.profit_target = {}
    context.profit_taken = {}
    context.stop_pct = 0.0               # set to 0.0 to disable
    context.stop_price = defaultdict(lambda: 0)
    #
    # Set commission model to be used
    #
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1.00))
    #
    # Define safe set (of bonds)
    #
    context.safe = [
        sid(23870),  # IEF
        sid(23921),  # TLT
        sid(25485),  # AGG
    ]
    #
    # Define proxy to be used as proxy for overall stock behavior
    # set default position to be in safe set (context.buy_stocks = False)
    #
    context.canary = sid(22739)
    context.buy_stocks = False
    #
    # Establish pipeline
    #
    pipe = Pipeline()
    attach_pipeline(pipe, 'ranked_stocks')
    #
    # Define the four momentum factors used in ranking stocks
    #
    factor1 = simple_momentum(window_length=20)
    pipe.add(factor1, 'factor_1')
    factor2 = simple_momentum(window_length=60)
    pipe.add(factor2, 'factor_2')
    factor3 = simple_momentum(window_length=125)
    pipe.add(factor3, 'factor_3')
    factor4 = simple_momentum(window_length=252)
    pipe.add(factor4, 'factor_4')
    #
    # Define other factors that may be used in stock screening
    #
    factor5 = get_fcf_per_share()
    pipe.add(factor5, 'factor_5')
    factor6 = AverageDollarVolume(window_length=60)
    pipe.add(factor6, 'factor_6')
    factor7 = get_last_close()
    pipe.add(factor7, 'factor_7')

    factor_4_filter = factor4 > 1.0    # only consider stocks with positive 1y growth
    factor_5_filter = factor5 > 0.0    # only consider stocks with positive FCF
    factor_6_filter = factor6 > 0.5e6  # only consider stocks trading > $500k per day
#   factor_7_filter = factor7 > 3.00  # only consider stocks that close above this value
    #
    # Establish screen used to establish candidate stock list
    #
    mkt_screen = market_cap()
    stocks = mkt_screen.top(3000)
    total_filter = (stocks
                    & factor_4_filter
                    & factor_5_filter
                    & factor_6_filter)

    pipe.set_screen(total_filter)
    #
    # Establish ranked stock list
    # (rank each momentum factor so rank 1 = strongest, then average the ranks)
    #
    factor1_rank = factor1.rank(mask=total_filter, ascending=False)
    factor2_rank = factor2.rank(mask=total_filter, ascending=False)
    factor3_rank = factor3.rank(mask=total_filter, ascending=False)
    factor4_rank = factor4.rank(mask=total_filter, ascending=False)
    combo_raw = (factor1_rank+factor2_rank+factor3_rank+factor4_rank)/4
    pipe.add(combo_raw.rank(mask=total_filter), 'combo_rank')

def before_trading_start(context, data):
    #
    # Calculate maximum number of stocks to buy
    #
    n_30 = int(context.portfolio.portfolio_value/30e3)
    context.holdings = max(context.min_holdings, n_30)
    #
    # Screen to find the current top stocks
    #
    context.output = pipeline_output('ranked_stocks')
    ranked_stocks = context.output
    context.stock_factors = ranked_stocks.sort(['combo_rank'], ascending=True).iloc[:context.holdings]
    context.stock_list = context.stock_factors.index
    #
    # Use fast/slow SMA test of proxy to determine whether to be in stocks vs safe
    #
    Canary = data.history(context.canary, 'price', 80, '1d')
    Canary_fast = Canary[-15:].mean()
    Canary_slow = Canary.mean()
    if Canary_fast > Canary_slow: context.buy_stocks = True

def daily_rebalance(context, data):
    #
    # Do daily maintenance
    #    a) sell obsolete positions
    #    b) implement stop loss
    #    c) implement profit taking
    #    d) record values for backtest display
    #
    #
    # Sell any holdings that are not in context.this_periods_list
    #
    for stock in context.portfolio.positions:
        if stock not in context.this_periods_list:
            order_target(stock, 0)
    #
    # update stop loss limits and sell any stocks that are below their limits
    #
    for stock in context.portfolio.positions:
        price = data.current(stock, 'price')
        context.stop_price[stock] = max(context.stop_price[stock],
                                        context.stop_pct * price)
        if price < context.stop_price[stock]:
            order_target(stock, 0)
            context.stop_price[stock] = 0
            log.info("%s stop loss"%stock)
    #
    # Profit take if profit target is met
    # Skip this for safe set assets
    #
    takes = 0
    for stock in context.portfolio.positions:
        if stock not in context.safe:
            if data.can_trade(stock) and data.current(stock, 'close') > context.profit_target[stock]:
                context.profit_target[stock] = data.current(stock, 'close')*1.25
                profit_taking_amount = context.portfolio.positions[stock].amount * context.profit_taking_factor
                takes += 1
                log.info(profit_taking_amount)
                order_target(stock, profit_taking_amount)
    #
    # Record parameters
    #
    n100 = len(context.output)/100
    record(leverage=context.account.leverage,
           positions=len(context.portfolio.positions),
           t=takes,
           candidates=n100)

def periodic_rebalance(context,data):
    #
    # rebalance portfolio based on most recent context.buy_stocks signal
    #
    if context.buy_stocks:
        #
        # rebalance portfolio in stocks
        #
        context.this_periods_list = context.stock_list
        #
        # sell any holdings not in this period's stock list
        #
        for stock in context.portfolio.positions:
            if stock not in context.this_periods_list:
                order_target(stock, 0)
        #
        # equally weight portfolio over assets that can trade
        # don't buy a stock unless its 20d momentum (factor_1 ratio) exceeds 1
        # set profit_target threshold based on recent close
        #
        weight = context.acc_leverage / len(context.stock_list)
        p_tgt = context.profit_taking_target
        for stock in context.stock_list:
            if stock in security_lists.leveraged_etf_list:
                continue
            if data.can_trade(stock) and context.stock_factors.factor_1[stock] > 1:
                order_target_percent(stock, weight)
                context.profit_target[stock] = data.current(stock, 'close')*p_tgt
    #
    # otherwise put portfolio into safe set
    #
    else:
        context.this_periods_list = context.safe
        #
        # sell any holdings not in safe set
        #
        for stock in context.portfolio.positions:
            if stock not in context.safe:
                order_target(stock, 0)
        #
        # equally weight portfolio over safe assets that can trade
        #
        n = 0
        for stock in context.safe:
            if data.can_trade(stock):
                n += 1
        if n > 0:
            weight = 1.0/n
            for stock in context.safe:
                if data.can_trade(stock):
                    order_target_percent(stock, weight)
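For readers who want to experiment off-platform, the heart of the weekly selection step (average the momentum ranks over several lookbacks, then take the top N) can be sketched in plain pandas. This is a stand-in for the pipeline factors above, not the Quantopian API; the function name and toy data are illustrative.

```python
import numpy as np
import pandas as pd

def momentum_rank_top_n(prices, lookbacks=(20, 60, 125, 252), n=10):
    """Rank each stock by its average momentum rank across lookbacks; return top n.

    prices: DataFrame, rows = trading days, columns = tickers.
    """
    ranks = []
    for lb in lookbacks:
        mom = prices.iloc[-1] / prices.iloc[-lb]   # simple momentum, like close[-1]/close[0]
        ranks.append(mom.rank(ascending=False))    # rank 1 = strongest momentum
    combo = sum(ranks) / len(lookbacks)            # analogue of combo_raw above
    return combo.sort_values().index[:n].tolist()  # lowest average rank = best

# toy data: one uptrending, one flat, one downtrending stock over 300 days
days = 300
base = pd.DataFrame({
    "UP":   np.linspace(10, 30, days),
    "FLAT": np.full(days, 20.0),
    "DOWN": np.linspace(30, 10, days),
})
print(momentum_rank_top_n(base, n=1))  # → ['UP']
```

The same ordering falls out of any strictly monotone price paths, which makes this a convenient sanity check before pointing the logic at real data.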

Maxim,

Below is what I get when I run your algorithm. This is consistent with everything I've seen using variations of this algorithm over the past few months. I gather that Quantopian is still figuring out how to handle morningstar data in general (beyond just market cap) and that we'll have to just sit tight until everyone feels warm and fuzzy. (This may have been addressed elsewhere.)

I hit the same set of roadblocks when I went to run my own version of Peter's algorithm on Robinhood. It definitely was a bucket of cold water at the time.
For those of us engaging in live trading, it's a shame to have to take an awesome algorithm like Peter's back to the drawing board. (I've tried different workarounds, with varying levels of success, but they are all hideous.) It "should" be easy to make use of basic facts like, say, the value of a company...

(Then again, most sentences that begin with "It should be easy" ought to be banned from the English language.)

286 Warning NotAllowedInLiveWarning: The fundamentals attribute valuation.shares_outstanding is not yet allowed in broker-backed live trading
293 Warning NotAllowedInLiveWarning: The fundamentals attribute valuation_ratios.fcf_per_share is not yet allowed in broker-backed live trading

Disclaimer: My comments are only in reference to the latest post and the original post, in case I gave the impression that I am weighing in on the, er, exchange going on in between...

I do enjoy a lively discussion, however.

Stock Trading: You Think, and The Machine Works

Changing an automated stock trading strategy usually means changing its trade-selection process and trading rules, which in turn changes the portfolio's trading history. But each such change alters the trading procedures, and successive changes tend more and more toward over-fitting the data.

The very process intended to improve a trading strategy might be moving it further and further away from reality, often making it less valuable. Some changes go as far as destroying any chance a strategy might have had of ending with a profit.

Any stock trading system can be expressed as: A(t) = A(0) + Σ(H.*ΔP), or A(t) = A(0) + n*u*PT. This implies that the payoff matrix satisfies: Σ(H.*ΔP) = n*u*PT. These are 3 very common portfolio metrics from which other metrics can be derived. H is the inventory holding matrix keeping the complete historical record of all the trading activity. For a description of n, u, PT, and SMRS, see a previous post.
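The identity Σ(H.*ΔP) = n*u*PT is just bookkeeping: total profit equals the number of trades times the (uniform) trade unit times the average per-share profit. A quick check with made-up trades, not numbers from any strategy discussed here:

```python
# Toy check of A(t) = A(0) + n*u*PT, where n = number of trades,
# u = trade unit (shares per trade), PT = average profit per share traded.
trades = [(100, 2.0), (100, -0.5), (100, 3.0), (100, 1.5)]  # (shares, per-share profit)
total = sum(s * p for s, p in trades)   # Σ(H.*ΔP) for this toy history

n = len(trades)
u = 100                               # uniform trade unit
PT = sum(p for _, p in trades) / n    # average per-share profit
assert abs(total - n * u * PT) < 1e-9 # the two bookkeeping views agree

A0 = 1_000_000
print(A0 + total)  # → 1000600.0
```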

The original SMRS strategy produced: A(t) = $1M + $6.9M = $7.9M. That is $6.9M in profits over its 11.9 years of trading, a 17.45% CAGR. This is a lot better than having put one's fund in SPY, which would have produced: A(t) = $1M + $1.9M = $2.9M, a 5.57% CAGR. It was therefore preferable to implement the former rather than the latter strategy. There was alpha built in, exceeding the average portfolio manager's performance over the same trading interval. Peter's modifications to the SMRS program raised the bar to: A(t) = $1M + $8.48M = $9.48M, a 19.6% CAGR. It does not seem like a big deal, but it is. Few portfolio managers exceed the 20% CAGR mark over the long term.

I took the same trading strategy as Peter. But instead of modifying the trading rules, I aimed my interventions solely at the 3 portfolio metrics (n, u, and PT), as if controlling from the outside what I wanted the trading strategy to do. For instance, I raised the trading unit (u). This increased the bet size, and therefore scaled up the output: (1+a%)*u*n*PT.

You are the one fixing the size of the trade unit u, not the trading strategy. You can make trades of size (1+a%)*u as long as there is cash in the account. Evidently, if a% > 0, you have scaled up overall profits generated by that strategy by the same factor without changing a single line of code, or any of the trading procedures.

So, the strategy stayed the same, and you still managed to extract more profits. You could do even more by concentrating on procedures, pivotal decisions points that can have an impact on the outcome, namely: n, u, and PT.
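The claim that scaling the trade unit scales profits proportionally (cash permitting) is easy to verify: with the same trades and the same per-share profits, multiplying u by (1+a) multiplies the payoff by exactly that factor. The numbers below are illustrative:

```python
# Scaling the trade unit u by (1+a) scales the payoff n*u*PT by the same factor,
# since n and PT are unchanged by the intervention.
n, u, PT = 500, 100, 1.5        # trades, shares per trade, avg profit per share
a = 0.66                        # a 66% larger trade unit, as in the post
base   = n * u * PT
scaled = n * (1 + a) * u * PT
print(round(scaled / base, 2))  # → 1.66
```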

The operations I did to improve on Peter's design were minimal. I increased the trading unit by 66% which produced: n*1.66*u*PT, compared to the base design, thereby increasing profits by 66%. There was no change in the trading procedures to do this. It was the same strategy except for that one number.

This also said: anybody could take their own trading strategy, increase their respective trade units and generate more profits. This has no bearing on how the market behaved, it is only a directive giving the trading strategy: make future trades using 1.66 times your previously fixed trade unit.

Other moves that were made to increase performance were: increasing the profit margin PT, and increasing n, the number of trades. It resulted in: 4.3*n*1.66*u*1.25*PT. Again these measures did not change the trading strategy, only how it behaved. If your trading strategy has a positive average profit per trade (PT>0), increasing the number of trades sounds like a reasonable thing to do.

Since this strategy, by design, could easily support leverage, it was set at 1.85, giving some leeway in case, at times, it might exceed it.

This resulted in the payoff: 4.3*n*1.66*u*1.25*PT*1.85 ≈ 16.5*n*u*PT. An output 16.5 times larger than the $6,972,000 returned by the original strategy. Or, viewed from the initial capital's perspective, it is $115,480,000, some 115.48 times the original $1M stake. In numbers, $1M invested in SPY would have generated $1,918,000, a CAGR of 5.57%. Investing the same $1M in the original SMRS strategy would have ended with $6,972,000 in profits, a 17.45% CAGR. While, using the same initial $1M stake and making minor program modifications, I pushed Peter's trading strategy to: A(T) = A(0) + 16.5*n*u*PT = $116,480,000. A CAGR of 41.8% over its 13.6 years of trading.
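The 16.5 multiplier is just the product of the individual scalings, which compound multiplicatively:

```python
# Reproducing the scaling arithmetic from the post: each intervention multiplies
# the payoff n*u*PT by a constant, and the constants compound.
n_scale  = 4.30   # more trades
u_scale  = 1.66   # larger trade unit
pt_scale = 1.25   # larger profit margin
lev      = 1.85   # leverage
combined = n_scale * u_scale * pt_scale * lev
print(round(combined, 1))  # → 16.5

base_profit = 6_972_000                # the original strategy's stated profit
print(round(combined * base_profit))   # scaled profit, roughly $115M
```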

It is not all that can be done since I did not touch the strategy's structural deficiencies, or shown some of the other improvements that can be made.

However, the point I would like to make is this: how easy it was to improve a trading strategy when the effort was concentrated on the only portfolio metrics that mattered: n*u*PT. In what I presented, the strategy itself was not changed, only some of the constants, and it was sufficient to raise the original portfolio's profits by a factor of 16.5 times. Compared to SPY, this is a factor of 60.2 times the index surrogate. The original program would have given a factor of 3.6 times the SPY.

This is a wide range of outcomes for what has been shown to be small changes in the constants governing these 3 portfolio metrics.

Underneath it all, you will find the same trading strategy, the same trading logic, the same trading rules at play. Consequently, the increase in performance can only be attributed to the changes brought to those constants having a direct impact on: n*u*PT.

Furthermore, the expression A(t) = A(0) + (1 + g(t))^t * Σ(H.*ΔP) says that one can control a trading strategy from the outside and have a major impact on its total performance, raising the bar to a much higher level with very little effort. Increasing it by a factor of 10 or more.

Having used leverage, there is a price to pay. It can easily be estimated. I won't bother with commissions and slippage since they were already included. Say the interest charged is 10%, then: A(t) = A(0)*(1+ 0.418 – 0.85*(½)*0.10)^t = A(0)*(1+ 0.418 – 0.0425)^t. In fact, whenever the added performance exceeds the leveraging costs, it can become a worthwhile proposition, as just shown.
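The leverage-cost adjustment above is back-of-envelope arithmetic: borrowing 0.85 of equity at an assumed 10% rate, drawn about half the time on average, trims the stated CAGR as follows (the ½ utilization factor is the post's assumption, not a measured figure):

```python
# Estimated drag from leverage: borrowed fraction * average utilization * rate.
cagr     = 0.418
borrowed = 0.85      # leverage 1.85 means 0.85 of equity is borrowed
rate     = 0.10      # assumed annual interest on the borrowed portion
drag     = borrowed * 0.5 * rate
print(round(drag, 4))          # → 0.0425
print(round(cagr - drag, 4))   # → 0.3755, the net CAGR after financing costs
```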

You are left with only one question: How can I raise n*u*PT? The answer is: any method you want that raises either or all 3 of these portfolio metrics will do. It will improve your portfolio's performance.

One should consider that all that was requested to achieve the above results was a slight increase in processing time on a machine.

Not that this is a difficult fix, but I just ran your back test and got one of those lovely "this API is old" messages.
Line 320: set_do_not_order_list(security_lists.leveraged_etf_list) is deprecated. Use set_asset_restrictions(security_lists.restrict_leveraged_etfs) instead.
Line 501: Evaluating inclusion in security_lists is deprecated. Use sid in <security_list>.current_securities(dt) instead.