A Simple Momentum Rotation System for Stocks

I really need to add a provision whereby only "winners" are rebalanced, not losers. But there you go. Many people here can't seem to get a handle on simple trend-following systems, so I thought this might help!

[Backtest metrics widget omitted. Backtest ID: 566346ef362cbe11635ce112]

We have migrated this algorithm to work with a new version of the Quantopian API. The code is different than the original version, but the investment rationale of the algorithm has not changed.

Many thanks, Anthony, for sharing your insight and congratulations on an impressive performance of your algorithm!

Could you please explain what the algorithm does and how it ranks the stocks?

Regards,

Tim

In real money trading this would hit a high of over $9 of profit per dollar put to work, so this code is promising. A margin account would be needed: max leverage is 1.29, although the amount risked is higher than that, at $185K, due to shorting spikes around 11/2010 and 8/2011.

Drawdown and beta are areas to work on. I've found that one way I can sort of address those two is by storing SPY prices in a list and making decisions based on the slopes over a certain lookback window or two:

import statsmodels.api as sm

def slope_calc(in_list):
    time = sm.add_constant(range(-len(in_list) + 1, 1))
    return sm.OLS(in_list, time).fit().params[-1]  # slope
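As a cross-check, the same slope can be computed without statsmodels via numpy.polyfit (a sketch; assumes in_list is a plain sequence of prices, with the function name being mine):

```python
import numpy as np

def slope_calc_np(in_list):
    """Least-squares slope over the lookback window, matching the
    statsmodels version: x runs from -(n-1) to 0, so the slope is
    expressed per bar with the most recent bar at x = 0."""
    x = np.arange(-len(in_list) + 1, 1)
    slope, _intercept = np.polyfit(x, in_list, 1)
    return slope

# A perfectly linear series returns its per-bar increment:
# slope_calc_np([1.0, 2.0, 3.0, 4.0]) -> 1.0
```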


... though I haven't quite nailed down a key to good reliability on that yet. Assuming the slope_calc function itself is perfectly reliable, I think my use of those values may need another ingredient added: recent volatility, the amount of time since the latest high/low, or something.

One thing in tandem with that I've found helpful is a little routine to log the values (and/or use record()):

def track_val(c, var):    # Diagnostics
    if 'val' not in c:
        c.val = {
            'hi' : var,
            'lo' : var,
            'avg': [0, 0],
        }
    avg, num     = c.val['avg']
    avg_new      = ((avg * num) + var) / (num + 1)
    c.val['avg'] = [avg_new, num + 1]
    if var > c.val['hi']: c.val['hi'] = var
    if var < c.val['lo']: c.val['lo'] = var
    log.info('now {}  avg {}  lo {}  hi {} '.format(
        '%.2f' % var, '%.2f' % avg_new, '%.2f' % c.val['lo'], '%.2f' % c.val['hi']))


Together could look something like:

slope1 = slope_calc(spy_prices_list[-8:])
track_val(context, slope1)    # now 0.14  avg -0.13  lo -3.09  hi 1.01


Actually, this code is far from optimal from my point of view and is only a poor reflection of what I use in trading ETFs. The crudeness of this system as coded here is largely a result of my being very new to Python, and also of my finding the Q IDE less than an ideal workplace.

In coding elsewhere (and in real trading), I prohibit leverage. Haven't quite worked out how to do that successfully here. I don't take any short positions - and I don't think I have been shorting spikes in this code - or at least I hope not!

I have enormous difficulty relying on signals based on a single factor (such as the slope of, or an MA crossover on the downside of, SPY / the S&P 500 / whatever). It's a nice idea but the trade sample size is far too small (i.e. the number of times the signal tells you to stop trading). I have also done tests on trading the equity curve of the system itself, which is similarly flawed. Rather than ramble on at length, I explain my misgivings in greater detail here:

Post 1

Post 2

Sometimes trading the equity curve / using the S&P etc will work for you, sometimes it won't.

The problem with a system such as this applied to a large universe of stocks is that there will ALWAYS be a bunch of stocks showing positive momentum. Therefore the "don't trade if momentum is negative" provision is useless. As I think you will see in my research on ETF systems using similar principles, because you are there trading indices, and not too many of them, at times of market crisis the system refuses to take negative-momentum positions and goes to cash as more and more indices show negative returns at re-allocation dates. This is what supplies a "hedge", and the trade sample size is much larger because you have 60 indices giving you a signal to go to cash, not a single one.

In short, if you apply an on/off switch to THIS system trading THIS portfolio, I believe you will be disappointed by the results in real life.

Hope I'm making sense and not garbling!

Anthony, thank you very much for sharing this with the community!

I made some slight modifications to your algo to remove lookahead bias on delisted securities (more feasible for live trading). As expected, the number of open positions now creeps up as positions in these securities remain open. I also added a few comments and removed some lines of code that weren't being run. In your version, lines 109-113 didn't seem to be doing anything - I'm not sure whether this was intended or not, but I removed them for clarity.

I'm also wondering if you have run this algo in minute mode. The time rules for the schedule_function() method actually only work in minute mode so the rebalance() method won't be run at market open like you might expect.

Thanks again for sharing this! I'm curious to hear your thoughts on my comments.

[Backtest metrics widget omitted. Backtest ID: 5672f542366a24117050bed8]

Here's a version that tracks delisted securities and ignores them in the trading logic. Note how the positions count doesn't creep up like in my previous version.

[Backtest metrics widget omitted. Backtest ID: 567311457f05c0116db9b421]

@James

How would you take something like this, which backtests and runs pipelines in daily mode, and convert it to run live on IB? It seems like the pipeline would be running every bar/minute? Is this the case, or am I missing something here?

Anthony, let me know if you'd like code to make the shorting spikes easily visible.

MB, just don't try the latest with real money on IB: it starts with $1M and goes $10.5M into the hole on margin. Those are dollars put into play to achieve that result; the highest profit per dollar risked is only around $1.3 vs Anthony's $9.

I'm new to Quantopian and have a lot to learn, apparently; why is it that this strategy's leverage creeps upward over time? This is something I've noticed in a few other algorithms I've looked at. In general, how does one go about prohibiting taking on leverage?

Garyha
Thank you, that code would be gratefully accepted. My code is supposed to take profits but not short. If it is shorting I need to investigate further.

Jeremy
There are a number of factors which can cause this (leverage) and a number of ways to alleviate it. The first major reason here is that de-listed stocks are getting stuck in the portfolio and appear as positions whereas in reality they are cash. Or at least they are cash if in reality they could have been sold at the last traded price!

Jamie
No, I have not yet run this in minute mode and am not sure how to do that. I am a mere beginner in this environment I fear! I will have a good look through your comments and alterations and come back on them. I have only really just begun on this algo here in Q and there is a whole pile of stuff I know I need to attend to.

There is also a whole bunch of stuff I have not begun to explain here. For instance it is essential to experiment with different re-allocation dates. And from there an investor is likely to conclude that he needs to split his capital into 4 or so and trade 4 subsystems each starting on an equally spaced weekly re-allocation date.

Liquidity is essential to consider. Note I have chosen a Russell 3000 proxy - fine on very small capital, hopeless for institutional size on a 10 stock rotation scheme. Institutions would need to rotate into 100 or 300 stocks monthly to find enough liquidity.

On de-listed stocks, yes and no. Since this is purely mechanical, it is perfectly fair to assume you would have sold on the de-listing date. In practice that works in real trading also. Let us put it this way: on a takeover, you would get paid out the agreed price. On bankruptcy ...er... that might be a different matter: the stock might be suspended and then crash the next day, having been de-listed and re-quoted OTC. But then keeping the stock in the portfolio does not make any difference in the latter case: it is still unrealistic, since you would not get the last traded price anyway - you would get the (unrecorded) price on the pink sheets or whatever.

None of this really matters in a sense but of course it screws the leverage estimate to leave the stock in the portfolio. I'll have a word or two to say on your chosen efficiency rating as well, but more of that anon. Briefly, you need to set the Efficiency limit at 0 < Efficiency Limit < 1 while being careful not to curve fit it to a portfolio. An ER of zero or 1 does not exist in practice other than 1 for time deposits (assuming the bank remains solvent!).

Anyway, you guys have an excellent forum here and a wonderful product which can only get better.

Mouse over the custom chart to see the shorting; I also added track_orders(). I am curious what effect the removal of those shorts will have once you or someone finds the best way to do it: it could mean higher returns, just watch out for negative cash unless it is intentional.
I wonder if it is feasible for someone to write a function to run at the end of handle_data() that would take a look at open orders and adjust them targeting no more than a particular maximum leverage if necessary.
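A sketch of that idea in plain Python (the function name and interface are hypothetical; in a Quantopian algo the inputs would come from get_open_orders(), context.account, and context.portfolio):

```python
def scale_orders_to_leverage(order_values, portfolio_value, current_gross,
                             max_leverage=1.0):
    """Scale a batch of pending order values so projected gross exposure
    stays at or below max_leverage * portfolio_value.

    order_values : signed dollar values of pending orders
    current_gross: current gross exposure in dollars
    Returns the (possibly scaled-down) order values."""
    headroom = max_leverage * portfolio_value - current_gross
    requested = sum(abs(v) for v in order_values)
    if requested <= headroom or requested == 0:
        return list(order_values)           # nothing to adjust
    scale = max(headroom, 0.0) / requested  # shrink all orders pro rata
    return [v * scale for v in order_values]

# With $100k equity, $90k gross already deployed and $20k of new buys,
# a 1.0x cap leaves $10k headroom, so each order is halved:
# scale_orders_to_leverage([10_000, 10_000], 100_000, 90_000) -> [5000.0, 5000.0]
```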

1970-01-01 initialize:114 INFO 2003-01-04 to 2015-11-30  100000  daily
2003-01-10 pvr:277 INFO 0 MaxLv 1.0 QRet -0.1 PvR -0.1   CshLw 696 Shrt 0  RskHi 99303
2003-01-28 track_orders:323 INFO    Sell -1158 SINA at 7.44
2003-01-29 track_orders:298 INFO       Sold -1158 SINA at 7.935
2003-01-29 track_orders:323 INFO    Sell -1521 CEDC at 8.296
2003-01-30 track_orders:298 INFO       Sold -1521 CEDC at 8.196
...

[Backtest metrics widget omitted. Backtest ID: 5673bf74f3b4e3117a08fda1]

garyha
Many thanks for the most useful code. I will track down what is happening. Most helpful!

Michael:
Pipeline gets run once per day, regardless of which mode you are in. It gets run before the first call to handle_data() on daily data. While it's not yet available for live trading with IB, it's something we are actively working on.

Anthony:
I'll have to admit the efficiency rating is new to me. If I chose something different from what you did, that wasn't my intention! Can you point me to the difference that you're noticing?

For the bit of code on profit taking, did you intentionally increase your position in stocks that were doing well, or did you mean to sell off part of the position?

Regarding delisted stocks, I agree that it's fair to assume you may have sold the stocks on their delisted date, but I noticed that you seemed to be looking ahead several days and selling the stock before it was delisted. There is one point where stocks already in the portfolio are being closed out if their end_date is today or tomorrow and there's another guard against ordering a security with an end_date less than 35 days away, which is definitely lookahead bias! Either way, the algorithm is still impressive after having removed the lookahead and of course you have the freedom to make whatever guards you'd like. The main reason I drew attention to this was because I'm sure there are plenty of members looking at what you posted and considering it for live trading, so I wanted to make it clear which parts of it won't translate to live trading.

Anthony:
One more note. I was experimenting with converting the algo to minute mode and the period from 2003-2015 took too long to run, but I got about 5 years into it, and the returns were nothing like they were in daily mode. This suggests to me that the intraday tinkering in handle_data was not productive. I didn't look closely enough to determine whether the cause was commission fees or just bad betting, but I figured I'd share my partial results in case you try a similar experiment. I might play around with the algo a bit more to make a "minute mode equivalent" that has the same functionality as this daily mode version, but is compatible with live trading.

Garyha
Wow! Thanks so much for the logging code - it's a work of art and makes my task so much easier. It's the sort of thing I have done for years successfully in my current signal-generating engine but had not yet learnt to do in Zipline/Q.

Jamie

> There is one point where stocks already in the portfolio are being closed out if their end_date is today or tomorrow and there's another guard against ordering a security with an end_date less than 35 days away, which is definitely lookahead bias!

Agreed.....it was merely a desperate attempt to stop de-listed stock getting stuck in the portfolio! Which for some reason the first trading guard alone did not seem to cover. What we really need in Zipline is a dictionary of the de-listing dates, a note of whether the stock is de-listed or not and if so a failsafe provision to exit on the last day of trading. To be honest I had forgotten the second trading guard!

Efficiency Ratio
This crept in at line 11:

> # Only consider stocks with a positive efficiency rating
> ranked_stocks = context.output[context.output.factor_5 > 0]

Which I think probably overrides this from line 89:

> factor_5_filter = factor5 > 0.031
> total_filter = (stocks & factor_5_filter)
> pipe.set_screen(total_filter)

A filter of somewhere around 0.031 is very suitable for a wide range of futures and stocks. Is it curve fitting? Who knows?
I wrote this about the futures markets a few years back (unfortunately I can not post the chart):

> Kaufman's Efficiency Ratio has values ranging from 0 to a theoretical +1 when markets are perfectly directional.

> For a given day, Kaufman's ratio is calculated as: the absolute value of the net price change over time (e.g. 120 days) / the sum of the absolute values of all day-to-day price changes over the same time period. You can readily see that if a price goes smoothly upwards from day to day with no retracement (ever!) the index for any given day will equate to 1.
>
> I based my code on true range rather than close-to-close price change (arguably not what Kaufman intended). I ran the code over an entire portfolio of over 100 [FUTURES] instruments from 1970 to date. Each day I added up each individual instrument's efficiency ratio for the whole portfolio and divided by the number of instruments for which I had a price on that day.
>
> See the chart set out below for the combined Efficiency Ratio at portfolio level on a daily basis.
>
> It would seem to indicate that indeed [FUTURES] markets as a whole have become less trend friendly / more noisy over time and that the last two years look particularly "inefficient" and trendless on the 120-day calculation period I used. I could of course (and probably shall) calculate the aggregate trendiness indicator for different periods: 1, 3, 12 months for instance. But this is an interesting start.
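The calculation described above can be sketched in plain numpy (a hedged sketch of the close-to-close variant, not the true-range variant mentioned; the function name is mine):

```python
import numpy as np

def efficiency_ratio(closes, period=120):
    """Kaufman's Efficiency Ratio over the last `period` bars:
    |net change| / sum of |bar-to-bar changes|. Ranges from 0
    (pure noise) to 1 (perfectly directional)."""
    c = np.asarray(closes, dtype=float)[-(period + 1):]
    net = abs(c[-1] - c[0])
    path = np.abs(np.diff(c)).sum()
    return net / path if path else 0.0

# A smooth uptrend with no retracement scores 1.0:
# efficiency_ratio([1, 2, 3, 4, 5], period=4) -> 1.0
```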

I simply have not looked at this. I only ever trade end of day on EOD prices and long term. I promised Michael Bennett I would have a look at converting the algo but got waylaid by Markowitz!
NOBODY should rush into trading this algo. At a rotation of 10 stocks there may be instability. Personally I would be trading 20+ for greater stability. Stability disappears entirely if you reduce to 5 stocks. Do NOT rely on just trading at month end because it looks best in back testing. There may be a fundamental reason for the success of month-end re-allocation, there may not. But I would hedge my bets on rolling allocation days or splitting the portfolio into 4 as suggested above.

Shorts
The algo seems to be taking short trades by reversing winning positions from long to short when taking profits. It's the order logic, and I will take a look at correcting it. In other words there is an error in the profit-taking logic... In the vast majority of cases it is working as expected and no short arises.....and then.......

The line:

ranked_stocks = context.output[context.output.factor_5 > 0]


is actually in your code as well (line 114 in the version you posted)!

The good news is that this actually won't be overwriting the >0.031 bit, because the pipeline has already been screened for securities with an efficiency rating > 0.031. However, this does mean that line 111 in my version is obsolete.

Ah! Thanks....I did warn you I am new to Q and find debugging rather tough here....chortle.

Hi Anthony,

Thanks for sharing! I ran a backtest in minute mode; I modified the rebalance function to trade 30 minutes after the open and close_orders to run 30 minutes before the close.

schedule_function(rebalance, date_rules.month_start(days_offset=5), time_rules.market_open(minutes=30))
# half_days is True by default
schedule_function(close_orders, date_rules.week_end(), time_rules.market_close(minutes=30))

[Backtest metrics widget omitted. Backtest ID: 56743993697d71116732a9d6]

That's brilliant lukas, thanks.
I believe the unintentional "shorts" are a result of double selling from the various exit rules, so I will look at putting in some flags to prevent it. I will also look at why the minute mode results are so horrible......

No worries! It's been my experience on Quantopian that daily mode cannot be trusted except for quick tests.
I usually just use daily mode to check whether my code has syntax/programming errors and so on, but do actual backtesting in minute mode.

Seong Lee had a very good explanation of the difference between minute and daily mode on this thread.
Grant also has a solution for testing in daily mode, but I haven't tried it myself yet.
https://www.quantopian.com/posts/backtest-results-different-in-minute-and-daily-mode

Anthony: Thanks for sharing your algo! I have recently learned enough about pipeline to know that you can accomplish the same functionality by using a single class for the first 4 factors.

This backtest is a copy of Gary's, except I reuse the same class four times.

[Backtest metrics widget omitted. Backtest ID: 56746c38cdcb491185f10cff]

Ah yes, it looks like you are reducing position in stocks that hit the target profit of 25% down to 1% of their initial position. Were you looking to reduce the position by 1% instead of to 1%? I can take a stab at correcting it if you explain the logic!

The difference is SO serious it can't be left at that. I will investigate when I get the chance. It makes no kind of sense at all unless it is some kind of very severe difference in the price of illiquid stocks.

For me, this casts grave doubts over my entire understanding of the Quantopian back testing engine.

It is simply not the sort of puzzle I could leave uninvestigated if I intend to continue to use Q and Zipline!

I'll report back when I get a chance.

Tristan, very nice indeed.
No, I was looking at reducing the position TO 1% although clearly this is a user definable parameter. So my logic is correct although I could have achieved the same thing in a more obvious way:

context.profit_taking_factor = 0.99
profit_taking_amount = context.portfolio.positions[stock].amount * context.profit_taking_factor
order(stock, -profit_taking_amount)
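To make the TO-versus-BY distinction concrete, here is a tiny sketch (the helper name is hypothetical):

```python
def shares_to_sell(position_amount, profit_taking_factor=0.99):
    """Sell profit_taking_factor of the holding, which reduces the
    position TO (1 - profit_taking_factor) of its prior size,
    e.g. to 1% with the default factor."""
    return int(position_amount * profit_taking_factor)

# 1000 shares with factor 0.99: sell 990, keeping 10 shares (1%).
# Reducing BY 1% would instead mean selling only 10 shares.
```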


I believe the unintentional shorting is arising because of multiple exit rules so that you can get a double exit on certain rare days.

The answer is to ascertain which orders are triggered first in the loop and then flag them:

1. Set the "Exit OK flag" to 0.
2. Exit order 1 is triggered.
3. Exit order 1 is processed because the "Exit OK flag" is set to 0.
4. Switch the "Exit OK flag" from 0 (OK to process trade) to 1 (don't process trade).
5. Exit order 2 is triggered on the same day but will not be executed because the "Exit OK flag" now reads 1, i.e. don't process the trade.
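That flagging scheme can be sketched in plain Python (hypothetical names; at most one exit is allowed per stock per day):

```python
def process_exits(exit_candidates, already_exited=None):
    """Run through candidate exit orders, letting at most one exit fire
    per stock: the first rule that triggers wins, and later rules for
    the same stock on the same day are skipped."""
    already_exited = set() if already_exited is None else already_exited
    executed = []
    for stock, rule in exit_candidates:
        if stock in already_exited:
            continue                  # "Exit OK flag" already flipped to 1
        executed.append((stock, rule))
        already_exited.add(stock)     # flip flag: 0 (process) -> 1 (skip)
    return executed

# Two exit rules firing for AAPL on the same day: only the first runs.
# process_exits([("AAPL", "stop"), ("AAPL", "profit"), ("MSFT", "stop")])
# -> [("AAPL", "stop"), ("MSFT", "stop")]
```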

Ah, that makes sense to me, though I would use the get_open_orders() method to see which orders you've made that have yet to be filled. I should have added that in my version of the algo. In general it's good practice to make a check like

open_orders = get_open_orders()
# some more code here
# for loop over candidate stocks
if stock not in open_orders:
    # place the order


before placing any order.

Jamie
That is very helpful, thank you. Yes, a very nice shortcut to what I was trying to achieve.

As I suspected, the question of the vast difference between the daily results and minutely results boils down to one of liquidity. On a daily back test basis Zipline/Q lets you take [up to?] the whole day's volume at a single price.

On a minutely basis you get much more accurate data and Q lets you take [up to?] the full volume at price for the minute in question.
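That fill behaviour can be mimicked in a few lines (a hedged sketch; Quantopian's default volume-share slippage model caps fills at a fraction of each bar's volume, commonly 25%, and the helper name here is mine):

```python
def simulate_fill(order_shares, bar_volume, max_participation=0.25):
    """Cap a fill at a fraction of the bar's traded volume, the way a
    volume-share slippage model does. The unfilled remainder would stay
    behind as an open order for later bars."""
    fill = min(abs(order_shares), int(bar_volume * max_participation))
    return fill if order_shares > 0 else -fill

# A 1000-share buy against a 2000-share minute bar fills only 500 shares,
# so illiquid names can take many bars (or fail) to fill in minute mode:
# simulate_fill(1000, 2000) -> 500
```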

In practice this means that if you trade a tiny amount of capital you may very well be able to profit handsomely on trading the top 10 or 20 out of the Russell 3000, but since there is a whole lot of illiquid junk in there you may be better advised to stick to the top 1,000 by way of market cap even on tiny capital such as $5,000.

This exercise has emphasised a number of factors to me. Above all the enormous benefits of co-operation - so thanks to all those of you who have made such valuable contributions to the code. I am going to work over the next few days on correcting all remaining errors in the code and unifying the system to take account of all the excellent features contributed by others in the various posts above. I am going to provide a switch so that you can see the effects of not re-allocating to losing positions. And I am going to get rid of the unintended shorting. Then I will publish the amalgamated system here together with one or two examples to "prove" that momentum can achieve better returns than a conventional market cap index both on an absolute and risk-adjusted basis. "Prove" is of course a ridiculous word to use in this context.

Excellent work everyone. Jamie, thanks for the info on the pipeline. When I ran it over a longer period in minute mode it ran out of memory a few years in, and I thought it would have been the pipeline, but perhaps there is a memory leak elsewhere in the algo. I think once you all have the algo settled down a bit I will try and run it again! I was away for a day looking at this site and I got 15 emails about this thread!

Thanks again everyone, especially Anthony, for sharing with the community. I use a technique similar to Jamie's to prevent orders from going through multiple times and causing over-leverage or unintentionally causing shorts. But instead of looking for stocks not in open orders, I look for stocks that are in open_orders and then use the continue statement to skip that loop iteration. I just find it cleaner to do it that way rather than nesting the order logic deeper and deeper into if statements. It makes debugging that much easier.

open_orders = get_open_orders()
for stock in [some_iterable]:
    if stock in open_orders:
        continue
    # order management logic here

Once again, thank you to all contributors for their help. I believe I have now incorporated most suggestions and added some notes explaining what most of the parameters are designed to do. The system rotates into the top ten of the 500 biggest market cap stocks each month and sells 99% of the relevant holding each time the stock concerned doubles. The back test is run in daily mode. Try it in minute mode. Some shorting is still occurring but has largely been eliminated. It is hard to believe that there are still liquidity problems for S&P 500 stocks and that volume at price is still causing such problems in minute mode. This needs looking at and considering further. Interesting to note that I have NOT had these problems running systems on highly liquid ETFs in minute mode...so.....come to your own conclusions. Typically an institutional momentum system on stocks will tend to trade the top 300 out of 1,000....see AQR's product. Typically a product like AQR's does not invest equal weightings but uses market cap weightings - and hence avoids liquidity problems.

[Backtest metrics widget omitted. Backtest ID: 567c03a80621ed1184f9fe9c]

I increased the logging and found the problem; it was in the re-balancing function.
The open_orders variable was not reset in between exiting positions and readjusting weights. This allowed for a situation in which orders were sent to close the position and sell part of the position at the same time, leaving behind a negative holding. I fixed it by combining the two checks into an if/elif. This way the algorithm does not even bother adjusting the position size of positions that are being closed. I also added a few other checks. The logging revealed that the strategy was attempting to close and take profit from flat positions. I added logic that prevents that.

[Backtest metrics widget omitted. Backtest ID: 567c6bd512060d117c8ff713]

Thanks all for sharing your knowledge. I have learned a lot from this thread. I will play with this algo and share any improvements I come up with.

May I humbly suggest: NEVER use daily mode. Paper trading and real-world trading only use minute mode, so the results from daily mode are not very useful. This is just the minute version of Shawn's most recent algo.

[Backtest metrics widget omitted. Backtest ID: 567c7f3776067f11840ef86b]
Wonderful, thank you to you both. Very, very helpful. The benefits of co-operation are great indeed. I am still not satisfied with minute mode and had a thought in the middle of the night which I must investigate: where a trade is split into numerous parts, I suspect the USD 1 minimum per trade is being (incorrectly!) charged on each part. This may have some bearing.

I noticed there is significant negative cash (~ -7000) after 2012 in this algorithm. I am wondering whether this is intentional or I am missing something important. Is there any way to avoid it?

There is no intention to use leverage. One trick I use elsewhere is to invest only 95%...or whatever. This can be achieved in this system by reducing the weighting:

weight = context.acc_leverage / len(context.stock_list)

Yes, there are other ways to avoid it, but in practice......

Hi, I have learned so much reading this single thread! Thank you all. Has anyone live tested this strategy using Robinhood? Any specific addition needed to ensure compatibility with the broker's cash trading requirements?

Hi Lionel, since Pipeline isn't yet available with real $ trading, you won't be able to hook this algorithm up to your Robinhood account. We're currently working on making Pipeline available in real money trading, so stay tuned for that! I'll leave it to Anthony to respond to the cash trading requirements question as I'm not sure of what's required for the strategy.

Hi Anthony,

Thanks for sharing your code. Could you elaborate more details in the following line of code:

a=np.array(([high[1:(lb):1]-low[1:(lb):1],abs(high[1:(lb):1]-close[0:(lb-1):1]),abs(low[1:(lb):1]-close[0:(lb-1):1])]))

my understanding is that high or low are not time series as they represent the low or high of all security in the universe.

Happy new year!

The best way to understand it is to look at it in debug mode and inspect the structure, content and shape of the various arrays. You will find there is a separate column for each of the 7000-odd securities for each of the H, L and C arrays, and a row for each date in the test, giving the H, L and C for each stock on each day. So the Efficiency Ratio is calculated separately for each stock in the universe. Later the universe is whittled down to the top X, and when looping through candidate stocks the ER is linked by indexing to the correct stock so that a trade can be accepted or rejected.

You will only really be able to grasp this with the help of the debugger and the Numpy manual.
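For readers without access to the debugger, here is a small sketch (toy random data, not Quantopian's actual price history) that builds the same stacked array as the quoted line 51, just to make the shapes concrete: rows are days, columns are securities, and the three stacked components are the candidates for each day's true range.

```python
import numpy as np

# Toy stand-in for the price history the platform hands back:
# rows = days, columns = securities.
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, (30, 3)), axis=0)
high = close + rng.uniform(0, 1, close.shape)
low = close - rng.uniform(0, 1, close.shape)

lb = close.shape[0]
# Mirror of the quoted line: the three true-range candidates,
# per day and per security.
a = np.array([high[1:lb] - low[1:lb],
              np.abs(high[1:lb] - close[:lb - 1]),
              np.abs(low[1:lb] - close[:lb - 1])])
tr = a.max(axis=0)  # true range, still one column per security

print(a.shape)   # (3, 29, 3): component x day x security
print(tr.shape)  # (29, 3)
```

Taking the maximum along the first axis collapses the three candidates into one true-range figure per stock per day, which is exactly why each stock keeps its own column all the way through.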

Thanks for your suggestions, Anthony. Sometimes it is frustrating trying to understand the data associated with the API at run time.

Great code, Anthony! I am a little concerned with the drawdown, however. This sort of drawdown could wipe out your account, though it obviously is much less drawdown than the overall market suffered in 2009. Is there any way to put in a line of code to move to cash or a short ETF if drawdown hits a certain number? I haven't been able to figure out how to do it.

I posted some basic starter code for liquidating; thanks for mentioning that, Daniel.
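The starter code itself is not reproduced in this thread, but the idea Daniel asks about can be sketched in a self-contained way (this is my own minimal illustration, not the posted code): track the running peak of portfolio value and flag liquidation once drawdown from that peak exceeds a threshold. Inside a Quantopian algorithm the flag would trigger order_target(stock, 0) on every open position, or a switch into a short ETF.

```python
class DrawdownGuard:
    """Track the running peak of portfolio value and flag when drawdown
    from that peak exceeds max_dd (e.g. 0.20 for 20%)."""

    def __init__(self, max_dd=0.20):
        self.max_dd = max_dd
        self.peak = float("-inf")

    def should_liquidate(self, portfolio_value):
        self.peak = max(self.peak, portfolio_value)
        return portfolio_value < self.peak * (1.0 - self.max_dd)

guard = DrawdownGuard(max_dd=0.20)
print([guard.should_liquidate(v) for v in [100, 110, 95, 85]])
# [False, False, False, True]: 85 breaches 20% below the 110 peak
```

One design question this leaves open is when to re-enter after a liquidation; without a re-entry rule the portfolio stays flat forever once the guard fires.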

Hi Anthony, can you explain the meaning of efficiency_ratio (factor 5)? Why is it used as a filter, and how did you get the number 0.031?

Efficiency Ratio devised by Perry Kaufman - see his book Smarter Trading page 134. The figure I used is merely an example.
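For readers without the book: Kaufman's Efficiency Ratio is the net price move over a window divided by the sum of the absolute daily moves inside that window, so by the triangle inequality it lies between 0 (pure noise, no net progress) and 1 (a perfectly straight trend). A minimal sketch of the definition (my own illustration, not Anthony's code):

```python
import numpy as np

def efficiency_ratio(close):
    """Kaufman ER: net move over the window / sum of absolute daily moves.
    The triangle inequality guarantees 0 <= ER <= 1."""
    direction = abs(close[-1] - close[0])
    volatility = np.sum(np.abs(np.diff(close)))
    return direction / volatility if volatility else 0.0

print(efficiency_ratio(np.arange(1.0, 21.0)))                 # 1.0: perfect trend
print(efficiency_ratio(np.array([10., 11., 10., 11., 10.])))  # 0.0: choppy, no net move
```

Used as a filter, a minimum ER threshold simply discards stocks whose path to their current price was too noisy, whatever their raw momentum.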

Hi Anthony, I'm trying to use the debugger to understand your algorithm, but it crashes from time to time. Take line 51, for example:
a=np.array(([high[1:(lb):1]-low[1:(lb):1],abs(high[1:(lb):1]-close[0:(lb-1):1]),abs(low[1:(lb):1]-close[0:(lb-1):1])]))
I tried to check the content of the variable a, and the IDE became unresponsive as soon as I clicked the variable. After a few seconds, the IDE threw a runtime exception and exited debugging mode. My guess is that the variable consumes too much memory, but I'm not sure. Any idea how to debug the algorithm without crashing? Thanks

Since there is a lot of interest in these types of strategies, it would be good if this could be made available for live trading as soon as possible. Eagerly awaiting the roll-out of these features for live trading.

OK, I get it. Thanks very much.

No idea why it crashes during debugging. I never had that problem but have not revisited the algo recently. Perhaps the Q team can help?

Hi BO,

I suspect the timeout is occurring because the components of the a array are quite large. Can you try inspecting just a subset of them?

Hi

I'm new but I really like this concept. Is there any chance someone could provide instructions or help with making this usable for live trading using Robinhood?

@Anthony With the recent upgrade to Quantopian 2, and the work that was done last month to handle delisted securities, this algo can now be run in minute mode. It occasionally times out when run from 2003-2015, but the load can be thinned out by scheduling the stop-loss check to run less frequently than every minute (for example, every 15 minutes).

# Backtest ID: 5718e234806b101117f9ca95

I moved the stop-loss check into the daily rebalance. It's much speedier than in handle_data and interestingly has even higher returns.

# Backtest ID: 5718f6221db18d0f83e2964f
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Jamie, Karen, thank you indeed for your work. I shall check it all through, run a few tests and report back with any interesting findings. One thing people should be very wary of is the rebalance date: you will get very different results depending on what date you choose (the 1st day, the 15th, whatever). For what it is worth, it is best to use rolling dates and to split the portfolio into 4, each with a different rolling monthly re-allocation date.

Hello Anthony,

Impressive algorithm. Just one question: what do you mean by a "rolling monthly" date? Do you mean a fixed interval, for example starting on 1 Jan and then every 30 days (losing the month designation over time)?

Yes, exactly: starting on any given day, the next allocation date will be 20 business days later, or whatever interval you choose.
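The rolling-date scheme described above (four portfolio tranches, each reallocating every 20 business days, on offset dates so that one tranche rolls each week) can be sketched with pandas business-day ranges. The start date and cycle length here are illustrative assumptions, not Anthony's actual schedule:

```python
import pandas as pd

# Illustrative: 80 business days, four tranches, each reallocating every
# 20 business days, offset by 5 business days so one tranche rolls each week.
bdays = pd.bdate_range("2016-01-04", periods=80)
tranches = {k: bdays[5 * k::20] for k in range(4)}

for k, dates in tranches.items():
    print(k, [d.strftime("%Y-%m-%d") for d in dates])
```

Averaging the four tranches' returns is what smooths out the day-of-month sensitivity discussed later in the thread, since no single calendar date decides the whole portfolio's fate.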

My belief is that this algorithm should probably not be used to rotate into a small number of ordinary stocks (such as 10). It is an entirely different matter with index tracking ETFs where I am happy to rotate into 10 (and do). You will find a low number very unstable and unreliable - make the smallest change and the results will radically alter. Try running it on different days of the month and you will see what I mean.

Use bigger numbers and the instability begins to disappear, not surprisingly. The problem is that the Q backtest framework does not appear to be up to it. I have been trying to run rotations into 50 stocks, for instance, and I keep getting:

MemoryError
Algorithm used too much memory. Need to optimize your code for better performance

Please also see line 132 in Karen's source code:

# Increase our position in stocks that are performing better than their target and reset the target


This was not my intention. The algo actually takes profit rather than pyramiding. Or at least it did.

Sorry, but I am a beginner in Python and Quantopian, and I don't really understand what "context.acc_leverage" is in the code above (I can't find it in the help pages either). Is it the total value of everything (i.e. the sum of all stock values at the current date plus the cash amount)? I understand that when a new position is taken, it is taken as a 1/n-th part of this, but if it includes stock value, how is enough cash freed if 1/n of the total value is larger than the amount of available cash?

It is a user-defined variable. It works only because reallocations are done on the same date. 1/(total number of stocks) means the account will be fully invested at most, i.e. no leveraging.

OK, thanks Anthony, I get it now (I didn't see the global definition of the variable). So acc_leverage is not an amount; it is just a way to set how far you want to go into "debt". By setting it to 1, you say "no debt positions". Taking this into account, the weight will actually always be 1/10th if you are evaluating many, many stocks. In fact, you are just ranking them; the only way a stock can drop from the list is via the filter (i.e. market cap and efficiency ratio). So most of the time you are going to have 10 stocks in the list, but you only really buy into a stock if its short-term (one-month) momentum is positive, which is not the same as being in the list.

Only one remark: ignoring the filters of efficiency ratio / market cap etc., you are ranking by average momentum and then taking the top 10. Then you check only the short-term momentum before ordering. This means it is still possible for a stock with negative average momentum to make it into the top 10 and be ordered, if its short-term momentum is positive. Is this intentional? Or shouldn't a stock with negative average momentum be excluded from the buy list, even if its short-term momentum is positive?

That is the way I originally drafted it and I agree with your view. I have not yet looked in detail at the helpful Q2 redraft.

Analysing the transaction history, I noticed that a lot of trades are spread across multiple minutes because the volume in that minute was 0. This leads to many small orders on the rebalancing day, spread across multiple minutes. I removed these small orders by basically disabling the slippage model.

Because we are trading high-market-cap stocks, and with relatively little money ($10,000), order sizes are small and we can assume orders will be executed at the order price and in one block. This reduced the number of transactions heavily.

Another change I would like to make is in the rebalancing. Today the rebalancing is done for every stock, sometimes leading to small orders of 1 or 2 shares to get the weight right. I'd like to rebalance only when the trade size is considerable, i.e. only when the stock is considerably overweight (say at least 5%), not by a small bit; I am still looking at how to do that.

The same goes for the selling:

    for stock in context.portfolio.positions:
        if data.can_trade(stock):
            if stock not in context.stock_list or context.stock_factors.factor_1[stock] <= 1:
                order_target(stock, 0)

This will sell a stock immediately when its one-month momentum is negative, even if its average momentum across the 4 periods is positive. Is this intentional?

I always wondered why you calculated the average rank number. I would calculate an average of the return (momentum) itself and pick the top 10 with the highest average return. So I took the code from Tristan (Dec 18th) and changed the following lines:

    combo_raw = (factor1 + factor2 + factor3 + factor4) / 4
    pipe.add(combo_raw, 'combo_raw')
    pipe.add(combo_raw.rank(mask=total_filter, ascending=False), 'combo_rank')

So instead of averaging the rank numbers, I am calculating an average of the momentum itself and then sorting by the highest average momentum across the 4 periods. Now that was not a good idea! Below you can see the partial results (since it crashed), but it is very clear: the return is very bad. So it seems ordering by rank is vastly superior to ordering on the real average momentum itself!

Partial Run

I am also having some doubts about the efficiency ratio calculation. When debugging, I am seeing ER values higher than 2, which should be impossible.
Of course, your definition is not the standard one, but still:

                         combo_rank  factor_1   er
    Equity(19660 [XLU])  738         1.096878   1.407129
    Equity(21757 [EWZ])  750         1.141883   1.223757
    Equity(19658 [XLK])  993         1.027546   2.334963
    Equity(19656 [XLF])  1101        1.014971   0.676471
    Equity(19654 [XLB])  1238        1.013098   0.439252
    Equity(19920 [QQQ])  1324        1.009743   2.215429
    Equity(21520 [IWV])  1510        1.005367   1.552392
    Equity(8554 [SPY])   1559        1.003080   1.460942
    Equity(21516 [IWB])  1571        1.002289   1.457763
    Equity(12915 [MDY])  1583        0.995635   1.049301

I second Geert's concern about ER values outside (-1, 1), and have a basic question on the strategy. While the standard Kaufman ER (KER) value must be in the range (-1, 1), it is possible for the Garner ER (GER) to exceed this range. This should rarely be encountered except for short windows, since it requires behaviour that is unlikely:

a) the daily price change (Close minus the prior day's Close) must be consistently of one sign (nearly monotonic trending);
b) the daily High-Low range must be consistently less than the absolute daily price change (the overnight change dominates the daily change).

KER seems to be used by others as an indicator of trend strength over short periods (periods shorter than typical trend durations). What is the rationale for using a similar measure (GER) over a long window (252 days)?

No shorting in this algo?

Indeed, no shorting. A trend follower wants and needs to avoid false signals due to noise. A trend-following model will reap the greatest and cleanest profits when a market breaks out in one direction and never looks back: a perfect (non-achievable!) trend would go from point A at the bottom left-hand corner of the chart to point B at the top right-hand corner in an absolutely straight line with no retracements. Kaufman's Efficiency Ratio, as I had meant to draft it, has values ranging from 0 when markets are very noisy to a theoretical +1 when markets are perfectly directional. Any value above 1 indicates an error in the coding.
252 days was chosen because roughly a year is generally considered the maximum sensible lookback for a momentum indicator. I have not looked at Q2 and regrettably do not have time to, and haven't looked at the calculations for a while. See my article "Trendless Markets". The one hundred instruments tested there were all concatenated futures prices, back-adjusted to eliminate gaps.

    # calculations for efficiencyRatio
    a = abs(data - data.shift(periods=1, freq=None, axis=0))
    a[a.isnull()] = 0.0
    b = pd.rolling_sum(a, efficiencyLookback)
    b[b.isnull()] = 0.0
    efficiencyRatio = abs(data - data.shift(periods=efficiencyLookback, freq=None, axis=0)) / b
    efficiencyRatio[efficiencyRatio.isnull()] = 0.0

Here data is a closing-price pandas DataFrame with 2 columns: date and Close. So this is how I calculate it in a simpler Python interpretation, and the values are between a theoretical 0 and +1, as indeed happens. My article referenced above might help to explain how and why I use the indicator / find it useful. I am afraid I really cannot face Pipeline and Q2 just at the moment, but if the code IS wrong, perhaps you can at least see what I was getting at. I am not at all keen on logging and debugging in Q. In my Python backtester at home I would typically download data, a, b and efficiencyRatio into a CSV spreadsheet for inspection. That does make life a bit easier.

Hi Anthony, thanks for the awesome lesson. I am constantly searching the internet for expert lessons like this and they are rare. I have a question if you don't mind: what is collections.defaultdict(lambda: 0)? I am trying to make a trailing stop but running into trouble when I try to do it for each stock I use. Thanks, Tyler

In my opinion you need to learn Python and Pandas from the bottom up. There is no shortcut. I am still in that process myself.

Anthony, you mentioned earlier why you do not think these results would hold up in real-money trading. I will read that part again, but can you elaborate on why? Can anyone else chime in?
I'd really like to know.

I do not believe one should trade a model such as this by rotating into a small number of stocks, especially if the universe is large. The concept "works": take the actual trading history of the Guggenheim S&P 500® Equal Weight ETF, for example, and compare it to SPY. But my belief/preference is that one probably wants to rotate into at least 50 stocks and choose a relatively small universe, perhaps the top 1000 by market cap rather than 5000. You will see what I mean if you backtest this system on a mere 5 or 10 stocks out of a relatively large universe on different days of the month: the results will differ hugely. As you increase the number of stocks you rotate into and/or decrease the size of the universe somewhat, a measure of stability returns and it begins not to matter which day of the month you choose for your allocations. In my own trading I divide my portfolio into 4 subsystems, each reallocating on an equally spaced, different monthly date. Currently I trade ETFs, not stocks, and can therefore afford to rotate into a small number (10). But I would not use 10 for individual stocks, personally speaking. I'm sorry I can't be of more help on this actual Q version; I have lost track, and am concentrating on my own version in my own backtest engine.

I wrote the custom factor below to calculate the efficiency ratio in Pipeline for some research I was doing. It requires a window_length input over which the ratio is calculated. There is no null-value handling, so whoever decides to work it into the algorithm may need to do some debugging.
    class Efficiency_Ratio(CustomFactor):
        inputs = [USEquityPricing.close]

        def compute(self, today, assets, out, close):
            direction = np.absolute(close[-1] - close[0])
            volatility = np.sum(np.absolute(np.diff(close, axis=0)), axis=0)
            out[:] = direction / volatility

I modified the code to look more like Anthony's original ETF trading idea:

• You can now run the test on a limited number of specific stocks or ETFs, in this case around 60 ETFs.
• I removed the market-cap filter factor, since it is irrelevant for ETFs.
• I also moved the Efficiency Ratio filter from the pipeline to the ordering process. In previous algos, stocks with a bad ER were filtered out even if their return would be very good (i.e. top 10). This algorithm orders and selects only on average momentum rank, and then orders only when the ER is high enough. So it allows a high-momentum quote with a bad ER to take a place in the top 10, while previously such a quote would never get into the top 10 because it would already have been filtered out by ER.
• The Efficiency Ratio used is the real KER (Kaufman), based on the code above.
• Because Quantopian can't backtest 4 portfolios at one time, I increased the rebalancing of this single portfolio to once every 5 days.
• ER is calculated with a 20-day window_length instead of 252. NOTE: when you change the window_length of the ER, you should also adjust the ER threshold. The threshold was 0.031 for a 252-day window; for a 20-day window I chose 0.31.
• There is also some more advanced zero and NA handling in the er and combo_rank columns. Sometimes I saw a combo_rank of 0, which catapulted it directly to the top of any top-10 list :-)
• Some more parameter tuning: buying is only done when the stock is in the top 10, short-term (20d) and longer-term (3-month) momentum are > 1 and the ER is high; and I increased the stop-loss limit to 0.9 for ETFs instead of 0.5 for stocks.

Now for the results:

- They are not so good.
- The system does nicely avoid the 2008 crash.
- The system does go to cash when there is no uptrend in the market.
- However, it fails badly over the last 3 years. I don't yet understand why. It selects ETFs with higher performance than SPY, yet fails to make a profit with them. Very strange.

Improvements:

- I still need to look at the ER code. It often returns zero for some reason on a number of ETFs. This should only be possible if the close at the start of the lookback is exactly yesterday's close.
- I am looking to improve the profit-taking procedure. Instead of a fixed percentage, I want to calculate the target based on average true range, as Anthony describes for his system.
- I also feel that simple momentum is not a good indicator for buying or dumping stocks. The fact that it depends on only one price in the past makes it jumpy. For example, take a quote that has been flat forever, except that 30 days ago it dropped 10% and recovered the next day. The 30-day momentum will show that it is a "good" stock. The same on the opposite side: one 10% jump up 30 days ago (even intraday) can give a -10% momentum on the rebalancing day and cause the system to sell the stock. Therefore I try to take long-term momentum into account during buying as well. But in the end, I feel that the slope of a 20-day linear regression line is a much better indicator of "trend" than 20-day momentum.

# Backtest ID: 573b65bde872fd0f8c0ba772

One outstandingly good opportunity to curve-fit is in portfolio choice. I tried to avoid this.
It is often trivially easy to cook up good backtests by careful choice of rules, parameters and portfolio. The choices on my website for TAA1 for my own trading attempt to avoid hindsight bias, and perhaps should be simplified further. I would be alarmed if my backtests had shown brilliant recent results, since up to last August, looking back over the last few years, the US had been the only game in town. My object was to diversify very widely without taking too much heed of backtest results. A better benchmark is the MSCI World, but even that is of course very heavily weighted to the US. There is no guarantee that my objects have been achieved, and only time will tell. This month, for instance, I am heavily into emerging markets and down twice the decline of the benchmark. We'll see.

Anthony, you've been extremely helpful and I appreciate it. Thank you. I plan on spending today looking over your code. Cheers!

I think I have been honest. Whether anyone finds that helpful or not is another matter. In the various videos and dull postings on my website I try to make it clear that I have no idea what the future holds, and that my approach is therefore to allocate to as wide a group of asset classes as possible. And then let the matter simply float on the waves and hope the result is positive. I make the point that overweighting could be a great mistake; hence my preference for more equally weighted schemes rather than the market-cap approaches adopted by most index providers. Japan represented a 40% market-cap weighting in the MSCI World back in the 1980s. Was this sensible? The US represents 50% of the MSCI World today. Do you want 50% of your assets tied to one nation, one economy? Would you have been happy had you allocated 50% to Germany, Russia or Argentina back in 1900? I suspect not. If people want to gamble heavily in the short term on a trading scheme which happens to rake in good returns for a period of time, then fine.
But that sort of approach so often ends in disaster. The long view is perhaps a better approach.

Anthony, you are right. It is not fair to compare the performance of a system that trades worldwide with an index of one country. I tried to use the MSCI World index as a benchmark; however, I can't seem to find a ticker that goes back to 2003, the start of the backtest. Maybe someone from Q can help?

Has anyone edited this code for live trading? Thanks!

I gave it a go at editing for live trading, but I'm not sure I got it right. Could someone please double-check? I'm still in the learning phase and just experimenting with different things.

# Backtest ID: 5792221ec0447812298142ed

Thanks to Anthony for sharing this great algo. Indeed impressive. I backtested the version Karen edited over the 2003-now and 2011-now periods (or look at the alpha table in the backtest). In recent years the alpha is much smaller than it was in the early 2000s. I guess many investors have adopted momentum strategies, the market became more efficient, and thus the alpha narrowed. Are there other possible reasons/explanations, and what is the outlook for the performance of momentum strategies over the next 5-10 years?

Thanks Lake. I've been at it a long time but still have a way to go. I'm not too worried about momentum in the long term, especially after the very long-term momentum tests I outline on my website. These go back to 1700-ish in the UK and the mid-19th century in the US, thanks to data from the Bank of England and the NBER.
I think the difficulty is in the middle ground, and for those trying to shoot the lights out. I'm always looking at promising ways to shoot the lights out, but they mostly turn out to be moonshine. Such as the recent algo here on Q looking at profiting from bounces in low-liquidity stocks: a great idea, but when you start adding more realistic constraints, the unicorn vanishes, to be replaced by a more commonplace beast. My latest obsession is AI and machine learning, and one academic paper referenced on my website and here on Q talks of a backtested 45% CAGR momentum strategy. My suspicion is that it will turn out to be just that: "academic". I do believe one can beat buy-and-hold of index funds, but the more return you seek, the greater the danger of disappointment. And those who use leverage will probably die by it.

Thanks for the comments. I see your points. However, with computers, the internet, and great platforms like Q, the speed at which information and knowledge spread may exceed what we think, and thus premiums like momentum may narrow in future. Just a gut feeling; no hard data/evidence to back it up. I saw your note about the AI paper and will take a closer look :) Thanks for sharing. On profiting from bounces in low-liquidity stocks: well, I agree with you. I also think it depends on the size of the fund the algo is going to manage. In general, I tend to think traditional analysis is required for investing in very small-cap (e.g. < 400M market cap) and low-liquidity stocks.

I got some free time to look at this algorithm again and to clean up a few features that were affecting my ability to understand how it was supposed to work. In particular:

1) The problem of short sales in a long algo is resolved by changing the sequence in which orders are placed.
2) Liquidity problems are greatly reduced by three changes.
These are small enough that leverage is very near 1.0, except when the algo sells during the month and leverage drops below 1.0 as expected.
3) Max drawdown is reduced by a simple entry/exit model that puts the portfolio into bonds during indicated downturns in the S&P 500.
4) The effect of the efficiency test may be had by requiring that the one-year return be > 0.0 (factor_4 > 1.0).

Other notes are in the attached file. The result is what I think the asset-allocation promoters like Faber want us to find: long-term growth > S&P 500, max drawdown < S&P 500, and a slower pace of trading (monthly) that keeps transaction costs down. [Edit: corrected statement 4 to read "(factor_4>1.0)" vs "(factor_4>0.0)"]

# Backtest ID: 57a94d22f69d180ffe2dc28a

Peter, wonderful improvements and detailed explanation. I wholeheartedly agree. I had meant to get round to putting in the S&P on/off switch but got diverted and never got round to it. My research elsewhere suggests the S&P switch and the like will usually help greatly, but by no means always. The Efficiency Ratio is merely an attempt to filter out noisy stocks which wander too far on and off track in their journey to any specific price point. Run a few tests to see what effect rebalancing on different days of the month has. It can be very alarming. I am currently working on deep learning: will it improve momentum? Can a deep-learning algo spot patterns in momentum that a simple lookback cannot? I love your notes; my own lack of notes is a disgrace.
Many thanks for all this work; much appreciated.

Take a look at the backtests of the system (as so admirably redrafted by Peter Falter) and note that the only difference between them is the date on which the monthly rebalancing occurs. OMG! How do you begin to explain it? This one uses "22", which I think is probably explicable in that it is a nonsense day in many months. But the other tests below are sufficiently worrying to make this first nonsense test irrelevant.

# Backtest ID: 57a97445dd6d2c12d68d664e

And this.

# Backtest ID: 57a973e8d0e2cf0ff9018763

And this.

# Backtest ID: 57a973c5a2d39c1003e1e65b

And this.

# Backtest ID: 57a973a980ebaa100ca66ab2

And this! Not very amusing, is it?

# Backtest ID: 57a97378e1da7412f1d9122f

Regarding "This one uses '22'": I believe the date offset is measured in business days, so it's also possible the test would somehow be affected by months with fewer than 22 business days. Edit: sorry, I just reread your post and noticed that you were already pointing that out. It would be interesting to dig into the results and figure out how Quantopian handles those months.
Thanks for Anthony's great work and Peter's improvement. I've made some code optimisation based on Peter's code, so one can use the built-in factor Returns instead of a self-made factor. The calculation of the momentum ranking can also be done in a self-made factor. The returns of the backtest are almost the same. Cheers, Thomas

# Backtest ID: 57a98a6e39abd210047b8138

"The unmodified performance of this algorithm is remarkable from 1/4/2003 to 11/30/2015: Total Returns 1287%, Benchmark 192.5%, Max Drawdown 50.4%."

I admit I struggle with Quantopian and Python, I guess due to older age and not having programmed in a long time. But I don't understand what is remarkable about a 50.4% drawdown. In real trading this will probably be a 65% drawdown, and with so high a drawdown, ruin is almost certain. If this is just an exercise in programming it's fine, but it has little to do with profitable trading. BTW, the modified algo's YTD drawdown is 37.4%, still too high by any standard.

Awesome improvements, James. I was thinking about combining those momentum factors into one; you moved ahead. I also observed better alpha in recent years. I am not sure of the source, but your minimum number of stocks and safe asset may both contribute.

Anthony, day-of-month sensitivity was one of the next features that I was going to evaluate. The other asset rotation models that I've taken quick looks at have shown significant day-of-month sensitivity, and several posts on Quantopian have shown this behavior for various algos. What you show above is worse than I expected.
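Thomas's idea of ranking on blended trailing-return factors can be illustrated outside the Quantopian API as well. A minimal pandas sketch with synthetic prices; the window lengths, weights, and tickers are illustrative only, not the algorithm's actual parameters:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
tickers = ["A", "B", "C", "D", "E"]
prices = pd.DataFrame(
    np.cumprod(1 + rng.normal(0.0003, 0.01, size=(252, 5)), axis=0),
    columns=tickers,
)

def trailing_return(prices, window):
    # The moral equivalent of a built-in Returns(window_length=window)
    # factor: latest close over the close `window` bars ago, minus one.
    return prices.iloc[-1] / prices.iloc[-window] - 1

# Blend several horizons into one momentum score, then rank.
score = sum(trailing_return(prices, w) for w in (21, 63, 126)) / 3
ranking = score.rank(ascending=False)   # rank 1 = strongest momentum
top = ranking.nsmallest(2).index.tolist()
print(top)
```

The point of using a built-in returns factor on the platform is that this whole block collapses into a couple of pipeline lines, with the same ranking result.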
Community: what is the most convenient way to do a parametric study of sensitivity to a parameter?

Thomas, I like that more compact implementation of the momentum filters.

Ricardo, I agree with your dislike of an SP500-like 50% drawdown. Within the file I posted you'll see that drawdown reduction was an objective of mine and that typical values achieved were around 20%. Given the aggressive investing style, high returns and simple entry/exit rule, I think 20% is a respectable result.

"Ricardo, I agree with your dislike of an SP500-like 50% drawdown." As do we all. However, the ideal of high return and low drawdown is, over the long term, an unachievable fantasy. By the way, here is what I do in my own trading to combat day-of-the-month sensitivity: I divide the portfolio into 4 parts and trade each so that it rolls on a different day of the month - i.e. one quarter re-allocates each week. In US stocks, you can reduce sensitivity and increase stability, at the cost of some return, by trading 50 to 100 stocks rather than ten. I would not trade 10. I am not currently trading a momentum model on US stocks but will get around to it sooner or later. Here I am using Thomas Chang's most welcome further clean-up with 50 stocks, using 10, 15 and 20 days.

# Backtest ID: 57a9bd7ee1da7412f1d91568

Next......

# Backtest ID: 57a9beab2cc6f110084ec541

Next....

# Backtest ID: 57a9bec3d0e2cf0ff9018b56

Hi Anthony and Peter, I see the following line:

current_year = get_datetime('US/Eastern').year

But 'current_year' is not used anywhere. Why? And I see a comment as follows:

# if current_year > 2003:

What does this mean? Cheers

Thomas, that has no effect in the published model. I missed that code fragment when I cleaned up my working file for posting on this site. The code and comment are remnants of a trade study in which I changed the starting date to be earlier than the date at which the bond funds were available/liquid. I had placed a date test to allow for a different safe set before/after 1/1/2003.

Anthony, I did some reading last night and found several articles about starting-day sensitivity in rotation strategies and futures rolling strategies. The effect is well known by researchers and significant.
The authors note very large performance variations, especially for non-averaged measures (like max drawdown) and the returns of individual commodity futures (frozen orange juice was particularly sensitive). I won't have time to write a summary or try a mitigation strategy for several days.

Peter, I have done a lot of work in the past on rolling strategies for futures, in particular for Crude Oil, to try to take maximum advantage of the backwardation effect common in Crude's history; also on short-term interest rates like the Eurodollar, where you can increase bang for buck by rolling a little further out than the nearest contracts. On rotation strategies for stocks I have also played a lot with rolling dates. My TAA1 strategy uses 4 rolling dates (as opposed to fixed weekly dates, evenly spaced). I'm afraid at the end of the day this is probably about the best you can do. If you are a very big fund you could split the portfolio into 20 and roll 1/20th of the portfolio on each business day. Do also take a look at increasing the number of stocks to 50 to 100. It makes for a much more stable performance.

Hi Anthony, thanks for sharing. I just changed the backtest period from 2015/01/01 to today and got a bad result: Total Returns -6.4%, Benchmark Returns 9.5%, Alpha -0.09, Beta 0.69, Sharpe -0.19, Sortino -0.26, Information Ratio -0.34, Volatility 0.29, Max Drawdown 34.2%. Even if I change the test period start date back to 2012/01/01, the strategy is weaker than SPY.

Quite an interesting thread. Thanks, Anthony. Thomas, great analysis and impressive results. Thanks. I will have to study the strategy as I get reacquainted with Q.

Hi Kusina and everybody, Anthony's algo is just a starting point for how to use the momentum strategy. One can't simply use this for live trading; one has to do a lot of fine tuning, and this is really not easy. Creating a money printer is hard work. :-)

Thomas, yes, I agree. What I look for in a trading strategy is replicability.
Will this trading strategy behave the same going forward, and is its trading concept sound? The concept, in this case, of sorting by highest past momentum has some basis. It makes the assumption that this momentum might continue. A reasonable backtest over a sufficiently long time interval, generating a sufficient number of trades, should show this. Your modifications, as well as Peter's, to Anthony's trading script show that there is positive alpha there - something that can be used going forward. As you said, and again I agree, there are some weaknesses in this trading script, but I think they can be addressed. Removing them should only enhance the strategy's performance level.

What I am more interested in, in a preliminary overview, are the underlying strengths of a concept and why it should work going forward. I can always address the weaknesses later. One key concept you've added to this strategy is the increase in the number of trades as the strategy progresses. For me, expressed in a single payoff matrix equation, it says this:

A(t) = A(0) + Σ(H(1+at).*ΔP)

which I expect to outperform. What I want is:

Σ(H(strat_A)(1+at).*ΔP) > Σ(H(strat_B).*ΔP) > Σ(H(strat_S&P500).*ΔP)

And this strategy (strat_A) does it. Also, as you surmised, ER has no positive impact, therefore it does not matter; it is not what is driving this strategy. It could be removed or set to 0.0 as you suggested, since there is no value there. If you reverse the momentum sorting, you will see all the performance disappear, to the point that Σ(H(strat_S&P500).*ΔP) > Σ(H(strat_B).*ΔP); it will not even outperform the index. Therefore, I say, there is some value in playing the momentum-continuation gambit. It's an interesting script and worth investigating.

Interesting interpretation of momentum. The instability due to the time period of backtesting is a bit worrying.

What interests me in Thomas' version is the increasing position size as the portfolio grows.
There is more power in there than meets the eye. In fact, I see it as the way to open up to higher alphas. But first, some of the funny stuff.

Say you reduce the stock universe to be treated. That will also reduce overall performance. Anyone can test this simply by shrinking the 3,000-stock universe used in the strategy to 2,000, 1,000 or even lower; the impact will be to reduce performance. The reason is simple: one is taking out "potential" trade candidates and thereby reducing the center of mass of the selectable ones. One could also increase the average-daily-volume selection criterion, which would likewise reduce potential trade candidates, with a direct impact on the bottom line. Try it; this too is done by changing only one number (0.5e6). If you reduce that number instead, you will get more illiquid stocks able to pass through on momentum alone. Not so good; you also want an exit.

Once you have accepted that there is merit in this momentum concept, and that in the future the strategy should behave somewhat the same as it did in its past, it could be used going forward - naturally, after you have corrected its weaknesses, and there are some. Here are the tests I've made.

I generally use leverage. Some trading strategies don't support leverage very well; however, this one is quite suited to it. It will add a little more drawdown, but nothing scary. Change one number in the program from 1.0 to 1.5 to use at most 50% leverage. The test gave the following result:

1 SMRS 50% Leverage
http://alphapowertrading.com/images/divers/Momentum_Rotation_System_by_Thomas_Chang_50pct_leverage.png

Pushing the leverage to 85% generated:

2 SMRS 85% Leverage
http://alphapowertrading.com/images/divers/Momentum_Rotation_System_by_Thomas_Chang_85pct_leverage.png

From the above charts, putting leverage at 1.50 produced 4,289.4%, while 1.85 generated 6,491.9%. Their respective drawdowns were 28.39% vs 31.6%.
This meant putting $3,200 more at risk, or 3.2% more, which I find more than acceptable.

Another test that was done was a walk forward.

All presenters stopped the strategy on 2015-11-30, as did the two leverage tests above. Doing a walk forward adds 9 months of unseen data to the program. It is a perfect way to show whether the trading strategy would have behaved the same going forward as it did in the past. The test was easy to make: change the test's end date. Here are the results:

3 SMRS 85% Leverage + 9 months walk forward

What can be seen is that most metrics improved: alpha, beta, Sharpe, Sortino and the information ratio. The volatility and the max drawdown remained the same. So, we can't say that the risk increased, but the overall return surely did.

Overall, that is about a 23.7% CAGR and that is a pretty good number to have.

I could be wrong, but when calculating the output of each momentum factor, shouldn't it be close[-2]/close[-1]? Otherwise it is considering information at time 0 that hasn't happened yet.

Dan, most interesting. Can anyone clarify Dan's observation? And what would the modification to the code be, since Close[0] appears more than once?

Guy,

Since I am the first one to see your post, I guess I will clarify what I can. My earlier uncertainty comes only from not knowing the inner workings of Quantopian, i.e. how they treat their data index. It can be dangerous to lag a variable that is already lagged, especially with a strategy like this that reweights on a periodic basis given changing factor values, but for 90 percent of the strategies I have seen on here it would not matter much.

In actuality, if you were to employ this strategy using real funds and an ETP, chances are your algorithm would take longer than a minute to run, so lagging by a period, or even two to be safe, is probably not a bad practice in general.

But as I said, it all depends on how they handle their data index, and that is the part I really don't know. I guess the question I have is: when an order is executed, is it done so at the price at close[0] or close[1]?

Dan, yes. What you raised has a major impact. I too am not familiar enough with Q's series indexing. Just getting back after 3 years of non-use.

Under no circumstances should a trading strategy look at anything that is in its future. All it should be allowed to do is make any kind of projection it wants, but never use what would be actual future unknown data in its calculations.

The backtesting environment does not allow access to future data. Depending on the context the most recent close refers to either the current price or the close of the previous day (the previous day would be within pipeline, which runs before the market open).

You're also confusing the indexing in python with the indexes used in many other trading environments. Close[0] refers to the oldest price stored in the series. Close[-1] refers to the last price. Positive numbers index from the beginning and negative numbers work their way back.
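That indexing convention is easy to verify with toy numbers. In a lookback window supplied oldest-first, a momentum calculation such as close[-1] / close[0] only ever touches already-known prices:

```python
# A 5-bar lookback window as supplied oldest-first.
close = [100.0, 101.0, 99.0, 102.0, 103.0]

oldest, latest = close[0], close[-1]   # first vs most recent bar
momentum = latest / oldest - 1         # uses only already-known prices

print(oldest, latest, round(momentum, 4))  # 100.0 103.0 0.03
```

So the factor is not peeking into the future; the question that remains, as Dan says, is only which bar the platform treats as the "most recent" one at execution time.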

How can i modify the last version added by Thomas Chang to show what stocks are bought and sold each month? Thanks

Shawn, thanks for the explanation.

The original program aims for a 25% profit target on its trades. If you reduce this target, the impact will also be to reduce profitability. But what about increasing it?

You are dealing with a trading strategy that amplifies its trading volume as time progresses. Reducing its profit target should allow more trades to be closed, returning cash into the account that could then be reused down the line, since you are still trying to leverage up to 1.85 of equity. Nevertheless, you make less, because you are also curtailing the program's ability to increase volume by providing it less ongoing equity, therefore reducing the general volume of trades as the program evolves. Not by much, but enough to significantly reduce its potential. In a way, you are reducing trade volatility excursions, reaching earlier stopping times.

8 SMRS 85% Leverage : 125% profit target

You still beat the index, but, one could do better... Just doing 10 #7 would already provide 4 times more performance than #8 with the same initial capital, with little more effort, since a machine would be doing the job.

Of note, such a trading method cannot know in advance what stocks will be treated, be it past or future. But it can always sort its past.

A better mix. The same variables, but pushing each in the right direction.

9 SMRS 85% Leverage : 125% profit target : faster response : $100k
http://alphapowertrading.com/images/divers/MRSystem_Thomas_Chang_85pct_leverage_wf_9m_125pctp_SL75_100k_sp60_15.png

Hope it's helpful. Just to push it a little bit more...

10 SMRS 85% Leverage : 125% profit target : faster response : $100k + residual

Guy, you can share the underlying code & the backtest results natively, by using the Attach button on the top-right hand side of the commenting box. (Screenshot)

That way, we can see exactly which parts of the code you modify in each backtest. It makes it easier to follow along in your thought processes.

Adam, most of what you request is already there. Clone the last version done by Thomas Chang. I haven't changed a single line of code.

As for the parameter changes, they are given in the posts, except for a few that I would like to keep private at the moment. The script is getting more and more valuable, unless, naturally, the results presented are too low or are just commonplace on Q.

Following Thomas' suggestion, and he is right, e_r was set to zero since it does not bring anything to improve performance. Leverage was set to 1.85 since I don't mind using leverage as long as it can be worthwhile; 1.85 keeps a safety margin, since at times it is slightly exceeded. I also requested a higher profit_target. Experiment with these. You will need to understand why it was done that way, not just take the mods as given. You need to have confidence in your trading strategy, otherwise it will never be applied.

Note that even the original author of the script, Anthony, thinks that his trading script has little value due to timing rebalancing issues. I think his concerns can be addressed too.

I work from the payoff matrix of a given strategy, Σ(H.*ΔP), so I look for ways to increase the average ΔP, increase the number of shares in inventory H, and increase the number of columns (read: stocks) in the matrices. The edge will come from an increase in the average ΔP, an increase in the inventory function H(1+at), and an attempt to increase not only m but n as well in this m x n matrix: H.*ΔP.
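The Σ(H.*ΔP) notation translates directly into code. A toy numpy sketch, with hypothetical holdings and prices, showing total profit as the sum of the elementwise product of the holdings matrix H and the price-change matrix ΔP:

```python
import numpy as np

# Rows are periods (m), columns are stocks (n).
H = np.array([[10, 0, 5],     # shares held going into each period
              [10, 8, 5],
              [12, 8, 0]], dtype=float)
prices = np.array([[20.0, 15.0, 30.0],
                   [21.0, 14.5, 31.0],
                   [22.0, 15.5, 31.5],
                   [21.5, 16.0, 32.0]])
dP = np.diff(prices, axis=0)    # price change over each period

profit = np.sum(H * dP)         # Sigma(H .* dP)
print(profit)                   # 33.5
```

Increasing average ΔP, growing H over time, or adding columns (more stocks) each shows up directly as a larger sum, which is exactly the decomposition of the edge described above.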

I do have a "novice" question: how do I remove a stock from being tradable? For instance, I got the following error messages when running my latest modifications:

13 GOOG error message:

My solution, for the moment, is simply to not have them in the tradable list, since I would like each strategy to go to completion. Here is where the strategies crashed:

14 SMRS 85% Leverage : 125% profit target : faster response : $100k + residual+plus
http://alphapowertrading.com/images/divers/MRSystem_Thomas_Chang_85pct_leverage_wf_9m_125pctp_SL75_100k_sp60_15_resid_plus.png

15 SMRS 85% Leverage : 125% profit target : faster response : $100k + residual+plus+

I would like #15 to go to completion, since what is in store is an even higher performance level. Where it was interrupted is not the strategy's fault. Still, it would represent a 63% CAGR over the entire 13.6 years duration of the test.

I'm still in the process of studying what is under the hood. I am of the opinion that there is more than just something there. This thing can outperform.
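On the "novice" question of keeping a stock out of the tradable list: a common approach is to filter an exclusion set out of the candidate universe before any orders are placed. A minimal stand-alone sketch in plain Python; on Quantopian the same idea would be applied to the pipeline output or inside the rebalance function, and the symbols below are just examples:

```python
# Candidate list produced by a ranking step (symbols are just examples).
candidates = ["AAPL", "GOOG", "MSFT", "XOM", "GE"]

# Symbols we never want to trade, e.g. ones that break the backtest.
EXCLUDED = {"GOOG"}

tradable = [s for s in candidates if s not in EXCLUDED]
print(tradable)  # ['AAPL', 'MSFT', 'XOM', 'GE']
```

A set lookup keeps the filter O(1) per symbol, and the exclusion list can be edited without touching the ranking logic.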

Hi Anthony, the solution you seek is already there.

That is what Thomas' modifications are all about. His version of your program enables an increase in the number of stocks to be traded as equity increases. This in turn increases the trading activity, which in turn increases profits, which in turn enables buying more and more shares of more and more stocks. And, if you remember, it is also all you see in my own trading strategies. It is why I find it easy to detect this kind of thing in somebody else's program.

What Thomas presented was a solution to your original question. What I see in it is a bit more, as chart #15 could attest, if it could be completed.

As mentioned before, that strategy is not that scalable in its current design: if you put in more capital it will produce more, but not proportionally more. A 10-times increase in initial capital does not translate into 10 times more in generated profits, as it should (see a previous post which makes that point). And this even when the number of stocks increases over time, as it does presently. Scalability is reachable, but should be reached by other means.

You might be interested to know that your monthly rebalancing is very expensive and often not even necessary. I will have to look closer to see just how expensive it is; all I see for now is that it is very expensive. There might be some other benefit to it... I can say that in chart #15 there is a tremendous amount of trading going on. How else could it reach those levels?

Hope it can help.

Anthony, if you think that the market is going to go down to a degree that is of consequence, then I would say, at a minimum, there would be no reason to buy or hold any long positions, even for a machine. And having designed a long-only machine, you might not have designed the right one for the job, considering your views. I guess you would be mostly in cash equivalents all the time, in continual fear of a market crash that would always be coming.

The asset switcheroo thingy you designed was to protect you from such catastrophic declines. You technically switched to earning interest instead of seeing your whole stock inventory decline in price (be it 10, 50, or 100 stocks). That's okay. You went as far as repeatedly paying in-and-out commissions, slippage, and opportunity costs as an insurance policy against those losses, without any demonstration that that was the thing to do.

You are hedging against being wrong while long, so you delegate a decision that you cannot take by yourself to a moving-average crossover system. To me, that's not good. Whenever I see a system like that, I don't even give it a second look, which was my assessment when I first looked at your code. There was simply nothing there, and unfortunately your simulation showed that too. The only salvage value I saw was as a coding example to refamiliarize myself with Q. I cloned your initial program again this afternoon, redid a simulation and got a 4.45% CAGR. All I could say was: it is positive...

It is only with the modification brought on by Peter, and especially Thomas, that the strategy gained some value. But I don't think Peter or Thomas saw what I saw in their code. Otherwise, they might have displayed the same kind of charts I did.

You start with 10 stocks just because you really can't afford to play more. Do the math: $100k to start with, across 50 stocks, is $2k allocated per trade. Now estimate the number of shares per trade. So, your point seems trivial. If you want to trade more stocks, increase your initial capital, or let your system grow the cash. Thomas' solution is a better choice. It is such a better solution that I could take his version and push its performance to what I consider high CAGR levels. You have several charts that show that.
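The "do the math" point above can be made concrete with a few lines; equal-weight allocation means capital per position, and hence share count, shrinks as the stock count grows (the $40 stock price is a made-up example):

```python
capital = 100_000

for n_stocks in (10, 50, 100):
    per_trade = capital / n_stocks
    # Shares of a hypothetical $40 stock at that allocation.
    shares = int(per_trade // 40)
    print(n_stocks, per_trade, shares)
```

At 100 names on $100k you are down to roughly $1k per position, where commissions and slippage start to dominate, which is why letting the position count grow with equity is the more workable route.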

Whereas, I could not do that with your code, especially starting with 100 stocks. Try it, you only need to change one number in your code. That's the difference. With Thomas' code, the inventory could start with 10 stocks to eventually reach over 900 stocks traded in a single day on the condition that there were sufficient funds. Your code did not allow this, and to reach the performance level showed in chart #15, it is almost required.

Note that at every sign of a possible decline, the strategy reverts almost entirely to cash; that is its mission. So why the fear of a depression? See how it behaved during the financial crisis: it went through it with flying colors. If at any time you feel the market is going to collapse, the solution is simple: get out of the way. Your program will never feel that..., so you take charge.

When designing trading strategies, we should be consistent with what we are trying to do and where it leads. A long-term simulation will show this, just as a simulation of your own program showed it was not enough; one needed more.

I showed several tests made using modified versions of your program. They were run under the same conditions: the same software, the same cloud environment, the same data provider, over the same trading interval. I did not change any of the functions, not a single line of code. But changing the default values has, I would say, totally changed its mission; it gained some long-term vision.

Don't you see a methodology choosing its own compromise between risk and return, while still wanting higher returns? To gain a profit, one has to participate and take a position, or multiple ones, and not hide underneath a blanket with the cash under the mattress.

Guy,
I added the feature of adjusting the number of assets based on portfolio size to combat liquidity problems that eventually get quite bad when the base strategy is effective.

As for displaying charts, I'm not sure I have the tools to do this well, and I realize that I am just a quant dilettante.

If the backtests ever finish I'll post some results for weekly rebalancing and days_offset of 0, 1, 2, 3, and 4.
I think I'll start a separate thread for that, since the weekly strategy is much modified from Anthony Garner's original.
Also, my PC lags so badly when I load this thread that I have trouble scrolling or editing and my tablet can't load it at all.
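Peter's feature of adjusting the number of assets to portfolio size could be sketched roughly like this; the thresholds and the linear rule below are hypothetical illustrations, not his actual code:

```python
def target_num_assets(portfolio_value, min_assets=10, max_assets=100,
                      dollars_per_asset=10_000):
    """Scale the position count with equity, clamped to a sane range."""
    n = int(portfolio_value // dollars_per_asset)
    return max(min_assets, min(max_assets, n))

for pv in (50_000, 250_000, 2_000_000):
    print(pv, target_num_assets(pv))
```

The clamp keeps small accounts from over-fragmenting into tiny positions while capping the count once per-position liquidity is no longer the binding constraint.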

Peter, sorry about that, my mistake. I mostly studied Thomas' version, not tracing back its history.

Thanks a lot, it is a great feature. It enables one to be fully invested at all times if desired. Have the stock inventory fluctuate at will, go in liquidation or acquisition mode in no time. It indirectly becomes an equity controller since you can at most put all available funds to work.

Thanks again, much appreciated.

I ran this strategy through the new Alphalens tool. It's currently set to look at 2003. As expected by looking at everyone's back tests, the early years show high alpha in the 5th quintile... Not so much in recent years.


@ Peter - This thread lags like hell for me too. I doubt it's my computer, or yours. Large threads like this would benefit from being parsed into pages and not having all the content on one URL.

Andrew, thanks for posting the Alphalens output. I need to read about that tool before commenting on the result.

The weekly rebalancing thread is here

Ok, can some kind soul tell me the logic of this strategy? It seems "too good to be true", and that is usually the time when I get curious. Reading Python is a PITA. I would like to reproduce the code in Wealth-Lab and test it with real data.
Any help is much appreciated. Thanks.
VK

Hi Volker, nice to see you here. Having used Wealth-Lab since 2004, may I offer to bridge the gap.

But first, I suggest, like Peter, to switch to the weekly view of this strategy, as it loads a lot faster than this thread. And, it provides added information that could help answer your question.

Thanks. I will ask the other 10 questions there then. You might be able to help coding it, if you haven't done it already. Hardly anyone programs better in Wealth-Lab than you. ;)