Stocks On The Move by Andreas Clenow

Book: http://www.amazon.com/Stocks-Move-Beating-Momentum-Strategies/dp/1511466146
Website: http://www.followingthetrend.com/

This is my attempt at a faithful recreation of his system. Sometimes it dips into margin a little, I haven't isolated why. Comments welcome!

Also, due credit to Ted, who shared another implementation here: https://www.quantopian.com/posts/anyone-found-a-substantial-momentum-effect (and James Christopher who says he's done one too).

Clone Algorithm (Backtest ID: 56bce47e611ec212a436ff8e)
165 responses

I've just read the book, so thanks for implementing this!

I ran your code through Pyfolio's full tear sheet, and noted a couple of interesting things:

Top 10 long positions of all time (and max%)
[u'BTU' u'BBD' u'AGN' u'OIBR' u'TII' u'GENZ' u'AGNC' u'DELL' u'KMP' u'CAG']
[ 0.525 0.159 0.149 0.147 0.141 0.14 0.129 0.124 0.124 0.123]

This means BTU gets up to 52.5% of the portfolio. As far as I can tell, the divisor is the total value of the portfolio including any cash. So this is a huge concentration, which doesn't seem right. From the charts it looks to be around end of 2007 when SPY falls below its 200 day moving average, and the algorithm stops entering new positions.

Secondly, there do seem to be times the leverage jumps above 1. For example, right at the end of 2012, the leverage increases briefly to 1.2x

I will look into it more. I suspect the error is in the logic of rebalance_positions

I am new to Python, so I haven't quite been able to fix it, but the cause seems to be the calculation of ATR for the symbol BTU on 2007-12-05. Using www.stockcharts.com I get an ATR(20) of around $30, which makes sense as the share price is around $800, so that's a normal daily move of 4%, which is high, but not ridiculous for late 2007.

Breaking the code at this point:

def desired_position_size_in_shares(context, data, sid):  
    account_value = context.account.equity_with_loan  
    target_range = DailyRangePerStock  
    estimated_atr = context.pool['atr'][sid]  
    return (account_value * target_range) / estimated_atr  

The estimated_atr is around 2, which is wrong. I'm having trouble debugging the ATR CustomFactor, as I'm not sure how to slice into the arrays using the index columns to get the window of prices for BTU.

Dan, this might help for debugging.

class ATR(CustomFactor):  
    [...]  
    btu_sid = symbol('BTU').sid

    def compute(self, today, assets, out, close, high, low):  
        btu_idx = assets.get_loc(self.btu_sid)  
        close[:, btu_idx]  # use me like this to get the column of BTU prices  
        [...]  

Thanks Luca, super helpful.

OK, so the bug is to do with back adjusted prices:

Our [quantopian's] data uses adjusted close prices. That means that the effect of all stock splits and merger activity are applied to price and volume data. As an example: A stock is trading at $100, and has a 2:1 split. The new price is $50, and all past price data is retroactively updated 2:1 (volume data is correspondingly updated 1:2). In effect, that means you can ignore stock splits unless you are storing prices in a live trading algorithm. In a live trading algorithm, the stored variable needs to have the split applied to the price and volume. In a backtest, the split is already applied retroactively to all data.

The ATR of $2 is calculated using closing prices, which were around $55-60 in the 20 days leading up to the calculation date of 2007-12-05. This means the algorithm correctly calculates it must purchase 50 shares, worth around $3000. The actual purchase takes place using back adjusted prices, which is around $800 at the time. This is because BTU had a 15-1 stock split in 2015. In other words, the algorithm is trying to buy 50 of the "new" shares, which is equal to 50x15=750 of the "old" shares.

I suspect the best fix is to calculate the ATR using back adjusted prices, rather than close prices.

Further to this, I note this algorithm uses pipeline, which adjusts for splits and dividends very differently from the old backtester:

https://www.quantopian.com/posts/the-pipeline-api-dividends-and-splits-what-you-need-to-know

It seems the ATR is calculated correctly using prices that are split and dividend adjusted to the date for which the function is being called. However, it seems the order_target function is using the old backtester paradigm of back-adjusting the data for all past and future splits, before the algorithm starts. (I note this is what Yahoo does in their Adj close column if you grab historic data for a specific year.)

I am not sure what to do here. Is there a version of order_target that is compatible with pipeline? Position sizing is critical to the success of this (and most algorithms), and ATR position sizing is pretty common.

Excellent catch. Since ATR isn't used in the pre-universe screening, only for position sizing, it would be simplest to just calculate ATR on data from 'history()' and use that to determine position sizes.

I'll do that when I have time, or of course anyone else is welcome to.
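A rough sketch of what that could look like (assuming talib and the newer data.history call; atr_from_history and the 0.1% target_range default are placeholders, not the thread's actual code):

import talib

def atr_from_history(data, asset, window=20):
    # pull extra bars so talib has a full warm-up period
    bars = data.history(asset, ['high', 'low', 'close'], window * 3, '1d')
    atr_series = talib.ATR(bars['high'].values, bars['low'].values,
                           bars['close'].values, timeperiod=window)
    return atr_series[-1]

def desired_position_size_in_shares(context, data, asset, target_range=0.001):
    # ATR and fill prices now come from the same as-of-date adjustment treatment
    account_value = context.account.equity_with_loan
    return (account_value * target_range) / atr_from_history(data, asset)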

OK, Simon, I've done as you suggested. Something's not quite right, as I still get an oversized position.

Top 10 long positions of all time (and max%)  
[u'LIFE' u'GENZ' u'AGN' u'TLD' u'AGNC' u'ELE' u'NTT' u'T' u'YPF' u'TXU']
[ 0.545  0.11   0.107  0.103  0.102  0.101  0.101  0.099  0.098  0.097]
Clone Algorithm (Backtest ID: 56c2279c8a01f41198062bb1)

Nice thanks! Looks pretty good even as is, I am sure this could be turned into something tradable.

Hi Simon

I was wondering why you have a CustomFactor called Momentum() that you don't use, and instead you define and use MomentumOfTopN() that internally applies the market cap filter. I can see your comment "# get our universe in here again because lame", which made me smile. Why couldn't you just use Momentum() -- was it a speed issue?

Yes, it times out. Won't be necessary to do the universe selection within the factor once factors can be masked with a screen.

re: the negative cash, I think this is because in the rebalance, we don't check available cash to see if we can afford to increase the position size. For my version, I am going to do two things: only allow the rebalance to decrease position sizes, and cap all positions at, say, 5% of the portfolio. That also seems to be important for stocks which suddenly go very low vol, which happens a lot prior to takeovers/mergers. We definitely don't want to be mixing merger arbitrage into this strategy! Could also use the EventVestor M&A dataset, but $100/mo is too much for me when I only plan to invest $25k or something into this strategy.
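A minimal sketch of those two constraints (desired_position_value is a hypothetical stand-in for the ATR-based sizing, and the data.current / order_target_value built-ins are assumed; this is not the thread's rebalance_positions):

MAX_POSITION_PCT = 0.05

def rebalance_positions(context, data):
    portfolio_value = context.portfolio.portfolio_value
    for asset, position in context.portfolio.positions.items():
        current_value = position.amount * data.current(asset, 'price')
        target_value = min(desired_position_value(context, data, asset),
                           MAX_POSITION_PCT * portfolio_value)
        # only ever reduce, so a rebalance can never push the account into margin
        if target_value < current_value:
            order_target_value(asset, target_value)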

@ Simon,

  1. Did you finally find evidence of a momentum effect
    (https://www.quantopian.com/posts/anyone-found-a-substantial-momentum-effect)?
  2. It seems you are pulling from everything in the Q database, versus filtering out undesirable stuff (e.g. pink sheets, when-issued, LPs, etc. as illustrated on https://www.quantopian.com/posts/equity-long-short). In the code above, you mention "crop down to our pseudo-S&P 500 universe" so I'm wondering, are you intending to grab stocks from a S&P 500 proxy? If so, is that actually being done?
  3. From the results above, I don't see the allure of the strategy. I guess the idea is that if there's another bubble/run up to a crash, as occurred prior to the recent Great Recession debacle, the strategy might outperform and automatically avoid going into the red. Beyond mid-2009, it just appears to sorta track the S&P 500.
  4. Is your overnight gap computation at all realistic, since you can't use USEquityPricing.open (since it is mysteriously broken, per the Q help page)?
  1. Not really, this is just a backtest, and I am not yet convinced. There are some nice techniques in it though.
  2. This is just a filter by market cap, no other filtering. Some people might want to filter more out.
  3. Yeah, it's not outstanding.
  4. If there's a gap of over 15%, chances are most of that gap was over night, not in continuous trading, so it doesn't make much of a difference whether you calculate from open or close.

I fixed a couple of things mentioned in this thread, and simplified the algorithm.

  1. Fixed the BTU (ticker) issue, where ATR is calculated using different backadjustment paradigms
  2. Fixed the LIFE issue, where ATR becomes super narrow because the stock is an acquisition target. Filter out stocks with ATR/Price below 0.5%, which is well below the normal range (sketched just after this list). This avoids ending up with huge dollar values on very low volatility stocks.
  3. Removed maxgap and replaced the complicated R2 x regression with simple momentum. My sense is that Andreas included these because his system is semi-manual; they make the trades more palatable by making the trends look really obvious on a chart, which is a nice-to-have in my opinion.
  4. Collapsed all lookback periods to 252 days (1 year), to reduce the number of parameters. 100 days works well too.
  5. Tidied up the filters/factors, to use some cool features of pipeline
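For point 2, a minimal sketch of the low-volatility screen (the names are illustrative; the 0.5% cutoff is the one mentioned above):

MIN_ATR_TO_PRICE = 0.005

def passes_volatility_floor(atr, price):
    # drop names whose ATR is under 0.5% of price (e.g. pending-acquisition targets like LIFE)
    return (atr / price) >= MIN_ATR_TO_PRICE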
Clone Algorithm (Backtest ID: 56c9d3e0081e330de4368e8b)

I was wondering why the backtest gives a Sharpe of 2.82 (which would be amazing) but the pyfolio tearsheet gives a much more reasonable Sharpe of 0.97.

Backtest  
annual_return          0.17  
annual_volatility      0.18  
sharpe_ratio           0.97  
calmar_ratio           0.79  
stability              0.93  
max_drawdown          -0.22  
omega_ratio            1.19  
sortino_ratio          1.38  
skewness              -0.32  
kurtosis               2.70  
information_ratio      0.04  
alpha                  0.13  
beta                   0.56  

EDIT: not sure why the notebook is showing that pane.

Notebook attached (preview unavailable).

I don't trust the backtest Sharpes; I think they are not annualized correctly or something, or perhaps they use the price return of bonds as the risk-free rate, rather than the continuously compounded yield of bills.

Hey guys, we're aware of the faulty numbers in backtesting; they have to do with the risk-free rates being used to calculate the values. It's on our short list to fix. For now, use the values returned by the tear sheet in pyfolio, they are correct.


@ Simon and Dan: I did a little modification on the original algo because morningstar.valuation.shares_outstanding is not supported for real money trading yet. The only change I made is the MarketCap class:

class MarketCap(CustomFactor):  
    inputs = [morningstar.valuation.market_cap]  
    window_length = 1

    def compute(self, today, assets, out, market_cap):  
        out[:] = market_cap[-1]  

The attached backtest is based on the modified algo for real money trading. However, the total return drops quite a bit compared to Dan's result. Do you have any clue why this custom class costs so much in return and Sharpe?

Thanks.

Clone Algorithm (Backtest ID: 56d1078ce295740df68322c3)

Hi guys,

I just found this thread and thought I'd throw in a couple of points:

  • Bear in mind that the model in the book, just like in my first book, is meant primarily as a teaching tool. That is, it's a method that works as is but can be improved upon in various ways. It's not meant as an optimal model ready for deployment, but rather as a sound foundation. It's likely to yield nice results over time, but I'm sure you guys can do better.

  • Turnover is an issue. If I could redo something in the book, this would be it. As a fund manager, you don't have to worry about capital gains taxes, and trading costs are so low that you can almost disregard them. Even for private accounts, here in Switzerland we don't have capital gains taxes. This of course is not the situation for everyone. I should have designed the model to greatly reduce turnover, as that will be more of an issue for most readers than for me. I suggest modifying my code to reduce trading. For starters, move to monthly rebalancing and raise the hurdle for position-size rebalancing.

@Simon

Hi Simon,

The commission and slippage model is set to,

set_slippage(slippage.FixedSlippage(spread=0.01))  
set_commission(commission.PerShare(cost=0.0035, min_trade_cost=0.35))  

Are these more reasonable parameters for this kind of algo as opposed to the default setting?

Regards,
Mark

I think so, yes, but it depends on the precise stocks you are trading and the nature of your account with Interactive Brokers.

This is a variation of the model, which does not use a trend filter and has simpler rules. It's more reactive but has a little deeper drawdown. I plan to show this model at QuantCon and will release the full source code after the conference.

Instead of a trend filter, it has a minimum adjusted slope as a requirement.

Clone Algorithm (Backtest ID: 56f00288fa82710dfd14aec3)

@Andreas - just to check, you know the source code is attached to the backtest?

@Adam - the equity curve looks quite similar, just lower overall return. I would guess that market_cap is somehow only available for lower volatility (ie larger cap) stocks, and this is dampening the return.

I wonder if market cap in fundamentals is only updated quarterly, so the list of highest market cap stocks is "out of date" relative to my list (current price x last quarter's shares outstanding). My list would be biased toward higher current-price shares more than your list. I can see how this could pick out recently trending stocks.

Apologies for all the posts, I can't edit my previous comment on phone.

A good workaround for live trading would be to divide market cap by last quarter's price (from a lookback on close, or from fundamentals) and multiply by the current price (from close).
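Something along these lines, perhaps, as a pipeline factor (a sketch of the idea only; the 63-day window standing in for "last quarter" and the class name are assumptions):

from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.data.builtin import USEquityPricing

class AdjustedMarketCap(CustomFactor):
    inputs = [morningstar.valuation.market_cap, USEquityPricing.close]
    window_length = 63  # roughly one quarter of trading days

    def compute(self, today, assets, out, mcap, close):
        # fundamental market cap / last quarter's price * current price
        out[:] = mcap[-1] / close[0] * close[-1]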

@ Dan: That looks like a good workaround. Do you have a backtest result you can share with us?

Thanks.

@Adam: I don't, but maybe the invisible hand will deliver

@Andreas - just to check, you know the source code is attached to the
backtest?

Well, now I do. :)

No matter, I was going to release that in the wild anyhow. I hope you find it useful. James Christopher has kindly helped me a lot with the code, since I'm not at all a Python coder.

Many thanks for the algorithm and the code, Andreas!

Can the indicator that you are using -- the product of the slope and the R^2, if I understood correctly -- be given a straightforward (statistical) meaning? Is there an intuitive way of understanding what it signifies and why it works so well?

Thanks again,

Tim

What? Didn't everyone read Stocks on the Move already? It's all explained in there. :)

Ok fine. Here's the logic:

First, we use exponential regression to measure the momentum. As opposed to its linear cousin, we get the slope expressed in percent. The daily slope gives you a number with many decimals that's hard to relate to, so let's annualize that sucker. Now you have a number which answers the question "How many percent would this stock make in a year, if it were to continue the same trajectory as the recent past?". No, we don't expect that to happen, but it gives us a number that we can relate to.

But what about volatility, you ask. Wouldn't this reward extreme situations, like takeover bids and crazy vola? Yes, but that's where our friend R2 comes into play. The R2, or coefficient of determination, is a number between 0 and 1 which tells us how well our regression actually fits the data. It would be 1 if all the actual data points were exactly on the line.

So, now we simply punish the volatile stocks by multiplying all of our annualized regression slopes by the R2. A stock with a nice gradual slope will have a high number and won't take much of a hit. An overly volatile stock filled with big gaps will get pushed down the ladder.

Now we simply buy stocks from the top. Positions are vola-parity sized and rebalancing is done monthly. Voila. Simple and robust.
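A toy illustration of that ranking (synthetic prices, not the book's exact implementation; Andreas posts his own function later in the thread):

import numpy as np
from scipy import stats

prices = 100.0 * np.exp(0.001 * np.arange(90))            # a smooth 0.1%/day uptrend
x = np.arange(len(prices))
slope, intercept, r_value, p_value, std_err = stats.linregress(x, np.log(prices))
annualized = (np.exp(slope) ** 250 - 1) * 100             # ~28% "if the trend held"
score = annualized * r_value ** 2                         # volatile, poorly fitting trends get pushed down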

Now go buy my book! :)

Sincerest thanks for a very clear explanation, Andreas. You are probably hardly going to believe this but I have independently developed a very similar strategy, as far as I can judge, in which I use the ratio of the average return and the volatility as a ranking criterion (both estimated on a year's worth of daily close prices), but it does not work nearly as well as your method. I suspect the crucial difference may be that I do not carry out any volatility-based adjustments of the positions.

No problem, Tim. Doesn't surprise me at all if you did something similar. I'm convinced that a lot of people in the business already know all of what I write about. Hopefully I can explain it better and contribute by teaching and improving upon existing concepts. As opposed to many authors, I never claimed to publish or sell some sort of super system that will make millions in up and down markets, compound at hundreds of percent a year, revolutionize the field and such.

I hope to publish slightly improved variants of concepts which most in the quant HF field are already familiar with, making them accessible and explaining them to people outside this particular field. Without the silly hype.

But that's not going to stop me from calling the annualized regslope multiplied by R2 by the name Clenow Momentum from now on. Yes, you heard it here first. The all mighty Clenow Momentum indicator is here. Because why not. And because I would find it funny if that name starts showing up in standard charting packages. And because putting your name on stuff seems to qualify you for running for president these days.

Hi Andreas,
I tried to run your code but it does not seem to generate any orders? BTW I just finished reading your book and I absolutely loved it. I am new to Quantopian and I think your book is a great place to start!

Did you run the algo in minutely mode? If you run it in daily mode, it will probably fail to make orders.

Thanks, Dan. Exactly right.

I'm glad you liked the book, Shrikrishna!

Thank you for your explanation, Andreas. Can you tell us a bit more about your exposure? It seems to be the biggest driver of your positive relative performance. It seems that you are 'long' at the right time, and not exposed at (almost exactly) the right time. Obviously that is what a momentum indicator has to do, but yours seems to be very good at timing. Can you tell us a bit more about that?

Simple really, Wouter. The model buys positive absolute momentum. If there are not many stocks available with positive momentum, there's nothing to buy. As you see in the code, there's a variable for minimum allowed momentum.

There's no special timing mechanism at work. It's just that we don't buy stocks that aren't performing. And in a bear market, almost nothing is.

Why do we see trades in minute mode while daily mode fails? Can someone explain the technical reason?

Hello Andreas,
I was working with your algorithm before Quantopian upgraded to "Q2". I became interested in your momentum ranking after watching your "Trade like a Chimp" presentation via the quantcon live stream, so I was happy to find the algo here. But I ran into an issue when trying to resolve the "shares_outstanding" live trade limitation that Dan H pointed out. However, when converting the MarketCap function to use the morningstar.valuation.market_cap data directly from pipeline the results were affected more than I expected.

First things first, I modified the code to take advantage of the Q2 enhancements. A summary of the changes are below, along with a backtest of the updated algorithm. I have to say, Q2 really did increase the speed of this backtest significantly.

Revisions:
Corrected the following errors to migrate the algorithm to "Q2"
Line 60: Function update_universe is deprecated.
Line 104: The history method is deprecated. Use data.history instead.
Line 105: The history method is deprecated. Use data.history instead.
Line 106: The history method is deprecated. Use data.history instead.
Line 137: Checking whether an asset is in data is deprecated.
Line 139: data[sid(N)] is deprecated. Use data.current.
Line 141: data[sid(N)] is deprecated. Use data.current.
Line 117: data[sid(N)] is deprecated. Use data.current.
Line 119: Checking whether an asset is in data is deprecated.
Line 122: Checking whether an asset is in data is deprecated.

Also added error checking to the position sizing function. The get_position_size function threw an input-length exception during one of my backtests. I assume that for whatever reason the backtest environment wasn't able to collect full price data for a particular stock and this broke the talib ATR function. To fix this, the function will return 0 if the ATR function throws an exception. It will also log a warning. This seems like a fair way to avoid ordering a security for which a risk-adjusted position size cannot be calculated.

Clone Algorithm (Backtest ID: 57190b259403381118eb6e51)

Here is a second version of the algorithm. The only change is that this version captures market cap as:

class MarketCap(CustomFactor):  
    inputs = [morningstar.valuation.market_cap]  
    window_length = 1

    def compute(self, today, assets, out, mcap):  
        out[:] = mcap[-1]  

as opposed to the below, which requires the shares_outstanding data which is unavailable in live trading:

class MarketCap(CustomFactor):  
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding]  
    window_length = 1  
    def compute(self, today, assets, out, close, shares):  
        out[:] = close[-1] * shares[-1]  

I expected the results to be nearly identical, but as you can see the returns are significantly lower. This suggests that the universe of stocks used to trade has a large impact on the system's performance. I found a post a while ago which makes a strong argument for why a difference in the stocks in the universe can lead to these results, but it doesn't explain why the results of the two versions of the MarketCap function are so different.

The issue probably stems from errors with the back adjustment of the values, possibly the same errors that motivated Quantopian's developers to disable the shares_outstanding data in live trading. They could be causing one version to return a list of large cap stocks and the other (relatively) mid/small cap stocks. If this were the case the performance differences would stem from the difference in returns of large and small or mid cap stocks over the backtest period.

Either way, the results motivate further testing. I am working on moving the slope calculation into the pipeline itself so that the full 8000+ stocks can be ranked and tested against, rather than a market-cap-based subset. This will allow a test of the performance of the "Clenow Momentum" ranking independent of market cap. I am also working on a research notebook so that I can replicate your random simulations with a faster vectorized backtester. Do you have any suggestions of what else could be causing this that I could look into as I test?

Clone Algorithm (Backtest ID: 571926bb8425b30f73f39a47)

A silly question: I tried to clone it and run the full backtest, but the most recent data I can retrieve goes only to Apr 1st. Is there a limitation on Quantopian, or did I get something wrong?

Hi Shawn,

Sorry for the late reply.

I have to admit that I'm not overly familiar with the Quantopian environment and technical details. You seem to have found an interesting issue though. I didn't write the Quantopian code at all actually. I gave my spec to James Christopher at Quantopian who kindly coded it for me.

My local solution, built in C#, is not looking at market cap at all, but rather at historical index membership. I made an 'indicator' where you input a stock ticker and an index ticker, and it will return 1 on any day that stock was part of that index, else 0. This way I can make sure that the simulation only considers stocks that were part of a particular index on a particular day.

I've done this on many different indexes, American and international, large cap, mid cap, small cap etc, and the concept appears robust across all of these. Of course, it works better on larger indexes with at least a few hundred stocks to pick from.

Andreas,
Thank you for the reply. I did some comparisons in the research environment and found that the manual calculation returns values that can be many times higher than directly querying the historical market cap. The unadjusted number of shares outstanding is causing stocks that have split to be valued in multiples higher than they should be. This is bumping small(er) cap stocks up into the list and increasing the portfolio exposure to small cap stocks. It also explains the performance boost as small caps have tended to outperform the market.
But, as you've said in your book, it can be ok to have an exposure to a risk factor as long as you're aware of it. And in this case I think this is a chance for smaller momentum investors to take advantage of their size and capture returns in stocks that might not be liquid enough for large momentum funds.

Actually, it seems that, based on the code below, some negative slopes might be promoted over some positive slopes through the even exponent in the power:

annualized_slope = (1 + slope)**250  

So a slope of -2.9 will yield a higher result in the above expression than a slope of 0.8. I don't know how realistic these slope numbers are; in my research with OLS I haven't seen such steep slopes.

I made a little correction to the annualized slope calculation for negative slopes:

annualized_slope = (1 + abs(slope))**250 * (-1 if slope < 0 else 1)
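A quick numeric check of the point above (the slope values are deliberately extreme; real daily regression slopes are tiny):

naive = lambda s: (1 + s) ** 250
signed = lambda s: (1 + abs(s)) ** 250 * (-1 if s < 0 else 1)

print(naive(-2.9) > naive(0.8))    # True  -- with the even exponent, the crash outranks the uptrend
print(signed(-2.9) > signed(0.8))  # False -- with the sign kept, it does not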

Bruce, I've been noticing that issue lately, but it seems to resolve itself with a browser refresh.

When is the portfolio sold and re-adjusted based on the new ranks? In the code I only saw a rebalance trigger based on +/- 0.5% change in portfolio value. Is there also a time-trigger (monthly / weekly) for readjusting the portfolio with new stocks based on ranks?

thx
Kiran

I took Shawn's version of the program, since deprecation issues had been resolved.

Made some modifications intended to raise total return.

Each modification is listed at the end of the program with its corresponding output and reasoning. Each test was done once.

Here is the summary of those tests:

Stocks on the Move – With Mods

http://alphapowertrading.com/images/divers/Stocks_on_the_Move.png

These modifications raised the CAGR to 24.82%, without any change to the program's logic or procedures. That is more interesting than the 11.46% of the original version.
I went for the pressure decision points in this program. I did not change code, only assumptions, meaning that the default values of specific variables were changed with the intention of having the program produce more. Technically, I have not read the whole program in detail yet, but I expect to be able to do even more when I do.

So, here is the last backtest:

Clone Algorithm (Backtest ID: 57c4c28c30fa74100ba5c84b)

@Guy

I've been implementing this strategy myself before coming across this post. Very impressed with your results, but i have one question. You state:

4th mod: context.risk_factor = 0.010; was 0.001. Will allow more positions to be taken

My question is how does that allow more positions to be taken? Doesn't it result in larger position sizes and fewer positions?

Can we add SPY as a reference in the following way:
if SPY has a negative slope (which indicates a downturn), short a certain amount of SPY as a hedge?

Can someone help with the coding?
I intend to replace the hard-coded criterion of comparing slopes > context.min_momentum

slopes = slopes[slopes > context.min_momentum]

and instead compare recent slopes against slopes lagged by x days.

Clearly this has spectacular returns and is well written... the draw-down, however, is significant in the 2008-09 period and early 2016 drops... what can we do to reduce the draw-downs on this?

@caleb It results in a lot fewer positions. It translates to 10% of the account being used for the ATR of each position. If you run the backtest you'll see that only 3-6 positions are open at a time. E.g., if you've got a 100,000 account and an ATR of 1.4, it'll buy 714 shares = (100,000 * 0.01) / 1.4. If the cost of the security is, say, $50 you'll end up spending $35,700 on the first stock, or 35% of the account. The cheaper the securities, the more positions you'll have. FYI the same period with a fixed 10% per position results in 820%.
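The arithmetic in that example, spelled out (same numbers as above):

account_value = 100000.0
risk_factor = 0.01             # the modified setting
atr = 1.4
price = 50.0

shares = account_value * risk_factor / atr   # ~714 shares
dollars = shares * price                     # ~$35,700, i.e. ~35% of the account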

I'm a purely systematic futures trader, teaching myself python so I can use quantopian, and reduce my reliance on contract programmers and ninjatrader (which I hate with a biblical fury). I read Mr Clenow's excellent book, and believe the concept is both valid and robust, and not difficult to improve upon in easy ways.

The most obvious way to improve upon it is to lower the drawdowns and then use those lowered drawdowns to up the leverage.

One way I have used in the past was borrowed from a hedge fund friend of mine, and simplified. It works like this.

Plot your equity curve. Plot an all time high on it, and a 20 day low. It should look something like this.

https://snag.gy/AB7pfE.jpg

Measure the distance between your all time high and the 20 day low (or monthly low, or whatever, it doesn't matter and shouldn't really be optimised too heavily) and calculate the percentage from the bottom.

If you are half way between the 20 day low and the all time high, rebalance so that your positions are half size. If it's 25% of the way, reduce position sizing to 1/4, etc.

There are a few variants of this I have tried, and I favour turning the system off and paper trading (to preserve an equity curve so you know when to turn it back on) below 25%.

On intraday systems or very prolific systems, a good option is to ONLY trade the system for real money when the equity curve is within 25% of the all-time high (against the 20-trade low or whatever you are using as a low). Intraday systems tend to run "hot and cold", going through periods of superperformance then underperformance, so this neatly avoids having to use lagging, and inevitably backward-looking, market regime filters like, for example, trading only when the market is above the 200 SMA.
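A sketch of that throttle as I read it (not Scott's actual code; equity_curve is assumed to be a list of daily portfolio values):

def exposure_scale(equity_curve, lookback=20, floor=0.25):
    # scale gross exposure by where equity sits between its 20-day low and all-time high;
    # below the 25% floor, go flat and paper trade
    all_time_high = max(equity_curve)
    recent_low = min(equity_curve[-lookback:])
    if all_time_high <= recent_low:
        return 1.0                     # at new highs: full size
    pct_of_range = (equity_curve[-1] - recent_low) / (all_time_high - recent_low)
    return 0.0 if pct_of_range < floor else pct_of_range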

After years of trying to match market regime filters to systems, I gave up when I saw this. Fidelity hedge fund in Japan has used something very similar in production for a long time.

I really think this would work very well on this system :-)

@Guy Fleury --

I have cloned your algorithm and played around with it a bit. I'm a bit green when it comes to investing, but I'm toying with testing a few strategies with a small amount of capital. A few questions before I jump into the deep end with this one:

1) Sorry if this is a naive set of questions -- but can this strategy work as well without utilizing leverage? I notice that it appears to stay below 2x exposure, but I don't see how that is controlled within the algo. It appears to leverage and rebalance shortly thereafter. Does that lower the risk of using leverage? Would a first time investor like myself be granted 2x leverage or need to provide collateral in case of a margin call?

2) The backtests on a very low amount of capital (3-5k) appear to perform really well -- I recently started a paper trading test with 4k and will see how that pans out in the next month or two. In the meantime, is there anything I should watch out for with live trading with real capital at this level of investment?

Thanks.

@Marc,

Strongly recommend you read both Mr Clenow's books before trading this live. Without a more than superficial understanding of the mechanics you will almost certainly be tempted to turn it off or tinker with it when it goes into drawdown. I think he intended this system (and his previously published one) to be illustrative of a concept and not a real production system. It should be quite easy to improve on the concept in material ways without sacrificing robustness (the more conditions you add to a system, the more fragile it becomes: the return improves, but one day it falls apart inexplicably).

The low-hanging fruit might be improving the trend filter (a single MA is quite lagging), adding some short SPY as a hedge when the market goes bearish, doing momentum by sector as well as individually (and allocating to the strongest sector first), or expanding the universe (more universe, more opportunities). Also my previous suggestion of scaling up and down depending on how well the system is performing generally makes solid improvements.

Has anyone got other simple ideas for improvements? The concept seems very sound to me, and the fact it works across different markets gives me more confidence in its robustness.

Treatment of margin with interactive brokers is covered here https://ibkb.interactivebrokers.com/article/2085

Absolutely right, Scott. All models I've published in my books are designed to be teaching tools, not The Clenow Super System. The idea is to teach the concept and explain the logic.

I honestly think that it's of much greater value to readers if I publish simplified learning models than a polished production model. My intent isn't that everyone start trading my exact rules. It's that readers learn how to do these things for themselves, how to adapt models to their own need and develop their own. After all, as the old proverb goes "If you give a freezing man a match, he'll stay warm for a second, but if you set him on fire he'll stay warm for the rest of his life". Or well, something along those lines. Proverbs are hard.

I am actually working on a new book, where I intend to show much more advanced momentum models, among other things. My current plan is to provide Quantopian code wherever possible, to demonstrate the models. They will be a bit more complex models and will be user configurable.

I'll have futures models in the new book as well, and I hope the Quantopian futures functionality will be finished enough so I can include source for those too.

@Scott,

Thank you so much for posting the link regarding IB's buying power and margin. Their customer service is the biggest pile of S**T I've ever dealt with (pardon me...). I have opened five different tickets regarding the API and its integration with leverage. All they ever do is refer you to their section on portfolio margin. You are a true lifesaver! Thank you again.

I played a bit with it, activating only the market (SPY) trend filter and not the SMA filter on individual stocks gives a bit more juice.

DD is also down at 35%. Next mod: when the market trend filter is down, sell faster... stay tuned

I'm halfway reading the book, will finish it first :-)

Clone Algorithm (Backtest ID: 58d1cc40cf5d331d7971b029)

Hi Charles,

I just had a quick look at the code, and have a couple of comments.

The model in my book is meant for you guys to experiment with and improve. There are plenty of ways to improve it, but it's a good base to start out from. I'm planning on releasing a more advanced version soon, with more features and a lot of settings to play around with.

There's a dangerous thing I see here though that I'd caution against. I see that the risk factor is raised to 0.01, with an interesting comment (4th mod): "Will allow for more positions to be taken". No, it will do the absolute opposite. It will reduce the portfolio to 3-4 positions, creating extreme portfolio concentration, killing diversification and adding huge event risk.

The current iteration has the following portfolio composition, as of March 17, 2017:
AMD: 20.5%
NTAP: 56.9%
NVDA: 28.0%
UAL: 35.5%
Cash: -40.9%
Total equity exposure: 140.9%

The backtest looks good, in theory. But make no mistake, this is an extremely dangerous strategy in that shape. 57% in a single stock is not something I'd recommend.

I'd suggest to either lower the risk factor, or change the logic to have a fixed target number of stocks.

I like Charles' modifications to this program. In periods of possible turmoil he reduced leverage to close to zero by turning to cash as drawdown protection. It does answer the question: why be long if the market is going down? Also, his methods only touch the highest-capitalization stocks, meaning stocks with a very low probability of going bankrupt the next day or so. In fact, over the 14.42 years, none did.

As for the times leverage was used, it appears as a reasonable price to pay for the added excess return.

Leverage will have to be paid for, but only for the periods where it exceeded 1.00. It appears it was used in spurts amounting to about one third of the time or less, with an estimated visual average of about 1.10.

I pushed a little bit more. First, I set the initial capital at $1M. Then set leverage explicitly at 1.30. A procedure which would entail higher leveraging fees. The question being: would the strategy withstand the added pressure. The following formula was used to estimate the leveraging charges: A(t) = (1 + L)∙A(0)∙(1 + r + α – 0.03∙L)^t.
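In code form, that estimate reads as below (a transcription of the formula above; the 0.03 is the assumed annual charge on the levered fraction):

def estimated_final_value(a0, years, r_plus_alpha, leverage, fee_rate=0.03):
    # A(t) = (1 + L) * A(0) * (1 + r + alpha - 0.03 * L) ** t
    return (1 + leverage) * a0 * (1 + r_plus_alpha - fee_rate * leverage) ** years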

Result: 26.61% CAGR, after all expenses paid. Before leveraging fees: 28.41% CAGR.
Total net liquidating value: $ 30,071,752 after expenses.

Note that my estimate in this case does over-charge on leveraging fees ($6,791,416).

Clone Algorithm (Backtest ID: 58d29699ff51221bfedba318)

Guy Fleury, it's the first public algo I've seen that generates a positive return while having:
- Commissions set
- Multiple positions opened every day
- $10,000 as initial capital

Time to scrutinize how exactly it works

I'm only looking at the most recent algo. It is not what it appears to be, due to over $12M in hidden margin. So you would have to tell an investor: this can turn 13M into 36M in 14 years, and it's the same as if you just buy SPY, except we haven't considered margin costs, so never mind.
Try scheduling the closing of any positions that need to be closed at market open separately to help improve on that, and then I'm pretty sure you'll have something impressively profitable. It is some good work. It just has that one boo-boo, hidden margin, which is so common.

Edit: Now I see that function is per security (not enough sleep), so the following comment doesn't apply here: When dropna() is used on a dataframe with more than 1 column, in this case any time any stock has a nan, that date is removed for all stocks. Might not be what you want. In place of highs.ffill().dropna(), try highs.ffill().bfill()

Use PvR to see clearly.
2017-03-17 13:00 _pvr:222 INFO PvR 0.0733 %/day cagr 0.3 Portfolio value 36863169 PnL 35863169
2017-03-17 13:00 _pvr:223 INFO Profited 35863169 on 13477370 activated/transacted for PvR of 266.1%
2017-03-17 13:00 _pvr:224 INFO QRet 3586.32 PvR 266.10 CshLw -12477370 MxLv 1.83 RskHi 13477370 MxShrt 0
2017-03-17 13:00 pvr:310 INFO 2002-10-17 to 2017-03-17 $1000000 2017-03-22 16:06 US/Eastern
Runtime 0 hr 57.7 min

There's a kind of nifty alternative that I like for lines like this:

    if current_market_price > average_market_price:  
    #if 1 > 0: # dummy  

To make the line always run in testing you can do this since 1 evaluates to True:

    if 1 or current_market_price > average_market_price:  

On the other hand to always skip, since 0 is False:

    if 0 and current_market_price > average_market_price:  

I use those quite a bit.

Following Clenow's comments on position concentration, I changed one number. It generated a smoother equity line, increased the number of positions, reduced concentration, reduced drawdown, reduced leveraging fees, and reduced volatility, at the expense of a reduced overall profit. There is a price to pay for everything.

Nonetheless, it is still a 23.36% CAGR with a relatively low volatility number.

Clone Algorithm (Backtest ID: 58d30d354ae017176dd78430)

Hi Guy,
I think that instead of multiplying the ATR by 5 you could just change the risk factor (context.risk_factor = 0.01) to 0.002. That's the point of this variable, if I'm not mistaken.

In the same way, we could add a variable for the leverage, to be able to play with it and try different scenarios.

Hi Andreas,
It's such a pleasure to have you reading the thread. Guy Fleury showed that lowering the risk by having more positions and compensating with a bit of leverage can give the same end result.

Now, in your opinion, because you talk a lot about managing (buying) the risk and not the money in your book, is there any difference between changing the risk factor from 0.01 to a lower number like 0.001 and adding leverage to achieve the same yield? Less risk from one stock crashing our portfolio since we have more positions, more diversification, but more chance of going under because we borrowed.

Thanks!
Charles

Followed Charles's advice and eliminated the ATR. Put the risk_factor at 0.001 which should satisfy Clenow's observations.

CAGR = 27.26% including all frictional costs and leveraging fees.

Net liquidating value: $32,368,742.

Clone Algorithm (Backtest ID: 58d32fa190dbc61764ecec4d)

Here are the tear-sheets analysis on the above simulation.

Notebook attached (preview unavailable).

Now, in your opinion, because you talk a lot about managing (buying) the
risk and not the money in your book, is there any difference between
changing the risk factor from 0.01 to a lower number like 0.001 and
adding leverage to achieve the same yield? Less risk from one stock
crashing our portfolio since we have more positions, more
diversification, but more chance of going under because we borrowed.

Remember that the risk factor here just impacts your position sizes, not the overall portfolio risk. Assuming that you keep the maximum allowed exposure constant, a risk factor of 0.01 will result in higher de facto risk, as we'll end up with very few positions. Each stock will have an enormous portfolio impact, leaving you exposed to event risk and without diversification.

In a simulation, fewer stocks usually look better. In reality, that means that you will likely take a big hit when some unexpected event occurs. Just imagine the pain of having a 60% position, and suddenly see it fall 20% on a profit warning. If your portfolio holds 20 stocks or more, the event risk is lower.

Also keep in mind that this is a beta-heavy strategy. Your primary exposure is to the overall equity markets. You get a lower beta in a long-term simulation because of the built-in downside protection. But in normal markets, you're just heavily long. Leverage can be great, and it can be dangerous. I see no issue with leverage in itself, as long as you are very aware of what you are doing and why. I prefer an unlevered approach when it comes to stocks, but I enjoy taking heavy leverage on futures models.

I didn't yet look closer at Guy's latest iteration, but on the surface it looks like it's not showing exposure over 100% very often. It would be interesting to see how his version would do if you hard cap it to 100% (no leverage). I'm not saying his approach is wrong in any way, but it would be an interesting test. When I find a bit of time, I'll dig into his version and check what other improvements he made in there.

In case you guys feel like tinkering more: My own models of this kind usually have a lot more settings, for trying different variations. For this model, you could try using a fixed number of stocks, a minimum slope, removing some of the filters like percentile and trend filter, using different allocation models like equal, market cap, inverse market cap, multi factor etc.

And finally, if you'd like to study how the big boys do it, read the methodology paper for the MSCI USA Momentum Index. I wouldn't copy their approach, since their purposes are different from yours, but there are some interesting points in there. It's all public.

In Charles' program version (which is quite different from the original) the risk_factor is used to determine the bet size, making it, de facto, a fixed-fraction trade allocation method; a very old method dating back to before the sixties. See the line:

context.portfolio.portfolio_value ∙ context.risk_factor

The fixed fraction has some advantages and some drawbacks. For one, it is an easy allocation method. In this case 1% of portfolio value is allocated per position, meaning that you can have up to 100 stocks in your portfolio at any one time. This is more than enough for diversification purposes. Its drawback is that it is a limiting factor: irrespective of the merits of a position, all it gets is this 1% of equity. It is a good thing that only the highest caps that were going up are trade candidates in this case.

Any trading strategy can be expressed as: Σ(H∙ΔP). You can scale it, k∙Σ(H∙ΔP), by providing more capital. The scaling will increase the position size. Compared to the first iteration of this program, I went for k = 10. Doing so should have generated 10 times the profit. To do this I had to put down 10 times more in capital. If the trading strategy scaled well, overall, it should have provided the same return, percentage-wise, as the first program; that is, the equity line would be the same as in the first program. Scaling gives: A(t) = k∙A(0)∙(1 + r)^t. Notice that there was no alpha in the first program presented in this thread.

If I wanted to go further, I could use some leverage which would result in: (1+L)∙k∙Σ(H.∙ΔP). Evidently, I would have to pay the leveraging fees: A(t) = (1+L)∙k∙A(0)∙(1 + r - lc%)^t. This would be bad since overall it would reduce the strategy's CAGR. You would simply be assured to underperform the no leverage scenario.

However, the outcome changes a lot if you can add some positive alpha to the mix as Charles' version did. A(t) = (1+L)∙k∙A(0)∙(1 + r + α - lc%)^t.

The question should be: are the relatively small and short excursions above 1.00, which were all paid for, worth the $32,368,742 net liquidating value?

Now, that is a nice question.

Guy, you're way past my knowledge of the finance world (I'm an engineer; managing my money is more of a hobby). I'd love to give my opinion but I'm not sure I completely understand your question about the alphas.

With the added leverage (1.4), we still rarely go over 1 but have a lot less cash available if the market crashes. It depends on how risk tolerant you are. Yes, it was worth it for the backtest, but the future? Who knows!

Regarding the position size, it should be dependent on the risk of each individual stock according to Clenow's book. The decision to pick a stock is based on the slope and the r^2. Then we go down the list from the highest to the lowest score. The position size is determined by the risk coefficient (more or fewer positions) and the volatility; that's why you see the division by the average true range. The more volatile, the more you divide (lower) your position.

I agree a very volatile stock shouldn't make the list in the first place because the r^2 will be low, but it could happen if, for example, a company receives a buyout offer and jumps radically to the amount of the offer. R^2 will be low but the slope high, and we could end up buying it even if there is no money to make out of that stock.

ATR is there to take a lower position in highly volatile stocks but is, in my opinion, way too aggressive. I added an ATR factor; I turned it off for this share. Maybe I could use a fixed ATR threshold instead just to eliminate the crap, I'll have to try.

I took your latest version, also added a variable to turn On and Off a few filters and control the ATR. The results are the same, I turned off all the things you did and also set a variable for the leverage.

Clone Algorithm (Backtest ID: 58d46ac4c0b6811c1adda930)

Charles, I like your modifications and will study them. You raised many points.

You say: "...we still rarely go over 1 but have a lot less cash available if the market crashes". Note that whenever there is some market turmoil, the strategy turns to cash. This explains the dips in exposure. It is the strategy's way of hiding in a safe place. The strategy spent most of its time in cash during the financial crisis and was ready to buy shortly after it was over.

I am still looking at why the strategy behaves the way it does. If we set the leverage at 1.40, that is what we should get on average. And yet, we get much less. I will need to look into that.

The strategy takes up to 700 of the highest market caps that satisfy the condition of a positive slope >= 2% dampened by r^2, out of some 8,000 stocks. The R-squared sorting makes the strategy favor lower-volatility stocks. But 0.03∙0.70 is still larger than 0.02∙0.80, or 0.05∙0.20. The question should be: can such a general rule apply across the board? There was no proof or demonstration that this selection process provided any benefits above the slope >= 2%.

However, it does partially rearrange the order in which the top 100 stocks will be selected by favoring more those stocks sticking the closest to their respective regression lines, meaning having a lesser variance. If I change the dampening order by reversing it (1 – r_value^2), I get a lower performance level since this would favor higher volatility and lower sloped stocks.

However, if I increase R-squared, it has minimal impact, close to none, say for example: (1.4 ∙ r_value^2). And if we remove R-squared, we get something in between (1.4 ∙ r_value^2) and (1 – r_value^2).

You can increase performance by changing a single number. This one by favoring the stocks that are the most behaved in their rise: (1.2 ∙ r_value^2). This would raise the CAGR to 28.47% before leveraging fees, and to 28.17% including all expenses.

However, that is not what improved performance since (1.2 ∙ r_value^2) or (1.4 ∙ r_value^2) had the same output as (1.0 ∙ r_value^2). It was another number that improved performance: the one that raised the min_momentum to 0.05.

Clone Algorithm (Backtest ID: 58d54b26a3e4d117953847b3)

Guy Fleury, if I clone & rerun your notebook, the charts like the one showing how the algo behaved in events like Fukushima disappear :S
I click on the Run button; am I missing something?

When it comes to matching the intended model better, it will have to come from those of you who understand the theory well. You might find some pieces worth adopting here.

$2.66  GF version (including the 12M margin)
$8.11  This version attached, it has no margin
-----  
  ^  
  |______ Profit per dollar risked/activated  

comment out def handle_data() and its contents for speed.

Clone Algorithm (Backtest ID: 58d4b56894b5c81ccecb04f3)

Hi Blue,
The big drop in 2013 comes from TSLA being 2/3 of the portfolio and then dropping.

I know the ATR is supposed to deal with the risk, but don't you think we should hard-code a cap on each position as a maximum percentage of what we own? I don't think the growth would end up bigger, but it would be more linear.

Andreas,

On Mar 27, 2016 you wrote:

... we use exponential regression to measure the momentum. As opposed to its linear cousin, we get the slope expressed in percent. The daily slope gives you a number with many decimals that's hard to relate to, so let's annualize that...

In the source code by James Christopher that you published on Mar 24, 2016, line 17 is:

annualized_slope = (1 + slope)**250  

Can you or somebody else confirm that this annualization is done properly?

This is very important because it is the basis for most strategy calculations, and it is used as-is in other versions by Guy Fleury, Charles Pare, Blue and others.

To my mind it should look like:

annualized_slope = (1 + slope)**252 - 1.0  

Correct me if I am wrong.

Here's my own function for the slope. I'm from a C (++/#) background, and still learning Python.

Whether you want to use 250 or 252 day count doesn't matter here. Either is fine, and it won't have any impact on results. In fact, if you prefer, you could skip the annualization. The only purpose is to get a number easier to relate to, instead of a daily percent slope with a lot of decimals.

import numpy as np
from scipy import stats

def slope(ts):
    # linear regression on log prices = exponential regression on prices
    x = np.arange(len(ts))
    log_ts = np.log(ts)
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, log_ts)
    # annualize the daily slope and express it in percent
    annualized_slope = (np.power(np.exp(slope), 250) - 1) * 100
    # dampen by R-squared to favor smooth trends
    return annualized_slope * (r_value ** 2)

Also to keep in mind (not a criticism, just an observation): The code you're using does not look at 700 stocks. More like 400. The pipeline starts off with 700 and cuts ADRs and non-primaries, resulting in around 400. Do a record on the length of the selected universe index, and you'll see. Not necessarily an issue, as long as you're aware of it.

Edit: Using np.power on that last row instead of ** should probably be more efficient.

Increase of 330% from Vladimir's change. Q return, my version, still no margin.

Andreas,

Thank you very much for the quick response.

I agree that using power 250 or 252 does not really matter.
The key difference between my version and the as-is version is the -1 at the end.
If you don't do that, the slopes will always be positive.

I have tested all 3 versions on a static symbol list on 2009-01-16.

2009-01-16 07:35 as is version
PRINT
Equity(24 [AAPL]) 0.191235
Equity(26578 [GOOG_L]) 0.206391
Equity(5061 [MSFT]) 0.286681
Equity(16841 [AMZN]) 0.119070
Equity(8347 [XOM]) 0.136967
Equity(4151 [JNJ]) 0.376904
Equity(25006 [JPM]) 0.160242
Equity(8151 [WFC]) 0.208580
Equity(3149 [GE]) 0.169359

2009-01-16 07:35 my version
PRINT
Equity(24 [AAPL]) -0.391464
Equity(26578 [GOOG_L]) -0.389349
Equity(5061 [MSFT]) -0.483870
Equity(16841 [AMZN]) -0.249095
Equity(8347 [XOM]) 0.023829
Equity(4151 [JNJ]) -0.243910
Equity(25006 [JPM]) -0.532544
Equity(8151 [WFC]) -0.352849
Equity(3149 [GE]) -0.565920

2009-01-16 07:35 new version
PRINT
Equity(24 [AAPL]) -12.412274
Equity(26578 [GOOG_L]) -9.765479
Equity(5061 [MSFT]) -20.992760
Equity(16841 [AMZN]) -8.511544
Equity(8347 [XOM]) 0.509771
Equity(4151 [JNJ]) -7.030777
Equity(25006 [JPM]) -23.293110
Equity(8151 [WFC]) -14.046448
Equity(3149 [GE]) -28.590373

The as-is version shows all positive slopes, which is not right.

What should the minimum adjusted slope requirement be for the new version: context.min_momentum = 0.30, 30, or 0?

Vladimir,

I multiplied with 100, again just to arrive at a number friendlier to the human brain. If you want to keep that, you need to raise your momentum score limit by a factor of 100 as well.

The slope output of the regression is a straight line across the 90 days of the momentum_window_length. If one wants to extrapolate, well, it is a straight line: the rate of return does not change. At the end of the year, the slope will be the same.

That you sort the 90-day slopes is perfectly acceptable. It gives, on average, the highest return performers of the group over the past 90 days. That you multiply this slope by a scalar has no effect on the sorting itself. All the stocks considered remain in the same order.

However, the scaling will move some slopes higher or lower relative to a certain threshold. In this case, the min_momentum serves as this threshold. It is why raising it to 0.05 in my last modification produced higher overall results. You change one number in a strategy and it generates millions. This shows how sensitive a strategy can be to certain numbers.
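
A quick sketch of that point: multiplying every slope by the same positive scalar leaves the ranking untouched, but it changes which stocks clear a fixed min_momentum threshold unless the threshold is rescaled too. The numbers below are made up for illustration.

import numpy as np

slopes = np.array([0.08, 0.03, 0.01, -0.02])   # hypothetical momentum scores
scaled = slopes * 100                           # same scores expressed in percent

# The ordering is identical under any positive scaling...
assert (np.argsort(slopes) == np.argsort(scaled)).all()

# ...but a fixed threshold now selects a different set unless rescaled as well.
min_momentum = 0.05
print((slopes > min_momentum).sum())           # 1 stock passes
print((scaled > min_momentum).sum())           # 3 stocks pass with the unscaled threshold
print((scaled > min_momentum * 100).sum())     # 1 stock again once the threshold is rescaled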

What I have noticed is that this strategy is also very sensitive to the annualized_slope. If you reduce or increase the exponent of 250, it reduces overall performance, as if numbers in the vicinity of 250 sit at a local maximum.

As Andreas explained, the use of the exponent 250 is simply to spread out the return numbers. What it changes is the number of decimals considered, not the stock ordering. And as Andreas also noted, if you use the slope alone, you will need to adjust the min_momentum threshold.

It's the added sorting that could be interesting: the dampening by R-squared. What it does is shuffle some of the stocks around, changing the ordering, favoring low volatility stocks. To get an R-squared close to 1, the 90-day price series needs to hug the straight line (slope). And for two stocks having the same slope, the one closer to the straight line gets preferential treatment.

The R-squared dampening reorders the stocks' slope ranking. It will favor stocks with the highest auto-correlation, lower market beta, lower volatility. Technically, it will favor a stock having a short-term trend. That the trend continues is an entirely different problem.

A funny observation: you could increase performance even further by reducing the min_momentum threshold to -0.98. The move allows some marginally selectable stocks to be considered and has an impact on overall return.

Andreas, All,

Here is a version with the new Clenow Momentum function "slope", keeping it on the same scale as the "as is" version:
1.0 target leverage, equal number of shares, 0 minimum momentum and risk_factor = 0.003.
There is a lot of space for improvement.
To my mind an equal-weighted version, slightly hedged with bonds, will perform better.

Clone Algorithm (Backtest ID: 58da60696b34f71b97497529)

Guy,

Some minor misunderstandings there, it seems. Let me try to explain. I spent about 60,000 words explaining it in the book, but I'll try to keep it brief. :)

First off, we're dealing with exponential regression slopes here, not linear. The slope will therefore tell us the percentage change per day. That number tends to be hard to relate to for humans. The daily slope might be something like 0.0012567. Now if you take that number, add 1 and raise it to 250, you get an annualized number. As in, "what would happen if the same slope continued the whole year?". We don't expect this to happen, but it makes the number easier to relate to.

If no human is looking at the number, skip the whole annualization part. 250 is just the approximate number of days in a year here. Feel free to use 252 for a more exact day count.

This process of course makes the number larger (if positive). So if you use an absolute slope floor, trading will be impacted by your day count. But toggling the day count around doesn't make sense. Pick one and stick to it.
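
To make the arithmetic concrete, here is a small sketch using the illustrative daily slope mentioned above, including the -1 variant from Vladimir's post (only the 0.0012567 figure comes from the discussion; the rest is just arithmetic):

daily_slope = 0.0012567                 # example exponential-regression slope, per day

print((1 + daily_slope) ** 250)         # ~1.369: "what if this slope continued all year?"
print((1 + daily_slope) ** 250 - 1)     # ~0.369: the same thing as a plain annual return
print((1 - 0.0010000) ** 250 - 1)       # ~-0.221: with the -1, a negative daily slope stays negative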

The original method from the book can certainly be improved upon. That was the purpose of publishing it. I would caution against overly curve fitting it though. Tinker enough with parameters, and you'll get very high return numbers. Just make sure it's robust, and not dependent on exact parameters. This is about a broad concept. If the model gets too dependent on exact parameters, it's not likely to work in reality. Go for a version where parameters can be changed quite a bit and still have fairly similar performance.

Has anyone modeled the tax ramifications of short-term capital gains? What would be the approach to incorporating that into the model (i.e., paying gains from the invested capital)? Or does this strategy preclude the ability to pay tax annually from outside capital?

Really awesome dialogue and development happening here.

Thanks.

@Andreas, maybe there is a misunderstanding, but it is not here. All the modifications I've presented are based on Shawn's version of this program. All my examples used linregress as defined in:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html

which has a linear output, a straight line.

In the context of linregress, my statements hold. I used (ts) as in Shawn's version, and not (log.close) as in yours.

And yes, I do agree, the strategy is very sensitive to certain numbers. You can change just a few of them and have a very different picture.

Andreas,

Here is a version with the new Clenow Momentum function "slope", in the original scale where it is multiplied by 100 to be friendlier to the human brain.
The same randomly chosen parameters and absolutely the same results.

Clone Algorithm (Backtest ID: 58da815725b22b17932b7256)

Guy,

I had a quick look. It looks like you're using the same code as Shawn, originally written by James. That version does use an exponential slope, not a linear one. The difference is that the log transformation isn't encapsulated in the function; it's done like below, from your latest iteration:

slopes = np.log(closes[context.selected_universe.index].tail(context.momentum_window_length)).apply(_slope)

My version (actually suggested by Shawn in a private email) simply encapsulates the log in the function, removing the need for the np.log call above. The result is the same.

What we do is run a linear regression on log data, resulting in an exponential slope. The slope is expressed in percent, not in dollars.

The output is an exponential regression slope. Linear wouldn't make any sense. That would mean that the absolute price of a stock would be the main factor. A stock with a price of 500 will show larger linear slopes than a stock traded at 10 ever could. Using linear regression would mean that you are no longer dealing with momentum, but rather trading stocks based on their absolute price.

@Vladimir, remarkable work. I now have to study the impact of your changes.

This puts your strategy version at a 27.44% CAGR, all expenses paid, since you allowed the commission defaults and had no leveraging fees to speak of.

Vladimir,

The version of the slope function I posted encapsulates the log function. You need to remove that part from the call in the rebalance.

slopes = np.log(closes[context.selected_universe.index].tail(context.momentum_window_length)).apply(slope)

should be

slopes = closes[context.selected_universe.index].tail(context.momentum_window_length).apply(slope)

@Andreas, I stand corrected. Note that using the slope of a linear regression on the close of a price series or its log will maintain the same ordering. All the stocks, once sorted, will be in the same order. It is only after applying R-squared that the order might change.

Since Vladimir's version was leaving some room for leveraging, I went for the 1.40 setting as in a previously presented version. I wanted to know if it could at least pay its leveraging fees.
A way to search for the strategy's limits.

With leverage at 1.40, Vladimir's version came out with a 41.59% CAGR. That is, it ended with a net liquidating value of $150,924,996.81 after commissions and before leveraging fees. The estimated total leveraging costs came in at $ 21,492,749.

That is not pocket change. Nonetheless, it would have left the account with a net liquidating value of: $ 129,432,247. That is a 40.09% CAGR, all expenses paid.

A 1.40 leverage setting might be just a number. But it can have a tremendous impact on a trading strategy. Especially, if the trading strategy can support such an undertaking.

Notice that the very first iteration of this strategy in this thread was going absolutely nowhere. You could not even say that there was some alpha there. If there was, it certainly was not visible. Also note that using leverage is a matter of choice and there is no obligation to do so. Already, Vladimir's version of this program is remarkable.

Using Vladimir's version again, I increased the leveraging factor to 1.50 compared to the previous test using 1.40. This will evidently increase trading and leveraging costs.

At 1.50, the strategy returned a net liquidating value of $ 209,247,238, a 44.84% CAGR before leveraging expenses which were estimated at: $ 34,537,174. This left a net liquidating value after all expenses of $ 174,710,165. Resulting in a net 43.04% CAGR.

To the question: was the 1.40 setting the upper limit for this trading strategy? Probably not.

Without testing to see what a trading strategy might do using leverage, we might not even be able to make a reasonable estimate. At least doing such tests answers the questions: is it feasible? Would it be worth it? How much would it cost?

What are the limits of a trading strategy?

How could you find them if not by doing some simulations seeking them out? Once you find them, you can then opt to stay within the confines of these limits.

Again, I changed a few numbers and pushed the strategy to behave as if on steroids.

No change in the trading logic. No new change in the trading procedures. Only constants that can have an impact on the final outcome. These are not just constants or default values, but numbers that can affect the strategy as it evolves over its entire trading interval. This also puts some emphasis on the wide range of results that can be derived from what should be considered minor parameter changes.

Changing just a few numbers generated the following:

http://alphapowertrading.com/quantopian/Vladimir_with_costs_1M_15L.png

You read that correctly. That is a $ 554,200,338 net liquidating value before leveraging costs.

Even though leverage was kept at 1.50, I opted to charge 0.80 instead of the 0.50 in the previous example. Total leveraging charges were estimated at $ 111,734,086, leaving a net strategy CAGR of 52.56% over its 14.42 years.

The end result has a portfolio account valued at $ 442,466,252 after all expenses paid. With such numbers, one should be ready to trade big.

I would conclude that the two previous examples were not the limit. BTW, this one is not either.

I'm curious, @Guy, did you test on different intervals, including starting the strategy right before the 2008 downturn or before the max drawdown in the longest backtest? Would be curious to know how it performs if entry timing is particularly bad for the strategy, and if it recovers.

Also -- Can you summarize specifically which variables you're playing with here? :)

Thanks,
Marc

My contribution -- a variant on Vladimir's version with Stock trend enabled, higher risk factor, raised market_cap_limit.

I do have a few questions. I'm not quite sure how the leverage_factor variable is working. I have set it to 2.0 but my leverage still hovers around 1. Also, @Guy -- are you calculating the leverage fees manually? I don't see any code that does the calculations in any of these. Still curious about tax implications as well -- has anybody modeled those? How would a casual investor plan to pay taxes if most disposable income is tied up in short-term stocks?

Incidentally, this algo returns around 1100% if started October 2007, shortly before the downturn.

Clone Algorithm (Backtest ID: 58db2f7d6914821787931410)

No, the order will be dramatically different, and that is the entire point of using exponential regression. If someone wants to trade this model, it's important that they understand what it is.

Example: (very approximate math)
We're considering two stocks. ABC Corp is traded at $10 and XYZ Corp is traded at $500. In the past 90 trading days, both stocks gained about $5 in a nice smooth trend. The approximate linear regression slope is around $0.056 for both. They would show a similar ranking using linear regression.

But the exponential slopes would be a little different. ABC made 100% return in 90 days, so the slope would be around 1.00773 per day. XYZ just did 1% in the same time, resulting in a daily slope around 1.000112. If we annualize them, to find out what this means on a yearly basis, we get quite a difference.

ABC: ((1.00773 ^ 250) - 1) * 100 = 585.8
XYZ: ((1.000112 ^ 250) - 1) * 100 = 2.831

The ranking order changed from about equal, to an extreme difference. We are looking for percentage moves here, not dollars and cents. If we would use linear math, a stock with a higher price would only need to make a tiny percent move to be at the top of the list.
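
For anyone who wants to reproduce that arithmetic, here is a small sketch that builds two perfectly smooth synthetic price series roughly matching ABC and XYZ above and runs the same log-then-regress-then-annualize procedure on them (the tickers, prices and day counts are the hypothetical ones from the example):

import numpy as np
from scipy import stats

days = np.arange(90)

# Smooth synthetic series: "ABC" roughly doubles over ~90 trading days,
# "XYZ" gains roughly 1% over the same window.
abc = 5.0 * 2.0 ** (days / 90.0)
xyz = 495.0 * 1.01 ** (days / 90.0)

def annualized_exp_slope(prices):
    # exponential regression: linear fit on log prices, then annualize the daily slope
    slope, _, r_value, _, _ = stats.linregress(np.arange(len(prices)), np.log(prices))
    return (np.exp(slope) ** 250 - 1) * 100

print(annualized_exp_slope(abc))   # ~586, in line with the ABC figure above
print(annualized_exp_slope(xyz))   # ~2.8, in line with the XYZ figure above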

Also, be aware that the latest iterations are based on a typo. The log function is applied twice. The slope is calculated on the logs of the logs, so you need to consider what you are actually measuring here.

A problem with trading strategies is to find out if they are scalable or not. And be assured, not all trading strategies are.

You can do a simple test to see if a strategy is scalable. You increase the initial capital put to work by the factor you want.

Using the same equation as before, we have: A(t) = (1+L)∙k∙A(0)∙(1 + r + α - lc% - fc%)^t which accounts for commissions, leveraging fees and alpha. Here, k is the scaling factor. If I set k at 5, the total profits should come close to 5 times the no scaling scenario, giving 5 times A(t). k=1 is equivalent to no scaling.
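
A minimal sketch of that scaling test, with made-up rates purely for illustration (none of these inputs come from any backtest): because k enters the formula linearly, a fully scalable strategy ends with k times the unscaled terminal value.

def terminal_value(A0, t, r, alpha, lc=0.0, fc=0.0, L=0.0, k=1.0):
    # A(t) = (1 + L) * k * A(0) * (1 + r + alpha - lc - fc) ** t
    return (1.0 + L) * k * A0 * (1.0 + r + alpha - lc - fc) ** t

# Hypothetical inputs, for illustration only:
base   = terminal_value(A0=1e6, t=14.42, r=0.10, alpha=0.15, lc=0.01, fc=0.005, L=0.5, k=1)
scaled = terminal_value(A0=1e6, t=14.42, r=0.10, alpha=0.15, lc=0.01, fc=0.005, L=0.5, k=5)

print(scaled / base)   # 5.0 -- the k=5 scenario ends with exactly 5 times the k=1 value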

With the same program as the last chart presented, here is the outcome with $5M as initial capital.

http://alphapowertrading.com/quantopian/Vladimir_with_costs_5M_15L_NLV.png

Based on the simulation, the strategy showed itself to be 100% scalable. It ended with a 54.96% CAGR. In numbers, the initial $5 million grew to $ 2,771,164,252.

Accounting for leveraging expenses which were estimated at: $ 558,701,163, the account would have ended with a net liquidating value of: $ 2,212,463,089 after its 14.42 year journey.

Say a $100 million hedge fund took a 5% position flyer in such a trading strategy. They would get what is above, plus what they might get on their remaining $95 million. $95 million at a net CAGR of 15% (which is above average) would result in: $ 713,294,976 over the same period. Giving the fund a grand total of: $ 2,925,758,065. Thereby raising its total CAGR to: 26.37%.

One has to look at the possibilities to see how far a trading strategy can go.

It looks like a lot of attention has been going to tweaking parameters and then checking the bottom line results, without assessing the full implications. This type of approach raises the risk of curve fitting.

For example, one suggestion made was to change the risk per stock. Yes, that does have an effect on the total percent returned, but why? With leverage held constant, changing this setting affects the total number of stocks held: raise the risk per stock and you decrease the number of stocks held. In an extreme example Andreas pointed out how that could lead to a portfolio of only 3-4 stocks. Not only does that take away the risk benefits of diversification, it also reduces the statistical significance of the backtest. I have much higher trust in the results of a backtest that made 50 picks per month than one that made 3.

Going in the opposite direction, changing it so that the total number of stocks increases can also have an unexpected effect. There has already been research done on the effects of factor concentration and the size of the portfolio (search the Alpha Architect blog). Over time a portfolio of 50 momentum stocks should outperform one of 100. But in the short run momentum can go through periods of underperformance. During those periods I would expect a portfolio of 100 to outperform one of 50, as it would decrease the factor exposure and cause the portfolio to perform more in line with the market.

It's easy to celebrate squeezing a few extra points of gain out of a small tweak, but until considering the full implications you cannot be sure you have actually improved the strategy. What were the market conditions during the backtest? What actually caused the performance to increase? Do you expect those conditions to continue?

It looks like a lot of attention has been going to tweaking parameters and then checking the bottom line results

I posted a version that was not just tweaking. The original is more popular so far even though mine has a much higher profit per dollar. I was prepared to go further if anyone noticed; for example, the ranking table can be removed. Why? I can't get excited about the version with the 12 million in margin that people are paying attention to, not just because I don't like margin, but because the margin is accidental, due to unfilled orders, and completely unpredictable in the future. My version solves that, or at least addresses it, by doing what we should all be doing: sell before buying. Never all in the same minute; that is too unpredictable and unscalable.

The VY minus-1 change, which increased returns in my version by that whopping 330 percentage points (40% of the previous 800), had something to do with a greater quantity of negative slope values; the debugger breaking out when trying to examine that object made understanding it difficult. However, I did spend the 10-hour run time later to ensure it was not just due to margin, and edited my comment to reflect that.

Margin looks like profit in the chart. It is an illusion, and that's the more serious underlying trouble with tweaks. A higher return feels good, but a seeming increase can actually be a lower profit per dollar put at risk; often, what really happened is deeper margin passing itself off as profit, and there's no way to know without at least charting cash. The reverse also happens: a positive change uses less margin and the chart shows lower returns. GF is moving in the right direction with margin cost awareness. Wouldn't it be great if that were built in.

I started modifying this strategy after Shawn made modifications to Andreas' simpler version of the original program. Sorry to say, but the original version was of no interest (that is my point of view). The first time I looked at the original version, I classified it as a throwaway. It could not even generate a speck of alpha. A lot of time and work with nothing to show profit-wise. It ended up not even beating the index.

I liked Charles' modifications, they were making the strategy behave differently and showing some alpha. From there, I started using his version of the program.

I use the following equation to keep me on track as to what can impact a trading strategy:
A(t) = (1+L)∙k∙A(0)∙(1 + r + α - lc% - fc%)^t. For sure, frictional costs (fc) are a drag on performance, and so are leveraging costs (lc). Both need to be accounted for. But notice that alpha is also compounded, not just r, which here is viewed as the market average. If a trading strategy does not generate alpha (or has negative alpha), that too will be a drag on performance. It certainly cannot be considered a plus.

One can multiply all these negatives by scaling up (k). The scaling is there to answer the simple question: can this strategy withstand more capital, or will it break down under the pressure? Will it maintain its CAGR? It is a CAGR game, after all.

This is not asking: can we leverage the thing? It would make no sense if there was no alpha.

I used Charles' modifications to stress test the strategy using leverage, since there was now some alpha generation in the design. I also scaled up the stakes to $1M. This raised performance to a 28.17% CAGR, all expenses paid. Those are already interesting levels for any fund; most of them still tend toward r over the long term.

Vladimir made some nice modifications to the program, making it behave differently. It continued to move away from the original design, making it quite a different program altogether. You could push on this thing and explore its limits.

The real objective of the game is to squeeze out more alpha (more performance) even if it is more expensive to extract it.

As long as the added alpha can pay for the added expenses and some, you continue to grow.

I showed that one could leverage and also scale this program to higher performance levels.

I understand everyone's objections. Too much of this, not enough of that, don't forget this. Sure, we have to consider everything.

But we also need to know what we can do once a trading strategy has achieved alpha generation. Do we accept it as is and do nothing more, or do we explore how far it can go even if there are some added costs?

For instance, if you change the rank_table_percentile, you change the number of stocks that will be considered as selectable candidates. That criterion already selects a lot more candidates than you need. Nonetheless, it will have an impact on the outcome. If you take the top 50 of a 300, 400, or 700-stock list, will it really matter? Well, the answer is yes it will, even if the top 50 are still the top 50.

You have a very simple strategy here. It bets on the top market caps having the top CAGRs over a 90-day period. That's it. It amounts to trend following after an upside breakout. There is nothing original in that process. Variations of this are probably used by more than half of the visible strategies on Q.

Some are afraid of drawdowns, which BTW are inevitable. And yet they could easily estimate them using the above formula, extended with a drawdown term:

A(t) = (1+dd%)∙(1+L)∙k∙A(0)∙(1 + r + α - lc% - fc%)^t

A -0.50 drawdown can happen at any time over a trading interval. With time it gets more and more expensive. For instance, in the last presented test, if there was one going forward, it could exceed a billion. However, one does not design for the drawdown, even if it is a consideration, one designs for the incremental alpha. That is the quest. Just as a side note: during the financial crisis, Mr. Buffett's drawdown was in excess of 100 billion, and he smiled through it all. He too lives by the same formula.

It is good to know how far you can go in order to stay within acceptable limits. And those acceptable limits are what differentiate each one of us.

I stated earlier that one could push for more. So here is the same program with a minor modification (one number) added to the last iteration. Also, the scaling factor was set to k=10 instead of k=5, making it a $10 million initial capital scenario. It follows the equation:
A(t) = (1+v∙L)∙k∙A(0)∙(1 + r + α – v∙lc% – v∙fc%)^t, as previously defined.

http://alphapowertrading.com/quantopian/Vladimir_w_costs_10M_15L_impr_w_NLV.png

Yes, the number at the bottom of the chart is the net liquidating value at the finish line!

It shows that a trading strategy can have a wide range of outputs depending on the setup of its parameters and default values. Note that no program logic was changed, only a number. Raising the stakes did not alter the program.

The 62 thousand percent returns in the screenshot would, of course, be margin masked as profit. Am I missing something? Please chart cash; knowing how much the broker is owed is essential.

Gary, as you know, Quantopian does not monitor leveraging costs. Notwithstanding, here is a formula to obtain an estimate:

LevCost(t) = (1+L)∙k∙A(0)∙(1 + r + α – fc%)^t - (1+L)∙k∙A(0)∙(1 + r + α – lc% – fc%)^t

In the last test, r + α – fc% = 0.5625, k∙A(0) = $10M, L = 0.50, -fc% is included, and t is for 14.42 years. This puts the leveraging cost estimate at about $ 1.250 B for a net liquidating value of $ 4.997 B.

You are using leverage only when the exposure exceeds 1.00. This means that any time you get above 1.00, the cash account goes negative; you owe that money back. And you have to pay for the leverage on this fluctuating amount. As shown above, it is not a trivial sum.

One could have quit the game at any time during those 14.42 years and receive as net liquidating value the blue line on the chart. One would have had 5,625 days to make that decision, and for each one of those days, it would have been to one's benefit. The strategy outperformed the benchmark from the start.

But the game is not about how much in frictional costs or leveraging fees you will have to pay. It is about your net CAGR over the long term. That is, what is the score at the end of the game? And in that department, the strategy ends with a 53.85% CAGR (net of all trading costs).

At that level, why should I care if there are business expenses? Sure, I will look for ways to reduce them, but not at the expense of the overall CAGR.

It is a very simple problem, if you want more than the other guy, you have to do more than the other guy.

In a previous post, I said: this strategy is very simple. One could make a picture, draw a few lines, and the job would be done.

Out of some 8,300 US stocks, 1,500 are selected by Q's selection process, then further reduced to the top 700 that make the list of the highest-capitalization stocks over the latest evaluation period. This is further reduced to the stocks having the highest CAGR above a minimum positive threshold. For each period, all stocks are reordered in this fashion. You hold the stocks while they stay on the selected list.

This is kind of a recurrent theme on Quantopian. Many strategies start with the same premises. By this is meant: take the highest-cap stocks, sort them, and pick the top of the list. It is all good. You don't want to trade illiquid stocks; you want there to be enough volume to get in and out of a trade for whatever reason you may have.

Most importantly, it is the kind of principle that will apply whether over past or future data. But then again, everybody seems to be doing the same thing or some slight variation thereof. How could they differentiate themselves from others?

Here is my viewpoint using a few lines to describe this strategy:

This trading strategy is not only a breakout system, it is a trend following system as well. The assumption being that the 90-day upward trend in the selected stocks might continue. There is no guarantee of it, but it is a reasonable assumption. Notwithstanding, if it was not, one could not extract any alpha.

Note that the original version of this strategy did just that: generate absolutely no alpha. Explicitly stating that there is none to be found, none to be had. Also, statistically stating that, on average, stocks did not have that much memory beyond their 90-day marker.

Yet, what we can see from the above picture appears reasonable. You are taking stocks that have shown to have positive returns over at least the past 90 trading days (4.2 months), and making the assumption that the upward trend might continue. And why not? At least, the stocks making the list have shown they have gone up over the period. It is not an estimate, it is based on actual recorded data.

Nonetheless, the strategy as programmed does not behave totally as intended due to the other settings that somehow control part of the strategy's behavior. You would expect that the selected stocks would be the likes of AAPL, AMZN, GOOG, MSFT, XOM, etc. An ordered list of the largest capitalization stocks.

But that is not what you get. Those stocks are not fast movers. Of maybe the 200 or so stocks selected to trade, those 5 stocks might not make the top of the list that often. In fact, of the 5 stocks above, only AMZN was traded, and only once, in the first year of this test, which ran for 5,265 days. That is more than one occasion to trade.

You will have to go down the list to get tradable candidates, where momentum becomes an added differentiator.

There are economic reasons for my changes in parameters. They were guided toward increasing trading activity and expanding the average profit per trade, two basic portfolio metrics. And based on my variations to Vladimir's program version, I would say my presented simulations left more than enough room to do less.

BTW, to get an idea of the impact of commissions on this trading strategy, I only needed to run the simulation twice: once with, and once without, commissions. This can be expressed using about the same equation as presented before:

CommCost(t) = (1+L)∙k∙A(0)∙(1 + r + α – lc%)^t - (1+L)∙k∙A(0)∙(1 + r + α – lc% – fc%)^t
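
Both this commission estimate and the earlier leveraging-cost estimate are simply the difference between two runs of the same compound-growth formula, one with the cost term and one without. A generic sketch, with placeholder inputs rather than the figures from the $10M scenario:

def compound(A0, t, growth_rate, L=0.0, k=1.0):
    # (1 + L) * k * A(0) * (1 + growth_rate) ** t
    return (1.0 + L) * k * A0 * (1.0 + growth_rate) ** t

def cost_estimate(A0, t, rate_without_cost, cost_pct, L=0.0, k=1.0):
    # estimated total cost = terminal value without the cost minus terminal value with it
    return (compound(A0, t, rate_without_cost, L, k)
            - compound(A0, t, rate_without_cost - cost_pct, L, k))

# Placeholder inputs, for illustration only:
print(cost_estimate(A0=1e7, t=14.42, rate_without_cost=0.35, cost_pct=0.01, L=0.5))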

For the latest $10M scenario, the net liquidating value, including leveraging fees, came out as:

without commissions: $ 5.096 B
with commissions: $ 4.997 B

Cost of commissions: $ 99.013 M.

Not that trivial a sum over the life of the portfolio. It was, nonetheless, the impact of one penny a share...

I hesitate to write this post. I considered not to. The fact that my name, and that of one of my books is in the thread title makes me write it anyhow.

I have no interest in arguing and I will not continue discussions in this thread. I can already see that it is a touchy subject.

The fact still stands that we are seeing several versions posted which are based on a misunderstanding of the mathematics and the code, and the result is that the model trades something very different from what the descriptions posted along with them say.

As it stands at the moment, there is a sharp disconnect between model code and model description. It does not do what the descriptions say. The results are extremely unrealistic and I would strongly discourage readers from trading it in this shape. Given the multiple mathematical errors in the code, the results are accidental and the output is merely curve-fitted data.

Having said that, I don't want to discourage people from tinkering with the code and trying variations. That's why the code is posted. There are some good improvements in this thread, even if the bulk of the return boost came from mathematical errors.

I understand that posting this may upset some people. It is not my intention. I don't write this to anger or embarrass people who wrote the modifications.

I merely want to caution readers against trading these versions. They do not do at all what the descriptions state. Be very careful in analyzing the code as well as the trades.

My advice to those following this thread would be to write down the logic of the model on paper, consider why each piece is there and what it does, and then rebuild it from scratch. That is likely the easiest way to spot the issues. The output will be quite different.

With that, I will step out of this thread.

Hi Andreas,
In my case, and this is probably the case for everyone, your input is much appreciated. I understand that someone searching on Google for your book's algo and arriving here will have to be careful, because most variations are not even related to your book's main idea: managing risk. Now the latest backtests published don't even do the regression as it was… yeah, we're very far from it.

I think everyone here understands that, but also, from what I've seen on Quantopian, the “main goal” here is to stretch the elastic to the max to get the highest returns possible, even if 1. the variables are probably fitted too much to previous tests and will not behave the same in the future, and 2. there is no risk hedging or any other safety measure, so it's closer to gambling. It is unfortunate that Quantopian's community is not broad enough to attract people with different objectives. Maybe the forum format, with no categories where we could file our posts, is not helping; maybe it's because publishing algos means giving away your secrets, so people don't do it more. I don't know why it's like this. I agree with you that almost all the posts with active people are the ones where we're trying to reach the moon, while we should aim lower and for a more secure flight.
I (and others) may have given the impression we don't understand everything we do, which is partially the case, but I do my best to read other people's code and understand the logic behind it before publishing a backtest to explain my changes (even if they make no sense, I know). At least it has given me the occasion to practice and learn.

That being said, I would never trade algos like the ones I'm posting. In my case it's more about having fun and learning along the way. If I were to put money on one algo, it would clearly be the one you describe in your book, because I would never gamble my retirement money.
Now, if I could suggest something for the future: in the last chapter of your book you said you would not publish your version of the algo because of the specifics of your trading software, data source and so on. I think things have changed, with Quantopian and other players giving everyone the possibility to play with algorithms on an equal footing. In your next book, which I'm sure to buy ;-), you could include the “official algo as described in the book”, implemented here or elsewhere. This way your name won't be associated with people poorly duplicating your work. Your post would be the first one, with your backtest at the top; anything happening further down the thread would be the writer's version, not yours, and you could add a clear warning. On the business side, I'm sure you could arrange some kind of benefit if you were to host your “official version” on a site like this one ;-) I know you said you didn't write for the money because there is not much to be made, but you could get something I guess hehehe

Thanks for writing your book, I really appreciated it!

Thanks, Charles. Much appreciated, fully understood and agreed.

Based on your reply, and despite my last post, I'll add this one to explain.

  1. If accidentally applying the log function twice triples the returns, it's worth stopping to ask why. Bear in mind that after that happens, we are no longer measuring momentum, and annualizing the resulting regression number makes no sense.
  2. The reason this random change in the selection mechanism can have a great impact on results is that we only trade 2 to 4 stocks at a time. Meaning any change in selection will have an enormous impact, since we no longer have any diversification. The impact of luck is through the roof.
  3. The reason we hold portfolios of only a couple of stocks at a time is that variables were added with the claim of impacting leverage, but in fact they only impact position size and the number of positions. The risk factor is set extremely high, resulting in a very small number of stocks held. Then set the 'leverage factor' to 2, and you get double the position sizes and half the number of positions (see the sketch after this list).
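
A rough sketch of that last point, with illustrative names and made-up numbers (not the posted code): with risk-based sizing and a fixed pool of capital, a multiplier on position size does not add market exposure; it just means the capital runs out after fewer positions.

def count_positions(account_value, candidate_position_values, size_multiplier=1.0):
    # Fill positions in order until the capital is used up.
    cash, held = account_value, 0
    for value in candidate_position_values:
        size = value * size_multiplier
        if size > cash:
            break
        cash -= size
        held += 1
    return held

candidates = [10000] * 20                          # twenty candidate positions of $10k each
print(count_positions(100000, candidates, 1.0))    # 10 positions, ~1.0x gross exposure
print(count_positions(100000, candidates, 2.0))    # 5 positions, still ~1.0x gross exposure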

There are more critical mistakes in the code, but this is what accounts for the seemingly magical return. A tiny number of stocks, randomly selected at huge position sizes. Change any variable, and you're likely to get an extreme impact, up or down. Results have no predictive value.

Another interesting part is how the position size is calculated when use_atr and the atr_multiplier are used, again having a quite random impact and not doing what they claim to do.

Less important is that we are not considering 700 stocks, but rather around 400. Check the pipeline code and record the size of the investment universe to verify.

Another way to look at it: if we could triple the return by applying the log function twice and reducing the number of stocks, why not apply the log three times and only trade one stock? The point being, if you get a seemingly fantastic return boost without knowing how it happened, then something is probably not right.

And, as for my next book: I absolutely do plan to make all the code available, either for Quantopian or for Zipline. Everything will be released in the wild, in Python shape. And writing books for money... Well, if I wanted to write books for money, I would write about wizards, vampires or cheesy bdsm. Based on author payout in the past couple of decades, those are unfortunately the most profitable categories, and sadly I am unqualified to write either of them.

@Andreas, I did not hesitate to write this one. Actually, I was waiting for it. You already had provided clues. And like you, I don't want to upset anyone. I am only looking at a trading strategy, a mathematical problem, a sequence of mathematical operations. It is just a trading script after all, just a program.

However, I do understand your concerns. I would probably give the same advice, if.

But, as I have said before, this program is now something quite different from your original program. It changed its stripes, it changed its trading behavior. And it is all based on what you might call mathematical errors since it departs from your original strategy design. So, I do understand your disconnect. But,

I built on Shawn's version, which was improved by Charles and further improved by Vladimir. With each iteration the code changed, moving further and further away from the do-nothing trading strategy originally posted at the top of this thread. I have not used the originally posted trading strategy; I found it uninteresting. It is only with the successive modifications brought by other members of the forum that I explored what one could do with it, especially using Vladimir's version.

As I have also said before, I did not change the logic or trading procedures as presented in Vladimir's program. However, I did explore how far one could go, an attempt to find the limits in order to then slow down a bit and make sure never to exceed them or even get close. To do this, you need to find them first.

That is why you simulate trading strategies in the first place: not only to find out if they are productive and profitable, but also to know whether they can or will survive in their unknown future trading environment. It explains why you put out general stock-selection principles that apply to past data as well as future data, and why you set similarly general trading procedures.

There is only one thing, and not many as you state, that could be considered some kind of “mathematical” error by some. It is not my case; I don't see it as an error. You already classified it as a “typo”. This might be the origin of your concern when, in fact, it is perfectly acceptable. And that is Vladimir's double-log iteration. You see it as a “typo”, an error, and I see it as a welcome feature.

Here is why I used and kept Vladimir's program version. The double log gives the equivalent of a second derivative on price; in this context it translates to price acceleration. And when transposed into Vladimir's trading script, it not only selects the stocks having the highest positive 90-day CAGRs, it offers preferential treatment to those having the highest accelerating CAGR.

I use the following equation to keep me on track as to what can impact a trading strategy:
A(t) = (1+L)∙k∙A(0)∙(1 + r + α - lc% - fc%)^t. For sure, frictional costs (fc) are a drag on performance, so are leveraging costs (lc). Both need to be accounted for.

The important point here is the alpha generation. If α > lc% + fc%, it will cost you more to do business, but it will also return more. The r + α is compounded, and to improve overall performance, all it needs is to be positive, therefore greater than the costs incurred to obtain it: α > lc% + fc%.

I don't look at a trading program the same way you do, but that does not make it wrong. It only makes it different. All the trading procedures I used are perfectly legitimate operations, and they all survived within their coded limitations. There were no errors in the code I used, mathematical or otherwise.

I can tell you what I consider the important part of the above equation. It is ^t, time. That is what makes my modifications to Vladimir's version fly. They are catching random-like exponential drift in a rising market. The obtainable net alpha is compounded, and it reverberates across the entire time interval.

I too have covered the extent of what I would have liked to do with Vladimir's trading strategy. Will probably use some of the things I used here in other programs. But this terminates my participation in this thread.

I have covered what I wanted to see and explore. Hope some have found it interesting.

Wishing you all the best.

To get back on track, two things:

  1. Stop buying and selling in the same minute. The unfilled orders make for out-of-control unpredictability and impossible margin that turns into fake returns. My version shows how to avoid that; that was its main point.
  2. Recalling some of the things Andreas recommended:

For this model, you could try using a fixed number of stocks, a minimum slope, removing some of the filters like percentile and trend filter, using different allocation models like equal, market cap, inverse market cap, multi factor etc.

I think we are all on the same page here, great!
Blue: I really liked the way you put a queue when buying stocks. I think it's something we could add to Guy and Vlad's version and see if it changes the behavior. It should not change that much. If it does we would need to investigate why.

I also like your idea of putting the score in a dedicated table with the slope instead of calling a function to get the position size. It makes it easier to add parameters to that table if we want to weight different parameters. For example, just add a column: we could add the SMA there too, or the log of the log ;-), and look for more than one clue to buy.

However, I need to put some time into looking at the scores, because I'm puzzled why TSLA has such a high score in October 2013. Also, your algo buys a lot of different stocks compared to what we should have per Andreas's book, maybe 20 max but not more. Fewer positions make it easier to start when you're a home investor like me and don't want to put $100k down. I haven't played with the parameters yet, so I don't know if it's just the risk not being set high enough.

Vlad, we need the truth: was the log of the log a typo or a genuine idea? Hahaha, it turned out great, but I need to know.

Guy, thanks for your input; it gives me a different perspective on how things work in the major leagues and pushes us onto a completely unintended path.
Now I wish I had more time to implement all of this plus Blue's comments... ahhhh, eventually I'll do it.

Applying the log function twice would measure the acceleration of momentum if the difference were taken in between the operations. And I remember reading an academic paper that found that momentum acceleration might be a more powerful factor than momentum itself. However, I might have missed it, but I didn't see the second difference being taken in the code. By just applying it twice I'm not sure you get the same result.

Accidental changes sometimes lead to great results. But even when intentionally making modifications, care must be taken to test thoroughly. In this case, which change contributed the most to the recent results? And why? Was it the change that reduced the number of positions held to fewer than 5? What type of risk does that expose the portfolio to? Is it worth the risk, even if you can be confident that the past results weren't luck?

Hi Andreas,

Sorry that I did not implement your recommendation to remove the double log in time.

Here it is.
It is slightly less productive with the same randomly chosen parameters, but it still looks like it's on steroids.
So not too much came from that "typo".

Clone Algorithm (Backtest ID: 58daa941f55bc617991544a7)

Vladimir, if you look at the daily positions & gains, every stock has the same quantity. Is this weird or not?

Vladimir,
Have you had a chance to take a look at the pyfolio analysis of your latest backtest? I have attached a notebook with the positions tear sheet. It looks like FSLR made up close to 80% of the portfolio in 2008. There are some other heavy concentrations at other times as well. Was that the intention of your changes?


Pietro, Shawn Emhe II

In my previous posts in this thread I used not optimized but randomly chosen parameters, just to illustrate how the new slope functions behave.
In the latest backtest I just tried the equal-number-of-shares option

context.use_average_true_range = 0  

proposed by Charles Pare.

In the following backtest I am using less aggressive options:

original ATR weighting

context.use_average_true_range = 1  

and

context.min_momentum = 30.0  

to show what you can do with different Clenow Momentum functions

Clone Algorithm (Backtest ID: 58e2c5dbe77b981c09843e4b)

Since brokers will not make margin loans beyond 100% of stock value, my version is the only tradable version in this thread, being at the opposite extreme with no margin. The code provided here (based on VY above) moves toward addressing that; it can maybe make the others somewhat viable, if modified.

A far better way to go: start with the principles in my version that prevent margin, dump the other portions of my strategy, and insert your own, since you guys understand what Andreas Clenow is working toward better than I do anyway. Then deliberately place orders for intentional margin at whatever level you feel is reasonable; this places you in control of it.

In this algo, you buy at a limit price at the open (on the first day of the month), where the limit price is the closing price of the last day of the month; is that correct?

In Quantopian, is there a way to know which stocks are the potential candidates to be bought?
In other words, if I want to manually place orders on the market, I need to know which stocks are picked by the algo.

Pietro,

In this algo, you buy at a limit price at the open (on the first day of the month), where the limit price is the closing price of the last day of the month; is that correct?

The order is executed not at the open but at 07:30, according to

schedule_function(rebalance,  
                      date_rules.month_start(),  
                      time_rules.market_open(hours=1))  

In this code the limit price is defined as data.current(security, "price"),
that is, the price at 07:30.

is there a way to know which stocks are the potential candidates to be bought?

You may look at the log output of the backtester:

2017-03-01 07:30 PRINT

Equity(351 [AMD]) 616.067663
Equity(18113 [URI]) 321.834520
Equity(1937 [CSX]) 301.071667
Equity(19725 [NVDA]) 210.879091
Equity(43124 [TSRO]) 192.551963
Equity(8132 [WDC]) 187.978477
Equity(39840 [TSLA]) 183.388283
Equity(5121 [MU]) 181.775619
Equity(6897 [SIVB]) 174.127478
Equity(50242 [DVMT]) 153.901290

I was trying to isolate where the outperformance comes from, using the latest version of the algorithm posted here: https://www.quantopian.com/posts/stocks-on-the-move-by-andreas-clenow#58e2d91535f9ba47be57ec23

It turns out that most of the outperformance comes from the MarketCap calculation, namely:

# Imports as typically used in the Quantopian pipeline environment (assumed):
# from quantopian.pipeline import CustomFactor
# from quantopian.pipeline.data import USEquityPricing, morningstar

class MarketCap(CustomFactor):
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding]
    window_length = 1
    def compute(self, today, assets, out, close, shares):
        # market cap = latest close times latest reported shares outstanding
        out[:] = close[-1] * shares[-1]

For the shares_outstanding field used in the above calculation, as per the documentation at: https://www.quantopian.com/help/fundamentals#valuation

  1. This field is updated only quarterly (whereas the algorithm trades monthly based on this data, so it could be using outdated data)
  2. This field is not adjusted for corporate action events including splits (so if there are corporate actions that happened after the date this field was last updated, we would not have the correct value here)

So, I changed the algorithm to use the built-in MarketCap factor instead of the custom factor above, keeping everything else the same. See the attached backtest for the results with this change.

Clone Algorithm (Backtest ID: 58e49eae69a08e65768093cf)

Vladimir, I need to understand.
Why do you say that the limit price is the price at 7:30?

The function "time_rules.market_open(hours=1)" means that the code runs one hour after the market open; is that correct? In my log I read 4:30.

What I don't understand is why you buy with a limit price, where the limit price is the price at that moment (4:30, bid or ask). In this way, in real life, there is a risk of orders not executing.
For a better evaluation of this strategy, why not use market orders?

Pietro,

The function "time_rules.market_open(hours=1))" does mean that the code runs one hour after market open, is it correct? In my log i read 4:30
It is correct.
...why you buy with limit price?
It is better to ask this Andreas Clenow, I just used the modification of his code.
But you may yourself replace limit order for market order and see what will be the results .
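For anyone who wants to try that swap, here is a minimal sketch; it is not the posted code verbatim, and context.target_shares is a hypothetical dict of {security: desired share count} that the rebalance would already have computed.

def place_orders_with_market_orders(context, data):
    # market orders: simply omit the LimitOrder style argument
    for security, shares in context.target_shares.items():
        if data.can_trade(security):
            order_target(security, shares)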

Has there been any thought given to disqualifying stocks whose adjusted slope is declining yet still "ranked" at the top of the selection? For example, AMD has been in a steady decline in adjusted slope since 3/1/2017, dropping from above 600 to roughly 200 today. If you compare this to the price chart, you can see that AMD hasn't made a new high since 2/28.

Andreas, please can you explain why a limit price?

In any case, Vladimir, in the log output you see the stocks after you've bought them, not before, is that right?
Is there a way to know beforehand what to buy, so one has time to execute the orders on the market?

Pietro,

You can put the algo in live paper trading, schedule it 15 minutes earlier, and get the live orders from there.
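One way to do that inside the algorithm itself is sketched below; the log_candidates function and the context.output / 'slope' names are hypothetical and would need to match whatever the cloned version actually stores.

def initialize(context):
    # actual rebalance, one hour after the open (as in the posted code)
    schedule_function(rebalance, date_rules.month_start(),
                      time_rules.market_open(hours=1))
    # candidate preview 15 minutes earlier, so live/paper trading shows the picks first
    schedule_function(log_candidates, date_rules.month_start(),
                      time_rules.market_open(minutes=45))

def log_candidates(context, data):
    # assumes the ranked pipeline output was stored on context in before_trading_start
    log.info(context.output.sort_values('slope', ascending=False).head(20))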

Pietro: The limit orders were done by James, before I learnt to build my own Python stuff. My own code uses market orders.

I also use the Q500 universe, instead of the market cap workaround. That workaround was built before the Q500 pipeline was available.

My local C# code uses the historical membership of the S&P 500, reading a joiners/leavers database table.

Keep in mind that a model of this kind trades a broad concept. If it becomes too sensitive to exact execution or exact definition of investment universe, it will not be robust.

@Andreas Clenow

After some unsuccessful attempts to develop market timing models for investing my wealth I finally turned to passive investing and specifically to Vanguard funds.

I do not particularly regret the money I spent to buy your books, but little or nothing in them has worked in the last 5 or even more years. Your volatility targeting method generated large losses and was a disaster. I followed your fund in Bloomberg (yes, I still have a terminal in my office) but now it is not listed, after several years of negative returns, as shown in the attachment.

My question to you: do you believe after your failures that market timing is still viable?

Ha! It's this one again.

Yes, of course I know which individual it is who has spent so much time and effort around this story, telling everyone to contact me about my supposed fund blowup. A person whom I have no connection to, but who has apparently been upset with things I wrote on the internet. The individual behind this rather childish campaign will of course not dare to contact me directly about it, nor has he bothered to do any research.

Your post is practically identical to the many newly created Twitter accounts who sent me the same thing. Your account here also seems to be brand new.

What you link to was a tiny fund of a few million, mostly owned by myself and my business partner, with an experimental strategy that didn't work out very well. In the end, we took it private and continued in-house. It's one of five funds, 12 structured notes, countless individual institutional mandates, a few private equity ventures and one bond which I have launched over the years.

So what did you plan to accomplish here? Are you here to contribute to the conversation or did you just hope to cover ad-hominem attacks in rhetorical questions?

Just a few comments here:

-People should be intelligent enough not to base their investment decisions on some book author they never met.

-People should be intelligent enough to test a strategy before putting real money on the table and understand what they are doing.

-Managing your money is your responsibility, and there is nothing to gain by blaming someone else except making a fool of yourself.

-In my opinion, the few pages of this thread show there is some advantage to implementing this strategy. The only point people are arguing about is whether the CAGR is 12% or 3000% (warning: Canadian humor here) ;-)

It looks like we have some bullying momentum here. The individual behind this Alexander Shannon account is breaking the Quantopian Terms of Use:

3 Prohibited Activities.
- use the Site or Services to advertise, market, sell, or otherwise promote any commercial enterprise that you own, are employed by or are otherwise compensated by, either directly or indirectly;
- transmit any Content that is unlawful, harmful, threatening, abusive, harassing, tortious, defamatory, vulgar, obscene, libelous, or otherwise objectionable or which may invade another's right of privacy or publicity;

Sorry for that Andreas. Please keep providing your input/feedback/ideas as they are very much appreciated by this community.

Quantopian?

I'm watching the thread closely. It's definitely close to the edge of acceptability, and I encourage everyone to keep it civil.

As a reminder, we welcome debate and disagreements about ideas. The way that those disagreements are expressed matters quite a bit. We want the Quantopian community to be accessible and welcoming for as many people as possible. We don't want people to be afraid to post because they might be cut down. Debate is healthy; attacks are not.


I don't think we have bullying momentum here; Shannon has exposed himself as "I ran the algo without fully understanding how it works and how it may fail, and I lost everything, so blame Andreas for not making me rich by just copying an algorithm".

Some of these comments are embarrassing to read. The phrase "pearls before swine" comes to mind. Andreas is one of the few professionals willing to share practical, quality research with the retail crowd.

Personally, I can definitely vouch for the credibility of Andreas. I have read both of his books and found a wealth of information in them. In my email communication with him, where I had some queries after reading Stocks on the Move, he responded immediately in the most helpful manner. Even in his book, he does not claim a CAGR of more than around 12% for this type of momentum strategy over the long term. And looking at the posts above, it does deliver at least that much in returns.

Received an email today with the following chart:

Finally, someone has succeeded in duplicating my results, and even done a little better. It suggests we have the same program settings, or very close to them.

Therefore, as previously stated: anyone could do it.

@Andreas Clenow . After some unsuccessful attempts to develop market timing models for investing my wealth I finally turned to passive investing and specifically to Vanguard funds. I do not particularly regret the money I spent to buy your books but little or nothing in them has worked in the last 5 or even more years. Your volatility targeting method generated large losses and was a disaster. I followed your fund in Bloomberg (Yes, I have a terminal in my office still) but now it is not listed after several years of negative returns as shown in attached. My question to you: do you believe after your failures that market timing is still viable?

What was this method? I ask because I am a believer in tactical asset allocation. And Markowitz seems a pretty good route (by way of example).
Markowitz Portfolio

As to buy and hold of Vanguard funds, yes indeed. Except there is no such thing as "buy and hold". With the S&P 500 ETF, by way of example, you only hold stocks so long as they remain in the S&P 500. You are buying, effectively, a trend-following system on US economic growth. Whether you should add a timing mechanism on top of that ETF is a further question, but one that is by no means as simple as it seems. A 60/40 equity/bond split works on "market timing" in a sense: you are taking profits and re-allocating to the losing side on a regular basis.

Where criticism of trading sites is justified is to the extent that participants are fooling themselves with ludicrously optimistic backtests based mostly on leverage and/or curve fitting.

Quantopian itself can not be criticised on these grounds. All they do is to provide a top notch platform which gives the inexperienced, the gullible and the foolish enough rope to hang themselves. I am not a believer in their approach of gearing a "market neutral" approach to the eyeballs but I am a HUGE fan of the research they produce (the above link being a prime example).

Most people become disappointed because they are enticed by impossible returns marketed in many cases by cynical rogues with phrases like "make millions from scratch by doing bu**er all." I don't think Mr Clenow is guilty of that.

There have been many intelligent comments here on Quantopian, in particular by Michael Harris, who has severe doubts about market timing, or at least the trend-following variety. Fair enough, and who knows, he may well be right. But if you don't operate SOME form of decision making on when to enter and/or exit, you will end up owning the future equivalent of Weimar Germany bonds and a bunch of bankrupt economies.

This is a fun algo. Does anyone know why it takes 6+ months or so before it starts trading after starting a back test?

I sent the screenshot to Guy; it's a mighty fine example of curve fitting.

The only way I could get leverage to work right was by changing the estimated cash impact and position size taken to two different numbers.

I am really looking forward to playing with the Quantopian futures data so we can create a turtle-esque strategy that employs a diversified futures trend following system, like the one laid out in Andreas's other book.

[Attached backtest (Clone Algorithm); Backtest ID: 58f03e6ff9c829652b9a51f8]

Guy, I got 37,000, with a drawdown of 19. Major improvements can be made here. If I get it to 50K I'll post it for the lulz.

@Justin, sorry, I thought you had it since your chart did resemble mine.

http://alphapowertrading.com/quantopian/SOTM_Last_Vers_100k.png

As you can observe, the numbers are similar except for the initial capital which is at $100k.

Unfortunately, your version of the program is not that scalable as illustrated in the attached code.

I have not made any code modifications and used the program as posted. It gave what I would consider ordinary results with $10M of initial capital.

That translates to a 13.79% CAGR over the 13.8 years, before leveraging costs, which were estimated at $6.2M, reducing performance to a net 12.89% CAGR with all expenses paid.

Any trading strategy we design or transform is just one of the possible scenarios. There are millions of different combinations that can be tested. Picking one of them does not necessarily make it over-fitted; optimizing for a result might.

My suggestion: keep at it, and you will find the reason why my charts show such high numbers. It is when you find out why that you might consider my results ordinary for that program, and even make some improvements of your own to push it further.

[Attached backtest (Clone Algorithm); Backtest ID: 58fa7cdfaf23b761a866829b]

@Justin, to get some clues on how you could do it, or improve on your program, take a look at my recent article on the subject:

http://alphapowertrading.com/index.php/papers/251-a-quest-for-stock-profits-part-ii

It provides my analysis of the strengths and weaknesses encountered in these trading procedures. It considers same-theme variations with an emphasis put on achievable performances.

Hope it will help.

I looked at your posted program and I don't see how you could classify it as a "...mighty fine example of curve fitting". You made some choices from millions of possible parameter combinations. How could you determine whether it is over-fitted, or under-fitted for that matter, from the few trials you have run?

@Vladimir Yevtushenko, I am currently live trading your most recent code posted 5 weeks ago and was interested in seeing if it would be possible to implement a modification that had the code move to an ETF (IEF for example) instead of cash when reducing exposure. Andreas in his Equity Momentum algorithm has a section like this (shown below, lines 107 – 109 in that code) and I’d be interested to see how this changed performance of this momentum strategy. I do not have the coding ability myself but wanted to see if you or anyone here would be able to implement that easily.

# identifier for the cash management etf, if used.
context.use_bond_etf = True
context.bond_etf = sid(23870)
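Not Vladimir's code, but a minimal sketch of the idea, in the spirit of the use_bond_etf flag quoted above; context.target_weights is a hypothetical dict of {security: portfolio weight} that the rebalance would already have computed.

def initialize(context):
    context.use_bond_etf = True
    context.bond_etf = sid(23870)   # same identifier as in the quoted snippet

def rebalance_to_targets(context, data):
    invested = sum(context.target_weights.values())
    for security, weight in context.target_weights.items():
        order_target_percent(security, weight)
    # whatever is not allocated to stocks is parked in the bond ETF instead of cash
    spare = max(0.0, 1.0 - invested)
    if context.use_bond_etf and data.can_trade(context.bond_etf):
        order_target_percent(context.bond_etf, spare)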

I received the following direct question by email: “Is that algo for real? 40,000%?” My answer was rather blunt:

Yes, and the trading procedures used are perfectly legitimate
operations. They all survived within their coded limitations. There
were no errors in the code, mathematical, logical or otherwise. No
gimmick or deception. Just plain Python programming.

For me, the real question would have been: would you trade like that? Personally, I would make more modifications to this strategy before even considering it. It is not my preferred strategy. It is wasteful of capital resources, has too many switcheroos, is shackled to constant parameters, and makes bigger and bigger bets as time progresses. Which is to say, it should be better controlled. Those are all subjective problems. They can all be addressed and fixed.

Nonetheless, my latest iteration, even in its crude state, did show that it was more than just productive.

As a reminder. If my strategies are not valid in some way, then I would have to conclude that none of yours are either. Is that blunt enough?

Some think that the charts I've posted are not for real, well think again. Some might not like them, but basically, that is not my problem.

Regardless, I am not the designer of this program as I have said before in this thread. Most of it (≈ 99.8%) is Vladimir's code. His program modifications made it quite a different program than the original version at the top of this thread, and it is from Vladimir's version that I made my latest modifications. So, Vladimir, thanks, and also to others who have previously improved on the generic design.

The received email question kind of implied that something was wrong or that some wrongdoing was at play, when all you have is a program. Vladimir's program version is still in plain sight for anyone to analyze, clone, modify, and execute.

@Justin, in a recent post describing his own program modifications, came to the conclusion that: "... it's a mighty fine example of curve fitting."

I disagree on that. Sorting 700 stocks by capitalization and return is not what I would call curve-fitted. It is just a rather plain stock selection process, and not original. It has been around for ages.

My only contention on this trading strategy would be on the validity of the price data itself. I have no reason to doubt the integrity of the price database provided by Quantopian. If the stock price data as provided was not reliable, everyone's every simulation would be in doubt.

So, you have these simulation conditions. The price data is guarded and protected by Quantopian. A program is executed somewhere in the cloud on their servers using their data and Python interface. Program code can be shared by those wishing to do so. It is the only thing that is shared: the code. And it can only be executed on Quantopian machines.

Having cloned someone else's program, it will give the same answers if executed. If you change the code, stock selection methods, program parameters, trading procedures, or simulation intervals, it is sufficient to have a different answer. It also changes the strategy's trading behavior.

But, whatever, it remains a program that is executed on someone else's machines under their setup and conditions. All you can do is supply code or variations thereof. And, code is code, it only executes if it has no programming bugs.

I saw some potential in Vladimir's program. I was able to extract it and also demonstrate that it was there. You could go from a do-nothing scenario, as presented in the first post, with no long-term alpha generation whatsoever, to progressively not only instilling some alpha in this program but also having it grow from within. I made it respond better to its inherent mathematical makeup.

Spoiler alert: my modifications to Vladimir's program pertain to enhancing part of the submartingale properties of the strategy. In essence, letting the program, at times and if it can, increase share holdings on some of its advancing stock positions. And since the selected group of stock candidates, as depicted in a previously posted chart, are above zero in CAGR terms, they are expressing a positive expectancy for the group of positions taken.

For such a statistical swarm to be productive, a basic requirement would be an upward market drift. It is what you have most of the time when the strategy is taking positions. The posted chart will not let you buy without this positive expectancy. All I did was partly exploit these submartingale properties. And, the simulations I did, not only showed these properties were there, but that you could exploit them as well.

A submartingale has for mathematical expression: E[P(t+1) | P(1), ..., P(t)] ≥ P(t). And if you look back at the cited chart, you will observe that all price series having their CAGR above zero satisfy the condition. It also implies that any return sorting could have done a similar job.

It is not by doing the same thing as everybody else that you will get different results.

Where did the outperformance come from?

From gaming the strategy itself, of course. By making it an automated gambling machine.

Exploiting submartingale properties of the strategy by enhancing its betting procedures. This had for result: making big bets on big price moves while making small bets on smaller ones. This should come as no surprise, at least, it did not to me.

This does not make the underlying trading procedures predictive. That was clearly demonstrated by the original trading strategy at the top of this thread. There was no alpha there.

It is how you modified the program to follow different trading rules that made a difference. And, based on the various tests performed, it made quite a difference.

See the performance progression in the series of charts presented. All achievable by gaming the strategy. It makes the range of outcomes so huge that the first chart appears insignificant by comparison.

It would also appear that gaming the system might be a lot more productive than trying to optimize by finding the local maxima of a strategy parameter. It does raise the question: what should be part of optimization procedures? And what can you do to improve performance?

I do not believe that gambling strategies have any place in an adaptive complex system. Probability may be a fine tool to analyse games of chance with fixed rules but adding a gambling approach to market timing seems to me to be adding one dubious approach to another.

In the long term, momentum may be effective. In the short term, I believe it has become less and less effective as a tool to time markets over the past 40 years. My analysis of futures markets over the period certainly shows that trends have become less and less easy to profit from in a trading sense as market activity has increased.

So much so that I am now wary of using momentum for my own trading and prefer to "earn" rather than speculate on a probabilistic approach.

Hello. First of all: sorry for my English ;)

I would like to know how I can add an exception to the code.

I saw the algo buy AMD; in my mind, I never buy AMD and I never will.
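One possible way to do this, as a minimal sketch (the function names here are hypothetical, not part of the posted algo): resolve a blacklist of tickers once, then drop them from the candidate list before any orders are placed.

EXCLUDED_TICKERS = ['AMD']   # user-defined blacklist

def initialize(context):
    # resolve the tickers to security objects once
    context.excluded = set(symbols(*EXCLUDED_TICKERS))

def filter_candidates(context, ranked_candidates):
    # remove blacklisted names from whatever ranked list the algo produced
    return [s for s in ranked_candidates if s not in context.excluded]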

My previous post made the point that you could make money, and a lot of it, using what was classified as an automated gambling machine.

All it needed was some kind of generalized “excuse” to take a trade. It made the point indirectly that it might not matter much which excuse might be used as long as it provided a sufficient number of trading opportunities which might or might not be taken. Only two portfolio metrics mattered: A(t) = A(0) + n∙APT, where n is the number of trades, and APT the average profit per trade. Because trading intervals tend to be relatively short, n needs to compensate to make it interesting. A way of saying that n needs to be large, and because of that APT will tend asymptotically to a limit. Look up the law of large numbers if you do not see this.

A single decision space was set: if a stock price was above its 90-day linear CAGR regression and part of the selectable trade candidates, it could be bought and held while it stayed above its positive regression slope.

Some people design trading strategies using fancy names (on the move, momentum rotation, mean reversion,...) with a lot of stuff in them. When, in fact, their designs are no more than automated gambling systems.

I dare call them for what they are intrinsically, that is: automated gambling strategies. It is the case for the originally posted strategy in this thread.

No proof whatsoever, statistical or otherwise, was ever provided showing that a 90-day trend could, in fact, be profitable. No data or evidence of any kind was submitted. The strategy was hard-coded with this 90-day CAGR trend without an ounce of justification. Nothing more than: hey dude, I'm good, trust me!

I do not buy that stuff so easily. I require proof. And, on that count, the originally posted strategy failed miserably by showing no alpha. No trading advantage whatsoever. Even with 14.42 years to show its merits, it still failed. That is 5,265 days to make its point! And it could not even beat its SPY benchmark.

Due to the considerable lag in the regression of price series, we could view the trade triggering decision point almost as if quasi-random. In the sense that the right edge of the regression line was so outdated that it was like making decisions based on small price variations (as low as 1/90th of a cent) and reacting to price changes from 1 day to as far as 90 days ago.

As if the outcome of a random event, and therefore having low predictability. The lower the predictability, the higher the random nature of the trade decision point itself. And the less you can profit from predictive methods of play. Which is what the original program clearly showed: no alpha.

What are the odds that tomorrow a stock regression line goes up or down by one cent? Talk about market noise. This is much lower than even a whimper, it is close to silence, to cosmic background noise. And yet, trading decisions were based on such price variations for either the entry or the exit of positions. A one cent price move over 90 trading days was sufficient to trigger a trade...

What saved the strategy was that most of its bets were taken while the market, on average, was also going up. Imagine this prodigious concept: betting on long positions mostly when prices in general are going up. Wouldn't that be a trading revolution in the making, if it hadn't been there for ages. If a trader does not see that betting long on stocks in a rising market is one way to make profits, maybe he/she is not in the right game.

The original trading strategy as presented in this thread is gambling on the pretense of a positive CAGR slope. And yet, does not generate any alpha. It gets the same results as if a monkey was throwing darts at the financial section of a newspaper.

The methodology used is at most gambling its way to mediocrity. No alpha generation, no explicit skills demonstrated, and no reason to think that the premises on which the trading strategy was based gave results different from random.

Therefore, yes, if a trading strategy does not generate any alpha I will call it ordinary or worse.

Would you take the money even if it is coming from what virtually amounts to an automated gambling program? I know I don't mind or care what generates the money as long as it is there.

And what my simulations showed is that the money is there, the alpha is there, and you can build it yourself. It is a lot more than the originally posted trading strategy had to show.

Exploiting submartingale properties of a trading strategy might not be for everyone. However, no matter what, whatever the math of the game, math is math. An equal sign has a lot more weight than an opinion.

@Anthony, you give opinions but you do not justify them. You send out "I believe" and "I suppose". That might be acceptable for you. But I am in a world of math. Opinions are not enough. I want some kind of justification, some data, some research to support your claims. Prove me wrong! No opinions. Just facts; show some tangible evidence.

@Justin, you have above the explanation of why, with the slightest change in parameters, your results are all over the place. You could change the rebalance time by a few minutes and see the performance drop. The reason is related to the trading mechanics of the strategy. It reacts to a price difference of (1/90)∙(p[-1] – p[0]). A one-cent move over 90 trading days can trigger a trade. That makes the trade-triggering decision process quasi-random. However, I don't mind. My program version makes no pretense. It is an automated gambling machine.
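A tiny, self-contained illustration of that sensitivity claim (not code from any posted version): on a perfectly flat 90-day price series, a one-cent bump on the most recent day is enough to turn the fitted slope positive.

import numpy as np

days = np.arange(90)
flat = np.full(90, 100.00)       # perfectly flat 90-day price series
bumped = flat.copy()
bumped[-1] += 0.01               # one-cent move on the most recent day

slope_flat = np.polyfit(days, flat, 1)[0]
slope_bumped = np.polyfit(days, bumped, 1)[0]
print(slope_flat, slope_bumped)  # ~0.0 versus a small positive slope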

It is not a question of maths. You are using the wrong tool for the wrong job.

@Anthony, again, no proof, no substantiating evidence, no facts. Just as before, only an opinion.

It gets tiresome you know... You have been at it for years now. Always with the same mantra whatever trading strategy I modify. At the very least, you should try to put some substance in your argumentation. Then, it might get interesting.

It is all about math. Period.

In a program you add, subtract, multiply and divide whatever stuff you want, in whichever way you want, under whatever conditions you want. Everywhere you put an equal sign, it will hold. It is the logic of what you are trying to do that might be right, wrong, misunderstood, or ill-conceived.

Note that if you knew the tools I use, you would be able to show the same results I did, and you have not so far.

According to you, "apparently", these wrong tools are being applied to the wrong job! Is that programming, time series analysis, game theory... Or is it your way of saying: no matter what, you are wrong! The world is flat, and that's it!

I know what my program does. I can also make it do more if I want.

In the end, it is very simple: PROVE ME WRONG!

If you can not prove me wrong, then your “opinion” has absolutely no value, no mathematical credibility.

It might be a lot easier to prove me right. But there too, you will need math.

On my part, I did put some tangible evidence on the table as to what my version of this program can do. And, I will stand by it.

Guy, it's about the markets and reality, not maths or programming. Anyone with the most basic maths and programming skills can produce wonderful backtests. Whether they can match these backtests going forward with a real money account is what matters. Have you managed to do that? I most certainly have not.

Type of trading: Chances of success
Intraday with charts: Zero
Intraday with model: Very low
Short-term with chart: Very low
Short-term with models: Low
Trend-following with charts: Very low
Trend-following with model: Low
News/sentiment trading: Zero
Rare events: Low
Esoteric methods: Very low

Michael Harris

I love Michael Harris' Blog. I have absolutely no idea what his software does although no doubt he would be kind enough to give me a free trial if I asked him. What he is, is a no bullshit man. He is a realist and sees most financial bloggers for what they are. We should all take a leaf out of his book.

Note to Quantopian, Guy et al. : please do not get stroppy. This post is not an insult. To anyone. I am merely stating that in my opinion Michael Harris is worth listening to.

Dear Anthony,

I haven't been to these forums for a while because I've been busy with projects. I just logged in to check the performance of my DIAT4S3 system and to make a quick tweet about it.

https://twitter.com/mikeharrisNY/status/863779504573087744

I saw your name as the last poster in the forums and checked it out as always for some wisdom. I appreciate the reference to my work but I’ve never asked anyone for recognition and frankly I don’t care if anyone is listening. Recently I even stopped writing free articles for my blog with market analysis because I want people to do their own homework.

Backtesting is dangerous because “the more you try, the lower the chances of success”. Please see my recent article in Medium.com

https://medium.com/towards-data-science/the-paradox-of-the-elusive-market-edge-5e6b57b04398

Figure 1 (from my book) depicts one of the most dangerous processes one can get involved with in this area. Figure 2 shows how probability of success decreases with new trials because the probability of a Type-I error increases with an increasing number of backtesting runs.
Forward testing with real money is the only way of dealing with this problem but it takes time and resources and the risk of regime change after full deployment is there after all.

Trading has a significant luck dimension. Losers should not blame their skills and winners should not be arrogant.

Online platforms such as Quantopian are forcing efficiency on the markets, and academics will be happy and feel vindicated. As traders (rather, programmers, I should say) from all over the world attempt to capitalize on these new tools that make backtesting accessible to all, their purpose is defeated by the effect on price action and specifically by the increase in randomness. Trading strategy development was supposed to be a secretive activity for people who were well-motivated, had money, and were looking for extra profit and some fun. When trading turns into relentless data-mining bias using publicly available tools, the result is noise. There will be a long period before a return to normality, but in the process damage will be inflicted. At the end of the day, the market will "absorb" all the noise, with known results.

Best.

Just published my latest book: A Quest for Stock Profits
It says: if you want more, you will have to do more...
A Quest for Stock Profits is now available on Amazon: https://www.amazon.com/dp/B071LL1YR3

It takes what I now consider a lackluster stock trading strategy which had little going for it. It could not even beat its benchmark. The trading method used had nothing out of the ordinary. No predictive powers of any kind.

Nonetheless, it was transformed into a more productive scenario.

Overall, it ends up just gambling its way through.

Seriously winning the game. Not by having better analysis, but by gaming the strategy itself.

The book makes the point that a part of the stock market is behaving as a submartingale, and a way to play it is also with a submartingale trading strategy. To demonstrate the point, it provides simulations which could only result from a submartingale method of play. Some of which have been shown in this thread.

The modifications to the trading strategy are such that it is very different from the original version at the top of this thread from which it is derived. There are no survivorship biases and no optimization, except mathematical, and mostly from outside the strategy itself. As said, it accepts gambling its way to the finish line.

The strategy's trading behavior is considerably altered. But, still, what might be important is the simplicity with which it handles market uncertainty and quasi-randomness of stock price series.

Over recent months I covered a lot of ground on the analysis of this trading strategy. There was a lot more to be said. A lot more to cover. It resulted in this book:

Hope it can help some.

A Stock Trading Strategy That Is Simply Gambling

My latest book, A Quest for Stock Profits: If you want more, you will have to do more..., makes the point that the original stock trading strategy, on which it is based, was simply gambling. And this automated gambling was somewhat camouflaged in code, as if trying to persuade people that it was trading based on some fundamental market data.

The modifications I made to the program were only to enhance this gambling notion. As if saying, if you want to gamble, at least play to win.

When in fact, it was just playing market noise. To such an extent that it did not even outperform its benchmark. It was as if it were randomly trading on a swarm of stocks. That is understandable, since the trading method was much like having a monkey throw darts at the financial section of the newspaper, where the most expected outcome would tend toward the market's average. Resulting, as should be expected, in a long-term no-alpha scenario.

This analysis might appear severe, but, let's see and call things as they are.

Winning at the stock market game is a blurry notion. You can make money, no problem. But, you have to compare whatever you do to a benchmark. And if your efforts do not result in higher profits than the said benchmark, then you have done a lot of work for nothing. That is where the winning comes in. Your efforts, your program has to generate more profits than just buying low-cost index funds. Making some money is not enough. You have to generate some alpha.

To show that your efforts resulted in more than just a random occurrence, you have to exceed by a considerable margin what the market would have given you for practically nothing, just for participating in the game for a long time.

The original trading script defined a 90-day regression line as a trend. As if there was such a thing. One can draw a line on any chart; it does not make it necessarily predictive.

It is just a line that at most can say something about past data, not what is coming next.

It should be evidently clear: there is no 90-day trend. It is a line you draw in the sand, and the market does not and will not even look at it. It will just go its way with no consideration for your “trend”.

Linear Regression as Trend Line

That you draw a line on a chart does not make it a prediction! At most, maybe a guess, an excuse.

It remains just a line, and when it ends at the right edge of a price chart, you are simply at the right edge of a chart as everyone else.

You are facing an immediate quasi-random future path with a high degree of uncertainty. This is where gurus can give you this wonderful advice: the price will go up tomorrow if it does not go down.

If there was anything predictive in a 90-day trendline, it would be the equivalent of a self-defined free lunch.

The market usually offers few of those, and usually, they do not last. They are rapidly arbitraged out. One has to consider that the whole premise of the 90-day regression has practically no strategic value. Then, what could be its use?

However, if you draw 90-day trendlines for all the stocks all the time, you will find more of them going up than down over extended periods of time. Not as a predictive tool, but simply as a classification of what you see.

Sorting by Price

That you sort stocks by price, or their logs, does not give an advantage either.

Nonetheless, the strategy is sorting on what might appear as recent accomplishments, and there is some value in that. Big companies got bigger for some economic reasons.

However, it might not be enough to outperform the market. This was clearly demonstrated in the original program simulation. No alpha generation whatsoever.

Tagging along is not how you generate excess returns.

In the original strategy, a trade could be triggered due to a penny move from up to 90 days prior. This in itself makes the trading strategy operate on what looks more like market noise. You should not base a trading strategy on fundamentals where a one-cent price move some 90 days ago has the ability to trigger a trade.

The original strategy is so sensitive to minor price changes that changing its rebalancing time by only a few minutes will give different answers.

Placing bets on such minimal market noise is akin to simply gambling your way out. Try giving probabilities on that, and you will find yourself at the right edge of the chart in need of the support of the above mentioned gurus.

There is practically nothing of interest in the original strategy as was presented. The first post in the forum where the original strategy was provided confirmed this with its test results.

Gambling Acceptance

Having established that the trading strategy is simply gambling, why not accept it as is? And play accordingly.

Find in the method of play what could transform the trading strategy into an alpha generating machine.

Note that this trading strategy is very wasteful of its capital resources, even after my modifications. It trades unnecessarily, has really bad trade timing, and will still trigger a trade from a penny move some 90 days prior. There is much to improve here.

But, nonetheless, it does have a redeeming quality.

By modifying its code, and accepting its gambling habits, you can make a lot of money. You will need guts, perseverance, and capital to do so, but you will get there a lot faster than everyone else.

My modifications accept the strategy's gambling stance. They recognize that the stock selection process itself has some indirect advantages, and that applying a submartingale strategy can generate considerable alpha.

That is the whole purpose of designing automated stock trading strategies. It is to outperform, not just for a few days or weeks, but for years and years. You want to reach the finish with much more than just average performance. To do so, you need some alpha generation. Average performance is always available just by buying low-cost index funds.

Should it really matter that much that you are gambling to generate your alpha? Will you accept it, even if it is gambling? All the while providing you with better odds of outperforming the averages.

That is what my simulations showed, as a kind of proof of concept. You can change the trading strategy's behavior. There are many variants and improvements that could be made.

Yes, I admit and accept that my code also trades, should I say, gambles its way to the finish line. But then again, it wins, and it wins big. There are reasons for that too.

Related Articles:

A Quest for Stock Profits – Part I
http://alphapowertrading.com/index.php/papers/250-a-quest-for-stock-profits-part-i

A Quest for Stock Profits – Part II
http://alphapowertrading.com/index.php/papers/251-a-quest-for-stock-profits-part-ii

Book:
A Quest for Stock Profits. If you want more, you will have to do more...
https://www.amazon.com/dp/B071LL1YR3

The following notebook is all about alpha generation.

[Attached notebook; preview unavailable.]

This is a very informative thread!!

Where is the Alpha?

It might be hiding in plain sight. In my last notebook, No Alpha No Game, it was stated that an upward bias in the price data is a sufficient condition to win a long-term stock market game.

Oftentimes, people want to look at the game as if it were randomly set, meaning that the probability of going up is about the same as going down. As if playing a heads-or-tails game. A game known for centuries to be a zero-sum game and unbeatable except by luck; when, in fact, the stock market game might be something quite different.

The picture below surely does corroborate a 50/50 argumentation. Over the past 5,473 trading days (21 years), about half were up days. Within statistical tolerances, it is saying: 50/50 odds. With such a chart one could indeed consider it a zero-sum game.

[Figure 1: Up and Down Days]

However, no one is forced to trade as if it were a casino game. They can, and some do, but nobody forces them to. If they do, they should realize what the odds are and stop complaining that they do not win much over the long haul.

They could, for instance, naively wait for a profit to materialize. It is a continuous betting game and one could leave his/her chips on the table over some time interval. Doing so can change one's perspective. The question then becomes:

How many days in the past could I have been profitable taking a long position?

That is a very simple question to answer. Draw some lines on a chart, get a visual representation.

I used the same chart and took a snapshot of what was visible on my monitor (18 months). ABT was the first symbol in my folder, and it can easily serve as an example to make the point.

[Figure 2: Profitable Days]

From the above chart, one could have taken a position in any of the past 427 trading days and would have made a profit just by holding on.

It could have been done on any of those days, multiple times, and each would have resulted in a profit. The profit would be there just for having waited for a better day to exit a trade.

Of the 5,473 trading days which represent a sufficient statistical sample, 5,395 were below the last price on the chart. Meaning that 98.57% of all prices in the ABT 21-year history would be showing a profit just for having held on.
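The count described above is easy to reproduce; here is a minimal sketch assuming nothing more than a pandas Series of daily closes from any data source (the numbers in the comment are toy values, not ABT data).

import pandas as pd

def profitable_entry_days(closes):
    # how many past days closed below the latest close, i.e. a long entry on
    # any of those days would show a profit if simply held until now
    last = closes.iloc[-1]
    below = int((closes.iloc[:-1] < last).sum())
    total = len(closes) - 1
    return below, total, below / total

# toy example: profitable_entry_days(pd.Series([10.0, 9.5, 11.0, 12.0, 12.5]))
# returns (4, 4, 1.0)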

It was not a question of gambling in a 50/50 world. It was buying a stock that is prospering over time and giving it the time to do something for you.

Sure, you can gamble that tomorrow will be up. But, do you absolutely have to or need to?

Sounds like just giving a prospering stock some leeway and selecting a sell day when it is profitable for you would appear more appropriate. And looking at the above chart, it could have been done many many times. One could have scaled in and out of position as each position was showing a profit.

Thereby, profiting from price fluctuations without predicting price movements, but still taking advantage of them. Without using any indicator, or whatever contraption, and nonetheless pocketing the profits.

What you want from an automated stock trading program is to have a machine do this for you.

Of note, ABT was not an exception. It might be at the top of the list in my DEVX8 folder, but it does show about the same behavior as the other nine in the list, including DIA serving as benchmark. Here are the summary statistics.

[Figure 3: Summary Stats]

See related articles:
https://www.linkedin.com/today/author/guy-r-fleury-6041529

profiting from price fluctuations without predicting price movements, but still taking advantage of them. Without using any indicator, or whatever contraption, and nonetheless pocketing the profit

Unfortunately, your entire premise consists of "prediction", and showing a small sample of survivors obscures the fact that the majority of stocks end up worthless over time.

Quoting the DJIA is a different kettle of fish, but still predictive. You are predicting the continuance of economic growth, which may (or may not) prove prescient.

I must politely state that in my opinion your entire approach to stock markets and investment is flawed. There is of course scope for polite disagreement in life and the markets and it seems to me that you approach markets from the perspective of a computer programmer rather than a realist or investment professional.

No one can or should doubt the numbers you provide above. You have taken some numbers, applied some programming and drawn some conclusions. Unfortunately in an uncertain world your conclusions are incomplete or more probably inappropriate to apply to an infinite future.

I realise that I have been your constant critic for some years and a thorn in your flesh. But I hope that I may perhaps have served some useful purpose in curbing your enthusiasm for schemes which do not, sadly, have much basis in reality.

Mr. Garner, so now it appears sufficient to add a few numbers together, mostly ones, and it becomes lies. Awesome.

First, the selected group of stocks is somewhat representative of the market. It had an average, as a group, of about 10-12% in the CAGR department which is close enough to the US long term stock market average. So those 10 stocks were just ordinary. The stocks stayed on my list for the simple reason that their respective last price was higher than their initial price some 21 years ago (p(t) > p(0)).

If you find the 10-stock sample too small, then please do the work. Get a thousand stocks over the last 20 or 30 years and do the calculations. The formula is: nb_up_days += 1. You will find about the same numbers.

Regardless, it could not have been predicted with any accuracy some 20 or 30 years ago that they would still be there today. But that could have been observed day by day over the entire period. Giving at least 7,660 days to observe and decide what to do next.

Stock prices, in general, had been going up for the 220 years prior (kind of corroborating evidence). It is no surprise that prices continued to go up for the next 20 years just as the 10-stock sample.

It is looking at the glass half full or half empty again. Quite an old debate. Personally, I go for the half full scenario with the corroborating evidence. I am not part of the portfolio gloom and doom scene. After all, I'm a pessimist: in the long run, I am not going to make it.

One's job in building a stock portfolio starts with some "reasonable" stock selection process. You don't want everything and the kitchen sink in it. A stock trading strategy is not a fit-all, do-all investment/trading kind of thing, or else you are doomed. Every fund manager has to make a selection and allocation. Overall, one could consider whatever is selected as just a small sample of what was really available, whatever the selection criteria.

I would point out, Mr. Garner, that even the methods you have demonstrated in the past have also been as selective. Jumping in and out of stocks at the slightest hint of a downturn or shiver rippling in the market average.

You design a long only stock trading strategy, and then claim that most stocks are doomed to oblivion. There is kind of a contradiction in there.

What I find surprising is that based on what you just posted, that you even play the game at all.

Shouldn't you somehow prefer the short side of things? With a half-empty glass, you would know that there is still half of it available, even if it is to the downside, because that too is kind of a prediction. So, the market goes up on average, or it goes down. Which will it be? I program as if it were going up! My glass is half full.

Why, some 20 years ago, did you not short most stocks all the way down? If you missed the opportunity for whatever reason, it is here again. You can do it for the next 20!

Yes, I do make the prediction that over the long run, the US stock market will be higher on average. I take the same bet Mr. Buffett has taken over the years. It is directly tied to the prosperity of a nation. In every stock we can see the combined efforts and aspirations of every employee in all these companies summed up in one number. And all every employee wants is to survive and prosper by rendering some kind of service to others while building their own retirement funds. Regrettably, there too, some do not make it.

Going forward, some stocks will fade away and disappear. Yes. But the job is not to invest in them all, all the way down. In fact, the job is not only to notice they are there, that we will touch some, but to make sure we are not in them long for the ride down. I presented a chart to that effect in a prior post.

My trading strategies, or my modifications to other people's strategies, might be complicated, but they do have simple economic reasons for generating their alpha. How hard can it get, since any trading strategy can be reduced to just two numbers?

In the end, at bean counting time, we all keep score.

With respect, Guy, your strategies are far from complex. They are simply naive, ill-conceived and unrealistic. I do not want to upset you or insult you. That has never been my object.

I just don't believe you think things through properly. This is demonstrated perfectly by the following statement:

Get a thousand stocks over the last 20 or 30 years and do the calculations. The formula is: nb_up_days += 1. You will find about the same numbers.

My database contains currently listed as well as delisted stocks and mutual funds. I can't recall the exact number. 80,000 perhaps?

No one is accusing you of "lying". Certainly not me. I am sure you have many loyal supporters out there on Wealthlab but not many of them will have much experience.

Your numbers do not "lie" - they are merely produced from naive and unrealistic assumptions.

Mr. Garner,

Maybe something you might agree on:

...on some days, somehow, some stocks might go up, and on other days or the same day, some other stocks might go down...

For you, “the majority of stocks will end up worthless over time”. Which technically should be better played to the downside. But, who am I to argue.

My interest is for stocks that can remain above my selected initial minimum value and gradually move up from there in order to stay selectable in a portfolio. Note that doing so eliminates all companies that fail to prosper going forward. Therefore, I am not that much concerned about stocks going bankrupt since they will fail the test to remain on my selectable list of tradable candidates long before they go down to zero.

People try to do the best stock selection they can and use the best trading strategies they can devise. However, they don't take an 80,000-stock universe to do so, or use thousands of strategies. They don't add the kitchen sink to the process either. Their interest is mostly in the best performers to play long and, at times, the worst to play short. At the end of the day, all tradable shares of all listed companies are in someone's hands!

I have a dozen or so trading strategies on my website, each using different approaches, and with a lot of details on their governing equations.

Since, as you say, my strategies are so naive and simple, you could duplicate any of them anytime. Your way of showing how naive and simplistic they really are. Hope you could. But, I seriously doubt it.

Maybe a hint might help. I see the trading strategies used as operating in a tumultuous sea of quasi-unpredictable short term variance. A stochastic process where each stock's mean and variance are also quasi-random processes of other quasi-random processes with random jumps.

This, as before, turns out to be an unproductive exchange. So, I will let it be.

Therefore, I am not that much concerned about stocks going bankrupt since they will fail the test to remain on my selectable list of tradable candidates long before they go down to zero.

So how does that fit in with this statement you made in your previous post:

profiting from price fluctuations without predicting price movements, but still taking advantage of them. Without using any indicator, or whatever contraption, and nonetheless pocketing the profit

And as to the following statement:

However, they don't take an 80,000-stock universe to do so or use thousands of strategies. They don't add the kitchen sink to the process either.

No, they do not. But they probably could and should. If you insist on sampling, then do so on a statistically valid basis by including listed and delisted stocks at random over several draws.

You will then likely see that your following statement no longer holds:

Get a thousand stocks over the last 20 or 30 years and do the calculations. The formula is: nb_up_days += 1. You will find about the same numbers.

Unfortunately your analysis of markets does not hold water. And when you add to that your recommendation for dangerous levels of gearing it becomes readily apparent that you have not worked your analysis through. If you had you would realise that your account would be wiped out long before your goals were achieved and that in any event you would be closed down by broker margin calls.

Did anyone else running this algo live get dealt YUMC at the beginning of the month? It had a not-so-nice 15% drop yesterday. I'm wondering what parameters I should tweak to avoid getting this stock or its ilk next time...

Dan,
The stock took a 15% hit, or your account did? I would say the best protection from a drop like that is to be diversified. If you were invested in 30 or more stocks, that move would equate to 0.5% or less of your account. But if you were invested with more than half of your account in that stock, which is possible with some of the versions above, then that move could be much more painful.
That's why as much as I love math, basic principles outweigh fancy formulas. Especially when those formulas have no bearing on out of sample performance.

Yeah, no, I am diversified, and perhaps things like YUMC will just happen from time to time. Looking at the chart, it does appear to be a "stock on the move". It's only been trading since last October, so perhaps adding a filter to reject newer issues could be something to look into. But I imagine you'd miss out on other opportunities that way...

Well, that is the beauty of diversification: you might miss on the downside, but you will miss on the upside also. And of course you can filter, but where do you start and where do you end: filter on market cap? PE? volatility? age on the stock market? So many parameters.