Here’s a trading strategy that I think has a good “story” (no link this time; it’s one of my own ideas and not a replication of an academic paper). This is the motivation: between smart-beta ETFs, hedge funds, and other asset managers, a huge amount of money has been invested in factors that have shown some historical ability to outperform the market: value, momentum, low volatility, high dividend yield, quality, etc. Consider the low volatility factor. There are now 25 ETFs, with $35 billion in assets, dedicated to this one factor alone (see here). But smart-beta ETFs may just be the tip of the iceberg. At a Deutsche Bank quant conference in 2012, there was a panel discussion on low volatility strategies, and one panelist estimated that there was $160bn devoted to this strategy among all types of money managers. And this factor has only grown in popularity in the four years since that conference.

Quants typically spend a great deal of time searching for new factors. But here’s another approach: for these standard factors, trade stocks as soon as they migrate into or out of an extreme quantile, in anticipation of fund flows from rule-based smart-beta ETFs and other similar money managers. For example, as soon as a stock migrates into a low volatility quantile, buy the stock with the expectation that many other funds tracking this factor will eventually have to buy it too. And the same idea applies to shorting stocks that move out of an extreme quantile.

Everyone trades these factors differently, which makes it harder to anticipate the crowd’s flows. For the low volatility factor, for example, some choose stocks with the minimum historical volatility (the $6bn SPLV ETF does this), while others choose stocks that make up the minimum variance portfolio from a mean-variance optimization (the $12bn USMV ETF does this). Some instead choose low idiosyncratic risk stocks or low beta stocks. The rebalancing schedules differ, and the historical window for estimating volatility may differ. Some pick deciles vs. quintiles. Some use a large cap universe vs. a small cap universe. And some combine low volatility with other factors, like high dividend yield.
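To make the dispersion concrete, here is a minimal sketch (on synthetic return series, not real data) of two of these definitions: ranking by trailing historical volatility versus ranking by market beta. A stock can score well on one measure and poorly on the other, which is exactly why funds tracking the "same" factor end up holding different names.

```python
import numpy as np

def trailing_vol(returns, window=252):
    """Annualized historical volatility over the trailing window."""
    r = returns[-window:]
    return np.std(r, ddof=1) * np.sqrt(252)

def beta_to_market(returns, market, window=252):
    """OLS beta of a stock's daily returns against market returns."""
    r, m = returns[-window:], market[-window:]
    return np.cov(r, m, ddof=1)[0, 1] / np.var(m, ddof=1)

# Synthetic daily returns for illustration only.
rng = np.random.default_rng(0)
market = rng.normal(0.0003, 0.01, 504)
# Stock A: low total vol but highly correlated with the market.
stock_a = 0.8 * market + rng.normal(0.0, 0.002, 504)
# Stock B: higher idiosyncratic vol but near-zero market beta.
stock_b = rng.normal(0.0003, 0.012, 504)

print(trailing_vol(stock_a) < trailing_vol(stock_b))               # A looks "low vol"
print(beta_to_market(stock_b, market) < beta_to_market(stock_a, market))  # B looks "low beta"
```

A minimum-volatility screen would pick stock A; a low-beta screen would pick stock B.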

In the attached backtesting algo, I focus on the momentum factor. Although there is no standard way of computing momentum, the method most commonly cited in the literature is the “12-1” momentum factor, which is a stock’s 12-month return excluding the last month’s return (the last month is often skipped to avoid the short-term, one-month reversal in stocks). There are many variables to choose from with a strategy like this, which creates huge potential for data mining. I went short stocks that migrated out of the top decile (decile 10 to decile <=6) and went long stocks that migrated into the top decile (decile <=6 to decile 10). Similarly, I went long stocks that migrated out of the bottom decile (decile 1 to decile >=5), and shorted stocks that migrated into the bottom decile (decile >=5 to decile 1). For the stocks that migrated to the extreme decile, I hedged the momentum exposure by selling an equal dollar amount of momentum stocks (stocks that haven’t migrated from the extreme deciles). Momentum is a volatile factor with periodic crashes, so I wanted to hedge out the momentum exposure. I measure a change in decile by comparing each stock’s momentum decile one month ago with what would theoretically be the momentum decile one month from now (the 11-0 stocks today will be 12-1 stocks in one month). I chose a fixed two-month holding period for stocks that migrated.
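As a rough illustration of the mechanics described above, here is a minimal sketch of the 12-1 computation and the migration test on a synthetic monthly price panel with hypothetical tickers; the attached algo does this against a real universe, so treat this only as the arithmetic.

```python
import numpy as np
import pandas as pd

def momentum_12_1(prices):
    """12-1 momentum: trailing 12-month return, skipping the most recent
    month. `prices` is a DataFrame of monthly closes (rows = month-ends)."""
    return prices.shift(1) / prices.shift(12) - 1.0

def decile(scores):
    """Assign each stock a momentum decile 1..10 (10 = highest)."""
    return pd.qcut(scores.rank(method="first"), 10, labels=False) + 1

# Hypothetical 30-month, 50-stock price panel for illustration.
rng = np.random.default_rng(1)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.005, 0.05, (30, 50)), axis=0)),
    columns=[f"S{i}" for i in range(50)],
)

mom = momentum_12_1(prices)
prev = decile(mom.iloc[-2])                       # decile one month ago
# Today's 11-0 window becomes next month's 12-1 window, so it proxies
# the decile one month from now.
nxt = decile(prices.iloc[-1] / prices.iloc[-12] - 1.0)

entered_top = (prev <= 6) & (nxt == 10)  # longs: migrated into decile 10
left_top = (prev == 10) & (nxt <= 6)     # shorts: migrated out of decile 10
print(entered_top.sum(), left_top.sum())
```

The same pattern, with the inequalities flipped, gives the decile 1 migrations.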

There are two enhancements I tried to improve performance. First, you get much better results by just trading migrations to and from decile 10 and ignoring migrations to and from decile 1, which could be data mining but also is consistent with the notion that there are more long-only funds that would only be trading the top decile of stocks (there is a flag in the code to choose this option). I also looked at screening out M&A names, since some of these stocks may migrate into decile 10 on the announcement of a merger but flatline after that. The M&A screen modestly improves results.

This idea could be applied to any number of other popular factors. I briefly tested a few of them, but with very mixed results. Keep in mind, though, that there are nuances with each factor. Although momentum stocks are more volatile and you see a fair amount of large migrations over a few months, some of the other factors, like the low volatility factor and factors based on accounting data, are more stable and do not migrate as much or as quickly, so the threshold for choosing migrations may have to be smaller (for example, for these factors, it may make more sense to look at migrations to adjacent quantiles, although stocks near the border can fluctuate back and forth). And there is less consensus on what is a “standard” quality factor (pardon my shameless self-promotion, but quality factors are described in my last post), so the assets under management may be more dispersed around the various measures of quality. But it might be interesting to look at some of these other factors in greater detail.

# Backtest ID: 588bf2295cc1c65e19517d32
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

10 responses

Amazing post. Thanks for sharing, and hope to expand on the concept soon.

Thanks for the great post; however, do you think that if many people followed this strategy, it would create volatility in and of itself and bring the instrument back into a higher quantile?

Love the depth of knowledge.

Hey Randy,

Great question. In general any strategy takes advantage of some property/inefficiency/anomaly in the market. This strategy seems to be taking advantage of the economic hypothesis that currently not enough people are anticipating upcoming factor flows. If that were not the case, then before expected long factor flows, folks would purchase a ton of stock, driving the price up and incorporating knowledge of the upcoming factor flow into the stock price. The opposite is also true for the shorts.

An important fact to consider here is that the large firms implementing these massive-scale ETFs that Rob describes take a long time to change direction. Even if people started purchasing stocks ahead of time, order volume will push prices up for longs and down for shorts. This is due to market impact, AKA slippage. Gus Gordon did a great study on slippage here. So if you know that orders are going to come down the pipe for a given stock, and you can get in ahead of those orders, you can generally make money because the subsequent orders will push the price up or down. Obviously the real world is fairly complicated, as enough volume from quants expecting factor flows could also change the ranking of stocks by momentum/volatility and make the order flows change, rendering your attempt moot. You could then try to model this whole new world by expecting the quant flow to come in ahead of the factor flows, figuring out how things should settle out, and buying ahead of everybody else. This is how an arms race develops, and usually you want to avoid having to model complicated situations like this.

In general, any strategy will decay as more and more people become aware of it and start trading it. If others are taking advantage of the same mispricing, then prices will be pushed closer and closer to 'efficient' pricing. In practice, this is one of the things you want to study when designing a new strategy. A good way to do this is to look at the alpha of the strategy over time using a tool like Alphalens. You can check to see if it's consistent or has been decaying recently; the latter is an indication that others are starting to take advantage of the same inefficiency. You can also use Pyfolio to do a similar analysis, and do a more in-depth analysis manually to determine where your returns are coming from.
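Outside of Alphalens, one simple way to eyeball alpha decay is a rolling OLS of strategy returns on a benchmark. The sketch below is self-contained and uses synthetic data in which the edge is deliberately built to decay, so the rolling alpha visibly shrinks over the sample:

```python
import numpy as np
import pandas as pd

def rolling_alpha(strategy, benchmark, window=126):
    """Rolling annualized alpha from an OLS of strategy vs. benchmark
    daily returns over a trailing window."""
    alphas = {}
    for end in range(window, len(strategy) + 1):
        s = strategy.iloc[end - window:end]
        b = benchmark.iloc[end - window:end]
        beta = np.cov(s, b, ddof=1)[0, 1] / np.var(b, ddof=1)
        alphas[strategy.index[end - 1]] = (s.mean() - beta * b.mean()) * 252
    return pd.Series(alphas)

# Synthetic example: an edge that decays linearly to zero.
idx = pd.bdate_range("2014-01-01", periods=756)
rng = np.random.default_rng(2)
bench = pd.Series(rng.normal(0.0003, 0.01, 756), index=idx)
decaying_edge = np.linspace(0.0015, 0.0, 756)
strat = pd.Series(0.3 * bench.values + decaying_edge
                  + rng.normal(0.0, 0.002, 756), index=idx)

alpha = rolling_alpha(strat, bench)
print(alpha.iloc[0] > alpha.iloc[-1])  # early alpha exceeds late alpha: decay
```

On a real strategy you would run the same regression on live or backtest returns and look for a downward drift in the series.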

For this particular strategy, I'm not sure how long the alpha will last. I would use this post more as an example of a great strategy idea from a veteran quant, showing things that are important to consider when testing an economic hypothesis. I think you could take the ideas shown here, plus the development process, and produce something new and interesting that doesn't have as large a risk of being exploited.

Does this make sense?


Hi Rob -

If there is an inefficiency, wouldn't it also propagate to the ETFs, hedge funds, etc. themselves? For example, in the case of momentum, there are a bunch of ETFs. It sounds like your strategy implies that those ETFs are paying a premium for stocks that are queued up to be added, and selling at a disadvantage for stocks that are queued up to be sold. So wouldn't the ETFs suffer a tracking error relative to an ideal index? For example, as a benchmark, MTUM uses the MSCI USA Momentum Index. So, if I'm understanding what you are surmising, MTUM should be having some difficulty tracking the index.

Also, in the case of factor indices, wouldn't it make sense to buy access to the index? This would give direct information on the anticipated flows, right? What if you had access to the point-in-time MSCI USA Momentum Index (meaning, as soon as changes are queued, you would know)? That would help, right? Presumably, ETFs buy this information so that they can track an index, no? Otherwise, they'd always lag the index.

Great idea. One question: have you tested this with regard to trading time? For example, if we are going off of flows from large institutional ETPs for factor portfolios, many have mandates that rebalance every X months. Usually this means at the end or beginning of a month. Does trading a few days earlier or later have any effect on cumulative return?

Not to speak for Rob, but I think it would be a great analysis to do a parameter sweep. Looking at trading up to 5 days before and after the current day would let you see how the metrics dropped off in both directions and give more info about whether this is overfit (in which case the parameter sweep would look more noisy), or something real (in which case you'd expect a smoother drop-off).
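A sketch of what such a sweep harness might look like; `run_backtest` here is a hypothetical placeholder that fakes a smooth Sharpe peak at a zero-day offset, standing in for re-running the full backtest at each timing offset:

```python
import numpy as np

def run_backtest(offset_days):
    """Hypothetical placeholder: in practice this would re-run the full
    backtest with trades shifted by `offset_days` and return its Sharpe.
    Here it fakes a smooth peak at offset 0 plus a little noise."""
    rng = np.random.default_rng(offset_days + 10)
    return 1.0 - 0.08 * abs(offset_days) + rng.normal(0.0, 0.01)

# Sweep trade timing from 5 days early to 5 days late.
sharpes = {d: run_backtest(d) for d in range(-5, 6)}
for d, s in sharpes.items():
    print(f"offset {d:+d}: Sharpe {s:.2f}")

# A smooth drop-off away from the peak suggests a real timing effect;
# a jagged, noisy profile suggests the chosen timing was overfit.
print("best offset:", max(sharpes, key=sharpes.get))
```

The shape of the profile, not just the location of the maximum, is the thing to inspect.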

@Grant
I would say that if many people traded these factors in anticipation of ETF flows, it would hurt both the performance of the ETF and of the index, but not necessarily the tracking error between them. For example, the index that the MTUM ETF tracks, the MSCI USA Momentum Index, does a majority of its rebalancing twice a year, on the last trading day of May and November (see the index methodology here). So as long as the ETF matches the closing price on those days, it will not suffer any tracking error. But even though there’s no tracking error, ETFs could execute at worse prices. For example, if enough people followed my strategy (which is unlikely!) and continued to sell a stock that had migrated out of the top momentum decile, it could further drive the price lower when the rule-based ETFs were scheduled to sell, given that there is a lag between when a migration occurs and when it typically gets unwound. For that reason, many ETFs have become less transparent about their methodology in order to make it harder to exactly predict the changes in their holdings. Some large ETF issuers, like Vanguard, not only don't disclose their methodology, but they also don't disclose their holdings in real time (see here). However, with so many ETFs tracking the same factor, it may not be critical to predict any one ETF exactly. Most of the smart-beta ETFs are too small individually to care about any one ETF's rebalancing (unlike, say, when a company gets added to the S&P 500 index), but collectively there is a great deal of money that tracks momentum. The idea here is that if there are many players - smart-beta ETFs, hedge funds, and mutual funds (for example, AQR has a single-factor momentum mutual fund) - all using slightly different measures of momentum and on different time scales, then large migrations can lead to large flows into and out of specific stocks over time by many players following similar, but not exactly the same, rule-based strategies.
Bloomberg reported yesterday that factor-based ETF assets have now topped $500 billion, and hedge funds may have another $300 billion devoted to factor investing (see here).

As far as purchasing the index data to get the index announcements: my guess is that it would be hard to recoup the (often hefty) costs. If a particular index did reveal large, unexpected flows when it makes its announcement to subscribers, enough market participants have access to the announcement data that it would theoretically be reflected in where stocks open right after the announcement. Again, the analogy is the S&P 500: when a new stock is announced as an addition to the index, which is almost impossible to predict with any degree of accuracy, the stock opens up a few percent higher the next morning and then on average flatlines. The announcement effect is impossible to capture even for index subscribers. Of course, if you’re one of the “Authorized Participants,” who have the exclusive right to create/redeem ETFs, you may have to subscribe anyway to get exact share changes.

I think testing the strategy a few days before the end of the month would be an interesting variation to look at. Although, as I said, the timing of when different managers trade the factors varies, it is possible that many of them rebalance their factors on a monthly or quarterly basis and may choose to do so at the end of the month, possibly leading to some calendar effects.

Thanks Rob -

I'm still trying to put my finger on what basic inefficiency might be at play here. I guess the idea is that Wall Street has sliced and diced the equity market in various ways and created products. Whether the products work or not, they can convince investors to slosh their money around from investment to investment, particularly if there is some academic backing to the factor. In the case of MTUM, if they can drum up interest in the investment, they can earn 0.15%, which (if I've done my math correctly) amounts to $3M in annual revenue--not much, but I guess once the code is written, it doesn't cost much to run on an existing ETF platform. Stocks that get added to MTUM and its ilk tend to get a little boost to their prices, since they are in demand (all of the momentum-seeking investors want them), whereas stocks that get dumped by MTUM-like investments see a slight price haircut, since by policy they need to be sold (momentum-seeking investors are no longer interested).

I guess the key question here is have you actually isolated the effect, supposing it is there? How could one tell if it has anything to do with the momentum factor ETF's, etc. out there? Maybe it would be there without them? In other words, what is the testable hypothesis here? It seems tricky to show directly that it is actually the existence of the Wall Street momentum factor investment machine that causes the inefficiency, versus something else at play.

Very impressive research! Quite a clever way to explore the flow opportunities. Quick question: would a sector or beta hedge help to smooth the curve?

Hey Shanming,

You may want to try our experimental Optimize API (you can see it here). It would allow you to automatically enforce sector- and beta-neutral constraints on your portfolio.

https://www.quantopian.com/posts/optimize-api-now-available-in-algorithms

In general, the best way to smooth out any returns curve is to add more models so that your signal is more consistent.
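For the beta half of the question, and absent the Optimize API, here is a bare-bones sketch of the underlying idea on synthetic data (regress the strategy's returns on the market and short beta units of it; this is an illustration, not the API itself):

```python
import numpy as np

def beta_hedged(strategy_rets, market_rets):
    """Estimate the strategy's market beta via OLS, then subtract
    beta units of the market return to get a hedged return series."""
    beta = (np.cov(strategy_rets, market_rets, ddof=1)[0, 1]
            / np.var(market_rets, ddof=1))
    return strategy_rets - beta * market_rets, beta

# Synthetic daily returns: a strategy with a true beta of 0.5.
rng = np.random.default_rng(3)
market = rng.normal(0.0004, 0.01, 504)
strat = 0.5 * market + rng.normal(0.0002, 0.004, 504)

hedged, beta = beta_hedged(strat, market)
resid_beta = (np.cov(hedged, market, ddof=1)[0, 1]
              / np.var(market, ddof=1))
print(round(beta, 2), round(resid_beta, 6))  # residual beta is ~0 in-sample
```

A sector hedge works the same way, with one exposure estimated and offset per sector; the Optimize API bundles those constraints for you.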