Quantopian-Based Paper on Momentum with Volatility Timing

This post features the Quantopian-based research presented in the paper "Momentum with Volatility Timing"
(SSRN: http://ssrn.com/abstract=3417360). Specifically, the paper introduces the volatility-timed winners
approach that applies past volatilities as a timing predictor to mitigate momentum factor underperformance for
time intervals spanning the market downturn and post-crisis period. The proposed approach was confirmed with
Spearman rank correlation and demonstrated in relation to different strategies including momentum volatility scaling,
risk-based asset allocation, time series momentum and MSCI momentum indexes.

The figure below illustrates the volatility-timed winners approach and compares it with the conventional winners-
minus-losers momentum. The corresponding implementation is based on the Quantopian/Alphalens platform and
provided in the attached notebook.


This is excellent, thanks so much for posting. It has long been my hope that quant finance and economics could move toward open science, supported by the Quantopian platform, as this case beautifully illustrates.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Hi Yulia,
Thank you for using and citing Quantopian as part of your research process!
thanks,
fawce


Thomas and John, Thank you for the kind feedback and cool platform.
Indeed, Quantopian was instrumental for this paper. Best regards, Yulia

I agree with @Thomas and @John, it is an interesting notebook.

There is something in there that could apply to any wannabe market-neutral trading strategy. However, it still depends on the premises made about the market in general.

Here, a trend-following assumption is made by assigning more potential upside to the top 10% of momentum stocks while giving more downside to the lowest 10%.

In the presented figure #1, it does show the strategy breaking down, and badly. The poor performance is attributed to a change in behavior when in reality the market had not changed. It did what it usually does after a market meltdown: it recovered. And from there, it becomes a simple math problem.

A 10% decline requires an 11.11% rebound to recover; not much. A 50% drop, however, will need a 100% rise to get even. During the 2008-09 financial crisis most stocks declined, and few were left to carry the upside torch, while the bottom momentum stocks were plentiful, many exceeding a 50% loss. Enough, in fact, to justify a lot of heavy short positions in the seemingly worst-performing stocks.

After the financial crisis, the majority of stocks recovered over the next few years, and then some (over 300% from the general market lows). It meant that the stocks that declined by 10, 20, 30, 40, 50 or 60% saw their respective prices increase by 11.11, 25.00, 42.85, 66.67, 100.00, and 150.00% just to get even.
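That break-even arithmetic can be checked in a few lines (a plain illustration, not part of the notebook):

```python
def required_rebound(drawdown):
    """Fractional gain needed to get back to even after a fractional loss:
    r = 1 / (1 - d) - 1."""
    return 1.0 / (1.0 - drawdown) - 1.0

for d in (0.10, 0.20, 0.30, 0.40, 0.50, 0.60):
    print("{:.0%} decline needs a {:.2%} rebound".format(d, required_rebound(d)))
```

The asymmetry is the whole point: the deeper the decline, the larger the recovery move that a short position is exposed to.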

Therefore, for sure, it was not a good idea to hold big shorts at the very bottom of their respective downfalls. The more the stocks went down, the heavier the short positions became, and this right at the bottom of the cycle, where they maxed out.

An important number in that trading strategy is the 0.27 constant used in its threshold function. A number that could be known, or set, only after doing many simulations. Increase this threshold to 0.50 and there will be no trades at all. Reduce it to 0.20, and the W10-Timed will skip the period from mid-2007 to mid-2012 with no trades. Make the threshold 0.10 and there will be no trading at all for W10-Timed. That is how critical this number is. It is a number you could not have known or optimized before running any of these simulations.
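As a sketch of what this gate amounts to, the timing rule can be wired roughly as follows (VW = 126 and TRHD = 0.27 are the parameters quoted later in the thread; the weight dictionary and its keys are purely illustrative, not the notebook's actual code):

```python
import numpy as np

VW = 126     # realized-volatility window (trading days), per the thread's parameters
TRHD = 0.27  # the threshold constant under discussion; treat it as a free parameter

def timed_weights(daily_returns, target_weights):
    """Go to cash when annualized realized volatility exceeds the threshold.

    daily_returns: recent daily returns of the winners portfolio (1-D array).
    target_weights: dict of {asset: weight} chosen by the momentum ranking."""
    realized_vol = np.std(np.asarray(daily_returns)[-VW:], ddof=1) * np.sqrt(252)
    if realized_vol > TRHD:
        return {asset: 0.0 for asset in target_weights}  # park in cash
    return target_weights
```

Raising TRHD keeps the strategy invested through more turbulence; lowering it sends more of the backtest to cash, which is exactly the sensitivity described above.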

Another assumption concerns the method of determining the trend. Here, it is defined by the momentum over the past year, measured as of one month ago. Meaning you are already one month late over the past year, which makes you somewhat over 6 months late to the party: your trading decisions rest on 6+ months of lagging data, out of phase by at least one month. The impact is that the program did not see the recovery coming, was quite late to notice it had happened, and was still piling on shorts.
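For reference, the 12-1 momentum measure being discussed (MOM = 252, EXCL = 21 in the IDE parameters quoted later in the thread) can be sketched as:

```python
MOM = 252   # 12-month lookback in trading days
EXCL = 21   # skip the most recent month to avoid short-term reversals

def momentum(prices):
    """12-1 momentum: total return over the past year, excluding the last month.

    prices: sequence of daily closing prices, most recent last."""
    return prices[-1 - EXCL] / prices[-MOM] - 1.0
```

Ranking the universe by this number and taking the top and bottom deciles gives the winners and losers discussed above; the one-month exclusion is precisely the lag being criticized.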

Nonetheless, you can increase the strategy's performance by giving it some alpha. Since the strategy is operating on returns, its basic equation is: $$F(t) = \displaystyle{F_0 \cdot \prod ^d (1+r_{d,j})}$$ and to achieve more, the equation needs to be transformed into: $$F(t) = \displaystyle{ F_0 \cdot \prod ^d (1+r_{d,j} +\alpha_{d,j})}$$.
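A minimal numerical illustration of that transformation (the alpha and return values here are arbitrary, chosen only to show the compounding effect):

```python
import numpy as np

def terminal_value(f0, daily_returns, alpha=0.0):
    """F(t) = F0 * prod(1 + r + alpha): compound daily returns, optionally
    boosted by a small per-period alpha."""
    return f0 * np.prod(1.0 + np.asarray(daily_returns) + alpha)

# One basis point of daily alpha, compounded over roughly ten years of
# trading days, already produces a visible spread over the base case:
base = terminal_value(100000, [0.0003] * 2520)
boosted = terminal_value(100000, [0.0003] * 2520, alpha=0.0001)
```

The spread between the two terminal values grows with the length of the trading interval, which is the sub-martingale behavior described further down the thread.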

I opted to give the strategy more time since my first interest is to determine if such a strategy can last. And I added some tiny alpha to the picture. This transformed figure #1 into:

By treating the winning and losing returns separately, and giving them small alpha numbers, it was possible to push even further and give even better numbers. Each was awarded the same initial capital and then combined as illustrated in figure #2.

You could push for slightly more alpha, but then, the credibility factor would be put into play. Here, like in other trading strategies, the understanding of the pressure points within a strategy gains its importance. It is with this understanding that you can push on what at first appears as limits when in fact they were just lines in the sand.

On the other hand, you could allocate half of the capital to each of the winners and losers sides. This would be more like the following where less capital would be used:

Again, it is a matter of choices and preferences.

Guy, Thank you for the post.

The conventional winners-minus-losers momentum was identified by Jegadeesh and Titman (1993) on US data spanning from 1965 to 1989. This paper assessed momentum for recent data and demonstrated that its behavior is broken during the market downturn. Therefore, I would consider these findings as empirical results rather than assumptions. Furthermore, Figure 2 in the notebook (and corresponding Exhibit 3 in the paper) is consistent with your math.

Going forward, there are two parameters in the volatility-timed winners approach that can be further analyzed: the realized volatility threshold constant and the window length. Rather than use simulations to compute the former, I would prefer developing a model that captures/identifies the corresponding significant variables (e.g., volatility range and rate of change). In relation to the latter, the paper shows the different effectiveness of 3-month and 6-month realized volatility (Exhibit 6), which implies the need for further study.

Regarding the one-month lag: the last month is excluded from computing priors in order to avoid the short-term one-month reversals (Jegadeesh, 1990; Lehmann, 1990). Yes, the 11-month prior is relatively long in comparison with the downturn interval. Therefore, the volatility-timed approach complements it with a shorter (3- or 6-month) realized volatility for capturing the required time interval.

Please elaborate how you determine alpha in the equations.

Best regards, Yulia

All: Leo M is going to present this paper and NB in a journal club on August 1 at 10am ET. It will be a hangouts call where we just talk about the paper with Leo leading the discussion. If you want to join you should have read the paper. Email me at [email protected] if you want to join.

@Yulia, I modified the notebook before reading your paper. I wanted first to know if there was something that could be done before undertaking the read.

I agree with your general assessment and with your conclusions. There is more work to be done, more ways to improve the methodology, and it should prove even more rewarding.

When we design trading strategies, we actually design what could be considered our game within the game. We set barriers, trading rules, directives, goals and trading procedures where we want to account for most contingencies and constraints.

It was the strategy, in this case, that "defined" what it considered an uptrend and a downtrend. It is its momentum definition that directs all the trading activity. The volatility threshold acts as its trading barrier and was shown to be quite sensitive. For sure, a better method should be sought: more adaptive and more responsive to market turmoil when it happens. The 1-month delay will reduce the whipsaws a bit, but it also misses crucial moments at turning points due to its delayed reaction.

The W10-Timed strategy added, for protection, a go-to-cash alternative when volatility exceeded the threshold. It could be considered an excessive measure, but it is at least a minimal one. However, one should note that over the 15 years, this event was one of a kind, and that does not give it much statistical significance, even though it did happen. We will have more of those in the future; we just do not know when. Also of note, it would have given even better results to simply reverse the weights when the threshold was exceeded. Nonetheless, the 0.27 threshold is very sensitive. A little bit more or a little bit less and it will change the whole return outlook.

I usually consider strategies from their total payoff matrix perspective: $$\sum (H \cdot \Delta P)$$. That simple expression will fit any trading strategy. What I did is put pressure on the holding matrix H by giving it some alpha, as in $$\sum (H \cdot (1+g_{d,j})^t \cdot \Delta P)$$ where the $$g_{d,j}$$ function is adapted to each stock (j).
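In matrix terms, that payoff expression can be sketched as follows (H and ΔP as days-by-stocks arrays; a scalar g stands in for the per-stock g_(d,j) function described in the article linked later in the thread):

```python
import numpy as np

def payoff(H, dP, g=0.0):
    """Total payoff sum(H * (1+g)^t * dP).

    H: holdings matrix (days x stocks); dP: period-to-period price changes,
    same shape. g = 0 recovers the plain sum(H * dP); a positive g compounds
    the holding multiplier as the trading interval t grows."""
    t = np.arange(H.shape[0]).reshape(-1, 1)  # one exponent per trading day
    return float(np.sum(H * (1.0 + g) ** t * dP))
```

The point of the sketch is that the enhancer touches every cell of H, so every trade from start to finish is affected, a little at first and more as t grows.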

The impact is that every trade is affected by the procedure from start to finish. A little in the beginning and gradually more and more as the trading interval increases. As can be observed, the alpha is compounding. As a result, the bet size is increasing with time. In fact, we have the design of a sub-martingale at play, not as a linear add-on, but as a compounding one. It is also why you see the relative average spread compared to the benchmark increase with time.

Note that all this is being done without knowing what is being traded, when, and by how much. It has all been delegated to the top and bottom 10% trend definition under the threshold constraint.

This nonetheless, at any one time, ignores 80% of the tradable universe. Still, due to the constant fluctuations in the selected few, a lot of the stocks get to be selected. The asset_list counted 4,652 stocks. This is a lot more than needed to be classified as “diversified” holdings. This is understandable, the top and bottom 10% are constantly changing. Those at the bottom of the range more than those at the top, but still changing positions all the time, if not dropping out of the list. That 80% not considered should be investigated further since most of the stocks getting to the top or bottom 10% might come from there, and thereby increase performance even further should they be detected sooner from either side.

Guy, thank you for the response.

While the notebook outlines the development of the volatility-timed winners approach from the conventional strategy, the paper aimed to address three momentum instantiations: factor, basis for index investing, and trading strategy. Quantopian therefore was essential for conducting this composite study. With regards to momentum as a factor, the analysis highlights its application intervals. Next, the transition to the long-only approach following the assessment of the winners-minus-losers momentum behavior is consistent with the MSCI Momentum Index methodology. Third, the paper compares the cross-sectional factor and asset-oriented time series momentum strategy.

The 10% cross-sectional factor was used as a reference case to facilitate comparison with the scaling approach documented by Barroso and Santa-Clara (2015). Time series momentum (Moskowitz, Ooi, and Pedersen, 2012; Hurst, Ooi, and Pedersen, 2017), on the other hand, represents an asset-oriented strategy where assets are selected based on their individual momentum prior history. As shown in the paper, in both the cross-sectional and asset-oriented approaches the realized volatility can be considered as a timing signal (Exhibit 9). Then, one primary goal of the threshold was to highlight the importance of identifying the volatility-timed intervals and revisiting/changing strategy during these time periods, especially within the context of multi-factor portfolios.

@Yulia, a more elaborate and detailed explanation for the equation used can be found in my third article of a series: https://alphapowertrading.com/index.php/quantopian-posts/333-reengineering-for-more-iii.

This is, I think, the 7th strategy I have enhanced or repurposed in Quantopian forums using parts of the equation given in that article. You have another dozen or so simulations that have been chronicled on my website over the years based on the same equation.

The equation is designed to control the position sizing over the whole trading interval subject to price fluctuations, portfolio constraints, and preset goals.

Of interest in that article, the strategy's trading decision making was delegated to the CVXOPT optimizer, which was also somewhat trade-agnostic in the sense that we did not know at what times, or in what quantities, the selected stocks would be traded. In fact, I see it more as a variation on the conditions illustrated in your notebook above.

I wrote my last two books on that subject. One to illustrate and demonstrate that theoretically, it would hold using randomly generated price series, and the more recent one to demonstrate the practical side of it; that it would also apply using actual historical stock prices. Both using the CVXOPT optimizer.

The optimizer will only detect something if there is something there in the first place. That too was effectively demonstrated by building multiple scenarios of 20,000 randomly generated portfolios each having 200 stocks of randomly generated prices.

I find that gaining the ability to control the position sizing under diverse market conditions can be a major alpha extractor, not just for a trade here and there but for all trades over the entire trading interval. And this is what the position-sizing equation permits, seeks, and executes.

The very first thing a trading strategy has to do is survive over the long term, and there it should be measured in decades, not just the couple of years that many of the portfolio simulations on Quantopian exhibit. The next thing it needs to do is perform above market averages and above its peers. Also, all frictional costs should be accounted for; otherwise, a strategy might be running on the fumes of those unconsidered costs. I do not think that the present notebook scenario considers those frictional costs, and if it did, they would be a drag on performance. Nonetheless, a slight nudge to the equation's controlling parameters would be more than sufficient to compensate for those costs.

There are many ways to achieve outstanding results. Of the 7 enhanced, reengineered, and illustrated strategies I have covered in these forums, all had a different architecture, a different stock selection process, and different trade mechanics; still, I managed to use some variant or part of the cited equation to raise performance to higher levels, including all frictional costs except leveraging charges, which would tend to reduce overall performance but can also be compensated for.

Each strategy had a different protection mechanism in place but overall applied to about the same general trading regions with fuzzy boundaries. This is understandable: whenever we try to predict what is coming next, it is still uncertain terrain which will unfold the way it wants to, and not necessarily the way we might anticipate. It is why we have to adapt, and maybe control, the way we interact with all that uncertainty.

From my observations, I cannot change the price matrix P, whatever its size. It is recorded history. Nor can I change the price difference matrix ΔP which is just the period to period price fluctuation. And going forward, I will not be able to control or even influence those two matrices either.

However, I can control the way I participate in the market through the intermediary of the stock holding matrix H, whether over past or future price data. H is the residual of two matrices: H = B – S, where all trades are accounted for. H is the total ongoing inventory after all the buying and selling.

We can transform the total payoff matrix into $$\sum (H \cdot (1 + g_{d,j})^t \cdot \Delta P)$$ and adhere to the $$g_{d,j}$$ holding enhancer function since, on average, we should design it to be positive, which in turn will increase overall performance by compounding its bet-sizing capabilities.

This is what was demonstrated in using the above notebook. Enhancing overall performance, from start to finish with every trade being affected, and this, even without knowing what would be traded, at what times or in what quantities. But nonetheless, outperforming the market's benchmark (SPY) and most probably a lot of other players.

This is my attempt to implement in the IDE the VOLATILITY-TIMED WINNERS APPROACH strategy proposed by Yulia Malitskaia
in the paper "Momentum with Volatility Timing".

Period: START 07/01/2004 END 07/19/2019
No leverage: LEV = 1.0;
Absolutely the same momentum factor: MOM = 252; EXCL = 21;
The same percentile: PCTL = 10;
The same Volatility Period and Threshold: VW = 126; TRHD = 0.27;

The only difference: I used the annualized downside volatility of the winners' mean daily returns.
I also used bonds ('TLT', 'IEF') to park money during volatility spikes, which is in line with the idea that,
at such times, the momentum factor can be replaced by other strategies.
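Since the exact downside-volatility formula is not spelled out in the post, one common convention (standard deviation of only the negative daily returns, annualized) would look like:

```python
import numpy as np

def downside_volatility(daily_returns, annualize=252):
    """Annualized downside volatility: standard deviation of only the
    negative daily returns. (One convention among several; the algo's
    exact formula is not shown in the thread.)"""
    r = np.asarray(daily_returns)
    neg = r[r < 0]
    if neg.size < 2:
        return 0.0
    return neg.std(ddof=1) * np.sqrt(annualize)
```

Because calm up-trends contribute nothing to this measure, it stays below the threshold in rising markets and spikes in sell-offs, which is presumably why it works as a timing signal here.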

What do you think based on this tearsheet?


The short trades have a negative contribution; would it not be better to leave them out? Or do they compensate at the right time and have a bigger negative contribution later? (Can't see that from the tear sheet.)

Maybe you can share the algo then we can all tinker to improve.

Wonderful, Vladimir - I always learn a lot following your posts. From the metrics, there seem to be significant exposures to momentum and volatility.

@Peter

the short trades have a negative contribution, would it not be better to leave them out

There should not be any initial short positions.
I backtested one more time using order_optimal_portfolio().

Results are the same.
Most likely there is a liquidity problem or a bug in pyfolio.

@Karl

seems significant exposures to momentum and volatility

By definition this is a volatility-timed momentum strategy, so it is not for the Quantopian contest.

Guy, Thank you for elaborating your approach.

Vladimir, Thank you for assessing VTW within the IDE environment. I look forward to going through it in detail and will reply this weekend.

Vladimir, attached please find an updated notebook extended with two sections: IDE-Oriented Pipeline and Pyfolio Analysis. The pipeline was built from your IDE-based variant and included DownsideVolatility. This volatility is related to the alternative asset-oriented time series momentum approach and, as shown in Figure 5 (and Exhibit 9), requires a different threshold. Best regards, Yulia


Yulia,

In the notebook you implemented only the part of my algo where DV is below the threshold.
Can you combine both sides in the notebook, when DV is below the threshold and when DV is above it, without changing the threshold, for the period START 07/01/2004 END 07/19/2019?

I also don't really understand why another way of calculating volatility on the same universe is related to an alternative asset-oriented time series momentum approach.

Vladimir, Thank you for the suggestion.

This is consistent with the paper and should be added to the notebook accordingly under a new section Comparison with Time Series Momentum and Risk-Based Asset Allocation. Aiming for the weekend.

Hi Yulia,

nice updated review of the momentum literature, and proposal of improvement.

The threshold value is a typical example of overfitting.
For example, you can make a simple system with moving averages crossing on the SPX which makes money if you fine-tune the parameters of the moving averages on the historical data.

I also tried this kind of volatility timing before, but it is not robust out of sample. Either you found the magic number (thank you for sharing it...), or the chances of failing out of sample are at least 50%.

Adjusting it dynamically could be the holy grail of momentum investing ..

Bernie

Bernard,

This paper is not about a quick magic number/solution. First, it aims to highlight the broken behavior of the conventional momentum factor during a market downturn and proposes to take these results into account before applying existing and new solutions for mitigating factor underperformance. Then, based on the review of existing solutions, it proposes to apply the realized volatility from volatility-scaling methods explicitly as a timing signal for capturing the corresponding time intervals. The threshold function provides a transparent approach for delivering on the above statements. To make it robust in the out-of-sample scenario, the approach can be extended from several perspectives, for example with a composite timing model or a sigmoid-like function (in combination with scaling). Best regards, Yulia

Hi Yulia,

Identifying the market regime is very challenging.
Historical volatility is relatively easy to forecast using time series analysis, but implied volatility is much more difficult. I think implied volatility is probably even more important in identifying market regimes. By market regimes I mean periods of time when momentum or reversal strategies are, respectively, more profitable. Based on my research, the transition between regimes is difficult to predict; it is generally a posterior consideration and as such is not easily generalized to out-of-sample data.
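As one standard instance of the time-series forecasting mentioned for historical volatility (not Bernie's own model), a RiskMetrics-style EWMA forecast, with the classic daily decay of 0.94, can be sketched as:

```python
import numpy as np

def ewma_vol_forecast(daily_returns, lam=0.94, annualize=252):
    """RiskMetrics-style EWMA variance forecast:
    sigma2[t+1] = lam * sigma2[t] + (1 - lam) * r[t]**2,
    returned as an annualized volatility."""
    r = np.asarray(daily_returns)
    sigma2 = r[0] ** 2  # seed the recursion with the first squared return
    for x in r[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * x ** 2
    return float(np.sqrt(sigma2 * annualize))
```

Implied volatility, by contrast, embeds the market's forward-looking expectations and has no comparably simple recursion, which is the asymmetry Bernie is pointing at.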

Regards
Bernie

Yulia,

This notebook may be helpful for your research.

Measuring an algorithm’s sensitivity to volatility regimes:


@Vlad, Yes, agree with @Guy... +1 for useful work...already applied it to one of my own...and works like a charm!

Yulia,

This time I used Custom ATR as volatility factor.
Left unchanged all strategy parameters:

STK_SET = QTradableStocksUS(); MOM = 252; EXCL = 21; W = 0.01; PCTL = 10
VW = 126; TRHD = 0.27; LEV = 1.0; BONDS = symbols('TLT', 'IEF').

This variant is even more productive than the one with downside volatility.

Another proof of your concept!
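The custom ATR used in the backtest is not shown, but the textbook Average True Range it presumably builds on is:

```python
import numpy as np

def atr(high, low, close, period=14):
    """Textbook Average True Range: mean of the true range over `period` bars,
    where TR = max(H - L, |H - prev close|, |L - prev close|)."""
    prev_close = np.asarray(close)[:-1]
    h, l = np.asarray(high)[1:], np.asarray(low)[1:]
    tr = np.maximum(h - l, np.maximum(np.abs(h - prev_close),
                                      np.abs(l - prev_close)))
    return float(tr[-period:].mean())
```

Unlike close-to-close realized volatility, ATR picks up intraday ranges and overnight gaps, which may explain why it behaves differently as a timing factor.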


Now, having a 70% hit rate says that whatever you are doing in the trade selection department, you should continue to do so, since it is showing some predictive power... at the very least, an edge, either in the trade selection process or in the trade exit method used.

This to say: I like your numbers.

Vladimir, Attached please find an updated version of the notebook separated into two parts: the analysis/approaches presented in the paper and techniques/extensions proposed within the forum.

The former includes a new section Comparison with Time Series Momentum and Risk-Based Asset Allocation. Figure 6 from this section compares the performance of the top decile winners (W10), volatility parity (VP10), time series momentum (TSMOM10), and volatility-timed winners approach (W10-Timed).

The second part of the notebook incorporates the following sections:
1. Deployment of VTW in IDE
2. Application of VIX Data for Selecting and Highlighting the High/Low Volatility Regimes
3. Plotting Cumulative Returns in Different Regimes
4. Sharpe Ratio Bootstrap Test
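As a sketch of what a Sharpe-ratio bootstrap test like the one in section 4 might do (resampling daily returns with replacement; the confidence level and resample count are arbitrary choices, not the notebook's actual settings):

```python
import numpy as np

def sharpe_bootstrap(daily_returns, n_boot=5000, seed=0):
    """Bootstrap the annualized Sharpe ratio by resampling days with
    replacement; returns the point estimate and a 95% percentile interval."""
    rng = np.random.default_rng(seed)
    r = np.asarray(daily_returns)

    def sharpe(x):
        return x.mean() / x.std(ddof=1) * np.sqrt(252)

    draws = np.array([sharpe(rng.choice(r, size=r.size, replace=True))
                      for _ in range(n_boot)])
    return sharpe(r), np.percentile(draws, [2.5, 97.5])
```

If the lower bound of the interval stays above the benchmark's Sharpe, the outperformance is less likely to be a small-sample artifact.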

This week, I plan to take a closer look at your latest notebook with ATR. Thank you again for the very useful scripts.


We moved the journal club with Leo M presenting this paper to August 15 at 10am ET. If you want to join, message me on Quantopian or my email (see above).

Vladimir, Using talib, I took a look at ATR for the top winners decile with different periods (14, …,126 days). From the perspective of timing, it seems the shape of the 6-month realized volatilities provides a better signal. Let me know if I missed something.

In case I cannot participate in the journal club on August 15 at 7 am PT,
here's the notebook: "The Evolution of the Volatility-Timed Winners Approach in My Backtests".
+1 Yulia Malitskaia concept.


Yulia, thanks for participating in the journal club. I had a question. Do you think using a function that maps, say, the last 3 years of up-market volatility to a range of thresholds would work? That way the threshold might be able to adjust to current market conditions.

Hi Leo, this is what I was proposing in my previous post:
"

Hi Yulia,

nice updated review of the momentum literature, and proposal of improvement.

The threshold value is a typical example of overfitting.
For example, you can make a simple system with moving averages crossing on the SPX which makes money if you fine-tune the parameters of the moving averages on the historical data.

I also tried this kind of volatility timing before, but it is not robust out of sample. Either you found the magic number (thank you for sharing it...), or the chances of failing out of sample are at least 50%.

Adjusting it dynamically could be the holy grail of momentum investing ..

Bernie
"

Yeah, I was concerned about unexpected changes. Say the MSCI World index is less volatile than the US market; then the thresholds will have to be reversed as well. A function that dynamically adjusts the threshold to divergences from historical averages could help.
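One simple way to make the threshold track divergences from historical averages, as suggested here, would be a rolling mean-plus-k-sigma rule (entirely illustrative; the window and k are free parameters, which of course reintroduces the fitting question Bernie raises):

```python
import numpy as np

def adaptive_threshold(vol_history, window=756, k=1.0):
    """Set the volatility cutoff relative to the market's own recent history
    instead of a fixed 0.27: rolling mean of realized vol plus k sigma.
    `window` (about 3 years of trading days) and `k` are free parameters."""
    v = np.asarray(vol_history)[-window:]
    return float(v.mean() + k * v.std(ddof=1))
```

A less volatile market (e.g. a world index) then gets a proportionally lower cutoff without hand-tuning per market.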

Leo, Thank you for the interesting discussion.

Regarding the mapping function, I am thinking about a model based on temporal and cross-sectional correlation
between deciles, sectors, etc.

Yulia, thanks. A model that can predict the volatility threshold will be a powerful extension to the approach you have outlined in the paper.

Yulia,

Below you can find the notebook "VIX threshold + adaptive switch W_10 Plus Bonds", combining your (threshold) and Bernard Madopp - Leo M (adaptive) approaches.
The algo uses the implied volatility index VIX to identify the market regimes (Bernie's recommendation) and
a function that dynamically changes the threshold based on divergences from historical averages (Leo M's recommendation),
combined with the threshold function (Yulia Malitskaia) by me.

This may be useful for your research.

Can you share the results of a purely adaptive approach using the notebook?
BTW, the divergences-from-historical-averages indicator was created by Gerald Appel in the late 1970s.


Bernie,

Are you corresponding from Connecticut Correctional Center? Hope you're not imparting your Ponzi schemes here. Just joking around...nice work guys and gal!

As soon as I have time (unfortunately not as soon as I would like...) I'll try something using your notebook. Let me just comment that this is not a trivial problem; it is at the root of momentum/reversal trading strategies, and a solution has been sought for a long time. The main problem is that there is little data on which to build a statistically reliable model: if you take the last 20 years, for example, there is just a handful of periods when momentum did not work; basically, it is equivalent to predicting market downturns. Given the small statistical sample, it is easy to over-fit the threshold to do well on historical data and fail in the future. If the market has a fractal structure, the same model may also work on larger time scales.

I think in high-frequency trading it could be easier to find a reliable threshold model because the "downturns" are more frequent, i.e., the same plots which Yulia was producing would show several periods over which momentum does not work. Not surprisingly, big hedge funds make money with high frequency. Maybe on Quantopian something could be done using intraday data. I think this could be more promising, just because more data implies less chance of over-fitting.

Sorry, but now I must go back to my cell...
Bernie

Thank you Vladimir, the backtest looks great. Yulia

Bernard,

I cannot prove or disprove your concerns:
- There is little data to build a statistically reliable model.
- In high-frequency trading it could be easier to find a reliable threshold model.

Quantopian only has access to yesterday's VIX daily data, and the pipeline has been using yesterday's daily data since 2003.
We have what we have.

But I can prove that in my first implementation of the VOLATILITY-TIMED WINNERS APPROACH strategy proposed by Yulia Malitskaia,
I left unchanged all the strategy parameters, including the threshold, but used a completely different method of calculating volatility,
and it shows results identical to the VIX threshold + adaptive switch.


I do not have a notebook, just a thought: when an adaptive model is used to set the threshold, one could then show that the approach works across the equity markets of several different countries, over several decades of data, and that using the approach one ends up with higher cumulative returns and Sharpe than the equity markets in a statistically significant way.

Leo M,

The thoughts reinforced by backtest are much stronger than just a thought.
Try to backtest in IDE your thoughts but in current Quantopian environment.

Here is another backtest of mine of Yulia Malitskaia's constant-threshold approach, using a custom ATR as the volatility factor, which you seem not to have noticed during the Journal Club presentation.


I did mention in the Journal Club, while going through the notebook, that Yulia found realized volatility to work better than other measures like ATR or VIX.

Leo M

Did you mention in the Journal Club my post (Aug 1, 2019) in this thread about the 'custom ATR as volatility factor'?

What I mean is that you need a walk-forward type of backtesting to trust models, or overfitting will inevitably be present; but in the case of regime changes there are few samples with which to implement walk forward reliably.
As for using VIX, I think I was the first to propose it in this thread, when I wrote: "identifying the market regime is very challenging.
Historical volatility is relatively easy to forecast using time series analysis, but implied volatility is much more difficult. I think implied volatility is probably even more important in identifying market regimes. By market regimes I mean periods of time when momentum or reversal strategies are respectively more profitable. Based on my research, the transition between regimes is difficult to predict; it is generally a posterior consideration and as such is not easily generalized to out-of-sample data."

With smaller-resolution data you could implement walk forward more reliably. By the way, walk forward is very similar to what in machine learning is called a training/validation split.

The fact that the threshold depends on the market is clearly overfitting. As I said earlier, it is easy to optimize a system using all available historical data, as I suppose the thresholds have been obtained, but it is much more difficult to have a system which is robust to walk-forward backtesting, i.e., one that finds the thresholds "online," as the market data becomes available, not as an external input.

It is possible nevertheless that the parameter values found by Yulia will also work in the future for some time, or could be updated periodically, which is basically what walk-forward optimization does.
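Bernie's walk-forward idea can be sketched in a few lines: select the threshold on a trailing training window, apply it out-of-sample on the next block, and repeat. Everything below (the grid, window sizes, strategy rule, and synthetic data) is illustrative, not taken from the paper.

```python
import numpy as np

def walk_forward_threshold(vol, ret, grid, train=252, test=63):
    """Toy walk-forward selection of a volatility threshold.

    In each training window, pick the threshold from `grid` that
    maximizes the cumulative return of the timed strategy (hold the
    asset when vol < threshold, otherwise stay flat), then apply it
    to the following out-of-sample test window.
    """
    out = np.zeros_like(ret)
    chosen = []
    for start in range(train, len(ret), test):
        tr_vol = vol[start - train:start]
        tr_ret = ret[start - train:start]
        # in-sample objective: cumulative return of the timed strategy
        best = max(grid, key=lambda th: np.sum(tr_ret[tr_vol < th]))
        chosen.append(best)
        stop = min(start + test, len(ret))
        out[start:stop] = np.where(vol[start:stop] < best, ret[start:stop], 0.0)
    return out, chosen

# synthetic daily returns and volatility estimates, for illustration only
rng = np.random.default_rng(0)
ret = rng.normal(0.0005, 0.01, 1000)
vol = np.abs(rng.normal(0.15, 0.05, 1000))
timed, thresholds = walk_forward_threshold(vol, ret, [0.1, 0.2, 0.3])
```

Note that the first `train` days produce no out-of-sample signal, which is exactly the point Bernard raises: with few regime changes in the sample, there is little out-of-sample data left to validate on.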

Regards
Bernie

Good morning,

From my perspective, we have two options: continue to live with momentum's broken behavior, or eventually understand
the issue and address it. The volatility-timed threshold approach was proposed in the context of the latter and
aimed to transparently highlight this direction. I am now working on the next approaches triggered by the paper.

Thank you everyone for joining the webinar.

Best regards, Yulia

Here are the results of the strategy backtest using Wilder-smoothed ATR as the volatility factor (custom factor by Nadeem Ahmed).
Another proof of Yulia Malitskaia's concept.


Good morning,

Triggered by the recent discussion, I would like to share that the next Quantopian-based paper “Uncovering Momentum”
following the Aug 23 post will be released in December.

Best regards, Yulia

I look forward to Yulia Malitskaia's next Quantopian-based paper, “Uncovering Momentum”.

Gained a lot of insights on momentum from the first paper and notebook. Look forward to the "Uncovering Momentum" paper.

Thanks for sharing the code. I have a question.
Why is the sma_vol not a smooth curve?

@vidal
"Smooth" is relative. The curve is smoother than "vol", as it should be. Anyway, I found an issue with the algo. Here is the updated notebook attached. Better results.
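The "smoother but not perfectly smooth" behavior of a moving average can be seen with a toy series. The window of 20 and the synthetic volatility series below are illustrative, not from the notebook:

```python
import numpy as np

# synthetic noisy volatility series, for illustration only
rng = np.random.default_rng(1)
vol = np.abs(rng.normal(0.2, 0.05, 500))

# simple moving average over a 20-day window: each point is the mean
# of the trailing window, so the result has lower variance than the
# raw series but still fluctuates as new observations enter the window
sma_vol = np.convolve(vol, np.ones(20) / 20, mode='valid')
```

The SMA has a smaller standard deviation than the raw series, but it is not a smooth analytic curve, because every new bar shifts the window contents.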


Hi All,

First congrats for this very effective and clear paper.

I tried to modify it to do the same with French equities; I get stuck when modifying the trading universe (attached is my draft; #universe = QTradableStocksUS()). Any ideas?

Simon


Simon,

Glad to hear you liked it.

As Dan mentioned in this post, the get_pricing method currently works only with US equities, so prices were instead retrieved within the pipeline, as shown in the attached notebook. The corresponding French data, however, contain some abnormal returns. Therefore, additional universe-based filtering is important, and I can look into it next week. Best regards, Yulia


Hi @Yulia, very nice work.
Ochin khorasho ("very good" in Russian) :-)

"By market regimes I mean periods of time when momentum or reversal strategies are respectively more profitable. Based on my research the transition between regimes is difficult to predict, it is generally a posterior consideration and as such is not easily generalized to out of sample data."

Yes, I agree completely with your comment. In fact, the concept of a dominant "market regime" can be usefully extended to include the level of volatility and trend direction (up / down / flat / chaotically noisy & un-tradeable, etc.), as well as whatever is currently the best-performing (i.e., most fashionable) investing style and/or economic sector at the moment.

As @James Vila knows, this is one of my favorite topics, and I have looked at it a lot in the context of Markov chains & transition probabilities. What my own investigations (outside of Q) indicated to me is that, while some regime transitions are more probable than others, the most likely "next" regime generally tends to be the same as the current one. Maybe this sounds trivial, but it is not: it means that if we can quickly and correctly identify the current market regime (in whatever way we choose to define it, and using however many groups we think appropriate), doing so in as few time bars as possible to minimize lag, then our best bet going forward is usually to stay with the currently prevailing regime.
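Tony's observation (the most likely next regime is usually the current one) can be checked on any labeled regime sequence by estimating a first-order Markov transition matrix and looking at its diagonal. The regime labels below are synthetic, purely for illustration:

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Estimate a first-order Markov transition matrix from a
    sequence of integer regime labels (illustrative sketch)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    # normalize each row to probabilities, guarding against empty rows
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# toy persistent regime sequence: long runs of the same state
labels = [0] * 50 + [1] * 30 + [0] * 40 + [2] * 20 + [1] * 10
P = transition_matrix(labels, 3)
# persistence shows up as diagonal dominance: P[i, i] is each row's maximum
```

If the diagonal entries dominate their rows, "stay with the current regime" is the maximum-likelihood one-step forecast, which is exactly the heuristic described above.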

I don't want to lead this too far off-topic from Yulia's excellent work, so maybe we could launch a separate thread on "Market Regimes" if anyone is interested? Best wishes, TonyM.

Tony,

Appreciate the kind words. I would be very interested in seeing your preferred papers on Markov chains
in the context of market regimes. Best regards, Yulia

Hi Yulia, my own work on Markov chains / transition probabilities is all unpublished. It dealt specifically with price-only data, and the regimes I used were based on price action: trend quality, direction, strength, volatility, and "character" as defined by a range of different metrics. My comments in the post above refer particularly to findings from such price-related regimes, but I believe (although based on only limited evidence so far) that similar conclusions also hold for fundamentals-based regimes. This is something I would like to investigate further now. I'm sorry, I don't publish my own work in the academic community, but there is a lot of easily accessible material on the Internet regarding Hidden Markov Models, no doubt fueled by the success of Renaissance Technologies following Jim Simons' & Nick Patterson's original HMM work about 30 years ago. Some other good areas to look for papers are biology, speech processing, and pattern recognition, with potentially relevant ideas that have not yet been over-worked by traders. Cheers, best wishes.

Revisiting the performance of some of my algorithms for this thread during the COVID-19 market crash.

DV Constant Threshold Timed W_10 Plus Bonds
Yulia Malitskaia's magic threshold 0.27!


Thank you for the update. It seems momentum volatility timing is working as expected in the crash. Moving forward, I am eager to see the broken momentum behavior and the market recovery.

Wishing everyone to stay safe and healthy,
Yulia

Here is a little algo that uses the MTUM ETF and a realized-volatility timing trigger. Not an algo to write home about, but a little contribution to the Q community.

import quantopian.algorithm as algo
from scipy.optimize import minimize
import quantopian.optimize as opt
import numpy as np
import pandas as pd

def initialize(context):
    # Rebalance every day, 1 hour after market open.
    algo.schedule_function(
        rebalance,
        algo.date_rules.every_day(),
        algo.time_rules.market_open(hours=1),
    )

    context.mtum = sid(44542)
    context.spy = sid(8554)
    context.tlt = sid(23921)

def rebalance(context, data):
    # annualized realized volatility of SPY daily log returns over 126 days
    vol = np.log(data.history(context.spy, 'close', 126, '1d')).diff(1)[1:].std() * 252**0.5

    if vol < 0.15:
        order_target_percent(context.mtum, 0.70)
        order_target_percent(context.spy, -0.30)
        order_target_percent(context.tlt, 0.00)
    else:
        order_target_percent(context.tlt, 1.00)
        order_target_percent(context.spy, 0.00)
        order_target_percent(context.mtum, 0.00)

    record(vol=vol, th=0.15)

Thank you for sharing, Luc. It would be nice if you could replace SPY volatility with VIX. If I remember correctly, there is a normal level of VIX. Or please help me with the code to do so. I am still learning Python :-(

Luc,

Nice to see an application of the approach with the MTUM ETF. The algorithm, however, encompasses several changes relative to the original paper: order_target_percent(context.spy, -0.30), SPY vs. momentum volatility timing, and a 0.15 vs. 0.27 threshold. The attached algo illustrates a baseline example of the approach that can be further enhanced with extensions such as VIX, etc. Best regards, Yulia

import quantopian.algorithm as algo
from scipy.optimize import minimize
import quantopian.optimize as opt
import numpy as np
import pandas as pd

def initialize(context):
    # Rebalance every day, 1 hour after market open.
    algo.schedule_function(
        rebalance,
        algo.date_rules.every_day(),
        algo.time_rules.market_open(hours=1),
    )

    context.mtum = sid(44542)
    context.spy = sid(8554)
    context.tlt = sid(23921)

def rebalance(context, data):
    # annualized realized volatility of MTUM daily log returns over 126 days
    vol = np.log(data.history(context.mtum, 'close', 126, '1d')).diff(1)[1:].std() * 252**0.5

    if vol < 0.27:
        order_target_percent(context.mtum, 1.00)
        order_target_percent(context.spy, 0.00)
        order_target_percent(context.tlt, 0.00)
    else:
        order_target_percent(context.tlt, 1.00)
        order_target_percent(context.spy, 0.00)
        order_target_percent(context.mtum, 0.00)

    record(vol=vol, th=0.27)
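For readers without access to the Quantopian backtester, the volatility expression in the rebalance logic can be reproduced offline. The sketch below computes the same quantity (annualized realized volatility of daily log returns, using the sample standard deviation as pandas does) from a plain array of closes; the function name and window are illustrative:

```python
import numpy as np

def realized_vol(closes, window=126):
    """Annualized realized volatility of daily log returns over the
    trailing `window` closes -- an offline sketch of the quantity the
    algo computes with data.history(...).diff(1)[1:].std() * 252**0.5."""
    logret = np.diff(np.log(closes[-window:]))
    # ddof=1 matches the pandas Series.std() default used in the algo
    return logret.std(ddof=1) * np.sqrt(252)
```

For example, a constant price series has zero realized volatility, while any series with varying closes produces a strictly positive value to compare against the 0.27 threshold.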

The "Stocks and bonds ETF balance based on VIX" was developed 7 months ago, during the discussion in this thread,
as a version of "VIX threshold adaptive switch W_10 Plus Bonds" for a retail trader.
It, like Luc Prieur's algo, uses just 3 symbols.

STOCKS = symbols('QQQ'); BONDS = symbols('TLT', 'IEF'); TRHD = 18; LEV = 1.0;

Here is the "Stocks and bonds ETF balance based on VIX" notebook with performance metrics updated through March 27, 2020.


Yulia,

Luc Prieur's method of calculating volatility differs from that used in the paper and requires a different threshold.


Attached please find a comparison of different momentum portfolios. The first paper, “Momentum with Volatility Timing”, analyzed equally weighted winners in alignment with the seminal momentum study by Jegadeesh and Titman (1993). The second paper, “Uncovering Momentum”, considered the value-weighted variant in relation to recent papers. The methodology of the MSCI Momentum Indexes is described here. As you can see from the figure, MTUM cumulative performance can be approximated with the value-weighted W10 portfolio. The MTUM realized volatility also shares a similar pattern with the value-weighted winners, albeit at a lower level, which is likely attributable to risk-adjusted selection and weights. During this downturn, all portfolios nonetheless exceed the common threshold. Best regards, Yulia
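The distinction between the two weighting schemes can be shown with a toy one-period winners basket. All numbers below are hypothetical, chosen only to make the arithmetic visible:

```python
import numpy as np

# toy winners basket: per-stock one-period returns and market caps (hypothetical)
returns = np.array([0.04, -0.01, 0.02])
caps    = np.array([50e9, 10e9, 40e9])

# equally weighted (Jegadeesh-Titman style): each winner gets weight 1/N
equal_weighted = returns.mean()

# value-weighted: weights proportional to market capitalization
value_weighted = returns @ (caps / caps.sum())
```

Here the value-weighted return (0.027) exceeds the equal-weighted one (~0.0167) only because the largest-cap stock happened to have the highest return; in general the two variants can diverge substantially, which is why the two papers treat them separately.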


If possible, please share the Wilder-smoothed ATR custom factor code. Thanks.

This is what I have

class APR(CustomFactor):
    inputs = [USEquityPricing.close, USEquityPricing.high, USEquityPricing.low]
    window_length = 252

    def compute(self, today, assets, out, close, high, low):
        hml = high - low
        hmpc = np.abs(high - np.roll(close, 1, axis=0))
        lmpc = np.abs(low - np.roll(close, 1, axis=0))
        tr = np.maximum(hml, np.maximum(hmpc, lmpc))
        atr = np.mean(tr[1:], axis=0)  # skip the first row as it has no prior close
        apr = atr / close[-1]
        out[:] = apr
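The CustomFactor above only runs inside the Quantopian pipeline. For checking the computation offline, here is a plain-NumPy single-asset sketch of the same true-range / APR logic (the function name and test arrays are illustrative):

```python
import numpy as np

def apr(high, low, close):
    """Average price range: mean true range over the window divided by
    the latest close -- an offline, single-asset equivalent of the
    pipeline factor above (sketch)."""
    prev_close = np.roll(close, 1)
    # true range: the largest of (high - low), |high - prev close|, |low - prev close|
    tr = np.maximum(high - low,
                    np.maximum(np.abs(high - prev_close),
                               np.abs(low - prev_close)))
    return tr[1:].mean() / close[-1]  # skip the first bar (no prior close)
```

For instance, with a flat close of 100 and a constant 2-point daily range, the true range is 2 every day and the APR is 0.02, i.e., a small fraction of price rather than an annualized volatility.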

Thanks, Nadeem. To get the ratio (vol/threshold), do you do any additional transformation on the apr value? The APR value itself is much smaller than 1.