Adjusting to the New Framework (Development Thread)

In light of the excellent new guidelines, I think it would be great to start a journal thread. A brief summary of the guidelines:

1) Turnover < 20% (removing the need for a transaction costs model)

2) Slow alpha decay (total or specific returns --- depends on the objective of the strategy)

3) Broad universe coverage with passive use of the optimizer (opt.TargetWeights with constraints=[])
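
For concreteness, here is a minimal sketch of what 'passive' use of the optimizer looks like inside an algorithm. The context.pipeline_data attribute and the 'my_alpha' column are illustrative assumptions, not part of the guidelines themselves:

    import quantopian.algorithm as algo
    import quantopian.optimize as opt

    def rebalance(context, data):
        # Factor scores for the trading universe: a pandas Series indexed by
        # asset (assumed to have been stored from a Pipeline output column).
        alphas = context.pipeline_data['my_alpha']

        # Scale so absolute weights sum to 1 (dollar neutral if the scores are demeaned).
        weights = alphas / alphas.abs().sum()

        # 'Passive' use of the optimizer: TargetWeights with no constraints.
        algo.order_optimal_portfolio(
            objective=opt.TargetWeights(weights),
            constraints=[],
        )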

For me, the best pairing of tools is the performance attribution tear sheet from Pyfolio together with the new EOD holdings notebook.

I have found that a 5-year backtest is compatible with research memory constraints, and is consistent with the sample period used by other firms working in this area. It can be argued that 5 years is sufficiently long to capture statistically significant results, while short enough to reduce the risk of developing 'crowded' strategies.


To start, I picked a simple price-based strategy and employed a single-factor sort.

Nice random-looking sector and style exposures centered around the zero line.

[Notebook attached; preview unavailable]

And for the new EOD Holdings notebook:

I used factor smoothing to bring turnover to an acceptable level.
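
For illustration, here is a minimal sketch of one smoothing approach, assuming the raw daily factor scores are available as a dates-by-assets pandas DataFrame (not necessarily the exact smoothing used here):

    import pandas as pd

    def smooth_factor(raw_scores: pd.DataFrame, window: int = 5) -> pd.DataFrame:
        """Average each asset's raw factor score over the trailing `window` days.

        Smoothing slows the signal down, which lowers day-to-day turnover at
        the cost of some responsiveness.
        """
        return raw_scores.rolling(window=window, min_periods=1).mean()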

[Notebook attached; preview unavailable]

That's the introduction: start with something simple, then move to an N-factor version.

My next post will use the new FactSet Estimates database.

@Antony, the minimal Style Risk exposure tilts are incredible! Are those the 'natural' style exposures of your factor?

Thanks, @Joakim. No, I manage those myself.

@Antony, very nice, and a great contribution to understanding the new guidance!

Thanks very much, @James!

Over the last couple of days, I have written a simple two-factor Estimates model. The engineers have done a great job of making the data easy to query.

My first impression is that there is an abundance of short-term alpha in this data, which requires a different set of skills from other fundamentals-based models. Previously, when I have written fundamentals models, the problem has often been one of generating enough turnover, with typical accounting-based measures producing turnover < 4%.

Estimates signals appear to be much more dynamic, and the problem becomes one of controlling turnover and harvesting alpha beyond the initial day.
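
For reference, one common way to measure daily turnover from a history of portfolio weights (a sketch; the platform's official turnover definition may differ slightly):

    import pandas as pd

    def daily_turnover(weights: pd.DataFrame) -> pd.Series:
        """One-sided daily turnover from a dates-by-assets DataFrame of weights.

        0.5 * sum(|w_t - w_{t-1}|) counts each traded dollar once, so fully
        replacing a book with gross exposure of 1 registers as 100% turnover.
        """
        return 0.5 * weights.fillna(0).diff().abs().sum(axis=1)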

In the following example, I combined two factors, and used a simple equal-weighting scheme. I managed to run a 7-year backtest without running out of memory in research.

[Notebook attached; preview unavailable]

Under the old guidelines that would have been the end of the story, but we now have the additional challenge of creating models without excessive alpha decay and without excessive turnover (< 20% per day).

[Notebook attached; preview unavailable]

The investment team no longer runs stand-alone funded algorithms 'as is'. Instead, signals are combined using proprietary ensemble methods, risk management, and execution algorithms. There are regulatory requirements that potentially limit position sizes in individual stocks. These all combine to make the fund, in its current form, a mid-frequency fund.

With that in mind, here is my interpretation of the results contained in the second notebook above for my two-factor estimates strategy.

1. Alpha Decay
The first chart shows the Information Ratio (IR) out to fourteen trading days. This is the annualized ratio of expected returns to the standard deviation of returns. The chart shows two pieces of information --- specific and total returns. For a model to score highly in terms of uniqueness, the returns should be dominated by specific returns, i.e., those not explained by the Quantopian risk model. This is characterized by the blue and green bars being at a similar height throughout.
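
For clarity, the annualized IR described above can be computed from a daily return series roughly as follows (a sketch assuming 252 trading days per year; the notebook's exact implementation may differ):

    import numpy as np
    import pandas as pd

    def annualized_ir(daily_returns: pd.Series) -> float:
        """Annualized ratio of mean daily return to the standard deviation of
        daily returns, applied separately to the total and specific series."""
        return np.sqrt(252) * daily_returns.mean() / daily_returns.std()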

The first chart seems to be pretty good for my strategy. There is some alpha decay, but it is slow and gradual. The absolute level of the IR seems OK.

2. Exposure
The box-plot exposure chart does a great job of comparing the strategy's returns against the risk model. For my model, the sector exposures are centered around zero, and while there are some persistent style exposures, they are smaller than 10%. The two largest, momentum and short-term reversal, make sense in terms of my economic hypothesis, so I'm not too worried about them.

3. Holdings / Turnover
This is a critical chart. The IR can be decomposed into the forecasting strength of a model and its breadth. More independent bets are good, and my strategy is placing bets on approximately 1800 stocks over the 7-year sample period. The mean turnover for my model is 10%, so I don't appear to be placing very short-term bets.
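
For reference, the usual formalization of this decomposition is Grinold's fundamental law of active management, sketched below, where the information coefficient (IC) captures forecasting strength and breadth counts the number of independent bets:

    import numpy as np

    def expected_ir(ic: float, breadth: float) -> float:
        """Grinold's fundamental law (an approximation): IR ~ IC * sqrt(breadth).
        Better forecasts or more independent bets both raise the achievable IR."""
        return ic * np.sqrt(breadth)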

4. Quantiles
The five lines in this chart should not overlap. The safest way to achieve this is to use the optimizer with TargetWeights and without constraints.

Great strategy if you ask me, and great overview of the new notebook as well!

@Antony, great work; this should serve as the model template and analysis framework under the new guidance. Your analysis is very much on point and reflects your professorial style: comprehensive yet compact. I like it. Thank you, Professor Antony Jackson!

This strategy has been posted in the recent mini-contest. I wanted to re-post here with some extra thoughts.

  1. Although this strategy starts from January 2010, a 7-year backtest looks likely to be the optimal tradeoff among staying within memory limits, statistical significance, and data coverage.

  2. The alpha decay chart looks almost horizontal. This is easily achieved through factor smoothing (although it's worth pausing to consider which pieces of information need smoothing).

  3. This trades on just one concept across three simple factors based on consensus estimates. I think "less is more", and when I've done walk-forward or time-series cross-validation, I've been able to achieve Sharpe Ratios that generalize in the range of 1.5-2.0.

But I am open to fancier methods of factor combination or factor dimensionality reduction!
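
For anyone wanting to reproduce the walk-forward / time-series cross-validation step, here is a minimal sketch using scikit-learn's TimeSeriesSplit; the feature matrix and targets are purely illustrative and would come from the chosen factors and forward returns:

    import numpy as np
    from sklearn.model_selection import TimeSeriesSplit

    # Illustrative data: rows are ordered by date.
    X = np.random.randn(1000, 3)   # factor values
    y = np.random.randn(1000)      # forward returns

    tscv = TimeSeriesSplit(n_splits=5)
    for train_idx, test_idx in tscv.split(X):
        # Fit on the earlier window, evaluate on the later one; folds never
        # look into the future, which is what makes the validation walk-forward.
        X_train, y_train = X[train_idx], y[train_idx]
        X_test, y_test = X[test_idx], y[test_idx]
        # ... fit a model on (X_train, y_train) and score it on (X_test, y_test)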

[Notebook attached; preview unavailable]

This latest estimates strategy removes a lot of discretion, which I think is preferable in terms of avoiding over-fitting.

I'm pretty much at the stage where the emphasis is on choosing factors. I don't intend to use any more than 2 or 3 in any one strategy.

[Notebook attached; preview unavailable]

The objective in this post is to test a methodology that ensures the 11 sector and 5 style risks are centered around zero in the new framework.

[Notebook attached; preview unavailable]

Performance attribution tear sheet for my final submission in the Estimates mini-contest.

My main objective was to manage risk as well as possible, while using TargetWeights and constraints=[] in the optimizer.

Overall, I am pleased with the methodology, and with this foundation I intend the future focus to be on alpha research.

[Notebook attached; preview unavailable]

Today, I have completed a robust scoring scheme that addresses the weaknesses of Z-Scores and Quantiles:

  1. Z-Scores can lead to large imbalances in the number of names on the long and short side.
  2. Quantiles leave large parts of the universe with zero weights.
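
For illustration, here is one ranking-based scheme with both properties (a sketch, not necessarily the scheme described above):

    import pandas as pd

    def rank_weights(scores: pd.Series) -> pd.Series:
        """Map raw factor scores to balanced long/short weights that cover the
        whole universe: rank (robust to outliers), demean so the long and short
        sides balance, then scale to unit gross exposure."""
        ranks = scores.rank()
        centered = ranks - ranks.mean()
        return centered / centered.abs().sum()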

I have also come to the conclusion that the best strategies could have been implemented by a portfolio manager in real time. In practice, this means not cherry-picking common risks to mitigate after the event.

I have also avoided any factor smoothing so that the resulting alpha is as close to the original contained in the dataset as possible.

The following 7-year estimates-based strategy passes all the contest constraints, and has the following core statistics:

  • Specific returns 23.29%
  • Common returns -0.62%
  • Specific Sharpe Ratio 2.32
  • Turnover 6.3%

I'm particularly pleased with the low turnover number, as the strategy's returns are more likely to carry forward under a variety of market conditions.

Finally, I attach the new notebook. I am taking the view that the blue bars are the relevant ones, and are the ones that will be extracted by the investment team's signal combination and risk management techniques.

[Notebook attached; preview unavailable]

Best practices, Antony! Glad you're back here, not in temporary detention, LOL!

Ha ha! I think if you upload a notebook and quickly delete it, it goes into 'pending' mode, or similar. Seems to be OK now!

I wanted to finish off this discussion with an outline of the ideal way I would like to approach developing these strategies, and with a pair of accompanying examples.

The broad procedure is:

Initial Solution

  • In this stage, I use a 5-year window.

  • Risk management is put to one side, and the idea is to develop a strategy that has a maximum chance of generalizing out of sample.

  • Within this window, good data science is essential, which at the very least means effective cross-validation, but could also mean formal methods for factor selection, hyper-parameter tuning, etc.

Managed Solution
At this stage I control risk. I do not attempt to drive all mean exposures to zero, and I do not cherry-pick the exposures to manage. I aim for an 'optimal' level of risk management.

Recalculation
After one year, the data window slides forward one year, and the procedure is repeated.

Here is an example of an Initial Solution. First the performance attribution tear sheet:

[Notebook attached; preview unavailable]

And then the corresponding End Of Day Holdings notebook:

[Notebook attached; preview unavailable]

Very nice, Antony. In all of these, you don't use the Optimize API's risk model constraints, just TargetWeights with no constraints, right? I like the walk-forward validation method.

Hi James,

Yes, that's right: TargetWeights and constraints=[].

And for the managed version, which hopefully will follow shortly, I reduce all exposures by a multiplier k, 0 < k < 1, in order to avoid biases like staying short volatility just because that has done well recently.
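
A rough sketch of how such a uniform dampening could be implemented, assuming per-asset risk-factor loadings are available as a DataFrame (the inputs and names here are illustrative, not the exact production method):

    import numpy as np
    import pandas as pd

    def dampen_exposures(weights: pd.Series, loadings: pd.DataFrame, k: float) -> pd.Series:
        """Scale the portfolio's factor exposures by k (0 < k < 1) by removing a
        fraction (1 - k) of the weights' component in the span of the loadings.
        Every exposure is treated the same, so no cherry-picking of risks."""
        B = loadings.loc[weights.index].values               # n_assets x n_factors
        w = weights.values
        proj = B @ np.linalg.lstsq(B, w, rcond=None)[0]      # projection onto factor span
        return pd.Series(w - (1.0 - k) * proj, index=weights.index)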

Here is the performance attribution tear sheet for the Managed Solution.

All mean exposures are approximately one-third of their original values.

[Notebook attached; preview unavailable]

And here is the End Of Day Holdings notebook for the Managed Solution:

[Notebook attached; preview unavailable]

I would quite like to fill up my allocation of contest slots, take a break until the New Year, and see how these things perform live.

The next strategy retains the focus on estimates, but attempts to diversify the predominantly trend-following factors with a reversal-based factor.

The IR (specific Sharpe Ratio) is already looking promising for the 'Initial Solution', and all the style exposures, except momentum, are already < 10%.

[Notebook attached; preview unavailable]

Here is the corresponding EOD Holdings notebook for the Initial Solution.

My thoughts are that the strongest part of the alpha is concentrated in the first few days, which perhaps isn't surprising given the introduction of a reversal-type estimates factor, but the blue and green bars are already closely aligned.

[Notebook attached; preview unavailable]

And the corresponding Managed Solution (all exposures dampened by the same multiplier).

Effectively, what this technique does is keep all the bars at approximately the same height, while bringing the green bars in line with the blue bars.

[Notebook attached; preview unavailable]

This model completes my suite of estimates-based contest entries. I have added a slower signal to close the model.

[Notebook attached; preview unavailable]

The corresponding EOD Holdings notebook for this final strategy.

[Notebook attached; preview unavailable]

Just wanted to post this one, as it is a slight variant on a theme --- expanding window time-series cross validation.

I also control risk by exactly replicating the underlying portfolio.

[Notebook attached; preview unavailable]

@Antony, looks really good. The alpha decay has improved a lot and the turnover seems symmetrical! Keep up the good work!

Thanks, James. Yes, the turnover appears to follow a seasonal pattern.

Turnover follows the 63-day reporting cycle: trading peaks as new reports come out, and positions seem to be held until the next reporting cycle. The minor kinks at the bottom could just be a result of stocks falling in and out of the QTU universe.

I intend to close the thread with this post, as I've ended up where I want to be with respect to the new guidelines.

Going forward, I will no longer be managing style exposures. This has two advantages:

  1. At the fund level, it is likely to be inefficient if individual components are being risk managed.
  2. At the individual level, hedging is expensive!

To close, then, here is the notebook for an example strategy without any risk control or factor smoothing.

The blue bars are the relevant ones. Hedging would bring the green bars in line with the blue bars, but because hedging is expensive, the heights of all the bars would be lower.

[Notebook attached; preview unavailable]

@Antony, this is as raw and pure an alpha signal as you can get. At this level, meaning an individual signal that will later be combined with other individual signals in a signal combination schema, there is really no need to try to control style risks, as these may eventually cancel out in the combination phase with other, hopefully uncorrelated, signals that perhaps have different common style risk exposures. However, I will try to make a case for factor smoothing. In your example above, I get that you really don't need factor smoothing because you chose factors that are naturally slow-moving, which shows up in the symmetrical movement of your turnover: it peaks on the 63-day reporting cycle. I believe factor smoothing may be necessary had you chosen fast-moving short-term alphas that churn out IRs of 4-5 but with very high turnover. There is also a right and a wrong way to do smoothing; the wrong way will tend to overfit the model. But this we can leave for later discussion.

Lastly, I would like to stress that prospective authors should first change their mindsets given this new approach and guidance from the Q investment team. We are no longer designing an end-to-end algorithmic trading system that functions as a standalone implementation within the fund. We are now asked to generate individual signals, or, as Q likes to call them, factors, which will later be combined with others for signal combination, portfolio construction, risk management, and execution, all of which will be handled by the Q investment team at the back end. So let us understand where our task begins and where it ends. This is akin to a production line where different processes are compartmentalized. Change is always hard because old habits die hard, but it needs to be done.

Makes perfect sense to me. What I don’t quite understand is how one can get all specific Returns, and either no or negative common Returns, even though the strategy clearly has consistent exposure to several of the Risk factors. I sometimes see this in some of my strategies as well. What am I missing?

@Joakim,

If you look at Antony's last chart (bottom right), where returns per risk exposure are enumerated, you see that total common returns are slightly negative. If you add up the individual returns per common style and sector risk exposure, you will arrive at the total common returns. Specific returns are arrived at by regressing out the common returns as defined by Q. Hope this helps.

P.S. Another way to put it: say Short Term Reversal is defined by Q as -RSI(15); then your factor should not follow the returns pattern of -RSI(15), to lessen the attribution to STR.
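
P.P.S. To make the 'regressing out' idea concrete, here is a toy sketch of the decomposition using ordinary least squares (the actual risk model attribution works from holdings and factor loadings and is more involved):

    import numpy as np
    import pandas as pd

    def decompose_returns(portfolio_returns: pd.Series, factor_returns: pd.DataFrame):
        """Split daily returns into a common part (fitted values of an OLS
        regression on the risk-factor returns) and a specific residual."""
        X = factor_returns.loc[portfolio_returns.index].values
        y = portfolio_returns.values
        betas, *_ = np.linalg.lstsq(X, y, rcond=None)
        common = pd.Series(X @ betas, index=portfolio_returns.index)
        specific = portfolio_returns - common
        return common, specific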

Penny dropped, thanks @James!

In my opinion, this common and specific returns hullabaloo is more of a marketing ploy invented by the industry to distinguish that common returns are cheap while specific returns have a price (2% fee and 20% of profits). To me a return is a return is a return, no matter how you slice it. It all depends on whether your returns survive and outperform during different market conditions and regimes; consistency, in other words. You don't hear Jim Simons and Rentec talk about specific and common returns.

Thanks for the comments.

Indeed, that strategy is just a starting point, for illustration.

In my opinion, a good factor is likely to possess most of the following properties:

  • Market Neutral and / or Sector Neutral
  • Balanced number of names on the long and short side
  • Horizontal-looking alpha decay
  • Turnover < 15%
  • Uniqueness

In terms of measuring the uniqueness of a strategy / factor, the following simple method should suffice:

  1. In the far-right histogram in the middle panel, count the number of squares for specific and common volatility.
  2. Raise these numbers to the power of 2 to get specific and common variance.
  3. Uniqueness = specific variance / (specific variance + common variance)

In the 'starting point' example, there are approximately 2.5 squares of specific volatility and 1.75 squares of common volatility. This leads to a 'uniqueness' score of:

(2.5 ^ 2) / (1.75 ^ 2 + 2.5 ^ 2) = 67%
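
The same calculation in a few lines of Python, for anyone who wants to plug in their own square counts:

    def uniqueness(specific_vol_squares: float, common_vol_squares: float) -> float:
        """Specific variance as a share of total variance, using the square
        counts read off the volatility histogram as proxies for volatility."""
        specific_var = specific_vol_squares ** 2
        common_var = common_vol_squares ** 2
        return specific_var / (specific_var + common_var)

    print(round(uniqueness(2.5, 1.75), 2))  # 0.67, matching the 67% above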

I would guess a ratio of 80% is a sensible threshold, but achieved through natural diversification rather than using explicit risk control techniques (which just add unnecessary noise and costs).

During the Estimates Challenge tear sheet review, Thomas mentioned that they ideally want to see portfolio positions fit a normal distribution. However, even when I weight my portfolio by a z-score of my alpha factor, I'm not seeing the kind of distribution he was describing as ideal. Anybody have any ideas on how to accomplish the ideal distribution of portfolio weights?

This model uses a broader set of Estimates datasets than previously.

7-year backtest, with mostly specific returns.

[Notebook attached; preview unavailable]