An updated method to analyze alpha factors

We recently released a great alphalens tutorial. While that represents the perfect introduction for analyzing factors, we are also constantly evolving our thinking and analyses. In this post, I want to give people an updated but less polished way of analyzing factors. In addition, this notebook contains some updated thoughts on what constitutes a good factor and tips on how to build it that we have not shared before. Thus, if you want to increase your chances of scoring well in the contest or getting an allocation, I think this is a good resource to study.

(Notebook attached; preview unavailable.)

import empyrical as ep
import matplotlib.pyplot as plt
ep.cum_returns_final(perf_attribution).plot.barh()
plt.xlabel('cumulative returns')

Yes, very helpful indeed. It makes it a great deal more obvious and tractable. And anything that improves speed is more than welcome.

On speed, working out of US hours seems to make a difference. The code below executed within 5 to 10 minutes this morning. Last night I finally shut it down when it had not completed after an hour.

To be honest I felt rather depressed. Your tools are innovative and fascinating (to me at least), but they are often unusable in terms of the time they take to run.

It is particularly helpful for instance to see how you are calculating "specific returns".

# run_pipeline is from quantopian.research; get_pricing is a research builtin;
# make_pipeline() is defined earlier in the notebook.
from quantopian.research import run_pipeline
from alphalens.utils import get_clean_factor_and_forward_returns
from alphalens.performance import mean_information_coefficient

pipeline_output = run_pipeline(
    make_pipeline(),
    start_date='2007-01-01',
    end_date='2016-11-01'
)

pricing_data = get_pricing(
    pipeline_output.index.levels[1],
    start_date='2007-10-08',
    end_date='2017-11-01',  # *** NOTE *** pricing must extend past the pipeline end to compute forward returns
    fields='open_price'
)

factor_data = get_clean_factor_and_forward_returns(
    pipeline_output['factor_to_analyze'],
    pricing_data,
    periods=range(1, 252, 20)  # use a larger step for long look-forward periods to save time
)

mean_information_coefficient(factor_data).plot()

Incidentally, with a backtest you can shut down the web page locally on your own computer and it continues to run on the Quantopian server. So you can initiate a number of backtests, go off to do some gardening, and return later to see the results.

Notebooks do not seem to work in this way? I tried the same trick with your most interesting new notebook, came back and fired it up, but the calculations had not been performed. The notebook had not been shut down (the memory usage on your server was still the same), yet the calculations had not run.

Just an idea for you.

@Zenothestoic: Yes, that's a downside of research. The difficulty is that we know when a backtest is finished, whereas a kernel just keeps running, so potentially we would need to keep it running indefinitely. The issue with the tab closing is discussed here at length (without a solution, however): https://github.com/jupyter/notebook/issues/1647

FWIW, nothing in that NB was that slow for me, what parts are you referring to specifically?

No Thomas. Not your notebook. I was referring to the standard Alphalens notebook from which my quoted code above was taken. And thank you for the extra information. You have great tools here. I am looking forward to contributing.

In the code following the paragraph "Risk Exposure" the following error needs correcting; the erroneous and corrected lines are shown below:

    # erroneous:
    # pos = (pos / divide(pos.abs().sum())).reindex(pricing.index).ffill().shift(delay)
    # corrected:
    pos = (pos / (pos.abs().sum())).reindex(pricing.index).ffill().shift(delay)

It is correct later on in "Putting it Altogether" but I just thought I should note it.

@Zenothestoic: Thanks, should be fixed now.

Very cool, Thomas! Thanks for the awesome post.

Hi @Thomas - This is very useful. One result that I have that's a bit baffling to me currently is a plotted factor exposure range of [-15, 20] whereas I would have expected a max range of [-1,1] and preferably one of [-0.2, 0.2] as shown in the example you shared. Any reactions as to how this is possible? My annual volatility is high for specific returns (0.6), but lower on individual exposures (<0.1).

Separately, the default Cumulative Returns and Annual Volatility charts shown in the lower right are for delay=0 which I didn't find relevant for what I'm researching. In case anyone else finds it useful, you can update the delay on those charts by passing the delay parameter within factor_portfolio_returns().

portfolio_returns, portfolio_pos = factor_portfolio_returns(factor, pricing, delay=2)  

@Cem: Re exposure range: Is it possible you are not equal weighting your factor?

Also to clarify: delay=0 means you are trading into the factor when it's available, so maybe you compute it any time before close and trade into it on that same day. delay=1 then means that you have one additional day to act on the signal. I agree though, that delay=1 probably is more relevant here.
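
In case it helps others hitting the same thing, here is roughly what I mean by equal weighting, as a sketch (the function name and the equal_weight flag are just for illustration, not the notebook's code; factor is a pd.Series indexed by asset for a single day):

def factor_to_weights(factor, equal_weight=True):
    """Turn a raw factor into dollar-neutral weights with gross exposure of 1."""
    demeaned = factor - factor.mean()                                 # center the cross-section
    if equal_weight:
        demeaned = demeaned.apply(lambda x: 1.0 if x > 0 else -1.0)   # keep only the sign
        demeaned = demeaned - demeaned.mean()                         # re-center in case longs != shorts
    return demeaned / demeaned.abs().sum()                            # abs weights sum to 1

If you skip the normalization and pass raw factor values as weights, the exposures can blow up to ranges like the one you are seeing.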

Thank you Thomas for the notebook. I definitely needed this notebook to better determine the alpha of my factors and also save a couple steps in development. By the way, thumbs up on your blog post about copulas.

Glad you find it useful. Feel free to post your tearsheet here if you have an interesting factor that looks promising. After running the NB, you can just delete the cells in the NB that have your factor logic and only leave the tearsheet output to not reveal your IP.

Hi Thomas,

Thank you for this awesome notebook! I thought I'd give it a try with a simple, yet effective factor: fcf_yield from MorningStar (effective up until March 2016, that is; not so much thereafter).

I wanted to first see when Mean IC and Specific Returns 'peak' for this factor, and then possibly apply a SMA on the factor based on this (as per your comments in the NB). However, as you can see in the attached, both Mean IC and Specific Returns just keep rising...

I've most likely made a mistake somewhere, though I didn't really change much from your original NB. Or could it be related to the 'data overlapping' problem in AL that Michael Matthews mentioned in another thread a while back?

(Notebook attached; preview unavailable.)

@Joakim: Great, thanks for posting this. I don't think you did anything wrong here, code-wise, and I've definitely seen this before. I don't know the exact cause but can come up with two hypotheses:
1. The factor is short volatility, which is something that has been working (sort of) well for a long time. As such, if you just keep betting on something that keeps going up, you will have this slow stacking.
2. The IC overall is quite low; as such, it does not take much to keep it modestly increasing.

I think it could be a combination of these two but happy to hear other thoughts.

Also, the factor is daily and seems to have a long alpha horizon (if we were to believe this, which I don't, but for the sake of argument). In that case you'll want to either subsample, or better, smooth the signal a bit by taking the average, like I did originally. When you're just developing the factor it's fine to do it like you do here; I'm just saying that could be a good next step if you wanted to develop this idea further.
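
To make the smoothing concrete, something along these lines would do it on the pipeline output (a sketch; I'm reusing the 'factor_to_analyze' column name from the earlier example and an arbitrary 5-day window, so adjust both to your factor):

smoothed_factor = (
    pipeline_output['factor_to_analyze']   # MultiIndex (date, asset) Series from run_pipeline
    .unstack()                             # dates x assets
    .rolling(5).mean()                     # 5-day simple moving average per asset
    .stack()                               # back to a MultiIndex Series for alphalens
)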

Here is a factor anomaly that shows up in several research papers: investment to asset ratio.

(Notebook attached; preview unavailable.)

Thanks Thomas, very helpful!

the factor is daily and seems to have a long alpha horizon

Using your NB, I wanted to try to find out the best rebalance period for the factor, and/or what window_length to use when smoothing the factor using SMA. How would I go about doing this using your NB?

or better, smooth the signal a bit by taking the average, like I did originally.

Yes, this was what I was planning to do. How can I determine a reasonable period to use in the smoothing though (without risking too much overfitting)? The 'price' portion of the yield updates daily, and is noisy, so should I smooth by 63 days (FCF should update quarterly)?

I tried smoothing over 65 days in the attached, and it doesn't appear to make any difference.

Note: I'm not really pursuing this specific factor, just using it as an example when trying to learn how to use your NB. Also, the full NB didn't complete due to the memory limit and kernel restarting.

(Notebook attached; preview unavailable.)

Yeah, you probably don't need any smoothing as the factor only changes once a quarter. In that case, however, you should definitely sub-sample the factor to that frequency to avoid overlapping windows when computing the IC curve; this might change things.
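
The sub-sampling itself can be as simple as this (a sketch; again reusing the 'factor_to_analyze' column name, and 63 trading days is only an approximation of a quarter):

wide = pipeline_output['factor_to_analyze'].unstack()   # dates x assets
quarterly_factor = wide.iloc[::63].stack()               # keep every ~63rd trading day

You can then pass quarterly_factor to get_clean_factor_and_forward_returns as before.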

Here is a new iteration of this tearsheet. Instead of cumulative IC it now just displays daily IC, which makes it easier to see which horizon the signal is predictive for. The code is also refactored, so you should use this version.

This is also what I use to evaluate community algos, where I just input the EOD holdings of a backtest. I think it's the best tool out there currently to analyze your factors/portfolios.

(Notebook attached; preview unavailable.)

I got an error when running the notebook. It seems that the 'alpha_vertex' module is not available now. But I don't know how to fix it.

SD F: That is perplexing, it's working fine for me. Can you post the error you're getting?

It seems like that was indeed removed. I'll try to use a different factor. In general though, you can still use the functionality on your own factor.

Thank you for this, Thomas. Really like the sector neutral function and IC Decay. I've been noticing a pattern where a lot of the strategies being posted perform well on a longer time horizon (3-7 years), but in the short term strategies are much more variable in their results. Is there a reason for that?

Short-term movements are driven by noise; most of these factors have been demonstrated to work in the literature over longer horizons like 1-3 months. Some factors do show efficacy at much smaller time-frames (google 1-day reversal); however, you have to balance trading costs, as such factors are high-turnover, and hence they may show academic/technical efficacy but be too costly to implement.

Ben Graham once said, “In the short run the market is a voting machine, but in the long run it is a weighing machine.”

Best,
Georgios Tyrakis

Explain the exceptional performance of funds such as RenTech then...

Interesting.

I updated the top post and replaced it with the Moneyflow factor. Unfortunately that one is not as illustrative but at least it runs everywhere. The text is adapted to reflect that.

Has anyone found an easy way to directly feed results from bt = get_backtest('xxxxx') into this notebook?

Andy: Excellent question, here is an example NB.

(Notebook attached; preview unavailable.)
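
In case the preview doesn't load, the gist is roughly the following (a sketch from memory, not the notebook's exact code; the column names in bt.positions may differ, so check bt.positions.head() first):

bt = get_backtest('xxxxx')   # paste your backtest id here

# One row per (day, position); build a dates x assets matrix of position values.
pos = bt.positions.copy()
pos['value'] = pos['amount'] * pos['last_sale_price']
holdings = pos.pivot_table(index=pos.index, columns='sid', values='value')

# Normalize to daily weights with gross exposure of 1, ready for the analysis above.
weights = holdings.div(holdings.abs().sum(axis=1), axis=0).fillna(0)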

Fantastic, thank you Thomas!

Second that! Fantastic!

@Thomas,
Thanks for your work on this...it's great!

I'm having trouble understanding the meaning and import of the first graph: IR vs. Delay.
In particular, you make the statement:

Instead of cumulative IC it now just displays daily IC which makes it easier to see which horizon the signal is predictive for.

It would help me if you could give a short analysis of that graph for my included Alpha Analysis, which is for an algo that I'm looking at and believe I understand. From what I can see, there is a decay of the IC at 4 and 8 days...is that right?...and what does it mean wrt the factor...good...bad...meh...?

Thanks for any help!
alan

(Notebook attached; preview unavailable.)

Thanks for asking, Alan, that's a critical question.

The alpha-delay plot is the most important one. The key thing to realize is that you might not be able (or rather not want) to trade into a factor-portfolio immediately on the first day, the primary reason being transaction costs / slippage. Say for example you have a factor with an IR of 10 on the first day and then of -2 on the day after. So you better trade into the target portfolio extremely quickly (ramping up huge costs). But then you also have to trade out of it extremely quickly or you're bitten by it the next day. That example is a bit extreme but the same mechanism is at play at longer time-scales.

Imagine you have a factor that turns over 50% a day but you set a turn-over constraint at 15% a day. What will happen is that you'll be constantly trying to catch up to the factor. As you're doing so, those older factor-portfolios that you were trying to target e.g. 5 days ago will also still linger around in the portfolio, because you can't easily trade out of those existing positions. So ultimately your portfolio will follow the factor to some degree with some lag.

That is why it's important to look at what happens if you were to trade your factor 1, 3, 5, 20 days delayed, because you will. Your specific factor looks like it has pretty stable alpha over time; those wobbles are just noise and I wouldn't read too much into them. I have seen many other examples where it's super high on the first day and then sharply drops to zero by day 3; that pattern is more worrisome. So that looks like a pretty good factor to me. From the exposures I would think it's fairly close to a standard reversal factor.
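
To make the mechanics concrete, this is essentially what the delay curve measures (a simplified sketch, not the notebook's actual implementation; weights and asset_returns are assumed to be dates x assets DataFrames of target weights and daily asset returns):

import numpy as np
import pandas as pd

def delayed_portfolio_returns(weights, asset_returns, delay):
    """Daily returns if you trade into the factor portfolio `delay` days late."""
    return (weights.shift(delay) * asset_returns).sum(axis=1)

def ir_by_delay(weights, asset_returns, delays=range(0, 21)):
    """Annualized IR of the delayed portfolios, i.e. the alpha-delay curve."""
    irs = {}
    for d in delays:
        r = delayed_portfolio_returns(weights, asset_returns, d)
        irs[d] = np.sqrt(252) * r.mean() / r.std()
    return pd.Series(irs)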

Anyway, from that understanding above, another related insight can be derived:
1. You shouldn't sub-sample your factor (e.g. trading it weekly or monthly) to achieve lower turn-over. As an example, imagine you emit a new signal on the first day of the month, but maybe we only trade on the 5th of that month. We would start trading into a 5-day-old portfolio; we'd rather trade into something more recent. Instead, you can apply a moving average to your factor (e.g. 5-day); that way you slow the factor down but it will still update daily. Although you actually don't need to fret too much about turn-over: as long as it has good alpha over several days, we will be able to capture it.
2. You shouldn't tweak the trading time. If your algorithm is sensitive to trading times, it's indicative of short-term alpha or some noise you're trying to overfit to. My advice is to set it to always trade as close to the close as possible and never change it.

Finally, you can probably tell how our own thinking is evolving here (and I plan to do a larger post on this too). Given everything I've written above, I view this type of analysis as absolutely critical. A user could easily work on an amazing looking backtest with a Sharpe of 6 without realizing that it's super short-term and actually not interesting for our current trade horizon. My advice is to run this analysis as the very first and main thing when working on a factor/algo.

Hi @Thomas,

Does this tearsheet include transaction costs?

Thanks!

@Anthony: No, and I don't think it's really something to worry about. Estimating transaction costs accurately in a backtest is quite tricky; we spent a bunch of time coding up various models a while back, but the results were never that satisfying.

Having said that, for me the better reason not to worry about it is separation of concerns. At this level of the workflow, the focus should be on alpha and its decay profile, uniqueness (which is where the risk model can help), and turnover. The next level down is portfolio construction, where the focus is on combination of factors, risk, volatility, and controlling turnover more tightly. Then at the execution level you really care about transaction costs.

Maybe that is what you meant, though. I definitely agree that turnover is currently missing; not quite sure where to place that yet.

Thanks, @Thomas. I think that makes sense. I switched off transaction costs when building my last model (adding them on only when it was complete), and will probably trade a few minutes from the close going forward.

Super helpful, thank you!

My advice is to set it to always trade as close to the close as possible and never change it.

So the previous guidance to rebalance 1 hour after open should be forgotten? (Seems like things like rebalance time would be best handled behind the scenes on the execution algo side of things then, no?) Is there no risk of unfilled orders causing under/over-leverage when rebalancing near to the close?

you can probably tell how our own thinking is evolving here (and I plan to do a larger post on this too)

I look forward to the larger post.

Here's the notebook for a momentum strategy I have. This tearsheet looks flattering, except the obvious Achilles heel is the returns volatility -- too high I'd assume to be considered for an allocation? Or how has Quantopian's thinking evolved on that front?

(Notebook attached; preview unavailable.)

So the previous guidance to rebalance 1 hour after open should be forgotten? (Seems like things like rebalance time would be best handled behind the scenes on the execution algo side of things then, no?) Is there no risk of unfilled orders causing under/over-leverage when rebalancing near to the close?

Yes, can you let me know where that text is? The reason is that we are now focused on your EOD holdings, and evaluating your algorithm using those (using the tearsheet I posted). In order to make your backtest, which still uses intra-day trading, give you results closest to what you would get from the EOD analysis, trade at the close.
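
For reference, scheduling the rebalance right before the close looks something like this in an algorithm (a sketch; the 5-minute buffer is just an example, and in the IDE schedule_function, date_rules and time_rules are built in):

def initialize(context):
    schedule_function(
        rebalance,
        date_rules.every_day(),
        time_rules.market_close(minutes=5),   # trade as close to the close as fills allow
    )

def rebalance(context, data):
    # ordering logic (e.g. order_optimal_portfolio) goes here
    pass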

Yes, can you let me know where that text is?

I can't remember. I thought it was maybe in the tearsheet reviews.

What do the four different lines in the specific returns and total returns charts represent?

Here is a small update: I added number of holdings and turnover, which are important things to track. Feedback welcome.
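
For anyone curious, the two metrics boil down to something like this, computed from a dates x assets weights matrix (a sketch, not necessarily how the notebook computes them; the name `weights` is mine):

filled = weights.fillna(0)
n_holdings = (filled.abs() > 0).sum(axis=1)        # number of names held each day
turnover = filled.diff().abs().sum(axis=1) / 2.0    # one-sided daily turnover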

@Viridian: Those are the cumulative returns of the signal delayed by 1 to 4 days.

(Notebook attached; preview unavailable.)

Nice. The + pd.Timedelta(days=30) code was giving me errors because my backtest ends within the past 30 days, but I removed that bit and it worked fine.

Yet another update, now also including the percentiles of your holdings. This is useful to make sure you are not equal-weighting.
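
The idea is roughly this (a sketch, computed from the same dates x assets weights matrix as above): an equal-weighted book shows up as the percentile lines collapsing onto each other.

weight_percentiles = weights.abs().quantile([0.25, 0.5, 0.75, 1.0], axis=1).T
weight_percentiles.plot()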

(Notebook attached; preview unavailable.)

Hi Thomas,

Had to make these changes, as suggested by @Viridian Hawk, to get the notebook to work on a recent backtest ID (end_date 2019-09-02).

# Load risk factor loadings and returns  
#factor_loadings = get_factor_loadings(assets, start, end + pd.Timedelta(days=30))  
#factor_returns = get_factor_returns(start, end + pd.Timedelta(days=30))  
factor_loadings = get_factor_loadings(assets, start, end)  
factor_returns = get_factor_returns(start, end)  

Perhaps an adaptive pd.Timedelta that only adds the Timedelta if it is an old backtest might work better than my version.
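
Something along these lines might do the adaptive version (an untested sketch; it assumes `end` is a tz-aware UTC timestamp like the one the notebook already uses):

import pandas as pd

# Cap the padded end date so it never reaches into the future.
padded_end = min(end + pd.Timedelta(days=30),
                 pd.Timestamp.now(tz='UTC').normalize() - pd.Timedelta(days=1))

factor_loadings = get_factor_loadings(assets, start, padded_end)
factor_returns = get_factor_returns(start, padded_end)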

Thanks,
Leo

@Thomas,
While working on the mini-contest, I noticed that I'm confused by the definitions of: "total_returns", "specific_returns" and "common_returns".
From what the code tells me, they are 1D-14D returns, yet I can't correlate that to the full total cumulative returns of the portfolio, as it appears in the typical IDE run as the number in the upper right corner.
My example has 60% cum returns over 5 years, yet the "Total Returns" in your Factor Analysis notebook is ~20%.

Are these ("total_returns", "specific_returns" and "common_returns") in your "An updated method to analyze alpha factors" absolute or relative to the factor being analyzed?

Thanks!
alan

@Alan As I understand it, the 1-14D returns represent what would happen if your strategy were delayed by 1-14 days respectively (using EOD close prices). So if your rebalance isn't set to run right before close, the difference between the IDE total returns and the 1D-delayed total returns could be attributed to alpha decay that occurs between your rebalance and market close. Could that be it?

Viridian Hawk explained it well; that would be where I think the difference should come from. There is also a 1-day delay built in, so there is no direct match between the two.