Live Tearsheet Review (Updated 1.14.19)

Hi all,

This is a new thread to post your tearsheets, as the original tearsheet feedback thread has become too long. If you plan to participate in the upcoming live tearsheet review with Dr. Jess Stauth on Thursday, January 24th at 2:00 pm ET, please submit your tearsheet below by 5pm ET on January 18th. Don't forget to register for the webinar.

If you have already submitted your tearsheet to the old thread, you do not need to re-submit; however, we recommend that all discussion and feedback be posted to this thread going forward. For instructions on how to create a tearsheet, see this tutorial lesson.

If you want to submit your tearsheet but cannot make the live webinar, please note that it will be recorded and made available for viewing on the Quantopian Channel.

Let us know if you have any questions. Thanks!

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.


Hi Jess,

Attached is my fundamentals-based algo, built on an alpha factor with the following IC statistics according to Alphalens. I was excited to see these values, as they suggested I might have actually found something meaningful.
[Alphalens "Information Analysis" table (IC statistics) not shown]

However, the returns were not as exciting as I'd hoped. I'm worried that the algo is extremely cyclical (given how fundamentals are reported), as I have short periods of gains and then longer periods of more or less steady holding. How can I use the tearsheet to determine the cyclicality of my algo? Is there a way to "break" this cyclicality to give it more consistent returns?

Thanks!

[Notebook preview unavailable]

Hi Jess,

Here is my latest result. It combines proven technical and fundamental factors with proper risk management and portfolio construction.

Thank you
L

[Notebook preview unavailable]

Hi all,

Please note that the deadline to submit your tearsheet has been moved up to 5pm ET on January 18th so Jess has enough time to complete her analysis. Thanks for understanding!

Fundamental-based algo emphasizing cash-rich stocks with high liquidity. 900-plus holdings version. I'm still working on the turnover issue.

[Notebook preview unavailable]

Same algo as above except with a reduced number of holdings.

[Notebook preview unavailable]

Simplified combination of alpha factors

Capital: $10,000,000
Start: 12/01/2016
End: 01/11/2019

Quantopian default slippage and commission models.
Passes all Quantopian contest constraints.

I use as few factors as possible across four categories: fundamentals, intraday momentum, momentum, and volatility.
The objective function and constraints are focused on low transaction costs and on returns far exceeding the current 2-year CD rate, to avoid the trap of assuming zero borrowing cost in real trading.
To prevent over-fitting, I use non-parametric factors or factors with a small number of parameters.
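For readers new to the platform, here is a rough sketch of what an objective-plus-constraints setup of this general kind might look like with Quantopian's Optimize API. The objective, the choice of constraints, and the bounds are placeholders for illustration, not the poster's actual settings.

```python
import quantopian.algorithm as algo
import quantopian.optimize as opt

def rebalance(context, data):
    # context.alpha is assumed to be a pandas Series of combined factor
    # scores indexed by asset (a stand-in for the poster's actual signal).
    objective = opt.MaximizeAlpha(context.alpha)

    constraints = [
        opt.MaxGrossExposure(1.0),   # stay fully invested without leverage
        opt.DollarNeutral(),         # keep long and short books roughly equal
        # Small per-position bounds spread the book out and help keep
        # per-name transaction costs modest (placeholder values).
        opt.PositionConcentration.with_equal_bounds(-0.01, 0.01),
    ]

    algo.order_optimal_portfolio(objective, constraints)
```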

[Notebook preview unavailable]

Jan 2015 - Jan 2018
Two FactSet Fundamentals factors only
17.1% annual return
Backtest shows excellent performance from Jan 2008 to Jan 2018

[Notebook preview unavailable]

Hi Paige and Jess,

Thanks for the new thread and for doing another tearsheet review webinar!

Please use the attached tearsheet from me for the review webinar, instead of the one I attached in the old thread. I've briefly outlined in this post how I believe this one to be less overfit. This tearsheet includes both my training period and part of the periods I used for OOS testing. It's the longest backtest I'm able to create a full tearsheet from without maxing out memory and having the kernel restart. I've now run out of data for OOS testing, so I thought I might as well run as long a backtest as possible for the review, and then move on to a new strategy with a different economic hypothesis and different factors.

I'd especially be interested in critical feedback, e.g. a downward-trending Sharpe, likely overfitting, overly long drawdown periods, positive skew of the monthly returns distribution, poor training and testing methodology, etc.

Name:
Warren Buffett on the Move 2.0

Model Setup:
$10MM starting capital with Quantopian's default slippage and commission cost model (none specified).

Economic Hypothesis:
Buying 'great companies at reasonable prices' when they are trending higher. Shorting the opposite.

[Notebook preview unavailable]

This is quite impressive Joakim!

If you don't mind, I'm curious about your approach to minimizing common returns: did you design your alpha signals specifically to cancel out or minimize exposure to known risk factors, or do you have the optimizer take out the common returns by setting aggressive limits on it? I can see pros and cons to either.

@Jess: Would also welcome any thoughts from you on that question.

@Joakim belongs to the elite team. =)

@Bala has the simplest setup and the fewest factors. I keep getting pipeline timeouts.

@Andy, thanks. I try to do both. First without constraining, then squeezing the last out in the optimiser. Easier said than done sometimes, and I’m not always successful. I’d be very interested in hearing @Jess’ thoughts on this too!
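As an illustration of the first approach Joakim mentions (shaping the signal before the optimizer sees it), the following is a minimal pandas sketch, with hypothetical values and column names, that demeans and standardizes an alpha factor within each sector; any residual common exposure would then be squeezed out with optimizer constraints.

```python
import pandas as pd

# Hypothetical daily cross-section: one row per stock with an alpha score
# and a sector label. Values and column names are illustrative only.
cross_section = pd.DataFrame({
    'alpha':  [0.8, 0.2, -0.5, 1.1, -0.9, 0.3],
    'sector': ['Energy', 'Energy', 'Tech', 'Tech', 'Health', 'Health'],
})

# Demean within each sector so the signal carries no net sector bet,
# then standardize so each sector contributes on a comparable scale.
grouped = cross_section.groupby('sector')['alpha']
cross_section['alpha_neutral'] = (
    (cross_section['alpha'] - grouped.transform('mean')) / grouped.transform('std')
)
print(cross_section)
```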

@Leo C. Thank you, but honestly I’m just a jack of all trades. I continue to learn so much from the Q community, which I’m very grateful for. :)

@Joakim, as always, great work! The trajectory of returns and the tightness of specific-to-total returns are just amazing. Time will tell... all the best, MARTY, or should I now refer to you as Warren Clenow, hehehe!

Thanks MAGIC! Yours is quite impressive as well, as is usually the case. Quite a number of very impressive strategies this time, actually. Seems like we're all improving. Or just getting better at overfitting... Time will indeed tell. :)

@Joakim, agreed on the question of whether the authors are genuinely improving or just getting better at overfitting! Given the framework of the Optimize API and its corresponding constraints and required thresholds, it is really very difficult to gauge overfitting. From the examples I've seen so far, the majority are doing ranking, z-scoring, and/or weighting of factors at a set frequency over a specific time frame.

If you "train" on the in-sample data from 2008 to 2015 and designate 2016 to present as your out-of-sample, the results are, for the most part, the SAME as if you did the reverse (i.e., 2016 to present as in-sample and 2008-2015 as OOS). If you run the whole dataset from 2008 to present, the results will be consistent with the results of both the IS and OOS runs. I believe the reason for this lies in the Optimize API, which runs at the set frequency; if daily, the weighting of the stock selection is re-fitted on a daily basis. This is in contrast to AI/ML-based algorithms, where the norm is to freeze and save the best weights from the in-sample dataset and then apply them to the OOS dataset to test how well the model generalizes to data it has not seen before.

I have therefore concluded that under the Q framework, all backtests are in-sample, and the only way to test the accuracy of the model is to go live. But I could be wrong :-))
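For contrast with the daily re-fitting described above, here is a minimal sketch of the train/freeze/apply workflow mentioned for ML-style models, using pandas and scikit-learn with hypothetical data and column names (an illustration, not anyone's actual algorithm).

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical panel: one row per (date, stock) with factor scores and the
# next-period forward return. All values and column names are illustrative.
panel = pd.DataFrame({
    'date':       pd.to_datetime(['2014-01-02', '2014-07-01', '2015-06-01',
                                  '2016-03-01', '2017-09-01', '2018-02-01']),
    'value':      [0.8, -0.3, 0.5, -0.2, 0.9, -0.6],
    'momentum':   [0.1, 0.7, -0.4, 0.3, -0.2, 0.5],
    'fwd_return': [0.02, 0.01, -0.01, 0.00, 0.03, -0.02],
})
factors = ['value', 'momentum']

in_sample  = panel[panel['date'] <  '2016-01-01']   # "training" window
out_sample = panel[panel['date'] >= '2016-01-01']   # untouched OOS window

# Fit factor weights on the in-sample window only, then freeze them.
model = LinearRegression().fit(in_sample[factors], in_sample['fwd_return'])

# Apply the frozen weights to the OOS window: no re-fitting happens here,
# unlike a pipeline that re-ranks and re-weights at every rebalance.
oos_scores = model.predict(out_sample[factors])
print('Frozen weights:', dict(zip(factors, model.coef_)))
print('OOS scores:', oos_scores)
```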

Hi

Not sure if you permit more than one submission. If not, review either my previous one or this one, your choice.
This one is already in the contest and therefore has more recent performance data.
It is based on two FactSet fundamentals plus one mean-reversion factor. Thanks.

[Notebook preview unavailable]

{Placeholder for my tearsheet review} until I have access to Notebook after work.

Tear sheet for my algorithm. Two fundamental factors, volatility and momentum

[Notebook preview unavailable]

@James, yes, the real test is to go live. We are playing a compounded return game, and as such we should always consider what it implies. You design a trading strategy and it has a modus operandi, a signature. It does not matter what it is, view it as simply designed to do this and that. From the stock selection to the actual trading rules all is specified except for the real gyrations of actual price movements. The trading strategy will react according to its programming. The strategy itself is what really matters and will make a difference.

You can view a trading strategy, in matrix notation, from start to finish as: \(\int_0^T (H \cdot \Delta P)\). It will operate from \(t=0\) to termination \(t=T\). A strategy's outcome could also be summarized as \(F_0 \cdot (1 + \bar g)^t\). If you add some delay \(\tau\), you will be shifting the final outcome and produce something like \(F_0 \cdot (1 + \bar g)^{t - \tau}\). It is the same as if you viewed the time horizon as: \(\int_0^T (H \cdot \Delta P) = \int_0^{T-\tau} (H \cdot \Delta P) + \int_{T-\tau}^T (H \cdot \Delta P)\).

What is being thrown away in an OOS or a walk-forward test is not the immediate future of a trading strategy; it is its future potential, which can be valued at: \(F_0 \cdot (1 + \bar g)^{t} - F_0 \cdot (1 + \bar g)^{t - \tau}\). That number might appear trivial in a formula, so to give it more meaning, here it is with numbers: \(\$10,000,000 \cdot (1 + 0.20)^{25} - \$10,000,000 \cdot (1 + 0.20)^{25 - 2} = \$291,488,440\). That is the cost of a 2-year delay. The cost of the delay calculated at the start rather than at the end is more modest: \(\$4,400,000\). Nevertheless, the real opportunity cost is the one at the end. It gets worse if you put in more years, a higher CAGR, or more cash on the table.
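For anyone who wants to reproduce the arithmetic, a few lines of Python give the same figures (a plain restatement of the formula above, not new analysis):

```python
# Opportunity cost of delaying a compounding strategy, as in the formula above.
F0, g, T, tau = 10_000_000, 0.20, 25, 2

full_run = F0 * (1 + g) ** T          # ~ $953,962,166 potential outcome
delayed  = F0 * (1 + g) ** (T - tau)  # ~ $662,473,726 if started 2 years late

print("Cost of the delay at the end:   ${:,.0f}".format(full_run - delayed))      # ~ $291,488,440
print("Cost of the delay at the start: ${:,.0f}".format(F0 * ((1 + g)**tau - 1))) # ~ $4,400,000
```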

This would suggest going live as soon as you possibly can, and therefore, all the testing should be done prior to the launch date. It would also suggest making sure all testing is done under market conditions similar to those the trading strategy might encounter, so that it is ready to handle whatever will be thrown at it.

Naturally, if you do not design your trading strategy to last that long, then all this will not matter much, since the strategy might simply go kaput. And in such a case, what would have been thrown away is the strategy's potential outcome: \(\$953,962,166\). So, my suggestion is: do the best you can and as fast as you can. Delays are indeed costly.

I know it's late, but I just thought I would throw my latest entry into the thread for any type of feedback.

Model Setup:
$10MM starting capital with Quantopian's default slippage and commission cost model (none specified).

Hypothesis: A mix of a few technical factors and several fundamental factors, with a focus on trending volumes and low volatility.

[Notebook preview unavailable]

Here's another version of the same algo above; however, this one was designed to focus specifically on sector neutrality throughout. I need to work on the turnover between 2014 and 2016-2018, where it suddenly drops from a steady ~6% range to as low as 2%, but overall the results are not terrible. I could increase the minimum turnover to 20% just to meet the contest constraint of at least 5%, but that's not the right way to handle it. Some more work to do.

[Notebook preview unavailable]

Hi! Is this going to be a recurring event? Thanks for the great content :)

Is there a link to the archived tearsheet review webinar? I wasn't able to catch it live!

Hi Kyle,

Yes, the webinar was recorded. We hope to have it up on our Quantopian YouTube channel by next week. I'll also post the link here when it is ready for viewing. Sorry for the delay, and thanks for your patience!

I just watched the webinar on YouTube. Thanks, I enjoyed it.

It sounded like there was some confusion at the end about sector concentration. At least it was confusing for me. I thought the 25% sector concentration constraint was on net exposure, not gross exposure. So if you're 50% long energy and 50% short energy, even though you're 100% gross exposed to energy, your net exposure to energy is 0%. Isn't that how sector concentration is calculated?

Speaking of that, I'd like to see gross risk factor exposure graphs in addition to the current net exposure graphs in our tearsheets. Currently we can only see the bias, but not the actual concentration. I think it'd be a useful addition to help us visualize what our algorithms are doing.
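To make the net-versus-gross distinction concrete, here is a small pandas sketch that computes both from end-of-day position weights; the positions and sector labels are hypothetical.

```python
import pandas as pd

# Hypothetical end-of-day position weights (fraction of portfolio value)
# with sector labels. Positive = long, negative = short.
positions = pd.DataFrame({
    'weight': [0.25, -0.25, 0.30, -0.20, 0.20, -0.30],
    'sector': ['Energy', 'Energy', 'Tech', 'Tech', 'Health', 'Health'],
})

by_sector = positions.groupby('sector')['weight']
net_exposure = by_sector.sum()                              # longs and shorts offset (Energy: 0.0)
gross_exposure = by_sector.apply(lambda w: w.abs().sum())   # they add up instead (Energy: 0.5)

print(pd.DataFrame({'net': net_exposure, 'gross': gross_exposure}))
```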

Hi Viridian,

It's referring to the individual sector beta in the Quantopian Risk Model (11 sector betas, 5 common risk betas).

Nothing to do with net dollar exposure in the sector or dollar sector allocation, although those things will indirectly influence it.

You can have 100% gross exposure to a sector and still maintain zero sector beta/"exposure", so I didn't really understand that bit at the end of the webinar. Seems like it shouldn't be an issue?

Btw Antony, sector exposure in Quantopian does not appear to be calculated as beta-to-sector. You can do a test by selling all your positions at close. Your "exposure" will be 0, even though your beta to the sector will be something else. Risk factor exposure as far as the contest is concerned appears to be calculated based on your end-of-day positions. Not beta.

Not sure, to be honest, Viridian. I thought they used the Fama-MacBeth procedure, which is a two-step regression.

Also, there were some comments in the webinar about negative cumulative returns to the energy and healthcare sectors. Energy is the most volatile of the 11 sectors, and I wondered if it has something to do with that, and not necessarily that people are consistently lousy at picking energy stocks?

Clarification from Quantopian on the matter would be nice.

I was thinking more about Dr. Stauth's comments on algorithms having difficulty with energy and healthcare stocks. My impression is that whereas the other sectors correlate more closely with the overall economy (jobs, consumer spending, trade, etc.), energy and healthcare have some unique drivers that set them apart.

The energy sector carries massive geopolitical risk and sensitivity to certain commodity (oil/gas) prices that are themselves heavily manipulated. If these factors alone don't make the sector entirely unpredictable, they at least give it a different calculus.

Healthcare is also unique. Make-or-break events affect both individual stocks (FDA approvals) and possibly the sector as a whole (legislation/regulation). Valuations are often extremely speculative and based on information we don't have access to through fundamentals (for example, the addressable market size for a new drug). There's also arguably a drug-pricing bubble further unhinging valuations. These stocks can drop 80% or pop 400% overnight, which is not common in other sectors. The catalysts behind these massive price movements are likely coin flips, and any detectable market inefficiencies in the sector are likely to pale in comparison to the idiosyncratic volatility risks.

Not an expert. Far from it. But that's my sense of what we're up against.

I was thinking more about Dr. Stauth's comments on algorithms having difficulty with energy and healthcare stocks.

I too had this question some time back, as to why the energy and healthcare sectors (and to some extent even utilities, if I remember correctly) seem to perform worse than the others. This pattern was seen across value-based as well as technical strategies, be it momentum or mean reversion. Now, having access to the algorithm code itself, it is easier for us to dig deeper and find out what's causing this. Here's what I found out:

First, I proceeded to check where the negative returns attribution comes from: the long side or the short side? For this, the backtest has to be run with the position-taking on one side commented out; the dollar-neutral and beta-neutral constraints also have to be commented out, or else the Optimize API will not work. With this, I determined that the losses were mainly being made on the short side in the case of these sectors. On looking further into this, it turned out that, in general, irrespective of the sectors chosen, the short side always loses money over the long run (backtests of 10 years or so), and it is sectors like energy and healthcare that provide the lowest cost of "insurance", so to speak, and are therefore automatically selected by most algorithms that seek to maximize the Sharpe ratio.
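As a lighter-weight complement to re-running one-sided backtests, the long/short split can also be estimated after the fact from a backtest's positions and security returns; here is a minimal pandas sketch with hypothetical records and column names.

```python
import pandas as pd

# Hypothetical per-position daily records from a backtest: dollar position
# held at the start of the day and that day's return of the security.
records = pd.DataFrame({
    'date':     pd.to_datetime(['2018-01-02'] * 4 + ['2018-01-03'] * 4),
    'position': [50_000, -50_000, 80_000, -80_000, 55_000, -45_000, 75_000, -85_000],
    'ret':      [0.010, 0.004, -0.002, 0.006, 0.003, -0.011, 0.007, 0.002],
})

records['pnl'] = records['position'] * records['ret']
records['book'] = records['position'].apply(lambda p: 'long' if p > 0 else 'short')

# Daily and cumulative P&L attribution by book: a persistently negative
# short book over long backtests is the pattern described above.
attribution = records.groupby(['date', 'book'])['pnl'].sum().unstack()
print(attribution)
print(attribution.sum())
```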

On a side note, we all hear of long-only strategies that are profitable, but a short-only strategy that is profitable with a large number of stocks over 10+ years is extremely rare. I know of only one hedge fund manager, Dana Galante of "Stock Market Wizards" fame, who specializes in short-only strategies.