Lessons From Tearsheet Analysis Webinar

I am glad to have participated in Dr. Jess Stauth's Tearsheet Analysis Webinar, as it made me realize that my initial approach to strategy development for Q's design of a "slow and steady" low-volatility, highly diversified, sector/style risk-controlled, long/short market-neutral equity fund was totally off the mark. Like several other community members who have begun to refocus their development based on this newfound guidance, I am posting below my first crack at it.

There are several good takeaways from the webinar, and other members have already articulated them in other threads, but my biggest takeaway is scoring factors and/or combinations of factors across the whole QTU universe.
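To make that concrete for newer members, here is a minimal, hypothetical sketch of what ranking across the whole QTU looks like in a Pipeline. The factors and the equal weighting below are placeholders for illustration only, not the ones in my algo:

```python
# Minimal sketch (hypothetical factors/weights, not my actual ones):
# rank each factor across the entire QTradableStocksUS universe,
# then combine the z-scored ranks into a single alpha score.
from quantopian.pipeline import Pipeline
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.factors import Returns

def make_pipeline():
    universe = QTradableStocksUS()

    # Placeholder factors; substitute your own.
    momentum = Returns(window_length=126, mask=universe)
    short_term = Returns(window_length=5, mask=universe)

    # Rank/z-score each factor over the whole QTU, not a subsection of it.
    combined = (
        momentum.rank(mask=universe).zscore()
        - short_term.rank(mask=universe).zscore()  # short-term reversal
    )

    return Pipeline(
        columns={'alpha_score': combined},
        screen=universe,
    )
```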

Personally, I prefer long in-sample backtests for training, with a couple of years held out as an out-of-sample period. So for this exercise, my training period was 01/02/2008 to 12/31/2015, and the OOS period is 01/04/2016 to 08/22/2018.

Below is the training-period notebook, followed by the OOS-period one. I welcome all kinds of comments and feedback. Thank you.

[Notebook attached]

And here is the Out of Sample notebook:

[Notebook attached]

Looks great, thanks for posting! As a total amateur, it's super helpful to see how y'all think about (and change how you think about) the process.

I do think it would be a good idea to include 2006-07 if possible. That period may prove a more testing one for your algo (or not!)

@James -

I was a bit surprised that the Q desire is for algos that trade hundreds up to a thousand or so stocks, but I guess this makes the problem easier for them. There's no need to worry about the alpha combination problem at the fund level; each algo can be treated more or less in isolation. I think it says that one needs to identify either a single broad market anomaly or a set of N complementary factors, where N ~ 5-10.

Does your algo have multiple factors?

@Grant,

Yes, multiple factors, all complementary to my assumed economic rationale...survival of the fittest!

Tearsheet looks pretty good to me.

Wondering if each factor is being ranked on the entirety of the universe or subsections of it.

Hi Leo M,

Factors were ranked across entire QTU universe.

I wanted to see whether this strategy performs consistently across different market volatility regimes, so I ran this neat notebook based on this Q staff post here. I did it separately for the in-sample and OOS periods. The results tell me that the algo can perform consistently well under both high- and low-volatility regimes and that its volatility preference adapts to current market conditions. While not perfect yet, it has pointed me to areas where I need to improve.
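For anyone who wants to replicate the idea without the notebook, here is a rough sketch of the kind of regime bucketing involved. The 63-day window and the terciles are my own arbitrary choices and not necessarily what the Q staff notebook uses:

```python
# Rough sketch of a volatility-regime sensitivity check (assumptions:
# 63-day realized vol of the market as the regime proxy, terciles as
# the buckets; the Q staff notebook may do this differently).
import numpy as np
import pandas as pd

def regime_sensitivity(algo_returns, market_returns, window=63):
    """algo_returns / market_returns: daily pd.Series indexed by date."""
    # Annualized rolling realized volatility of the market.
    realized_vol = market_returns.rolling(window).std() * np.sqrt(252)

    # Split the sample into low / mid / high volatility regimes.
    regime = pd.qcut(realized_vol.dropna(), 3,
                     labels=['low_vol', 'mid_vol', 'high_vol'])

    # Annualized mean return and Sharpe of the algo inside each regime.
    grouped = algo_returns.reindex(regime.index).groupby(regime)
    return pd.DataFrame({
        'ann_return': grouped.mean() * 252,
        'ann_sharpe': grouped.mean() / grouped.std() * np.sqrt(252),
        'n_days': grouped.size(),
    })
```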

[Notebook attached]

Here's the OOS period volatility regime sensitivity analysis.

[Notebook attached]

@James, were you surprised by the 2.3% CAGR results or was it what you expected?

A 2.3% CAGR has a doubling time of 30.5 years; you would therefore expect to double the initial stake only after running the strategy for about 30.5 years. Yet the strategy has already started to decay, CAGR-wise, and it will get worse if you add more years. It is not that you are losing alpha; it is just the way you built the strategy. I did not see any compensating measures for that phenomenon in your test results.
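To make the arithmetic behind that number explicit:

$$ t_{double} = \frac{\ln 2}{\ln(1 + 0.023)} \approx \frac{0.693}{0.0227} \approx 30.5 \ \text{years} $$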

Also of note, the inflation rate in the US is currently 2.9%, and at that level the strategy is losing buying power. You have not yet covered taxes or made provisions for margin fees. Even with the Fed's rate currently at 2.0%, it would not make much sense to leverage the thing either.

Have you noticed that out of the 203,200 trades taken, a single trade accounts for ≈10% of all the profits made? It would be interesting to see how many of the largest profitable trades would be required to equal all the profits. I suspect fewer than 50. Even at that number, it would represent only about 0.0246% of all trades taken, which stresses the importance of those few trades. How many do you think would have been needed?

@James, here are some thoughts on possible ways to raise your average net profit per trade.

Every dollar you can add to your average net profit per trade is some $200k added to the bottom line. What could you do to make it happen?
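To spell out the arithmetic, using the trade count from your round-trips table:

$$ \Delta \text{profit} \approx n \cdot \Delta \bar x = 203{,}200 \times \$1 \approx \$203\text{k} $$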

One thing would be to allow more volatility. Understandably, a low-volatility target does give a smoother equity curve, but it also implies the price moves \( \Delta p \) are being squeezed.

On a trading unit of about \(\$10\)k, the strategy makes on average \(\$11.17\). That's an average price appreciation of 0.1117% on a typical trade.

There is nothing wrong with that, but the question remains: what could you do to increase that number? Where in your code could you push for a higher trade exit? Note that oftentimes just delaying the exit will increase profitability. The reason is simple: \(\Delta p\), which is subject to variance, will increase in step with it since it is proportional to the square root of time \( \sqrt{t} \). Giving a trade more time increases the variance and thereby increases the spread between entry and exit points, on average.
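To state that square-root-of-time argument explicitly, under the usual random-walk assumption (which is only an approximation for real prices):

$$ \sigma_{\Delta p}(t) \approx \sigma \sqrt{t} \quad \Rightarrow \quad \mathbb{E}\left[\,|\Delta p|\,\right] \propto \sqrt{t} $$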

Shorts make up 50.4% of your trades, yet only 34% of them are profitable, and you lose on average $28.25 per short trade. That could be minimized as well. It says that your trading strategy does not handle its shorts properly: either you force this to happen, which technically you do by design by being market-neutral, and/or your strategy misidentifies what could be a potential short.

Your strategy's beta is only 0.01. You could certainly accept it being a little higher, even if that will slightly increase your drawdown. You need to find a compromise in all this.

@Guy,

No, I wasn't surprised and kind of expected the 2.3% CAGR for the "slow and steady" low-volatility, highly diversified, sector/style risk-controlled, long/short market-neutral equity strategy that Q is looking for. Empirically, one can achieve a higher CAGR by taking on more risk (higher volatility), confirming the old investment adage "high risk, high returns" and, vice versa, "low risk, low returns".

The "decay" in OOS CAGR, I attribute to low volatility regime change from the high volatility environment in the in sample period. See the notebooks above on volatility regime sensitivity analysis. What never ceases to amaze me is your ability to predict that "it will get worse if you add more years". And you say that it is because of the way I built the strategy, please elaborate on this and the lack of compensating measure.

I am aware of the inflation rate, risk-free rates, taxes and borrowing fees you mentioned, issues I've raised before that don't seem to concern Q, because at the fund execution level they plan to leverage the algo 5-8 times, along the lines of the strategy that their principal investor, Steve Cohen of Point72, employs. In short, their confidence comes from the algo's low volatility (low risk) and its market neutrality, which protects it from adverse market swings.

You say:

Have you noticed that out of the 203,200 trades taken, a single trade accounts for ≈10% of all the profits made?

Can you please point out to me where in the tearsheet you found this "fact"? I just don't see it.

Thanks for your feedback.

The changes that James mentioned in the opening post, in my opinion, capture the guidance from the "Live TearSheets Webinar" very well.

In a 4x-6x leveraged environment an algorithm that returns 2.3% (unleveraged) with very low drawdowns/volatility/risk exposures is useful. No?

If we take into consideration that the Quantopian backtester accounts for neither borrowing costs nor the cost of leverage, then this algo will not be profitable at all in reality. Guy can easily prove it.

The default trading cost model Q uses is pretty conservative, I believe, possibly to account for shorting and leverage costs?

Joakim,

As far as I know, Quantopian's default trading cost model only accounts for commissions and slippage.

Fair enough, Vladimir.

Hi Vladimir,

You say:

If we take into consideration that the Quantopian backtester accounts for neither borrowing costs nor the cost of leverage, then this algo will not be profitable at all in reality. Guy can easily prove it.

In reality, this can be hard to prove unless you are Q or their prime broker, who have entered into a private and confidential agreement with regard to the cost of borrowing. You are thinking retail, not wholesale institutional investing, which is what the Q hedge fund is, and at that level the cost of borrowing is highly negotiable. I guess one has to be within this industry to fully grasp the concept. But I hope you and Guy can easily prove it.

James -

I wouldn't get too caught up in all of this low-returns concern. If you are happy with the algo, let it percolate for 6 months. You should be able to fiddle with the Optimize API settings and create a whole set of algos that Q can evaluate, if it turns out they don't like this low-volatility/low-return one. You could also look into how you are combining the factors; maybe there is something to be had in this area.
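For example, here is a quick, hypothetical sketch of the kind of Optimize API settings one could sweep to generate those variants. The bounds are placeholders, not recommendations, and `context.pipeline_data['alpha_score']` is just an assumed name for your combined factor:

```python
# Hypothetical sketch: vary the constraint bounds below to produce a
# family of algo variants with different volatility/return profiles.
import quantopian.algorithm as algo
import quantopian.optimize as opt

def rebalance(context, data):
    # Assumed: your combined factor, produced by an attached pipeline.
    alpha = context.pipeline_data['alpha_score']

    objective = opt.MaximizeAlpha(alpha)

    constraints = [
        opt.MaxGrossExposure(1.0),    # try 1.0 vs. something looser
        opt.DollarNeutral(),          # long/short market neutrality
        # Per-position cap: widen/tighten to change concentration.
        opt.PositionConcentration.with_equal_bounds(-0.01, 0.01),
        # Turnover and sector/style-exposure constraints are further
        # knobs worth sweeping as well.
    ]

    algo.order_optimal_portfolio(objective, constraints)
```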

I got the sense that Q will be publishing details on their headline-grabbing (and presumably exemplary) $50M algo. Maybe we'll get some idea of the returns and Sharpe ratio, among other performance metrics? Who knows...maybe it looks just like what you published above? Perhaps Point72 needs a kind of passbook savings account?

Hi Grant,

Thanks for your feedback and advice. I actually looked into how I was combining my factors after I ran the volatility regime sensitivity analysis notebook, which gave me a hint on how I could improve overall performance. I was looking for something that could perform consistently within the bounds of what I perceive Q to be looking for and survive the test of time. After revising the calculation of one of the factors, I saw some improvement in returns and was satisfied with the regime-change statistics. I am taking your advice, entering it in the contest, and will let it percolate for six months and see how it goes!

@James, in your round_trips=True section you have a largest winning trade of $216,922, while total profit came in at $2,270,658. This outlier alone is 9.55% of total profits. I suspect that the next 100 in line would exceed total profit. That would be some 100 trades out of 203,200 accounting for all the profits, about 0.049% of all trades taken. In your backtest, only the 86 highest trades were needed to exceed that mark. So, in this case, you might as well forget the 20/80 rule of thumb. Moreover, UFS and MSI accounted for over 20% of the P&L (2 stocks out of the thousand-plus traded).
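If anyone wants to check that kind of concentration on their own tearsheet, a rough sketch would be the following (assuming you have the round-trips table with a pnl column, as used for the tearsheet):

```python
# Rough sketch: how many of the largest winning round trips does it
# take to match the strategy's total net profit?
def trades_needed_to_cover_total(round_trips):
    """round_trips: DataFrame with a 'pnl' column, one row per trade."""
    total_profit = round_trips['pnl'].sum()

    # Largest winners first, then accumulate until total profit is reached.
    winners = round_trips['pnl'].sort_values(ascending=False)
    cumulative = winners.cumsum()
    n_needed = (cumulative < total_profit).sum() + 1

    share_of_trades = n_needed / float(len(round_trips))
    return n_needed, share_of_trades
```

On the numbers above, that count comes out to 86 trades, i.e. well under 0.05% of all trades taken.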

I do not see how Q or anybody else on the planet could leverage that particular trading strategy, as is, to something like 5-8 times, even with free capital (OPM), no matter who they are or how creative they could be in financial engineering. Managing money will cost something, even if you could get below-LIBOR rates, and that is before considering such things as the needed personnel, hardware and software monitoring and expenses, opportunity costs, alternative investments, margin, leveraging and management fees. Here is another simple question: how much is 20+ years of your time on this kind of problem worth to you? Because that is also part of the price to pay.

The decay of your trading strategy is built in. The strategy insists on a 50/50 long/short split for market neutrality. Your trade execution summary effectively shows an average number of transactions per period; call it an average number of trades per month or per year. Your average daily turnover chart, for instance, says just that.

Your average net profit per trade ended at \( \bar x = \$11.17\). As time progresses, \( \bar x \) will continue to decrease; most of the trades are concentrated near zero (see your "P&L per round trip trade in $" chart for confirmation). The strategy is expected to continue generating some 29,000 trades per year. To see this more clearly, redo the simulation with start date 2008-07-01 and end date 2018-08-22.

Note that it is not just your trading strategy that behaves this way; I think every strategy participating in the contest suffers from the same thing. Nowhere on Quantopian, in all the examples provided or in the discussions, have I seen anyone even trying to compensate for CAGR degradation by any means, so I have to conclude that none are doing it. How could they compensate when they don't even look at the total picture, but only period to period? They could easily prove me wrong by showing tearsheets that demonstrate it; I can read it off those charts.

To help find ways to improve on the strategy, let's start with your strategy's portfolio payoff matrix equation.

$$ F(t) = F_0 \cdot (1 + \bar r)^t = F_0 + \sum (H \cdot \Delta P) - \sum (Exp) = F_0 + n \cdot \bar x $$

You fixed \( n \cdot \bar x \) to a roughly constant (in fact, slowly decaying) amount per period and will therefore have your CAGR slow down with time. This makes your strategy's return roughly linear, \(\approx (1+rt) \), instead of compounding, \(\approx (1+r)^t \). It does not seem like much over the short term; at the end of year one they give the same answer. But over the long term, it will make quite a difference.
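To put rough numbers on it, using the 2.3% rate above just for illustration:

$$ 1 + 0.023 \cdot 10 = 1.23 \quad \text{vs.} \quad (1.023)^{10} \approx 1.255 $$

$$ 1 + 0.023 \cdot 20 = 1.46 \quad \text{vs.} \quad (1.023)^{20} \approx 1.576 $$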

Your strategy has set \( \Delta (n \cdot \bar x) / \Delta t \) to a gradually decreasing number since \( \bar x \) is decreasing, which is to say the profit generation's cruising speed is slowing down. Because of this, you will see a decaying CAGR, not just as if the law of diminishing returns were kicking in, but also because some of it is camouflaged in the trading procedures themselves.

The portfolio's payoff matrix equation tells you what you could do to compensate for the decreasing CAGR: find ways to increase \(n\), the number of trades, and find ways to increase \( \bar x \), the average net profit per trade. It means that you will have to apply a positive monotonic function to the payoff matrix, \( \sum (\gamma (t) \cdot H \cdot \Delta P) \). This will have \( \Delta (n \cdot \bar x) / \Delta t \) increase with time.
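As one simple illustration of such a monotonic function (one possible choice among many, not a prescription): if per-period profits were otherwise roughly constant, then

$$ \gamma(t) = (1+g)^t, \ g > 0 \quad \Rightarrow \quad F(t) = F_0 + \sum \left( \gamma(t) \cdot H \cdot \Delta P \right) $$

grows geometrically rather than linearly.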

It will compensate for the return degradation and give you back your CAGR. I wrote a piece on that subject last year; see, for instance, the link below, where the problem is made more explicit with charts:

https://alphapowertrading.com/index.php/12-research-papers/7-building-your-retirement-fund

Hope it helps.

James,

I agree that the cost of borrowing is highly negotiable; I may even assume that rates can sometimes be negative. But then a Sallie Mae 2.80% 2-year CD is a better choice compared to the algo.

@Guy,

You've taken a single statistic, the largest winning trade of $216,922, divided it by the total profit of $2,270,658, and come up with 9.55%; fine. Conversely, you can take the largest losing trade of -$131,145, divide it by the same total profit of $2,270,658, and come up with -5.78%. I'd rather you looked at both sides, winning and losing trades, since your divisor is net profit :)

I do not see how Q or anybody else on the planet could leverage that particular trading strategy, as is, to something like 5-8 times, even with free capital (OPM).

You probably have to talk to Mr. Steve Cohen of Point72 about how he does this. Here's a link for your guidance: cohen-point72-s-reveals-high-leverage

Based on your statement above, I am assuming you have very little or no exposure at all to institutional trading. Haven't you heard of some prime brokers giving rebates to their institutional clients in exchange for order flow? It's a totally different game at the institutional level, where hundreds of billions are exchanged on any given day. Prime brokers specializing in institutional clients rely heavily on volume business and on providing liquidity to the market, and costs and fees are highly negotiable.

Guy, you are a good accountant. You can take the statistics and numbers provided by the tearsheet and formulate them in a closed equation, which is fine. But when you start changing and manipulating numbers within that closed equation, which represents what happened in the past, to make predictions about how it will perform in the future, I cringe. It is very easy to play Monday-morning quarterback with the full benefit of hindsight, but those, in my humble opinion, are mere afterthoughts whose veracity has to be proven in real time in the future. So the rest of your proposed solutions and your take on this algo I take with a grain of salt, no offense. But I do appreciate your feedback, because it gives me a glimpse of how others think, even though I know we're of different mindsets. Thanks.

I'll soon be starting a new post with the current tearsheets of my algorithm. I would appreciate feedback, even a different viewpoint or constructive criticism; I'll be glad to get your views.

Hi Vladimir,

The algo as it is, unleveraged, is no better than a Sallie Mae 2.80% 2-year CD, agreed! But as I keep saying, at the fund execution level Q plans to leverage it 5-8 times, and that gives a totally different outcome and picture.

PS - Having said that, I do not totally subscribe to this high-leverage strategy. In a low-interest environment like today's it might work, but in a high-interest environment like the early 1980s it can fall apart, as the disparity between risk-free rates and borrowing rates becomes wider and lenders might not be as lax as they are today.

I'm guessing one can't leverage a Sallie Mae 2.8% 2-year CD 4x+ because the prime broker's discounted margin rates will be tied to some transaction levels?