Another version - Is it worth deploying capital?

$1 million, 50 names, zero leverage.

Looks good to me. How are the results with gross leverage = 1 (in contrast to the average of ~1.4)?

The Sharpe drops to 1.3 because the spreads are very small.

Looks great! Seems contest-worthy. A few suggestions:

1. Your leverage creeps up over time. I suggest fixing the normalization so that it is always ~1.0. Although Quantopian allows leverage up to 3.0 for the contest, I was told that they normalize to 1.0 when comparing algos.
2. Typically, I run backtests back to the earliest possible date. I suggest doing the same, unless there is a reason to think that your strategy only applies after a certain date.
3. You need to run backtests at $10M, since this is the contest requirement (and presumably the capital level relevant to the fund).
4. I've been working on a long-short mean-reversion strategy, and I suspect there is something fundamentally different about periods of fear-uncertainty-doubt (FUD) versus steady bull or irrational-exuberance periods. Your algo may reflect this as well. It is interesting that the Q tear sheet has a special Stress Events section, which curiously includes some non-stress events. If most of an algo's performance comes from gains when the market is stressed (dropping/choppy), then it will underperform during a long, steady bull run. It appears that your algo carries this risk, so you might want to dig into it.

Thanks very much, Grant. I will take a look. Unfortunately, most statistical arbitrage strategies require leverage; otherwise they won't work.

Your leverage roughly monotonically increases with time, which suggests that you may have a normalization problem. If you normalize your portfolio weight vector to 1.0 and use order_target_percent, it'll fix it. Then, you could always add back in dynamic leverage by just multiplying the vector by a factor.
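A minimal sketch of that normalization (the weight dict and `raw_weights` name here are illustrative; `order_target_percent` is the Quantopian ordering call you'd use inside the algo):

```python
import numpy as np

def normalize_weights(weights):
    """Scale a weight dict so gross leverage (sum of |w|) is exactly 1.0."""
    gross = np.sum(np.abs(list(weights.values())))
    if gross == 0:
        return weights
    return {asset: w / gross for asset, w in weights.items()}

# Inside a Quantopian algo, you would then place the orders:
#     for asset, w in normalize_weights(raw_weights).items():
#         order_target_percent(asset, w)
# To add dynamic leverage back, multiply by a factor L:
#     order_target_percent(asset, L * w)
```

Because the weights are renormalized every rebalance, gross leverage stays pinned at ~1.0 instead of drifting as positions compound.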

Regarding leverage, if your algo is selected for funding, my understanding is that you won't have control over it anyway. Q will normalize to 1.0, and then dynamically allocate and leverage across the algos in their fund. You'll get a percentage of your algo's leveraged returns, based on their decision. In my backtesting, I just keep the leverage at 1.0 (but for contest entries, I jack it up, although it is not obvious if this is the best choice).

Another comment: at leverage 1.0, with a backtest back to 2002 and a relatively large, diversified universe, I suspect it is really challenging to get a Sharpe > 1.0 without some form of bias/over-fitting. It'd be interesting to know whether this is borne out by the backtest results Q is seeing for the algos they've funded, but my sense is that they'll keep that info close to their chest.

But if they get a bunch of orthogonal strategies that can be mixed and matched dynamically in a predictable fashion, then maybe SR ~ 1.0 is just fine? Again, it would be interesting to see what they've selected thus far, including the orthogonality (uncorrelatedness).
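For intuition on why SR ~ 1.0 per strategy may be fine at the fund level: under the standard assumptions of an equal-weight mix of n strategies with equal Sharpe and equal volatility, the portfolio Sharpe scales like SR·√(n / (1 + (n-1)·ρ)) for pairwise correlation ρ. A toy check (numbers purely illustrative):

```python
import math

def combined_sharpe(sharpe, n, rho=0.0):
    """Sharpe of an equal-weight mix of n strategies, each with the given
    Sharpe and the same volatility, under pairwise correlation rho.
    Mean scales with n; stdev scales with sqrt(n + n*(n-1)*rho)."""
    return sharpe * n / math.sqrt(n + n * (n - 1) * rho)

# Ten uncorrelated SR = 1.0 strategies -> portfolio SR ~ 3.16,
# but at rho = 0.5 the diversification benefit mostly vanishes (~1.35).
```

This is why orthogonality matters as much as the individual Sharpe: the benefit of pooling algos collapses quickly as pairwise correlation rises.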

I know for a fact (from a classmate who now heads systematic trading at a hedge fund here) that strategies with Sharpe > 2 at leverage 1.0, sustained for more than 10 years, do exist. So it is possible.

Yes, I'd like to know more about the algos they have selected thus far, but they are very quiet about it. Maybe they should put up a presentation on these algorithms and how the P/L looks. I'd be very interested.

@Grant, while we are at it, can you take a look at this post and tell me your opinion? You are the expert on this subject.

Regarding realistic SRs, the guidance of ~1.0 comes from this book: