I think I just built my first 2.0+ Sharpe contest strategy, tearsheet

I was exploring an alternative dataset and came up with this strategy. It combines three alpha factors taken from the same dataset and feeds those into MaximizeAlpha in order to generate portfolio weights. Originally it did not survive slippage, but I was able to resolve that by lengthening the holding times. It turns out the signal is weaker during high volatility regimes. So, not knowing what else I could do, I decided to scale up holding times in proportion to VIX in order to reduce portfolio churn and needless slippage while the signal is weak, which worked. The signal isn't symmetrical on the long and short sides, so that's why the position concentrations differ so much.
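For anyone curious what VIX-proportional holding times might look like, here is a minimal sketch. The scaling constants, bounds, and the idea of clipping to a base/max range are my own assumptions for illustration, not the author's actual code:

```python
import numpy as np

def holding_period_days(vix, base_days=5, ref_vix=15.0, max_days=60):
    """Scale the holding period up with VIX: when volatility is high and
    the signal is weak, hold positions longer to cut churn and slippage.
    Constants here are illustrative, not the author's values."""
    scaled = base_days * (vix / ref_vix)
    return int(np.clip(scaled, base_days, max_days))
```

With `ref_vix=15`, a calm market keeps the base holding period, while a VIX spike to 45 triples it, and extreme readings are capped at `max_days`.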

What do you guys think based on this tearsheet?

I'm going to keep working on it, and if I can resolve the Tradable Universe issue I'm running into, I plan to submit it to the contest. I've tried everything I can think of, though. I even tried an exclusion list to manually prevent the non-QTU stocks from ever trading, and even that doesn't pass.

[Notebook attached - preview unavailable]

22 responses

@Viridian, great work. My suggestion: keep pushing to increase the average holding period. It will raise the average net profit per trade, and since your strategy already exhibits a positive edge, it should improve on that too, resulting in higher profits.

Also, look at your shorts more closely. They represent about 19.6% of your trades and yet generate about 55.4% of your overall profits. Your question should be: does extending their holding time further increase their average net profit per trade too?

Hey Viridian,

Nice work !

Could you extend the backtest period? Since 01/2016 - today is one of the strongest up-market phases in history, you should backtest on a more volatile period that includes down-market regimes (31/08/2018-31/12/2018 isn't sufficient to test that).

great work. My suggestion is: continue to push to increase the average holding period.

Thanks. I'm already nearing the limit at times, with rolling turnover getting as low as 6%, so I'm not sure how much more I can squeeze out in that department. I will look into handling the holding period for longs and shorts separately and see if there's room for improvement there. That's a pretty good idea.

My hunch is that since the shorts are the ones returning all the alpha, if I can trade more often on the short side I may be able to generate more returns. The longs on the other hand are simply hedging market exposure and not delivering any alpha, so they may as well be held as long as possible to reduce needless churn. I will test and see if this is the case.

My concern would be that different holding periods for the long and short sides would mess up the hedging. Once I close a short and open another one, the long hedge I'm still holding may no longer be appropriate, and I may experience increased exposure to some risk factor.

Could you extend the backtest period? Since 01/2016 - today is one of the strongest up-market phases in history, you should backtest on a more volatile period that includes down-market regimes (31/08/2018-31/12/2018 isn't sufficient to test that).

I recognize that this is a major flaw. Unfortunately the data does not extend further back. I think most of the alpha is generated on the short side, and the longs are simply acting as neutral hedges. My sense is that it should still offer an edge even in a downturn, though perhaps not as strong a one.

Are you able to use self-serve data in the contest? I didn't realize that was possible. How are you updating the data regularly?

@Robert: Yes, self-serve data is allowed in the contest. For 'live load' datasets, we keep records of when each data point was uploaded to Quantopian and we apply our regular point-in-time techniques on the dataset to make sure that any data uploaded via self-serve after the initial historical load is only usable in the simulation after the upload date.
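The point-in-time idea Jamie describes can be illustrated roughly like this sketch. The column names and data are made up for illustration; this is not Quantopian's actual implementation:

```python
import pandas as pd

# Each row records when the data point was uploaded to the platform
# (hypothetical example data and column names).
data = pd.DataFrame({
    "value":       [1.2, 3.4, 5.6],
    "upload_date": pd.to_datetime(["2019-01-02", "2019-01-03", "2019-01-06"]),
})

def visible_as_of(df, sim_date):
    """Only rows uploaded on or before the simulation date are usable,
    which prevents lookahead bias on data uploaded after the fact."""
    return df[df["upload_date"] <= pd.Timestamp(sim_date)]
```

On a simulation date of 2019-01-04, only the first two rows would be visible; the row uploaded on 2019-01-06 only becomes usable from that date onward.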

@Viridian: Thanks for sharing this. It looks like you're getting really good use out of the Quantopian toolkit! For the universe issue, if I get stuck trying to debug why an algorithm is below the minimum QTU threshold, I usually use the notebook in this tutorial lesson as it gives me a little more control/insight into the results of my backtest.

One other thought: have you tried running the same strategy with TargetWeights instead of MaximizeAlpha? I find the TargetWeights behavior to be a little more intuitive than MaximizeAlpha. I'm not sure how it will affect your results but maybe it will help clarify what's going on.

If you get stuck, feel free to email in to [email protected] and we'll see if we can come up with any other suggestions on how to debug it.


@Viridian:

Have you tried calculating the weights with MaximizeAlpha first and then placing the order using TargetWeights?
Something like this:

objective = opt.MaximizeAlpha(your_alpha)
target_weights = opt.calculate_optimal_portfolio(
    objective=objective,
    constraints=constraints,
)

# Assuming q_assets is the output from your pipeline.
# reindex avoids a KeyError for assets the optimizer dropped,
# filling them with a weight of 0.
final_weights = target_weights.reindex(context.q_assets.index, fill_value=0)

final_objective = opt.TargetWeights(final_weights)
final_constraints = [opt.MaxGrossExposure(1.0)]
order_optimal_portfolio(
    objective=final_objective,
    constraints=final_constraints,
)

Thanks for the tips, Jamie. I've somehow managed to resolve the QTU issue. Looks to have been a mistake in my logic.

Thanks, Enigma. That's precisely what I've done, except I keep a rolling list of target_weights from previous days in a context variable and combine them before ordering. That's how I lengthen the holding period.
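The rolling-weights trick described above might look something like this sketch. The window length and the equal-weight averaging of days are my assumptions, not necessarily Viridian's exact approach:

```python
from collections import deque

import pandas as pd

class RollingWeights:
    """Average the last `window` days of optimizer target weights to slow
    portfolio turnover and effectively lengthen the holding period."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def update(self, todays_weights):
        """Add today's target weights (a pandas Series indexed by asset)
        and return the average over the retained window."""
        self.history.append(todays_weights)
        # Align all days on the union of assets; assets missing on a
        # given day are treated as a 0% target for that day.
        combined = pd.concat(list(self.history), axis=1).fillna(0)
        return combined.mean(axis=1)
```

Each day the algorithm would feed the averaged Series into `opt.TargetWeights` instead of the raw single-day optimizer output.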

how are you updating the data regularly

I wrote a simple PHP script that pulls the data from its real-time source and outputs it in the CSV format that Quantopian expects. The PHP script sits on my web server and translates the data on demand. Quantopian handles the rest.
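The translation step is straightforward; here is the equivalent sketch in Python (the thread's language) rather than PHP. The date/symbol/signal column layout is an assumption; the exact columns depend on how the self-serve dataset was configured:

```python
import csv
import io

def to_quantopian_csv(rows):
    """Translate raw records (list of dicts) into a date/symbol/value
    CSV string for a self-serve upload. Column names are illustrative."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["date", "symbol", "signal"])
    for r in rows:
        writer.writerow([r["date"], r["symbol"], r["signal"]])
    return buf.getvalue()
```

A web endpoint would run this against the real-time source on demand and serve the resulting CSV for Quantopian to fetch.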

Wow, that looks incredible, well done @Antony!

I haven’t used this new version yet (which is great, thanks @Thomas), but I’m pretty sure it uses EOD positions from the backtest, so would include trading costs, unless it was set to 0 in the backtest. Maybe @Thomas can confirm?

Ah, if I run that version of the notebook it looks a little more flattering than my previous post. :)

[Notebook attached - preview unavailable]

Also very impressive, well done @Viridian!

What does the negative short-term reversal exposure mean on the chart? It always confuses me whether these refer to net or gross exposure. Does it mean I'm more exposed to short-term reversal on the short side than the long side (net exposure to STR)? Or does it mean my gross exposure is negative (positions are generally inversely exposed to STR)?

Thanks, @Antony. That makes sense. So inverse exposure to short-term reversal is essentially the same as being exposed to 15-day momentum.

As nearly 6 months have elapsed since I came up with this factor, I couldn't contain my urge to start investigating how this strategy has held up out-of-sample.

I'm happy to report an OOS Sharpe ratio of 3.11!

In addition, the algorithm has also been placing fairly consistently on the contest leaderboard for a few weeks now.

[Notebook attached - preview unavailable]

Encouraged that the strategy is indeed predictive and not spurious, I decided to devote some more time analyzing the factor. The depressing thing is that I've waited this long to realize that it could be much, much better.

The first thing I did was split the algorithm into a long-only and a short-only algorithm. I did this by taking the target weights output by calculate_optimal_portfolio, setting negative weights to zero and vice versa, and then feeding those new weights into order_optimal_portfolio with no constraints.
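That long/short split is simple to reproduce; here is a sketch, assuming `weights` is the pandas Series returned by `calculate_optimal_portfolio` (the helper name is mine, not from the thread):

```python
import pandas as pd

def split_long_short(weights):
    """Zero out one side of the book to isolate each leg's contribution.
    Returns (long_only, short_only) weight Series."""
    long_only = weights.clip(lower=0)   # negative weights set to zero
    short_only = weights.clip(upper=0)  # positive weights set to zero
    return long_only, short_only
```

Feeding each Series into `opt.TargetWeights` separately lets the two legs be backtested as independent algorithms, which is how the per-side alpha figures below were obtained.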

This confirmed what I'd long suspected -- this is a short-only factor. The short side generated 0.27 alpha, while the long side generated -0.06 alpha. In other words, the long side is simply acting as a hedge, but it introduces needless portfolio churn and slippage as the factor values change every day. A simple improvement would be to rebalance the long side only once a year or so. A better improvement would be to forgo the pointless neutral hedge and combine this with an entirely different strategy that generates alpha on the long side as well.

If only I'd pursued this line of thought six months ago, I wouldn't now have to wait another six+ months to gain meaningful OOS confirmation for any changes I apply. Doh!

For the sake of further investigating the alpha factor, I threw together a simple short-only algorithm that showcases its predictive strength on a shorter holding period.

This uses default slippage and commissions and rebalances once a day using order_optimal_portfolio. It is not intended to show attainable real-world trading results. (The risk management is terrible, drawdowns unacceptable, and turnover too high.) Rather, I just wanted to confirm that the alpha is significant.

Specific Sharpe Ratio 3.01
Alpha 2.76
Beta -0.87
CAGR 819.386%
Stability 0.97
Percent profitable 0.62
Profit factor 2.17
Ratio Avg. Win:Avg. Loss 1.34

This alpha factor could potentially be quite valuable to a fund with a little more sophistication that, unlike Quantopian, doesn't operate on an artificial 2-day delay on its signal. In a signal combination setting, it could inform whether to buy a stock today or wait a week, for example, and generate tremendous alpha simply by affecting the timing.

I'm not sure how to best apply this alpha factor to a Q fund-oriented algorithm. I guess keeping turnover low is most important, whereas keeping risk factor exposure in check is no longer a major concern. There was talk of them opening up to short-only strategies and other strategies that don't necessarily fit the old criteria. Has anybody heard from them recently? Most of the emails I've sent over the past couple months have gone unanswered. Some guidance on the best way to implement this factor into a strategy would be great.

Ideally they would offer some mechanism for submitting the raw factor scores and they could work out what they want to do with it, since their implementation guidelines keep changing and evolving, while alpha remains alpha.

[Notebook attached - preview unavailable]

Sorry for a noob question: why is the Sharpe ratio in the backtest 2.94? I always thought that the Sharpe ratio is calculated as (annual return - annual risk-free rate) / annualized volatility.

Assuming a risk-free rate of 1%, shouldn't the Sharpe ratio be (819% - 1%)/89% = 9.19?!
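For what it's worth, backtest Sharpe ratios are conventionally computed from the mean and standard deviation of daily returns, annualized by sqrt(252), rather than from CAGR over annualized volatility; compounding makes the CAGR-based figure much larger. A sketch of the conventional calculation:

```python
import numpy as np

def annualized_sharpe(daily_returns, risk_free_daily=0.0, periods=252):
    """Conventional Sharpe ratio: mean excess daily return over the
    standard deviation of daily returns, scaled by sqrt(periods/year)."""
    excess = np.asarray(daily_returns, dtype=float) - risk_free_daily
    return excess.mean() / excess.std(ddof=1) * np.sqrt(periods)
```

Whether the backtester subtracts a risk-free rate and exactly which return series it uses may differ from this sketch, which could explain discrepancies between the numbers reported in different places.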

@Viridian congrats! 2.76 alpha and a 3.01 Sharpe ratio 6 months later... impressive!

What kind of factors are the three you're using? A mix of Momentum, Growth, Valuation, Profitability, Cash flow, Capital allocation...? Or, since you're using an alternative dataset, does it have nothing to do with fundamentals? (I understand if you don't want to answer. :))

Thanks in advance

@Nadeem Ahmed - very good question. Depending on where Quantopian reports it, the Sharpe ratio varies. So it appears they use different equations, but nowhere is it 9.19. I'm not sure why, or which is correct. Perhaps somebody else could explain for both of us.

@Marc Thanks!

The final version of my alpha factor combines the zscores() of four different metrics. The first is beta, so it favors shorting low beta stocks that have high -----, high -----, and low ----. I don't want to give too much away, but the other three metrics are derived from my self-serve dataset. I will reveal that the dataset has nothing to do with company financials or stock price.

@Viridian Interesting, thanks!