Back to Community
Sharing my journey.. a rough cut at Sharpe 1.7

Been a really steep learning curve building my first algorithm.
Enormously grateful to Quantopian for providing the platform to learn > analyse > test > paper trade > repeat.
Huge thanks to the community for all the sharing, guidance and support that got me to the point of knocking up an algorithm.

My steps were nothing extraordinary:
+ Explored the alpha idea in a Notebook with Pipeline and get_fundamentals()
+ Mainly used Pipeline to pre-qualify stocks; learnt to keep it very lean to avoid Pipeline timeouts.
+ Used get_fundamentals() for supplementary data in the detailed computations that illuminate the alpha.
+ Wrangled the data to ingest into Alphalens, alternating between disasters, magic, despair and Eureka moments.
+ Suffered the banes of transitioning from Research to the IDE, and of copying backtest IDs into pyfolio tearsheets.
+ Repeat backtests. Repeat Alphalens. Repeat Pipeline. Repeat timeout.
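The pre-qualification lesson above (screen cheaply first, compute the expensive factor only on survivors) can be sketched outside Quantopian with plain pandas. All tickers, column names and thresholds here are hypothetical, not the author's actual pipeline:

```python
import pandas as pd

# Hypothetical snapshot of fundamentals for a small universe.
fundamentals = pd.DataFrame(
    {
        "market_cap": [50e9, 2e9, 120e9, 0.4e9, 8e9],
        "pe_ratio": [14.0, 32.0, 9.5, None, 21.0],
    },
    index=["AAA", "BBB", "CCC", "DDD", "EEE"],
)

def prequalify(df, min_cap=1e9):
    """Lean screen: drop small caps and rows with missing data
    *before* any expensive computation (the timeout lesson)."""
    return df[df["market_cap"] >= min_cap].dropna()

def value_factor(df):
    """Toy alpha: cheaper earnings rank higher (lower P/E -> higher rank)."""
    return (-df["pe_ratio"]).rank()

universe = prequalify(fundamentals)
scores = value_factor(universe)
print(scores.sort_values(ascending=False))
```

The point of the two-stage split is that the cheap boolean screen shrinks the universe before the heavier per-stock computation runs, which is the same shape as a lean Pipeline feeding detailed get_fundamentals() work.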

I think I am now ready for the Optimize API lecture and tutorials.

10 responses

Great job!
How does the algorithm perform in 2017?

Hi Karl,

Thanks for posting your results. 2015 to 2016 was an awesome period for most of my algorithms as well, and then they flattened out. Must be a regime change in 2016.

Best regards

Hey, so what stands out to me is that jump in returns during the market downturn. If you didn't have that single isolated jump, how would the algorithm have performed? I worry that algorithms with isolated jumps in gains like that might not be as robust out-of-sample. It could indicate over-fitting.

There appears to be a correlation between your 6-month rolling beta and your rolling Sharpe. So if you can keep your beta more under control -- especially keep it from becoming negatively biased -- you might improve your returns.
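The commenter's diagnostic is easy to reproduce with pandas: estimate rolling beta as Cov(algo, market)/Var(market) and rolling Sharpe over the same window, then correlate the two series. The return series below are simulated, not from the actual algorithm:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500  # roughly two years of daily data

# Hypothetical daily returns: a market series and an algo with beta ~0.3.
market = pd.Series(rng.normal(0.0004, 0.01, n))
algo = 0.3 * market + pd.Series(rng.normal(0.0003, 0.005, n))

window = 126  # ~6 months of trading days

# Rolling beta: Cov(algo, market) / Var(market) over the window.
rolling_beta = algo.rolling(window).cov(market) / market.rolling(window).var()

# Rolling annualised Sharpe of the algo over the same window.
rolling_sharpe = (
    algo.rolling(window).mean() / algo.rolling(window).std() * np.sqrt(252)
)

# The diagnostic: correlation between the two rolling series.
diag = rolling_beta.corr(rolling_sharpe)
print(f"corr(rolling beta, rolling Sharpe) = {diag:.2f}")
```

A persistently high correlation here suggests returns are riding market exposure rather than the factor itself, which is the concern behind keeping beta under control.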

Karl, the recent speed-up to fundamentals data might solve the timeouts. You could also email support at quantopian dot com with some or all of your code when you get timeouts. I've found them extremely helpful in the past.

My main note from reviewing the pyfolio tearsheet is that your net exposure is highly variable. The optimiser can help solve this. I had something similar: I noted that position sizing reduced the positive skew and increased Sharpe, but this may not be what an investor in the fund wants. They probably hold a fairly negatively skewed portfolio of stocks and bonds and are looking for some positive skew from the hedge-fund allocation. In the end I just did 1/n sizing, with n fixed.
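The 1/n sizing the commenter settled on is simple to sketch: rank by signal, go long the top n/2 and short the bottom n/2 at equal absolute weight, which pins net exposure at zero and gross exposure at 100%. The signal values below are made up for illustration:

```python
import pandas as pd

def one_over_n_weights(signal: pd.Series, n: int = 4) -> pd.Series:
    """Equal-weight long/short book: long top n/2, short bottom n/2.
    Each position gets 1/n in absolute weight, so gross = 100%, net = 0%."""
    ranked = signal.sort_values()
    shorts = ranked.index[: n // 2]
    longs = ranked.index[-(n // 2):]
    w = pd.Series(0.0, index=signal.index)
    w[longs] = 1.0 / n
    w[shorts] = -1.0 / n
    return w

# Hypothetical alpha scores for five stocks.
signal = pd.Series({"AAA": 1.2, "BBB": -0.7, "CCC": 0.4, "DDD": -1.5, "EEE": 0.1})
weights = one_over_n_weights(signal, n=4)
print(weights)
```

Because n is fixed, the weights do not vary with the signal's magnitude, which is exactly what keeps net exposure from wandering the way the tearsheet showed.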

Almost 4 months since the rough-cut algorithm was put together; at last check it hit ~100% cumulative return and the Sharpe ratio doubled.

The learning curve is no less steep, as Q has kept lifting standards with new models and risk-mitigation measures over the last 4 months.

If there's been one constant, it's the loop itself: Analyse > Test > Live > Repeat

Highlights of the journey were the new Fundamentals and the Optimize API! In hindsight, it's hard to remember what life was like without them.

Nonetheless it's a work in progress to keep building and reinforcing; there's still ample room to improve, more hurdles to cross, new limits to push:

  • From the rough-cut factor, discovered 2 more factors that correlate (and also scale) with the main factor.
  • Combining factors is tedious, with methods like linear/non-linear blends, z-scoring, geometric mean, harmonic mean and winsorizing, but it finally works.
  • Exploring the Quantopian Risk Model intently, adapting the workflow and watching inputs/outputs minutely and incrementally.
  • Challenging the basis of the risk factors: the short_term_reversal factor seems irrelevant, or an inherent risk in itself, but otherwise not an insurmountable challenge.
  • The prognosis for all the other risk factors is encouraging: seemingly negligible impact on performance metrics.
  • The new set_slippage(FixedBasisPointsSlippage()) looks to be the most challenging yet; still working through backtests, but it will be very consequential, for real.
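The factor-combination step in the list above can be sketched with pandas: winsorize each raw factor to tame outliers, z-score so wildly different scales become comparable, then blend linearly. The three factors below are simulated correlated series, not the author's actual data:

```python
import numpy as np
import pandas as pd

def winsorize(s: pd.Series, limits=(0.05, 0.95)) -> pd.Series:
    """Clamp extreme values to the 5th/95th percentiles."""
    lo, hi = s.quantile(limits[0]), s.quantile(limits[1])
    return s.clip(lower=lo, upper=hi)

def zscore(s: pd.Series) -> pd.Series:
    """Centre and scale so every factor is in comparable units."""
    return (s - s.mean()) / s.std()

rng = np.random.default_rng(1)
idx = [f"S{i:03d}" for i in range(100)]

# Three hypothetical correlated raw factors on very different scales.
base = rng.normal(size=100)
factors = pd.DataFrame(
    {
        "main": base + rng.normal(scale=0.5, size=100),
        "f2": 1e6 * (base + rng.normal(scale=0.8, size=100)),
        "f3": 0.01 * (base + rng.normal(scale=1.0, size=100)),
    },
    index=idx,
)

# Winsorize, then z-score each column, then combine with an
# equal-weight linear mean (one of the blends mentioned above).
clean = factors.apply(winsorize).apply(zscore)
combined = clean.mean(axis=1)
print(combined.describe())
```

Geometric or harmonic means are drop-in replacements for the final `mean(axis=1)` when the factors are strictly positive; the winsorize-then-zscore preprocessing stays the same either way.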

Along the way an intraday strategy emerged, now under active testing: good at collecting cash every day while holding no risk overnight.

Next is to explore machine learning, with a view to adopting it as a performance enhancer and productivity multiplier that augments the strategy, not as a predictor of factors. Expecting insights along the journey, the strategy could evolve to integrate factor predictions going forward.

Hope to reach another level worthy of next update.


Looks great. Amazing performance. I noticed a few of the most commonly held longs are real stinkers, so I assume this is a short-term mean-reversion style algo, or are they just statistical outliers?

Yes @Viridian: the trading method is short-to-medium term (3~60 days, varying by each stock's market behaviour) for the present purpose. Outliers are statistically excluded at ±2 sigma.

Hi Karl, thanks for sharing this journey. Keep going. I always enjoy reading your posts. Cheers & best wishes.

@Karl, this looks very good! Have you tried commenting out the beta constraint to see if it improves your returns and drawdowns without breaching the beta thresholds (±0.3)? Just curious to find out. Thanks, my man.

@Karl, OK, I understand where your focus is, and rightfully so, as those matter more in the final analysis. However, you mentioned that you consistently set MaxBetaExposure = 0.10, yet your notebook shows a final beta of 0.14. It seems the beta constraint is not doing its job properly in your case either. I have re-run all my good algos with the beta constraint commented out, and the results are that they either increased returns or reduced drawdowns, or better yet, both, without breaching the beta threshold of ±0.3. At least in my algos, the beta constraint is not only useless but also hinders better returns and drawdowns. Just wanted to share my experience.
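The 0.10-constrained-vs-0.14-realised gap the commenter points out can be checked after the fact by regressing the algo's daily returns on the market: the constraint acts on *estimated* per-stock betas at order time, so realised portfolio beta can drift away from the target. The return series below are simulated to illustrate the check, not taken from either poster's algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 252  # one year of daily returns

market = rng.normal(0.0004, 0.01, n)
# Hypothetical algo targeted at beta 0.10 that drifts to ~0.14 realised.
algo = 0.14 * market + rng.normal(0.0004, 0.004, n)

# Realised beta via the OLS slope: Cov(algo, market) / Var(market).
beta = np.cov(algo, market)[0, 1] / np.var(market, ddof=1)
print(f"realised beta = {beta:.3f}")

# The contest-style check the commenter applies: |beta| within +/- 0.3.
within_threshold = abs(beta) <= 0.3
print("within +/-0.3 threshold:", within_threshold)
```

Note that staying inside the ±0.3 threshold with the constraint commented out, as the commenter reports, does not mean beta was controlled by design; it may just mean the factor itself is fairly market-neutral over that sample.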