Today, we’re introducing a big update to the backtest analysis screens that should make it easier than ever to write algorithms for the contest. When you press 'Run Full Backtest', you'll now see which contest criteria are met and which require additional work.
We’ve added many new metrics and analyses to the full backtest page to give you more insight into your algorithm's performance and exposure. On the new page, you’ll see:
- Whether your algorithm meets structural constraints required for the contest.
- New time series plots showing:
  - Specific Returns
  - Common Returns
  - Maximum Position Concentration
  - Net Dollar Exposure (how net short/long a portfolio is)
  - Sector Exposure
  - Style Exposure
Notably, you’ll be able to pick and choose which sector and style exposures to show on your graphs, so you can drill down on a single risk model exposure.
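To make two of the new plotted metrics concrete, here is a minimal sketch of what they measure, computed from a snapshot of position values. The function names and example numbers are illustrative only - this is not Quantopian's implementation.

```python
# Illustrative sketch: net dollar exposure and maximum position concentration
# computed from a snapshot of dollar position values (longs positive, shorts
# negative). Function names and numbers are hypothetical, not Quantopian's code.

def net_dollar_exposure(position_values):
    """(longs + shorts) / gross exposure.
    0.0 is dollar-neutral, +1.0 is fully long, -1.0 is fully short."""
    gross = sum(abs(v) for v in position_values)
    return sum(position_values) / gross if gross else 0.0

def max_position_concentration(position_values):
    """Largest single position as a fraction of gross exposure."""
    gross = sum(abs(v) for v in position_values)
    return max(abs(v) for v in position_values) / gross if gross else 0.0

# A portfolio that is $60k long and $40k short:
positions = [30_000, 30_000, -25_000, -15_000]
print(net_dollar_exposure(positions))         # 0.2 -> 20% net long
print(max_position_concentration(positions))  # 0.3 -> largest position is 30% of gross
```

Plotting either value at each point in a backtest gives exactly the kind of time series the new page shows.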
A number of metrics were previously displayed as single point values. However, a single point value doesn’t give you a good picture of how a metric responded to changes in the market. Beta-to-SPY, Sharpe ratio, and max drawdown are now time series plots instead of single point values, so you can see how they changed over the course of a backtest.
It’s useful to know that an algorithm’s beta was, say, 0.4 on a given day. But what’s missing is context - is that beta good enough for the contest, and will it help on the path to an allocation?
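As a sketch of how a single beta number becomes a time series, the value can be recomputed over a rolling window: beta in each window is cov(portfolio, benchmark) / var(benchmark). The synthetic returns and window length below are illustrative, not Quantopian's implementation.

```python
# Sketch: turning a point-in-time beta into a rolling time series.
# Beta over a window = cov(portfolio, benchmark) / var(benchmark).
# Synthetic data and window length are illustrative only.
import random

def rolling_beta(port, bench, window):
    betas = []
    for i in range(window, len(port) + 1):
        p = port[i - window:i]
        b = bench[i - window:i]
        mp = sum(p) / window
        mb = sum(b) / window
        cov = sum((x - mp) * (y - mb) for x, y in zip(p, b)) / (window - 1)
        var = sum((y - mb) ** 2 for y in b) / (window - 1)
        betas.append(cov / var)
    return betas

random.seed(0)
spy = [random.gauss(0.0005, 0.01) for _ in range(252)]   # one year of daily returns
port = [0.4 * r for r in spy]  # a portfolio moving at 0.4x the benchmark
betas = rolling_beta(port, spy, window=63)  # ~3-month rolling beta
print(round(betas[0], 6), round(betas[-1], 6))  # 0.4 0.4
```

Here the portfolio is constructed to have beta 0.4 in every window; on real returns the plot would drift as market conditions change, which is exactly what the new time series view surfaces.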
The new page provides you with a lot more of this context. For example, I can tell that my algorithm’s doing a good job with its leverage, beta, and sector and style exposures, but that my turnover, position concentration, and net dollar exposure aren't meeting the contest criteria.
Meeting all of the structural and risk constraints, while also maintaining positive performance, means that your algorithm is ready to enter the contest. It's also a good sign of progress on the path towards an allocation.
Note: When you enter the contest, we run a two-year backtest going back from the present date, with default slippage and commission, to determine if an algorithm meets the criteria. It's a good idea to run a backtest with those same parameters before submitting to the contest.
It’s now much easier to go back and forth between a research notebook and your backtest results. The Notebook tab puts you in a research notebook with the backtest ID pre-populated, so you just need to hit Shift + Enter to run a tear sheet. This notebook works the same way as any other research notebook.
When you hit ‘Run Full Backtest’, you’ll get an option to go to the new page - click through on ‘View New Backtest Page’ to reach it.
Here’s a brief overview of each tab:
Risk: Factors that a backtest should control. The metrics on the risk tab need to be kept within certain bounds to be eligible for the contest - for example, beta-to-SPY needs to be between -0.3 and 0.3.
Performance: Returns-driven metrics, like total returns or Sharpe ratio, that don’t have specific constraints placed upon them.
Activity: Useful metadata about a backtest, like its logs or source code.
Notebook: A research notebook with your backtest ID and the `create_full_tear_sheet` function call pre-populated. This works just like any research notebook, and allows for free-form analysis of your backtest.
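The Risk tab's pass/fail idea can be sketched as a simple bounds check on each metric's current value. Note that only the beta-to-SPY range (-0.3 to 0.3) comes from this post; the other bounds below are illustrative placeholders, not the actual contest criteria.

```python
# Sketch of a contest-style bounds check. The beta-to-SPY range (-0.3, 0.3)
# is stated in this post; all other bounds here are placeholders only.
BOUNDS = {
    "beta_to_spy": (-0.3, 0.3),                 # from this post
    "net_dollar_exposure": (-0.1, 0.1),         # placeholder
    "max_position_concentration": (0.0, 0.05),  # placeholder
}

def check_criteria(metrics, bounds=BOUNDS):
    """Return {metric_name: bool} - True if the value is within its bounds."""
    return {name: lo <= metrics[name] <= hi
            for name, (lo, hi) in bounds.items() if name in metrics}

results = check_criteria({"beta_to_spy": 0.4, "net_dollar_exposure": 0.02})
print(results)  # {'beta_to_spy': False, 'net_dollar_exposure': True}
```

A beta of 0.4 - fine as a raw number - fails the contest's -0.3 to 0.3 range, which is the kind of at-a-glance context the Risk tab provides.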
There are a few other changes to be aware of:
- Benchmark Returns have been moved out of the Overview tab, in favour of specific and common returns. Benchmark returns are still accessible in the Notebook tab of the page, by passing benchmark returns into `create_full_tear_sheet`. They are also accessible in the IDE.
- Recorded Variables have also been moved out of the Overview tab, and are accessible in the Notebook tab via the `recorded_vars` attribute on the backtest object. They are also accessible in the IDE.
- Positions and Transactions are not yet available on the new page, but will be made available soon. They’ll be loadable on demand, instead of by default, so that you don’t have to download hundreds of MB of data every time you view a full backtest.
- Down the line, we'll update the IDE backtest results, and the backtest sharing widget in the forums, to provide the new metrics there as well.
- Backtests run before today (April 25) will need to be re-run to get access to the new metrics.
Most importantly, we want your feedback! This page is still evolving, and we want to know what you think so we can make it even better. I’d also like to thank the community members who helped us test this feature and provided early feedback (if you’re interested in alpha testing any upcoming features, let me know).