We have two changes to the scoring for the April contest. We just posted the first leaderboard with the updated scoring system. The March contest, currently in progress, is unaffected. The changes were driven by what we observed and learned over the almost two months we've been running the Quantopian Open. I discussed the changes in a webinar yesterday, and am putting them down in writing below.
Beta to SPY
The first change is that your algorithm is now scored on its beta to SPY, a measure of how strongly your algorithm's performance moves with SPY. The lower that connection, the better. We already had six equal-weighted factors generating your score; beta is now a seventh factor, and all seven remain equally weighted.
Why did we make this change? If you look at a chart of SPY for February and March (the Quantopian Research notebook is attached), you could almost believe that the Quantopian Open was the driving factor in the S&P performance. On February 2, the start of the February judging period, SPY got on a rocket and headed for the stars. Grant's algo rode that rocket to the top of the charts, and he started trading real money on March 2. On March 2 the rocket ran out of fuel, and Grant's algo suffered! As we build the Quantopian hedge fund, we need to find algorithms that are uncorrelated with each other. The biggest correlation we're seeing today is around the S&P 500. This change is designed to encourage algorithms that aren't correlated with the market.
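For the curious, beta can be estimated from daily returns as cov(algo, SPY) / var(SPY). Here's a minimal sketch; the return series are simulated for illustration, and this is not our production scoring code:

```python
import numpy as np

def beta_to_spy(algo_returns, spy_returns):
    """OLS beta of the algorithm's daily returns against SPY's:
    cov(algo, spy) / var(spy)."""
    algo = np.asarray(algo_returns)
    spy = np.asarray(spy_returns)
    # np.cov returns the 2x2 sample covariance matrix (ddof=1);
    # the [0, 1] entry is cov(algo, spy).
    return np.cov(algo, spy)[0, 1] / np.var(spy, ddof=1)

rng = np.random.default_rng(1)
spy = rng.normal(0.0005, 0.01, 250)              # one year of SPY daily returns
hedged = rng.normal(0.0005, 0.005, 250)          # independent of SPY
tracker = 1.5 * spy + rng.normal(0, 0.001, 250)  # rides the market

beta_hedged = beta_to_spy(hedged, spy)    # small in magnitude: scores well
beta_tracker = beta_to_spy(tracker, spy)  # close to 1.5: penalized
```

An algorithm like `tracker`, which surfed February's rally, would have looked great then and terrible in March; `hedged` is the kind of market-independent strategy this factor rewards.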
Consistency Between Paper Trading Results and Backtesting
The second change is that algorithms are now scored on how consistent their paper trading returns are with their backtesting returns. The more consistent you are, the better. This factor is added to the calculation at the very end: after we compute what used to be your final score, we multiply it by the consistency number, and the result is the new final score. The multiplier is phased in gradually over the first few days of trading, while the paper trading record is still volatile, and is fully applied after 20 days of trading.
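The exact phase-in schedule isn't published; here's a minimal sketch assuming a linear ramp from no effect on day 0 to the full consistency multiplier at day 20 (`apply_consistency` is a hypothetical helper, not our actual code):

```python
def apply_consistency(base_score, consistency, days_traded, ramp_days=20):
    """Blend the consistency multiplier in linearly over the first
    ramp_days of paper trading. Assumption: linear ramp; the real
    schedule may differ. consistency is in [0, 1], 1 = perfectly
    consistent."""
    w = min(days_traded, ramp_days) / ramp_days
    multiplier = (1.0 - w) * 1.0 + w * consistency
    return base_score * multiplier

apply_consistency(100.0, 0.5, 0)   # -> 100.0 (no effect yet)
apply_consistency(100.0, 0.5, 10)  # -> 75.0 (half phased in)
apply_consistency(100.0, 0.5, 20)  # -> 50.0 (fully applied)
```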
We put in this change for a couple of reasons. The biggest reason is that we were seeing a lot of algorithms that had really good backtests that just weren't doing well in paper trading. This isn't too surprising when you think about it - if you're trying for a good score, you invest time in the backtest, and it's pretty easy to fall into data-mining, data-snooping, curve-fitting, or whatever you want to call that mistake. If you're prone to that mistake, you're not going to make an algorithm that lasts in the long run. We want to strongly encourage people to use good practices with out-of-sample data testing. If we make it very clear that a good backtest, on its own, can't win the contest, we hope to get more careful thought about how to write an algorithm that will perform well in paper trading.
The second reason is to discourage cheaters. We've seen a few instances where contest submissions are being deliberately gamed by submitting a "perfect backtest" and then a coin-flip over several entries for the paper trading. We've disqualified them, and we will continue to disqualify them in the future. The scoring change is a bit of a safety net, and a clear signal that it's not a strategy that will succeed.
For the detail-oriented: we compute the consistency score with a kernel-density estimate using Gaussian kernels, as implemented in the Python scipy package. The backtest daily returns and the paper trading daily returns are each fitted to a distribution separately, and the area between the two estimated distribution curves determines the consistency score.
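This can be sketched with `scipy.stats.gaussian_kde`. The normalization from "area between the curves" to a multiplier is an assumption here; `consistency_score` is illustrative, not the production code:

```python
import numpy as np
from scipy.stats import gaussian_kde

def consistency_score(backtest_returns, paper_returns, grid_size=500):
    # Fit a Gaussian KDE to each daily-return series separately.
    bt_kde = gaussian_kde(backtest_returns)
    pt_kde = gaussian_kde(paper_returns)

    # Evaluate both densities on a common grid spanning both samples.
    lo = min(np.min(backtest_returns), np.min(paper_returns))
    hi = max(np.max(backtest_returns), np.max(paper_returns))
    pad = 0.25 * (hi - lo)
    grid = np.linspace(lo - pad, hi + pad, grid_size)

    # Riemann-sum approximation of the area between the two curves.
    # Each density integrates to ~1, so the area lies in roughly [0, 2]:
    # ~0 for identical distributions, ~2 for disjoint ones.
    dx = grid[1] - grid[0]
    area = np.abs(bt_kde(grid) - pt_kde(grid)).sum() * dx

    # Assumed normalization: map the area onto a [0, 1] multiplier.
    return 1.0 - area / 2.0

rng = np.random.default_rng(0)
# Backtest and paper returns drawn from the same distribution...
score_similar = consistency_score(rng.normal(0.001, 0.01, 250),
                                  rng.normal(0.001, 0.01, 60))
# ...versus a "perfect backtest" that looks nothing like live results.
score_different = consistency_score(rng.normal(0.004, 0.002, 250),
                                    rng.normal(-0.003, 0.002, 60))
```

A consistent algorithm keeps most of its score; one whose paper trading returns look nothing like its backtest gets heavily discounted.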
When we kicked off the Quantopian Open, we promised that we would iterate and improve the contest. We don't think today's changes are the last word; there will be future scoring and rules changes as we find them necessary. We hope that you've found the previous discussions about scoring helpful; we certainly have. As always, we welcome and value your feedback about how we can make the contest better.