August Contest Rules Update: New Prizes, Staying Hedged, Going Longer

Our next edition of the Quantopian Open kicks off on August 3rd at 9:30 AM EDT, and it's time to update the rules - and the prizes.

Last month we made a couple of significant changes: we stopped scoring the backtest, and we required algorithms to be hedged. We've been very pleased with how those rules worked. The algorithms sitting on the top of the leaderboard this month have been far and away the best-quality set of algorithms we've seen yet. We only have a few minor tweaks to leaderboard scoring which you can read about below.

The biggest change to the Quantopian Open will come later in the month. On August 17th, at 9:30 AM EDT, we are kicking off a 6-month contest. The scoring system and prize will be the same, but instead of being scored on just one month of out-of-sample data, you will be scored on six months of out-of-sample data. We see this as the best way to ensure that the best algorithms rise to the top of the leaderboard. We knew from the beginning that there was a tension between wanting a contest that turns around quickly and the need to gather more out-of-sample data. We feel we've got a scoring system that is reasonably solid, and we're pleased with the flow of new entries coming in every month. We think the time is right to add a longer time frame, to encourage algorithms that are built for a longer horizon.

We will kick off a new 6-month contest every month, mid-month, for the foreseeable future. The first 6-month contest prize will be awarded after the market closes on February 12, 2016, and monthly thereafter.

We are continuing to run the 1-month contest as we have been, at least through the end of the year. We haven't decided what we'll do next year once the 6-month contest results come in and we start giving prizes to those winners. We may choose to continue both contests indefinitely, or we may change the 1-month prize in a way that reduces our capital commitment. It seems unlikely that we'd end the 1-month contest entirely; it's too interesting to remove, even if it has a small out-of-sample size.

All contest entries are eligible for both the 1- and 6-month versions of the contest. Once you submit your entry, it's automatically entered into all contests going forward, unless you withdraw (stop) your entry.

New Prize: Consulting for Contest Winners

Our first few contest winners started trading the very next day after being declared the winner. When we picked our first algorithm for the hedge fund, we did something different: we reached out to the author and offered some advice on how to make the algorithm more robust. The changes we suggested were about handling algorithm stops and starts better, ordering more wisely through Interactive Brokers (as opposed to paper trading), and things like that. The author made a few changes, then we tested the algorithm rigorously. The result is a better algorithm than the original. We did the same thing with July's contest winner, and we think the resulting algorithm was much more robust.

We're going to offer the same process to all future winners. This will introduce a few days' delay between the contest winner being declared and the start of the prize period. We believe it will greatly improve the payouts for the winners.

Leaderboard Changes

The changes are relatively small, and affect a handful of the contest entries.

  • The consistency factor is removed entirely. We've developed other, better methods to identify people who are "gaming" the contest. The consistency factor is now simply adding noise to the results without any benefit, so it is removed.
  • The hedge filter is relaxed a little bit. If your backtest has a few days where it is either long or short at the end of the day, it can now get the "hedged" badge. There are only four current contest entries affected by this, so it's only a small change.
  • We swapped the Sortino ratio in for the Calmar ratio in the scoring. Most algorithms rank the same on these two metrics, but there are a few that see significant ranking changes.
  • Previous winners of the Quantopian Open may re-enter the contest, provided they enter using a different strategy.
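For reference, both ratios in the Sortino-for-Calmar swap can be computed from a daily return series. Below is a minimal sketch using the conventional definitions (illustrative only, not Quantopian's actual scoring code):

```python
import numpy as np

def sortino_ratio(returns, periods=252):
    """Annualized mean return divided by downside deviation."""
    downside_dev = np.sqrt(np.mean(np.minimum(returns, 0.0) ** 2))
    if downside_dev == 0:
        return float("inf")
    return (returns.mean() / downside_dev) * np.sqrt(periods)

def calmar_ratio(returns, periods=252):
    """Annualized return divided by maximum drawdown."""
    wealth = np.cumprod(1 + returns)
    peak = np.maximum.accumulate(wealth)
    max_dd = ((peak - wealth) / peak).max()
    if max_dd == 0:
        return float("inf")
    annual_return = wealth[-1] ** (periods / len(returns)) - 1
    return annual_return / max_dd

# Synthetic example: one year of daily returns with a slight positive drift.
rng = np.random.default_rng(0)
daily = rng.normal(0.0005, 0.01, 252)
print(sortino_ratio(daily), calmar_ratio(daily))
```

The two penalize different things: Sortino penalizes downside volatility on every day, while Calmar penalizes only the single worst peak-to-trough loss, which is why a few algorithms rank very differently after the swap.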

Looking Forward

Last month we made the rules changes at the last minute, just a couple days before the deadline. We're doing better this month, 12 days before the contest kicks off. We promise to keep getting better at this, and will announce future rules changes at least two weeks in advance.

There are two other rules changes that we've been considering but haven't formalized yet that I'd like to share.

  • One rule change we're considering is to permit more entries per person. On one hand, we want to encourage people to make thoughtful entries. We're worried that if we permit more entries per person, we're going to see more low-quality algorithms that are just wild bets. On the other hand, if someone has a few algorithms performing decently, we'd like that person to be able to make more entries without having to decide which entry to turn off. We're interested in what the community thinks about the current limit.
  • We evaluated several different measures of algorithm turnover. Our fund is seeking actively trading algorithms. We've seen a few algorithms that open positions and then make no other trades. These algorithms are not interesting for the fund, and they aren't good entries for the contest. While we haven't written a specific rule to exclude these algorithms, we would most likely find such an algorithm to be not suitable for trading with real money.

As always, feedback is welcome. Thank you for your past contributions to the contest.

Good luck to all competitors! We hope you're as excited about the 6-month contest as we are.


The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

27 responses

What is the actual hedging requirement?

I tried adding -0.01% SPY to an all-long algorithm and it was accepted

Agree with everything said here; these are all positive changes. I really like the idea of consulting for winners: we can definitely learn from the mistakes made by those ahead of us, so we don't repeat them. I also really appreciate the early announcement of the rule changes. Thankfully this month's changes shouldn't affect any algos being designed, so it's a non-factor to some extent, but I still appreciate the prior warning. My only question is about selection. Obviously winning algos that perform well in live trading are prime candidates for the fund. What I am wondering is whether compelling entries at the top of the leaderboard have a shot at being selected (particularly as the contest time frame is extended to 6 months), or is winning the Open the only path to being selected?

Spencer - any good contest entry is eligible for the fund, not just the contest winner. Think of it this way: we're only picking one contest winner per month, but we need dozens of algorithms for the hedge fund. It's a bit counter-intuitive, but it's easier to be picked for the hedge fund than it is to win the contest.

Thanks for the fast response, Dan. This is good news for anyone trying to get selected.

The hedge filter is relaxed a little bit. If your backtest has a few days where it is either long or short at the end of the day, it can now get the "hedged" badge.

This is a pretty squishy requirement. Could you be more specific? If I understood the original hedged rule correctly, you required at the end of each trading day 100% cash, or X% long & Y% short, with (100-X-Y)% cash (with no constraints on X & Y). Now, you are saying that X or Y could be zero for N days, correct? So what is N?

Also, does this approach make sense? Say the algo is 99% cash & 1% long. So, unless the U.S. government goes under, there is the risk of losing at most 1%. Yet the algo would be disqualified if it runs more than a few days with this allocation.

we stopped scoring the backtest

But it sounds like the backtest still counts toward the badges ("If your backtest has a few days where it is either long or short at the end of the day, it can now get the 'hedged' badge"). So, somehow the backtest results are being rolled into the scoring?

Also, the hedging requirement presumably applies to both the backtest and the paper trading, correct?

Is the leverage requirement still 3 or less? I thought I'd seen somewhere that it is now 1.05?

Dan Dunn,
It was nice to see that at least two of my propositions were finally implemented.
You may find my opinions about contest changes

About rule change to permit more entries per person.
Three for each contest separately: 3 for August 1M, 3 for August 6M, 3 for September 1M, 3 for September 6M, and so on. And do not forget to initialize all of them at the start point.

For me there was no surprise that "the algorithms sitting on the top of the leaderboard this month have been far and away the best-quality set of algorithms we've seen yet."
This is the result of an unbalanced scoring system: the consistency and stability factors, blue and green belts.
I myself wrote a simple algo that does nothing, which was the leader for a week and is still on the top of the first page, just to show that the scoring system, like any objective function, should be balanced.
For each negative factor there should be a positive factor if you use a sum of ranks.
Right now only 3 factors have a positive correlation with productivity, and 9 are negatively correlated.
The other way to get a productive winning algo is to use a single formula combining all factors, with the possibility to adjust the weight of each of them.
It may be done the way I posted three months ago, but I did not get any reply.

How do I get that magic string which brings you directly to my post? #????????????????????????

1. Hedge criteria

Add a minimum hedge ratio (suggested earlier). However, my personal opinion is that low beta is in itself a good criterion.
Possible gaming has already been demonstrated:

I tried adding -0.01% SPY to an all-long algorithm and it was accepted

2. More entries

+1 (suggested earlier)

3 max for short term and
3 max for long term

3. Actively traded

(-1) This might lead to liquidating positions, which might not be part of the plan.

Alternatively, I suggest Q could think about adding mandatory daily stop-loss orders for 100% of open positions. This would help track the stop-loss levels of all existing positions.

Dan and team,

Is there a requirement to hedge the position on an intraday basis too? I mean, if my strategy is an intraday strategy on SPY, I will not have a position by EOD. So is it okay to stay in cash overnight?


Hi Dan,

Any feedback yet on "The hedge filter is relaxed a little bit. If your backtest has a few days where it is either long or short at the end of the day, it can now get the 'hedged' badge"? Maybe you could illustrate with some algo code that would serve as a check to make sure that the "few days" limit is not exceeded?


I am brand new here, and hoping to get into the next contest. I am also concerned about the "must be hedged" requirement.

As someone else noted, it seems that the requirement can be 'gamed' by simply holding one or two shares short the whole time. It seems it could also be gamed with shares of an inverse S&P ETF (like ProShares Short S&P500 (SH)) -- it seems that holding 50% long in SPY and 50% short in SH would count as perfectly hedged.

The whole requirement seems a bit odd (to a novice like me). If the return is high, the beta low, and the Sharpe Index good, the requirement of hedging seems superfluous.

I'm glad to see the positive changes.

I have 1 more suggestion: reduce the bias toward low volatility / drawdown. Right now a simple way to increase your final score is to reduce your leverage, even to < 1, such as 0.1. Doing so gives a worse ranking in Return but a better ranking in both Volatility and Drawdown, and hence a higher final score. Given the same algo, an easy "trick" to increase your score is to trade with just a small portion of the book.
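The effect described above is easy to demonstrate: scaling every position by a constant scales both the mean return and the volatility by that constant (so the Sharpe ratio is unchanged), while shrinking drawdown. A quick sketch with synthetic returns (illustrative only, not the contest's scoring code):

```python
import numpy as np

rng = np.random.default_rng(42)
full = rng.normal(0.001, 0.02, 252)   # full-leverage daily returns
scaled = 0.1 * full                   # same algo trading 10% of the book

def volatility(r):
    return r.std(ddof=0)

def max_drawdown(r):
    wealth = np.cumprod(1 + r)
    peak = np.maximum.accumulate(wealth)
    return ((peak - wealth) / peak).max()

def sharpe(r, periods=252):
    return (r.mean() / r.std(ddof=0)) * np.sqrt(periods)

# Volatility scales down 10x, drawdown shrinks, Sharpe is (nearly) identical.
print(volatility(full), volatility(scaled))
print(max_drawdown(full), max_drawdown(scaled))
print(sharpe(full), sharpe(scaled))
```

Any rank-sum score that weights Volatility and Drawdown more heavily than Return will therefore reward the scaled-down version of the identical strategy.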

Regarding the # of entries: I suggest building an author reputation system based on all contest entries submitted. Doing so transfers the responsibility of quality control to authors, without having to worry about missing out on good algos. People can submit as many as they want, but submitting garbage will (and should) destroy their reputation.

Following up on some of the questions:

The revised hedging requirement lets someone in if they have 10 days in their backtest where they are purely long or short. This doesn't represent a change in what we're looking for - we want algos that are hedged, all the time. Most of the people who were "just missing" the hedge requirement had a problem where an order didn't fill fast enough or something like that. In our actual fund operations we're keeping a close eye on fills, and we have the ability to intervene manually if the algo's desired behavior isn't being met. On the other hand, if the algo's strategy is to be purely long or short at some point, it's not something we are interested in.
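As a rough illustration of how a 10-day allowance like this could be checked against a backtest's end-of-day exposures, here is a hypothetical sketch; the data layout, helper names, and threshold constant are my own assumptions, not the actual contest code:

```python
# Hypothetical end-of-day exposures: one (long_value, short_value) pair per day.
# The real contest inspects actual backtest positions; this only sketches the rule.
MAX_ONE_SIDED_DAYS = 10  # the stated allowance

def one_sided_days(daily_exposures):
    """Count days that are purely long or purely short at the close."""
    count = 0
    for long_val, short_val in daily_exposures:
        if (long_val > 0) != (short_val > 0):  # exactly one side has exposure
            count += 1
    return count

def qualifies_for_hedged_badge(daily_exposures):
    return one_sided_days(daily_exposures) <= MAX_ONE_SIDED_DAYS

# 240 hedged closes, then 8 closes that are long-only.
history = [(100_000, 95_000)] * 240 + [(100_000, 0)] * 8
print(qualifies_for_hedged_badge(history))  # True: 8 one-sided days <= 10
```

Note that an all-cash close counts as neither long nor short here, so a strategy that goes flat overnight would not lose the badge under this sketch.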

The intent of the hedge requirement was to make people think about hedging, and it has apparently succeeded at that - the algorithms that have been coming into the contest recently have looked much safer than the ones that came in through the spring. Several of you are correctly pointing out that the hedge requirement isn't truly checking the hedge. That is quite true - it's not even coming close. However, the limited hedge requirement, combined with the other judging requirements, is working. Look at the winning leaderboard for July. There are some great algos there! Good enough to start a hedge fund with, even...

Vladimir: The test of the contest rules isn't who is in the lead in the first week, the test is who is in the lead at the finish line! I hope that you agree that the 6-month contest will mitigate some of your concerns in the scoring.

Yagnesh: thanks for the feedback.

Andy: that's an interesting idea on the algos submitted. I'll see if I can come up with a simple version. It would need to be something easy to understand.

The scoring system doesn't only choose the best;
it ranks all algos from best to worst.
Of what we see now on the front page (the best 50),
66% underperformed SPY,
and 38% are actually doing nothing or losing.
This is the result of an unbalanced scoring system:
the consistency and stability factors, blue and green belts.
There are a lot of good algos, but their average rank is 114.
You may find them just by using a single balanced metric, the Sharpe ratio,
without any scoring-system complication.
And it will work on any term, 1 month or 6 months.

@Vladimir - Web addresses with '#' links directly to your posts are in the part of your profile page that lists your most recent posts. Only the most recent 10 are listed, so you have to hurry and save them before they're pushed out of this queue by your newer posts.

Isn't it sorta by definition that the "pure alpha" institutional market would not take SPY as a benchmark? As I understand, the market wants something that is more like a high-yield CD than a stock ETF, right? The idea is that when SPY takes a dive, a good algo would just keep chugging out consistent returns. Or am I missing something about Q's hedge fund strategy?

Grant, you're largely correct. As you say, we don't care if we're ahead of or behind the performance of SPY in any given month. The question is, can we keep making money, regardless of what SPY does? The part where I can't quite agree is where you compare the hedge fund to a high-yield CD. The expected returns of those two instruments differ by an order of magnitude, and the mechanisms by which they achieve those returns are entirely dissimilar.

Vladimir, I think your analysis highlights that there are differing opinions about what a "good" algorithm is. Quantopian is aimed at a specific algorithm profile, one that I would describe as "good for our hedge fund offering." I very much understand that the Sharpe ratio alone also defines a "good" algorithm - it's just not the sole descriptor for algorithms we're looking for.

Dan, I was thinking along the lines of a bank CD that returns 10% for the next 5 years (e.g. see around 1984). I suspect if you had something like that to offer, and could scale it to $10B, you'd be all set.

I entered some algorithms for the first time this month.
I don't mean to pick on anyone, but I have been reading Vlad's posts and noticed he has an algo which seems to be 99% in cash.

Now here is my algo which happens to be a direct opposite!
It uses 2x leverage and goes 1x Long and 1x Short SPY.

Backtest comparison shows:
Returns 0.5% vs 35%
Volatility of 1% vs 26%
Sharpe -1.579 vs 1.255
DD -1.14% vs -25.38%
Stability 0.80 vs 0.72
Sortino -2.37 vs 1.49
Beta 0.024 vs 0.35
Correlation -5.61% vs 8.70%
Score 48 vs 39

Returns 5% vs 223%
Volatility of 1% vs 21%
Sharpe 2.4 vs 10.5
DD -0.34% vs -0.81%
Stability 3.17 vs 0.72
Sortino -2.37 vs 23.29
Beta 0.05 vs 11.28
Correlation -14.80% vs -45.54%
Score 81 vs 70

200% annualised returns is obviously unrealistic! Yet I think most people would prefer to run algo 2 vs algo 1 based on the metrics above.

It feels like algos are being unfairly punished for volatility. Forcing users to drop leverage just to improve contest scores probably won't help much in terms of real-life performance.

@Max: You are making a good observation. I kinda think Vladimir created that algo just to make a point that volatility is over-weighted by the current scoring system. I have provided some related thoughts on this post.

@Grant: I think Dan would agree with you, now that you've identified which decade of bank CDs you were referring to. Those returns are an order of magnitude greater than today's bank CDs. :)

@Dan I don't suppose banning the use of triple leveraged ETFs will be changed any time soon? Any elaboration on this rule would be interesting.

There is one good way to prevent all the "gamble/bet" algorithms: require every algorithm to pass a 10-year backtest (or even multiple backtests) before it can enter the contest. For example, if the maximum drawdown is over 30%, the algorithm cannot join the contest, and over the 10 years there should be more than 8 years of positive profit. Please set the bar higher.

Random backtest data for every contest would be another good way to prevent all those "bet" algorithms. Otherwise there is no way a good algorithm can beat a "bet" algorithm; the really good algorithms and talented people will never have an opportunity to show their talent, and the losing algorithms are just going to take over every contest. Eventually, the contest is just nonsense.

My understanding is that aside from consistency with paper trading, the backtest doesn't contribute to one's ranking:

No more backtest score component in your overall score

we stopped scoring the backtest

Since the paper trading score is multiplied by the consistency score, there is incentive not to game the system by crafting backtests. For 1-month contests, one could argue that it would still make sense to roll the dice and hope that an intentionally biased backtest would have persistent performance and win. However, it'll eventually fall apart, right? So, it probably would suffer a penalty in the rolling and 6-month contests. And closer examination for inclusion in the hedge fund would likely reveal bias.

One problem here is that Q wants the marketing buzz of having individual winners awarded the opportunity to make significant short-term gains, but on the other hand keeps talking about how everyone can be a winner by getting selected for the hedge fund. But to my knowledge, there has been no evidence that anyone other than the winners are getting anywhere with the Q platform. Has any money been put toward hedge fund algos? How many algos have been funded? How much capital? And where did it come from? Institutions? VC's?

Time will tell, but I think that with the new changes winning algos will become safer and more consistent than they have been. I think the contest is in the healthiest spot it has been in yet. Almost every change that I previously believed was necessary to make the contest feel fair and not gameable has been made. Is it still possible to game the one-month contest? Yes, but it's nearly impossible in the 6-month. Will people try to do it? Yes, of course they will. If the "gamers" in the one-month contest scare you so much, then try to win the 6-month contest (with that large a sample size, it is nearly impossible to game, in my opinion).

"However, it'll eventually fall apart, right? So, it probably would suffer a penalty in the rolling and 6-month contests. And closer examination for inclusion in the hedge fund would likely reveal bias."

This quote says almost exactly what I believe. Essentially, if someone is in fact gaming the contest at this point, then they are just wasting their own time and effort. If after 6 months their algo has lost 4%, does it really do them any good? They won't get selected for the hedge fund and they won't get any winnings from the 6 months.

When designing my algos, I like to crash-proof them against the 2008 crash. I do that even without a 10-year backtest being in the contest, and I think many other top contest applicants subscribe to a similar school of thought. 2008 offers the worst crash you are going to find in the Q pricing data, so it is only logical to use it to prepare for potential bear markets.

Another thing: I believe many algos may break during a longer backtest because they use securities that may not have data going back 10 years. The ideas behind these algos may be sound, but they would be hamstrung by the insistence on a 10-year backtest, which may be limited by the securities one could potentially have in the backtest.

Also, to add to what Grant said: I believe the consistency score is now completely gone. All that matters about your backtest is whether it makes money or not.

Basically, if I'm thinking from Q's point of view (which also happens to be my own): we just made a ton of changes, and we want to see a few months of the results those changes caused. Are the new algos more consistently profitable? When tested, are they robust and well designed? To answer these questions we will have to wait a bit and watch how new winning algos perform. I believe that before any more changes are made, the contest should be allowed to generate new winners under the current rules, to develop a decent sample size (4 winners or so), and then reevaluate from that point. Just my 2 cents.

@Grant - Not only that (no compensation yet except to contest winners); even some winning algorithms are losing money, and so their "winner" authors are getting nothing.

Edited to add: As of today, Simon is up 6+%, Michael less than 1%, three other winners are in the red, and the July winner is still unknown: If their algorithms went to cash for the remainder of their 6 month terms, Simon would get $6,100+, Michael about $775, Grant, Jeff and Szilvia nothing.

Previous winners of the Quantopian Open may re-enter the contest, provided they enter using a different strategy.

Is this retroactive? For example, can I submit a contest entry prior to the end of August?

Also, how many total entries are allowed (across all contests)? Is it 3? Or 6 total--3 for the 1-month and 3 for the 6-month?

Is there any way to provide more detailed feedback regarding the goodness of a given entry vis-a-vis your envisioned hedge fund (obviously, I missed the mark with my winning algo)? For example, if my algo finishes 127 out of a field of 892, I may not have a clue if the algo has merit for the hedge fund, and what shortcomings need to be addressed. How are you handling this? When you do more detailed analysis of submitted algos, are you sending feedback to the entrants? Or is there a research platform notebook that can be applied by users in conjunction with the backtester to understand the relative suitability for the hedge fund?

I can't answer all of your questions, Grant, but I can shed some light on a couple of them. I was told the rule restricting winners from entering again for 6 months is no longer in place; you just can't resubmit your winning algo or something that resembles it. So based on that, you should be able to enter the next available contest. Currently you are allowed 3 entries, which are automatically entered into both the 6-month and the 1-month contests (there is currently no option to enter just one; you have to use the same algo in both). Hopefully that helps address some of your questions, though I believe only a member of Q's team can address the last one.

Thanks Spencer,

Well, I just submitted to the contest. I'm not too excited about the algo, but we'll see how it plays out. Sure to learn something.