October Contest Changes: Monthly Prize Payouts, More 6-month Contests

We've been very happy with the 6-month contest. The out-of-sample results of the recent crop of entries are very promising. A few dozen of them have a good chance of being used in the hedge fund.

One thing we haven't liked about the 6-month contest is that it takes a long time for the winner to get paid. A 6-month contest followed by a 6-month prize period means that it's a full year from entry until the winner gets a check. We're going to fix that by paying the winner every month. At the end of every calendar month, we will pay the winner the returns they've earned so far, less a 25% hold-back, with an end-of-contest payout of the full amount. Details of this payout calculation are below.

We're making this change retroactively. The winning algorithms with positive returns are being contacted and checks are being written. Michael Van Kleek is getting a check for $1500!

Three changes going forward:

• All contests going forward will be 6-month contests. The currently in-progress 1-month contest (the October Prize) will be the last 1-month contest.

• Contest entries will no longer be permitted to use fetch_csv(). The problem with fetch_csv() algorithms is that they are very hard to evaluate for the hedge fund. They generally don't have any out-of-sample or historical data that we can use for stress testing.

• We're adopting a different stop-and-liquidate threshold. In the past, it was a relatively hard stop at $90,000. Going forward, we're going to use $95,000 as the key level. If the winner's algorithm hits $95,000, it will be evaluated by Quantopian for viability. If in our opinion the algorithm is unlikely to return to $100,000, or is otherwise no longer financially wise to run, it will be stopped. We will use tools such as the Bayesian cone projection to make that determination.

Monthly Payout Calculation

As a general description, the algo author will get their profits paid monthly, with 25% held back, and respecting the algorithm's month-end high-water mark. The high-water mark consideration means that if the algorithm declines for a month, the algorithm must recover the loss and increase again before any future payments are made. At the end of the 6 months, the author will get the full profits, with no holdback.

Example, starting value $100,000:

month ending | ending balance | total cum. profit | total cum. payment | pay this month | comment
1 | $101,000.00 | $1,000 | $0 | $750 | Up $1,000, get 75%
2 | $100,800.00 | $800 | $750 | $0 | Down month, get $0
3 | $100,900.00 | $900 | $750 | $0 | Up, but not fully recovered, get $0
4 | $101,500.00 | $1,500 | $750 | $375 | Get 75% less previous payment
5 | $102,500.00 | $2,500 | $1,125 | $750 | Get 75% less previous payments
6 | $102,000.00 | $2,000 | $1,875 | $125 | End of prize period. 100%, less previous payments

Written more formally:

The top-ranked submission will be traded live, with real money, for 6 (six) months (the "Trading Period"). The submission will be run with a $100,000 USD account funded and controlled by Quantopian (the "Trading Account"). The 6 months of trading will start the day after the winner is determined, or as soon as is reasonably practicable. All of the costs and proceeds of the algorithm's operation will be paid/received by the account in the same way as for a customer of Quantopian doing real-money trading.
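The payout rule described above - 75% of cumulative profit each month, less everything already paid, with the holdback released at the end of the prize period - can be written as a minimal Python sketch. The function and parameter names here are mine, not Quantopian's, and the handling of the $100 minimum interim check is my reading of the rules:

```python
def monthly_payouts(month_end_balances, start=100_000.0,
                    payout_rate=0.75, min_check=100.0):
    """Sketch of the monthly prize schedule.

    Interim months pay 75% of cumulative profit, less everything
    already paid, and only when that amount exceeds $100. Losses carry
    forward automatically because the calculation is always against
    cumulative profit. The final month releases the holdback by paying
    100% of cumulative profit less prior payments. Payments are never
    negative: the winner keeps what was already paid.
    """
    payments, paid = [], 0.0
    last = len(month_end_balances)
    for i, balance in enumerate(month_end_balances, start=1):
        profit = balance - start                  # cumulative profit to date
        rate = 1.0 if i == last else payout_rate  # holdback released at end
        due = max(rate * profit - paid, 0.0)
        if i < last and due <= min_check:         # interim check must exceed $100
            due = 0.0
        payments.append(round(due, 2))
        paid += due
    return payments

# Reproducing the worked example above:
print(monthly_payouts([101_000, 100_800, 100_900, 101_500, 102_500, 102_000]))
# [750.0, 0.0, 0.0, 375.0, 750.0, 125.0]
```

Note that the month-end high-water mark never needs to be tracked explicitly: paying against cumulative profit, floored at zero, produces the same "must recover before further payments" behavior.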

The winner's prize will be calculated and paid monthly. At the end of
each calendar month of the Trading Period, Quantopian will calculate
the algorithm's profit (the "Generated Profits") for that month. If
the profit is negative for the month, no payment will be made, and the
loss will be carried forward to the next month. If profit is positive
for the month, first any previous negative profits will be deducted
from the profit, and if the remaining profit exceeds $100, the winner
will be paid 75% of the remaining profit. At the end of the 6-month
prize period the profit will be calculated and any remaining unpaid
profits - 100% of the unpaid profit - will be paid to the winner. If
the account holds no Generated Profits at the end of the Trading
Period, no payment will be made. If the payments made to the winner
exceed the Generated Profits, the winner shall keep the payments
without obligation to repay them to Quantopian.

The Generated Profits will be paid via check from Quantopian to the registered account holder of the account that submitted the winning algorithm (the "Winner"). The check will be mailed to the Winner within five (5) business days after the end of the Trading Period, provided all relevant tax information has been provided.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein.
If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

29 responses

Dan, you said:

"Going forward, we're going to use a 5% drawdown as the key level. If the winner's algorithm hits $95,000, it will be evaluated by Quantopian for viability."

Does this mean that you plan on ending Szilvia Hegyi's algorithm, which has a current balance of $93,254.76? After having the first four winners all lose money, your rule change may be viewed by skeptics as Quantopian being desperate for a winner. I have followed the contest and read many of the posts on the comment board. A crowd-sourced hedge fund does not seem to be a viable business model. This investment business - not as easy as it appears to be (grin).

I would say that different types and classes of systems deserve very different stop-loss levels. A 'pattern trading' system with high turnover and low leverage that claims to be exploiting an information edge may need a 5% or 10% stop. But this level is not realistic for many systems. Just look at many of the best traders - say a CTA like Winton - they've had DDs bigger than 10% nine times since 1997, yet have over $20 billion in assets. Or Transtrend. They have $4 billion under management, but have a 15% peak DD (on a monthly level, higher on a daily level) and are down 11% this year. The Millennium Fund (2.3 Sharpe over more than 15 years) still has an 8% peak DD. The very best 'vol traders' in the options world have 50%-plus peak DDs when VIX spikes (sometimes 80%), but they still make a lot of sense in a well-designed portfolio. And classic long-only, bottom-up stock analysis and selection with lower turnover (and more tax efficiency) needs a much more generous stop loss. The very best people doing this will have 20% to 60% DDs on the long-only stock portion. Given the issues with the platform not having short availability, minimum liquidity (ADT) historically, and cost to borrow for securities, do you really want everyone running long-short stock systems? Long-short 'simple factor' systems will also be fairly viable long-term, but will have sizable short-term DDs. Baskets of them can allow you to raise a lot of money.
The more complex people make these systems (in terms of dynamic market timing and variable beta exposures, etc.), the more likely they are to break out of sample. The only way to really ensure staying above a 5% DD level is to invest a small portion of cash (and therefore have very small gains) - which makes it a lot less attractive to the designer (the 10% of gains is gonna be lower, too). So it seems that a better up-front process for selecting and allocating to systems is needed - not an arbitrary 5% DD 'stop trading' limit.

I am also confused as to limiting 'fetch.' It seems the only way to get most data into the system? At least for ETF trading systems and/or anything looking to use a new data source. And data is the core of any 'informational advantage' for a trading system. Whether it's VIX levels, or sentiment, or Fed data, we need to import data to build good systems, don't we? I think the bigger issue with 'fetch' is the possibility of 'look-ahead' bias, and the difficulty in catching that.

I am curious, as I am planning on investing hundreds of hours or more in building systems for myself (and possibly the contest): will Quantopian be in it for the long haul? I am also curious as to your goal for the 'fund' - do you want zero beta, or a true market-neutral fund long-term? Or are you looking to build a 'multi-strat' fund that benchmarks the hedge fund index and looks to achieve top-10% performance? The goals and construction of systems for each would be very different. If you are in the latter camp, you might want to create 'buckets' - like fundamental long-only large cap, fundamental long-only small cap, long-short stock systems, pattern trading systems, market timing systems, etc. Then work to set up rules and criteria for each.

Dan - glad to hear fetch is out.

Sally, believe Dan said 'going forward' so do not think it will impact Ms Hegyi ...
also, regarding the pessimism about the business model, take a look at https://www.youtube.com/watch?v=D8ZWSTWFbO4 (~47 mins), which explains quite a bit about the business model and sheds some more light on its viability

Here is the bit that I have a question about:

"Contest entries will no longer be permitted to use fetch_csv(). The problem with fetch_csv() algorithms is that they are very hard to evaluate for the hedge fund. They generally don't have any out-of-sample or historical data that we can use for stress testing."

So if someone wants to get ETF holdings data for a contest strategy, how can a person do that now? Will Quantopian get an ETF holdings data provider to allow strategies which require ETF holdings data to obtain alpha?

Hi Spencer,

We're working in that direction, yes. We want to have as much data available to you as a quant as possible. In fact, I met with a data provider today who has just that kind of data.

All the best,
Josh

I am all for more data; using fetcher correctly, without lookahead bias, was rather tricky and error-prone. Minute-level index and futures data will be a huge help. Re: the 5% stop, some algos need more, but in general people should be more sophisticated about volatility targeting anyway, myself included. The trouble is, it's complicated. Someone needs to write an easy-to-use strategy CVaR calculator, or something like that, so we, too, can make algos which automatically deleverage when everyone else is!

"Contest entries will no longer be permitted to use fetch_csv(). The problem with fetch_csv() algorithms is that they are very hard to evaluate for the hedge fund. They generally don't have any out-of-sample or historical data that we can use for stress testing."

So, no more using the VIX in contest algos? Say it ain't so!

Simon, I've interacted with both Dr. Alexei Chekhlov and Stan U. They had the 'foundational paper' on CVaR optimization: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=223323

Both run their own systems and I've followed them for years (including briefly working with Dr. U for about 6 months). Neither one has come close to staying under a 5% DD. This is Dr. Chekhlov's fund: http://systematicalpha.com/index.php?categoryid=43

It's a very good fund that's won multiple awards and has had a lot of very smart researchers working on model development.
They've still had a 19% peak DD (more on a daily level). And Dr. U has done very well in terms of compounding money, but has a much larger peak DD, as he uses a lot of leverage. CVaR and significantly more optimization will be cool and I'll use it, but the only way to limit losses under 5% is likely:

a) delever to a return target of 3-7% annually
b) have proprietary data that lets us see the future (such as front-running pending orders in a high-frequency system, or having some other proprietary data)
c) get lucky over some portion of the market cycle (such as selling vol and options trading in a multi-year or multi-month period of declining vol)

It's also not likely in the fund's best interest. Having 'overly tight' stop losses hurts system performance in general. The most important things are a) low pairwise correlations between systems and b) positive expected forward returns. In fact, sub-system volatility is a driver of overall portfolio-level returns and stability so long as the above two conditions are met. There are several papers on this. What setting stop losses this tight is going to do is encourage over-optimization of backtests - and stop out nearly every system, even very profitable ones. That's just my thoughts. If anything, I'd go the other way and widen the stops to 15%, but work on the portfolio-level 'allocation' and 'subsystem scaling' technologies. There are many attractive strats (like trend following) that will have DDs larger than this - and it seems silly to kick them out of consideration.

Best,
Tom

Tom - I agree 100%. Since the contest is already so heavily skewed towards ultra-low-volatility algorithms, the 5% stop isn't really a big difference. And I agree, for the fund, the correlations and expectations are most important, and I think for the fund that is what they are looking for. The fund requirements are different, and more reasonable, than the contest.
My point about CVaR is that if you are doing volatility targeting, you can set the target to whatever you like, and if setting it to a 5% expected maximum drawdown leads to a leverage of 0.2, well, there you go.

I thought the contest was supposed to be a filter on algos for the fund. How are the fund requirements different? I got the sense that for the fund, Q is also looking for ultra-low-volatility algorithms, no?

Thanks. I wasn't really clear on the difference between the fund and the contest. What is the 'application process' for submitting models for the fund? I am not a high-turnover trader. I tend to trade a basket of systems that I hope will be non-correlated and that tend to hold positions from 2 weeks to 12 months (with most holding about 1 to 3 months on average). I rarely turn positions over more often than that, as the slippage and trading costs in real-life trading have proven too costly for the types of models I build. I have only been on Quantopian for a few weeks now, and am still learning. I have been doing this for about 8 years now with a large % of my family's money and living off of it. I would be interested in submitting some of the systems I trade that have higher capacity, after I've learned the platform more. So I am kind of lobbying for my own interests (but I also think they align with the fund's).

https://www.quantopian.com/fund

The fund requirements seem to be low beta, low pairwise correlation, Sharpe > 1.0, and high turnover.

Thanks Simon. I hadn't seen that page.

Comment by Sally Sanford above:

"A crowd sourced hedge fund does not seem to be a viable business model."

My sense is that at some point, Q lost their way on this concept, and basically decided that they needed to steer the crowd toward a specific type of algo that directly mirrors what is attractive to their investors (which, at this point, may simply be their VCs wanting to get their money back).
My original understanding was that the crowd would effectively create a large pool of synthetic securities from which Q could create a fund. Some algos would be short, some long, some neutral, some low volatility, some high volatility - everything under the sun. Then, the fund would be constructed from these synthetic securities in an optimal way, with perhaps some additional hedging applied by Q, if necessary. "Cool!" I thought. This approach would accommodate a respectable enough fraction of the Q user base to warrant the "crowd-sourced" label. For example, for a near-term goal of getting to a $100M Q fund, with an average of $100K per algo, they could fund 1,000 users (which is only 2% of their user base of > 45,000, so that still leaves out 98%).

Regarding the stop-and-liquidate threshold described by Dan above, what if the winner provides a copy of the algo, so that Q can understand it? Presumably this would always be an option, so that any analysis of the "black-box" performance could be put into context. Personally, I've decided not to play the game of having black-box algos. If I want Q to consider an algo for the fund, I'll just send it to them. Particularly with the path Q is on, my prediction is that this business of black-box algos is a Q sacred cow that will eventually keel over in the pasture, so why not just kill it now. Institutional investors aren't going to want to hear that their capital will be deployed to a handful of black-box algos written by a bunch of anonymous knuckleheads (myself included).

It is interesting that the "stop-and-liquidate threshold" has only a downside component. What if the algo ends up making more money than it should? As we saw in https://www.quantopian.com/posts/the-process-of-naming-the-september-prize-winner, Q does not want unpredictable upside. So, shouldn't a funded contest algo be stopped if it starts making too much money? It would seem un-American, but that's the implication, no?
If the contest is all about the Q fund, then unpredictable upside is just as bad as unpredictable downside. No gambling allowed.

Is the Beta to SPY value for the badge based only on the initial backtest period performance for 6-month contests, or is it recalculated every month?

@Charles The beta-to-SPY value for the hedge badge is recalculated every month, using the mean of 1-year rolling betas sampled at the end of each month for the twelve months prior to the current month. Currently, the beta badge filter for every contest algo is based on a Sept 1, 2013 to August 31, 2015 backtest.

I will attempt to clarify some of the questions asked so far.

First, it's important to remember that the contest and the fund are different and separate. The contest is not a filter for the fund; in fact, the opposite is true! We need far more algorithms in the fund than there are contest winners. We've set the contest rules so as to encourage the development of algorithms that we can use in the fund, but that is very different from saying that contest rules, contest judging, or the contest prize are the same as what we want to do in the fund.

Tom (and Simon), your point that the 5% stop-out isn't the best choice is very well taken. I probably should have emphasized more that 5% is when it becomes a point of discretion. We're not planning on cutting algos off instantly at 5%; 5% is just the point where it becomes an option. If the author gives us permission to look at the code, as Grant suggests above, that will obviously be something we'd use as a part of the decision. But it is by no means necessary and not something we're even likely to ask for.

Also, Tom, we totally agree with your point that there are a lot of different hedge fund types out there. I'm sure you understand that we have to pick one to be the first. Did the fund page explain better what we're looking for? We need to build a successful hedge fund and prove that this model works. Then, in the future, we'll be able to add other fund types. I look forward to that ;)

If you have an algorithm that you'd like us to evaluate for the fund, but for some reason you don't want to enter it into the contest, please get it started doing paper trading and then send us the URL of the live algo (feedback button, or [email protected]). We can put that live algo into the evaluation process the same way we evaluate contest entries.
To start paper trading, first run a full backtest in minute mode, then press the "live trading" button on the backtest screen.

I know that the removal of fetcher does remove some strategies from the competition. I hope that we can quickly replace the missing data with more robust data sources, as Josh said.

I think that a CVaR model would be a great addition to Quantopian. It's a feature request I love. I can't say we've moved on it yet. Perhaps that's something that might come in from an open-source contribution to Zipline? If not, we'll get to building it eventually, I'm sure.

Grant, you are correct that the stop-and-liquidate concept is asymmetrical. There is no rule for upside; the upside belongs to the winner, not to us. Please take care not to conflate the contest judging process with the contest prize process with the hedge fund. They are each separate and distinct animals.

P.S. The best reply to skepticism about one's plans and abilities is to simply go forward and do that which can't be done.

Hi Dan,

Unless I missed something, the limit is still 3 total entries. This is fine, except that it may inadvertently limit the pool of algos you can draw from down the road for the Q fund (to get to a $10B crowd-sourced fund, you'll need much greater participation). Presumably, you'd like as many live trading algos as possible to pick from, but my tendency has been to stop contest algos that aren't doing so well on the leaderboards, to make room for new ones.

For evaluation for the Q fund, it seems you are looking for up to a 10 year backtest, plus 6 months of Q paper trading (or better yet, real-money trading). However, with only 3 total contest entries allowed, there is incentive to minimize the pool of algos to analyze (unless you are grabbing all entries, copying them, and running them separately). I've realized that I should spin up two algos: one for the contest and a copy just in case I end up killing the one in the contest to make room for another entry. That way, I'll always have 6 months of paper trading for all algos that might be Q fund worthy.

By the way, how do you analyze the results of paper-traded algos? There's no way for users to pull the data into the research platform, but you must be able to do it. Or did I miss something?

Grant

The limit is indeed 3 total entries. I think it's entirely healthy for you to shut off algos that are performing poorly and work on new ones. That limit is there to deter a specific contest behavior of "spray and pray." In the extreme, that's entering 8,000 algorithms that each make a (hedged) bet on a single stock. That spray-and-pray will surely turn up some winners, but it is a form of survivorship bias - the algos that survive are the ones that did best, but that's because of random chance, not wisdom or market insight. That's not an interesting contest.

By limiting the number of entries, we're encouraging/forcing everyone to put forward only their best ideas, presumably the ones that have a strong market thesis behind them. Should the limiting number be 1 or 3 or 5? There are arguments either way, and the choice of 3 is a judgement call. But should it be 10 or 25 or 100? Those are easy to rule out. We don't want your 25 best algorithms so we can see which one happens to have a good 6 months; we want your best algorithm, so we can see how it does over 6 months out-of-sample.

I think spinning up two versions of the same algorithm for your tracking purposes is a great thing.

We are planning on pulling live trading algos into the research environment, like we do for backtests. It's not done yet. What we do is run a backtest and evaluate the backtest. If you look at Pyfolio, it includes the notion of out-of-sample date, which is the date the live trading started, so you can see which parts of the backtest are in- and out-of-sample.

Dan,

I understand the reason behind limiting to 3 entries. But why do you disallow a winner from entering the next month's contest? I could potentially have a different algorithm and want to compete again :).

Pravin,

I thought that rule had been relaxed? I see Spencer Singleton's name on the leaderboard.

Pravin,

In August that rule was already changed:

"Previous winners of the Quantopian Open may re-enter the contest, provided they enter using a different strategy."

https://www.quantopian.com/posts/august-contest-rules-update-new-prizes-staying-hedged-going-longer#55bf388eb383112faa000241

You may see Spencer Singleton's name on current leaderboards.

I was told that the rule was changed. The only stipulation is that you can re-enter, but not with the same algo that won. Besides that, I think the forum post Vladimir linked has all the information you need.

@Andrew Thanks. I would guess that it is also recalculated at the very end of the contest period? If I have an algo that is just off the requirements for 5 months, but gets the badge right at the end, could it win, or is it required to maintain the badge for the complete contest period?

@Andrew Looking at the numbers in the .csv, it seems that Contest #10 still uses old beta filter values (Sept 1, 2013 to August 31, 2015), while contest #11 uses new updated ones. Shouldn't #10 use new ones too?

Thanks!

@Charles, the beta-to-SPY is a rolling 1-year period and gets recalculated each month. For contest #10 it was Sept 1, 2013 - August 31, 2015. The following month, in contest #11, the beta is calculated from Oct 1, 2013 - Sept 30, 2015. Next month, it'll get recalculated again for the new contest.
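The badge calculation described in this thread (a 1-year rolling beta to SPY, sampled at each month end, averaged over the trailing twelve samples) can be approximated in a few lines of pandas. This is my sketch of that rule, with names of my choosing, not Quantopian's actual implementation:

```python
import numpy as np
import pandas as pd

def badge_beta(algo_returns, spy_returns, window=252, samples=12):
    """Approximate the hedge-badge beta: compute a rolling 1-year
    (252 trading day) beta of the algo's daily returns against SPY,
    sample it at each month end, and average the last twelve samples.
    (A sketch of the rule described above, not Quantopian's code.)
    """
    cov = algo_returns.rolling(window).cov(spy_returns)
    var = spy_returns.rolling(window).var()
    rolling_beta = cov / var
    # Sample the rolling beta at the end of each calendar month.
    month_end = rolling_beta.groupby(rolling_beta.index.to_period("M")).last()
    return month_end.tail(samples).mean()
```

For example, over a two-year backtest window an algorithm whose daily returns are exactly half of SPY's would come out with a badge beta of 0.5.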


Thank you Alisa, I thought the filter would be recalculated every month, even inside a 6-month contest. So basically, it's a contest entry condition, and then the algo has no constraint.

I'm a little disappointed that there will only be two contest winners a year now instead of 12. Is it just too risky to have the top 12 people's algos going?

There will still be a winner every month once this gets up to speed. With a contest starting the first of every month, a contest will also end every month, so a new winner is named every month. If you look at the leaderboard page it might be more clear.