contest algo

Here's an algo I ran in the Quantopian contest for 6 months. It was started on 2016-06-01 and is still running under live trading (Sharpe ratio = 0.49, return = 2.48%). No contest money, no fund allocation, no nothing...but I suppose I gained some confidence in what may turn out to be a profitless hobby (hey, some hobbies are very expensive, so it's a win). It seems o.k., but I guess not good enough relative to the Q fund competition.

If anyone has a take on the Q fund thing, and the nebulosity of it all, I'm all ears. Seems like a lot of work and a lot of waiting for zilch (well, I could have claimed my T-shirt, but I have plenty of those).

Edit: Note that in the contest, I used a higher gross leverage (context.leverage = 2.5). Here I'm using 1.0 gross leverage.

[Attached backtest: Clone Algorithm (61 clones). Backtest ID: 5887f3a2113cc45e404961d5]
26 responses

I'll speculate.

From what I can gather, I think the most immediate issue Q would have with this is:

    context.n_stocks = 50 # number of stocks  

Even if your code wasn't available for them to see, it would be clear through trading history that you trade a small-to-them basket of stocks, and a basket that rarely changes.

The two issues I believe they'd have with this constant:

  1. It's a low number.
  2. It's seemingly arbitrary.

I think you'd want to find a way to keep most of what you have here, but trade a much larger group of companies, or at least plausibly trade a larger group, or at least a group that changes more frequently. Rather than forcing the number, it might also be better to go with a "max," but one much higher than 50.

These are all just guesses, though. I tried to raise the funding to $10,000,000 and the number of companies to 500, but I didn't make it very far before I hit a MemoryError.

Yeah, relative to the current requirements on https://www.quantopian.com/allocation , it must have missed the mark. Oh well. I'm not sure I could have adequately explained the strategic intent, anyway.

Given that the Q team has some very talented members, I wonder whether they are now taking a rather different route. They know exactly what Big Steve wants (Steve "Billions" Cohen), so perhaps they have decided it's better to serve Big Steve directly and cook up their own algos for him.

My sense is that Steve Cohen/Point72 want the same thing as the overall market--something familiar, proven. The workflow and requirements are pretty generic, I gather. It is too bad that they didn't have these in place in 2014 when they conceived of the fund and kicked off the first contest. It's been a grand experiment, I guess. We're all learning to re-invent the wheel with the Quantopian team, with a crowd-sourced twist.

As far as cooking up their own algos, that could be a good thing. They'll learn how to do it, and can provide better guidance to the crowd. Also, increasing the capital carrying capacity of their fund, regardless of the source of the algos, would seem to be a good thing for all parties involved.

increasing the capital carrying capacity of their fund

Perhaps the initial capital in the contest had to be increased due to too few high-quality algos? There is an issue at $10M with the 2.5% volume-from-previous-minute limit and the resources available: even high-volume stocks hit dry spells where few are buying or selling, resulting in partial fills, and that can force big losses. The solution is to broaden the number of stocks; however, that begins to hit timeouts, limiting what one can do with them (fewer intraday checks for taking profit, for example). It seems to me that monthly rebalancing avoids timeouts, yet when done at predictable times it might possibly be front-run by insiders on Wall Street at the high volumes required; I'm not sure. Not utilizing all of the capital isn't an option either, because if the stocks are all quintupling in value, the returns may still appear to be low due to the way returns are calculated (the assumption of 100% usage).

This is intended as constructive criticism, since I appreciate Quantopian.
An increase in algo quality could be achieved by modeling the real world better, with an option of specifying a non-margin account. In my opinion, from three years of experience, this would eliminate a good half or more of the frustration we see in the forums; people would see more clearly what's happening and could then make better decisions, leading to a higher number of high-quality algorithms. We can have non-margin accounts at IB, but we can't simulate them here yet. It might be as easy as providing set_nonmargin() to change context.account.buying_power from infinite to starting_capital.
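
To be clear, set_nonmargin() is a proposal, not an existing Quantopian API call. The behavior being asked for can be sketched off-platform: cap each order by available cash so a simulated account never draws on margin. A minimal, hypothetical illustration (the function name and numbers are made up):

```python
# Hypothetical sketch of non-margin order sizing: never buy more shares
# than current cash can pay for. This is NOT a Quantopian API; it just
# illustrates the behavior a set_nonmargin() option would enforce.

def cap_order_shares(desired_shares, price, cash):
    """Return the number of whole shares affordable without margin."""
    if price <= 0:
        return 0
    affordable = int(cash // price)      # whole shares only
    return min(desired_shares, affordable)

# With $3,000 cash and a $50 stock, a 100-share order gets cut to 60:
print(cap_order_shares(100, 50.0, 3000.0))  # -> 60
```

Inside an algorithm, the same check would run before each order, using the portfolio's cash balance rather than an assumed infinite buying power.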

Blue -

Not sure that "capital carrying capacity" is the correct idea. I meant that the larger the fund, the better, so if Quantopian employees/contractors contribute strategies, it would just make for a larger bucket of money sloshing around. It could be kind of an ethical slippery slope, though, since they are the ones reviewing user ideas, at some level. Some legal/regulatory issues may be at play, too.

Regarding the retail business, I'm not sure if there is one. The focus is on the hedge fund. I guess the idea is that the free retail platform will be a draw for real-money trader types who either have a known strategy that would apply to the fund, or would eventually learn how to write one. Either way, I suspect just trying to do the fund concept alone would be difficult. Most folks aren't going to learn the API just for that, I figure. They'll want to trade their own capital, too.

Here's the tear sheet for the algo above (sorry for the multiple posts...having a bit of trouble getting things right).

[Notebook preview unavailable.]

Here's the tear sheet with bayesian=True (not sure what that does for me, but I thought I'd try it).

[Notebook preview unavailable.]

[2016] "It was Cohen's worst year on record other than 2008, when he lost nearly 28%—the only year the billionaire trader has lost money, though he did outperform the broader market."

http://fortune.com/2017/01/18/billionaire-steve-cohen-point72/

Based on 25 years of experience, I can tell you my opinion on quantitative trading: expecting someone to be both a competent programmer and a trading-strategy generator is an extreme position. These are two different jobs. The idea that you can find (many) people who combine both is ill-conceived in the first place. Maybe there are some; I am sure there are, but they won't come to Quantopian for the promise of a cut; they will go directly to a hedge fund to get a salary and then a bonus.

Here is how you set up a team:

  1. Idea generators: These are people who have extensive trading experience and understand markets

  2. Competent programmers

  3. Intermediaries: People who can convey to programmers what to do without losing focus.

  4. Beta testers

  5. Desk traders who put (or monitor) orders (a third job and very important)

Again, the idea that you can get all of the above from individuals on a platform is ill-conceived. There is a price to pay if this mistake is not realized.

Of course, most people believe they understand markets. But this is an illusion. Very few people understand markets. Actually fewer than very few.

@Grant

If I may speculate, one reason is the algo's underperformance in the past 6 months. In the "How to get an allocation" webinar, a Sharpe ratio of 1 was mentioned. Also, with Quantopian running just a handful of algos, and some contest entries having Sharpe ratios of 2, your algo will be far down their list.

As for the fund's requirement, here's my take in decreasing order of importance.

  • Risk vs. reward/Sharpe ratio, including during stress periods like the financial crisis.
  • Consistency. The algo's returns can't depend on randomly sparse large winning trades while losing most other trades, or vice versa: winning most trades by a small margin, then having sparse but large losing trades.
  • No long periods of drawdown.
  • Algo capacity. Can't do much with a $50k-capacity algo.
  • Have to trade a reasonably liquid universe; this excludes penny stocks and leveraged ETFs.
  • Trades made must be reasonable. No going all in on a single trade. A strategy can't be built on shorting hard-to-borrow microcaps, etc.

IMO, if you are willing to put your own money into the algo, it's half the battle won. The other half is a competition with blurry objectives.

It looks like it glittered, but was not gold.

Thanks all for the comments. We'll see how things go. Fortunately, I don't have to do this for a living. Some progress has been made. When Quantopian first came out with the market-neutral requirement, I didn't have a clue how to get things to work out. The algo I posted above is something I cooked up based on what I'd learned from Quantopian and the forum, and it seems o.k. (assuming there isn't some gross over-fitting lurking and it falls off a cliff).

@ Michael Harris - I'd agree that hard problems aren't solved in isolation. A multidisciplinary team in close communication is usually best. Quantopian is compensating in a variety of ways (website, training, conferences, webinars, etc.). Regarding the programming, I'd say that the Quantopian API relieves a lot of the heavy lifting. One does not have to write much low-level code. There is a difference, though, between a hack like me and someone who programs for a living. The pro will do a much better job.

@Grant,

Q started as a platform for retail quant traders to develop algos of all kinds and share profits. Institutional-grade algos to manage billions are a different ballgame. Even managing hundreds of millions is a challenge. Small mistakes can cost fortunes. Cohen has already made the point that the space is crowded. The solution is not a better API but an approach that offers less data-mining bias. I think Q is doing well putting a team together. The retail aspect can offer some outlier in terms of a strategy (unlikely) and can be used to attract new talent.

Which is much what I was suggesting. Q has good talent and probably little to benefit from 90/- odd "retail" traders inhabiting this site.

I get the feeling they are moving away from the crowd sourced approach. And I think that is the right move.

Messing around with a website and 90/- users is probably time wasted for a team with their expertise and the (?) lessened activity here may point in that direction.

It's a dog eat dog world and Q will benefit from Big Billion Steve's money and connections. And who can blame them.

@Anthony, absolutely, I agree. I just hope they will let us small guys continue using the platform. The power law (Pareto) is in action here: < 20% deliver > 80%.

Does anyone know if the tear sheet tells me whether or not I meet the "Low Exposure to Sector Risk" requirement on https://www.quantopian.com/allocation :

As a rule of thumb, we are looking for algorithms that seek to limit their exposure to any signal sector to less than 30%.

And should it read "single sector"? And what is the definition of a sector? I'm guessing these, but there is no footnote or anything on the requirement:

SECTOR_NAMES = {  
 101: 'Basic Materials',  
 102: 'Consumer Cyclical',  
 103: 'Financial Services',  
 104: 'Real Estate',  
 205: 'Consumer Defensive',  
 206: 'Healthcare',  
 207: 'Utilities',  
 308: 'Communication Services',  
 309: 'Energy',  
 310: 'Industrials',  
 311: 'Technology' ,  
}

How can one check the sector diversity?
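
Absent an official tool, one manual approach is to map each position to its Morningstar sector code (available via the fundamentals database, as in the dict above) and sum absolute exposures per sector. A rough, illustrative sketch using made-up position values:

```python
# Hedged sketch: per-sector gross exposure from {sector_code: dollar value}
# positions. On Quantopian the codes would come from the fundamentals
# database (morningstar sector_code); the positions here are illustrative.

SECTOR_NAMES = {101: 'Basic Materials', 103: 'Financial Services',
                311: 'Technology'}  # subset of the full mapping above

def sector_exposures(value_by_sector):
    """Fraction of gross exposure in each sector (long and short
    positions both count toward exposure via absolute value)."""
    gross = sum(abs(v) for v in value_by_sector.values())
    if gross == 0:
        return {}
    return {SECTOR_NAMES.get(code, str(code)): abs(v) / gross
            for code, v in value_by_sector.items()}

exp = sector_exposures({101: 20000.0, 103: -50000.0, 311: 30000.0})
print(exp)  # Financial Services is 50% of gross, exceeding a 30% limit
```

Anything above the 30% rule-of-thumb in the resulting dict would flag a sector-concentration problem.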

Fawce gives a good vision of the Q fund concept here, for those who missed it:

https://www.quantopian.com/posts/quantopian-platform-whats-needed-to-run-a-$10b-hedge-fund#55596fc1043ff69c630001e3

There, he states:

"As long as we find algos that are marginally better than random, we can combine them into something compelling."

Understandably, the strategy needs to be marginally better than your random long-short strategy constructed by a team of professionals, not the vast pool of algos on Quantopian. I guess if I were Steve Cohen/Point72, that's what I'd be thinking. The competition is not other Q members, but his team (which must be feeling some heat, given their poor performance last year). Makes sense. Perhaps part of the backstory is that his arrangement with Quantopian carries less risk than one might think, if he doesn't have a better alternative. That's not to say that Quantopian is not onto something. It may be that "virtual quants" can beat the pants off of the ones sitting at hedge fund desks in NY/CT (which would be pretty cool, in my mind).

One interesting point is that the article mentions an old-school carrot-and-stick motivational technique:

In October, he announced a new bonus structure for his traders, upping the potential payout to 25% of their profits (from 20% previously) but only if they outperform certain benchmarks chosen by the firm. Meanwhile, traders who underperform will receive a lower proportion than they used to.

My sense is that this doesn't necessarily work with creative eggheads. Kinda surprising that their HR dept. didn't advise against it. The nice thing about Quantopian is that there is no stick (and only virtual egos to deal with--nobody in your face, stressing you out, although I've had some posts moderated out recently, which is rather unpleasant).

The article also says they want to "attract new talent to Point72, which has recently stepped up its recruiting efforts as Cohen himself focuses more on training and mentoring." Quantopian is a match made in heaven. I wonder if the Point72/Q agreement rules out talent-poaching by Point72? It would sure seem like a slippery-slope, based on the article.

Another reference point:

http://www.thefinancialrevolutionist.com/interviews/2016/12/16/interview-with-quantopian-ceo-john-fawcett

Note:

FR: Let's say you have a guy who licenses his strategy to you and he keeps on creating updates to the strategy year after year. Are you at all worried that he begins to look like an employee as opposed to a community member?

JF: Our dream is to find people who are prolific. If we run into someone who's creating many strategies, we'd like to create a fund around that person and their strategies. We become the platform and take care of the legal and fundraising, but they're the star.

This also makes sense, but kinda runs counter to the "crowd-sourced" concept of lots of relatively small, uncorrelated algos, whose sum is greater than its individual parts.

Here's another example, currently running in the contest. I figure it would not be attractive for the fund, under the current requirements. It uses an SPY hedging instrument, for one, and I think this is a no-no for the all-equity long-short algos that are in-demand. It sometimes goes to ~50% in SPY, which breaks the rule:

As a rough guide approximation, we are looking for algorithms that limit their single stock exposure to 10% of their portfolio value or less at any given time.

SPY is an ETF and not explicitly ruled out (although "stock" could include ETFs, I suppose), but the implication is that we shouldn't be using ETFs (at least in this fashion). Certainly, that is the guidance in the workflow and any example provided by Quantopian recently.
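
The 10% single-position guideline is easy to check mechanically from a position snapshot. A minimal, illustrative sketch (the position values are hypothetical):

```python
# Hedged sketch: largest single-position weight relative to portfolio value.
# On Quantopian, position values would come from context.portfolio.positions;
# here they are made-up numbers.

def max_position_weight(position_values, portfolio_value):
    """Largest single-position weight as a fraction of portfolio value."""
    if portfolio_value == 0 or not position_values:
        return 0.0
    return max(abs(v) for v in position_values) / abs(portfolio_value)

# A ~50% SPY hedge would fail a 10% single-position rule:
w = max_position_weight([50000.0, -8000.0, 7000.0], 100000.0)
print(w > 0.10)  # -> True (the largest position is 50% of the portfolio)
```

Logging this number each rebalance would make the rule violation visible long before a tear-sheet review.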

[Attached backtest: Clone Algorithm (30 clones). Backtest ID: 588b29219063a05e4f8b8419]

Here's the tear sheet for the algo above.

[Notebook preview unavailable.]

Grant - on the first one you posted, the biggest point to note is that it hasn't made any money during its out-of-sample period. That could be just a natural drawdown period, in which case it will recover and become interesting. Or, it could be that the algorithm had some overfitting, and it's never going to make it. There's no way to know without waiting. Kayden and Frank's analysis was similar.

The second note on that one is the large position in QQQ that reverses repeatedly. That's not disqualifying, but it raises some questions that we'd be asking if you were in the due diligence process. There are times when filling in a hedge with a large ETF is a reasonable move, but it's also a warning sign of overfitting. When there is a single equity with a large position in the portfolio, the risk is that during development it was inadvertently "tuned" into being overfit.

Harrison's point is good - 250 is better than 100 is better than 50 - but a good algo with 50 is perfectly fine.

Anthony - We're not changing our business model. We're looking to our community to write high quality algorithms, to attract investor capital with those algorithms, and to compensate the algorithm authors. It's the core of our business, and we try to repeat that everywhere we talk about our business.

Michael, I understand that some quant shops use the division of labor that you describe. We aren't doing that exact division, but we do have one. At Quantopian, on a simplified level, we have one team that is building the platform, one team that is testing and making allocations of capital, and one team that is doing the trading. We're asking the community to do the idea generation, research of the ideas, and the coding of the ideas.

Steve Cohen: There have been several mentions of Steve Cohen in the thread, so I'll say a little bit about our relationship. He agreed to be our first customer, which we are very excited about. Through Point72 Ventures he purchased a relatively small fraction of Quantopian's stock. His organization has made some introductions for us to vendors and reporters and the like, for which we are also quite grateful. On the other hand, neither he nor his organization has a role in selecting which algorithms receive allocations.

Grant, we don't have a great way, post-hoc, to figure out your sector exposure. It's something that we can do, and will in the future. It is possible to control your sector exposure in the algorithm using the fundamentals database. It defines sector codes. All of the early techniques were pretty manual, but the optimization API makes it a lot easier.

Grant, you added a phrase to Fawce's quote that does not need to be there. The original quote is "As long as we find algos that are marginally better than random, we can combine them into something compelling." It is correct as written, and does not need to be qualified with "constructed by a team of professionals" or anything else. Writing an algorithm that is consistently better than random is very difficult, but possible. It doesn't require a team of professionals. If one can combine many uncorrelated strategies with a positive sharpe ratio, the resulting portfolio will have a higher sharpe ratio. That's the point that Fawce is making.
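
Fawce's point can be quantified with standard portfolio math: an equal-weight mix of n strategies, each with Sharpe S and pairwise correlation rho, has Sharpe roughly S * sqrt(n / (1 + (n - 1) * rho)), which grows as sqrt(n) when the strategies are uncorrelated. A rough, illustrative sketch (the numbers are made up):

```python
import math

def combined_sharpe(individual_sharpe, n_strategies, correlation=0.0):
    """Sharpe of an equal-weight portfolio of n strategies with equal
    Sharpe ratios and equal pairwise correlation rho (standard
    portfolio-math approximation for identically distributed returns)."""
    rho = correlation
    n = n_strategies
    return individual_sharpe * n / math.sqrt(n + n * (n - 1) * rho)

# 25 uncorrelated "marginally better than random" strategies, each SR 0.2,
# combine to roughly SR 1.0 (the sqrt(25) = 5x uplift):
print(round(combined_sharpe(0.2, 25), 2))       # ~1.0
# Correlation erodes the benefit substantially:
print(round(combined_sharpe(0.2, 25, 0.3), 2))  # ~0.35
```

This is why the fund concept emphasizes uncorrelated algos: the uplift disappears quickly as pairwise correlation rises.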

Grant, on your 2nd tear sheet, a few notes:

  • two months of out of sample look good - more is needed,
  • a lot of the total return is from 2008, and that is presumably idiosyncratic. 1) if we take that out, how does it look? Is it still interesting, or did it lose too much of the returns? 2) If there is an idiosyncratic good thing, it presumably is equally possible to have an idiosyncratic bad month in the future. That would need to be investigated.
  • a 658-day drawdown is a loooooooong time to wait for a recovery. Hopefully that would also be resolved by whatever investigation happens into 2008.
  • There is a large SPY (50%) position which flips back and forth. Same concern I mentioned above.
  • There are many large positions in other stocks (20%), like NFLX and TSLA. Again, we'd need to understand the rationale for that. Our null hypothesis with a tear sheet like that is that there is overfitting going on (unintentional, I'm sure), and we'd need to be convinced there is an underlying reason driving those positions.

I think Anthony's questions are well-stated. If one starts with a hypothesis, a factor, one then needs to be sure it's not actually a proxy for some other risk. If you tackle it from that direction, you aren't stuck at the end, wondering if you have something real. At the end you'll have a well-tested hypothesis and the code to back it up.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

2017-01-25 06:31 pvr:244 INFO PvRp 0.0339 %/day 2007-01-25 to 2017-01-25 $10000000 2017-01-28 03:11 US/Eastern
2017-01-25 06:31 pvr:245 INFO Profited 18316432 on 21476897 activated/transacted for PvR of 85.3%
2017-01-25 06:31 pvr:246 INFO QRet 183.16 PvR 85.28 CshLw 5437208 MxLv 1.30 RskHi 21476897 Shrts 14502221
2017-01-25 06:31 pvr:341 INFO
Runtime 4 hr 58.3 min End: 2017-01-28 08:09 US/Eastern

Dan -

Thanks for your thorough and thoughtful comments. Some feedback:

  • Glad to hear that you'll be adding a standard way of assessing sector exposure.
  • I will reiterate the point brought up by others elsewhere that it is easy to bump up against memory limitations, in both the backtester and running tear sheets, with a dynamic universe of more than ~ 200 stocks. You are working on a fix, I presume, based on prior feedback.
  • There would seem to be a tension between your need to get at the underlying principle behind an algo and the idea that prospective allocatees could keep their IP completely secret, if desired. I agree that it would be best for authors to have gone through a rigorous, structured R&D process, and I can imagine that within "brick and mortar" hedge funds, they'd put together a slide pack or other write-up as part of their submission. Quantopian, on the other hand, should be willing to take algos as black boxes, right? Or will you essentially need a compelling "story" to meet the Strategic Intent requirement? Personally, I'd have no problem sharing code or whatever with you; it would be advantageous in meeting the Strategic Intent requirement (if I were to tell you nothing, I suppose I'd be disqualified?). Others may have concerns, though. You may want to clarify how you are doing pass/fail on Strategic Intent. For those willing to "open up their kimonos," you could provide a template for the back-up material you'd like to see (it is listed under "What we look for" on https://www.quantopian.com/allocation so you might as well be explicit).
  • I agree that "over-fitting" in its various forms is the work of the devil; you might consider how long it will take to determine if it is not at play, given only a tear sheet. Presumably, the Bayesian analysis is a kind of hypothesis test to answer this question. It seems like given a relatively long backtest (e.g. ~10 years or more), it should be possible to estimate how much out-of-sample data would be required to rigorously test the "over-fit" hypothesis. If it turns out to be >> 6 months, then the algo is probably a non-starter. Maybe you could lump all of the fancy Bayesian plots into a single go/no-go number? For the 17 algos you selected for the fund, how are you determining how much out-of-sample data will be required to be convinced that no over-fitting is at play? Is there a kind of 95% confidence level set or something? And what if you know the detailed Strategic Intent? How does that factor into the required out-of-sample data to hypothesis-test over-fitting?
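
On the question of how much out-of-sample data would be needed, one rough textbook approach (not necessarily what Quantopian uses) applies the approximate standard error of the Sharpe ratio for i.i.d. returns, SE ~ sqrt((1 + SR^2/2) / T) with T in years, and solves for the T at which the estimate is distinguishable from zero. A hypothetical sketch:

```python
import math

def months_to_confirm_sharpe(annual_sharpe, z=1.96):
    """Rough months of out-of-sample data needed before an annualized
    Sharpe ratio is statistically distinguishable from zero at ~95%
    confidence, using SE(SR) ~ sqrt((1 + SR^2/2) / T), T in years.
    Illustrative only; assumes i.i.d. returns."""
    sr = annual_sharpe
    years = (z / sr) ** 2 * (1.0 + sr * sr / 2.0)
    return years * 12.0

# Even a Sharpe-1 algo needs years, not months, for statistical confidence:
print(round(months_to_confirm_sharpe(1.0)))  # -> 69
```

If the answer comes out much greater than 6 months (as it does here), a 6-month out-of-sample window is better read as a coarse sanity check than a rigorous hypothesis test.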

Hi Dan -

The new optimization API looks promising. The only relevant example I'm aware of is in Thomas W.'s algo posted on https://www.quantopian.com/posts/machine-learning-on-quantopian-part-3-building-an-algorithm , where it is convolved with all of his ML stuff. I can probably unravel it, but something simpler would be helpful, since it appears some additional pipeline outputs are required to set up the problem. If I sort it out, I'll publish an example.

EDIT - simpler example here:

https://www.quantopian.com/posts/optimize-api-now-available-in-algorithms

Grant:

  • On the strategic intent element: when we get to that point, it's in the context of an individualized due-diligence process. It's not automated. There are many ways for that discussion to go, with lots of flexibility. We don't need to pre-bind ourselves.
  • Your insight that different algorithms require different lengths of time out-of-sample is a good one. We use 6 months as a minimum, and some strategies that we check at 6 months we decide to let them "season" for longer. There is more work for us to do in terms of refining this part of the process.
  • You found the right link for the Optimization API. We don't have docs for it yet. Key quote from Scott: "The API is still marked as experimental, which means breaking changes are possible . . . " Nailing that API down and writing the documentation is high on the priority list, but not yet complete.

After some assistance from Ernesto (see https://www.quantopian.com/posts/problem-w-slash-dvmt ), I got this guy to run. Note:

set_slippage(slippage.FixedSlippage(spread=0.00))  
set_commission(commission.PerShare(cost=0, min_trade_cost=0))  

Now all I have to do is improve it to the point of being "marginally better than random" and I'll be all set to rake in those Wall Street big bucks! 6X leverage will do wonders.

Anyway, some progress, I think. Certainly nice to have the Q team making up all of these handy little APIs. I'm waiting for the one-liner:

from quantopian.algos import good_algo

good_algo()  
[Attached backtest: Clone Algorithm (345 clones). Backtest ID: 588fdec1d310f15e1f02e5e7]