That's funny, 0% drawdown.

Over here I was kind of detracting from the topic of how great Quantopian's accomplishments are with a backtest that looks pretty extraordinary, so here's a place where we can discuss it a bit if anyone's interested.

The screenshot again

Then, this Trade Info collects a lot of information, although it's from a later run with stops changed from tight to even tighter as an experiment.

KP asked whether it's for real or maybe revealing a bug. There's at least one thing off, and it could actually be in my drawdown code; mine says 0.8. This runs every minute:

    if pf.returns > b['pf_hi']:        # For drawdown
        b['pf_hi']  = pf.returns
        b['pf_lo']  = b['pf_hi']       # Prep to look for new low
    elif pf.returns and pf.returns < b['pf_lo']:
        b['pf_lo']  = pf.returns
        if b['pf_hi'] and b['pf_lo']:
            # Note: this records the hi-to-lo drop in raw return points,
            # not relative to peak equity
            b['dd_max'] = max(b['dd_max'], 100 * (b['pf_hi'] - b['pf_lo']))
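
For comparison, here is a minimal sketch of the conventional peak-to-trough calculation, which measures the drop relative to peak equity rather than in raw return points. It assumes the input is the cumulative return sampled each minute (the function and names are illustrative, not from the algo):

    def max_drawdown_pct(cum_returns):
        # cum_returns: iterable of cumulative portfolio returns (0.56 means +56%)
        dd_max = 0.0
        peak   = 0.0
        for r in cum_returns:
            peak = max(peak, r)
            # drop measured against peak equity (1 + peak), not raw return points
            dd_max = max(dd_max, (peak - r) / (1.0 + peak))
        return 100.0 * dd_max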

You can see in the trade info that there isn't a ton invested in any one stock; it's mainly that the winners far outweighed the losers. The maximum lost on any stock was 15, yeah, dollars (PnL is the sort column). It invested in just over 1,000 stocks, buying and selling. Commissions are the default and were high: $5,496. Since there are conditions on entering positions, orders don't always go through, so only half of the initial capital ever wound up invested, and max leverage was jaw-droppingly low.

Long-term, and from every start date I've tried, it does great. That's not my experience with code that can qualify for the contest (this can't); in contest code I could always find a start date where my algo would collapse, darn. So I guess the question is: can some of this be applied to an algorithm for the contest? I've tried some things so far, with very different results. By the way, I should point out I have a collaborator on this. I'll add a zero to the input next.

Always a reason for optimism:
" ... the [phrase] that heralds new discoveries, is not 'Eureka!', but 'That's funny ...' " --Isaac Asimov

76 responses

Hi

Can you share the code which generates those logs? I would be interested in using it for mine.

My algorithm seems to share many striking parallels with yours, so it could be that a new factor has been uncovered here. When I ran mine through the Quantopian risk model, no known factor seemed to account for the returns. When you run your code through the Quantopian Risk Model notebook, what do your return attributions look like? I'd be really interested to see if they look similar.

With regards to your comments on the other thread:

On limit orders, would they see those and disqualify? Otherwise, one could, for example:
1. optimize early
2. set limits when orders are filled

I've tried these methods. The problem with optimising early (i.e. delaying your main logic behind the code designed to fit Quantopian's requirements) is that I've found it can reduce the effectiveness of the Optimise API, which may not be desirable for Quantopian, given it's designed to protect their cash if they hand out an allocation.

I can convert my algorithm into a more suitable form for the contest by toning down its turnover; perhaps you could try that? I don't know how your algorithm works, but when I tried it, it dragged down my returns significantly (especially my Common Returns attribution), though it made the algorithm viable (bar the Optimise API).

(Could you also tell me how to share images on here? There's some material I'd like to add but I don't know how.)

Looks awesome... if I found a curve like that I would certainly start trading it... and retire early.

I like some detective work:
I looked at the list of tickers and, in general, they are low-priced, low-market-cap companies with cash on hand and a big disparity between the target prices and the actual price. Their 52-week movement is quite volatile. They also have a high amount of institutional ownership...

@Gary, @Quant Trader, you both produced exceptional equity charts. Remarkable. Congratulations.

@Quant Trader, your notebook in the other thread showed your strategy design has real alpha, which evidently hints at a lot of potential.

I presume you both are still at the investigative stage of your strategy development process. What is needed now is to show scalability and sustainability. The question is: how would your strategies do with $10M over the last 10+ years?

Hi Blue Seahawk,

Congratulations! Very impressive. In fact, a bit too impressive if you ask me. I’d say it’s more likely that you’ve found a bug/loophole in Zipline, giving you unrealistic prices and fills. If you can get similar results in a non-zipline based backtester (I’m not sure what QuantConnect, Quantiacs, Numerai, etc. use) I’ll be truly impressed. Or even better, in a live environment (non-Quantopian): either paper trading or, ideally, real money in the real market.

That said, I’m still massively jealous even if it’s just a bug.

Assuming it’s not, which contest constraints is it not meeting? If it’s the long/short 50/50 requirement, can't you just invert and short the opposite? If your algo only does intraday trading, just enter the top least-volatile stocks 50/50 long/short right before the close, hold them overnight, exit right after the open, and do the intraday trading thing during the day. This ‘workaround’ is a bit ethically questionable however, as it’s not really in the spirit of the contest, so it may get you disqualified, especially if the ‘alpha factor’ really is due to a bug or other limitation in Zipline.

My guess is that these are 'market making / liquidity providing' strategies, where the algo joins the best bid and offer, and also moves up and down the ladder as the market moves (minute ticks). Zipline likely simulates some fills to the resting orders on both sides, based on historical volume, and the algo effectively collects the spread between the bid and ask prices ('free money'). I'm also guessing that Zipline wasn't primarily designed for market making strategies as you would need more granular prices than just minute bars.

Just for fun, I would be curious to see the same backtest with $10,000,000 capital.

Mine works in Live Trading with $10M Capital, also works with $10M Capital in Backtests.

I would be really interested in seeing the result when you run your backtest through the Quantopian Risk Model Notebook. I'd be interested in seeing if we're exploiting the same factor.

@Quant Trader, thanks for showing this. It is an impressive profit curve. Extrapolating, this is a 30% CAGR with very low risk, low volatility and low beta. Simply awesome.

Observation #1: You're both trading intraday using limit and stop orders under the order_target_percent construct.
Observation #2: On the Live Trading Dashboard, the alpha, sharpe and volatility graphs show almost symmetrical patterns.
Observation #3: Just looks too good to be true!

Bug or Factor Discovery, that is the question.

By the way, where is the link/page on which @Quant Trader posted his notebook? I remember seeing it and being inspired and jealous. Please don't repost it here, just post the link to it, thanks.

"share the code which generates those logs?"

Thanks for noticing. In the time spent building that code I probably could have made a quarter mil down the street at Amazon or Google. It provides value, has value, and can be obtained in an exchange for value; its author deserves to be rewarded. It is tailorable and can be added to any algo with a single paste. It has active switches, and one of them was essential in this result of mine. In developing them, testing on all of Quantopian's lecture-series backtests, average returns and Sharpe came out about 4x higher, with drawdown nearly halved. Ideally someone philanthropic could make it available to the community.

"toning down its' turnover"

Have started trying various ways to do that. Looking forward to the reduced commissions that'll go with that too.

"if I would find a curve like that I would certainly start trading it.... and retire early"

Trade live where? The requirements are: (a) a few fundamentals for screening, like Q's built-in filters, (b) close prices [and they wind up outside of those filters as a result], (c) stop & limit orders.

@PeterB: Thanks for that info.

@Joakim "still massively jealous even if". Ah good, a side-goal achieved. :) "can't you just invert and short the opposite?" Will try that.

100k and 1 million

The original screenshot in May showed 56%. I tried increasing the weights since the cash isn't all used. Two of those runs have stalled; that happens sometimes. The better of the two from May is now at 125%, so more than double.

Even a 1-year backtest takes over 4 hours because there's a lot going on. A 2-year full backtest is underway so I can run the contest checker on it and post that.

@Blue Seahawk & @Quant Trader, I'm still jealous, but slightly less so now, as I'm pretty convinced that these returns are mostly originating from the default slippage model and other limitations in Zipline (e.g. minute bars only). Call it the 'Slippage Clippage' ;)

By default I believe you get 2.5% of the traded volume each minute at the limit price specified (if none was traded at your limit price in the past minute, you get nothing). So if the algo has a buy order (in the handle_data function) at the last traded price, and an equal sell order at last_traded + 1 tick, you're effectively collecting the spread from the default slippage model. Even if the price moves around a bit, on average the algo should be able to clip the spread more often than not, and any price movement should mostly cancel itself out (minus commission), as the orders don't actually affect the simulated market the way they would in the real market. You could layer it too and have multiple buy orders at last_traded - n ticks and multiple sell orders at last_traded + n ticks, but unless you're a licensed market maker, I don't believe layering is allowed without a clear 'intention to trade', and it would likely trigger a call from the broker's market surveillance department.
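
To make that concrete, here is a hedged sketch of the order pattern being described, using standard Quantopian IDE built-ins (order, symbol, LimitOrder, data.current); the asset, the 100-share size, and the one-cent tick are illustrative assumptions, not from either algo:

    def handle_data(context, data):
        asset = symbol('AAPL')                # illustrative only
        last  = data.current(asset, 'price')  # last traded price this minute
        tick  = 0.01
        # join both sides of the simulated market one tick apart; the default
        # slippage model may fill both, effectively 'clipping' the spread
        order(asset,  100, style=LimitOrder(last))
        order(asset, -100, style=LimitOrder(last + tick))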

So, rather than a bug, I'd say this is more of a known limitation of Zipline when doing intraday trading in the same securities, which I don't believe Zipline was ever designed to be able to simulate effectively and realistically. I think you'd find that if you tried to trade this with real money in the live market (i.e. not simulated), the real Mr. Market will be much less predictive and accommodating.

I thought about putting my simulated money where my mouth is and trying to code my theory to prove a point, but then I remembered the real reason why I'm jealous: my coding skills suck, and yours appear to be pretty damn good! I don't mean to burst anyone's bubble, and I do realise that I could be wrong of course, but my opinion at least is that if your strategy is doing minutely intraday trading with limit (and stop) orders in the same securities, any returns are more likely to be coming from the default slippage model rather than from some newly discovered 'alpha factor.'

In other words, while I do find the effect interesting, even fascinating, too much time spent on trying to mine the slippage model is in my view wasted. I hope you'll both prove me wrong though and retire in the near future!

Peace!

In that case you may be truly on to something. Did you get similar volatility and drawdown when you were live trading?

Why don't you migrate the algo to QuantConnect and either trade it live there through their Prime Service ($20/month should be worth it), or offer to license it to hedge funds through their Alpha Streams service?

Just to add info, my similar algo on Robinhood (no commission) was at around 4x original value in 8 months when that avenue was ended.
A couple of differences: it did have drawdown on RH. The commissions setting here is the default (not specified); it should be OK on IB.

Spectacular backtests! I assume you're trading smaller stocks? Have you tried applying the same logic to larger stocks, or does it not apply there?

Either way, very impressive.

With regards to trading your strategy, remember, it doesn't necessarily have to be run through Quantopian. I'm sure there are a lot of people on Wall Street who would be willing to run it, especially if it can recreate those returns. The problem is that it looks like it doesn't scale to stocks with larger volumes, which puts a cap on the potential of the algorithm. The obvious next step is to make it run with higher capital bases, even if it costs some of the returns. A backtest like this with $10,000 is incredible; a backtest like this with $10,000,000,000 is the holy grail.

I'll take a moment now to reply to some of the commenters:

James Villa:

Observation #1: You're both trading intraday using limit and stop orders under the order_target_percent construct.
Observation #2: On the Live Trading Dashboard, the alpha, sharpe and volatility graphs show almost symmetrical patterns.
Bug or Factor Discovery, that is the question.

With regards to Observation #1: the Live Trading version is intraday purely because it makes it easier to exploit the factor I'm making use of. I've reduced the current version's trading frequency to < 45% and the returns still remain similar (presumably because the commission reduction is similar to the monetary gain from the higher turnover).

With regards to Observation #2: I'm actually not entirely sure why that's the case, seems to be an interesting feature of the algorithm, might look into that in the future.

Joakim Arvidsson:

I could be wrong of course, but my opinion at least is that if your strategy is doing minutely intraday trading with limit (and stop) orders in the same securities, any returns are more likely to be coming from the default slippage model rather than from some newly discovered 'alpha factor.'

I can't speak for @Blue Seahawk's strategy, but I know the current version of mine isn't trading intraday. Of course it could be a limitation of the Quantopian slippage model, but it's very hard to test which is which.

I assume the 'crossing the spread' strategies you're talking about are things like this?

https://www.quantopian.com/posts/a-simple-market-making-algo-with-a-net-sharpe-ratio-7-year-to-date

I've tested my algorithm with the slippage model the user @Luca suggested; it still works, although not to the same degree, as might be expected. Might be worth you trying it as well, @Blue. On the plus side, if we do take our algorithms to market, I don't think we'll be treading on each other's toes, as they do vastly different things by the looks of it.

These are my backtests over varying capital levels; all have a turnover of ~42%.

Then the fateful $1Bn

The reason the backtest becomes more 'market-like' at higher capital levels is that I change:
a) the kind of stocks I'm trading at higher capital levels
b) some of the logic, because it doesn't necessarily apply in the same way

With higher-market-cap stocks, I've found I'm more likely to be taken for a ride (i.e. experiencing their movements, and hence making my algorithm more like an index tracker than anything else) rather than exploiting what I'm looking for. Not that I have a problem with that.

@blue, just curious if yours also scales down to something lower, in the 20K range, or if 100K is sort of the sweet spot for getting enough trades across enough equities.

@quant, very very cool that it scales from so small to so large... wow...

@Quant Trader,

Correct me if I'm wrong, and just to be clear: the backtests at different capital levels you shared above are NOT intraday (I guess they're on a daily frequency now), all have a turnover of ~42%, and order execution is based on market orders, NOT limit/stop orders. Is this correct?

@James Villa

Rebalances twice a week, order execution is still limit/stop order

I'm still trying to complete a 2-yr full backtest. It looked stalled; I tried a refresh (figured it would reset and continue) and it says initializing.
@JA Trying. Slow going. @QT Thanks for those screenshots. I too am amazed yours can do so well at such high capital, and so consistently; that inspires me. To answer: it seems the less mainstream the stock, the better mine does.
Yes, I'm doing some intraday opens. Daily. No slippage/commissions specified. Also limit/stop.

@qt Why does the $10M test have a much better Sharpe than the $10K test? It seems that between $500M and $1B you just hit a liquidity problem, nothing to worry about (by the time you've got $500M of capital to trade this you'd find another algo :)

Very impressive! I’m back to being properly jealous again. Rebalancing twice per week to me means the results are much less likely to be coming from the slippage model. Well done!!

@Blue, instead of ordering from handle_data, couldn’t you schedule pipeline runs and rebalances every minute, via a 390-iteration for-loop in initialise? Something like the sketch below. If that works, I believe the Optimise API supports limit orders but not stop orders.
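
A minimal sketch of what I mean (IDE built-ins assumed; rebalance is a placeholder for your own logic, reading the daily pipeline output):

    def initialize(context):
        # one scheduled slot per minute of the 390-minute trading day
        for i in range(1, 390):
            schedule_function(rebalance,
                              date_rules.every_day(),
                              time_rules.market_open(minutes=i))

    def rebalance(context, data):
        # read pipeline output (computed daily in before_trading_start)
        # and place limit orders here
        pass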

@vladimir I'm pretty sure it's because the hedging gets more accurate at higher capital levels. Because you can only order whole numbers of shares, the net dollar exposure is likely to be lower the more capital we use, since the target values are more likely to divide in evenly.

I also change the logic slightly at each level, it would seem that it's more effective than the logic I use at $10k etc.

@Quant if you don't mind, I'd be curious:

Do you use order_optimal_portfolio and the Optimize API, or primarily just pipeline and order_target_percent?

@Quant and @Blue
Did you start in the notebook and migrate to a backtest, or pretty much develop all from the backtest? -- I've seen varying opinions, just curious what tools best helped in your exploration.

All that is required to have a scalable trading strategy is a scalable trading unit. For instance: u = q∙p. Double the trading unit, or make it proportional to equity, and you will have: k∙u = k∙q∙p. That's it.

At the strategy level, you can look at a strategy's payoff matrix and conclude to the same thing: k∙F(t) = k∙F(0) + Σ(k∙H∙ΔP), since k∙H, the inventory held is being scaled by k.

The question should be: is the quantity traded (k∙q) marketable at that moment? In the beginning, it should be. It is as the equity grows larger that it becomes more difficult, due to the lack of available shares on the bid or ask. But there are methods to alleviate that kind of problem.

@Quant Trader's strategy does display its scaling abilities quite nicely, and it is the kind of demonstration to make to show that your own trading strategy is indeed scalable.

Great job, @Quant.

@Quant Trader,

Rebalances twice a week, order execution is still limit/stop order

Have you tried it using market orders and/or order_optimal_portfolio, just to eliminate the possibility that you are exploiting the limitations/bugs of limit orders as highlighted here: simulation-of-non-marketable-limit-orders
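
For reference, a market-order rebalance through the Optimize API would look roughly like this sketch (import paths per the public docs; context.weights is a placeholder for a pandas Series of target weights):

    import quantopian.optimize as opt
    from quantopian.algorithm import order_optimal_portfolio

    def rebalance(context, data):
        objective   = opt.TargetWeights(context.weights)   # target weights by asset
        constraints = [opt.MaxGrossExposure(1.0)]          # cap gross leverage at 1x
        order_optimal_portfolio(objective, constraints)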

When the student is ready, the teacher will appear.

Is there a backtest that could be attached to illustrate the apparently phenomenal results? This would make it easier to understand, hands-on, what might be going on.

My earlier post was deleted by The Man. Sorry if I offended anyone.

Hi @Grant,

Unfortunately, I think that's something I'll have to work out for myself; handing over the code would not be in my best interest (just in case it's actually that good). I'm going to assume it's the same for @Blue.

No offence taken, hopefully none given : D

IP can sometimes be obtained in exchange for something of value.

Update on tearsheets: for days I've been wrestling to provide one. It's likely the unusually copious data. Backtest loading would often halt, hang, or stall even overnight, sometimes at 100% but with no output, sometimes with various errors. I'll keep trying, or one of you might like to give it a whirl; here's an ID: 5af5e3c638c194441ae0608f
Anyone know of a list of all of the different tear sheets that can be run?

That ID is the 2-yr test and, although not the greatest metrics, it completed. It took 7.4 hrs, which brings a side-note to mind: I was thinking of logging an ETA every few simulated months or something, and wonder if anyone else might use such a thing for long-running tests.

Right now about the only other thing I can add is nearly doubling the original returns by merely using more of the starting cash.
Edit: Added another screenshot later.

Edit: This is 1 year from a 2007 high, to see what happens during the big downturn; I'm surprised it did this well then.

@Gary,

How much is the value of this IP?

A 2-year backtest that takes 7.4 hrs. to run suggests that there may be some unrealistic assumptions for an algo trading only $10K. The fact that it can't be loaded into a research notebook probably indicates a tremendous number of transactions per unit time, which is likely the dominant factor in the backtest execution time (although it could also be computation within the algo itself, versus order processing).

What slippage and commission models are being used?

Regarding publishing a tear sheet, my recommendation is to reduce the backtest duration to the point where the data can be loaded into the research platform. At least then we can get a sense of what the algo might be doing.

If you think this is a bug, you could submit a help request to Q, to see if they'll look into the code (you'd need to grant permission).

At the risk of my post being deleted again, it would be most helpful if the salient details were summarized, perhaps in bulletized form, by editing the original post above (this is what I had basically requested, but with a lot of snarkiness intended to be humorous, which I guess didn't go over well with the powers-that-be).

The exchange of IP for something of value brings up an interesting thought:

I mentioned to Blue Seahawk the prospect of starting a small group of traders who have substantial IP and share + collaborate. Substantial IP is relative, but you can see Blue Seahawk's results above and infer what I'm talking about. This is unrelated to the contest or its constraints - the focus would be more toward substantial annualized returns that can scale capital (capital, in this case, meaning the group's combined allocations per strategy, to avoid alpha decay).

Let me give you a real-world example. Q reached out to me about an invite for an allocation. Then things dissipated as their requirements changed. My algo has a short-only alpha signal, and I was looking to add a long side instead, using a static index hedge against it. I decided to post a crude version of this IP on Q, and Blue Seahawk reached out to me. I took a leap of faith disclosing the code to him after seeing his activity on Q and his engineering/coding abilities. That compelled Blue to share some of his IP with me, and we collaborated on and shared multiple algorithms - with some awesome results, and I think some future collaborations that will lead to the same (this also advanced my personal development from watching his coding style). I think this can be extended to a small group, but it requires a few things: a leap of faith, and trust that the group's IP will be protected.

So if someone has an awesome alpha signal, would they consider the same route?

@John Scaife

Excellent point/observation. The original intent of Quantopian was to promote a collaborative environment to develop trading algorithms which would be better than any one of us could do on our own. There are plenty of examples of this working in the open-source software realm. Linux is a great example. There are some contributors who volunteer in their spare time. There are some full-time developers. Then there are whole businesses like Ubuntu that take that open-source IP, add their own value to it, and make money from it. The takeaway is that a lot more people benefit, all in their own way, than if it wasn't open-sourced.

Now look at the quant community. Exact opposite. I have always been amused by the opacity and isolation which is promoted. It almost borders on paranoia. I do wonder if there isn't a healthy (or rather un-healthy) amount of hubris we all share too. In most cases this isn't a zero sum game. Someone else winning doesn't make me a loser.

There may be some valid reasons to hold on to IP ideas. I didn't intend on getting into a debate on that. I would however like to move the ball forward and say one step may be, as @John Scaife suggested, to start with smaller collaborative groups of engaged individuals to develop trust and promote openness. I truly believe that working together those individuals can benefit more than working in isolation.

Count me in.

I have always treated Quantopian as a hobby, and still do. Part of this hobby is for it to be self-funding - start from $0, and work up from there. I am now in a position where I have a little pot of capital from the Quantopian contest, and I'd like to think of ways of parlaying it (up until recently, "Off-White Seal" had been in the top ten almost from the contest start, so you can do the math). Perhaps others are in the same boat, thinking, "O.K. I have some capital. What next?" If I get an allocation from Quantopian, I intend to take the same approach, and hopefully would have a lot more capital. I've been keeping an eye on zipline-live and participating in some of the discussion on Slack. We'll see how this plays out. It is water under the bridge, but if Quantopian still had Robinhood integration, I'd be trading my contest winnings by now (perhaps with an ETF-based asset-allocation type algo that I'd share, since it would have no real "secret sauce"). The idea of rolling my own and setting up zipline either at home or in the cloud, etc., etc., is not something I want to do. Quantopian already had the thing working; why should I re-invent the wheel? Anyway, if anyone else has contest money burning a hole in their pocket, or otherwise, perhaps collaborating would be fun (although I must emphasize that this is a hobby, so many other things take precedence).

I am dumbfounded by the "deafening" silence of the Q team on the results of the above backtests/notebooks by @Quant Trader and @Blue. I have read somewhere in the forum that Q has an automated screening procedure for viable backtests for allocation and sends out emails, as attested by @John Scaife and @Blue. Given the phenomenal results shown by the above backtests, shouldn't the Q team be bombarding you guys with allocation emails? Something is missing here. Are these backtest results real, and possibly a newly discovered alpha or alpha combination, or are they the result of exploiting a bug or the limitations of the limit/stop order routines? I think it is time for the Q team to chime in!

Lack of Optimise API maybe?

If one wants to try to be in the driver's seat, the basic allocation process, as I understand, is:

  • Write algo
  • Run full backtest and note backtest ID
  • Wait 6 months (a long time if the algo is a non-starter, but one can gauge, via the contest rules and evaluation tools)
  • Contact Q with backtest ID (I don't know if one is guaranteed a thumbs up/down ruling and the justification)

There is also an automated process which is applied, which screens algos with full backtests. So, presumably, if it works, Quant Trader and Blue would be contacted, should their algos be useful to Q. It would be nice if Q provided the date on which a given backtest was screened automatically. Then, at least one could make sure that algos are being reviewed; I don't see why this needs to be hidden information for users.

I guess I'd recommend cajoling the strategy into working either as a factor to be combined with other factors, or as a single-factor algo, and submitting it to the contest. If it bubbles up to the top and sticks there with an anomalously high score, then presumably Q would become engaged, since it would likely be viewed as a bug or "gaming" of the contest (or be found to be the real-deal jackpot).

@ Dan Whitnable, thanks for the shout. @Blue Seahawk mentioned you in a personal conversation regarding your generosity and openness so glad to see you as a part of the conversation and your willingness to join a proposition like I mentioned.

Truly I have no real plan in action, rather just 'speaking out loud.' You all make some great points. There are definitely traces of paranoia on here - and I get it. What compelled me to be so open with someone with credibility on here is pretty logical. If someone is either smart or crafty enough to develop some really good IP, then that person should also be smart enough to retain and protect that IP or new IP for obvious reasons. One big hedge fund gets a hold of it and goodnight alpha. I do want to mention there are clear reasons not to share - contractual, capital base limitations, etc. So these should always be considered.

@James - I hear you - Q wants a very specific product, which is ok - plus scaling $10M is so much harder than scaling $10K or $100K. That's why I made the comment about the need for live trading here. You can develop amazing IP and it could just sit there and collect dust due to their product desires. Now this is a bit dramatic of me to say, because there are viable alternatives - just nothing as clean and pretty as what Q offers.

@Quant Trader:

Lack of Optimise API maybe?
@John
Q wants a very specific product...

Guys, let's get real here; this is a money-making business. If you've really got something, Q is in the best position to judge that, since they created the framework from which we all create our algos. Given that their investor, Point72, committed $250M in capital for allocations, I'm sure Q can convince Point72 to allocate some chunks of capital to algos like @Quant Trader's and @Blue's, assuming they are true alpha discoveries and validated by Q. While it is true that Q is looking for something specific, as manifested by the contest parameters and rules, that is not the only avenue to an allocation. A couple of months back, I received an invite from Q to participate in their Emerging Managers program, and the only requirements, as far as I can remember, are that your algo should have been trading live with real money for at least six months, backed by broker's statements. So this is another avenue for you guys to consider: pull together $5K-$10K, trade it in IB using probably zipline-live, and after six months present your broker statements to Q!

Emerging Managers program - now that's a new one. If I look at https://www.quantopian.com/allocation, I don't see anything about it. I guess the idea is that a strategy would be ported over to Quantopian? A topic for a different forum discussion thread...

Emerging Managers was presented in a webinar, and I had attended a live presentation in NYC. As an alternative to the conventional allocation process, a fund manager can simply send a daily file of buy/sell signals, and Q will evaluate the signals to make a determination. Either 6 months of historical broker trade statements, or six months of trading signals, are required. Also, it must be a fully automated, version-controlled algo. Further, you must demonstrate to Q that you have a robust infrastructure to reliably generate trade signals. Otherwise, all is the same. Q will evaluate the trade signals and results as if the trades had occurred in the backtester. I believe the intent is to address the IP question, enabling managers to gather their own data and run algos off-platform from Q. Further, it seems Q believes managers have this infrastructure in place already, and Q wants to cast a wider net to include them, without forcing their algo onto the Q platform.

With my jealousy hat on, a few thoughts:

  1. What are the odds of two people on the same platform, using the same order types, independently, and within a matter of months, both finding a new ginormous and previously undiscovered 'alpha factor'? Possible? Sure. Probable? Not at all.

  2. Most alpha factors are very tiny. That's ok though, as they are essentially risk free; as long as they are somewhat scalable, you get more free money the more you throw at them. The returns shown here though are enormous with very little risk, so unless they come from a combination of hundreds or thousands of alpha factors, the 'returns' are most likely coming from somewhere else.

  3. The market is pretty efficient these days, even in the short term. Sure, there's plenty of alpha out there still, but single-factor alpha of this size, even from a handful of combined factors, simply doesn't exist in the real market. Only in simulation-land.

  4. If these were true alpha returns, half of Wall Street would be knocking on your door wanting to license your strategy, and the other half would want to hire you.

  5. There might still be some real alpha in your logic that might be worth exploring further. Most of it, however, is unlikely to be real.

  6. Simulation will always have its limitations. The Q team are probably well aware of theirs, but perhaps don't have a simple solution available. As long as not too many people are exploiting it, it may not be a priority for them to address.

  7. If the returns were likely to be real, @Quant and @Blue would have already heard from Quantopian as the risk/return ratio is literally 'unbelievable.'

  8. If it is a known limitation or bug in Zipline, Q may not want to confirm or deny until they have a workable and prioritised solution available as it may be possible to exploit within the constraints of the contest (or by spoofing the constraints checker), which would be unfair to other entrants, and not aligned with the real purpose of the contest.

  9. I'm still fascinated, and I'd be curious to know if these strategies have a large exposure to any of the known risk factors (without revealing which ones specifically). My guess is that @Blue's algo is overly exposed to the size factor, and @Quant's perhaps to the volatility factor.

  10. There may be some truth in cliches like 'All that glimmers isn't gold' and 'If it seems too good to be true, it usually is.' Dream big, but be realistic. These risk/returns are simply unrealistic in my world.

Taking off my jealousy hat now to keep hammering away at my 4% (if that) average annual (in-sample) combined alpha factors, but with minimal volatility, drawdown, position concentration, market, sector, and factor exposures.

Thanks Doug! Nice explanation. Makes sense. Q should include this in their main messaging, rather than having it as a side project. I hadn’t understood it until now.

I suppose some could collaborate and go modern. STOs, based on blockchain securities, will be the next big thing. People band together, form a company, issue some security tokens (STO) secured by a part ownership of the company. Add some zero-coupon bonds with a 20-year maturity to further increase the AUM and voilà. The talented people, the outstanding strategy, the funding, the know-how.

There is enough talent here to do all that.

Also, @Quant, @Blue, or anybody else could contact other funds (like Point72) directly with their great programs. What they have is not just an idea; it is a program that is executed on someone else's machine using the provided price data. When I run a program on Quantopian, all I can do is run a trading script and have confidence that the results are what will come out. And if somebody else ran my program, they would get exactly the same results. If not, it would cast doubt on all simulations performed to date, since we cannot make sure that limit orders were not used (except for posted strategies).

Guys, keep your IP. Protect it. Once you show the code, you've lost it. However, you could show the backtest analysis, which will not give away how you do it. If you do, I prefer it with the option: round_trips = True.

@Guy, what does the round_trips = True option do?

@Joakim, it gives statistics on the number of trades (longs and shorts), the average profit per trade, the average loss per trade, the average profit per winning trade, the average loss per losing trade, among other things. I find it useful in identifying strengths and weaknesses.
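
For anyone who wants to try it on the ID posted above, it is two lines in research (get_backtest and create_full_tear_sheet are the standard research calls; be warned the load may be very slow for a backtest this heavy):

    bt = get_backtest('5af5e3c638c194441ae0608f')   # the 2-year ID posted above
    bt.create_full_tear_sheet(round_trips=True)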

Nice! Thank you.

Loose Ends. I went through this thread again and wanted to cover a few things.

@Everybody: Good to see the words of encouragement and so forth, thank you.

@JA "Probable? Not at all" "Only in simulation-land"
Again, my RH code (which was similar) was at around 4x original value over just 8 months before that avenue ended.

. . . Does that suggest this is real and not a bug?

"half of Wall Street would be knocking on your door wanting to license your strategy"
I wish. I'm willing to entertain offers. All they have to do is click user names on this page to contact any of us. An amount that would seem small to them would be life-changing for me. Just make an offer, guys.
I spoke with a Q rep last fall when brokerage trading ended; they were being kind in initiating that call, and before we were done talking I asked whether they could be at all interested in running my RH code on the side. The response: that's the very thing they want to avoid. If I recall, the original contest was 100k, then it went to 1M and quickly to 10M. So far, I haven't found a way for my code to survive at those levels. @QT has.
"My guess is that @Blue's algo is overly exposed to the size factor"
Single stocks with too much allocation? In one backtest, my code started around 60 stocks and ramped up to double that over 1 year (a spike was 157). It's adjustable, and I'd have to say that's one area where I'm pretty confident that risk will be manageable.
"instead of ordering from handle_data". Misperception. No, I'm opening via schedule, a few times per day if conditions match.
"_ believe Optimise API supports limit orders" Optimize, if it sees any open limit or stop order, will error out. But you can cancel them all before opt (see the sketch after this post).
"hope you'll both prove me wrong though and retire in the near future!" Thanks! This is just a start at trying to make money for a long list of projects I want to carry out, hoping to wow the world. To a buyer I'd happily reveal some of those if they're at all interested in where their money will be going.
"Why don't you migrate the algo to [censored]" Banging my head against the wall would probably be more enjoyable than warring with that other party's bugs, which are legion imho, but I'm resting up now for the next battle with them.
@QT: "I would be really interested in seeing the result when you run your backtest through the Quantopian Risk Model Notebook. I'd be interested in seeing if we're exploiting the same factor."
Also, I want to repeat that this is the ID of a 2-year test: 5af5e3c638c194441ae0608f. If you, the reader, have a powerful computer and want to try, feel free to use any contest checker notebook you wish. I would like it a lot if this concise constraint checker could complete, as it reveals the percentile levels etc.
"assume you're trading smaller stocks" Not mainstream, yes.
"I've tested my algorithm with the slippage model the user @Luca suggested _ Might be worth you trying it as well @Blue." Will do if you can point to or copy the slippage line for me to try.
"Rebalances twice a week, order execution is still limit/stop order" Is that on open, close or both?
Would like to ask @QT if you can say, on average about how many stocks are typically in play?
Where is QT's notebook?
@PB: Thanks again for that detective work
@GF: Thanks for the round_trip info. And for others interested in more about round_trip, and for myself, an additional reference on that: https://www.quantopian.com/posts/round-trip-trade-analysis
"Guys, keep your IP". Indeed. And for any young folks dabbling here, so you won't feel left out: IP merely stands for Intellectual Property, a widely used term for original innovations.
@KP: "Did you start in the notebook and migrate to a backtest, or pretty much develop all from the backtest?" For me, almost all backtest. I would not be surprised if I have started an average of about 30 backtests per day, sometimes noticing something and needing to restart, so nowhere near all completed, but that comes to 49,275 backtests. Thank you Quantopian for the longsuffering patience and vision.
"just curious what tools best helped in your exploration" I also depend on some "Third party essentials", tools mentioned there for efficiency in finding code, comparing code & storing results, so incredibly useful.
@Someone said: "Is there a backtest that could be attached to illustrate". There is a backtest. I hear that everyone has their price. I would like to find out if that's true of me.
"What slippage and commission models are being used?" Default in my case, because none are specified.
"nice if Q provided the date on which a given backtest was screened automatically" Agreed, and imagine a page of results of those screenings (everyone's), sorted to show where our various algos land.
@Blue said: "Anyone know of a list of all of the different tear sheets that can be run?" Anyone? It's possible some lighter ones might finish.
Regarding @JS, it's true, the communication between us offline was beneficial to both of us, continuing good times.
@DW: "isn't a zero sum game. Someone else winning doesn't make me a loser" Along the lines of a caviat, I was only willing to share my RH code with @JS because it was no longer running live. Imagine if two are running the same code, whichever orders make the queue first would have an advantage however unlikely. We all like to add our own creative genius, make changes, make it ours. Anyway, you mentioned some things worth thinking about for sure.
@JV: "dumbfounded by the deafening silence of the Q team" Yeah it might be that even 100k is tiny for them although @QT is another story, scales fine.
"bug or limitations of the limit/stop order routines? I think it is time for" I recall a post where some were claiming a bug so I looked into it using track_orders and candidly I think they were simply misunderstanding the way limit orders operate and Q is spot-on. I could be wrong but I haven't seen actual information indicating a bug yet and would welcome some if possible (in a separate post). Throughout time, appreciate your interesting perspectives by the way.
"Emerging Managers program _ another avenue for you guys to consider _ trade it in IB using probably zipline live and after six months present your broker statements" Hmm. With zipline-live, would I be able to screen stocks for a few fundamentals exactly like Q's pipeline filter basics combined with a screen on their close prices ?
@DB: Thank you for clarifying the above.
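
Here is the sketch promised above for cancelling open limit/stop orders before calling Optimize. A minimal illustration assuming the IDE built-ins (get_open_orders, cancel_order, order_optimal_portfolio); objective and constraints stand in for the algo's own Optimize API setup:

    def before_opt(context, data):
        # Optimize errors out on open limit/stop orders, so flush them first
        for asset, orders in get_open_orders().iteritems():
            for o in orders:
                cancel_order(o)
        # objective, constraints: defined elsewhere in the algo
        order_optimal_portfolio(objective, constraints)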

I need to find some of @QT's magic in scalability. This is only 10k starting capital but it seems to double in a little over 3.5 months. The increased beta is yet another of those many things along the way that belong in the category box labeled 'surprise'. Skepticism makes sense of course because, long-term, a large part of this came about by wandering off the well-traveled logical path.

Thanks @Blue! That would have taken you a while. :)

If I were you, I'd do whatever it takes to figure out how to trade this with my own money on RH, if these curves are indeed replicable in the real market, and, thanks to compounding, retire in no time. I congratulate both of you, but I still remain on the sceptical side. Here are QT's notebooks/backtest results.

@Blue,

I believe this is the slippage model from Luca that QT is referring to. Can be found here.

import math  # needed for math.copysign below

DEFAULT_VOLUME_SLIPPAGE_BAR_LIMIT = 0.025

def initialize(context):
    set_slippage(myVolumeShareSlippage(volume_limit=DEFAULT_VOLUME_SLIPPAGE_BAR_LIMIT, price_impact=0.1))


class myVolumeShareSlippage(slippage.SlippageModel):
    """Model slippage as a function of the volume of shares traded.
    """

    def __init__(self, volume_limit=DEFAULT_VOLUME_SLIPPAGE_BAR_LIMIT,
                 price_impact=0.1):

        self.volume_limit = volume_limit
        self.price_impact = price_impact

        slippage.SlippageModel.__init__(self)

    def process_order(self, data, order):
        volume = data.current(order.asset, "volume")

        max_volume = self.volume_limit * volume

        # price impact accounts for the total volume of transactions
        # created against the current minute bar
        remaining_volume = max_volume - self.volume_for_bar
        if remaining_volume < 1:
            # we can't fill any more transactions
            # raise LiquidityExceeded()
            return None, None

        # the current order amount will be the min of the
        # volume available in the bar or the open amount
        cur_volume = int(min(remaining_volume, abs(order.open_amount)))

        if cur_volume < 1:
            return None, None

        # tally the current amount into our total amount ordered;
        # total amount will be used to calculate price impact
        total_volume = self.volume_for_bar + cur_volume

        volume_share = min(total_volume / volume,
                           self.volume_limit)

        price = data.current(order.asset, "close")

        simulated_impact = volume_share ** 2 \
            * math.copysign(self.price_impact, order.direction) \
            * price
        impacted_price = price + simulated_impact

        if order.limit:
            # this is tricky! if an order with a limit price has reached
            # the limit price, we will try to fill the order. do not fill
            # these shares if the impacted price is worse than the limit
            # price. return early to avoid creating the transaction.

            # a buy order is worse if the impacted price is greater than
            # the limit price; a sell order is worse if the impacted price
            # is less than the limit price
            if (order.direction > 0 and impacted_price > order.limit) or \
                    (order.direction < 0 and impacted_price < order.limit):
                return None, None

            # For "non-marketable" limit orders (limit price has been crossed)
            # the final price must be the limit price.
            # To distinguish between marketable and non-marketable limit
            # orders we can use the following check:
            # if both open and close price are below/above (buy/sell)
            # the limit price, the order is marketable;
            # if the open price is above (or below, for a sell) the limit price
            # and the close price is below (or above, for a sell) the limit price,
            # then the order is non-marketable
            open_price = data.current(order.asset, "open")
            # Note: no need to check the close price if we are here
            non_marketable = (order.direction > 0 and open_price > order.limit) or \
                             (order.direction < 0 and open_price < order.limit)
            if non_marketable:
                impacted_price = order.limit

        return (
            impacted_price,
            math.copysign(cur_volume, order.direction)
        )

I remain skeptical, too. Assuming there is no bug in the backtester, I'm wondering if the prices used for execution are realistic. I recall a while back being cautioned about something called bid-ask bounce that can skew backtest results. I don't have time to dig into it now, but for example:

https://quant.stackexchange.com/questions/1348/control-for-bid-ask-bounce-in-high-frequency-trade-data

There is a suggestion that it can lead to spurious mean-reverting behavior. Might be worth considering here.

On the topic of monetization of this and other low capacity algos, I'd like to repeat here a post from the thread https://www.quantopian.com/posts/contributors-and-reviewers-needed-for-zipline-live

"Regarding business model, the goal is individual profit by combining community quant and coding skills. In the context of often asserted low capacity winning algos of interest only to smaller accounts, I am willing to give my coding time in exchange for implementing an algo in my personal account that will compensate me for that time. I expect our community has members who would take the opposite side of this trade, sharing algo knowledge in exchange for an execution platform. Together, we build out the platform for public use and private benefit.

I am open to joining a club of quants and coders with such a mission statement."

@JA Thanks * 2. Running that slippage model now.

TradeInfo for the @QT backtest above

Meanwhile a 6 month report on mine cleared the tower.


@QT,

Man that is one impressive curve! Honestly I hope it's all real, but I remain fascinatingly sceptical.

@Blue,

Note that Jess recommended (in a separate thread) using 5 bps slippage, which I believe is the default for equities.
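
For completeness, setting that model explicitly would look like this sketch; I'm assuming zipline's FixedBasisPointsSlippage class is what the 5 bps recommendation maps to:

    def initialize(context):
        # 5 basis points per fill, capped at 10% of minute volume
        set_slippage(slippage.FixedBasisPointsSlippage(basis_points=5.0,
                                                       volume_limit=0.1))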

@Joakim Just to clarify: neither the TradeInfo Blue shared above nor the backtest I shared is the algorithm I was originally displaying. I shared it to clarify what exactly crossing the spread was.

@JA I saw that too, and you're correct, it is the current default. As slippage is a big topic, I'd hope it could be discussed over here, where I'll post a slippage comparison.

@Gary, it is when looking under the hood that we can see if there is an engine. I took a look at the last notebook (provided only as an example), and the “round_trips=True” option reveals a lot.

The average net profit per trade came in at: $0.25. That is two dimes and a nickel per trade. Sure there is a profit of $32,365.32 on 129,728 trades. And since money is money, who cares if the machine has to work harder?

However, after noticing that the strategy had eliminated commissions and slippage, even if in a limit order scenario slippage should not apply, the overall picture changed considerably.

I re-ran the trading script with Q's default settings. All I got was an equity curve with low beta, low volatility steadily going into the abyss of oblivion with no bounce in sight. It is why I hate it when people do their simulations with no frictional costs. It distorts the picture considerably.

Also, the number of executed trades (129,728) averages out at 992 trades per day, spread out into 10,000 to 14,000 partial fills per day; in total, over 1.4 million transactions were executed. Since a limit order is good at the limit price or better, unless partial fills are done at the limit price, the strategy will benefit should it get, at times, a better price on those partial fills. So @Joakim's question holds.

This also casts a shadow on @Quant's strategy, due to the equity-curve similarities. @Quant, are you ready to display the backtest analysis with bt.create_full_tear_sheet(round_trips=True)? That is, if Q can run that line of code at all, since I think the analysis will simply time out.

@QT, I wasn't implying it was, but I'm still impressed with the curves on both of them, and with the code in your older algo that you shared (thanks for that - it'll take me days to figure out what it's doing, but it won't be an immediate focus for me). I don't expect you or @Blue to share your IP (I wouldn't either), and if there really is some real alpha there, you deserve to keep it to yourself (until it 'gets discovered'). I remain very sceptical though.

Could this be related?

@JA "Could this be related?" I addressed it there for you.

I may have missed this point, but have either of the two algos (@Blue's and @QT's) been backtested with the default slippage and commission models?

Mine has

Mine is default slippage and commissions also.

The “round_trips = True” backtest analysis option is most revealing at times. Among other things, it gives the gross profit as well as the net profit generated by a trading script.

Any portfolio has for equation: F(t) = F(0) + Σ(q∙Δp) – Σ(exp). Where F(0) is the initial capital, Σ(q∙Δp) is the gross profit generated by all the trading activity (+/-), and Σ(exp) is the total frictional costs (expenses) incurred in all the trading.

Therefore, the total profit Σ(q∙Δp) divided by n, the total number of trades, should give the gross profit per trade. The round_trips option also gives the average net profit per trade, which is (Σ(q∙Δp) − Σ(exp)) / n.

Now, if the gross profit and the net profit turn out to be equal, then you have a sure sign that no commissions were charged, since for those two numbers to be equal requires Σ(exp) = 0. This is what I was referring to in the above notebook.

We cannot see whether commissions were charged unless we have a look at the round_trips option. And over the years, I have learned to be very skeptical. It is why I like math, especially the equal and not-equal signs, because those are bold statements.

Correction requested:

Using the “round_trips = True” backtest analysis option, we get the total profit and the number of trades, and Σ(q∙Δp)/n gives the average profit per trade.

However, when calculating what is labeled “average trade net profit”, the “net” does not really hold. The answer it gives is the “average profit per trade”. This means that, whatever the simulation, frictional costs and commissions are not accounted for there; at least, they are not provided via the round_trips option:

[Σ(q∙Δp) – Σ(exp)] / n = Σ(q∙Δp) / n

has as its only solution: Σ(exp) = 0.
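
As a quick check with the numbers from the notebook above: Σ(q∙Δp)/n = $32,365.32 / 129,728 ≈ $0.2495, i.e. the $0.25 reported as the “average trade net profit”. The gross and “net” figures can only coincide when Σ(exp) = 0, which is exactly the point.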

As a first step, taking out the word “net” would at least make it a more accurate label.

It is frustrating if we cannot trust the numbers or the labels displayed in a backtest. It raises questions about every other number. Do we have to check them all to make sure we understand what is really presented? For instance, is the number of trades real, or is the “average trade net profit” simply badly labeled?

I have modified the code posted by @Quant on May 14 in this thread to remove what I felt were some minor logic errors (e.g. flush_portfolio was called at the closes instead of the opens, unlike the other calls). Here are the algo and backtest. Clearly, Q can't simulate the order book properly. I ran this on iBridgePy/IB paper and, of course, results were dismal, even after adding the transaction fees back to the P&L.

Any thoughts from the best and brightest?

from quantopian.pipeline import Pipeline, CustomFilter
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.factors import AverageDollarVolume, AnnualizedVolatility
import numpy as np
import pandas as pd

def initialize(context):
    # Rank the QTradableStocksUS universe by 30-day annualized volatility.
    pipe = Pipeline()
    volatility = AnnualizedVolatility(window_length=30)
    pipe.set_screen(QTradableStocksUS())
    pipe.add(volatility, 'VOL')
    attach_pipeline(pipe, 'pipe')
    schedule_function(flush_portfolio, date_rules.every_day(), time_rules.market_close())

    # Frictionless assumptions: zero spread, zero commission.
    set_slippage(slippage.FixedSlippage(spread=0))
    set_commission(commission.PerShare(cost=0.000, min_trade_cost=0.00))  # 0.0003 and 0.00 is about the most we can pay right now for this.

    # First probe/cancel/trade/flush cycle right after the open.
    schedule_function(test_waters_beginning, date_rules.every_day(), time_rules.market_open(minutes=1))
    schedule_function(flush_orders, date_rules.every_day(), time_rules.market_open(minutes=2))
    schedule_function(order_list, date_rules.every_day(), time_rules.market_open(minutes=3))
    schedule_function(trade_market, date_rules.every_day(), time_rules.market_open(minutes=3))
    schedule_function(flush_orders, date_rules.every_day(), time_rules.market_open(minutes=7))
    schedule_function(flush_portfolio, date_rules.every_day(), time_rules.market_open(minutes=7))

    # Then the same six-minute cycle repeated through the rest of the day.
    for i in range(7, 360, 6):
        schedule_function(test_waters_beginning, date_rules.every_day(), time_rules.market_open(minutes=i))
        schedule_function(flush_orders, date_rules.every_day(), time_rules.market_open(minutes=i + 1))
        schedule_function(order_list, date_rules.every_day(), time_rules.market_open(minutes=i + 2))
        schedule_function(trade_market, date_rules.every_day(), time_rules.market_open(minutes=i + 2))
        schedule_function(flush_orders, date_rules.every_day(), time_rules.market_open(minutes=i + 6))
        schedule_function(flush_portfolio, date_rules.every_day(), time_rules.market_open(minutes=i + 6))

    schedule_function(flush_portfolio, date_rules.every_day(), time_rules.market_close(minutes=1))
    

def before_trading_start(context, data):
    # Keep the 20 highest-volatility names for today.
    context.output = pipeline_output('pipe').nlargest(20, 'VOL')
    context.position_values = 1.0  # set but not used elsewhere

def limit_order_price(price, up):
    # Offset the limit price by 5 bps of price, floored at one cent,
    # truncating (not rounding) to the penny.
    delta = 0.0005
    min_delta = 0.01
    if delta * price < min_delta:
        if up:
            return price + min_delta
        else:
            return price - min_delta
    else:
        if up:
            return float(int(price * (1.0 + delta) * 100)) / 100.0  # truncate to the penny
        else:
            return float(int(price * (1.0 - delta) * 100)) / 100.0
        
    
def test_waters_beginning(context, data):
    # Straddle each name with a buy limit just below and a sell limit just
    # above the current price, probing for fills.
    context.currprice = data.current(context.output.index, 'price')

    context.order_list = []

    value = 10000 / len(context.currprice)

    for stock in context.currprice.iteritems():  # (sid, price) tuples
        try:
            # Changed from order_target to order_target_value: `value` is in
            # dollars here, as it is in the other functions below.
            order_target_value(stock[0], value, style=LimitOrder(limit_order_price(stock[1], False)))
            order_target_value(stock[0], -value, style=LimitOrder(limit_order_price(stock[1], True)))
        except:
            pass

def test_waters(context, data):
    # NOTE: defined but never scheduled in initialize(), so this is dead code
    # as posted. The original also compared (sid, price) tuples against
    # order_list entries captured at earlier prices, so the membership test
    # is done on sids here instead.
    context.currprice = data.current(context.output.index, 'price')

    value = 10000 / len(context.currprice)

    held = [s[0] for s in context.order_list]

    for stock in context.currprice.iteritems():
        if stock[0] in held:
            try:
                order_target_value(stock[0], value, style=LimitOrder(limit_order_price(stock[1], False)))
                order_target_value(stock[0], -value, style=LimitOrder(limit_order_price(stock[1], True)))
            except:
                pass

    for stock in context.currprice.iteritems():
        if stock[0] not in held:
            try:
                order_target_value(stock[0], value, style=LimitOrder(stock[1] - stock[1] * 0.05))
                order_target_value(stock[0], -value, style=LimitOrder(stock[1] + stock[1] * 0.05))
            except:
                pass

    context.order_list = []
        
def order_list(context, data):
    # Cancel whatever is still open, then record which of today's names we
    # actually hold, as (sid, price) tuples from this cycle's price snapshot.
    for stock, orders in get_open_orders().iteritems():
        for order in orders:
            cancel_order(order)

    for stock in context.currprice.iteritems():
        if stock[0] in context.portfolio.positions:
            context.order_list.append(stock)
       
    
def trade_market(context, data):
    # Allocate 75% of the portfolio across held names and 25% across the
    # "loose" names (screened today but not held), bracketed with stop-limits.
    order_list = context.order_list

    number = len(order_list)
    number_2 = len(context.currprice)

    loose = number_2 - number  # count of names not currently held

    for stock in order_list:
        try:
            order_target_percent(stock[0], 0.75 / float(number),
                                 style=StopLimitOrder(limit_order_price(stock[1], False),
                                                      stock[1] * 0.99))
            order_target_percent(stock[0], -0.75 / float(number),
                                 style=StopLimitOrder(limit_order_price(stock[1], True),
                                                      stock[1] * 1.01))
        except:
            pass

    for stock in context.currprice.iteritems():
        if stock not in order_list:  # tuples match here: same currprice snapshot as order_list()
            try:
                order_target_percent(stock[0], 0.25 / float(loose),
                                     style=StopLimitOrder(stock[1] - stock[1] * 0.05,
                                                          stock[1] - stock[1] * 0.10))
                order_target_percent(stock[0], -0.25 / float(loose),
                                     style=StopLimitOrder(stock[1] + stock[1] * 0.05,
                                                          stock[1] + stock[1] * 0.10))
            except:
                pass
    
def flush_orders(context, data):
    # Cancel every open order.
    for stock, orders in get_open_orders().iteritems():
        for order in orders:
            cancel_order(order)

def flush_portfolio(context, data):
    # Close every position.
    for stock in context.portfolio.positions:
        order_target_percent(stock, 0)

I guess, as @Granrt proposed, it is bid/ask bounce: as you increase the spread in test_waters, the P&L decreases. The question, then, is what spread is correct to use. In any case, it was very interesting to see how differently such an algo behaved live as opposed to in backtest.
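For anyone who wants to reproduce that sweep, a minimal sketch of how the offset can be pulled out into a parameter (DELTA is my name, not in the original, and the rounding is simplified versus the posted limit_order_price):

    # Hypothetical parameterization: re-run the backtest at several DELTA
    # values to see how sensitive the P&L is to the assumed fill distance.
    DELTA = 0.0005   # try 0.0005, 0.001, 0.002, 0.005 across runs

    def limit_order_price(price, up):
        min_delta = 0.01  # never offset by less than a penny
        offset = max(DELTA * price, min_delta)
        price = price + offset if up else price - offset
        return int(price * 100) / 100.0  # truncate to the penny, as the original does

The wider the limits sit from the quote, the less the zero-spread model flatters you, which is exactly the P&L decay described above.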

/L

I too played with this in paper trading, for the heck of it, and it had dismal results. I tried adding in the more advanced IB-specific tools for bracket orders, but no help...

If you turn commissions back on, it falls apart even in the backtest... so if you add them back AND can somehow keep the backtest good, then I think it would have a chance of surviving the paper trade.

My theory was that this was showing how powerful short-term mean reversion really is. But it could also be that the minute bars hide some of the "true" momentum move before the mean reversion actually occurs... that momentum stops you out before you ever get to the mean reversion... maybe some extra checks on the typical standard-deviation move would help... but I haven't circled back to those experiments yet.
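Something like this is what I had in mind for the standard-deviation check, as a rough sketch only (the window, the threshold, and the name passes_reversion_check are all mine):

    def passes_reversion_check(data, stock, lookback=30, k=2.0):
        # Only fade a move that is large relative to recent minute-bar
        # volatility, i.e. plausibly overextended rather than the start
        # of a momentum run that will hit the stops.
        prices = data.history(stock, 'price', lookback, '1m').dropna()
        if len(prices) < lookback:
            return False
        rets = prices.pct_change().dropna()
        return abs(rets.iloc[-1]) > k * rets.std()

It could be called at the top of the entry loop, skipping names that fail the check.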

While we're showing off our charts:

[Returns and stats charts attached]

I'm going to start live trading it very soon so I can put to rest whether it's simply exploiting some flaw in the slippage model. It uses limit orders, and it puts on a lot of small positions, so I wouldn't expect much real-world slippage, but who knows...

Nice chart, what's the error?

Not sure. Quantopian stopped all my paper trades on the 1st of the month. It wasn't an error on my side.

I want to point out that $0 commissions are never realistic, since you're never free of SEC and FINRA fees. This will give you a rough approximation of the bare minimum fees, even when using a no-commissions broker:

set_commission(commission.PerShare(cost=0.0006, min_trade_cost=0.01))

Seems negligible, but if your algorithm is buying and selling millions of shares it will add up: a million shares at $0.0006 apiece is already $600 in fees.

I want to point out that $0 commissions are never realistic

Lest anyone get the wrong idea, my algo has commission charges (and slippage), both at their defaults since no line overrides them.
Sorry to be repetitive, this is the sixth time I'm pointing this out, but my thread here has other backtests and discussions, which gets confusing.

For algorithms that trade often and/or in small amounts per trade, I think it is important to include the min_trade_cost=0.01 part, which I believe Q's default model doesn't, since SEC and FINRA fees both round up to the nearest cent.

I would suggest: consider the worst. Make frictional costs higher than the default. It makes the simulation more realistic, not less. The math goes like this:

F(t) = F(0) + Σ(H∙ΔP) – Σ(Exp.)

The long-term portfolio equation accounts for generated trading profits less all expenses. If frictional costs are a major issue for a trading scenario, it can only mean total profits barely cover total trading expenses, and such a strategy is not producing much. Just consider the case Σ(H∙ΔP) ≈ Σ(Exp.): there might not be much profit left to enjoy, to say the least.

The only way to compensate is a very large number of trades. A small edge, even one on the order of the commission size, can still be profitable, but certainly not in a big way on a minute-to-minute basis. The strategies designed here on Quantopian are not HFT.
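To put rough numbers on that (every figure below is made up for illustration):

    n_trades         = 50000    # a very large trade count
    shares_per_trade = 100
    edge_per_share   = 0.006    # gross edge, on the order of frictional costs
    cost_per_share   = 0.0056   # commission + regulatory fees, illustrative

    gross = n_trades * shares_per_trade * edge_per_share   # ~ sum(H*dP) = 30,000
    costs = n_trades * shares_per_trade * cost_per_share   # ~ sum(Exp.) = 28,000
    net   = gross - costs                                  # = 2,000

    # With the edge this close to the per-share cost, sum(H*dP) ~ sum(Exp.)
    # and the net barely clears zero, no matter how many trades pile up.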