OLMAR implementation - fixed bug

I fixed a bug -- please give it a try and post your results and suggestions here. I recommend only running it on daily tick data, due to the use of the batch transform to compute the moving average.

See http://icml.cc/2012/papers/168.pdf for a complete description of the algorithm.
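For readers who don't want to wade through the paper first, the core rebalancing step can be sketched in a few lines. This is my reading of the paper, with a crude clip-and-renormalize standing in for the paper's simplex projection; `olmar_update` and its argument names are mine, not the backtest code:

```python
import numpy as np

def olmar_update(b, prices, moving_avg, eps=10.0):
    """One OLMAR rebalancing step (sketch of my reading of the paper,
    not the exact backtest code). b: current weights, summing to 1."""
    x_tilde = moving_avg / prices                  # predicted price relatives
    x_bar = x_tilde.mean()
    denom = np.linalg.norm(x_tilde - x_bar) ** 2
    # step size: only rebalance if expected return falls below epsilon
    lam = 0.0 if denom == 0 else max(0.0, (eps - b.dot(x_tilde)) / denom)
    b_new = b + lam * (x_tilde - x_bar)
    # crude stand-in for the paper's simplex projection: clip and renormalize
    b_new = np.clip(b_new, 0.0, None)
    return b_new / b_new.sum()
```

The result is always long-only and fully invested (non-negative weights summing to 1), which is the behavior the fix was meant to restore.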



Hi Grant - What was the bugfix?


The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Hello Fawce,

Sorry...a bit fast and loose. The algorithm should be all-in (approximately zero cash balance) and long only. However, it was neither. I replaced the canned moving average transform, mavg, with one based on the batch transform, get_avg. I also added these lines to the start of handle_data:

if get_avg(data, context.stocks[0]) is None:  
    return  

This way, the algorithm doesn't do anything until sufficient time has elapsed to compute the average.
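The underlying pattern is just "bail out until the trailing window is full." A stand-alone mimic in plain Python (a hypothetical helper for illustration, not the actual Quantopian batch transform API) would be:

```python
from collections import deque

class TrailingAverage:
    """Returns None until window_length prices have accumulated,
    then the simple moving average over the window.
    (Hypothetical stand-in for the batch-transform-based get_avg.)"""
    def __init__(self, window_length):
        self.window = deque(maxlen=window_length)

    def update(self, price):
        self.window.append(price)
        if len(self.window) < self.window.maxlen:
            return None  # not enough history yet -- caller should bail out
        return sum(self.window) / len(self.window)
```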

I think everything is fixed now--I welcome testing and comments!

I've gone over to the batch transform for a variety of reasons. I'll post them here (perhaps along with some questions), as time allows.

By the way, it'd be really nice to have something like github's version control and change highlighting. Could I use github as an individual for free? Maybe I could put more mature algorithms there to keep track of what I'm doing (and share the changes with others). At this point, I'm not concerned with keeping my algorithms confidential.




Thanks for the explanation. You can do code diffs between full backtests. If you navigate to an algorithm, and then to the list of all backtests for that algorithm, you can choose any two backtests (check the boxes next to them), and then click compare. In other words, there is implicit versioning with each full backtest. You can also simply browse the source code associated with a full backtest by navigating to the full backtest results; in the small context menu in the top right there is a "view code..." option. I'd post a video, but I'm on the train and don't have great connectivity.

You can definitely set up your own public repo(s) on github. There is a chorus of users asking for git deployment to Quantopian - would love to know your thoughts on that idea.


Thanks John,

I'll check it out. I'll probably get going with github, too. Unrelated to Quantopian, I have another (Python) project I'd like to post there, as well.


Grant, awesome! Will your other python project be public? If so, let us know, everyone would love to check it out.

Yes...no problem making it public. It's a somewhat efficient and simple way of generating a list of IP addresses hosting websites based on reverse DNS lookup. There's a lot out there that you don't see with a google search. --Grant

Hello Peter,

Sorry about that...as I said in the post, I don't recommend minutely backtesting with the algorithm, as written. I'm pretty sure it is because of the batch transform (a known sluggishness problem). When I get the chance, I'll tinker around with it to see if I can get it to run in an equivalent fashion with the provided transform, mavg, which is fast. Or see if you can fix the problem and post the solution here.

In the meantime, please try some other securities, settings, and time frames, and post the results here...I'd be interested.


Thanks y'all. This does look promising. I wish you luck optimizing the code Peter.

Hello Peter,

See https://www.quantopian.com/posts/batch-transform-testing-trailing-window-updated-minutely for a discussion of the speed of the batch transform. As it is presently coded, it will bog down a backtest if called every minute. I've been thinking about how to apply the OLMAR algorithm on a minutely basis, and there are other issues, besides code execution speed. If you've read the paper (see the link above), you'll see that the algorithm was proven out on daily trade data.

At this point, I'm focused on working out the kinks for daily trading, using the batch transform, but would be glad to assist if you run into difficulty getting the OLMAR algorithm (or a variant) to run minutely. Also, Quantopian engineer/researcher Thomas Wiecki is familiar with the OLMAR algorithm, since we worked on it together, so he might be able to give you some guidance.



See attached for a backtest on individual stocks. I also changed context.eps to 100--the algorithm seems to be working.


@Grant: Just getting back to this algo.

Is this the only fix? Seems like you could just use a counter that doesn't access the mavg() until the window is full. It'd be more efficient to use the iterative built-in mavg() rather than code your own using batch_transform. Or maybe there is something I'm missing?



Hello Thomas,

Good to hear from you...hope all's well.

Yes, the only fix...the code seems to be working. I haven't received any complaints.

My main motivation in switching to a custom moving average via the batch transform is that I will be able to customize it (e.g. add weights). Also, to implement the so-called BAH(OLMAR) extension of the basic algorithm, it seems natural to me to grab the full trailing window of data (e.g. 30 trading days) and then compute the moving average over the range of periods. Also, I'm not so interested in minutely trading, so calling the batch transform once per day seems efficient enough.

Take care,



Some additional input...I'm open to suggestions regarding computation of the moving average for the OLMAR/BAH(OLMAR) algorithms (subject to my comments above). Efficiency could become an issue if we take the next step and apply walk-forward optimization (discussed in blog.quantopian.com/parameter-optimization/).

Is there an alternative to the batch transform to extract a trailing window of data? Or will the various shortcomings of the batch transform be fixed eventually?



Actually, I'm planning on using OLMAR as an example for optimization (not walk-forward yet, however). I'll share my results.

As to the batch_transform:
* There is an option to turn the filling off and give you the raw data (set clean_nans=False).
* We actually did some modifications that should speed it up. I think it's close to the maximum now. However, creating an array will always be slower than an iterative version that doesn't need to do that; there is nothing we can do about that.


Thanks Thomas,

Presumably, this is the usage for the batch transform decorator:

@batch_transform(refresh_period=R_P, window_length=W_L, clean_nans=False)


Also, should this be on the help page? If so, I'll put in a request.

I look forward to your optimization example.


Seems like there has been a follow-up paper by the same authors. Looks interesting:

Someone notified me about Quantopian and this thread. It is really interesting!
Thanks for implementing and backtesting the algorithms!

In the past several years, we have designed several trading algorithms for research.
All details are listed on our project site at http://www.cais.ntu.edu.sg/~chhoi/olps/index.html.

Comments and discussions are welcome.

Hi Bin,

It's fantastic to have the author of the algorithm here, thanks for dropping in!

I'm currently playing around with the OLMAR algorithm and testing the influence of the parameters.

Did you have a chance to look at our implementation? Grant and I have gone over the logic many times and I think it's correct, but it would be reassuring to have your validation, since you probably spent even more time thinking about this :). Specifically, it seems that the normalized b vector mostly puts all eggs in one basket (i.e. the vector contains one 1 and the rest are 0s). Is that intended?

We set our portfolio long-only. The normalization is intended to satisfy the constraint of non-negative weights, which is ignored during the derivation of the algorithm (proposition).

In some extreme cases, it does happen that the vector contains one 1 and the rest are 0s. We are still looking for methods to control this behavior. Currently, I think tuning the parameter can make it less extreme.

Grant: I found another (critical) bug. The way we were using the batch_transform is not valid since one can only call it once in every handle_data() call. The reason is that calling the batch_transform also appends the data frame to the window, so if you call it multiple times it will get confused. Sorry for not making this point clear in the documentation, we'll update that accordingly.

I also changed the eps value to be in a better range (as Bin suggested) and the portfolio is now distributed much more evenly. I think this looks pretty good now!

Thanks Thomas,

I'll take a look when I get the chance. Seems the updated code should be posted on github, right?


Ah, that's right. Thanks for reminding me.

I'm actually doing some more clean-ups and trying to convince myself 100% it's working correctly. Will update once I'm confident.


Sounds good. Once you get it debugged and checked out, you might want to do a new post to the Quantopian community forum, as well, with a link to the github repository.

By the way, that's really odd, unexpected behavior by the batch transform. I'd assumed that it would behave like a normal function...multiple calls would not generate corrupted data. Can you just fix the problem, rather than documenting the seemingly wacky behavior? At a minimum, it ought to generate an error, right?


@Grant: Yeah, you are totally correct, it's really unintuitive. This just occurred to me but I agree we should probably just fix it.


Another consideration for the OLMAR algorithm is handling of missing tics (e.g. in the case of thinly traded securities or suspended trading of a security in the portfolio).


Here is a major refactored version of this algorithm. I'm confident that it's doing the correct thing now.

There is another version of this that uses the universe feature here.

Finally, I added the algorithms to the github: https://github.com/quantopian/quantopian-algos

Hello Thomas,

I just cloned the algorithm you posted immediately above and ran it without changes. When I look at the Position Values, I see CASH bouncing around versus time between some pretty big numbers, including negative ones (magnitude ~$100K-$200K). It should be ~$0, so that it models a long-only, all-in, non-margin account, right?

Any idea what's going on?


Hi Grant,

You're right -- cash staying near $0 is the correct behavior. Seems like it must be a bug in the order filling (I rewrote it a bit) :-/. Maybe we should restore your rebalance_portfolio function? I did think there was a problem with it not achieving the desired portfolio, but I might be wrong. Can you try that?


Yes...when I get the time, I'll read through your updated code to understand the changes (might learn some Python tricks, too...bonus!). Then, I'll do some testing to sort out what's going on and let you know.

For now, I'll leave the OLMAR plus universe selection to somebody else to check. I posted some comments and concerns there, as well, which you've no doubt seen.



Hello Thomas,

I attached a backtest...I only call the batch transform once per handle_data call...seems that I get roughly the same results as you, except that the swings in the CASH value are smaller. I haven't had much time lately to work on this...perhaps someone else can pick it up and continue improving the code.


Hi Grant,

OK, there was indeed a bug in the rebalance_portfolio function (init was always True), thanks for pointing that out! Attached is the fixed version.

Good...looks like we obtained the same result...comforting. --Grant

@Grant: Agreed. That basically means that your algorithm was correct all along.

I found that the VolumeSlippage model can cause many problems since the portfolio may not be successfully rebalanced. I thus switched out the model for the FixedSlippage one, which produces more consistent results. I also added plotting of the step-size.

Hello Thomas,

Yeah...if a re-balancing order doesn't get fulfilled during the next trading day, then I can imagine that the algorithm could fall apart.

I've also realized that unless borrowing is allowed (negative cash balance), the OLMAR algorithm requires a small side fund of cash to fund the re-balancing. Maintaining the side fund will cut into returns a bit, since it is not invested. Or if borrowing is allowed, it won't be free, right?

Working on this algorithm has me wondering if there is a more fundamental, underlying measure of the tendency of a collection of securities to mean-revert. And there should be an associated characteristic mean-reversion time scale with associated statistics, right? The OLMAR algorithm is taking advantage of a kind of time-dependent "inefficiency" in the market. Any idea what's going on?


Hi Grant,

I just finished reading the paper, and this algorithm seems really interesting. I thought about your question, and I wonder if the characteristic is volume.
I would imagine that when a stock is losing momentum (volume), the price would stop increasing and mean reversion becomes much more likely on the next day.

Hello Ben,

If you are interested in digging deeper into the problem formulation, here's a collection of references:

On-Line Portfolio Selection with Moving Average Reversion

PAMR: Passive aggressive mean reversion strategy for portfolio selection


Efficient Projections onto the l1-Ball for Learning in High Dimensions

Efficient learning of label ranking by soft projections onto polyhedra


In the OLMAR paper, the author uses an ad hoc technique of smoothing out the variability in the mean reversion time scale (see the BAH(OLMAR) discussion). Basically, the optimization problem is formulated for a price moving average computed over a fixed-length trailing window, but then the algorithm is augmented by computing the expected return over a series of window lengths, and then weighting the portfolio re-balancing accordingly. In effect, I think this approach adapts to changing mean-reversion time scales. However, if the time scale could be predicted, then the optimum window length could be selected every day. Or perhaps better, the weights of a weighted moving average (e.g. exponential) could be adjusted daily.

Nobody has implemented and posted the BAH(OLMAR) algorithm on Quantopian...give it a try!

Regarding incorporating the effect of volume into the algorithm, it should be straightforward in Quantopian to switch from the mavg transform to vwap in the OLMAR algorithm. I'll try it, or give it a go yourself.



Here's an updated version of the algorithm. I've exposed the volume, so you can play with it, if you want (I don't have time now). I prefer to use the batch transform to grab the trailing window of data, however you could modify the code to use the built-in transforms (e.g. mavg, vwap, etc.). The main drawback of the batch transform (as I've coded it) is that it is too sluggish to run on minute data.


@Grant I made a simple modification to use the moving volume weighted average instead of moving average price. Here's the result.


Nice! Note that numpy also has a built-in weighted average (http://docs.scipy.org/doc/numpy/reference/generated/numpy.average.html#numpy.average).
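For example (made-up prices and volumes, just to show the call), a volume-weighted average via numpy.average:

```python
import numpy as np

# hypothetical prices and volumes for one security over three days
prices = np.array([10.0, 20.0, 30.0])
volumes = np.array([1.0, 2.0, 3.0])

# weighted average: (10*1 + 20*2 + 30*3) / (1 + 2 + 3)
vwap = np.average(prices, weights=volumes)
```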

If you want to give this result some visibility, feel free to do a separate post. So far, Thomas Wiecki and I have collaborated...good to have someone else looking at it.


Please excuse a newbie question but.. I see cost/trade is set at 0 in Ben's algo above. If the cost was set at $8/trade I presume it would be shown like this


And by changing that parameter in the code, my backtest went from more than +250% growth to less than -100%. Is this correct?

Also, how is the benchmark calculated and what is it actually?

Many thanks


Yep...commissions have a dramatic effect...I've never tested it, but my understanding is that the code you show above will result in an $8 charge per trade per security.

Regarding the benchmark, it is the S&P 500 (but Quantopian may be using SPY as a proxy...not sure). Again, I haven't tested it, but I believe it is the return for an all-in investment in the index at the very start of the backtest (no commission & no slippage).

I'm seeing a problem with the commissions, by the way. For set_commission(commission.PerTrade(cost=0.99)), I see no effect, but for set_commission(commission.PerTrade(cost=1)), there is a strong impact...a bug? I'll send in a question.


@Grant I would be interested in working on the BAH version but I am not too sure about the details of the implementation.

This part:
"for each period, we treat the individual OLMAR with a specified w ≥ 3 as an expert and combine multiple experts' portfolios weighted by their historical performance."

Perhaps you can clear things up for me. Thanks


I'll get back to you in a day or two...I think I know what the author does. We can also contact him.


@grant Thank you for the customary brilliant, helpful reply. Appreciated.

re: Trade cost -- so many of the algorithms here are 'pure' calculations without consideration of the trade cost. Money management is such a vital component of successful trading that shouldn't more of the algorithms take this into account? Spread, slippage and trade cost can destroy so many otherwise brilliantly profitable algorithms.

These facts may also influence how you choose your universe.

Hello Ryan,

According to the help page, both slippage and commissions have default settings that effectively eat into return. So, unless the author of an algorithm has explicitly changed the settings, the Quantopian backtester automatically imposes slippage and commissions (however, I don't know if they are representative of real-world retail trading).

It seems that the OLMAR algorithm should incorporate the cost of re-allocating the portfolio in the decision to re-allocate (rather than just taking it on the chin for every trade). This may just be a matter of adjusting epsilon so that the threshold for the expected return is sufficiently high to overcome the cost of re-allocating. Also, since commissions are on a per trade per security basis, a larger portfolio will be more expensive to manage. I think that the OLMAR algorithm could be impacted by restrictions on trading odd lots, as well (not captured in the Quantopian model). And then there are taxes on the short-term capital gains, if the account is not tax-sheltered. And of course inflation eats into returns, too. And there is the opportunity cost of active trading in the first place (i.e. in the long run, it might be more profitable to spend the ~ 10 hours per week on your day job or on part-time contract/consulting work).



I forgot to add that a small side fund of cash is required for the re-balancing, so that no borrowing is required (or the cost of borrowing could be added in). This will eat into profits, as well.

I think that you are on the right track with your thinking...Quantopian has been advertising their upcoming move to live trading with Interactive Brokers (IB), so for any algorithm posted, there should be a hard look to sort out if it would work under real-world Quantopian/IB trading (which I expect initially to be more expensive on an operational basis than working with IB directly, unless IB is planning to share their profit with Quantopian or Quantopian is planning to operate at a loss for awhile to build up a customer base).


Thank you Grant. Your posts are always so interesting and helpful.

(Btw my comments weren't meant as a criticism of any of the outstanding brain power and amazing effort being expended here, just trying to get my head around implementation).

The reason why I was curious about trade handling for example was you might hedge losing trades and then trade out of them on the next entry signal to minimise any loss. Ie. Hedge a losing position, market continues to move against you. Close the profitable side on the new entry/reversal signal.

It would be less entries potentially and may improve profitability.

This money management dimension to the algorithm is as essential as the pure math / price action science to create an initially positive play.


Yes...perhaps some combination of ETFs (including inverse ones, like SH) could be used in an algorithm that gets updated weekly/monthly/quarterly under a high-cost, unleveraged retail trading scenario.


... more thinking out loud... perhaps this algorithm could have multiple 'threads' and open orders / rebalance lots based on a Fast & Slow MA


In the paper, the author outlines the "BAH(OLMAR)" approach, which utilizes a range of moving average window lengths (3-30 days, I think), effectively eliminating the window length as an adjustable parameter. To me, this may be the best approach. With fast & slow moving averages, there would still need to be a way to adaptively pick the optimum corresponding window lengths.

I owe Ben V. (see above) an explanation of how I think the BAH(OLMAR) works...stay tuned.


I've been thinking about how the commission cost would affect the optimal portfolio allocation each period.
The OLMAR algo minimizes the norm distance between the weights, which does not necessarily lead to the optimal allocation when the commission cost is considered.

Hence, we consider the per-share commission model.
If S_t is a vector where each element S_t[i] is the number of shares of stock i in the portfolio at time t,
then we can minimize the norm distance between S_t and S_t+1.
The period's epsilon constraint would then be adjusted to ensure a positive return including the commission, as Grant mentioned.

Perhaps this is a better formulation?

Hi Ben & all,

There is a bug in the way Quantopian handles commissions. See https://www.quantopian.com/posts/odd-behavior-olmar-algorithm-and-commissions.

Regarding the BAH(OLMAR) approach, the author basically outlines a linear weighting of the portfolio, by cumulative return versus trailing window length (W days) for the moving average. For example, if the cumulative returns for W = 3, 4, & 5 were S(3), S(4), & S(5), respectively, with corresponding new portfolios b(3), b(4), b(5), then the best-estimate for the optimal new portfolio would be:

b = [ S(3)b(3) + S(4)b(4) + S(5)b(5) ] / [ S(3) + S(4) + S(5) ]

Bold typeset indicates a vector.

Thus, the portfolio re-balancing would be based on b.

Make sense?
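As a sketch of that weighting (the function and variable names below are mine, not the paper's):

```python
import numpy as np

def bah_combine(cum_returns, portfolios):
    """Combine per-window OLMAR portfolios, weighted by cumulative return.
    cum_returns: sequence of S(w), one per window length w.
    portfolios:  2-D array-like, one row b(w) per window length (rows sum to 1).
    Returns b = sum_w S(w) b(w) / sum_w S(w)."""
    S = np.asarray(cum_returns, dtype=float)
    B = np.asarray(portfolios, dtype=float)
    return (S[:, None] * B).sum(axis=0) / S.sum()
```

Since it's a convex combination of valid portfolios, the combined b still sums to 1 with non-negative weights.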



I've been working on the BAH version. So far I have everything set up except the calculation of cumulative returns for each window size.
If I understand correctly, I would need to keep track of the different portfolios that use OLMAR with different window sizes.
Not really sure how to do that yet. For now, I find the b_norm for each W and set the return for each W to be the same for all periods. Hence the b vectors are equally weighted.

I attached a backtest for W = [6,8,10].
I tried to set W = [10] but it didn't match the original OLMAR version, so there are definitely bugs somewhere.
EDIT: The algo with a single window actually matches the original OLMAR. I forgot to change the commission cost when comparing. Looks like we are good for now.

PS. Backtesting takes a while since the algo. needs to calculate multiple b_norm vectors for each period.

Hi Ben,

Looks like you've made some progress. I'll take a crack at it when I get the chance. The cumulative return for each w can be computed iteratively as a product, as described in the paper (see page 5, Algorithm 1 section, upper left-hand corner, and Section 2 of the paper).
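Concretely, the recursion is S_t = S_{t-1} * (b_t . x_t), where x_t are the period's price relatives. A minimal sketch with made-up numbers (the function name is mine):

```python
import numpy as np

def update_wealth(S_prev, b, x):
    """One step of the cumulative-return recursion from the paper:
    S_t = S_{t-1} * (b_t . x_t)."""
    return S_prev * b.dot(x)

# made-up two-asset example: equal weights over two periods
S = 1.0
for x in [np.array([1.1, 0.9]), np.array([0.95, 1.05])]:
    S = update_wealth(S, np.array([0.5, 0.5]), x)
```

One running S per window length w is all that's needed for the BAH weighting -- no need to store full return histories.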


I just finished working on the BAH. Here's a backtest for W = [3,4,5]. It should be noted that when I set the commission per share to 0.03, the returns are below the benchmark... perhaps the next step is to take into account the commission cost when rebalancing the portfolio.
Anyways, this was a great learning experience working on this!


Hello Ben,

Glad you made some headway. I recall that Thomas W. had some trouble with the per share commission model with this algorithm. You might switch to the per trade one to see what happens.

Also, as I understand, if you run it for W = [3,...,30] there should be a smoothing effect in the returns.

I've cloned your code and will have a look.


Hi all

I've been playing around with this algo for a short while. I'm trying to figure out how to get it to work for short positions though. Once a value of b_optimal is negative it rapidly becomes very large indeed leading to all sorts of side effects.

Anyone have thoughts on how to do this?

Hello Ronnie,

Not sure about that one. I sent an e-mail to the author of the paper describing the algorithm. If he doesn't post here, I'll pass along his response (with his permission).



I'm not sure the value of b_optimal should ever get negative, so that might be a bug.


Hi Thomas

Apologies for not clarifying! b_optimal won't be negative given the code as is, as you have a filter w = (v>0) to stop that. However, without that, it can be, and then, fed back in, it produces continuously bigger numbers. So no bug; sorry for the confusion.

I slept on this short problem overnight and wondered whether a possible interpretation is

if b_optimal for a stock is < N % then go short

with N being an arbitrary number based on your particular set of stocks and back test. That keeps the algorithm as is, but still offers a way of identifying short opportunities.

Still, I look forward to Grant's follow up :)

Hello all,

Paraphrasing Bin's response (private e-mail), the short version should ignore the simplex step. However, the portfolio then needs to be constrained (e.g. with a 130/30 strategy).
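For reference, the simplex projection step being skipped here can be sketched following the Duchi et al. paper in the reference list above (my own implementation sketch, not the exact code from the backtests):

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1},
    following Duchi et al. (sketch; not the exact backtest code)."""
    u = np.sort(v)[::-1]                 # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    # largest index where the sorted entry stays above the shift
    rho = np.nonzero(u * idx > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

Dropping this step is what allows negative (short) weights, which is why the portfolio then needs an explicit constraint like 130/30.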


Anyone ever figure out how to have both long and short positions for this algo?

Along the lines of Bin's response. Couldn't do something like this:

# Example portfolio vector as computed by OLMAR  
x = np.array([.2, .3, .5])

# Rank and compute cumulative sum  
In [12]: np.cumsum(sorted(x))  
Out[12]: array([ 0.2,  0.5,  1. ])

# Find the bottom 30% (can be less)  
In [13]: np.cumsum(sorted(x)) < .3  
Out[13]: array([ True, False, False], dtype=bool)

# Flip the sign of the bottom 30% and thus short them
# (note: this works here because x happens to be sorted ascending;
# in general the mask should be mapped back through np.argsort(x))  
In [18]: x[np.cumsum(sorted(x)) < .3] *= -1

In [19]: x  
Out[19]: array([-0.2,  0.3,  0.5])  

We might want to renormalize the long positions so that they sum up to 1 again.

Yes, I was thinking along those lines. However, the next time the algo is run, it'll have negative allocations for the current portfolio, which could result in weirdness when the update is applied and put through simplex_projection. I'll have to give it a try.

Another thought is to have essentially two algos running at once. One with a portfolio of long stocks, and another for shorting. The former would use the standard OLMAR and the latter would use a re-formulated one, with an allocation vector that sums to -1. It seems like mean reversion should work in both directions, right?

That's a good idea. I think we can run plain OLMAR and then convert the portfolio via -(1-b) only for the allocation part.

Hi Thomas,

I'll have to take a closer look. When we started working on this, I actually dug into all of the references, and the claim was that the solution is an exact one for the inequality constraint (see Section 4.2 of http://icml.cc/2012/papers/168.pdf). So, I'm wondering whether, if instead of a '>=' we have a '<=', everything works out. It seems like the solution for going short across an entire portfolio should be re-formulated, no?

Also, I was still thinking of trying a generic optimizer that'll handle inequality constraints, which would be an alternative approach.


Given that 'batch_transform' is deprecated and the above code does not run,
does anyone have an updated version of the OLMAR algorithm to share?
I am not able to work out from the documentation how to substitute 'batch_transform()' with 'history()'.

PS - Apologies, just found the answer at https://www.quantopian.com/posts/olmar-3-dot-0-using-new-order-and-history-features

Is this algorithm still working? I would love to try it out.

Hi Anna -

I could post an updated version, if it would be of interest. Note that the original version is long-only, so it is incompatible with the Quantopian contest and fund, if that's your angle.

Hi Grant,

Would love to see an updated version as well!

Here's a first-cut. Comments/questions & improvements welcome.

Here's another tweak. I increased the number of stocks to 300, but also increased the max position size to 10X equal weight:


I also changed to the standard Q Contest commission:

set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))  

It is interesting that over the long haul, it does OK, with an SR ~ 1 (but with a 40% max drawdown, which I suppose would be the killer as an investment, versus a leveraged long-short algo).

NOTE: Backtest is refusing to load. I've asked Q support to have a look.