Parallelize a Single Algorithm with Zipline

I'm using zipline to backtest a strategy involving thousands of equities with daily-level historical data. I'd like to speed this up by splitting the single backtest across all of my CPU cores, but as someone with shallow knowledge of both Python and CS, I'm not sure how to proceed, or whether this is even doable.

4 responses

Run separate instances of Python, splitting the equities among them.
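A minimal sketch of that idea using the standard library's `multiprocessing` module, so the sub-backtests run as separate processes rather than hand-launched interpreters. `run_backtest` here is a hypothetical stand-in: in practice it would construct and run a zipline algorithm restricted to its subset of symbols and return that run's performance output.

```python
from multiprocessing import Pool

def chunk(seq, n):
    """Split seq into n roughly equal sublists, one per worker."""
    k, m = divmod(len(seq), n)
    return [seq[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n)]

def run_backtest(symbols):
    # Hypothetical stand-in: a real version would run a zipline
    # backtest over just `symbols` and return its performance output.
    # Here it only reports which symbols it was given.
    return {"symbols": symbols, "final_value": 100000.0}

if __name__ == "__main__":
    universe = ["SYM%d" % i for i in range(1000)]  # placeholder tickers
    n_workers = 4
    with Pool(n_workers) as pool:
        # One independent sub-backtest per worker process.
        results = pool.map(run_backtest, chunk(universe, n_workers))
    # Each element of `results` is one sub-backtest's output,
    # which still has to be combined for analysis.
```

Note that this only works cleanly if the equity subsets are genuinely independent (separate capital, no cross-subset position limits); otherwise the combined result won't match a single monolithic backtest.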

Luke, have you heard about our plans for a new Research environment? This will allow you to do this kind of analysis in an IPython Notebook. Take a look here and you can sign up to reserve your beta spot:

Here is a sneak-peek demo of the environment:



I've signed up.

That said, I'm still looking for an easy way to parallelize now, perhaps using `multiprocessing.Pool` or some other multiprocessing tool. Pappu, your answer seems like a pain to implement, especially in the analysis phase, when I'll have to somehow combine a bunch of separate performance outputs into one. Is there nothing simpler?
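The combining step can be simpler than it sounds, at least for the equity curve. If each process backtests a disjoint subset with its own slice of the capital, the combined portfolio value on each day is just the sum of the sub-portfolios' values. A hedged sketch with plain dicts (in practice you'd extract these date-to-value series from each process's zipline performance output):

```python
from collections import defaultdict

def combine_portfolio_values(per_process_results):
    """Sum daily portfolio values across independent sub-backtests.

    Each element maps date -> portfolio value for one sub-backtest.
    Only valid when the sub-backtests trade disjoint equities with
    separate capital, so there are no cross-subset interactions.
    """
    combined = defaultdict(float)
    for result in per_process_results:
        for date, value in result.items():
            combined[date] += value
    return dict(combined)

# Toy outputs from two hypothetical sub-backtests
a = {"2013-01-02": 50000.0, "2013-01-03": 50500.0}
b = {"2013-01-02": 50000.0, "2013-01-03": 49800.0}
total = combine_portfolio_values([a, b])
# total["2013-01-03"] is 100300.0: the two equity curves summed per day
```

Statistics that aren't additive (Sharpe, drawdown, beta) would then be recomputed from the combined curve rather than averaged across processes.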

I was able to do this fairly easily a year or two ago using PiCloud. Unfortunately, it looks like that service has been shut down.