Back to Community
Backtesting performance

Is minute-bar backtesting considerably slow?
I normally use backtesting to optimize parameters and evaluate strategies, but a 5-year minute-based simulation takes a considerable amount of time with zipline/Quantopian.

I'm not sure whether the bottleneck is the live interface updating in Quantopian, zipline itself, or Python.

I started investigating zipline, but the lack of high-resolution data didn't help much. I did notice that zipline computes a large amount of information on each bar/step. Is there a way to enable a batch mode, or to postpone these computations until after the simulation?

I just got my IB account running again, so I will be able to get up to 1 year of high-resolution data. But before I dive into performance tuning, it would be nice to know whether someone has already looked into this. How much of the time is spent in the web interface, and how much in zipline?

Has anyone tried compiling it to C?

Thanks,
Lucas

3 responses

Hi Lucas,

Parameter optimization is a feature request we frequently receive, and we're working to make it easier. We've written a couple of posts on our blog about this topic that you may find useful.

Backtest performance is always a big consideration for us, and we're constantly investing in it. We recently increased the size of the universe you can test, from 100 securities to 200 securities in a single backtest, and these improvements make for faster testing. If you have a large universe, over a long period of time, with complex calculations, the backtest will slow down. To speed it up, I'd suggest using a smaller universe, a shorter time period, and fast-performing functions. On the last point, I'd suggest using history() instead of batch_transform, and pandas' rolling transformations .mean() and .std() instead of the built-in .mavg() and .stddev() functions.
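To illustrate the pandas side of that suggestion, here is a minimal sketch with a synthetic price series standing in for what history() would return on Quantopian (modern pandas uses the .rolling() accessor; zipline-era pandas used pd.rolling_mean and pd.rolling_std):

```python
import pandas as pd

# Synthetic price series; on Quantopian, history(30, '1m', 'price')
# would supply real per-security price data instead.
prices = pd.Series([10.0, 11.0, 12.0, 13.0, 14.0])

# Rolling mean and standard deviation over a 3-bar window.
# The first two entries are NaN until the window fills.
rolling_mean = prices.rolling(3).mean()
rolling_std = prices.rolling(3).std()

print(rolling_mean.iloc[-1])  # mean of the last 3 prices -> 13.0
```

These vectorized pandas operations are computed once over the window rather than re-derived from scratch each bar, which is where the speedup over the built-in per-bar transforms comes from.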

If you'd like help, I can take a look at your code to see what improvements can be made for faster backtesting. You can invite me via collaboration - my email is adeychm[email protected].

You can use Zipline to develop your strategy offline and fine-tune your parameters. If you're coding offline you will need another data source, like Yahoo Finance. And if you have any questions, you can post directly to the Zipline Google Group: https://groups.google.com/forum/#!forum/zipline

Cheers,
Alisa

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Hi Alisa,

Thank you for the reply. My algorithm was actually just a simple benchmark.

The algorithm places one order per bar and doesn't do much computation.

A 1-year backtest with 1-minute data took around 3.3 minutes for me; after I added one TA indicator and logging, it went up to 10.26 minutes.

For 5 years it takes a bit longer.

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
ma = ta.MA(timeperiod=30, matype=0)

def initialize(context):
    pass

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    # Implement your algorithm logic here.

    # data[sid(X)] holds the trade event data for that security.
    # context.portfolio holds the current portfolio state.

    # Place orders with the order(SID, amount) method.

    # TODO: implement your own logic here.

    orders = get_open_orders(sid(24))
    if len(orders)>0:
        order(sid(24), 50)
    else:
        order(sid(24), -50)
        
    ma_result = ma(data)[sid(24)]    
    log.info(str(data[sid(24)].datetime) + " , " + str(ma_result))
    record(ma_value=ma_result)    
           


TA is a Quantopian-specific wrapper we built to make it easier to use technical analysis in your algorithm. As we've grown, this wrapper has shown some gaps, and we're no longer actively supporting it. Instead, you should use "import talib" to access the open-sourced technical analysis library directly. Here are some examples showing the syntax of commonly used talib functions: https://www.quantopian.com/help#api-talib

That being said, it looks like your code is computing a simple moving average. It's easier to use the built-in mavg() function, or pandas' rolling transformations with history(), to get this value. I attached the code below; take a look, and hopefully it gets you a step in the right direction.

#ma = ta.MA(timeperiod=30,matype=0)

def initialize(context):
    pass

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    # You can use the built-in mavg() function. Example:
    mavg = data[sid(24)].mavg(30)

    # Or use history() with a pandas rolling transformation.
    # For larger windows, the pandas .mean() function is recommended:
    # prices = history(30, '1d', 'price')
    # mavg = prices.mean()

    orders = get_open_orders(sid(24))
    if len(orders)>0:
        order(sid(24), 50)
    else:
        order(sid(24), -50)
        
    ma_result = mavg 
    log.info(str(data[sid(24)].datetime) + " , " + str(ma_result))
    record(ma_value=ma_result)    
           
