error - TimeoutException: Call to before_trading_start timed out

Any idea what's going on? Something ain't workin' right...

TimeoutException: Call to before_trading_start timed out  
There was a runtime error on line 26.  
# Backtest ID: 57def24410cc041028364823
13 responses

This one completed, with no change to the code on my end. Why would before_trading_start hang for 5 minutes?

# Backtest ID: 57defd5eec20b21030ffad14

I've also had erratic timeout results in the past two or three weeks.
I might kick off a series of pipeline backtests with only a parameter change, and one or more will pause execution early in the run and eventually raise this error.
On a second attempt, all such runs have completed without any code change.

I found that my overnight run generated a new error:
InterfaceError: connection already closed
There was a runtime error on line 192.
Line 192 is: context.output = pipeline_output('ranked_stocks')
This code ran successfully with a different set of parameters. The parameters only affect the post-pipeline handling of the equities.

I recently tried to re-run code posted on https://www.quantopian.com/posts/the-q500us-and-q1500us#57cc0a2ead60fa742e0008e6, and it failed to complete, with the error:

Something went wrong. Sorry for the inconvenience. Try using the built-in debugger to analyze your code. If you would like help, send us an email.  

More weirdness...

Another sporadic error:

TimeoutException: Call to before_trading_start timed out  
There was a runtime error on line 44.  

Weird. I think it means that something is really broken under the hood: my before_trading_start calls aren't taking anywhere close to 5 minutes and then occasionally slipping over the edge. Something must be hanging.

import numpy as np
import pandas as pd
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar as mstar
from quantopian.pipeline.filters import Q1500US

def initialize(context):
    # set_commission(commission.PerTrade(cost=0))
    # set_slippage(slippage.FixedSlippage(spread=0.00))

    # parameters
    # --------------------------
    context.n_stocks = 200   # number of stocks
    context.N = 10           # trailing window size, days
    context.eps = 1.0        # optimization model parameter
    context.leverage = 1.0   # gross leverage
    context.pct_etf = 0      # ETF percent, 0 to 0.5
    context.min_a = 0.2      # min. allocation, percent of 1.0/len(context.stocks)
    # --------------------------
    schedule_function(get_weights, date_rules.week_start(days_offset=1),
                      time_rules.market_open(minutes=60))
    context.first_day = True
    context.n_periods = 0
    # context.bad_data = [sid(27357), sid(27358), sid(5328)]
    # Attach our pipeline.
    attach_pipeline(make_pipeline(context), 'my_pipe')

def make_pipeline(context):
    market_cap = mstar.valuation.market_cap.latest
    top_market_cap = market_cap.top(context.n_stocks, mask=Q1500US())
    return Pipeline(screen=top_market_cap)

def before_trading_start(context, data):
    """Called every day before market open."""
    output = pipeline_output('my_pipe')
    context.stocks_current = output.index.tolist()
    # context.stocks_current = [stock for stock in context.stocks_current
    #                           if stock not in context.bad_data]
    if context.first_day:
        context.stocks = context.stocks_current
        context.first_day = False
    record(leverage=context.account.leverage)
    # Count positions with a nonzero share amount.
    num_secs = sum(1 for pos in context.portfolio.positions.values()
                   if pos.amount != 0)
    record(num_secs=num_secs)

I found the same error in my algo. After going through the code for a long time, I believe the problem was that the pipeline DataFrame output was empty on some days. That made any lines in before_trading_start() and elsewhere that depended on context.output fail on the first empty day.

Any ideas on how to make the algo handle an empty pipeline output?
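One common guard is to fall back to the previous day's universe whenever the pipeline DataFrame comes back empty. A minimal, platform-independent sketch; the helper name `safe_universe` is mine, and in the algo it would wrap the `pipeline_output('my_pipe')` call:

```python
import pandas as pd

def safe_universe(output, fallback):
    """Return the pipeline universe as a list of assets, reusing the
    previous day's list when the output DataFrame is empty or missing."""
    if output is None or output.empty:
        return fallback  # empty pipeline day: keep yesterday's stocks
    return output.index.tolist()
```

Inside `before_trading_start()` this might look like `context.stocks_current = safe_universe(pipeline_output('my_pipe'), getattr(context, 'stocks_current', []))`, so the algo keeps trading its last known universe instead of crashing on the first empty day.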

I am having the same issue. Has anyone been able to find a workaround?

Thanks

I am having a similar problem: before_trading_start times out in the middle of a backtest. I don't think my before_trading_start itself takes that long, because in that case it would fail on the first day. I wonder if it is caused by an empty pipeline on that day, as mentioned by Richard. Has anyone confirmed whether this is the cause and whether there is a workaround?

Thanks

It's hard to tell directly from the error message what specifically is causing the timeout. If you're stuck and would like help debugging, shoot us a message at [email protected]. It may be an empty pipeline. It may be the size of the query. Or it could be something else. Seeing a replication will help us figure out what's going on.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

I posted this to the machine learning post thinking it was related to the Morningstar data issue, but I get the same issue even without the Morningstar data. This will hopefully add some more detail to the issue above:

From what I can see in backtesting, pipeline batches multiple days of factor calculations together, and if that arbitrary batching breaches the 5-minute before_trading_start timeout, the whole run fails. I wish there were a way for us to control how the batching gets done: even when we are sure that one day of factor calculations takes only, say, 100 secs, the arbitrary batching produces seemingly random timeouts.

The example below resulted in a timeout even though the pipeline run inside before_trading_start took under 250 secs for a 6-day batch.

2017-01-03 21:45 before_trading_start:1086 INFO started running pipeline on 2017-01-03 08:45:00-05:00
2017-01-03 21:45 compute:482 INFO weekday 2017-01-03 00:00:00+00:00 1
2017-01-03 21:45 compute:484 INFO 2017-01-03 00:00:00+00:00
2017-01-03 21:45 compute:489 INFO Training took 1.717320 secs
2017-01-03 21:45 compute:482 INFO weekday 2017-01-04 00:00:00+00:00 2
2017-01-03 21:45 compute:484 INFO 2017-01-04 00:00:00+00:00
2017-01-03 21:45 compute:489 INFO Training took 1.708877 secs
2017-01-03 21:45 compute:482 INFO weekday 2017-01-05 00:00:00+00:00 3
2017-01-03 21:45 compute:484 INFO 2017-01-05 00:00:00+00:00
2017-01-03 21:45 compute:489 INFO Training took 1.675035 secs
2017-01-03 21:45 compute:482 INFO weekday 2017-01-06 00:00:00+00:00 4
2017-01-03 21:45 compute:484 INFO 2017-01-06 00:00:00+00:00
2017-01-03 21:45 compute:489 INFO Training took 1.717600 secs
2017-01-03 21:45 compute:482 INFO weekday 2017-01-09 00:00:00+00:00 0
2017-01-03 21:45 compute:484 INFO 2017-01-09 00:00:00+00:00
2017-01-03 21:45 compute:489 INFO Training took 1.901481 secs
2017-01-03 21:45 compute:482 INFO weekday 2017-01-10 00:00:00+00:00 1
2017-01-03 21:45 compute:484 INFO 2017-01-10 00:00:00+00:00
2017-01-03 21:45 compute:489 INFO Training took 1.795630 secs
2017-01-03 21:45 before_trading_start:1089 INFO Time to run pipeline 246.63 secs
2017-01-03 21:45 before_trading_start:1124 DEBUG Number of secruities to be considered: 200
2017-01-03 21:45 get_best_portfolio:1052 INFO one good optimization done
2017-01-03 21:45 WARN sklearn/cross_validation.py:417: Warning: The least populated class in y has only 1 members, which is too few. The minimum number of labels for any class cannot be less than n_folds=3.
2017-01-03 21:45 WARN sklearn/cross_validation.py:417: Warning: The least populated class in y has only 2 members, which is too few. The minimum number of labels for any class cannot be less than n_folds=3.
2017-01-03 21:45 WARN Logging limit exceeded; some messages discarded
2017-01-04 21:45 before_trading_start:1086 INFO started running pipeline on 2017-01-04 08:45:00-05:00
2017-01-04 21:45 before_trading_start:1089 INFO Time to run pipeline 0.00 secs
2017-01-04 21:45 before_trading_start:1124 DEBUG Number of secruities to be considered: 200
2017-01-04 21:45 get_best_portfolio:1052 INFO one good optimization done
2017-01-04 21:45 before_trading_start:1151 INFO Time to run optimizer 33.54 secs
2017-01-04 21:45 before_trading_start:1156 INFO total weight: 1.0, total abs weight: 2.0, maxweight : 0.005, -0.005
2017-01-05 21:45 before_trading_start:1086 INFO started running pipeline on 2017-01-05 08:45:00-05:00
2017-01-05 21:45 before_trading_start:1089 INFO Time to run pipeline 0.00 secs
2017-01-05 21:45 before_trading_start:1124 DEBUG Number of secruities to be considered: 200
2017-01-05 21:45 get_best_portfolio:1052 INFO one good optimization done
2017-01-05 21:45 before_trading_start:1151 INFO Time to run optimizer 30.15 secs
2017-01-05 21:45 before_trading_start:1156 INFO total weight: 1.0, total abs weight: 2.0, maxweight : 0.005, -0.005
2017-01-06 21:45 before_trading_start:1086 INFO started running pipeline on 2017-01-06 08:45:00-05:00
2017-01-06 21:45 before_trading_start:1089 INFO Time to run pipeline 0.00 secs
2017-01-06 21:45 before_trading_start:1124 DEBUG Number of secruities to be considered: 200
2017-01-06 21:45 get_best_portfolio:1052 INFO one good optimization done
2017-01-06 21:45 before_trading_start:1151 INFO Time to run optimizer 29.93 secs
2017-01-06 21:45 before_trading_start:1156 INFO total weight: 1.0, total abs weight: 1.99999999999, maxweight : 0.005, -0.00499999999999
2017-01-09 21:45 before_trading_start:1086 INFO started running pipeline on 2017-01-09 08:45:00-05:00
2017-01-09 21:45 before_trading_start:1089 INFO Time to run pipeline 0.00 secs
2017-01-09 21:45 before_trading_start:1124 DEBUG Number of secruities to be considered: 200
2017-01-09 21:45 get_best_portfolio:1052 INFO one good optimization done
2017-01-09 21:45 before_trading_start:1151 INFO Time to run optimizer 32.78 secs
2017-01-09 21:45 before_trading_start:1156 INFO total weight: 1.0, total abs weight: 2.0, maxweight : 0.005, -0.005
2017-01-10 21:45 before_trading_start:1086 INFO started running pipeline on 2017-01-10 08:45:00-05:00
2017-01-10 21:45 before_trading_start:1089 INFO Time to run pipeline 0.00 secs
2017-01-10 21:45 before_trading_start:1124 DEBUG Number of secruities to be considered: 200
2017-01-10 21:45 get_best_portfolio:1052 INFO one good optimization done
2017-01-10 21:45 before_trading_start:1151 INFO Time to run optimizer 30.61 secs
2017-01-10 21:45 before_trading_start:1156 INFO total weight: 1.0, total abs weight: 1.99999999998, maxweight : 0.00499999999998, -0.00499999999997
2017-01-11 21:45 before_trading_start:1086 INFO started running pipeline on 2017-01-11 08:45:00-05:00
End of logs.
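Timing instrumentation like the calls in those logs can be factored into a small decorator so that each before_trading_start (and the optimizer inside it) reports its own wall-clock time. A sketch with a decorator name of my own; it uses the standard `logging` module, whereas on the platform one would call its `log` object instead:

```python
import functools
import logging
import time

def timed(fn):
    """Wrap a function so each call logs its wall-clock duration,
    making it obvious which day absorbed a multi-day pipeline batch."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        logging.info("%s took %.2f secs", fn.__name__, time.time() - start)
        return result
    return wrapper
```

Decorating `before_trading_start` (or the optimizer call) with `@timed` would make a run that is about to breach the 5-minute limit stand out in the logs.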

I'm having similar issues, and something I'm very confused by is the number of calls to the Factor.compute() method that happen on the first day; I don't see those calls on any other day. Is this by design?

For example, I have logs set in the compute function of the factors, and with a 7-day training window I see:

2016-01-04 05:45 before_trading_start:278 DEBUG Before trading  
2016-01-04 05:45 compute:183 INFO Training took 0.902551 secs  
2016-01-04 05:45 compute:185 DEBUG Accuracy on train set = 100.00%  
2016-01-04 05:45 compute:183 INFO Training took 0.865033 secs  
2016-01-04 05:45 compute:185 DEBUG Accuracy on train set = 100.00%  
2016-01-04 05:45 compute:183 INFO Training took 0.829860 secs  
2016-01-04 05:45 compute:185 DEBUG Accuracy on train set = 100.00%  
2016-01-04 05:45 compute:183 INFO Training took 0.826926 secs  
2016-01-04 05:45 compute:185 DEBUG Accuracy on train set = 100.00%  
2016-01-04 05:45 compute:183 INFO Training took 0.830506 secs  
2016-01-04 05:45 compute:185 DEBUG Accuracy on train set = 100.00%  
2016-01-04 05:45 compute:183 INFO Training took 0.836539 secs  
2016-01-04 05:45 compute:185 DEBUG Accuracy on train set = 100.00%  

I only expect the training to run once, but it seems to have run six times in a single call to before_trading_start(). This grows as the training window increases, but due to logging limits I'm not sure whether the growth is linear.

Pumplerod, this unpredictable behavior is sort of intentional. Pipeline batches several days together to improve overall performance (e.g. it runs the DB queries only once). However, if you do expensive calculations (in time or memory), this can lead to timeouts or memory errors: your algo may easily fit into the 5-minute window when only a few days are batched, but not when Pipeline batches several months into a single run during before_trading_start().

One suggestion for Quantopian would be to let us take more control over the batching process. The zipline API already supports this (see my other post here). It would be great if we could get that finer-grained control enabled in the backtester too.
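For reference, zipline's `attach_pipeline` accepts a `chunks` argument (an int or an iterator of chunk sizes) that bounds how many days get batched into one run; whether the hosted backtester exposes it I can't confirm. An iterator that computes the first day alone and then caps every later batch could be sketched as:

```python
from itertools import chain, repeat

def capped_chunks(first=1, cap=5):
    """Iterator of pipeline chunk sizes: compute day one on its own,
    then never batch more than `cap` days into a single run."""
    return chain([first], repeat(cap))

# Hypothetical usage, if the backtester passed this through to zipline:
# attach_pipeline(make_pipeline(context), 'my_pipe', chunks=capped_chunks())
```

Smaller chunks trade total backtest speed for a bounded per-call cost, which is exactly the knob you'd want against a 5-minute per-call timeout.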

+1

Thanks @Grant for sharing the algo. I just ran it without making any changes and it didn't time out, but I fully trust that it did for you (randomly?). It happens to me as well, seemingly randomly, so personally I find it very difficult to troubleshoot.

Maybe someone in Q Support or Engineering team could try as well, perhaps with a verbose debug logging option if available?

Is Q able to tell how long each algo spends in before_trading_start? If so, and a contest algo that normally spends less than a minute there suddenly times out, that hints to me that it's not the code itself but something else at fault, no?

Also, I wonder whether there's any pattern to contest algos being pulled because of timeouts. If they all tend to happen on the same day, then it may not be because of inefficient code?