running backtest locks up my browser

Upon running the attached backtest, my browser froze up. I'm using:

Firefox 56.0.2 (64-bit)

Windows 10 Home

# Backtest ID: 5a0574856e6be745529846b1
6 responses

With lots of logging, I see my browser get stuck too. If I wait long enough, it catches up. FF 56 (64-bit) and Win 7.

First, try commenting out the last two lines, the logging.

Second, the unfilled-orders logging: the ordering at market close might be intended to reduce drawdown from partial fills and price drift. To further reduce the volume in the logging window while keeping that partial-fill limitation, the ordering can be run at any time other than market close (with get_prices just prior to it), followed by a canceling of open orders. This can't be done at market close itself: if cancel_oos were also run at market close, in the same minute just after the ordering, it would prevent orders from occurring at all, so you might simply move everything ahead of market close by a few minutes. Returns can be higher if you allow a few minutes before cancel_oos runs; you would have to decide how much of the increased drawdown from incomplete orders is acceptable.

def initialize(context):
    schedule_function(cancel_oos,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=[some number of minutes after ordering]))

def cancel_oos(context, data):
    # Cancel open orders
    oo = get_open_orders()
    for s in oo:
        for o in oo[s]:
            cancel_order(o.id)

Another idea to address drawdown by limiting partial fills: since track_orders sees when an order is a partial fill, you could inject some code there to make decisions at that point, for example, allowing the order to continue only if its filled percentage is above some threshold. I've found that beneficial.
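As a sketch of that threshold rule (a hypothetical helper, not part of track_orders itself), the decision can be reduced to a pure function of the order's filled and total amounts, mirroring the .filled and .amount attributes on a Quantopian order object:

```python
def keep_partial(filled, amount, threshold=0.5):
    """Return True if a partially filled order should be left to continue.

    Hypothetical helper: `filled` and `amount` stand in for the .filled
    and .amount attributes of an order object; `threshold` is the minimum
    fill fraction at which the order is allowed to keep working.
    """
    if amount == 0:
        # Nothing was requested, so there is nothing to keep working.
        return False
    # abs() so the rule works the same for sells (negative amounts).
    fraction = abs(float(filled) / amount)
    return fraction >= threshold
```

With a 0.5 threshold, an order 60% filled would be kept, while one 10% filled would instead be canceled (via cancel_order) to limit the partial-fill exposure.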

I think this is a problem that has been around forever. The algo IDE/backtester should be much more lightweight (or have the option to toggle into a lightweight mode), with detailed backtest analysis left to the research platform. Apparently, all of the transaction data are loaded, and they swamp the browser (and in some cases the entire OS, since RAM can be completely consumed). Logging may be an issue, too.

@Grant Yeah, this problem has been around forever. I found that setting the commission/slippage models to 0 reduces memory usage a lot when loading transactions. You can even create tearsheets over longer time periods this way. Working on reducing the overall portfolio turnover rate also helps...
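In the Quantopian IDE, zeroing out those models amounts to two calls in initialize. A minimal sketch (this only runs inside the IDE, where set_commission, set_slippage, and the commission/slippage model namespaces are provided as built-ins):

    def initialize(context):
        # Zero-cost models, as described above, to cut the per-transaction
        # detail the browser later has to load
        set_commission(commission.PerShare(cost=0, min_trade_cost=0))
        set_slippage(slippage.FixedSlippage(spread=0))

Note this changes the simulation itself (fills become cost-free), so results will be more optimistic than with realistic models; it's a trade-off for lighter backtest loading.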

Some suggestions for Q:

--Allow backtests to be launched directly into the background, so that nothing gets loaded into the browser and bogs it down.
--Provide access to some high-level tabulated summary statistics once the backtest is run, along with the backtest ID (such that it could be copied and pasted into the research platform).
--If the total memory consumed by the stored backtest is incompatible with analysis in the research platform, alert the user (or put it in the table of high-level summary statistics).
--Provide a button to clear the memory consumed by a full backtest, freeing it up. Frankly, it is shameful to provide a pig of a browser-based app that can consume all of the RAM of a PC, forcing it to start using swap memory. Imagine visiting Facebook or Google and having your PC bog down until the browser is exited. So a better approach would be simply to fix the problem.
--Consider a "run backtest in notebook" option, so that users could avoid the browser altogether, and be able to run analysis straightaway.
--Provide an e-mail (or text) notification option that a backtest is complete or has crashed (for long-running ones).

Improving the backtest experience is high on our list. We're aiming to do a pretty comprehensive revamp of the backtest experience - to provide a much more useful feedback loop, and also to improve the overall workflow and page performance. We're still designing that experience internally, but the feedback in this thread is super helpful, and we'll definitely take it on board.


Thanks Abhijeet -

I would add to the list to be able to launch multiple backtests in parallel programmatically from the research platform, and then be able to pull in the results and analyze them. I realize that you might have to throttle/cap such functionality, so your system doesn't get overwhelmed, but it would be nice to be able to do some parameter space exploration/optimization (or gross over-fitting!) from a research notebook directly.