Pipeline in Research: what are the run time limits?

My NB fails easily whenever I try to run a pipeline over too long a period or with too many factors. It's a pity, because pipeline is very useful in research, but with the current limitations it's difficult to get the most out of it.

May I know what the current limits are in the Research environment? Any advice on pipeline usage to improve performance?

I'll attach a NB here. It tries to collect 22 factors for 3000 equities over a period of 5 years (252*5 days).

Rough estimate of the memory needed:
22 * 3000 * 252 * 5 = 83,160,000 entries
64-bit floating point: 83,160,000 entries * 8 bytes = 665,280,000 bytes ≈ 635 MB

Is ~635 MB too much? (I know the real usage is more than that, but the order of magnitude should be correct.)
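For reference, the same back-of-the-envelope arithmetic in Python (nothing platform-specific, just the numbers above):

n_factors, n_assets, n_days = 22, 3000, 252 * 5
entries = n_factors * n_assets * n_days        # 83,160,000 entries
approx_bytes = entries * 8                     # 64-bit floats, 8 bytes each
print(entries, approx_bytes / 2.0**20)         # ~634.5 MiB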


Please, Q, any updates on what the technical limitations are for using Pipeline in Research? Thanks!

Hi Luca,

Sorry for the long response delay. Your original post got forwarded to me when it was first written, but I lost track of it in my holiday travels.

A couple general notes on resource constraints in research:

  • Resources are constrained by user rather than by notebook. This means that if you have lots of notebooks open, and each notebook is consuming a moderate amount of RAM, a computation might fail even though it would succeed if it were the only thing you were doing. We've started thinking more recently about how we can provide better visibility into resource consumption for our users. If you have thoughts on what that might look like, I'd be interested to hear them.

  • We haven't publicized definite resource limits in research, in part because we want to reserve the right to change them in the future. That said, the current resource limits are on the order of 4GB of RAM allocated per user.
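In the meantime, a rough way to see how much memory your own kernel is using is something like the following. This is just a sketch: it assumes psutil is importable in the research environment, and it only reflects the current kernel process, not your other open notebooks (which count against the same per-user limit).

import psutil

# Resident set size (RSS) of the current notebook kernel, in MiB.
rss_mib = psutil.Process().memory_info().rss / 2.0**20
print('%.1f MiB' % rss_mib)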

The peak resource usage of your Pipeline is probably a fair bit greater than what you've calculated. Here's my napkin math:

You're running your Pipeline from 2010-01-01 to 2015-01-01. That period contains a total of 1258 trading days:

In [2]: from zipline.utils.tradingcalendar import trading_day  
In [3]: from pandas import date_range  
In [4]: days = date_range('2010-01-01', '2015-01-01', freq=trading_day)  
In [5]: len(days)  
Out[5]: 1258  

In the 5-year period over which you ran your pipeline, there were a little under 12000 assets that existed for at least one day:

In [10]: from zipline.assets import AssetFinder  
In [11]: assets = AssetFinder('assets.db')  
In [12]: lifetimes = assets.lifetimes(days, include_start_date=False)  
In [13]: lifetimes.any().sum()  
Out[13]: 11874  

You're only ever doing a 1-day lookback window for shares_outstanding, but you have a 200-day lookback on SimpleMovingAverage. This means we'll end up loading a buffer of (1258 + 199) * 11874 64-bit floats for close price, and a buffer of 1258 * 11874 64-bit floats for shares outstanding.

This means that the "root terms" of your Pipeline are contributing ~260 MB of peak usage:

In [14]: bytes_from_close = ((1258 + 199) * 11874) * 8  
In [15]: bytes_from_shares = (1258 * 11874) * 8  
In [16]: from humanize import naturalsize  
In [17]: naturalsize(bytes_from_close + bytes_from_shares)  
Out[17]: '257.9 MB'  

In addition to the raw input buffers, each Factor in your pipeline will allocate a temporary output buffer of 1258 * 11874 64-bit floats, and each Filter will allocate the same number of 8-bit unsigned ints. (This gets chopped down significantly based on your screen as a postprocessing step, but peak memory usage is determined by the number of assets that existed at any point in your pipeline's execution.)

In total this means that we end up allocating 233 bytes per asset per day in temporary space:

bytes_per_asset_day = 0

# NOTE: Nothing in this expression is public API.
# Absolutely no guarantees that this will work moving forward.
non_input_nodes = filter(
    lambda term: not term.atomic,
    pipe.to_graph('', USEquityPricing.close.mask).nodes()
)
for node in non_input_nodes:
    if node.dtype.name in ('float64', 'int64'):
        bytes_per_asset_day += 8
    else:
        assert node.dtype.name == 'bool'
        bytes_per_asset_day += 1
bytes_per_asset_day
Out[25]: 233

This means that we end up allocating about 3.5 GB in temporary space at peak usage:

In [18]: naturalsize(1258 * 11874 * 233)  
Out[18]: '3.5 GB'  

These are all conservative lower bounds on memory usage, and they don't account for memory used by the rest of your IPython kernel process, or any of the other supporting machinery in research. Keeping all those things in mind, it's not surprising to me that this runs into memory limits.

The best way for you to reduce your high-water memory usage is to chunk up your run_pipeline calls into smaller increments and then concatenate them together (e.g. with pandas.concat). For example, if you want to run a Pipeline with lots of graph terms over a 5-year period, you might break that up into five 1-year run_pipeline calls, or even ten 6-month calls (there's a sketch of what this might look like after the list below). This reduces memory usage in two important ways:

  • Running over a shorter window straightforwardly translates to allocating fewer rows in the input/output buffers of each Pipeline term.
  • More subtly, running over a shorter window reduces the number of assets that are active during that window, which reduces the number of columns in the input/output buffers allocated to your Pipeline terms.
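Here's a rough sketch of what that chunking might look like in a research notebook. It assumes the standard run_pipeline helper from quantopian.research, and a make_pipeline() function that builds your Pipeline (that name is just a placeholder for whatever your notebook uses):

import pandas as pd
from quantopian.research import run_pipeline

def run_pipeline_chunked(pipeline, start, end, chunk='365D'):
    """Run `pipeline` over [start, end] in smaller pieces and concat them."""
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    results = []
    chunk_start = start
    while chunk_start <= end:
        chunk_end = min(chunk_start + pd.Timedelta(chunk), end)
        results.append(run_pipeline(pipeline, chunk_start, chunk_end))
        # run_pipeline is inclusive of both endpoints, so the next chunk
        # starts the day after the current one ends to avoid duplicate rows.
        chunk_start = chunk_end + pd.Timedelta('1D')
    # Each chunk shares the same (date, asset) MultiIndex layout, so a plain
    # row-wise concat stitches the results back together.
    return pd.concat(results)

result = run_pipeline_chunked(make_pipeline(), '2010-01-01', '2015-01-01')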

We actually do the chunking I describe here behind the scenes when you attach a Pipeline in the backtester. By default the first chunk is just 5 days, so that you get immediate feedback if there's a bug in one of your CustomFactors, and then subsequent chunks are 126 days (roughly half a trading year). The code that controls this lives in zipline.algorithm.TradingAlgorithm.
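Purely as an illustration (this is not the actual zipline implementation), a (5, 126, 126, ...) chunk schedule over a list of trading days could be laid out like this:

def chunk_bounds(days, first=5, rest=126):
    """Yield (start_day, end_day) pairs: one small first chunk, then equal chunks."""
    start, size = 0, first
    while start < len(days):
        end = min(start + size, len(days))
        yield days[start], days[end - 1]
        start, size = end, rest

# `days` is the DatetimeIndex of trading days computed earlier.
list(chunk_bounds(days))[:3]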


Thank you very much, Scott, for the detailed answer. I find this information very useful, and I now understand how to calculate and minimize Pipeline memory usage in my NB.