What is a runtime Error 32 - Broken Pipe?

I noticed that my algo was suddenly backtesting far too slowly, and after a few minutes I received the error named in the subject, which I've never seen before. It points to a line of code that simply calls a two-parameter function that has been working fine all day with no changes, and that was working while producing the sample end-of-day summary results printed below (multiple 1-minute trades per day).

What is a 'broken pipe'? Is there a current Q system problem?

2013-08-02 PRINT('1600', 'Size 0', 'PL 0', 'Cum 0', 'LBT 0', 'SBT 0', 'HLT -4950')
2013-08-05 PRINT('1600', 'Size 0', 'PL -3245', 'Cum 0', 'LBT 0', 'SBT 1', 'HLT -4840')
2013-08-06 PRINT('1600', 'Size 0', 'PL -952', 'Cum -3245', 'LBT 2', 'SBT 2', 'HLT -4950')
2013-08-07 PRINT('1427', 'Size 0', 'PL -5437', 'Cum -9634', 'LBT 1', 'SBT 3', 'HLT -4950', 'MAX HALT')
2013-08-07 PRINT('1600', 'Size 0', 'PL 265', 'Cum -9634', 'LBT 1', 'SBT 3', 'HLT -4950')
2013-08-08 PRINT('1600', 'Size 0', 'PL 1188', 'Cum -9369', 'LBT 0', 'SBT 0', 'HLT -4950')
2013-08-09 PRINT('1600', 'Size 0', 'PL 3469', 'Cum -8181', 'LBT 0', 'SBT 0', 'HLT -5060')
2013-08-12 PRINT('1600', 'Size 0', 'PL 3031', 'Cum -4712', 'LBT 0', 'SBT 2', 'HLT -4840')
2013-08-13 PRINT('1600', 'Size 0', 'PL 11091', 'Cum -1682', 'LBT 0', 'SBT 2', 'HLT -4620')
2013-08-14 PRINT('1600', 'Size 0', 'PL 4689', 'Cum 9410', 'LBT 0', 'SBT 1', 'HLT -4620')
2013-08-15 PRINT('1600', 'Size 0', 'PL 3125', 'Cum 14098', 'LBT 1', 'SBT 2', 'HLT -4620')
2013-08-16 PRINT('1600', 'Size 0', 'PL -853', 'Cum 17223', 'LBT 0', 'SBT 0', 'HLT -4510')
2013-08-19 PRINT('1600', 'Size 0', 'PL 3552', 'Cum 16370', 'LBT 0', 'SBT 1', 'HLT -4510')

The error must have happened sometime during 2013-08-20. I've been running backtests through this time period all day...

Thanks,

Mark

20 responses

I often get the "broken pipe" error when I have a debug breakpoint set, and it occurs exactly on the line where I set the breakpoint.
No breakpoint, no error... it must be a problem in the debugger.

To reproduce the "[Errno 32] Broken pipe" error, build the attached algorithm after setting, for example, a breakpoint at line 142 and a sufficiently long backtesting period (e.g. from 01/03/2002 to 01/11/2015).

[Backtest attached (clonable algorithm); performance metrics not loaded]
"""
    Wesley Gray, Tobias Carlisle  - Quantitative Value
           
    All ranks are summed and then we buy the top 25 stocks with the highest composite rank.
    The positions are rebalanced quarterly (every 63 trading days; one month = 21 trading days).
"""

import pytz
import math
from datetime import datetime, timedelta
import pandas as pd
import numpy as np
from scipy.stats import norm

def initialize(context):
    context.max_num_stocks = 200
    context.num_top_stocks = 25
    
    context.days = 0
    context.days_in_month = 21
    context.days_in_quarter = context.days_in_month * 3
    context.days_in_half_year = context.days_in_month * 6
    
    #: context.fundamental_dict holds the date:dictionary reference that we need
    context.fundamental_dict = {}
    
    #: context.fundamental_data holds the pandas Panel that's derived from the fundamental_dict
    context.fundamental_data = None
          

def before_trading_start(context): 
    context.days += 1
    
    mkt_cap_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap
        )
        .filter(fundamentals.company_reference.primary_exchange_id == "NYSE")
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)
        )
    
    ## We eliminate all stocks below the 40th percentile breakpoint of the NYSE by market capitalization.
    nyse_breakpoint = mkt_cap_df.T.quantile(.6).values[0]
                       
    fundamental_df = get_fundamentals(
        query(
            fundamentals.asset_classification.morningstar_sector_code,
            fundamentals.balance_sheet.cash_and_cash_equivalents,
            fundamentals.balance_sheet.current_assets,
            fundamentals.balance_sheet.current_liabilities,
            fundamentals.balance_sheet.invested_capital,
            fundamentals.balance_sheet.long_term_debt,
            fundamentals.balance_sheet.minority_interest,
            fundamentals.balance_sheet.net_ppe,
            fundamentals.balance_sheet.preferred_stock,
            fundamentals.balance_sheet.receivables,
            fundamentals.balance_sheet.total_assets,
            fundamentals.cash_flow_statement.depreciation_and_amortization,
            fundamentals.cash_flow_statement.financing_cash_flow,
            fundamentals.cash_flow_statement.free_cash_flow,
            fundamentals.cash_flow_statement.operating_cash_flow,
            fundamentals.earnings_report.diluted_average_shares,
            fundamentals.financial_statement_filing.period_ending_date,
            fundamentals.income_statement.cost_of_revenue,
            fundamentals.income_statement.net_income,
            fundamentals.income_statement.net_income_continuous_operations,
            fundamentals.income_statement.operating_income,
            fundamentals.income_statement.selling_general_and_administration,
            fundamentals.income_statement.total_revenue,
            fundamentals.operation_ratios.roic,
            fundamentals.valuation.enterprise_value,
            fundamentals.valuation.market_cap,
            fundamentals.valuation.shares_outstanding,
            fundamentals.valuation_ratios.book_value_per_share,
            fundamentals.income_statement.ebit,
            fundamentals.operation_ratios.gross_profit_annual5_yr_growth,
            fundamentals.operation_ratios.roa5_yr_avg,
            fundamentals.operation_ratios.avg5_yrs_roic,
            fundamentals.operation_ratios.roa,
            fundamentals.operation_ratios.current_ratio,
            fundamentals.operation_ratios.gross_margin,
            fundamentals.operation_ratios.assets_turnover,
            fundamentals.asset_classification.financial_health_grade
        )

        # No Financials (103), Real Estate (104), Utilities (207) and ADR
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
        .filter(fundamentals.asset_classification.morningstar_sector_code != 104)
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207)
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False)
        .filter(fundamentals.share_class_reference.is_primary_share == True)
        
        # Only NYSE, AMEX and Nasdaq
        .filter(fundamentals.company_reference.primary_exchange_id.in_(['NYSE', 'NAS', 'AMEX']))
        
        # Check for data sanity (i.e. avoid division by zero)
        .filter(fundamentals.valuation.market_cap >= nyse_breakpoint)
        .filter(fundamentals.valuation.shares_outstanding > 0)
        .filter(fundamentals.cash_flow_statement.free_cash_flow > 0)
        .filter(fundamentals.balance_sheet.invested_capital > 0)
        .filter(fundamentals.balance_sheet.cash_and_cash_equivalents > 0)
        .filter(fundamentals.balance_sheet.invested_capital != fundamentals.balance_sheet.cash_and_cash_equivalents)
        .filter(fundamentals.income_statement.total_revenue > 0)
        .filter(fundamentals.balance_sheet.total_assets > 0)
        .filter(fundamentals.balance_sheet.receivables > 0)
        .filter(fundamentals.income_statement.cost_of_revenue > 0)
        .filter(fundamentals.income_statement.selling_general_and_administration > 0)
        
        #TODO remove
        .limit(context.max_num_stocks)
    )
  
    #: Insert a new dataframe into our dictionary. Only year and month are used as key.
    current_month = get_datetime().replace(day=1)
    context.fundamental_dict[current_month] = fundamental_df
    p_man = {}
    p_fd = {}
    p_fs = {}
    if context.days > 1:
        context.fundamental_data = pd.Panel(context.fundamental_dict)
        p_man = get_beneish_scores(context.fundamental_data, current_month)
        p_fd = get_financial_distress_probability(context.fundamental_data, current_month)
        p_fs = get_financial_strength_probability(context.fundamental_data, current_month)
    
    # Check if we have enough data to go on...
    if p_man == None or len(p_man) == 0:
        return
    if p_fd == None or len(p_fd) == 0:
        return
    if p_fs == None or len(p_fs) == 0:
        return
    
    # ****************************
    # ***  Screening Criteria  ***
    # ****************************
    
    # 1. Identify Potential Frauds and Manipulators
    
    # 1.1 Accrual Screens
    # Scaled Total Accruals (STA) = (Net Income – Cash Flow from Operations) / Total Assets
    sta = (fundamental_df.loc['net_income'] - fundamental_df.loc['operating_cash_flow']) / fundamental_df.loc['total_assets']
    p_sta = sta.rank(ascending=False) / len(sta)
    
    # SNOA = (operating assets (t) − operating liabilities (t)) / total assets (t)
    # Scaled Net Operating Assets (SNOA) = (Operating Assets − Operating Liabilities) / Total Assets
    # where: 
    #         OA = total assets − cash and equivalents
    #         OL = total assets − (invested capital + minority interest + preferred stock)
    #         Invested capital = book common equity + ST debt − LT debt
    operating_assets = fundamental_df.loc['total_assets'] - fundamental_df.loc['cash_and_cash_equivalents']
    operating_liabilities = fundamental_df.loc['total_assets'] - (fundamental_df.loc['invested_capital'] + fundamental_df.loc['minority_interest'].fillna(0) + fundamental_df.loc['preferred_stock'].fillna(0))
    snoa = (operating_assets - operating_liabilities) / fundamental_df.loc['total_assets']
    p_snoa = snoa.rank() / len(snoa)
    
    comboaccrual = (p_sta + p_snoa) / 2
    p_comboaccrual = comboaccrual.rank() / len(comboaccrual)
    p_comboaccrual.name = 'p_comboaccrual'
    fundamental_df = fundamental_df.append(p_comboaccrual)
    
    # Eliminate Stocks at Risk of Sustaining a Permanent Loss of Capital
    # Eliminate all firms in the top 5 percent of the sample based on COMBOACCRUAL.
    fundamental_df = fundamental_df[[stock for stock in fundamental_df if fundamental_df[stock]['p_comboaccrual'] >= 0.05]]
    # Eliminate all firms in the top 5 percent of the sample based on PMAN.
    # Eliminate all firms in the top 5 percent of the sample based on PFD.


    # Step 2: Find Cheapest Stocks
    # To calculate PRICE we simply calculate EBIT enterprise value for each stock and then rank all stocks on PRICE.
    # PRICE = EBIT/TEV
    # TEV = market cap + total debt + preferred stock + minority interests – excess cash
    # excess cash = cash + current assets – current liabilities.
    excess_cash_adjust = fundamental_df.loc['current_liabilities'] - fundamental_df.loc['current_assets']
    tev = fundamental_df.loc['enterprise_value'] + excess_cash_adjust  
    ebit = fundamental_df.loc['ebit']
    ebit[ebit<0]=None
    price = ebit / tev
    p_price = price.rank() / len(price)
    p_price.name = 'p_price'
    fundamental_df = fundamental_df.append(p_price)
    # Here we screen on the cheapest 10% of the universe, or 100 stocks.
    fundamental_df = fundamental_df[[stock for stock in fundamental_df if fundamental_df[stock]['p_price'] <= 0.10]]
    
    # Step 3: Find Highest-Quality Stocks
    # Note "Long-Term Free Cash Flow on Assets" and "Margin Stability" are not cosidered
    # 5 years instead of 8 years period.
    # Simple instead of geometric average
    
    # 1. Franchise Power
   
    # Five-Year Return on Assets
    roa5_yr = fundamental_df.loc['roa5_yr_avg']
    p_5yr_roa = roa5_yr.rank() / len(roa5_yr)
    
    # Five-Year Return on Capital
    roc5_yr = fundamental_df.loc['avg5_yrs_roic']
    p_5yr_roc = roc5_yr.rank() / len(roc5_yr)
    
    # Five-year gross margin growth
    mg = fundamental_df.loc['gross_profit_annual5_yr_growth']
    p_mg = mg.rank() / len(mg)
    
    # Franchise Power
    p_fp = (p_5yr_roa + p_5yr_roc + p_mg) / 3
    
    
    quality = (p_fp + p_fs) / 2   
    
    
    update_universe(fundamental_df.columns.values)
    
    
def handle_data(context, data):
    pass


def get_financial_distress_probability(fundamental_data, current_date):
    """
        TODO
    """
    # 2. Identify Stocks at High Risk of Financial Distress
    
    # 2.1 Probability of Financial Distress (PFD). Calculate the PFD variables:
    # NIMTAAVG = weighted average (quarter's net income/MTA)
    #    MTA = market value of total assets = book value of liabilities + market cap
    # TLMTA = total liabilities / MTA
    # CASHMTA = cash & equivalents / MTA
    # EXRETAVG = weighted average(log(1 + stock's return) − log(1 + S&P 500 TR return))
    # SIGMA = annualized stock's standard deviation over the previous 3 months (daily)
    # RSIZE = log (stock market cap / S&P 500 TR total market value)
    # MB = MTA / adjusted book value
    # Adjusted book value = book value + 0.1 × (market cap − book value)
    # PRICE = log(stock price), where stock price is capped at $15
    
    # Weighted averages of four quarters of data (as in the original paper), calculated as:
    # XAVG = 0.5333 × X(t) + 0.2666 × X(t−1) + 0.1333 × X(t−2) + 0.0666 × X(t−3)
    
    pfd = {}
    current_data = fundamental_data[current_date]
    
    #: Find the score for each security
    for stock in current_data:
        stock_data = current_data[stock]
        
        total_liabilities = stock_data['long_term_debt'] + stock_data['current_liabilities']
        mta = total_liabilities + stock_data['market_cap']
        
        nimtaavg = stock_data['net_income'] / mta
        
        tlmta = total_liabilities / mta
        
        cashmta = stock_data['cash_and_cash_equivalents'] / mta
        
        exretavg = 0 #TODO
        sigma = 0 #TODO
        rsize = 0 #TODO        
        
        book_value = stock_data['book_value_per_share'] * stock_data['diluted_average_shares']
        # Adjusted book value = book value + 0.1 × (market cap − book value), per the formula above
        adjusted_book_value = book_value + 0.1 * (stock_data['market_cap'] - book_value)
        mb = mta / adjusted_book_value
        
        price = stock_data['market_cap'] / stock_data['shares_outstanding']
        if price > 15:
            price = 15
        price = math.log(price)
        
        
        lpfd = -20.26 * nimtaavg + 1.42 * tlmta - 7.13 * exretavg + 1.41 * sigma - 0.045 * rsize - 2.13 * cashmta + 0.075 * mb - 0.058 * price - 9.16

        # Calculate the probability of financial distress (PFD) value:
        pfd[stock] =  1/(1 + math.exp(-lpfd))
        
    return pfd


def get_last_year(fundamental_data, current_date):
    all_dates = fundamental_data.items
    
    utc = pytz.UTC
    last_year = utc.localize(datetime(year=current_date.year - 1, month = current_date.month, day = current_date.day))
    
    #: If one year hasn't passed just return None
    if last_year < min(all_dates):
        return None
    
    #: Figure out which date to use
    for i, date in enumerate(all_dates):
        if i == len(all_dates) - 1:
            continue
        if last_year > date and last_year < all_dates[i + 1]:
            break
        elif last_year == date:
            break
        
    return date


def get_financial_strength_probability(fundamental_data, current_date):
    """
        Measure Company Financial Strength (FS). It's based on the Piotroski Score.
    """       
    pfs = {}
    last_year = get_last_year(fundamental_data, current_date)
    if last_year == None:
        return None
    old_data = fundamental_data[last_year]
    current_data = fundamental_data[current_date]
    
    #: Find the score for each security
    for stock in current_data:        
        # ROA = return on assets
        roa = current_data[stock]['roa']
        fs_roa = roa > 0
    
        # FCFTA = free cash flow (t) / total assets (t)
        # FS_FCFTA = 1 if FCFTA > 0, 0 otherwise
        fcfta = current_data[stock]['free_cash_flow'] / current_data[stock]['total_assets']
        fs_fcfta = fcfta > 0
        
        # ACCRUAL = FCFTA – ROA
        accrual = fcfta - roa
        fs_accrual = accrual > 0
    
        # 2.2 Stability
        lever = (old_data[stock]['long_term_debt'] / old_data[stock]['total_assets']) - (current_data[stock]['long_term_debt'] / current_data[stock]['total_assets'])
        fs_lever = lever > 0
        
        # LIQUID = current ratio (t) − current ratio (t − 1)
        liquid = current_data[stock]['current_ratio'] - old_data[stock]['current_ratio']        
        fs_liquid = liquid > 0
        
        # NEQISS = net equity issuance from t − 1 to t
        fs_neqiss = current_data[stock]['shares_outstanding'] <= old_data[stock]['shares_outstanding']
    
        # 2.3 Recent Operational Improvements
        # ROA = year-over-year change in ROA
        d_roa = current_data[stock]['roa'] > old_data[stock]['roa']
        
        # FCFTA = year-over-year change in FCFTA
        old_fcfta = old_data[stock]['free_cash_flow'] / old_data[stock]['total_assets']
        d_fcfta = fcfta > old_fcfta
        
        # MARGIN = year-over-year change in gross margin
        d_margin = current_data[stock]['gross_margin'] > old_data[stock]['gross_margin']
        
        # TURN = year-over-year change in asset turnover
        d_turn = current_data[stock]['assets_turnover'] > old_data[stock]['assets_turnover']
    
        # 2.4 P_FS = Financial Strength
        fs = float(fs_roa + fs_fcfta + fs_accrual + fs_lever + fs_liquid + fs_neqiss + d_roa + d_fcfta + d_margin + d_turn) / 10
        pfs[stock] = fs
        
    return pfs


def get_beneish_scores(fundamental_data, current_date):
    """
        This method finds the dataframe that contains the data for the time period we want
        and finds the total Beneish score for those dates
    """
    pman = {}
    last_year = get_last_year(fundamental_data, current_date)
    if last_year == None:
        return None
    old_data = fundamental_data[last_year]
    current_data = fundamental_data[current_date]
    
    #: Find the score for each security
    for stock in current_data:        
        dsri = compute_dsri(current_data, old_data, stock)
        gmi = compute_gmi(current_data, old_data, stock)
        aqi = compute_aqi(current_data, old_data, stock)
        sgi = compute_sgi(current_data, old_data, stock)
        depi = compute_depi(current_data, old_data, stock)
        sgai = compute_sgai(current_data, old_data, stock)
        lvgi = compute_lvgi(current_data, old_data, stock)
        tata = compute_tata(current_data, stock)
        
        probm = -4.84 + 0.92 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi
        # Calculate probability of manipulation from PROBM
        pman[stock] = norm.cdf(probm)
        
    return pman


def compute_dsri(current_data, old_data, sid):
    """
        Days' sales in receivable index
    """
            
    current_dsr = current_data[sid]['receivables'] / current_data[sid]['total_revenue']
    old_dsr = old_data[sid]['receivables'] / old_data[sid]['total_revenue']
        
    return current_dsr / old_dsr
    
    
def compute_gmi(current_data, old_data, sid):
    """
        Gross margin index
    """
        
    current_gm = (current_data[sid]['total_revenue'] - current_data[sid]['cost_of_revenue']) / current_data[sid]['total_revenue']
    old_gm = (old_data[sid]['total_revenue'] - old_data[sid]['cost_of_revenue']) / old_data[sid]['total_revenue']
    
    return old_gm / current_gm


def compute_aqi(current_data, old_data, sid):
    """
        Asset quality index
    """
 
    # Other L/T Assets [TA-(CA+PPE)]
    current_other_lt_assets = current_data[sid]['total_assets'] - (current_data[sid]['current_assets'] + current_data[sid]['net_ppe'])
    old_other_lt_assets = old_data[sid]['total_assets'] - (old_data[sid]['current_assets'] + old_data[sid]['net_ppe'])
    
    if old_other_lt_assets == 0:
        return 0
    
    current_aq = current_other_lt_assets / current_data[sid]['total_assets']
    old_aq = old_other_lt_assets / old_data[sid]['total_assets']
    
    return current_aq / old_aq


def compute_sgi(current_data, old_data, sid):
    """
        Sales growth index
    """
    
    return current_data[sid]['total_revenue'] / old_data[sid]['total_revenue']


def compute_depi(current_data, old_data, sid):
    """
        Depreciation index
    """
    
    current_aq = current_data[sid]['depreciation_and_amortization'] / (current_data[sid]['depreciation_and_amortization'] + current_data[sid]['net_ppe'])
    if current_aq == 0:
        return 0
    
    old_aq = old_data[sid]['depreciation_and_amortization'] / (old_data[sid]['depreciation_and_amortization'] + old_data[sid]['net_ppe'])
   
    return old_aq / current_aq


def compute_sgai(current_data, old_data, sid):
    """
        Sales and general and administrative expenses index
    """
    
    current_sga = current_data[sid]['selling_general_and_administration'] / current_data[sid]['total_revenue']
    old_sga = old_data[sid]['selling_general_and_administration'] / old_data[sid]['total_revenue']
    if old_sga == 0:
        return 0
    
    return current_sga / old_sga


def compute_lvgi(current_data, old_data, sid):
    """
        Leverage index
    """
    
    current_lvg = (current_data[sid]['long_term_debt'] + current_data[sid]['current_liabilities']) / current_data[sid]['total_assets']
    old_lvg = (old_data[sid]['long_term_debt'] + old_data[sid]['current_liabilities']) / old_data[sid]['total_assets']
    
    if old_lvg == 0:
        return 0
    
    return current_lvg / old_lvg


def compute_tata(current_data, sid):
    """
        Total accruals to total assets
    """

    return (current_data[sid]['net_income_continuous_operations'] - current_data[sid]['operating_cash_flow']) / current_data[sid]['total_assets']

There was a runtime error.

I hope that the Quantopian team will prioritize this issue; it's a real bottleneck! I reported it for the first time in January.
The Quantopian platform looks very promising, but without a debugger that works properly, it is very hard to do serious development.

You've hit the issue on the head; this error is a bug in the debugger. I've gone ahead and filed it internally, and we'll take a look to fix it.

I'm not quite sure yet what's causing the message and I'm sorry for the troubles. Once we patch it, I'll follow up in this thread.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

I've been running into several broken pipe errors recently, any progress?

Unfortunately the error appears to be random: the same breakpoint will work maybe 85 percent of the time, then it will throw an intermittent error. It does appear to be related to print statements more often than not. I set up a date-and-time logical test that prints 'OK' once the specified date and time are reached, and I put the breakpoint on that print statement to stop processing at that date and time. Again, the error appears to be random on that print-statement breakpoint. I have been testing all day and have also set other breakpoints within the code on 'regular' statements past the initial date-and-time if-statement breakpoint, for example...

if date == 140506 and time == 1320:
    print('OK')

Broken Pipe on stackoverflow

Judging from the Stack Overflow discussion of buffers, SIGPIPE, and keep-alives, it seems a process was already abandoned/timed out and doomed, but the user doesn't find out until the breakpoint is hit later.
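
For anyone wondering what the error actually means at the OS level, here is a minimal local Python sketch (run outside Quantopian) of how errno 32 arises: writing to a pipe whose read end has already gone away.

import os
import errno

read_fd, write_fd = os.pipe()
os.close(read_fd)                   # the reader disappears (e.g. a timed-out process)

try:
    os.write(write_fd, b'data')     # the writer only finds out on its next write
except OSError as e:
    # Python ignores SIGPIPE by default, so the failed write surfaces as
    # an OSError with errno 32 (EPIPE): the familiar "[Errno 32] Broken pipe".
    print('[Errno %d] Broken pipe' % e.errno)
    assert e.errno == errno.EPIPE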

In my experience this error occurs when the backtesting period is quite long, e.g. from 01/03/2002 to 01/11/2015, and when the code has several if-blocks.

If your servers run on a Linux/Unix system, maybe this could help: http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-06/3823.html

+1

I get it all the time too

Is there anything new about this problem?
I reported it for the first time 4 months ago.
I think you should prioritize solving it: without a working debugger it's impossible to develop seriously.

The "broken pipe" problem doesn't occur anymore in my algorithm!

I had too many calls to get_fundamentals() in before_trading_start(). I limited them by introducing date logic (in practice, get_fundamentals() is performed only once a month) and now the debugger works!

Maybe the reason for the "broken pipe" was too much data in memory to be serialized to disk because of the debugging mode.
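
For anyone who wants to try the same fix, below is a minimal sketch of that once-a-month guard. It uses the same get_fundamentals/query API as the algorithm attached earlier in this thread; the single market_cap column and the 200-stock limit are just placeholders for your own query.

def initialize(context):
    # Month of the last fundamentals fetch; None forces a fetch on the first day.
    context.last_fetch_month = None
    context.fundamental_df = None

def before_trading_start(context):
    today = get_datetime().date()
    current_month = (today.year, today.month)
    # Reuse the cached dataframe for the rest of the month.
    if context.last_fetch_month == current_month:
        return
    context.last_fetch_month = current_month
    # Hit the fundamentals database only once per calendar month.
    context.fundamental_df = get_fundamentals(
        query(fundamentals.valuation.market_cap)
        .filter(fundamentals.valuation.market_cap != None)
        .limit(200)
    )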

Workaround:
If it is basically a timeout after some number of minutes (maybe built into the browser, not sure), then...
Suppose you have a breakpoint with a condition to break if the date is near the end of a two-year backtest, and it hits 'Broken Pipe'.
Set another breakpoint on another line to break at around half of that time, e.g. str(get_datetime().date()) == '2014-05-01'.
When that first breakpoint is hit, it should reset the timer, allowing the run to make it to the next one.
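
A hypothetical sketch of the two conditional-breakpoint expressions just described (the second date is only a placeholder for whatever date you actually need to inspect):

# Breakpoint 1, around the midpoint of the run; its only job is to reset the timeout:
str(get_datetime().date()) == '2014-05-01'

# Breakpoint 2, the one you actually care about, near the end of the run:
str(get_datetime().date()) == '2015-10-30'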

Any updates on this?
Still running into these at times.

I just ran into this issue as well while backtesting over 9 years. I needed the debugger to inspect a variable, so I changed my backtest period to the three days I knew would populate the variable I wanted to check. That solved the problem, so it does seem to be an issue with the debugger and the length of the backtest. Even with the backtest at three days, though, it hung for almost two minutes before the breakpoint triggered and let me use the debugger, which seemed odd since it was just a simple data.history call for 30 days of price data.

Just to bump up this post, the bug isn't fixed yet...

Also having this issue, and it really hurts algorithm development time. It seems to happen with a test that has a long backtest period, or one that simply takes a long time.

Nope. Not fixed at all:/

@Danny Zilberg, could you elaborate on the issues you are seeing? This thread is quite old and much has changed with the code and the platform. This also is not an issue that gets reported to our support team, so other than the replies above there isn't a lot for our engineering team to go on. Do you have an algo you could attach? Thanks!

Hi, sorry for the late reply. I moved on with my algorithm and thought it was an internet issue.
Now I've encountered the same problem. If I set the breakpoint at a certain date, far ahead in the backtest, the broken pipe error comes up. If not, the algorithm runs until it reaches the error that I am trying to find.
For some reason I'm not able to attach that specific algorithm (it only shows 5 of my algorithms).
Sorry and thanks