Back to Community
Catastrophic Failure (~150% drawdown) on mean-reversion algo

Hey guys,

I am working on a basic mean reversion algo, using the PPO indicator as a ranking factor. I'm seeing a huge drawdown that only occurs under certain conditions. I haven't been able to narrow down the cause just yet. I'm hoping the community might have some ideas.

Conditions:

  1. If you run this algo from 7/1/15 through the present, it chugs along just fine, with average returns (~15%).
  2. If you back the start date up and run from 6/1/15, there is a ~150% drawdown that starts the week of 12/21/15 and runs through 3/7/16. During that time my leverage jumps from around 1 to around -4.

I've looked at the logs during that window, and my only guess is that one of my short positions can't be filled and its price is skyrocketing, dragging my returns down.

Any ideas? Is it possible to prevent such a high drawdown?
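For intuition about the sign flip: Quantopian reports leverage as gross exposure over portfolio value, so a single runaway short can push portfolio value negative and make reported leverage negative. A minimal sketch with made-up numbers (not values from this backtest):

```python
def leverage(cash, longs, shorts):
    """Gross exposure / portfolio value. `shorts` is a negative market value.
    `cash` is the cash balance after establishing the positions."""
    portfolio_value = cash + longs + shorts
    gross_exposure = longs + abs(shorts)
    return gross_exposure / portfolio_value

# $100k capital: $50k long, $50k short, $100k cash -> leverage = 1.0
print(leverage(100_000, 50_000, -50_000))  # -> 1.0

# Now one $1k short position prints at ~200x its entry price.
# Portfolio value goes negative, so reported leverage flips negative.
print(leverage(100_000, 50_000, -249_000))  # -> about -3
```

The exact figure depends on the numbers, but the mechanism matches what you describe: the account isn't really levered -4x; the denominator went negative.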

"""
Mean reversion algo using pipeline and PPO indicator as ranking factor.
"""
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume, Returns, SimpleMovingAverage
 
def initialize(context):
    
    context.returns_lookback = 5
    context.long_leverage = .5
    context.short_leverage = -.5
    
    attach_pipeline(make_pipeline(context), 'mean_reversion_ppo')
    
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.025, price_impact=0.1))
    set_commission(commission.PerShare(cost=0.0075, min_trade_cost=1))
    
    schedule_function(rebalance, date_rules.week_start(0), time_rules.market_open(hours=1, minutes=30))
    
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close(minutes=1))

def make_pipeline(context):
    #create pipeline
    pipe = Pipeline()        
    
    #create our long and short SMAs
    sma_short = SimpleMovingAverage(inputs=[USEquityPricing.close],window_length=30)
    sma_long = SimpleMovingAverage(inputs=[USEquityPricing.close],window_length=100)
    
    #create a combined factor, sma_quotient. this factor is based on the PPO
    #indicator, which is the percentage version of the MACD. it tells us how
    #far the short mean sits from the long mean: the more extreme the value,
    #the further price is from its mean, and the more likely it is to revert.
    sma_quotient = sma_short/sma_long
    
    #create a screen to remove penny stocks
    remove_penny_stocks = sma_short > 1.0
    
    #rank our securities based on their PPO value
    sma_rank = sma_quotient.rank(mask=remove_penny_stocks)
    
    #add our rank factor to the pipeline
    pipe.add(sma_rank, 'sma_rank')
    
    #grab our longs and shorts
    shorts = sma_rank.top(50, mask=remove_penny_stocks)
    longs = sma_rank.bottom(50, mask=remove_penny_stocks)
    
    #add our longs and shorts to the pipe
    pipe.add(shorts, 'shorts')
    pipe.add(longs, 'longs')
    
    #set a screen to grab only the stocks we need for our pipeline
    pipe.set_screen(shorts | longs)     
    
    return pipe

def before_trading_start(context, data):    
    
    #get the securities that made it through the pipe(returns pandas dataframe)
    context.output = pipeline_output('mean_reversion_ppo')
    
    #get our longs and shorts from the pipeline
    context.longs = context.output[context.output.longs]
    context.shorts = context.output[context.output.shorts] 
    
    #get our list of securities
    context.security_list = context.shorts.index.union(context.longs.index)
    
    #convert them to a set, for faster lookup
    context.security_set = set(context.security_list)
    
    #log our top 5 securities with the highest sma rank
    log.info("Top 5 shorts:\n" + str(context.shorts.sort(['sma_rank'], ascending=False).head()))
    
    #log our top 5 securities with the lowest sma rank
    log.info("Top 5 longs:\n" + str(context.longs.sort(['sma_rank'], ascending=True).head()))
    
    
def assign_weights(context):

    #assign weights equally relative to leverage
    #example: (.5 leverage)/(17 stocks) = .029 leverage per stock
    #perhaps divide by the union of longs and shorts to make sure we use the entire portfolio?
    context.long_weight = context.long_leverage/len(context.longs)
    
    #assign short weights
    context.short_weight = context.short_leverage/len(context.shorts)
    
def rebalance(context, data):    
    
    assign_weights(context)
    
    open_orders = get_open_orders()
    
    #order long and short securities
    for sec in context.security_list:
        if sec not in open_orders and data.can_trade(sec):
           if sec in context.longs.index:
               order_target_percent(sec, context.long_weight)
           elif sec in context.shorts.index:
               order_target_percent(sec, context.short_weight)
               
    #sell everything that's not in the list
    for sec in context.portfolio.positions:
        if sec not in context.security_set and sec not in open_orders and data.can_trade(sec):
            order_target_percent(sec, 0)
            
    #log this week's long and short orders            
    log.info("This week's longs: "+", ".join([long_.symbol for long_ in context.longs.index]))
    log.info("This week's shorts: "+", ".join([short_.symbol for short_ in context.shorts.index]))  
    
def record_vars(context, data):
    
    longs = shorts = 0
    
    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            longs += 1  
        if position.amount < 0:
            shorts += 1
            
    record(leverage = context.account.leverage, long_count = longs, short_count = shorts)

def handle_data(context,data):
    pass
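For readers without the backtester handy, here is a rough standalone sketch of the ranking step in plain pandas, using synthetic data and hypothetical tickers. (Note the textbook PPO is (fast − slow) / slow × 100 on EMAs; the simple-MA ratio above is monotonic in the same quantity.)

```python
import numpy as np
import pandas as pd

# Synthetic daily closes for three hypothetical tickers (random walk near $100).
rng = np.random.default_rng(0)
closes = pd.DataFrame(
    100 + rng.normal(0, 1, size=(120, 3)).cumsum(axis=0),
    columns=["AAA", "BBB", "CCC"],
)

sma_short = closes.rolling(30).mean()
sma_long = closes.rolling(100).mean()
quotient = sma_short / sma_long  # how far the short MA sits from the long MA

# Cross-sectional rank on the most recent day: the highest-ranked names
# (short MA far above long MA) are short candidates, the lowest are longs.
rank = quotient.iloc[-1].rank()
shorts = rank.nlargest(1).index.tolist()
longs = rank.nsmallest(1).index.tolist()
```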
11 responses

Here's a pyfolio run for the algo you posted. It looks like there's something wrong with one of your long positions in the Dec 2015 timeframe. Maybe a missing split in the data (that would be my first guess).

(Notebook preview unavailable.)
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

I looked at the logs. I think ticker TROV_U is the culprit.

Thanks guys, running to work now. Will check these out this evening. Thank you!

This brings up an interesting question for me: how do we know there isn't systematic error in the adjustments, too small to detect but pervasive in backtesting?

Here are my findings:

  1. Yes, it does look like TROV_U.
  2. There is no stock named TROV_U, but there is a TROV.
  3. TROV split in 2012, but not in 2015 when the issue is happening. (http://performance.morningstar.com/stock/performance-return.action?p=dividend_split_page&t=TROV)

@Josh, is it possible that a split was applied that should not have been?
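As a purely hypothetical illustration of the mechanism I'm guessing at (made-up prices and ratio; I don't know how Quantopian's adjustment pipeline actually works):

```python
import pandas as pd

# A stock trading quietly around $10 in the affected window.
dates = pd.date_range("2015-12-21", periods=4, freq="D")
prices = pd.Series([10.0, 10.2, 10.1, 10.3], index=dates)

# If the data vendor erroneously records an N:1 reverse split mid-series,
# prices from that day on get multiplied by N, producing an absurd print.
bogus_ratio = 20_000  # hypothetical erroneous adjustment factor
adjusted = prices.copy()
adjusted.iloc[2:] *= bogus_ratio  # 10.1 -> 202,000.0: a $199,999-style spike
```

A jump like that on a shorted name is exactly what would explain the leverage blowup.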

"This brings up an interesting question for me: how do we know there isn't systematic error in the adjustments, too small to detect but pervasive in backtesting?"

Good question!

Maybe the question "how do we know that there are no small systematic errors?" is interesting.
But for me, the more important question is: "when is Q going to fix the hundreds of known big data bugs?", like the one above, which pipeline pushes into algos.

Vlad, this made me smile. I'm glad i'm not the only one who is frustrated.

Hey guys,

Still curious what the process is for getting this looked into on the data side. Is there a place to report bugs? Or is it possible for me to dig into this myself somehow?

Best,
Ian

Hi Ian,

This is a data bug. The price of $199,999.00 for TROV_U is clearly bad, but it's not yet clear to me why it's in the data. We'll have to do some digging to figure that out.

In general, the best way to report a data issue is by emailing in to [email protected]. Posting in the forums in tandem is a good idea so that others can see that it's been reported if they find the same issue.

Some data bugs are easier to fix than others. There will be a batch of data fixes rolling out soon but this bug won't be included in it simply because we don't yet know what's causing it.

In the meantime, my best suggestion would be to take a look at this notebook posted by Quantopian engineer Scott Sanderson. It's a very good filter for pipeline and I suspect it will take TROV_U out of your universe. It's a good baseline filter to remove a lot of securities that aren't usually good candidates for trading.
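The gist of such a baseline screen, sketched outside pipeline with plain pandas (thresholds and data are illustrative; the linked notebook's actual rules differ):

```python
import numpy as np
import pandas as pd

# Hypothetical data: 30 days of closes and volumes for six made-up tickers.
rng = np.random.default_rng(1)
tickers = ["S%d" % i for i in range(6)]
closes = pd.DataFrame(rng.uniform(0.5, 50.0, (30, 6)), columns=tickers)
volumes = pd.DataFrame(rng.integers(1_000, 2_000_000, (30, 6)), columns=tickers)

avg_dollar_volume = (closes * volumes).mean()  # average daily dollar volume
last_price = closes.iloc[-1]

# Keep only liquid, non-penny names. A screen along these lines would
# likely have dropped a thinly traded unit security like TROV_U.
universe = last_price.index[(last_price > 5.0) & (avg_dollar_volume > 1_000_000)]
```

In pipeline terms this corresponds to combining a price filter with something like `AverageDollarVolume`, as the notebook does in more depth.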

Thanks for reporting the issue.


Gotcha. Just getting started on the platform, so I wasn't sure if it was me or not. I've never met a platform without a bug though, so it's good to know what to keep an eye out for, and what to do with them :-)

Thanks for the help!