Trading Strategy: Capitalize on panic in the SPY - please review

The idea is simple: the strategy capitalizes on market panic by buying into the sell-off and selling once the market calms down.

  • Getting in: when an n-standard-deviation event is observed in market returns, buy a predetermined amount, up to a leverage-based risk limit.
  • Getting out: when absolute returns stay below the calculated conditional standard deviation of the return distribution for several consecutive days, the market has returned to something like a normal state and the panic has passed, so the position is unwound.
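The two rules above can be sketched as plain functions (a minimal sketch; the names are illustrative, and `cond_sd` stands for the conditional standard deviation estimated by the algorithm):

```python
def should_enter(todays_return, cond_sd, n_sd=4):
    # enter on an n-standard-deviation move in either direction
    return abs(todays_return) > n_sd * cond_sd

def should_exit(recent_returns, cond_sd, calm_days=5):
    # exit once absolute returns have stayed below the conditional SD
    # for `calm_days` consecutive days, i.e. the panic has passed
    return len(recent_returns) >= calm_days and all(
        abs(r) < cond_sd for r in recent_returns[-calm_days:]
    )
```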

The conditional return distribution of the SPY is estimated using fully flexible probabilities, an approach that could be reused in other strategies as well.
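As a standalone sketch of the flexible-probability estimate (the function name is mine; the exponential time decay combined with a Gaussian kernel on the latest VIX return mirrors what the algorithm's code does):

```python
import numpy as np

def flexible_prob_stats(spy_rets, vix_rets, decay=0.97):
    """Conditional mean/SD of SPY returns under flexible probabilities:
    exponential time decay combined with a Gaussian kernel that
    up-weights days whose VIX return resembles the most recent one."""
    spy = np.asarray(spy_rets, dtype=float)
    vix = np.asarray(vix_rets, dtype=float)
    n = len(spy)
    # time decay: the most recent observation gets the largest weight
    time_w = decay ** np.arange(n - 1, -1, -1)
    # state conditioning: Gaussian kernel around the latest VIX return
    vvar = np.var(vix)
    state_w = np.exp(-(vix - vix[-1]) ** 2 / (2 * vvar))
    w = time_w * state_w
    w /= w.sum()  # normalize to a probability vector
    mean = np.dot(w, spy)
    var = np.dot(w, spy ** 2) - mean ** 2
    return mean, np.sqrt(var)
```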

This strategy obviously performs best for the patient investor, and it will not perform well in backtesting periods that end in recessions, since it will be carrying heavy paper losses. I am aware that the position would be subject to margin calls and possibly to premature shutdown, but by the same token these shortcomings are known in advance, so they should be easier to plan for in an implementation environment.

Comments and advice are greatly appreciated!

import numpy as np
import math
import pandas

def rename_col(df):
    df = df.rename(columns={'Settle': 'price'})
    df = df.fillna(method='ffill')
    df = df[['price', 'sid']]
    return df

def initialize(context):
    ###    parameters
    context.nobs = 252 # the number of observations to look back on, rolling window
    context.decay = 0.97 #decay in the exponent
    context.multSDIn = 4 #how many standard deviations should the return be to get in 
    context.bettingOpen = .2 # get into long position at bettingOpen*initial_cash
    context.bettingClose = -0.05 #close out long position 5% at a time
    context.lowVolDays = 5 #number of days to get out of the position
    context.maxlong = 3 #maximum long position as multiple of initial wealth (risk control)
    context.patience =  4 #  a value of 4 means that I need 5 times more days of low
    #                         vol when I have reached my risk limits
    context.startingIn = False
    context.vix = 'vix'
    context.spy = sid(8554)
    context.vixrets = []
    context.spyrets = []

    #an exponential decay for the fully flexible probabilities;
    #the most recent observation (end of the list) gets the largest weight
    context.timeShade = [ context.decay**e for e in range(context.nobs - 1, -1, -1) ]
    totalShade = sum(context.timeShade)
    context.timeShade = [ e/totalShade for e in context.timeShade ] #normalization
    #memory allocation for the vix shade
    context.vixShade = [0]*context.nobs

    #download the vix data from quandl (the fetch itself is omitted here;
    #on Quantopian this would be done with fetch_csv)

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    if not context.startingIn:
        #initial position: bettingOpen fraction of starting cash
        order(context.spy, math.floor(context.bettingOpen*context.portfolio.starting_cash / data[context.spy].price))
        context.startingIn = True
    #record today's returns (VIX data is assumed to come from the Quandl fetch above)
    context.vixrets.append(data[context.vix].returns())
    context.spyrets.append(data[context.spy].returns())
    if len(context.vixrets) > context.nobs:
        #kick out the oldest observations to keep a rolling window of nobs days
        context.vixrets = context.vixrets[1:]
        context.spyrets = context.spyrets[1:]
    if len(context.vixrets) >= context.nobs:
        #calculate the vix shade based on current stats for the vix
        vvar = np.var( context.vixrets )
        context.vixShade = [ math.exp( -1.0*(e - context.vixrets[-1])*(e - context.vixrets[-1])/(2*vvar) )/math.sqrt(2.0*math.pi*vvar) 
                            for e in context.vixrets]
        totalShade = sum(context.vixShade) 
        context.vixShade = [ e/totalShade for e in context.vixShade ] #normalize
        #convolution of the weighting schemes
        finalShade = np.array([ s1*s2 for s1, s2 in zip(context.vixShade, context.timeShade) ])
        totalShade = sum(finalShade)
        finalShade = finalShade/totalShade #normalize
        #calculate the statistics based on this flexible probability approach
        characteristicMean = sum( [ shade*ret for shade,ret in zip(finalShade, context.spyrets) ] )
        characteristic2ndmoment = sum( [ shade*ret*ret for shade,ret in zip(finalShade, context.spyrets) ] )        
        characteristicVariance = characteristic2ndmoment - characteristicMean*characteristicMean
        characteristicSD = math.sqrt(characteristicVariance)
        # see if we are at our risk limits
        riskLimitHit = math.fabs(context.portfolio.capital_used) >= context.maxlong*context.portfolio.starting_cash
        buymetric = math.fabs(data[context.spy].returns()) > context.multSDIn*characteristicSD
        if buymetric:
            if not riskLimitHit:
                order(context.spy, math.floor(context.bettingOpen*context.portfolio.starting_cash / data[context.spy].price ) )
            else:
                log.warn('risk limits hit: capital used is currently at %f, cannot buy' % context.portfolio.capital_used )

        #incorporating patience into the number of days that are required to have low volatility
        actualnumberofdays = max([int(math.floor(context.lowVolDays*(1+riskLimitHit*context.patience))),1])
        #we sell if the number of days that the squared return is below the volatility is the 'actualnumberofdays' 
        sellmetric = sum( [ e*e < characteristicVariance for e in context.spyrets[-1*actualnumberofdays:] ] ) == actualnumberofdays
        if sellmetric:
            order(context.spy, math.floor(context.bettingClose*context.portfolio.positions_value/data[context.spy].price) )
        #record(cv = characteristicSD )
        #record(spyrets = math.fabs(data[context.spy].returns()) )
        #record(buy = buymetric*1.0 )
        #record(sell = sellmetric*-1.0)

    record(cap = context.portfolio.capital_used )