A year of live trading

I recently almost made it to a year of live trading three algos; I had to shut them all down due to work compliance regulations.

Since a lot of people are curious how the Quantopian backtester compares with reality, I thought I'd share the results of live trading vs simulation of the algo which was derived from my March 2015 contest winner.

This account had only $10k in it, and was trading VXX and XIV. The plan was essentially to short both in equal parts and rebalance daily. Pretty classic short gamma-ish trade. You can see from the graph that Quantopian's current default slippage model seems pretty accurate these days. I don't recall it being so good in the past, but perhaps they have made a change; I haven't really been following.

I'll share the algo next. It's pretty simple at heart, but made so much more complicated by the edge cases surrounding short fails and whatnot. The idea really is to just be short $10k of VXX and $10k of XIV, for instance. Everything else is tedious nonsense.
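Stripped of all that edge-case handling, the core is roughly just this (a sketch of the idea, not the version I actually ran):

def initialize(context):
    context.vxx = sid(38054)
    context.xiv = sid(40516)
    # rebalance once a day, near the close
    schedule_function(daily_rebalance,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=1))

def daily_rebalance(context, data):
    # target being short 100% of portfolio value in each leg (2x gross short)
    order_target_percent(context.vxx, -1.0)
    order_target_percent(context.xiv, -1.0)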

[Attached notebook: live trading vs. simulation results; preview unavailable.]
34 responses

This is the algo. Please trade this at your own risk; we all know that this algo had a rough patch on Aug 24th, 2015, and there's no reason to believe that something worse couldn't happen in the future. I didn't think it was really worth the risks, which is why I never really funded it.

[Attached backtest: Clone Algorithm (348); metrics unavailable in this preview.]
import datetime
import pytz
import pandas as pd
import numpy as np

# this is the grace period that we give orders to work.  Should be short for IB
# since we don't want to be unhedged for long, but unfortunately, might need to 
# be very long for Quantopian backtesting, since they do not fill during no-trade
# bars
OrderWorkingMinutes = 120

def initialize(context):
    #set_slippage(slippage.FixedSlippage(spread=0.01))
    #set_commission(commission.PerShare(cost=0.0035, min_trade_cost=0.35))
    context.peak_port_val = 0.0
    context.max_dd = 0.0
    #uvxy = sid(41969)
    #svxy = sid(41968)
    #vixy = sid(40669)
    xiv = sid(40516)   
    vxx = sid(38054)
    
    context.baskets = [
        pd.Series({
            xiv: -1.0,
            vxx: -1.0
        }),
    ]
    # these will be calculated during our ordering_logic
    context.desired_positions = []    
    context.spy           = sid(8554)
    context.order_cancel_working_time = datetime.timedelta(0, OrderWorkingMinutes * 60, 0)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=180),
                      half_days=True)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=1),
                      half_days=True)
    
'''
    # TEST FUNCTION DO NOT RELEASE!!!
    schedule_function(simulate_call_in,
                      date_rules.month_start(days_offset=10),
                      time_rules.market_open(minutes=30),
                      half_days=True)
    
# TEST FUNCTION
def simulate_call_in(context, data):
    basket_to_call_in = context.baskets[0]
    sid_to_call_in = list(basket_to_call_in.keys())[0]
    log.warn(str(get_now()) + ": CALLING IN " + str(sid_to_call_in.symbol))
    # simulate the cropping of one leg by 80%, this should cause our algo to
    # crop the other leg to 80% of target
    order_target_percent(sid_to_call_in, basket_to_call_in[sid_to_call_in] * 0.8)    
'''

def get_now():
    return pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

# replicate part of order_target_percent, except with an adjustable portfolio size,
# so that we can find out the correct shares given implied portfolio sizes from 
# un-rebalanced legs
def percent_to_shares(context, data, sid, percentage_of_port, port_val): 
    cash_target = port_val * percentage_of_port
    last_price = data.current(sid, 'price')
    shares_target = cash_target / last_price
    return int(shares_target)

def calculate_desired_basket(context, data, basket, port_val):
    basket_all_traded = True
    for sid in basket.index:
        if not data.can_trade(sid):
            basket_all_traded = False
    desired_basket = pd.Series({sid: (percent_to_shares(context, data, sid, basket[sid], port_val) if basket_all_traded else 0.0) for sid in basket.index})
    return desired_basket                           
    
def calculate_desired_positions(context, data, baskets):
    desired_positions =  [ calculate_desired_basket(context, data, basket, context.portfolio.portfolio_value) for basket in baskets ]
    return desired_positions

def ordering_logic(context, data):
    context.desired_positions = calculate_desired_positions(context, data, context.baskets) 
    rebalance(context, data) 

def rebalance(context, data):    
    now = get_now()
    for (weights, basket) in zip(context.baskets, context.desired_positions):
        # this is silly, but apparently necessary for Quantopian
        basket_all_traded = True
        for sid in basket.index:
            if not data.can_trade(sid):
                basket_all_traded = False
        prices = data.history(basket.index, 'price', 60, '1m')
        rets = np.log(prices[basket.index]).diff().fillna(0)
        spread_rets = (weights * rets).sum(axis=1)
        std = spread_rets.std()
        # only trade if these ETFs aren't getting fucked
        if (std < 0.002):
            if basket_all_traded:
                for sid in basket.index:
                    log.info(str(now) + ": Targeting " + str(basket[sid]) + " for " + str(sid.symbol))
                    order_target(sid, basket[sid]) 
        else:
            log.error(str(now) + ": ABORTING REBALANCE, STD TOO HIGH")
            
def cancel_all_stale(context, data):
    now = get_now()
    sids_cancelled = set()
    fresh_orders = False
    open_orders = get_open_orders()
    for security, orders in open_orders.iteritems():  
        for oo in orders:
            if ((get_datetime() - oo.dt) > context.order_cancel_working_time):
                log.warn(str(now) + ": Cancelling order placed at " + str(oo.dt) +  " for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + "!")
                sids_cancelled.add(oo.sid)
                cancel_order(oo)
            else:
                fresh_orders = True
                #log.info(str(now) + ": NOT CANCELLING order for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + " because it's fresh!")
    return (sids_cancelled, fresh_orders)

# defines tolerable departures from expected position in a stock
def tolerable(context, data, sid, a, b):
    last_price = data.current(sid, 'price')
    a_cash = a*last_price
    b_cash = b*last_price
    cash_diff = abs(a_cash - b_cash)
    port_value = context.portfolio.portfolio_value
    diff = cash_diff / port_value
    tol = 0.01 # 1% of port value is ok
    return diff < tol

# this is basically an inverse of percent_to_shares
def calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid):
    last_price = data.current(sid, 'price')
    cash_position = float(context.portfolio.positions[sid].amount) * last_price
    percentage_of_port = basket[sid]
    implied_portfolio_val = cash_position / percentage_of_port
    return implied_portfolio_val
 
# this is basically an inverse of calculate_desired_basket
def calculate_portfolio_val_implied_by_basket(context, data, basket):
    portfolio_vals_implied_by_legs = [  calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid) for sid in basket.index ]
    implied_portfolio_val = min(portfolio_vals_implied_by_legs)
    return implied_portfolio_val                         

def calculate_new_smaller_basket(context, data, basket):
    now = get_now()
    implied_smaller_port_val = calculate_portfolio_val_implied_by_basket(context, data, basket)
    log.error(str(now) + ": Actual account value: " + str(context.portfolio.portfolio_value) + ", Implied (smaller) account value: " + str(implied_smaller_port_val))
    smaller_basket = calculate_desired_basket(context, data, basket, implied_smaller_port_val)
    return smaller_basket

def print_basket(basket):
    s = "{"
    for p in basket.index:
        s = s + p.symbol + ": " + str(basket[p]) + ","
    s = s + "}"
    return s    

def verify_basket(context, data, basket_desired_positions, basket_desired_weights):
    now = get_now()
    basket_okay = True
    for sid in basket_desired_positions.index:
        desired_position = basket_desired_positions[sid]
        if not tolerable(context, data, sid, desired_position, context.portfolio.positions[sid].amount):
            basket_okay = False           
    new_basket = basket_desired_positions
    if not basket_okay:
        new_basket = calculate_new_smaller_basket(context, data, basket_desired_weights)
        log.error(str(now) + ": Basket verification failed. Previous desired basket: " + print_basket(basket_desired_positions) + " New desired basket: " + print_basket(new_basket))
    return (new_basket, basket_okay)
    
def verify_positions(context, data, desired_shares):
    all_baskets_okay = True
    new_desired_positions = []
    for (basket_positions, basket_weights) in zip(context.desired_positions, context.baskets):
        (new_desired_basket, basket_okay) = verify_basket(context, data, basket_positions, basket_weights)
        if (not basket_okay): 
            all_baskets_okay = False
        new_desired_positions.append(new_desired_basket)
    return (new_desired_positions, all_baskets_okay)

def drawdown(context, data):
    port_val = context.portfolio.portfolio_value
    dd = 0.0
    if (port_val > context.peak_port_val):
        dd = 0.0
        context.peak_port_val = port_val
    else:
        dd = 1.0 - port_val / context.peak_port_val 
        context.max_dd = max(dd, context.max_dd)
    return dd

def handle_data(context, data):
    dd = drawdown(context, data)
    record(leverage=context.account.leverage)
    record(max_drawdown=context.max_dd)
    record(current_drawdown=dd)
    now = get_now()
    (sids_cancelled, fresh_orders) = cancel_all_stale(context, data)
    rebalanced = False
    # only bother checking out portfolio if we actually have one
    if (len(context.portfolio.positions) > 0):
        # if this is the same minute as our ordering_logic, we won't cancel those orders, they have 
        # one minute to work.
        if (fresh_orders == False):  
            # if we cancelled some orders, presumably held because of no shorts available, give them a minute
            # to cancel
            if (len(sids_cancelled) == 0):
                # if there was nothing to cancel and nothing fresh, double check that our positions haven't been
                # changed from underneath us, and/or that we have been filled all the shares we wanted 
                (new_positions, all_positions_okay) = verify_positions(context, data, context.desired_positions)
                if (not all_positions_okay):
                    log.error(str(now) + ": Portfolio problem, rebalancing")
                    context.desired_positions = new_positions
                    rebalance(context, data)
                    rebalanced = True


ps - apologies for the bad language in the algo; I just noticed that was still there...

pps - when I did the analysis the first time around last year, the spikes in the profit of this strategy corresponded to jumps in the shares outstanding of XIV, so it's possible that there is some mechanic at work in the creation process which causes XIV to lag its index when those units need to be created. That was sufficient cause for me to invest.
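One illustrative way to check that correspondence (the file and column names here are hypothetical stand-ins for the data I no longer have):

import pandas as pd

# hypothetical inputs: daily XIV shares outstanding and the strategy's daily returns
so = pd.read_csv('xiv_shares_outstanding.csv', index_col=0, parse_dates=True)['shares']
rets = pd.read_csv('strategy_daily_returns.csv', index_col=0, parse_dates=True)['ret']

# flag days on which new units were created and compare average strategy returns
df = pd.DataFrame({'ret': rets, 'created': so.diff() > 0}).dropna()
print(df.groupby('created')['ret'].mean())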

Thanks Simon -

Perhaps you could explain the idea behind this strategy? I'm lazy, and so don't want to read through your code, or sort out what is meant by a "short gamma-ish trade." In layman's terms, what is the logic behind the strategy? Why would one expect it to work? Is there any evidence that it does, in fact, work (the attached backtest suggests that it isn't too shabby)?

Also perhaps more a question for the Quantopian folks, but I'm wondering how such a strategy would fit, if at all, in their hedge fund portfolio. I guess it would somehow just plug into the workflow as another alpha factor, but it is not obvious how it would be cast as a long-short factor?

[Attached backtest: Clone Algorithm (96); the attached code is a verbatim clone of the algorithm above, and the metrics are unavailable in this preview.]

Basically, I suspect there's a defect in how XIV creates new share units, causing it to lose unexpected amounts of money when they have to create many more units. Though this is just a suspicion. By shorting both VXX and XIV in equal amounts, rebalanced prior to the afternoon (when those unit creations seem to happen), you hope to catch XIV underperforming on those days.

However, the risks are plenty. On Aug 24th, 2015, the ETNs were trading way off their net asset value, and so this portfolio showed a terrible (though perhaps illusory) loss. There's also an awful risk that the sponsor of one or the other ETN suspends creations or redemptions, as happened with TVIX in 2012 I think, causing it to trade permanently off its NAV.

It's short gamma-ish because as the VIX moves, one leg will gain in value and the other will lose in value. However, because we are short, the leg which makes money gets gradually smaller, while the leg that loses money gets gradually larger. Therefore, to remain hedged, we need to be constantly rebalancing, taking small losses, to get our position back into line. If these rebalancings fail, because we can't find the shares to short, or because Quantopian has a problem in their back end, then the portfolio is exposed to risk. In fact, even having a fixed rebalancing schedule at all is a risk; I believe there is a move the VIX could make overnight where the short gamma-ish nature of this position could be catastrophic.
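A toy numeric example (made-up move sizes, nothing to do with actual prices) of where those small losses come from:

# start short $10k of each leg
vxx_leg = -10000.0
xiv_leg = -10000.0

# suppose VXX jumps 10% and XIV drops 10% (roughly mirror-image moves)
vxx_leg *= 1.10    # the losing short leg grows to -$11,000
xiv_leg *= 0.90    # the winning short leg shrinks to -$9,000

mismatch = abs(vxx_leg) - abs(xiv_leg)   # $2,000 of unhedged short VXX exposure

# getting back to equal legs means buying back ~$1k of VXX at the now-higher price
# and shorting ~$1k more XIV at the now-lower price: the short gamma-ish bleed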

I was also thinking more about the slippage model; perhaps it is not so accurate after all, since this account is exposed to 2x short borrow fees, which Quantopian doesn't account for, yet their punitive slippage model happens to be of the same order, so in the end it works out. Had the strategy been going long, it would be quite a different story of live vs backtest.

Thanks for sharing Simon - this is going to be useful information for a lot of people.

This is the notebook I was originally using to try to figure out the root cause of this anomaly. Regrettably, I don't have the shares outstanding data any more; it was erased from Research some time in the past year or two, and the originals were on my old laptop.

[Attached notebook: shares-outstanding analysis; preview unavailable.]

Simon
Forgive the ignorance and laziness on my part, but since this is your algo, it will be easier for you to answer than for me to look it up. Shorting both gives you opposite trades, I guess, hence market neutrality? One is the inverse, the other long?

So this is some kind of arbitrage of a structural oddity in one or the other product? I see one loses catastrophically from contango and the other makes a little money? Can you explain the opportunity a little further?

Correct, though it would only be truly neutral if it were rebalanced continuously. It might be a structural defect in XIV.
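To make the "continuously" point concrete, here's a two-day toy example (made-up returns, and approximating XIV's daily return as minus VXX's):

vxx, xiv = 1.0, 1.0
for r in (0.20, -1.0 / 6.0):   # VXX: +20%, then -16.7%, a round trip
    vxx *= (1.0 + r)
    xiv *= (1.0 - r)           # XIV compounds the inverse of each daily move
print(vxx)   # 1.00  -> a static short VXX position ends flat
print(xiv)   # ~0.93 -> XIV ends down ~7%, so unrebalanced legs no longer offset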

Rebalanced continuously... yes. I wonder if Q plans to do that with the normal equity long-short. Surely not?

Presumably with unleveraged instruments most long short guys rebalance daily, weekly? I wonder?

I ran a backtest from January 2014 to the present and I've never seen such a smooth rise in returns. I started with $100,000.

Amazing equity curve! Is this anomaly known to the market? Any idea why it hasn't been arbitraged out yet?

Thanks for sharing this Simon. I found this same phenomenon 2-3 years ago and ended up bailing on it too. I decided the opportunity cost of tying up the money was too high given the borrow costs and bleed during low vol regimes. I had very similar results to your live trading, occasional pops, then a seemingly perpetual bleed.

@Maxim, I'm not sure this is something that should be arbed out since it has definite risks. The strategy has a negative carry and is plagued with fees and non-fills. The borrow rates on those funds have come down a bit so maybe it's more worth it now. I think it works better when combined with a long strategy since it tends to fill drawdowns somewhat.

Yeah precisely; in actual trading, it doesn't seem like the reward is high enough to justify the tail risks, even though the returns happen at very fortuitous times.

I'm curious what you all think of the same algo with a couple of mods. I changed VXX to UVXY and then shorted slightly less XIV, on the theory that the 2x leverage of UVXY vs. VXX helps with the decay and "pays" for the slightly lower ratio. I realize it also increases the risk, since the legs are no longer exactly balanced.

Returns look pretty decent.
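Rough exposure arithmetic behind the new weights (an approximation only; UVXY targets 2x the daily move of the same index VXX tracks):

uvxy_weight = -0.5
vxx_equivalent = uvxy_weight * 2.0   # ~ -1.0, i.e. roughly the old VXX leg
xiv_weight = -0.90                   # deliberately a bit less than -1.0, leaving a small
                                     # net short-vol tilt that UVXY's extra decay hopefully "pays" for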

[Attached backtest: Clone Algorithm (128); metrics unavailable in this preview.]
import datetime
import pytz
import pandas as pd
import numpy as np

# this is the grace period that we give orders to work.  Should be short for IB
# since we don't want to be unhedged for long, but unfortunately, might need to 
# be very long for Quantopian backtesting, since they do not fill during no-trade
# bars
OrderWorkingMinutes = 120

def initialize(context):
    #set_slippage(slippage.FixedSlippage(spread=0.01))
    #set_commission(commission.PerShare(cost=0.0035, min_trade_cost=0.35))
    context.peak_port_val = 0.0
    context.max_dd = 0.0
    uvxy = sid(41969)
    #svxy = sid(41968)
    #vixy = sid(40669)
    xiv = sid(40516)   
    vxx = sid(38054)
    
    context.baskets = [
        pd.Series({
            xiv: -0.90,
            uvxy: -0.5
        }),
    ]
    # these will be calculated during our ordering_logic
    context.desired_positions = []    
    context.spy           = sid(8554)
    context.order_cancel_working_time = datetime.timedelta(0, OrderWorkingMinutes * 60, 0)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=180),
                      half_days=True)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=1),
                      half_days=True)
    
'''
    # TEST FUNCTION DO NOT RELEASE!!!
    schedule_function(simulate_call_in,
                      date_rules.month_start(days_offset=10),
                      time_rules.market_open(minutes=30),
                      half_days=True)
    
# TEST FUNCTION
def simulate_call_in(context, data):
    basket_to_call_in = context.baskets[0]
    sid_to_call_in = list(basket_to_call_in.keys())[0]
    log.warn(str(get_now()) + ": CALLING IN " + str(sid_to_call_in.symbol))
    # simulate the cropping of one leg by 80%, this should cause our algo to
    # crop the other leg to 80% of target
    order_target_percent(sid_to_call_in, basket_to_call_in[sid_to_call_in] * 0.8)    
'''

def get_now():
    return pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

# replicate part of order_target_percent, except with an adjustable portfolio size,
# so that we can find out the correct shares given implied portfolio sizes from 
# un-rebalanced legs
def percent_to_shares(context, data, sid, percentage_of_port, port_val): 
    cash_target = port_val * percentage_of_port
    last_price = data.current(sid, 'price')
    shares_target = cash_target / last_price
    return int(shares_target)

def calculate_desired_basket(context, data, basket, port_val):
    basket_all_traded = True
    for sid in basket.index:
        if not data.can_trade(sid):
            basket_all_traded = False
    desired_basket = pd.Series({sid: (percent_to_shares(context, data, sid, basket[sid], port_val) if basket_all_traded else 0.0) for sid in basket.index})
    return desired_basket                           
    
def calculate_desired_positions(context, data, baskets):
    desired_positions =  [ calculate_desired_basket(context, data, basket, context.portfolio.portfolio_value) for basket in baskets ]
    return desired_positions

def ordering_logic(context, data):
    context.desired_positions = calculate_desired_positions(context, data, context.baskets) 
    rebalance(context, data) 

def rebalance(context, data):    
    now = get_now()
    for (weights, basket) in zip(context.baskets, context.desired_positions):
        # this is silly, but apparently necessary for Quantopian
        basket_all_traded = True
        for sid in basket.index:
            if not data.can_trade(sid):
                basket_all_traded = False
        prices = data.history(basket.index, 'price', 60, '1m')
        rets = np.log(prices[basket.index]).diff().fillna(0)
        spread_rets = (weights * rets).sum(axis=1)
        std = spread_rets.std()
        # only trade if these ETFs aren't getting fucked
        if (std < 10): # was 0.002
            if basket_all_traded:
                for sid in basket.index:
                    log.info(str(now) + ": Targeting " + str(basket[sid]) + " for " + str(sid.symbol))
                    order_target(sid, basket[sid]) 
        else:
            log.error(str(now) + ": ABORTING REBALANCE, STD TOO HIGH")
            
def cancel_all_stale(context, data):
    now = get_now()
    sids_cancelled = set()
    fresh_orders = False
    open_orders = get_open_orders()
    for security, orders in open_orders.iteritems():  
        for oo in orders:
            if ((get_datetime() - oo.dt) > context.order_cancel_working_time):
                log.warn(str(now) + ": Cancelling order placed at " + str(oo.dt) +  " for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + "!")
                sids_cancelled.add(oo.sid)
                cancel_order(oo)
            else:
                fresh_orders = True
                #log.info(str(now) + ": NOT CANCELLING order for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + " because it's fresh!")
    return (sids_cancelled, fresh_orders)

# defines tolerable departures from expected position in a stock
def tolerable(context, data, sid, a, b):
    last_price = data.current(sid, 'price')
    a_cash = a*last_price
    b_cash = b*last_price
    cash_diff = abs(a_cash - b_cash)
    port_value = context.portfolio.portfolio_value
    diff = cash_diff / port_value
    tol = 0.01 # 1% of port value is ok
    return diff < tol

# this is basically an inverse of percent_to_shares
def calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid):
    last_price = data.current(sid, 'price')
    cash_position = float(context.portfolio.positions[sid].amount) * last_price
    percentage_of_port = basket[sid]
    implied_portfolio_val = cash_position / percentage_of_port
    return implied_portfolio_val
 
# this is basically an inverse of calculate_desired_basket
def calculate_portfolio_val_implied_by_basket(context, data, basket):
    portfolio_vals_implied_by_legs = [  calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid) for sid in basket.index ]
    implied_portfolio_val = min(portfolio_vals_implied_by_legs)
    return implied_portfolio_val                         

def calculate_new_smaller_basket(context, data, basket):
    now = get_now()
    implied_smaller_port_val = calculate_portfolio_val_implied_by_basket(context, data, basket)
    log.error(str(now) + ": Actual account value: " + str(context.portfolio.portfolio_value) + ", Implied (smaller) account value: " + str(implied_smaller_port_val))
    smaller_basket = calculate_desired_basket(context, data, basket, implied_smaller_port_val)
    return smaller_basket

def print_basket(basket):
    s = "{"
    for p in basket.index:
        s = s + p.symbol + ": " + str(basket[p]) + ","
    s = s + "}"
    return s    

def verify_basket(context, data, basket_desired_positions, basket_desired_weights):
    now = get_now()
    basket_okay = True
    for sid in basket_desired_positions.index:
        desired_position = basket_desired_positions[sid]
        if not tolerable(context, data, sid, desired_position, context.portfolio.positions[sid].amount):
            basket_okay = False           
    new_basket = basket_desired_positions
    if not basket_okay:
        new_basket = calculate_new_smaller_basket(context, data, basket_desired_weights)
        log.error(str(now) + ": Basket verification failed. Previous desired basket: " + print_basket(basket_desired_positions) + " New desired basket: " + print_basket(new_basket))
    return (new_basket, basket_okay)
    
def verify_positions(context, data, desired_shares):
    all_baskets_okay = True
    new_desired_positions = []
    for (basket_positions, basket_weights) in zip(context.desired_positions, context.baskets):
        (new_desired_basket, basket_okay) = verify_basket(context, data, basket_positions, basket_weights)
        if (not basket_okay): 
            all_baskets_okay = False
        new_desired_positions.append(new_desired_basket)
    return (new_desired_positions, all_baskets_okay)

def drawdown(context, data):
    port_val = context.portfolio.portfolio_value
    dd = 0.0
    if (port_val > context.peak_port_val):
        dd = 0.0
        context.peak_port_val = port_val
    else:
        dd = 1.0 - port_val / context.peak_port_val 
        context.max_dd = max(dd, context.max_dd)
    return dd

def handle_data(context, data):
    dd = drawdown(context, data)
    record(leverage=context.account.leverage)
    record(max_drawdown=context.max_dd)
    record(current_drawdown=dd)
    now = get_now()
    (sids_cancelled, fresh_orders) = cancel_all_stale(context, data)
    rebalanced = False
    # only bother checking out portfolio if we actually have one
    if (len(context.portfolio.positions) > 0):
        # if this is the same minute as our ordering_logic, we won't cancel those orders, they have 
        # one minute to work.
        if (fresh_orders == False):  
            # if we cancelled some orders, presumably held because of no shorts available, give them a minute
            # to cancel
            if (len(sids_cancelled) == 0):
                # if there was nothing to cancel and nothing fresh, double check that our positions haven't been
                # changed from underneath us, and/or that we have been filled all the shares we wanted 
                (new_positions, all_positions_okay) = verify_positions(context, data, context.desired_positions)
                if (not all_positions_okay):
                    log.error(str(now) + ": Portfolio problem, rebalancing")
                    context.desired_positions = new_positions
                    rebalance(context, data)
                    rebalanced = True


@John: The return will be much lower after taking into account the cost of borrowing, roughly 8.8% per year, or ~50% compounded over the last 5 years (currently IB charges 5.88% for UVXY and 2.93% for XIV).
But it looks like shorting both XIV and UVXY (with a 9-to-5 ratio) still produces a better Sharpe than just shorting 10% UVXY, even after taking borrowing costs into account.
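Weighting the quoted rates by the actual short notionals gives a rough back-of-envelope drag (just the arithmetic behind the point above; rates obviously move around):

xiv_rate, uvxy_rate = 0.0293, 0.0588
xiv_short, uvxy_short = 0.90, 0.50   # short notionals as a fraction of the account
annual_drag = xiv_short * xiv_rate + uvxy_short * uvxy_rate
print(annual_drag)   # ~0.056, i.e. roughly 5.6% of account value per year in borrow fees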

FYI, here's a backtest that shorts 10% UVXY:

[Attached backtest: Clone Algorithm (28); metrics unavailable in this preview.]
import datetime
import pytz
import pandas as pd
import numpy as np

# this is the grace period that we give orders to work.  Should be short for IB
# since we don't want to be unhedged for long, but unfortunately, might need to 
# be very long for Quantopian backtesting, since they do not fill during no-trade
# bars
OrderWorkingMinutes = 120

def initialize(context):
    set_slippage(slippage.FixedSlippage(spread=0.01))
    set_commission(commission.PerShare(cost=0.0035, min_trade_cost=0.35))
    context.peak_port_val = 0.0
    context.max_dd = 0.0
    uvxy = sid(41969)
    #svxy = sid(41968)
    #vixy = sid(40669)
    xiv = sid(40516)   
    vxx = sid(38054)
    
    context.baskets = [
        pd.Series({
            xiv: 0,
            uvxy: -0.1
        }),
    ]
    # these will be calculated during our ordering_logic
    context.desired_positions = []    
    context.spy           = sid(8554)
    context.order_cancel_working_time = datetime.timedelta(0, OrderWorkingMinutes * 60, 0)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=180),
                      half_days=True)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=1),
                      half_days=True)
    
'''
    # TEST FUNCTION DO NOT RELEASE!!!
    schedule_function(simulate_call_in,
                      date_rules.month_start(days_offset=10),
                      time_rules.market_open(minutes=30),
                      half_days=True)
    
# TEST FUNCTION
def simulate_call_in(context, data):
    basket_to_call_in = context.baskets[0]
    sid_to_call_in = list(basket_to_call_in.keys())[0]
    log.warn(str(get_now()) + ": CALLING IN " + str(sid_to_call_in.symbol))
    # simulate the cropping of one leg by 80%, this should cause our algo to
    # crop the other leg to 80% of target
    order_target_percent(sid_to_call_in, basket_to_call_in[sid_to_call_in] * 0.8)    
'''

def get_now():
    return pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

# replicate part of order_target_percent, except with an adjustable portfolio size,
# so that we can find out the correct shares given implied portfolio sizes from 
# un-rebalanced legs
def percent_to_shares(context, data, sid, percentage_of_port, port_val): 
    cash_target = port_val * percentage_of_port
    last_price = data.current(sid, 'price')
    shares_target = cash_target / last_price
    # a NaN price (no data for this sid) would make int() blow up; fall back to 0 shares
    if np.isnan(shares_target):
        shares_target = 0
    return int(shares_target)

def calculate_desired_basket(context, data, basket, port_val):
    basket_all_traded = True
    for sid in basket.index:
        if not data.can_trade(sid):
            basket_all_traded = False
    desired_basket = pd.Series({sid: (percent_to_shares(context, data, sid, basket[sid], port_val) if basket_all_traded else 0.0) for sid in basket.index})
    return desired_basket                           
    
def calculate_desired_positions(context, data, baskets):
    desired_positions =  [ calculate_desired_basket(context, data, basket, context.portfolio.portfolio_value) for basket in baskets ]
    return desired_positions

def ordering_logic(context, data):
    context.desired_positions = calculate_desired_positions(context, data, context.baskets) 
    rebalance(context, data) 

def rebalance(context, data):    
    now = get_now()
    for (weights, basket) in zip(context.baskets, context.desired_positions):
        # this is silly, but apparently necessary for Quantopian
        basket_all_traded = True
        for sid in basket.index:
            if not data.can_trade(sid):
                basket_all_traded = False
        prices = data.history(basket.index, 'price', 60, '1m')
        rets = np.log(prices[basket.index]).diff().fillna(0)
        spread_rets = (weights * rets).sum(axis=1)
        std = spread_rets.std()
        # only trade if these ETFs aren't getting fucked
        if (std < 10): # was 0.002
            if basket_all_traded:
                for sid in basket.index:
                    log.info(str(now) + ": Targeting " + str(basket[sid]) + " for " + str(sid.symbol))
                    order_target(sid, basket[sid]) 
        else:
            log.error(str(now) + ": ABORTING REBALANCE, STD TOO HIGH")
            
def cancel_all_stale(context, data):
    now = get_now()
    sids_cancelled = set()
    fresh_orders = False
    open_orders = get_open_orders()
    for security, orders in open_orders.iteritems():  
        for oo in orders:
            if ((get_datetime() - oo.dt) > context.order_cancel_working_time):
                log.warn(str(now) + ": Cancelling order placed at " + str(oo.dt) +  " for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + "!")
                sids_cancelled.add(oo.sid)
                cancel_order(oo)
            else:
                fresh_orders = True
                #log.info(str(now) + ": NOT CANCELLING order for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + " because it's fresh!")
    return (sids_cancelled, fresh_orders)

# defines tolerable departures from expected position in a stock
def tolerable(context, data, sid, a, b):
    last_price = data.current(sid, 'price')
    a_cash = a*last_price
    b_cash = b*last_price
    cash_diff = abs(a_cash - b_cash)
    port_value = context.portfolio.portfolio_value
    diff = cash_diff / port_value
    tol = 0.01 # 1% of port value is ok
    return diff < tol

# this is basically an inverse of percent_to_shares
def calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid):
    last_price = data.current(sid, 'price')
    cash_position = float(context.portfolio.positions[sid].amount) * last_price
    percentage_of_port = basket[sid]
    implied_portfolio_val = cash_position / percentage_of_port
    return implied_portfolio_val
 
# this is basically an inverse of calculate_desired_basket
def calculate_portfolio_val_implied_by_basket(context, data, basket):
    portfolio_vals_implied_by_legs = [  calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid) for sid in basket.index ]
    implied_portfolio_val = min(portfolio_vals_implied_by_legs)
    return implied_portfolio_val                         

def calculate_new_smaller_basket(context, data, basket):
    now = get_now()
    implied_smaller_port_val = calculate_portfolio_val_implied_by_basket(context, data, basket)
    log.error(str(now) + ": Actual account value: " + str(context.portfolio.portfolio_value) + ", Implied (smaller) account value: " + str(implied_smaller_port_val))
    smaller_basket = calculate_desired_basket(context, data, basket, implied_smaller_port_val)
    return smaller_basket

def print_basket(basket):
    s = "{"
    for p in basket.index:
        s = s + p.symbol + ": " + str(basket[p]) + ","
    s = s + "}"
    return s    

def verify_basket(context, data, basket_desired_positions, basket_desired_weights):
    now = get_now()
    basket_okay = True
    for sid in basket_desired_positions.index:
        desired_position = basket_desired_positions[sid]
        if not tolerable(context, data, sid, desired_position, context.portfolio.positions[sid].amount):
            basket_okay = False           
    new_basket = basket_desired_positions
    if not basket_okay:
        new_basket = calculate_new_smaller_basket(context, data, basket_desired_weights)
        log.error(str(now) + ": Basket verification failed. Previous desired basket: " + print_basket(basket_desired_positions) + " New desired basket: " + print_basket(new_basket))
    return (new_basket, basket_okay)
    
def verify_positions(context, data, desired_shares):
    all_baskets_okay = True
    new_desired_positions = []
    for (basket_positions, basket_weights) in zip(context.desired_positions, context.baskets):
        (new_desired_basket, basket_okay) = verify_basket(context, data, basket_positions, basket_weights)
        if (not basket_okay): 
            all_baskets_okay = False
        new_desired_positions.append(new_desired_basket)
    return (new_desired_positions, all_baskets_okay)

def drawdown(context, data):
    port_val = context.portfolio.portfolio_value
    dd = 0.0
    if (port_val > context.peak_port_val):
        dd = 0.0
        context.peak_port_val = port_val
    else:
        dd = 1.0 - port_val / context.peak_port_val 
        context.max_dd = max(dd, context.max_dd)
    return dd

def handle_data(context, data):
    dd = drawdown(context, data)
    record(leverage=context.account.leverage)
    record(max_drawdown=context.max_dd)
    record(current_drawdown=dd)
    now = get_now()
    (sids_cancelled, fresh_orders) = cancel_all_stale(context, data)
    rebalanced = False
    # only bother checking out portfolio if we actually have one
    if (len(context.portfolio.positions) > 0):
        # if this is the same minute as our ordering_logic, we won't cancel those orders, they have 
        # one minute to work.
        if (fresh_orders == False):  
            # if we cancelled some orders, presumably held because of no shorts available, give them a minute
            # to cancel
            if (len(sids_cancelled) == 0):
                # if there was nothing to cancel and nothing fresh, double check that our positions haven't been
                # changed from underneath us, and/or that we have been filled all the shares we wanted 
                (new_positions, all_positions_okay) = verify_positions(context, data, context.desired_positions)
                if (not all_positions_okay):
                    log.error(str(now) + ": Portfolio problem, rebalancing")
                    context.desired_positions = new_positions
                    rebalance(context, data)
                    rebalanced = True


@Simon

I have been testing this algo using IB's paper trading and a 2x VIX bull ETF, and it is losing leverage due to short fails, reverse splits, and the calculate_new_smaller_basket function. Can you explain why the implied account value is used to calculate leverage and rebalance rather than the actual account value?

That was done precisely because of short fails or shorts not executed: if we didn't get one of our shorts, it's preferable to have a smaller hedged basket than a full-size unbalanced one.
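A worked example of what calculate_portfolio_val_implied_by_basket does (toy numbers): say the account is $10k with a target of -1.0 in each leg, but the VXX short only got filled to -$8k:

weights = {'VXX': -1.0, 'XIV': -1.0}            # target: short 100% of account value in each leg
positions = {'VXX': -8000.0, 'XIV': -10000.0}   # VXX leg only partially filled (short fail)

implied_per_leg = {s: positions[s] / weights[s] for s in weights}   # {'VXX': 8000.0, 'XIV': 10000.0}
implied_port_val = min(implied_per_leg.values())                    # 8000.0

# new targets are computed against the $8k implied value, cropping the XIV leg to
# -$8k as well: a smaller hedged basket instead of a full-size unbalanced one
new_targets = {s: w * implied_port_val for s, w in weights.items()}  # both legs -> -8000.0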

Simon, what if there is an event that causes XIV to terminate? Wouldn't the short VXX position put a world of hurt on someone while it spiked, without the other leg being able to offset it? A Black Monday-type event might cause XIV to blow up, and VXX could go up 100%+ in a day.

Yeah, that's an obvious risk. From my perspective, the problem is not that risk (which is a part of a great many strategies, and frankly less of a risk than simply owning XIV), but that the expected return of the algo is not sufficient to compensate for that risk.

VXX moving 100% in a day is not a real risk in my opinion. The market has circuit breakers now which basically limit the amount of volatility in a day, especially for contracts dated a month out. We saw those circuit breakers when Trump was elected, and all it did was result in a melt-up.

Yeah. A much more worrying risk (I forget if I mentioned this already) is something more like TVIX in 2012, where the sponsor just halted creations, so the ETN developed a pronounced premium to NAV; that would slowly but surely crush a strategy like this.

At the end of the day, people start short vol strategies (like any number of the VIX-based XIV strategies on the boards) because a risk of total loss is acceptable if the expected CAGR is 100%+. With this strategy, best case, the expected 2x leveraged (long + short) CAGR is only 10%, say, but the risk of severe (if not total) loss is quite high. Risk/reward just not there, IMO.

Adding a small positive beta by adjusting position weights pays for the 2x short borrow fees and reduces volatility slightly.
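For clarity, the tilt in the attached version is tiny: shorting slightly more VXX than XIV leaves a small net short-volatility exposure, hence the small positive beta.

vxx_weight, xiv_weight = -1.01, -0.99
net_short_vol_tilt = abs(vxx_weight) - abs(xiv_weight)   # 0.02, i.e. ~2% of the account net short VXX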

[Attached backtest: Clone Algorithm (45); metrics unavailable in this preview.]
import datetime
import pytz
import pandas as pd
import numpy as np

# this is the grace period that we give orders to work.  Should be short for IB
# since we don't want to be unhedged for long, but unfortunately, might need to 
# be very long for Quantopian backtesting, since they do not fill during no-trade
# bars
OrderWorkingMinutes = 120

def initialize(context):
    #set_slippage(slippage.FixedSlippage(spread=0.01))
    #set_commission(commission.PerShare(cost=0.005, min_trade_cost=0.35))
    context.peak_port_val = 0.0
    context.max_dd = 0.0
    #uvxy = sid(41969)
    #svxy = sid(41968)
    #vixy = sid(40669)
    xiv = sid(40516)   
    vxx = sid(38054)
    
    context.baskets = [
        pd.Series({
            xiv: -0.99,
            vxx: -1.01
        }),
    ]
    # these will be calculated during our ordering_logic
    context.desired_positions = []    
    context.spy           = sid(8554)
    context.order_cancel_working_time = datetime.timedelta(0, OrderWorkingMinutes * 60, 0)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=140),
                      half_days=True)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=1),
                      half_days=True)
    
'''
    # TEST FUNCTION DO NOT RELEASE!!!
    schedule_function(simulate_call_in,
                      date_rules.month_start(days_offset=10),
                      time_rules.market_open(minutes=30),
                      half_days=True)
    
# TEST FUNCTION
def simulate_call_in(context, data):
    basket_to_call_in = context.baskets[0]
    sid_to_call_in = list(basket_to_call_in.keys())[0]
    log.warn(str(get_now()) + ": CALLING IN " + str(sid_to_call_in.symbol))
    # simulate the cropping of one leg by 80%, this should cause our algo to
    # crop the other leg to 80% of target
    order_target_percent(sid_to_call_in, basket_to_call_in[sid_to_call_in] * 0.8)    
'''

def get_now():
    return pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

# replicate part of order_target_percent, except with an adjustable portfolio size,
# so that we can find out the correct shares given implied portfolio sizes from 
# un-rebalanced legs
def percent_to_shares(context, data, sid, percentage_of_port, port_val): 
    cash_target = port_val * percentage_of_port
    last_price = data.current(sid, 'price')
    shares_target = cash_target / last_price
    return int(shares_target)

def calculate_desired_basket(context, data, basket, port_val):
    basket_all_traded = True
    for sid in basket.index:
        if not data.can_trade(sid):
            basket_all_traded = False
    desired_basket = pd.Series({sid: (percent_to_shares(context, data, sid, basket[sid], port_val) if basket_all_traded else 0.0) for sid in basket.index})
    return desired_basket                           
    
def calculate_desired_positions(context, data, baskets):
    desired_positions =  [ calculate_desired_basket(context, data, basket, context.portfolio.portfolio_value) for basket in baskets ]
    return desired_positions

def ordering_logic(context, data):
    context.desired_positions = calculate_desired_positions(context, data, context.baskets) 
    rebalance(context, data) 

def rebalance(context, data):    
    now = get_now()
    for (weights, basket) in zip(context.baskets, context.desired_positions):
        # this is silly, but apparently necessary for Quantopian
        basket_all_traded = True
        for sid in basket.index:
            if not data.can_trade(sid):
                basket_all_traded = False
        prices = data.history(basket.index, 'price', 60, '1m')
        rets = np.log(prices[basket.index]).diff().fillna(0)
        spread_rets = (weights * rets).sum(axis=1)
        std = spread_rets.std()
        # only trade if these ETFs aren't getting fucked
        if (std < 0.002):
            if basket_all_traded:
                for sid in basket.index:
                    log.info(str(now) + ": Targeting " + str(basket[sid]) + " for " + str(sid.symbol))
                    order_target(sid, basket[sid]) 
        else:
            log.error(str(now) + ": ABORTING REBALANCE, STD TOO HIGH")
            
def cancel_all_stale(context, data):
    now = get_now()
    sids_cancelled = set()
    fresh_orders = False
    open_orders = get_open_orders()
    for security, orders in open_orders.iteritems():  
        for oo in orders:
            if ((get_datetime() - oo.dt) > context.order_cancel_working_time):
                log.warn(str(now) + ": Cancelling order placed at " + str(oo.dt) +  " for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + "!")
                sids_cancelled.add(oo.sid)
                cancel_order(oo)
            else:
                fresh_orders = True
                #log.info(str(now) + ": NOT CANCELLING order for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + " because it's fresh!")
    return (sids_cancelled, fresh_orders)

# defines tolerable departures from expected position in a stock
def tolerable(context, data, sid, a, b):
    last_price = data.current(sid, 'price')
    a_cash = a*last_price
    b_cash = b*last_price
    cash_diff = abs(a_cash - b_cash)
    port_value = context.portfolio.portfolio_value
    diff = cash_diff / port_value
    tol = 0.01 # 1% of port value is ok
    return diff < tol

# this is basically an inverse of percent_to_shares
def calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid):
    last_price = data.current(sid, 'price')
    cash_position = float(context.portfolio.positions[sid].amount) * last_price
    percentage_of_port = basket[sid]
    implied_portfolio_val = cash_position / percentage_of_port
    return implied_portfolio_val
 
# this is basically an inverse of calculate_desired_basket
def calculate_portfolio_val_implied_by_basket(context, data, basket):
    portfolio_vals_implied_by_legs = [  calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid) for sid in basket.index ]
    implied_portfolio_val = min(portfolio_vals_implied_by_legs)
    return implied_portfolio_val                         

def calculate_new_smaller_basket(context, data, basket):
    now = get_now()
    implied_smaller_port_val = calculate_portfolio_val_implied_by_basket(context, data, basket)
    log.error(str(now) + ": Actual account value: " + str(context.portfolio.portfolio_value) + ", Implied (smaller) account value: " + str(implied_smaller_port_val))
    smaller_basket = calculate_desired_basket(context, data, basket, implied_smaller_port_val)
    return smaller_basket

def print_basket(basket):
    s = "{"
    for p in basket.index:
        s = s + p.symbol + ": " + str(basket[p]) + ","
    s = s + "}"
    return s    

def verify_basket(context, data, basket_desired_positions, basket_desired_weights):
    now = get_now()
    basket_okay = True
    for sid in basket_desired_positions.index:
        desired_position = basket_desired_positions[sid]
        if not tolerable(context, data, sid, desired_position, context.portfolio.positions[sid].amount):
            basket_okay = False           
    new_basket = basket_desired_positions
    if not basket_okay:
        new_basket = calculate_new_smaller_basket(context, data, basket_desired_weights)
        log.error(str(now) + ": Basket verification failed. Previous desired basket: " + print_basket(basket_desired_positions) + " New desired basket: " + print_basket(new_basket))
    return (new_basket, basket_okay)
    
def verify_positions(context, data, desired_shares):
    all_baskets_okay = True
    new_desired_positions = []
    for (basket_positions, basket_weights) in zip(context.desired_positions, context.baskets):
        (new_desired_basket, basket_okay) = verify_basket(context, data, basket_positions, basket_weights)
        if (not basket_okay): 
            all_baskets_okay = False
        new_desired_positions.append(new_desired_basket)
    return (new_desired_positions, all_baskets_okay)

def drawdown(context, data):
    port_val = context.portfolio.portfolio_value
    dd = 0.0
    if (port_val > context.peak_port_val):
        dd = 0.0
        context.peak_port_val = port_val
    else:
        dd = 1.0 - port_val / context.peak_port_val 
        context.max_dd = max(dd, context.max_dd)
    return dd

def handle_data(context, data):
    dd = drawdown(context, data)
    record(leverage=context.account.leverage)
    record(max_drawdown=context.max_dd)
    record(current_drawdown=dd)
    now = get_now()
    (sids_cancelled, fresh_orders) = cancel_all_stale(context, data)
    rebalanced = False
    # only bother checking out portfolio if we actually have one
    if (len(context.portfolio.positions) > 0):
        # if this is the same minute as our ordering_logic, we won't cancel those orders, they have 
        # one minute to work.
        if (fresh_orders == False):  
            # if we cancelled some orders, presumably held because of no shorts available, give them a minute
            # to cancel
            if (len(sids_cancelled) == 0):
                # if there was nothing to cancel and nothing fresh, double check that our positions haven't been
                # changed from underneath us, and/or that we have been filled all the shares we wanted 
                (new_positions, all_positions_okay) = verify_positions(context, data, context.desired_positions)
                if (not all_positions_okay):
                    log.error(str(now) + ": Portfolio problem, rebalancing")
                    context.desired_positions = new_positions
                    rebalance(context, data)
                    rebalanced = True


Would a 60/40 XIV-to-VXX split make sense? Also, if XIV failed, VXX would likely be up over 100%, and the 40% leg would offset the loss. Over time the 60% leg should give a decent return; maybe rebalance monthly?

You can trade whatever you want, but this algo was designed to exploit this specific effect. If it's not hedged then the overwhelming source of returns will be the exposure to volatility, and you have a different algo. This algo framework is also useful for trading more generalized hedged baskets provided the hedge ratios do not change, but I didn't find any other ones that I really liked.
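
As a concrete illustration of that generalization, here is a minimal sketch of what a second basket could look like inside initialize(). It reuses the UVXY/SVXY sids that are already commented out in the listing above; the weights are placeholders chosen only to show the shape of the data structure, not a recommendation to trade that pair.

# Fragment for initialize(); xiv and vxx are the sids defined earlier in the listing.
# UVXY/SVXY and the -0.5 weights are illustrative placeholders only.
uvxy = sid(41969)
svxy = sid(41968)

context.baskets = [
    pd.Series({xiv: -1.0, vxx: -1.0}),      # original short-vol basket
    pd.Series({uvxy: -0.5, svxy: -0.5}),    # hypothetical second hedged basket
]

Because percent_to_shares sizes every basket off total portfolio value, the weights on any additional basket also control how much gross exposure it adds.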

As part of the framework, I'd like to use something similar to the "tolerable" function to skip a scheduled rebalance if the legs of the proposed basket are less than 1% off from the current legs. I am noticing a number of one-share buys and sells, which may be adding unnecessary trading costs. How would I program this?

@Honver - you basically want to do the order maths yourself. Here is what I do:

import math  # needed for math.floor

allocation = 0.5
stock_price = data.current(stock, 'price')
# total number of shares we should be holding for this allocation
total_shares = math.floor((context.portfolio.portfolio_value / stock_price) * allocation)
# dollar value we would actually trade if we rebalanced this stock now
transaction_amount = abs(abs(context.portfolio.positions[stock].amount) - abs(total_shares)) * stock_price
if transaction_amount > 300:  # only trade if we would be trading more than $300
    order_target_percent(stock, allocation)
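
For what it's worth, the same idea can also be expressed as a fraction of portfolio value rather than a fixed dollar amount, which is closer to what the algo's tolerable() helper checks. A minimal sketch, assuming stock and allocation are defined as in the snippet above:

min_fraction = 0.01  # skip trades smaller than 1% of portfolio value
stock_price = data.current(stock, 'price')
current_shares = context.portfolio.positions[stock].amount
target_shares = int((context.portfolio.portfolio_value / stock_price) * allocation)
trade_value = abs(target_shares - current_shares) * stock_price
if trade_value > min_fraction * context.portfolio.portfolio_value:
    order_target(stock, target_shares)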

Thanks, Mohammad. The returns have increased slightly by avoiding trades that are less than 1% different from the current position. However, the results are deceptive, because those small trades were being used to model the short interest.

Clone Algorithm (45 clones) [backtest preview and statistics unavailable]
import datetime
import pytz
import pandas as pd
import numpy as np

# this is the grace period that we give orders to work.  Should be short for IB
# since we don't want to be unhedged for long, but unfortunately, might need to 
# be very long for Quantopian backtesting, since they do not fill during no-trade
# bars
OrderWorkingMinutes = 120

def initialize(context):
    #set_slippage(slippage.FixedSlippage(spread=0.01))
    #set_commission(commission.PerShare(cost=0.005, min_trade_cost=0.35))
    context.peak_port_val = 0.0
    context.max_dd = 0.0
    #uvxy = sid(41969)
    #svxy = sid(41968)
    #vixy = sid(40669)
    xiv = sid(40516)   
    vxx = sid(38054)
    
    context.baskets = [
        pd.Series({
            xiv: -0.99,
            vxx: -1.01
        }),
    ]
    # these will be calculated during our ordering_logic
    context.desired_positions = []    
    context.spy           = sid(8554)
    context.order_cancel_working_time = datetime.timedelta(0, OrderWorkingMinutes * 60, 0)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=140),
                      half_days=True)
    schedule_function(ordering_logic,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=1),
                      half_days=True)
    
'''
    # TEST FUNCTION DO NOT RELEASE!!!
    schedule_function(simulate_call_in,
                      date_rules.month_start(days_offset=10),
                      time_rules.market_open(minutes=30),
                      half_days=True)
    
# TEST FUNCTION
def simulate_call_in(context, data):
    basket_to_call_in = context.baskets[0]
    sid_to_call_in = list(basket_to_call_in.keys())[0]
    log.warn(str(get_now()) + ": CALLING IN " + str(sid_to_call_in.symbol))
    # simulate the cropping of one leg by 80%, this should cause our algo to
    # crop the other leg to 80% of target
    order_target_percent(sid_to_call_in, basket_to_call_in[sid_to_call_in] * 0.8)    
'''

def get_now():
    return pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

# replicate part of order_target_percent, except with an adjustable portfolio size,
# so that we can find out the correct shares given implied portfolio sizes from 
# un-rebalanced legs
def percent_to_shares(context, data, sid, percentage_of_port, port_val): 
    cash_target = port_val * percentage_of_port
    last_price = data.current(sid, 'price')
    shares_target = cash_target / last_price
    return int(shares_target)

def calculate_desired_basket(context, data, basket, port_val):
    basket_all_traded = True
    for sid in basket.index:
        if not data.can_trade(sid):
            basket_all_traded = False
    desired_basket = pd.Series({sid: (percent_to_shares(context, data, sid, basket[sid], port_val) if basket_all_traded else 0.0) for sid in basket.index})
    return desired_basket                           
    
def calculate_desired_positions(context, data, baskets):
    desired_positions =  [ calculate_desired_basket(context, data, basket, context.portfolio.portfolio_value) for basket in baskets ]
    return desired_positions

def ordering_logic(context, data):
    context.desired_positions = calculate_desired_positions(context, data, context.baskets) 
    rebalance(context, data) 

def rebalance(context, data):    
    now = get_now()
    for (weights, basket) in zip(context.baskets, context.desired_positions):
        # this is silly, but apparently necessary for Quantopian
        basket_all_traded = True
        for sid in basket.index:
            if not data.can_trade(sid):
                basket_all_traded = False
        prices = data.history(basket.index, 'price', 60, '1m')
        rets = np.log(prices[basket.index]).diff().fillna(0)
        spread_rets = (weights * rets).sum(axis=1)
        std = spread_rets.std()
        # only trade if these ETFs aren't getting fucked
        if (std < 0.002):
            if basket_all_traded:
                for sid in basket.index:
                    desired_position = basket[sid]
                    if not tolerable(context, data, sid, desired_position, context.portfolio.positions[sid].amount):
                        log.info(str(now) + ": Targeting " + str(basket[sid]) + " for " + str(sid.symbol))
                        order_target(sid, basket[sid]) 
        else:
            log.error(str(now) + ": ABORTING REBALANCE, STD TOO HIGH")
            
def cancel_all_stale(context, data):
    now = get_now()
    sids_cancelled = set()
    fresh_orders = False
    open_orders = get_open_orders()
    for security, orders in open_orders.iteritems():  
        for oo in orders:
            if ((get_datetime() - oo.dt) > context.order_cancel_working_time):
                log.warn(str(now) + ": Cancelling order placed at " + str(oo.dt) +  " for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + "!")
                sids_cancelled.add(oo.sid)
                cancel_order(oo)
            else:
                fresh_orders = True
                #log.info(str(now) + ": NOT CANCELLING order for " + str(oo.amount) + " shares of " + str(oo.sid.symbol) + " because it's fresh!")
    return (sids_cancelled, fresh_orders)

# defines tolerable departures from expected position in a stock
def tolerable(context, data, sid, a, b):
    last_price = data.current(sid, 'price')
    a_cash = a*last_price
    b_cash = b*last_price
    cash_diff = abs(a_cash - b_cash)
    port_value = context.portfolio.portfolio_value
    diff = cash_diff / port_value
    tol = 0.01 # 1% of port value is ok
    return diff < tol

# this is basically an inverse of percent_to_shares
def calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid):
    last_price = data.current(sid, 'price')
    cash_position = float(context.portfolio.positions[sid].amount) * last_price
    percentage_of_port = basket[sid]
    implied_portfolio_val = cash_position / percentage_of_port
    return implied_portfolio_val
 
# this is basically an inverse of calculate_desired_basket
def calculate_portfolio_val_implied_by_basket(context, data, basket):
    portfolio_vals_implied_by_legs = [  calculate_portfolio_val_implied_by_stock_position(context, data, basket, sid) for sid in basket.index ]
    implied_portfolio_val = min(portfolio_vals_implied_by_legs)
    return implied_portfolio_val                         

def calculate_new_smaller_basket(context, data, basket):
    now = get_now()
    implied_smaller_port_val = calculate_portfolio_val_implied_by_basket(context, data, basket)
    log.error(str(now) + ": Actual account value: " + str(context.portfolio.portfolio_value) + ", Implied (smaller) account value: " + str(implied_smaller_port_val))
    smaller_basket = calculate_desired_basket(context, data, basket, implied_smaller_port_val)
    return smaller_basket

def print_basket(basket):
    s = "{"
    for p in basket.index:
        s = s + p.symbol + ": " + str(basket[p]) + ","
    s = s + "}"
    return s    

def verify_basket(context, data, basket_desired_positions, basket_desired_weights):
    now = get_now()
    basket_okay = True
    for sid in basket_desired_positions.index:
        desired_position = basket_desired_positions[sid]
        if not tolerable(context, data, sid, desired_position, context.portfolio.positions[sid].amount):
            basket_okay = False           
    new_basket = basket_desired_positions
    if not basket_okay:
        new_basket = calculate_new_smaller_basket(context, data, basket_desired_weights)
        log.error(str(now) + ": Basket verification failed. Previous desired basket: " + print_basket(basket_desired_positions) + " New desired basket: " + print_basket(new_basket))
    return (new_basket, basket_okay)
    
def verify_positions(context, data, desired_shares):
    all_baskets_okay = True
    new_desired_positions = []
    for (basket_positions, basket_weights) in zip(context.desired_positions, context.baskets):
        (new_desired_basket, basket_okay) = verify_basket(context, data, basket_positions, basket_weights)
        if (not basket_okay): 
            all_baskets_okay = False
        new_desired_positions.append(new_desired_basket)
    return (new_desired_positions, all_baskets_okay)

def drawdown(context, data):
    port_val = context.portfolio.portfolio_value
    dd = 0.0
    if (port_val > context.peak_port_val):
        dd = 0.0
        context.peak_port_val = port_val
    else:
        dd = 1.0 - port_val / context.peak_port_val 
        context.max_dd = max(dd, context.max_dd)
    return dd

def handle_data(context, data):
    dd = drawdown(context, data)
    record(leverage=context.account.leverage)
    record(max_drawdown=context.max_dd)
    record(current_drawdown=dd)
    now = get_now()
    (sids_cancelled, fresh_orders) = cancel_all_stale(context, data)
    rebalanced = False
    # only bother checking out portfolio if we actually have one
    if (len(context.portfolio.positions) > 0):
        # if this is the same minute as our ordering_logic, we won't cancel those orders, they have 
        # one minute to work.
        if (fresh_orders == False):  
            # if we cancelled some orders, presumably held because of no shorts available, give them a minute
            # to cancel
            if (len(sids_cancelled) == 0):
                # if there was nothing to cancel and nothing fresh, double check that our positions haven't been
                # changed from underneath us, and/or that we have been filled all the shares we wanted 
                (new_positions, all_positions_okay) = verify_positions(context, data, context.desired_positions)
                if (not all_positions_okay):
                    log.error(str(now) + ": Portfolio problem, rebalancing")
                    context.desired_positions = new_positions
                    rebalance(context, data)
                    rebalanced = True


I just tried rewriting this algorithm while watching the newest season of Master of None. I threw in a couple of new factors; I don't think they really helped, but the code is a little cleaner.

Clone Algorithm (12 clones) [backtest preview and statistics unavailable]
from scipy import stats
from quantopian.pipeline.filters.morningstar import Q1500US
from quantopian.algorithm import attach_pipeline, pipeline_output, calendars
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, Latest, SimpleMovingAverage
from quantopian.pipeline.data.quandl import cboe_vix, cboe_vxv, cboe_vxd, cboe_vvix
from quantopian.algorithm import order_optimal_portfolio
import quantopian.experimental.optimize as opt
from quantopian.pipeline.data.psychsignal import twitter_withretweets
from quantopian.pipeline.data.quandl import cboe_skew
from zipline.utils.tradingcalendar import get_early_closes
from quantopian.pipeline.data.psychsignal import aggregated_twitter_withretweets_stocktwits_free as psychsignal
from scipy import optimize
import pandas as pd
import numpy as np
import statsmodels.api as sm
import talib
import scipy
import math
import datetime


def initialize(context):

    schedule_function(my_rebalance, date_rules.every_day(),
                      time_rules.market_open(hours=1))
    schedule_function(my_rebalance, date_rules.every_day(),
                      time_rules.market_close(minutes=11))
    schedule_function(my_record_vars, date_rules.every_day(),
                      time_rules.market_close())

    context.lookback = 200
    context.volatility_threshhold = .05
    attach_pipeline(make_pipeline(context), 'my_pipeline')
    context.spy = sid(8554)
    context.vxx = sid(38054)


def before_trading_start(context, data):
    context.output = pipeline_output('my_pipeline')
    context.results = context.output.dropna()[:2]
    context.bull_minus_bear = context.results["bull_minus_bear"][0]
    context.vix = context.results['vix'][0]
    iv = context.output["implied_volatility"].iloc[0]
    hv = calculate_hv(context, data, context.lookback)
    context.vrp = iv - hv


def my_rebalance(context, data):
    short_volatility_factor_1 = False
    short_volatility_factor_2 = False
    short_volatility_factor_3 = False

    days = 200

    r = np.array(data.history(sid(8554), 'price', days, '1d')[:-1])
    R = optimize.fmin(GARCH11_logL, np.array(
        [.1, .1, .1]), args=(r, context), full_output=1)

    omega = R[0][0]
    alpha = R[0][1]
    beta = R[0][2]

    sigma = omega + alpha * \
        data.current(sid(8554), 'price')**2 + beta * context.last_sigma

    if (sigma - context.last_sigma) / context.last_sigma < context.volatility_threshhold:
        short_volatility_factor_1 = True
    if context.vrp > 0:
        short_volatility_factor_2 = True
    if context.bull_minus_bear > 0:
        short_volatility_factor_3 = True

    stocks_balance = {
        symbol('XIV'): 0,
        symbol('VXX'): 0,
    }

    constraints = [opt.MaxGrossLeverage(2.0)]
    if short_volatility_factor_1 and (short_volatility_factor_2 or short_volatility_factor_3):
        stocks_balance = {
            symbol('XIV'): -1.0,
            symbol('VXX'): -1.0,
        }
    constraints.append(opt.PositionConcentration(
        stocks_balance, stocks_balance))
    order_optimal_portfolio(
        opt.TargetPortfolioWeights(stocks_balance),
        constraints=constraints,
        universe=stocks_balance.keys(),
    )


def make_pipeline(context):
    base_universe = Q1500US()
    pipe = Pipeline(
        screen=base_universe,
        columns={
            'bullish_intensity': psychsignal.bullish_intensity.latest,
            'bearish_intensity': psychsignal.bearish_intensity.latest,
            'bull_minus_bear': psychsignal.bull_minus_bear.latest,
            'vix': VIX(),
            'implied_volatility': ImpliedVolatility(window_length=200)
        }
    )
    return pipe


def my_record_vars(context, data):
    record(leverage=context.account.leverage)


def get_now():
    return pd.Timestamp(get_datetime()).tz_convert('US/Eastern')


# Source:
# https://www.quantopian.com/posts/quantcon-2016-peculiarities-of-volatility-by-dr-ernest-chan
def calculate_hv(context, data, days):
    close = data.history(context.spy, ['price'], days + 1, '1d')
    close['ret'] = (np.log(close.price) - np.log(close.price).shift(1))
    return close.ret.std() * math.sqrt(252) * 100


# Garch Model from
# http://stackoverflow.com/questions/33071740/error-while-garch-modeling-using-python
def GARCH11_logL(param, r, context):
    omega, alpha, beta = param
    n = len(r)
    s = np.ones(n)
    for i in range(3, n):
        s[i] = omega + alpha * r[i - 1]**2 + \
            beta * (s[i - 1])  # GARCH(1,1) model
    context.last_sigma = s[-1]
    logL = -((-np.log(s) - r**2 / s).sum())            # calculate the sum
    return logL


class VIX(CustomFactor):
    window_length = 1
    inputs = [cboe_vix.vix_close]

    def compute(self, today, assets, out, v_close):
        out[:] = v_close


class ImpliedVolatility(CustomFactor):
    inputs = [cboe_vix.vix_close]

    def compute(self, today, assets, out, vix):
        out[:] = np.mean(vix, axis=0)

        

Hmm, your algo doesn't ensure that if it fails to short enough shares, it still ends up with a hedged basket. That was where the majority of the complexity came from.

I added a PositionConcentration constraint to ensure the two legs always have equal concentration. Is that different somehow?

stocks_balance = {
    symbol('XIV'): -1.0,
    symbol('VXX'): -1.0,
}
constraints.append(opt.PositionConcentration(
    stocks_balance, stocks_balance))
order_optimal_portfolio(
    opt.TargetPortfolioWeights(stocks_balance),
    constraints=constraints,
    universe=stocks_balance.keys(),
)

class PositionConcentration(min_weights, max_weights, default_min_weight=0.0, default_max_weight=0.0, etf_lookthru=None)

https://www.quantopian.com/help#constraints

Well, yes. The trick is to handle the case where you don't get filled on the orders for one leg, either because it's short-sale restricted or you don't have the locates, and even to handle the case where you are bought in (though that latter one hasn't happened to me yet).
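
To make that rescaling concrete, here is a small standalone sketch of the same arithmetic that calculate_portfolio_val_implied_by_basket and calculate_new_smaller_basket perform: if one short leg only partially fills, the whole basket is re-sized to the smaller account value implied by that leg, so the other leg gets cropped to match. The prices, fills, and account size below are made up.

import pandas as pd

# Made-up example: $10k account, short XIV and VXX in equal parts, but only
# 80% of the XIV short actually filled (e.g. no locates were available).
port_val = 10000.0
weights = pd.Series({'XIV': -1.0, 'VXX': -1.0})
prices  = pd.Series({'XIV': 80.0, 'VXX': 40.0})
filled  = pd.Series({'XIV': -100, 'VXX': -250})   # wanted -125 XIV, only got -100

# Portfolio value implied by each leg = (shares * price) / weight
implied = (filled * prices) / weights   # XIV: 8000, VXX: 10000
smaller_port_val = implied.min()        # 8000

# Recompute the whole basket at the smaller implied account value
new_targets = ((weights * smaller_port_val) / prices).astype(int)
print(new_targets)   # XIV -100, VXX -200: smaller, but still a balanced hedge

The min() is what keeps the basket hedged: every leg is shrunk down to the most constrained leg instead of being left at full size.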

Ah I see! Thanks, I'll try to fix that.