I have been trying to use minimum-variance optimization for a few ETFs, but I am getting completely different results with daily vs. minute backtests, even after controlling for the closing price. I merged the code from a few places (the equal-weight sample algorithm, David's min-variance code using scipy's fmin, etc.), so you might find it a little inconsistent. I will post the minute-based backtest in the next post. One caveat: the weights are not constrained to be positive, which needs to be changed, but that still doesn't explain the differences, since the log shows exactly the same numbers, at least after the first rebalancing.
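On the positivity issue mentioned above: one way to enforce long-only weights (shown here as a standalone sketch with synthetic returns, not the Quantopian version) is to replace the unconstrained fmin with scipy.optimize.minimize using SLSQP, bounds of [0, 1], and an explicit sum-to-one constraint. The helper name min_variance_weights and the toy data are my own, for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns):
    """Long-only minimum-variance weights for a (T x N) returns array.

    Uses SLSQP with per-asset bounds [0, 1] and a sum-to-one equality
    constraint, instead of an unconstrained n-1 parameterization.
    """
    cov = np.cov(returns, rowvar=False)
    n = cov.shape[0]
    x0 = np.full(n, 1.0 / n)  # start from equal weight
    res = minimize(
        lambda w: w @ cov @ w,                     # portfolio variance w' Sigma w
        x0,
        method='SLSQP',
        bounds=[(0.0, 1.0)] * n,                   # no shorting, no leverage
        constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}],
    )
    return res.x

# Toy example: three synthetic daily return series
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=(500, 3))
w = min_variance_weights(rets)
print(w, w.sum())
```

With bounds in place there is no need for the "last weight = 1 - sum of the others" trick, since the constraint handles normalization directly.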


import numpy as np
import pandas as pd
from scipy.optimize import fmin
from math import sqrt
import datetime


def initialize(context):
    set_symbol_lookup_date('2013-01-01')
    context.stocks = symbols('SPY', 'MDY', 'VBR')
    context.rebalance_date = None
    context.rebal_days = 28
    context.price_window = 28
    context.data = {i: [] for i in context.stocks}
    context.rebalance_hour_start = 0
    context.rebalance_hour_end = 20


def handle_data(context, data):
    # Get the current exchange time, in the exchange timezone
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

    # History of closing prices
    price_history = history(bar_count=context.price_window, frequency='1d',
                            field='close_price')

    if context.rebalance_date is None or exchange_time > context.rebalance_date + datetime.timedelta(days=context.rebal_days):
        # Check if it's in the right time window
        if exchange_time.hour < context.rebalance_hour_start or exchange_time.hour > context.rebalance_hour_end:
            return

        print "Rebalance Time - %s" % exchange_time

        # Do nothing if there are open orders:
        if has_orders(context):
            print('has open orders - doing nothing!')
            return

        values = pd.DataFrame(price_history[context.stocks])
        context.df = values.pct_change().dropna()

        x = Opt(context, data)
        weights = x.get_weights()
        # log.info(weights)
        for i in context.stocks:
            print "Order %s at price %f at %f percent" % (i.symbol, data[i].price, weights[i] * 0.99)
            order_target_percent(i, weights[i] * 0.99, limit_price=None, stop_price=None)

        context.rebalance_date = exchange_time


def has_orders(context):
    # Return True if there are pending orders.
    has_orders = False
    for sec in context.stocks:
        orders = get_open_orders(sec)
        if orders:
            for oo in orders:
                message = 'Open order for {amount} shares in {stock}'
                message = message.format(amount=oo.amount, stock=sec)
                log.info(message)
            has_orders = True
    return has_orders


class Opt():
    def __init__(self, context, data):
        self.context = context
        self.data = data

    def get_weights(self):
        context = self.context
        # An n-1 array is sent into fmin; the last weight is added back
        # inside the objective so the weights always sum to one
        guess = np.ones(len(context.stocks) - 1, dtype=float) * (1. / len(context.stocks))
        opt = fmin(self.min_var, guess)
        # The last weight is (1 - the sum of the others)
        return {sym: np.append(opt, 1 - sum(opt))[i] for i, sym in enumerate(context.df)}

    def min_var(self, weights):
        context = self.context
        weights = np.append(weights, 1 - sum(weights))
        return pvar(context.df, weights)


def pvar(P, w=None):
    '''Variance of a returns portfolio P with weights w.'''
    if w is not None:
        var = 0
        C = P.corr().as_matrix()
        s = [i for i in P.std()]
        for i in xrange(len(s)):
            for j in xrange(len(s)):
                var += w[i] * w[j] * s[i] * s[j] * C[i, j]
        return var
    return P.cov().mean().mean()
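A side note on pvar: the double loop over sigma_i * sigma_j * rho_ij is just the quadratic form w' Sigma w, since each such product is the (i, j) entry of the covariance matrix. A vectorized version (a standalone check here, with synthetic data standing in for context.df, and to_numpy() in place of the deprecated as_matrix()) gives the same number and is faster:

```python
import numpy as np
import pandas as pd

# Synthetic returns frame standing in for context.df
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(0, 0.01, size=(100, 3)),
                  columns=['SPY', 'MDY', 'VBR'])
w = np.array([0.5, 0.3, 0.2])

# Loop version, as in pvar(): sum over i, j of w_i w_j s_i s_j C_ij
C = df.corr().to_numpy()
s = df.std().to_numpy()
loop_var = sum(w[i] * w[j] * s[i] * s[j] * C[i, j]
               for i in range(len(s)) for j in range(len(s)))

# Vectorized equivalent: w' Sigma w with the sample covariance matrix
vec_var = w @ df.cov().to_numpy() @ w

print(loop_var, vec_var)
```

This does not change the optimization result, but it makes the objective cheaper for fmin to evaluate on each iteration.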