Not sure if I shared this already. I cleaned it up and got the performance looking really good. I'd appreciate feedback, particularly on how to ensure that the optimizer is working properly. --Grant

[Backtest widget: cumulative performance vs. benchmark, with total returns, alpha, beta, Sharpe, Sortino, max drawdown, benchmark returns, and volatility over 1-, 3-, 6-, and 12-month windows; metric values not captured in this export]

# Adapted from:
# Li, Bin, and Steven Hoi. "On-Line Portfolio Selection with Moving Average
# Reversion." The 29th International Conference on Machine Learning (ICML 2012).
# http://icml.cc/2012/papers/168.pdf

import numpy as np
from scipy import optimize
import pandas as pd

def initialize(context):
    context.spy = sid(8554)
    context.eps = 1.0
    schedule_function(trade, date_rules.every_day(),
                      time_rules.market_open(minutes=60))
    set_benchmark(symbol('QQQ'))

def before_trading_start(context):
    # universe: top 20 NASDAQ stocks by market cap
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(20))
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]

def handle_data(context, data):
    record(leverage=context.account.leverage)

def trade(context, data):
    # Drop stocks with no data. Calling remove() inside a
    # "for stock in context.stocks" loop skips elements, so rebuild
    # the list instead.
    context.stocks = [stock for stock in context.stocks if stock in data]
    record(num_stocks=len(context.stocks))

    # drop de-listed stocks and leveraged ETFs (same rebuild pattern)
    context.stocks = [stock for stock in context.stocks
                      if stock.security_end_date >= get_datetime()
                      and stock not in security_lists.leveraged_etf_list]

    prices = history(8 * 390, '1m', 'price')
    prices = pd.ewma(prices, span=390).as_matrix(context.stocks)

    # wait until all open orders have filled
    for stock in context.stocks:
        if get_open_orders(stock):
            return

    # current portfolio weight vector b_t
    b_t = []
    for stock in context.stocks:
        b_t.append(context.portfolio.positions[stock].amount * data[stock].price)
    m = len(b_t)
    b_0 = np.ones(m) / m
    denom = np.sum(b_t)
    if denom == 0.0:
        b_t = np.copy(b_0)
    else:
        # normalize in place (assigning to context.b_t here would leave the
        # unnormalized vector to be passed to the optimizer below)
        b_t = np.divide(b_t, denom)

    # predicted price relatives: window mean over the latest smoothed price
    x_tilde = []
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:, i])
        x_tilde.append(mean_price / prices[-1, i])

    bnds = tuple((0, 1) for _ in context.stocks)
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x: np.dot(x, x_tilde) - context.eps})
    res = optimize.minimize(norm_squared, b_0, args=(b_t,),
                            jac=norm_squared_deriv, method='SLSQP',
                            constraints=cons, bounds=bnds,
                            options={'disp': False, 'maxiter': 100,
                                     'iprint': 1, 'ftol': 1})
    record(norm_sq=norm_squared(res.x, b_t))

    allocation = res.x
    allocation[allocation < 0] = 0
    allocation = allocation / np.sum(allocation)
    if res.success and (np.dot(allocation, x_tilde) - context.eps > 0):
        allocate(context, data, allocation)

def allocate(context, data, desired_port):
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, desired_port[i])
    # close out anything no longer in the universe
    for stock in data:
        if stock not in context.stocks:
            order_target_percent(stock, 0)

def norm_squared(b, b_t):
    # objective: 0.5 * ||b - b_t||^2
    delta_b = b - b_t
    return 0.5 * np.dot(delta_b, delta_b)

def norm_squared_deriv(b, b_t):
    # gradient of the objective above
    return b - b_t
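On the question of verifying the optimizer: the projection step can be run standalone, outside Quantopian, and checked against its own constraints. The sketch below (the four-asset weights, price relatives, and eps value are made up for illustration) also uses scipy.optimize.check_grad to confirm the analytic gradient matches a finite-difference estimate:

```python
import numpy as np
from scipy import optimize

def norm_squared(b, b_t):
    delta = b - b_t
    return 0.5 * np.dot(delta, delta)

def norm_squared_deriv(b, b_t):
    return b - b_t

def olmar_step(b_t, x_tilde, eps):
    """Project b_t onto {b : sum(b)=1, 0<=b<=1, b.x_tilde >= eps}."""
    m = len(b_t)
    b_0 = np.ones(m) / m
    bnds = tuple((0.0, 1.0) for _ in range(m))
    cons = ({'type': 'eq', 'fun': lambda b: np.sum(b) - 1.0},
            {'type': 'ineq', 'fun': lambda b: np.dot(b, x_tilde) - eps})
    return optimize.minimize(norm_squared, b_0, args=(b_t,),
                             jac=norm_squared_deriv, method='SLSQP',
                             bounds=bnds, constraints=cons)

# hypothetical inputs: an overweight portfolio and predicted price relatives
b_t = np.array([0.7, 0.1, 0.1, 0.1])
x_tilde = np.array([0.95, 1.00, 1.05, 1.10])
eps = 1.02

# gradient sanity check: analytic jacobian vs. finite differences
grad_err = optimize.check_grad(norm_squared, norm_squared_deriv,
                               np.array([0.4, 0.3, 0.2, 0.1]), b_t)

res = olmar_step(b_t, x_tilde, eps)

# verify the solution satisfies every constraint (up to solver tolerance)
assert res.success
assert abs(np.sum(res.x) - 1.0) < 1e-6
assert (res.x >= -1e-9).all() and (res.x <= 1 + 1e-9).all()
assert np.dot(res.x, x_tilde) >= eps - 1e-6
print(res.x)
```

If any of these assertions fails in a live run, logging `res.status` and `res.message` usually points at the cause (e.g. an infeasible constraint set when eps is too aggressive for the predicted relatives).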

We have migrated this algorithm to work with a new version of the Quantopian API. The code is different from the original version, but the investment rationale of the algorithm has not changed. We've put everything you need to know here on one page.
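A portability note for anyone porting the code off-platform: the `pd.ewma` function used for smoothing was removed in later pandas versions; the equivalent today is the `.ewm(...).mean()` method. A minimal sketch with synthetic minute bars (the random-walk prices, seed, and column names are made up for illustration):

```python
import numpy as np
import pandas as pd

# synthetic minute-bar prices for two hypothetical assets, 8 trading days
idx = pd.date_range("2015-01-02 09:31", periods=8 * 390, freq="min")
rng = np.random.default_rng(0)
prices = pd.DataFrame(
    100.0 + 0.05 * rng.standard_normal((8 * 390, 2)).cumsum(axis=0),
    index=idx, columns=["A", "B"],
)

# old API: smoothed = pd.ewma(prices, span=390)
smoothed = prices.ewm(span=390).mean()

# x_tilde as in the algorithm: window mean over the latest smoothed price
x_tilde = smoothed.mean() / smoothed.iloc[-1]
print(x_tilde)
```

Both calls use the same `span`-based decay with `adjust=True` defaults, so the smoothed series should match the old `pd.ewma` output.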