worthy of Q fund?

Just curious if this algo would make any sense at all in the conceived Quantopian crowd-sourced hedge fund?

If anyone in the crowd has an opinion, or if someone at Q has time to dig into it, I would be glad to get a critique (including capacity at higher levels of capital). Personally, I think it is a goof-ball idea, but maybe not?

And if you are looking to enter the contest with a few clicks, I think this algo will make it past the various checks (although you might want to fiddle with it a bit to see if it can be improved; the Sharpe is low due to volatility, and it trades daily, so maybe commissions eat into returns?).

# Adapted from:
# Li, Bin, and Steven Hoi. "On-Line Portfolio Selection with Moving Average Reversion." The 29th International Conference on Machine Learning (ICML 2012), 2012.
# http://icml.cc/2012/papers/168.pdf

import numpy as np
from scipy import optimize
import pandas as pd

def initialize(context):

    context.eps = 1.005
    context.pct_index = 0.5  # percentage short QQQ
    context.leverage = 2.5

    print 'context.eps = ' + str(context.eps)
    print 'context.pct_index = ' + str(context.pct_index)
    print 'context.leverage = ' + str(context.leverage)

    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))

def before_trading_start(context, data):

    # universe: 30 largest NASDAQ stocks by market cap
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(30))
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]

    # keep only stocks with data (build a new list rather than removing
    # items from the list while iterating over it)
    context.stocks = [stock for stock in context.stocks if stock in data]

def handle_data(context, data):
    record(leverage=context.account.leverage)

def get_allocation(context, data, prices):

    # smooth prices with an exponential moving average (one trading day = 390 minutes)
    prices = pd.ewma(prices, span=390).as_matrix(context.stocks)

    # current portfolio weights b_t
    b_t = []
    for stock in context.stocks:
        b_t.append(context.portfolio.positions[stock].amount * data[stock].price)

    m = len(b_t)
    b_0 = np.zeros(m)
    denom = np.sum(b_t)

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:
        b_t = np.divide(b_t, denom)

    # price relatives: mean price over the window divided by the latest price
    x_tilde = []
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:, i])
        x_tilde.append(mean_price / prices[-1, i])

    bnds = tuple((0, 1) for stock in context.stocks)

    # fully invested, with predicted growth of at least eps
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x: np.dot(x, x_tilde) - context.eps})

    # find the portfolio closest to b_t that satisfies the constraints
    res = optimize.minimize(norm_squared, b_0, args=b_t, jac=norm_squared_deriv,
                            method='SLSQP', constraints=cons, bounds=bnds,
                            options={'disp': False, 'maxiter': 100, 'iprint': 1, 'ftol': 1e-6})

    allocation = res.x
    allocation[allocation < 0] = 0
    allocation = allocation / np.sum(allocation)

    if res.success and (np.dot(allocation, x_tilde) - context.eps > 0):
        return (allocation, np.dot(allocation, x_tilde))
    else:
        return (b_t, 1)

def trade(context, data):

    # keep only stocks with data
    context.stocks = [stock for stock in context.stocks if stock in data]

    # drop de-listed stocks & leveraged ETFs
    context.stocks = [stock for stock in context.stocks
                      if stock.security_end_date >= get_datetime()
                      and stock not in security_lists.leveraged_etf_list]

    # skip this bar if any orders are open
    if get_open_orders():
        return

    # find the average weighted allocation over a range of trailing window lengths
    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0
    prices = history(8*390, '1m', 'price')
    for n in range(1, 9):
        (allocation, weight) = get_allocation(context, data, prices.tail(n*390))
        sum_weighted_port += weight * allocation
        sum_weights += weight

    allocation = sum_weighted_port / sum_weights
    allocation = allocation / np.sum(allocation)

    allocate(context, data, allocation)

def allocate(context, data, desired_port):

    # order long stocks
    long_pct = 1.0 - context.pct_index
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, long_pct * context.leverage * desired_port[i])

    qqq = sid(19920)  # QQQ

    # short the index
    order_target_percent(qqq, -context.leverage * context.pct_index)

    # close any positions that are no longer in the universe
    for stock in data:
        if stock not in context.stocks and stock != qqq:
            order_target_percent(stock, 0)

def norm_squared(b, *args):
    # objective: 0.5 * ||b - b_t||^2
    b_t = np.asarray(args)
    delta_b = b - b_t
    return 0.5 * np.dot(delta_b, delta_b.T)

def norm_squared_deriv(b, *args):
    # gradient of norm_squared with respect to b
    b_t = np.asarray(args)
    delta_b = b - b_t
    return delta_b
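To see what the SLSQP step in `get_allocation` is doing, here is a standalone sketch with plain NumPy/SciPy (the price relatives and starting weights are made up): it finds the portfolio closest to the current weights that is fully invested and has predicted growth of at least `eps`.

```python
import numpy as np
from scipy import optimize

# Hypothetical price relatives (EWMA mean price / latest price) for 4 stocks;
# values > 1 mean the stock sits below its moving average and is expected to revert up.
x_tilde = np.array([1.02, 0.99, 1.01, 0.98])
b_t = np.array([0.25, 0.25, 0.25, 0.25])  # current portfolio weights
eps = 1.005                               # required predicted growth

def norm_squared(b, b_prev):
    # objective: half the squared distance from the current weights
    delta = b - b_prev
    return 0.5 * np.dot(delta, delta)

def norm_squared_deriv(b, b_prev):
    # gradient of the objective
    return b - b_prev

cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},             # fully invested
        {'type': 'ineq', 'fun': lambda x: np.dot(x, x_tilde) - eps})  # growth >= eps
bnds = tuple((0, 1) for _ in x_tilde)                                 # long-only

res = optimize.minimize(norm_squared, b_t, args=(b_t,), jac=norm_squared_deriv,
                        method='SLSQP', bounds=bnds, constraints=cons)
w = res.x
```

For these numbers the optimizer shifts weight toward the stocks furthest below their moving averages while keeping the portfolio on the simplex.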
We have migrated this algorithm to work with a new version of the Quantopian API. The code is different than the original version, but the investment rationale of the algorithm has not changed. We've put everything you need to know here on one page.

Maybe I am missing something, as I have just taken a cursory look, but in the end you seem to be earning about 2x SPY with 2.5x leverage. I don't see the appeal?

Brenda has a good point. I think it is also important that your correlation to the field is low or negative. Ultimately, I think Q is looking for your equity curve to be a straight line with a positive slope. y = 0.10x + 0 is a good line to aim for!

Is the return relative to SPY at all relevant for a market-neutral strategy? It seems like the "pure alpha" market is not so interested in high returns, just in anything that doesn't move with the market. I see that the Vanguard Market Neutral Fund Investor Shares (VMNFX) is a long-short fund in the 3% per year range, benchmarked against the 3-month T-bill. Not so sexy, but they are managing over $400M. What am I missing?

@Grant, thanks for sharing this, I'll try to take a look at it. I just cloned it to try to run a longer backtest, but it seems to fail on an error even when just changing the start date to 2010. Any ideas why?

Disclaimer
The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

A discussion of the leverage without any margin (borrowing) might be educational.

Hi Grant,

Thanks for sharing this! I'd be happy to share my thought process for how we'd evaluate this for Quantopian's fund. We evaluate strategies based solely on the exhaust data, so first I'll run through what I'd look at just from backtest results; then I'll speak to the extra info you've shared by pointing us to the paper you referenced and by exposing the logic.

The first thing we do is run the longest historical backtest we can get to complete. For contest algos we do this using a test harness that lets us dump in any algo by its identifier, along with any desired start/end dates, starting capital base, and cost models. In this case, however, I just cloned your algo and ran backtests by hand. The longest backtest I could get to complete started 1/1/2011 (attached here). I didn't dig into why it breaks earlier, but typically we prefer at least a 5-10 year backtest for evaluation. Because I could access the leverage in your algo, I set it to 1.0 so I could evaluate the strategy separately from any leverage effects. While the results of this backtest would pass our beta filter, they would not pass our first-level performance filters for the fund (annual returns > 7%, annual Sharpe > 0.60), so this algo wouldn't make it past this initial stage of evaluation for further analysis.

In this case, however, you've shared the details of your strategy and your code directly, so I took a little more time poking around to try to see why this algo doesn't look that hot. As a rule, the idea of choosing an interesting academic paper to replicate and tweak is a great one - I definitely support that approach!

In this case the paper describes a method for selecting a long-only portfolio of stocks based on mean reversion - seems reasonable - but it's not designed as a hedged strategy, which is what we're interested in for our fund. The authors specifically mention that the strategy is designed for 'self-financed' portfolios, and they aren't going to explore margin/short selling. At this point I was curious to check: how does this algo do if you remove the hedge you've added in QQQ and dial back to a pure replication of the paper? I checked that out, and indeed it looks much more profitable (see the next backtest below; Sharpe > 1.0 for the full backtest period), but alas, now you're back to a beta of just about 1.0. What that tells me is that this strategy derives the majority of its power from following the market, and that it's no longer consistently profitable when hedged, or at least when hedged the way you've implemented it here (a fixed 50% allocation to shorting QQQ).

So where would I go from here? Personally, I'd choose a paper or template that exploits mean reversion from a long/short approach natively, rather than bolting on the hedge after the fact. There's a template of a long/short mean reversion algo that we've recently added to our Help pages. If I wanted to pursue the techniques outlined in this paper, I'd start with the short side, since that seems to be where things are falling apart. Rather than allocating a static 50% of my portfolio value to QQQ, I'd be interested in trying to allocate 50% long in stocks that are under-performing and 50% short in individual stocks that are out-performing.

I hope that's helpful, and I'd be happy to look at any other interesting strategies you'd like to send our way!

Best regards, Jess
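To make the beta point concrete: with synthetic daily returns (made-up numbers, not the thread's backtests), a fixed 50% short in the index only halves the market exposure of a strategy whose beta is about 1, while a beta-weighted hedge drives it to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns: strategy = market plus idiosyncratic noise (beta ~ 1)
mkt = rng.normal(0.0004, 0.01, 1000)
strat = mkt + rng.normal(0.0002, 0.005, 1000)

def beta(r, m):
    # ordinary regression beta of r on m
    return np.cov(r, m, ddof=0)[0, 1] / np.var(m)

hedged_half = strat - 0.5 * mkt               # fixed 50% index short
hedged_full = strat - beta(strat, mkt) * mkt  # beta-weighted hedge
```

Here `beta(strat, mkt)` comes out near 1, `beta(hedged_half, mkt)` near 0.5, and `beta(hedged_full, mkt)` near 0, which is why the fixed hedge leaves the strategy still mostly riding the market.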
[Attached backtest: the same algorithm migrated to use schedule_function and before_trading_start, with context.leverage set to 1.0.]

Here's the long-only backtest for reference.

[Attached backtest: the same algorithm with context.leverage set to 2.0 and the QQQ short (the order_target_percent call on QQQ) commented out.]

Thanks Jess, I sorta figured it was a kludge ("a goof-ball idea," as I say above), but on the surface it looked OK. I didn't realize that you have annual returns > 7% (independent of leverage?) and Sharpe > 0.6 filters for the fund, so that's good guidance.
The mean reversion code I've played around with seems to work pretty well on the long side (both the analytic OLMAR and the iterative optimization), and I have to think it should work on the short side as well. The problem, it would seem, is to have a bunch of stocks that are trending upward and others that are trending downward, say 25 of each. I gather your new API will aid in sorting out a pool of longs and a pool of shorts?

Grant

Jess & Justin,

Better? When you get the chance, I'd be interested in your feedback.

Grant

import numpy as np
from scipy import optimize
import pandas as pd
import datetime

def initialize(context):

    context.eps = 1.0
    context.leverage = 1.0

    schedule_function(trade, date_rules.week_start(days_offset=1), time_rules.market_open(minutes=60))

    set_benchmark(symbol('QQQ'))

def before_trading_start(context, data):

    # universe: 50 largest NASDAQ stocks by market cap
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(50))
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]

def handle_data(context, data):

    leverage = context.account.leverage
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    record(leverage=leverage)

    # drop soon-to-be-de-listed stocks & leveraged ETFs (build a new list
    # rather than removing items from the list while iterating over it)
    context.stocks = [stock for stock in context.stocks
                      if stock.security_end_date >= get_datetime() + datetime.timedelta(days=5)
                      and stock not in security_lists.leveraged_etf_list]

    # keep only stocks with data
    context.stocks = [stock for stock in context.stocks if stock in data]

def get_allocation(context, data, prices):

    prices = pd.ewma(prices, span=390).as_matrix(context.stocks)

    b_t = []
    for stock in context.stocks:
        b_t.append(abs(context.portfolio.positions[stock].amount * data[stock].price))

    m = len(b_t)
    b_0 = 1.0 * np.ones(m) / m
    denom = np.sum(b_t)
    if denom > 0:
        b_t = np.divide(b_t, denom)
    else:
        b_t = b_0

    # price relatives, flipped for stocks classified as shorts
    x_tilde = []
    context.ls = {}
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:, i])
        price_rel = mean_price / prices[-1, i]
        if price_rel < 0.993:
            price_rel = 1.0 / price_rel
            context.ls[stock] = -1  # short
        elif price_rel > 1.007:
            context.ls[stock] = 1   # long
        else:
            price_rel = 1.0
            context.ls[stock] = 0   # no position
        x_tilde.append(price_rel)

    bnds = tuple((0, 1) for stock in context.stocks)

    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x: np.dot(x, x_tilde) - context.eps})

    res = optimize.minimize(norm_squared, b_t, args=b_t, jac=norm_squared_deriv,
                            method='SLSQP', constraints=cons, bounds=bnds,
                            options={'disp': False, 'maxiter': 100, 'iprint': 1, 'ftol': 1e-8})

    allocation = res.x
    allocation[allocation < 0] = 0
    allocation = allocation / np.sum(allocation)

    if res.success and (np.dot(allocation, x_tilde) - context.eps > 0):
        return (allocation, np.dot(allocation, x_tilde))
    else:
        return (b_t, 1)

def trade(context, data):

    # find the average weighted allocation over a range of trailing window lengths
    prices = history(20*390, '1m', 'price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)

    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0
    for n in range(1, 21):
        (allocation, weight) = get_allocation(context, data, prices.tail(n*390))
        sum_weighted_port += weight * allocation
        sum_weights += weight

    allocation = sum_weighted_port / sum_weights
    denom = np.sum(allocation)
    if denom > 0:
        allocation = allocation / denom

    allocate(context, data, allocation)

def allocate(context, data, desired_port):

    # skip if any orders are open
    if get_open_orders():
        return

    pct_ls = 0
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock] * desired_port[i]
        order_target_percent(stock, context.leverage * context.ls[stock] * desired_port[i])

    # hedge the net long/short exposure with QQQ
    order_target_percent(sid(19920), -context.leverage * pct_ls)
    record(pct_ls=pct_ls)

    for stock in data:
        if stock not in context.stocks + [sid(19920)]:
            order_target_percent(stock, 0)

def norm_squared(b, *args):
    b_t = np.asarray(args)
    delta_b = b - b_t
    return 0.5 * np.dot(delta_b, delta_b.T)

def norm_squared_deriv(b, *args):
    b_t = np.asarray(args)
    delta_b = b - b_t
    return delta_b

@Grant Kiehne, thank you very much.

Hi Grant, I ran a 10-year backtest on this version to check it out - see if this notebook with the pyfolio tear sheet works. The biggest concern that jumps out is that most of the positive returns occur in the past 2.5 years. Prior to 2013 the results look pretty mixed. Is there a reason that you think this strategy would work well only recently? Does it make sense to you that you'd see such steeply positive returns all of a sudden in the first half of 2015? In the absence of an explanation, what it looks like is that the results are overly datamined on recent historical data. I'd definitely encourage you to look for strategies with long-running, consistent performance; the research platform with pyfolio is also a great tool at your disposal. Best, Jess

Thanks Jess. I'm not sure why it does better recently. I would think that more recent results would weigh more heavily. Say it'd done well since 2005, except fizzled in the last year? Then you'd say "Oh, looks good except there's too much risk going forward." I gather you need a straight line. Anyway, I'll see what I can figure out. --Grant

https://r-forge.r-project.org/scm/viewvc.php/*checkout*/pkg/quantstrat/sandbox/backtest_musings/strat_dev_process.pdf?root=blotter

This is a nice write-up on process-oriented development of trading strategies.
I like the emphasis on analyzing signal strength and decay in isolation, and in advance of daring to backtest. The above looks like a variant of your OLMAR stuff - perhaps a research notebook analyzing the expected returns of your mean-reversion signals would help focus your research? I am not sure off-hand if you'd have to extricate the signals from the optimization step; I think you would and could...

Thanks Simon,

Grant

Forgot to mention, I liked the idea of averaging the allocation over several periods. I wonder how that compares to calculating an allocation based on averages.

Hi Grant, had you considered selecting the universe based on another factor model, e.g. QualityMinusJunk?

Simon,

Yes, the assumption here is that if mean reversion works on the long side, it'll work on the short side, too. The allocation over multiple periods comes straight out of the OLMAR paper (the so-called BAH(OLMAR) algo). Regarding the reference you provided, I gather that the gist of it is to be more systematic, but this whole hedge fund idea may have little or no payback; turning the effort into an 80-hour-plus research effort might never pay off. It'd be nice if the Q folks would fund just about anything, but adjusted for risk. The algo I posted above should be worth at least $100 of capital, no? If one has to write an uber-algo fundable at $5M-$10M, then the barrier to entry is too high, in my opinion.

Grant
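The BAH(OLMAR)-style averaging mentioned above (each trailing window's allocation weighted by its predicted portfolio growth) can be sketched in isolation; the allocations and growth scores below are made up, not taken from any backtest:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical allocations from 8 trailing window lengths (each sums to 1),
# and a "weight" per window standing in for its predicted growth dot(b, x_tilde)
allocations = [rng.dirichlet(np.ones(5)) for _ in range(8)]
weights = [1.0 + 0.01 * n for n in range(1, 9)]  # made-up growth scores

sum_weighted_port = np.zeros(5)
sum_weights = 0.0
for b, w in zip(allocations, weights):
    sum_weighted_port += w * b
    sum_weights += w

combined = sum_weighted_port / sum_weights  # still a valid allocation
combined = combined / combined.sum()        # renormalize against rounding
```

Because each input allocation lies on the simplex and the weights are positive, the weighted average stays on the simplex, so it can be handed straight to the order logic.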

I am not sure about less funding for less risk; if an algo is exploiting a genuine edge, I think one wants to sink the knife to the hilt, but if it doesn't, you'd just be tying up capital on something which is in the best case random, in the worst case overfitted with negative expectation.

Several of the ideas I have been writing algos for, on the belief that they work, show no evidence of any such effect when I look in Research. It's depressing, to be honest, but at least if you see a scatter plot with one big round lump, it gets you thinking about conditioning or modifying your signal, rather than twiddling with parameters in a backtest looking for an extra 15% over 10 years, which I am very susceptible to.

As for 80 hours, I guess I am committed to spending that time on this one way or another, and I am coming around to the idea that spending it on algo writing and backtesting is less effective than spending it in Research.

All good points. I should probably take the plunge and start using the research platform. A more rational path, I suppose.

Jess,

Here's another one for you. At least over the 2-year backtest period, it would seem to hit the mark.

Grant
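The attached algorithm swaps the SLSQP optimization for OLMAR's closed-form update (Algorithm 2 in the Li & Hoi paper) followed by a Euclidean projection onto the simplex (Duchi et al., 2008). A minimal standalone sketch of that step, with made-up numbers:

```python
import numpy as np

def simplex_projection(v, z=1.0):
    # Euclidean projection of v onto {w : sum(w) = z, w >= 0} (Duchi et al., 2008)
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]  # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def olmar_step(b_t, x_tilde, eps):
    # Algorithm 2: move b_t toward the price relatives until predicted growth >= eps
    x_bar = x_tilde.mean()
    denom = np.linalg.norm(x_tilde - x_bar) ** 2
    lam = 0.0 if denom == 0.0 else max(0.0, (eps - np.dot(b_t, x_tilde)) / denom)
    return simplex_projection(b_t + lam * (x_tilde - x_bar))

b_t = np.ones(4) / 4
x_tilde = np.array([1.03, 0.99, 1.01, 0.97])
b_new = olmar_step(b_t, x_tilde, eps=1.005)
# b_new tilts toward the stocks with the largest price relatives, and for these
# numbers np.dot(b_new, x_tilde) lands exactly on eps
```

Unlike the SLSQP version, this update is a single analytic step, so there is no solver to fail to converge.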

import numpy as np
import pandas as pd
import datetime

def initialize(context):

    context.eps = 1.0
    context.leverage = 1.0

    set_benchmark(symbol('QQQ'))

    # universe: 50 largest NASDAQ stocks by market cap
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(50))
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]

def handle_data(context, data):

    leverage = context.account.leverage

    if leverage >= 3.0:
        print "Leverage >= 3.0"

    record(leverage=leverage)

    # drop soon-to-be-de-listed stocks & leveraged ETFs (build a new list
    # rather than removing items from the list while iterating over it)
    context.stocks = [stock for stock in context.stocks
                      if stock.security_end_date >= get_datetime() + datetime.timedelta(days=5)
                      and stock not in security_lists.leveraged_etf_list]

    # keep only stocks with data
    context.stocks = [stock for stock in context.stocks if stock in data]

    prices = history(20*390, '1m', 'price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)

    # skip this bar if any orders are open
    for stock in context.stocks:
        if get_open_orders(stock):
            return

    # average the weighted allocation over a range of trailing window lengths
    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0

    for n in range(1, 21):
        (weight, weighted_port) = get_weighted_port(data, context, prices.tail(n*390))
        sum_weighted_port += weighted_port
        sum_weights += weight

    allocation_optimum = sum_weighted_port / sum_weights

    rebalance_portfolio(data, context, allocation_optimum)

def get_weighted_port(data, context, prices):

    prices = pd.ewma(prices, span=390).as_matrix(context.stocks)

    # current portfolio weights (absolute, since positions may be short)
    b_t = np.zeros(len(context.stocks))
    for i, stock in enumerate(context.stocks):
        b_t[i] = abs(context.portfolio.positions[stock].amount * data[stock].price)

    denom = np.sum(b_t)
    # test for divide-by-zero case
    if denom > 0:
        b_t = np.divide(b_t, denom)
    else:
        b_t = np.ones(len(context.stocks)) / len(context.stocks)

    x_tilde = np.zeros(len(context.stocks))

    context.ls = {}
    for stock in context.stocks:
        context.ls[stock] = 0

    # find the relative exponential moving average price for each security
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:, i])
        price_rel = mean_price / prices[-1, i]
        if price_rel < 0.997:
            price_rel = 1.0 / price_rel
            context.ls[stock] += -1
        elif price_rel > 1.003:
            context.ls[stock] += 1
        else:
            context.ls[stock] += 0
            price_rel = 1.0
        x_tilde[i] = price_rel

    ###########################
    # Inside of OLMAR (algo 2)

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod = np.dot(b_t, x_tilde)
    num = context.eps - dot_prod
    denom = (np.linalg.norm(x_tilde - x_bar))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0  # no portfolio update
    else:
        lam = max(0, num/denom)

    b = b_t + lam*(x_tilde - x_bar)

    b_norm = simplex_projection(b)

    weight = np.dot(b_norm, x_tilde)

    return (weight, weight*b_norm)

def rebalance_portfolio(data, context, desired_port):

# check for open orders
for stock in context.stocks:
if get_open_orders(stock):
return

pct_ls = 0

for i, stock in enumerate(context.stocks):
pct_ls += context.ls[stock]*desired_port[i]
order_target_percent(stock, context.leverage*context.ls[stock]*desired_port[i])

order_target_percent(sid(19920), -context.leverage*pct_ls)

record(pct_ls = pct_ls)

for stock in data:
if stock not in context.stocks + [sid(19920)]:
order_target_percent(stock,0)

def simplex_projection(v, b=1):
"""Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT pmail.ntu.edu.sg
Optimization Problem: min_{w}\| w - v \|_{2}^{2}
s.t. sum_{i=1}^{m}=z, w_{i}\geq 0

Input: A vector v \in R^{m}, and a scalar z > 0 (default=1)
Output: Projection vector w

:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
"""

v = np.asarray(v)
p = len(v)

# Sort v into u in descending order
v = (v > 0) * v
u = np.sort(v)[::-1]
sv = np.cumsum(u)

rho = np.where(u > (sv - b) / np.arange(1, p+1))[-1]
theta = np.max([0, (sv[rho] - b) / (rho+1)])
w = (v - theta)
w[w<0] = 0
return w


Hi Grant,

I suggest that you run a 10-12 year backtest on any strategy to see how it looks over a broader range of market cycles.

I've done that in the attached backtest and the results are not compelling, most notably the strategy suffers a 67% maximum drawdown and delivers a cumulative return of -34% over the ten years between 2005 and today. These results do not meet our criteria for a capital allocation.

In addition to widening your analysis to take advantage of the full scope of historical data available, I'd also encourage you to re-examine the economic rationale behind your universe selection. My sense is that you're focused on applying a particular technique that's designed to optimize the portfolio weights given to a set of stocks that some other process has determined are a 'good investment'. You seem to be using the largest 50 stocks by market cap listed on the NASDAQ, if I'm reading your logic correctly.

Putting aside what weighting scheme you choose, that portfolio selection amounts to a single long-term bet that the largest 50 stocks in the NASDAQ will outperform the broad NASDAQ index over time. You could actually isolate those two aspects of your strategy by testing how an equal-weight portfolio using the same universe selection performs over the same time period. If that equal-weight version doesn't make money, then I think you'd be hard pressed to find a weighting scheme that's going to look very good.

On the other hand, if you find a method that systematically selects a 'good' basket of stocks that do tend to outperform, then applying a smart portfolio optimization technique on top can be a great boost to performance.

Hope those are helpful suggestions. -Jess
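The equal-weight baseline Jess describes boils down to a one-line computation on a price matrix: the daily return of an equal-weight, daily-rebalanced basket is just the cross-sectional mean of the daily returns. A minimal sketch with made-up prices (not Quantopian API code):

```python
import numpy as np

def equal_weight_returns(prices):
    """Daily returns of an equal-weight, daily-rebalanced portfolio.

    prices: (n_days, n_stocks) array. Returns an (n_days-1,) array of
    portfolio returns -- the cross-sectional mean of per-stock returns.
    """
    rets = prices[1:] / prices[:-1] - 1.0
    return rets.mean(axis=1)

# two stocks: one gains 10% then flat, the other flat then +10%
prices = np.array([[10., 50.],
                   [11., 50.],
                   [11., 55.]])
port = equal_weight_returns(prices)  # -> [0.05, 0.05]
```

If this series, computed over the same ten years and the same top-50 universe, has a flat or negative cumulative product, no weighting scheme on top is likely to rescue it.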

(Attached backtest omitted: identical algorithm code to the listing above, run over the longer period discussed.)

Thanks Jess,

Part of the problem for the algo above, I suspect, is that it may be getting killed by commissions. Is there any way to track the commissions? Why doesn't the backtester just spit that number out?
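The backtester doesn't report commissions directly, but a back-of-the-envelope estimate of the drag is easy to compute from turnover. A rough sketch (the per-share rate and turnover figures below are made-up assumptions for illustration, not measured values from the algo):

```python
def commission_drag(shares_traded_per_day, per_share, portfolio_value, days=252):
    """Rough annual commission drag as a fraction of portfolio value,
    assuming a flat per-share commission and steady daily share turnover."""
    return shares_traded_per_day * per_share * days / portfolio_value

# e.g. trading 10,000 shares/day at $0.005/share on a $1M book
drag = commission_drag(10000, 0.005, 1e6)  # 0.0126, i.e. about 1.3% per year
```

For a strategy that rebalances the whole book daily in high-priced large caps, this number grows quickly, which is why comparing a run with zero costs against the default cost model is the quickest diagnosis.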

The idea behind all of this fumbling around is that stocks should mean-revert both as their prices rise, and as their prices fall. So, more long weight should be put on stocks that are well below their means, and more short weight should be put on stocks that are well above their means. And if long and short weights don't sum to zero, then I can make up the difference with an ETF. Seems logical, no? But maybe it doesn't work that way at all, or at least not for the securities I'm working with. Or it does, and I just have written the code wrong.
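The long-more-below-mean / short-more-above-mean idea, with an ETF soaking up the residual net exposure, can be sketched in plain numpy. This is a toy illustration with made-up prices and a made-up dead-band threshold, not the algo above:

```python
import numpy as np

def mean_reversion_weights(prices, band=0.003):
    """Toy long/short mean-reversion weights.

    prices: (n_bars, n_stocks) array. Stocks trading below their trailing
    mean get long weight, above it short weight, in proportion to the size
    of the deviation; deviations inside the dead band get no position.
    Returns (weights, hedge) where hedge offsets the net exposure.
    """
    mean = prices.mean(axis=0)
    last = prices[-1]
    dev = (mean - last) / mean           # > 0: below mean -> long candidate
    dev[np.abs(dev) < band] = 0.0        # dead band: no position
    gross = np.sum(np.abs(dev))
    if gross == 0:
        return np.zeros_like(dev), 0.0
    w = dev / gross                      # gross stock exposure = 1
    hedge = -np.sum(w)                   # index-ETF weight to zero net exposure
    return w, hedge

# stock 0 rose above its mean (short), stock 1 fell below (long), stock 2 flat
prices = np.array([[10.0, 20.0, 30.0],
                   [10.5, 19.0, 30.0],
                   [11.0, 18.5, 30.0]])
w, hedge = mean_reversion_weights(prices)
```

By construction the stock weights plus the hedge sum to zero, which is the role QQQ plays in the algo above.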

I guess the question is: over a 10-year time period, for the 50 largest-cap Nasdaq stocks (at any point in time), to what extent and in what fashion do they mean-revert, both long and short, on a day-by-day, minute-by-minute basis? Any ideas how to code something like that up in your research platform, in such a way that it would allow me to write a good algo (or drop the idea, if the answer is that it won't be a money-maker)?
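One concrete, vectorizable way to pose this question in the research platform is to estimate, per stock and per trailing window, the half-life of mean reversion from a simple AR(1) fit. A sketch under stated assumptions (the OU-style regression is one of several reasonable tests, and the toy series below is synthetic):

```python
import numpy as np

def mean_reversion_half_life(prices):
    """Half-life of mean reversion from an AR(1) fit:
    dp_t = a + b * p_{t-1} + noise, half-life = -ln(2)/b.

    A small positive half-life means fast mean reversion; b >= 0 means
    no evidence of mean reversion (returns inf).
    """
    p = np.asarray(prices, dtype=float)
    dp = np.diff(p)
    lag = p[:-1]
    b, a = np.polyfit(lag, dp, 1)   # slope first, then intercept
    if b >= 0:
        return np.inf
    return -np.log(2.0) / b

# a strongly mean-reverting toy series oscillating around 100
rng = np.random.RandomState(0)
p = [100.0]
for _ in range(2000):
    p.append(p[-1] + 0.5 * (100.0 - p[-1]) + rng.randn() * 0.1)
hl = mean_reversion_half_life(p)  # roughly 1.4 bars for this toy series
```

Applied to each of the 50 stocks over rolling windows, this yields exactly the kind of per-window "does it revert, and how fast" number a heat map could display.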

More generally, there must be tried-and-true methods of finding lists of stocks to hold long and short, and of weighting them. Any ideas?

Grant

I think this is the essence of the problem in writing consistently profitable trading algorithms. If you find such a weighting, the problem is solved.

Grant, thanks for the context on the economic rationale, it's very helpful and I'd missed that this version is long/short which is definitely more interesting to my mind.

One simple way to check if t-costs are destroying your performance is just to re-run the same backtest with zero t-costs and slippage. I've done that in the attached backtest and while it looks better, even with this zero cost assumption the results are not compelling.

I'm inclined to agree with Simon that your last question is really just the problem statement for systematic investing. Turns out it really isn't that easy to figure out what stocks are going to go up and down and by how much in the future!

I'd also strongly endorse the suggestion Simon made to make use of the Research platform to help you identify when you've found a predictive relationship in your data and only then take the time to run backtest simulations to confirm that the 'edge' you've found can be exploited in a trading strategy. The cycle of tweaking parameters and re-running long backtests is not only time-consuming but very prone to overfitting in my experience.

(Attached backtest omitted: identical algorithm code to the listing above, re-run with zero transaction costs and slippage.)

Well, I have to wonder if the whole long-short idea makes any sense at all. The folks at Vanguard have been at it since 1998, and I see that they've managed to make a whopping 3% per year (with a big hit to returns from their 1.6% expense ratio):

https://personal.vanguard.com/us/funds/snapshot?FundId=0634&FundIntExt=INT#tab=0

Of course, they have $400M in assets, so maybe the scale is killing them. Regarding use of the research platform, it is probably a great suggestion, but could you/Simon be more specific? Are there standard recipes for exploring trade data for the purpose of setting up a long-short trading strategy? I have to imagine that the investment world has been working this problem since the beginning of time, so some known-good approaches must have emerged by now. Or maybe nothing works, and it is all just monkeys on typewriters out there. Are you planning to unveil some tools with: https://www.quantopian.com/posts/how-to-build-a-stat-arb-strategy-on-quantopian Will it allow efficient filtering/searching for baskets of stocks that would be good candidates for portfolios that would have the right characteristics, consistently over a 10-year time frame? And with enough capacity to handle$5M-$10M in capital? Will it have parallel processing capability, so it doesn't take eons? Hey Grant, Here's a long only strategy that I have running at IB. Its not hedged, but almost behaves like a hedged strategy. 463 Loading... 
import numpy as np
import pandas as pd
import datetime
import pytz

def initialize(context):

    set_symbol_lookup_date('2015-01-01')

    context.safe = [
        sid(23921),  # TLT 20+ Year T Bonds
        sid(23870)]  # IEF 7-10 Year T Notes

    context.secs = [
        sid(19662),  # XLY Consumer Discretionary SPDR Fund
        sid(19656),  # XLF Financial SPDR Fund
        sid(19658),  # XLK Technology SPDR Fund
        sid(19655),  # XLE Energy SPDR Fund
        sid(19661),  # XLV Health Care SPDR Fund
        sid(19657),  # XLI Industrial SPDR Fund
        sid(19659),  # XLP Consumer Staples SPDR Fund
        sid(19654),  # XLB Materials SPDR Fund
        sid(19660)]  # XLU Utilities SPDR Fund

    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))

    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_open())

def handle_data(context, data):
    lev = context.account.leverage
    record(lev = lev)

def trade(context, data):

    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

    ############# info on SPY ###############
    spy_mean = data[sid(8554)].mavg(120)
    spy_sigma = data[sid(8554)].stddev(120)
    spy_price = data[sid(8554)].price
    spy_z = (spy_price - spy_mean) / spy_sigma

    ## Risk on/off logic:
    for bond in context.safe:
        if spy_z < -1.0:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .25)
            log.info("Risk OFF: allocate %s" % (bond.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .00)
            log.info("Risk ON: zero weight %s" % (bond.symbol) + " at %s" % str(exchange_time))

    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(120)
        sigma = data[stock].stddev(120)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma

        ## sector trade logic
        if sect_z < spy_z and (1.0 > sect_z > -1.0):
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .13)
            log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .00)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))

    record(spy_z = spy_z)

We have migrated this algorithm to work with a new version of the Quantopian API. The code is different than the original version, but the investment rationale of the algorithm has not changed.

Nice and simple, I like it! Does shorting ruin it?

Thanks, Simon. I haven't worked out a good short logic for this one. I've thought about holding a static short in SPY, but nothing is speaking to me. Feel free to hack it up!

Thanks Jamie,

Cool. Too bad Jess can't get you a gazillion dollars in her hedge fund, since it is long-only. Maybe you could trick her by shorting a bond inverse ETF (e.g. http://etfdb.com/type/bond/all/inverse/)? Or maybe it would be o.k. as-is, and the long-short business only applies to the contest?

Grant

Seems to hold up under shorting, so long as you short the bond inverse ETFs. And it sorta meets the criteria for the hedge fund: long-short, Sharpe > 0.7 and 7% or more return per year, kinda market neutral... did I miss anything? Jess, how can we get some money? I'll split the take with Jamie. : )
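The core of Jamie's rule is a trailing z-score of price against a 120-bar moving average. Stripped of the platform API, it is just the following (a standalone sketch with made-up prices):

```python
import numpy as np

def zscore(prices, window=120):
    """Trailing z-score of the latest price against its moving average:
    z = (price - mavg(window)) / stddev(window)."""
    tail = np.asarray(prices, dtype=float)[-window:]
    return (tail[-1] - tail.mean()) / tail.std()

# a flat series with a sharp drop at the end trips the "risk off" branch (z < -1.0)
risk_off = zscore([10.0] * 119 + [8.0]) < -1.0
```

The same function computed per sector ETF and compared against the SPY z-score reproduces the sector allocation test in the code above.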
(Attached backtest: Jamie's algorithm with the safe assets swapped for inverse Treasury ETFs, held short instead of long:

    context.safe = [
        sid(38688),  # TBF Short 20+ Year Treasury
        sid(41199)]  # Short 7-10 Year Treasury

and, in the risk-off branch, order_target_percent(bond, -.25). The rest of the code is unchanged.)

All we have to do now is add leverage!

@Grant,

Thought I'd just tack on a few more thoughts here:

1) Possibly the QQQ vs. 50 large-tech-stock stat arb you are attempting is "too efficient" already. Tech stocks and the QQQ are pretty popular instruments for large institutional trading desks which focus on arbitrage, and since many institutions have virtually zero commissions and extremely low-latency order execution, I imagine the arb could be taken advantage of within seconds or a minute. It's just a thought.

2) So, given #1 above, perhaps something to try is to use a smaller basket of stocks to check for a divergence vis-a-vis the QQQ. Maybe filter for the top 200 large-cap Nasdaq stocks, then just choose the top 10 of those that have diverged in price versus the QQQ, and bet on that spread converging. Ernie Chan (QuantCon speaker, stat arb hedge fund manager, physicist, etc.) has an excellent example of this on his blog (he's using the Energy Sector ETF, the XLE, but the idea is the same): http://epchan.blogspot.com/2007/02/in-looking-for-pairs-of-financial.html

3) Ernie's blog is an endless resource for stat arb strategies, and I find myself spending hours just clicking from link to link, since he's been blogging now for almost 10 years, it seems.
Besides the actual blog posts, he's quite active in the comments as he "debates" his methods with what I've come to realize are typically quite sophisticated readers. Here are a couple of other posts in this same stat arb paradigm, for quick reference (there's a link to a great external paper in this one): http://epchan.blogspot.com/2007/02/mr_05.html And out-of-sample testing for co-integration-based stat arb: http://epchan.blogspot.com/2007/04/anonymous-reader-l-posted-some.html

4) Another idea you might try is what some folks refer to as "cross-sectional mean reversion stat arb": take the top N stocks by market cap (let's say 100), find the top 10 that have diverged from the QQQ in the positive direction (i.e. outperformed their benchmark) and the bottom 10 that have diverged in the negative direction (i.e. underperformed), then go short the 10 outperforming stocks and long the 10 underperforming stocks, rebalancing maybe every one or two days. The reversion to the benchmark performance should be pretty quick, I'd imagine, but you might try different holding periods.

5) In this other forum post I have some of the co-integration tests that Ernie references in his blog coded up in Python (ADF test, half-life of mean reversion based on an OU process, Hurst exponent), in case you find the code snippets useful: https://www.quantopian.com/posts/pair-trade-with-cointegration-and-mean-reversion-tests

Good luck - hopefully this gives you an additional perspective to apply to your algo research!

-Justin

Thanks Justin,

I haven't ignored Simon's advice above to use the (totally awesome) Quantopian research platform. I'm somewhat familiar with it, but it is a bit daunting to get started on a new project.
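Justin's point 4 (cross-sectional mean reversion) can be sketched in a few lines of numpy. The cutoff k and the dollar-neutral 1/(2k) weights are toy assumptions for illustration:

```python
import numpy as np

def cross_sectional_mr_weights(stock_rets, bench_ret, k=2):
    """Toy cross-sectional mean reversion: short the k names that beat the
    benchmark most over the lookback, long the k that lagged it most.
    Returns dollar-neutral weights summing to zero."""
    excess = np.asarray(stock_rets, dtype=float) - bench_ret
    order = np.argsort(excess)          # ascending: worst performers first
    w = np.zeros(len(excess))
    w[order[:k]] = 1.0 / (2 * k)        # long the laggards
    w[order[-k:]] = -1.0 / (2 * k)      # short the leaders
    return w

# five stocks' trailing returns vs. a 1% benchmark return
w = cross_sectional_mr_weights([0.05, -0.02, 0.01, 0.08, -0.04], 0.01, k=2)
```

The bet is that relative performance versus the benchmark reverts, so the portfolio is rebalanced each holding period as the rankings change.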
Sticking with large-cap NASDAQ stocks (a somewhat arbitrary choice to begin with), if we were to use QQQ as a reference, I guess the first question would be: how good of a reference is it? Is there a way to test, on a stock-by-stock basis, point-in-time, the goodness of QQQ as a reference? And is there an efficient way to do it over 50 stocks, every trading minute, for 10 years?

In the end, I figure a high-level heat map would be nice, with nrows = 252 trading days/year x 10 years = 2520 days, and 390 minutely columns (a matrix with 982,800 elements). The heat map could be coded such that for any given minute, there would be a measure of the goodness of QQQ as a reference for all 50 stocks (this could simply be summary statistics of whatever test is applied to each stock). For example, green would indicate QQQ is a good reference across all stocks, red would mean it won't work at all as a reference, and yellow would indicate a mixed result.

The problem I see is that to perform such an exercise, the goodness test will need to be computationally trivial. Each point on the heat map will require applying the goodness test 50 times, and since there are about 1M points on the heat map, we have 50M tests. So it seems the test would need to be vectorized/parallelized in some fashion, or we would need to throw out information (e.g. only compute on a daily basis, reducing the number of tests to 50M/390 ≈ 130K).

And then there is the amount of data we use to apply the test. Should the trailing window be minutes? Days? Weeks? Months? Should the data be smoothed? Use only the "C" in OHLCV, or somehow incorporate all trade info? Weight by volume in some fashion? Any thoughts?

Grant

Justin,

By the way, what's wrong with the algo I posted above, based on Jamie's code (Backtest ID: 5604767d6915150e17550492)? It would seem to meet the minimum requirements for the Q fund, no? Why can't we get a little money?
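For the heat-map idea, the key is to vectorize the test per column rather than looping per bar. Taking rolling correlation of each stock's returns with the benchmark's as a stand-in "goodness" measure (an assumption for illustration; co-integration tests would be stricter), a sketch with synthetic data:

```python
import numpy as np
import pandas as pd

def benchmark_goodness(stock_prices, bench_prices, window=60):
    """Rolling correlation of each stock's returns with the benchmark's
    returns: one vectorized pass per column, no per-bar Python loop.

    stock_prices: DataFrame (bars x stocks); bench_prices: Series.
    Returns (per-stock rolling corr, per-bar mean across the basket).
    """
    r = stock_prices.pct_change()
    rb = bench_prices.pct_change()
    corr = r.rolling(window).corr(rb)
    # per-bar summary across the basket: one heat-map cell per bar
    return corr, corr.mean(axis=1)

rng = np.random.RandomState(1)
bench = pd.Series(100.0 * np.cumprod(1 + 0.01 * rng.randn(100)))
stocks = pd.DataFrame({
    'A': 2.0 * bench,                                    # tracks the benchmark exactly
    'B': 100.0 * np.cumprod(1 + 0.01 * rng.randn(100)),  # unrelated series
})
corr, summary = benchmark_goodness(stocks, bench, window=10)
```

Because the rolling statistic is computed column-wise in compiled code, 50 stocks over minutely data is a handful of array passes rather than 50M scalar tests.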
You gotta start somewhere, so why not give us $10K to see how it plays out? In a year, I could buy a six-pack of cheap beer or maybe even go out for a nice buffet dinner?

What's the risk? Or would it just not be worth your time?

Grant

Hey Grant,

I think we're on our own with this long-only strategy. We prob need $10M to live on the 2/20 fee structure. I heard recently that Renaissance was charging 5/44 at one point... something to strive for!

Jamie,

I actually wasn't kidding. Why would Q not put some money into such a strategy, if it meets their minimum criteria? And it's not long-only if one shorts the inverse ETFs, as I showed. So, why not? For me, if I make $100 next year without doing anything, then I can buy a few tanks of gas or whatever. Maybe it just wouldn't fit in their conceived Q fund?

Grant

Grant,

I think Q might be interested in an algo like this... but only after its out-of-sample performance is analyzed.

Until then, I guess we can solicit the investing community? $1,000 minimum with 100 members and we have a fund. Prob need a good lawyer.

Jamie,

Well, it's Q's job to advise what they need in terms of in-sample and out-of-sample data, and then to go get money for us. And take care of the lawyer part, too. When I get the chance, maybe I'll play around some more with your algo (Backtest ID: 55b6a4d8e418a20c6c039099). But it would be good to get some guidance up front from the Q folks to see if it would be at all worthwhile. I'm kinda murky as to what the process is to get an algo evaluated, with feedback. There doesn't seem to be any formal process, other than submitting to the contest. And then, I guess you get an e-mail/phone call if your algo looks promising?

Grant

Grant - do you think this strategy would work in a bear market? It doesn't appear the data on some of the ETFs goes back prior to 2010.

Hi Dan,

The algo I posted above (Backtest ID: 5604767d6915150e17550492) is just the one Jamie posted, except with shorting of inverse bond ETFs instead of the bond ETFs he used. It's basically a gimmick so that the long-only algo appears to be long-short. I suggest just running Jamie's original code, posted above (Backtest ID: 55b6a4d8e418a20c6c039099).

Grant

Thanks, Grant!

Jamie... what kind of leverage are you employing on IB with this algo?

Hi Daniel,

I'm using zero leverage at the moment -- just experimenting with this strategy for now. Looking back, the bond allocation scares me a little. The algo is now in "risk off" and half of the account is in treasuries. Given the current interest rate environment, TLT has added to the volatility. Feel free to make improvements!

Jamie

Very interesting... it would be cool to see if one could add VXX into the database to help the hedging prospects. I will take a look.

If TLT scares you, use a short-dated bond fund or cash.
I'm using cash as my reserve asset at the moment, but I used to use 1-to-3-month bills when they actually had a meaningful yield.

Forgot -- I quickly tried out a long/short version of Jamie's algo; the performance is not ruined by adding shorts.

import numpy as np
import pandas as pd
import datetime
import pytz

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [
        sid(23921),  # TLT 20+ Year T Bonds
        sid(23870)]  # IEF 7-10 Year T Notes
    context.secs = [
        sid(19662),  # XLY Consumer Discretionary SPDR Fund
        sid(19656),  # XLF Financial SPDR Fund
        sid(19658),  # XLK Technology SPDR Fund
        sid(19655),  # XLE Energy SPDR Fund
        sid(19661),  # XLV Health Care SPDR Fund
        sid(19657),  # XLI Industrial SPDR Fund
        sid(19659),  # XLP Consumer Staples SPDR Fund
        sid(19654),  # XLB Materials SPDR Fund
        sid(19660)]  # XLU Utilities SPDR Fund
    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_open())

def handle_data(context, data):
    lev = context.account.leverage
    record(lev=lev)

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    ############# info on SPY ###############
    spy_mean = data[sid(8554)].mavg(120)
    spy_sigma = data[sid(8554)].stddev(120)
    spy_price = data[sid(8554)].price
    spy_z = (spy_price - spy_mean) / spy_sigma
    ## Risk on/off logic:
    for bond in context.safe:
        if spy_z < -1.0:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .25)
            log.info("Risk OFF: allocate %s" % (bond.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .00)
            log.info("Risk ON: zero weight %s" % (bond.symbol) + " at %s" % str(exchange_time))
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(120)
        sigma = data[stock].stddev(120)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        ## sector trade logic
        if sect_z < spy_z and (1.0 > sect_z > -1.0):
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .13)
            log.info("Allocate long %s" % (stock.symbol) + " at %s" % str(exchange_time))
        elif sect_z > spy_z and (1.0 > sect_z > -1.0):
            if get_open_orders(stock):
                continue
            order_target_percent(stock, -.13)
            log.info("Allocate short %s" % (stock.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .00)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))
    record(spy_z=spy_z)

We have migrated this algorithm to work with a new version of the Quantopian API. The code is different than the original version, but the investment rationale of the algorithm has not changed.

Here's a version which maintains a constant leverage of 1.0. It probably allocates to bonds slightly differently than the original.
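For anyone following the sector trade logic above outside Quantopian, it reduces to a small pure function. This is a sketch: `sector_weight` is my naming, and the ±13% default targets mirror the `order_target_percent` calls in the post.

```python
def sector_weight(sect_z, spy_z, long_w=0.13, short_w=-0.13):
    """Mirror of the sector trade logic: go long if the sector's z-score
    is below SPY's (a mean-reversion bet), short if above, and stay flat
    whenever the sector z-score is outside the (-1, 1) band."""
    if not (-1.0 < sect_z < 1.0):
        return 0.0
    if sect_z < spy_z:
        return long_w
    if sect_z > spy_z:
        return short_w
    return 0.0
```

One detail worth noticing: a sector that has fallen much further than SPY (z-score below -1) gets zero weight, not a long, so the band acts as a crude stop on extreme movers.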
import numpy as np
import pandas as pd
import datetime
import pytz

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [
        sid(23921),  # TLT 20+ Year T Bonds
        sid(23870)]  # IEF 7-10 Year T Notes
    context.secs = [
        sid(19662),  # XLY Consumer Discretionary SPDR Fund
        sid(19656),  # XLF Financial SPDR Fund
        sid(19658),  # XLK Technology SPDR Fund
        sid(19655),  # XLE Energy SPDR Fund
        sid(19661),  # XLV Health Care SPDR Fund
        sid(19657),  # XLI Industrial SPDR Fund
        sid(19659),  # XLP Consumer Staples SPDR Fund
        sid(19654),  # XLB Materials SPDR Fund
        sid(19660)]  # XLU Utilities SPDR Fund
    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_open())

def handle_data(context, data):
    lev = context.account.leverage
    record(lev=lev)

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    ############# info on SPY ###############
    spy_mean = data[sid(8554)].mavg(120)
    spy_sigma = data[sid(8554)].stddev(120)
    spy_price = data[sid(8554)].price
    spy_z = (spy_price - spy_mean) / spy_sigma
    target_leverage = 1.0  # target leverage
    pending_orders = False
    new_orders = {}
    ## Risk on/off logic:
    for bond in context.safe:
        if get_open_orders(bond):
            pending_orders = True
            continue
        if spy_z < -1.0:
            new_orders[bond] = 2.0
        else:
            # note: both branches assign 2.0, so the bonds always receive
            # raw weight before scaling -- hence the different bond allocation
            new_orders[bond] = 2.0
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(120)
        sigma = data[stock].stddev(120)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        if get_open_orders(stock):
            pending_orders = True
            continue
        ## sector trade logic
        if sect_z < spy_z and (1.0 > sect_z > -1.0):
            new_orders[stock] = 1.0
        elif sect_z > spy_z and (1.0 > sect_z > -1.0):
            new_orders[stock] = -1.0
        else:
            new_orders[stock] = 0.0
    if not pending_orders:
        total_desired_leverage = max(1.0, sum([abs(new_orders[x]) for x in new_orders]))
        scale_factor = target_leverage / total_desired_leverage
        for x in new_orders:
            order_target_percent(x, scale_factor * new_orders[x])
            if new_orders[x] > 0:
                log.info("Allocate long %s" % (x.symbol) + " at %s" % str(exchange_time))
            elif new_orders[x] < 0:
                log.info("Allocate short %s" % (x.symbol) + " at %s" % str(exchange_time))
            else:
                log.info("Zero weight %s" % (x.symbol) + " at %s" % str(exchange_time))
        record(spy_z=spy_z)
        record(target_net_lev=scale_factor * sum([new_orders[x] for x in new_orders]))

Nice work, Simon! Maybe I can incorporate some of this logic into a ranking scheme -- useful for a long/short strategy.

Thanks, Jamie, for sharing your code. This is an amazing algorithm. I too plan to live trade with this. The drawdown is so minimal that it makes me feel safe investing my own money -- the first algo I've been comfortable investing my own money with. Thanks again, Sa

Hi Sa,

I'm glad you like the algorithm. Take a look at the bond allocation and make sure you're comfortable with it. Right now the algo is in treasuries, and TLT is very volatile given the impending Fed interest rate decision. It might be best to stick to shorter-term bonds.
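Simon's constant-leverage trick above is just a normalization of raw target weights by gross exposure. A standalone sketch (`scale_to_leverage` is a hypothetical name; the `max(1.0, ...)` floor matches his code, so positions are never scaled up when gross exposure is below 1):

```python
def scale_to_leverage(raw_weights, target=1.0):
    """Scale raw target weights so that gross leverage (the sum of
    absolute weights) equals `target`, flooring the divisor at 1.0 so
    small books are left unscaled rather than levered up."""
    gross = max(1.0, sum(abs(w) for w in raw_weights.values()))
    factor = target / gross
    return {asset: factor * w for asset, w in raw_weights.items()}
```

With raw weights of +1, -1, and +2 (gross 4), a target of 1.0 scales everything by 0.25, so the net position shrinks but the long/short ratio is preserved.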
VCSH might be a good position to add to the bond allocation. Good luck!

What about a healthy mixture of different-term bond ETFs, as well as GLD, in the safe asset class? It seems like there is no reason the safe assets shouldn't be diversified as well.

This algo is quite sensitive to the exact rebalancing day of the month, so there might be some research to do on whether the effect this is exploiting is the reversion from the end-of-month run-up.

Hi, I was wondering if someone more knowledgeable on this forum could give me a brief summary of the logic in Jamie's algorithm. I have tried to dissect it, but I am still not sure how it weights the portfolio. Many thanks all, Andrew

Translating Python into Thinkscript

The purpose of Jamie's original algorithm is to tactically allocate between equities and Treasuries based on Risk On - Risk Off (RoRo) sentiment. The strategy is simple and logical: when downside volatility exceeds one standard deviation, the algo allocates to Treasuries ("Risk Off"), and when downside volatility poses no threat, the algo invests in equities ("Risk On"). Seeing this as a volatility-based strategy, I made the following changes to Jamie's original code:

• reduced the equity universe to SSO (2x the return of SPX) and XIV (inverse the VIX)
• went to safety with IEF alone (outperforms TLT in downside markets)
• keyed off of IWM instead of SPY (small caps generally signal earlier than SPX)
• allowed the algo to go to cash
• reduced the moving average length from 120 to 50 (more common among volatility strategies)
• traded Market On Close (MOC) as opposed to the open (independently improves returns in multiple studies)

On the surface, results looked encouraging. (XIV only allows for a backtest of 5 years.) So I translated the Python code into Thinkscript (TDAmeritrade). This allowed me to see what the algo was doing as the price chart played out.
The strategy's eligible buys/sells can be seen plotted (bubbles) on their monthly schedule. Take a look at what the algo's trading signals look like on a price chart here.

Conclusion: Translating Python script into Thinkscript enhances one's understanding of the algo.

# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com

import numpy as np
import pandas as pd
import datetime
import pytz

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [
        sid(23870)]  # IEF
    context.secs = [
        sid(32270),  # SSO
        sid(40516)]  # XIV
    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_close())

def handle_data(context, data):
    lev = context.account.leverage
    record(lev=lev)

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma
    ## Risk on/off logic:
    for bond in context.safe:
        if iwm_z < -1:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .50)
            log.info("Risk OFF: allocate %s" % (bond.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .00)
            log.info("Risk ON: zero weight %s" % (bond.symbol) + " at %s" % str(exchange_time))
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        ## sector trade logic
        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .5)
            log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))
    record(iwm_z=iwm_z)

Nice work, Stephen! I actually started out using Thinkscript before making my way to Quantopian. Take a look at the attached notebook; I use these tearsheets a lot when assessing my algos. Simon made a good observation about my original algo. The trade frequency might be another factor to play with, to see how/if it affects performance. Keep it up! -Jamie

Thanks, Jamie. Your notebook is awesome. I find it uncanny how accurately your algo times the switch between equities and bonds. Translating your Python script into Thinkscript created a (RoRo) indicator that I'll keep an eye on going forward! Check out your algo's ability to call allocations between stocks and bonds for the last 20 years here. For anyone who wants the Thinkscript study, it's here.

Regarding frequency, I found that rebalancing more often than monthly simply diminished returns and increased transaction costs. If the algo switched regimes without waiting for the end of the month, I suspect it would do better.

Another area where I see room for improvement is in the algo's use of working capital. This issue has been brought up several times on Quantopian by "Garyha."
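The `mavg(50)` / `stddev(50)` calls in the code above belong to the since-retired Quantopian data API; the same z-score can be reproduced with plain pandas. A sketch (whether the platform used sample or population standard deviation is an assumption on my part; I use pandas' default sample std here):

```python
import pandas as pd

def rolling_z(prices, window=50):
    """Z-score of the latest price against its rolling mean/std,
    i.e. (price - mavg(n)) / stddev(n) in the old Quantopian API."""
    mean = prices.rolling(window).mean().iloc[-1]
    sigma = prices.rolling(window).std().iloc[-1]  # sample std (ddof=1)
    return (prices.iloc[-1] - mean) / sigma
```

For example, with prices [1, 2, 3, 4] and a 3-bar window, the rolling mean is 3, the sample std of (2, 3, 4) is 1, and the z-score of the last bar is 1.0.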
Gary has properly emphasized the importance of calculating returns based on the actual amount spent rather than the static starting-capital figure that the backtester uses. And in this thread, above, Gary alludes to another important point: with greater leverage comes greater margin requirements, that is, money held back as collateral and not available for purchasing shares. Lastly, order_target_percent has its own unique issues, also previously discussed in this forum. The work-around, however, is complex.

Oh, and when the algo goes to cash for a month, you can sell some iron condors or scalp the S&P futures!

Jamie, I followed your lead into the Research environment to compare your original algo to my modified 3-asset version. To make the comparison "apples-to-apples", I used identical time frames, limited by the life of XIV (5 years). I'll refer to your original model as the "Diversified Equity Sectors" (DES) model. DES switches between Treasuries and core equity ETFs. 3AC switches between Treasuries and the S&P 500 (2x) or a short-volatility ETF. Here are some highlights comparing the two versions, using the five main tear sheets:

Cumulative Returns
• DES earns 44.9%, has a Sharpe of 1.18, and does this with a comfortable 6.9% volatility.
• 3AC earns 139.8%, has a Sharpe ratio of 1.62, but does so with an unnerving 16% volatility. Adjusted for risk, however, this may be tolerable.

Shock Events
Both algos tolerate shocks very well. By comparison, 3AC demonstrates a markedly convex response to the Fukushima crisis.

Holdings (Both algos are long only.)
• DES does what you asked it to do: stay diversified across equity ETFs (13%) unless downside volatility is high, and then move to Treasuries (25%).
• Keying off of the Russell, 3AC more promptly switches between its 3 assets, spending the greatest time shorting volatility via XIV.

Transactions
No noteworthy differences.

Out-of-sample Bayesian Cones
The predictive likelihood of positive returns in the future is favorable for both algos.

Conclusion
Both models produce high returns, endure systemic shocks well, and have optimistic predictive distributions.

Note: Kudos, first, to Jamie Lunn for what I think is a stunningly simple and elegant concept. And kudos, second, to the pyfolio group. Folks, that's a really great tool.

Hi Stephen, Does the above notebook have the comparison of the two algorithms? Many thanks, Andrew

Stephen, I adjusted your 3-asset version to use a different day of the month and found that changing it affected performance dramatically. All I did to your original algorithm was change the trade scheduling so that trading occurred on the 5th day of the month rather than the first. This cut returns down to less than half the benchmark and gave a drawdown of nearly 50%. Is this to be expected?

# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com

import numpy as np
import pandas as pd
import statsmodels.api as sm
import datetime
import pytz

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [
        sid(23870)]  # IEF
    context.secs = [
        sid(32270),  # SSO
        sid(40516)]  # XIV
    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
    # execute on the fifth trading day of the month
    schedule_function(trade, date_rules.month_start(days_offset=5), time_rules.market_close())
    # schedule_function(trade, date_rules.every_day()

def handle_data(context, data):
    lev = context.account.leverage
    record(lev=lev)

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma
    ## Risk on/off logic:
    for bond in context.safe:
        if iwm_z < -1:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .50)
            log.info("Risk OFF: allocate %s" % (bond.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .00)
            log.info("Risk ON: zero weight %s" % (bond.symbol) + " at %s" % str(exchange_time))
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        ## sector trade logic
        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .5)
            log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))
    record(iwm_z=iwm_z)

I mentioned this last month.

Yes, this sensitivity is fully expected. Several articles in the financial literature find that abnormally high returns can be achieved by buying on the last day of the month. This turn-of-the-month pattern has its origin in the monthly economic payment cycle. Billions of dollars' worth of investments are liquidated just prior to the month end.
A disproportionate share of monthly payments in the US -- for instance, those by pension funds (pensions), corporate treasuries (dividends), and mutual funds (distributions) -- take place precisely at the turn of the month. So strongly ingrained is this pattern that, since July 1926, one could have held the S&P index for only seven days a month and pocketed the entire market excess return with nearly 50% lower volatility compared to a buy-and-hold strategy. When we developers write an end-of-the-month schedule into our algos whose constituents are stock indices, we are taking advantage of systematic institutional selling, which distributes cash payments that are subsequently, and at least partially, re-invested in the market by the recipients.

Lucas, I appreciate the question. Simon, my apologies for having missed your earlier comment.

References:
• McConnell, J. J., and W. Xu. "Equity Returns at the Turn of the Month." Financial Analysts Journal 64 (2008): 49-64.
• Etula, Erkko, Kalle Rinne, Matti Suominen, and Lauri Vaittinen. "Dash for Cash: Month-End Liquidity Needs and the Predictability of Stock Returns" (July 25, 2015).

ADDENDUM

While we're looking at this algorithm, I'd like to make one additional and related comment. I think we too frequently use Treasuries as our safe haven. During the entire period this algorithm ran, the asset classes expected to outperform, based on economic regimes, were equities and real estate (not Treasuries). Economic regimes are defined by economic growth and inflation expectations. (See here.) Because of the persistence of economic regimes, survivorship and hindsight biases are reduced in backtests. Real estate is a hard asset and certainly eligible to be considered "safe", especially during accelerating growth and a low-interest-rate environment. So, in this edition of our algo, I simply substituted IYR for IEF. (An end-of-the-month schedule is deliberately maintained.)
Rather than reflexively coding in Treasuries as our "safe haven," why not consider how asset class behavior varies over shifting economic scenarios?

# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com
# 11.16.15 Substituted Real Estate for Treasuries

import numpy as np
import pandas as pd
import datetime
import pytz

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [
        sid(21652)]  # IYR
    context.secs = [
        sid(32270),  # SSO
        sid(40516)]  # XIV
    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_close())

def handle_data(context, data):
    lev = context.account.leverage
    record(lev=lev)

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma
    ## Risk on/off logic:
    for bond in context.safe:
        if iwm_z < -1:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .50)
            log.info("Risk OFF: allocate %s" % (bond.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(bond):
                continue
            order_target_percent(bond, .00)
            log.info("Risk ON: zero weight %s" % (bond.symbol) + " at %s" % str(exchange_time))
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        ## sector trade logic
        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .5)
            log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))
    record(iwm_z=iwm_z)

I have no idea what I'm doing, but I changed the allocation to keep a consistent leverage of 1 or 0 and came up with this.

# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com

import numpy as np
import pandas as pd
import statsmodels.api as sm
import datetime
import pytz

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [
        sid(21652)]  # IYR
    context.secs = [
        sid(32270),  # SSO
        sid(40516)]  # XIV
    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_close())
    # schedule_function(trade, date_rules.every_day()

def handle_data(context, data):
    lev = context.account.leverage
    record(lev=lev)

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma
    viable_stocks = []
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        record('sect_z_' + stock.symbol, sect_z)
        ## sector trade logic
        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            viable_stocks.append(stock)
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))
    if viable_stocks:
        order_target_percent(context.safe[0], .00)
        stock_allocation = 1.0 / len(viable_stocks)
        for stock in viable_stocks:
            if get_open_orders(stock):
                continue
            else:
                order_target_percent(stock, stock_allocation)
                log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
    else:
        if iwm_z <= -1:
            if not get_open_orders(context.safe[0]):
                order_target_percent(context.safe[0], 1.0)
        else:
            order_target_percent(context.safe[0], .00)
    record(iwm_z=iwm_z)

Lucas, akin to the low-volatility anomaly, where low-volatility assets produce higher risk-adjusted returns, I think you might have discovered the low-leverage anomaly! :-) When I get a chance, I'll take a look at this and see if I can explain it. Off the cuff, I think it falls into the category "If something looks too good to be true, it probably is." I do appreciate your interest in tinkering with this. I doubt your discovery is the result of chance and serendipity.
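Lucas's "leverage of 1 or 0" allocation above boils down to three cases: equal-weight the viable assets, go fully to the safe asset on a risk-off signal, or hold cash. A minimal sketch (the function name is mine), returning the risky weights plus the safe-asset weight:

```python
def allocate(viable, iwm_z):
    """Equal-weight the viable assets; with none viable, put 100% in the
    safe asset when the index z-score signals risk-off (iwm_z <= -1),
    otherwise hold everything in cash."""
    if viable:
        w = 1.0 / len(viable)
        return {s: w for s in viable}, 0.0
    return {}, (1.0 if iwm_z <= -1 else 0.0)
```

Note that the weights always sum to 1.0 or 0.0, which is what pins the recorded leverage to 1 or 0.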
So the tearsheet gave me this:

Worst Drawdown Periods
   net drawdown in %   peak date    valley date  recovery date  duration
0              26.81   2014-12-05   2014-12-16   2015-08-14          181
1              15.90   2014-09-18   2014-10-02   2014-11-10           38
4              14.82   2013-03-26   2013-06-24   2013-07-09           76
2              14.09   2013-09-18   2013-10-08   2013-10-17           22
3              10.29   2015-08-18   2015-09-04          NaT          NaN

Drawdowns 0 and 1 can be entirely attributed to holding XIV at the wrong time, while 4 is a combination of XIV and SSO, mainly XIV. It seems that (at least exclusively) holding XIV maybe isn't a good idea due to its rather severe volatility? I thought that switching in ZIV instead would be a good idea, but it's really just a less extreme version of XIV: slightly less drawdown, but also lower returns (slightly higher on a volatility-matched basis).

# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com

import numpy as np
import pandas as pd
import statsmodels.api as sm
import datetime
import pytz

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [
        sid(21652)]  # IYR
    context.secs = [
        sid(32270),  # SSO
        sid(40513)]  # ZIV
    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))
    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_close())
    # schedule_function(trade, date_rules.every_day()

def handle_data(context, data):
    lev = context.account.leverage
    record(lev=lev)

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma
    viable_stocks = []
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        record('sect_z_' + stock.symbol, sect_z)
        ## sector trade logic
        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            viable_stocks.append(stock)
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, .0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))
    if viable_stocks:
        order_target_percent(context.safe[0], .00)
        stock_allocation = 1.0 / len(viable_stocks)
        for stock in viable_stocks:
            if get_open_orders(stock):
                continue
            else:
                order_target_percent(stock, stock_allocation)
                log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
    else:
        if iwm_z <= -1:
            if not get_open_orders(context.safe[0]):
                order_target_percent(context.safe[0], 1.0)
        else:
            order_target_percent(context.safe[0], .00)
    record(iwm_z=iwm_z)

Jessica, in your Sep 4th post you listed several parameters for the fund, namely "we prefer to have at least a 5 - 10 year backtest to evaluate" and "While results of this backtest would pass our beta filter, it would not pass our first-level performance filters for the fund (Annl returns > 7%, Annl Sharpe > 0.60) - so this algo wouldn't make it past this initial stage of evaluation for further analysis." If these are indeed parameters you are using, it would be nice to see them right in the Capital-Allocation section of the website, rather than buried in a post like this.
That section has a paucity of details on exactly what you're looking for, which seems like its doing you and potential algo writers a disfavor since you seem to have already developed at least some of the parameters you'll be looking for. I'd add that you also need to clarify if 7% without leverage is your cutoff, or 7% with leverage. It seems that you're painting yourself into an unobtainable corner if you expect 7% unlevered returns with almost no drawdown, no beta, and toss in some other random restriction like long and short positions, no leveraged ETFs... Also, insisting on a 10 year backtest ensures you'll miss out on any strategy that has emerged on any stocks or products that have emerged in the intervening 10 years since 2005, leaving you only the picked over strategies that are likely to have already been arb'd out of existence by the other 2000 funds who've been working on this that whole time as well. You're getting worked over by whoever is providing your hedge fund seeding. They're holding you to unreasonable performance metrics that literally no-one else in the hedge fund industry is held to, and setting you up for failure in the process. I'd try to cast that net a little wider. I made a few modifications that improved a bit on the drawdown and volatility from my previous backtest: - Swap out Stephen's IEF and XIV for TLT and ZIV. TLT was just a whim but ZIV is a lot less volatile than XIV so it's slightly less vulnerable to the wild changes that XIV is subject to. - When buying multiple securities, weight them according to their Z score. - Add a "short circuit" for securities that immediately (outside the monthly buy routine) sells an entire position and exchanges it for bonds if at any time during the month its price falls below the lower Bollinger Band. This also helped reduce exposure to the sudden fluctuations on ZIV. 221 Loading... 
# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com

import numpy as np
import pandas as pd
import statsmodels.api as sm
import datetime
import pytz
import talib

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [ sid(23921) ]              # TLT
    context.secs = [ sid(32270), sid(40513) ]  # SSO, ZIV

    # default commissions and slippage
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))

    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_close())
    #schedule_function(trade, date_rules.every_day())

    context.short_circuited = []

def handle_data(context, data):
    price_history = history(bar_count=30, frequency='1d', field='price')
    lev = context.account.leverage
    record(lev=lev)

    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        #record('sect_z_' + stock.symbol, sect_z)

    upper, middle, lower = talib.BBANDS(price_history[sid(40513)], timeperiod=30)
    #record(lower=lower[-1], price=data[sid(40513)].price)
    record(short_circuit=1 if data[sid(40513)].price < lower[-1] else 0)

    for stock in context.portfolio.positions:
        upper, middle, lower = talib.BBANDS(price_history[stock], timeperiod=30)
        if data[stock].price < lower[-1]:
            if get_open_orders(stock):
                continue
            percent = (context.portfolio.positions[stock].amount *
                       context.portfolio.positions[stock].last_sale_price) / context.portfolio.portfolio_value
            value = (context.portfolio.positions[stock].amount *
                     context.portfolio.positions[stock].last_sale_price)
            #context.short_circuited.append((stock, percent))
            order_target_percent(stock, 0.0)
            order_target_value(context.safe, value)

    index = 0
    while context.short_circuited and index < len(context.short_circuited):
        stock, percent = context.short_circuited[index]
        if get_open_orders(stock):
            index += 1
            continue
        upper, middle, lower = talib.BBANDS(price_history[stock], timeperiod=30)
        if data[stock].price > lower[-1]:
            order_target_percent(stock, percent)
            context.short_circuited.pop(index)
        else:
            index += 1

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma

    viable_stocks = []
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        #record('sect_z_' + stock.symbol, sect_z)

        ## sector trade logic
        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            viable_stocks.append((stock, sect_z, sect_z + 2))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, 0.0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))

    if viable_stocks:
        # sort and sum by the z + 2 offset (third tuple element);
        # sorting/summing the raw tuples raises a TypeError
        viable_stocks = sorted(viable_stocks, key=lambda x: x[2], reverse=True)
        total_weight = sum(item[2] for item in viable_stocks)
        order_target_percent(context.safe, 0.00)
        for stock, sect_z, sect_z_offset in viable_stocks:
            stock_allocation = sect_z_offset / total_weight
            if get_open_orders(stock):
                continue
            else:
                order_target_percent(stock, stock_allocation)
                log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
    else:
        #if iwm_z <= -1:
        if not get_open_orders(context.safe):
            order_target_percent(context.safe, 1.0)
        #else:
        #    order_target_percent(context.safe, 0.00)

    record(iwm_z=iwm_z)

There was a runtime error.

110% negative cash stung that one for a PvR (Profit vs. Risk, taking all cash in play into account) of 112% instead of the apparent 236%. It happens early, in March 2011. To spot margin dips more easily, try recording the maximum leverage rather than just the current leverage, or record the cash low. Your previous modification was fine, only barely over; great return.

Hi, can anyone tweak this strategy for the Indian market?

Following up on Kevin Q.'s comments above, it'd be good to hear from the Quantopian hedge fund team. For example, the algo Lucas Cooper posted above looks pretty darn good:

beta = 0.1
sharpe = 2.45
2X the SPY return

What's missing? It seems like a decent candidate for the hedge fund, no? Why not plunk $100K into it, let it run for 6 months, and then start ramping up the allocation if it plays out? Or is SSO a non-starter, since, by implication from the contest rules, you won't allow leveraged ETFs in the fund?
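The suggestion above, recording the running maximum leverage (or the cash low) instead of the instantaneous value, can be sketched in plain Python. Inside a Quantopian algorithm you would feed these values to `record()` each bar; `RiskTracker` is a hypothetical helper name, not part of any API:

```python
# Sketch of tracking running max leverage and cash low, so a brief
# margin dip stays visible on the custom chart. Hypothetical helper.

class RiskTracker(object):
    def __init__(self):
        self.max_leverage = 0.0
        self.cash_low = float('inf')

    def update(self, leverage, cash):
        # Keep the worst readings seen so far, so a one-bar margin dip
        # is not hidden by later, calmer bars.
        self.max_leverage = max(self.max_leverage, leverage)
        self.cash_low = min(self.cash_low, cash)
        return self.max_leverage, self.cash_low

tracker = RiskTracker()
tracker.update(1.10, 50000)
tracker.update(2.17, -2047493)       # the margin dip
mx, low = tracker.update(1.00, 10000)
# mx stays 2.17 and low stays -2047493 even after leverage recovers
```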

Here's Lucas Cooper's algo from above, but at $1M in capital. Aside from using SSO, it would still seem to conform to the basic hedge fund requirements. Or am I missing something?

# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com

import numpy as np
import pandas as pd
import statsmodels.api as sm
import datetime
import pytz
import talib

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [ sid(23921) ]              # TLT
    context.secs = [ sid(32270), sid(40513) ]  # SSO, ZIV

    # default commissions and slippage
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))

    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_close())

    context.short_circuited = []

def handle_data(context, data):
    price_history = history(bar_count=30, frequency='1d', field='price')
    lev = context.account.leverage
    record(lev=lev)

    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        #record('sect_z_' + stock.symbol, sect_z)

    upper, middle, lower = talib.BBANDS(price_history[sid(40513)], timeperiod=30)
    #record(lower=lower[-1], price=data[sid(40513)].price)
    record(short_circuit=1 if data[sid(40513)].price < lower[-1] else 0)

    for stock in context.portfolio.positions:
        upper, middle, lower = talib.BBANDS(price_history[stock], timeperiod=30)
        if data[stock].price < lower[-1]:
            if get_open_orders(stock):
                continue
            percent = (context.portfolio.positions[stock].amount *
                       context.portfolio.positions[stock].last_sale_price) / context.portfolio.portfolio_value
            value = (context.portfolio.positions[stock].amount *
                     context.portfolio.positions[stock].last_sale_price)
            #context.short_circuited.append((stock, percent))
            order_target_percent(stock, 0.0)
            order_target_value(context.safe, value)

    index = 0
    while context.short_circuited and index < len(context.short_circuited):
        stock, percent = context.short_circuited[index]
        if get_open_orders(stock):
            index += 1
            continue
        upper, middle, lower = talib.BBANDS(price_history[stock], timeperiod=30)
        if data[stock].price > lower[-1]:
            order_target_percent(stock, percent)
            context.short_circuited.pop(index)
        else:
            index += 1

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma

    viable_stocks = []
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        #record('sect_z_' + stock.symbol, sect_z)

        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            viable_stocks.append((stock, sect_z, sect_z + 2))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, 0.0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))

    if viable_stocks:
        # sort and sum by the z + 2 offset (third tuple element);
        # sorting/summing the raw tuples raises a TypeError
        viable_stocks = sorted(viable_stocks, key=lambda x: x[2], reverse=True)
        total_weight = sum(item[2] for item in viable_stocks)
        order_target_percent(context.safe, 0.00)
        for stock, sect_z, sect_z_offset in viable_stocks:
            stock_allocation = sect_z_offset / total_weight
            if get_open_orders(stock):
                continue
            else:
                order_target_percent(stock, stock_allocation)
                log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
    else:
        #if iwm_z <= -1:
        if not get_open_orders(context.safe):
            order_target_percent(context.safe, 1.0)
        #else:
        #    order_target_percent(context.safe, 0.00)

    record(iwm_z=iwm_z)

There was a runtime error.
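The short-circuit exit above sells a position whenever its price falls below the lower band from `talib.BBANDS`, a 30-day simple moving average minus (by TA-Lib's default) 2 standard deviations. A minimal numpy sketch of just the lower band, for illustration (the function name and sample prices are hypothetical):

```python
import numpy as np

# Sketch of the lower Bollinger Band that drives the short-circuit exit.
# talib.BBANDS defaults to an SMA +/- 2 standard deviations; this
# re-implements only the lower band with numpy.

def lower_band(prices, period=30, num_std=2.0):
    window = np.asarray(prices[-period:], dtype=float)
    middle = window.mean()
    sigma = window.std()   # population std, matching TA-Lib's STDDEV
    return middle - num_std * sigma

flat = [100.0] * 30
dropped = [100.0] * 29 + [90.0]   # one sharp drop on the last bar
# With flat prices the band sits at the mean; the sharp drop falls
# below the band, which is the condition that triggers the exit.
```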

Great algorithm by Jamie. It's very simple but effective. It seems complex approaches don't always triumph over good old plays around moving averages.

Crazy question: is there a way within this algo to limit your trading power to only half the cash in your portfolio?
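One simple way (a sketch, not from the thread's code) is to cap gross exposure by scaling every target weight before it goes to `order_target_percent`; `cap_exposure` is a hypothetical helper:

```python
# Hypothetical helper: scale target weights so gross exposure never
# exceeds a cap, e.g. 0.5 to commit only half the portfolio's value.
# Each scaled weight would then be passed to order_target_percent.

def cap_exposure(weights, cap=0.5):
    gross = sum(abs(w) for w in weights.values())
    if gross <= cap:
        return dict(weights)   # already under the cap
    scale = cap / gross
    return {sym: w * scale for sym, w in weights.items()}

capped = cap_exposure({'SSO': 0.6, 'ZIV': 0.4}, cap=0.5)
# gross was 1.0, so each weight is halved: SSO -> 0.3, ZIV -> 0.2
```

Note this limits exposure relative to total portfolio value, not literal cash on hand; the two diverge once positions are open.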

# 12/1/2010 start (XIV inception 11/30/2010)
# Jamie Lunn's Original: Backtest ID: 55b6a4d8e418a20c6c039099
# Modifications, Stephen Harlin, nextSignals.com

import numpy as np
import pandas as pd
import statsmodels.api as sm
import datetime
import pytz
import talib

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.safe = [ sid(23921) ]              # TLT
    context.secs = [ sid(38533), sid(40670) ]  # SSO, ZIV

    # default commissions and slippage
    set_slippage(slippage.VolumeShareSlippage(volume_limit=0.25, price_impact=0.1))

    # execute on the first trading day of the month
    schedule_function(trade, date_rules.month_start(), time_rules.market_close())

    context.short_circuited = []

def handle_data(context, data):
    price_history = history(bar_count=30, frequency='1d', field='price')
    lev = context.account.leverage
    record(lev=lev)

    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        #record('sect_z_' + stock.symbol, sect_z)

    upper, middle, lower = talib.BBANDS(price_history[sid(40513)], timeperiod=30)
    #record(lower=lower[-1], price=data[sid(40513)].price)
    record(short_circuit=1 if data[sid(40513)].price < lower[-1] else 0)

    for stock in context.portfolio.positions:
        upper, middle, lower = talib.BBANDS(price_history[stock], timeperiod=30)
        if data[stock].price < lower[-1]:
            if get_open_orders(stock):
                continue
            percent = (context.portfolio.positions[stock].amount *
                       context.portfolio.positions[stock].last_sale_price) / context.portfolio.portfolio_value
            value = (context.portfolio.positions[stock].amount *
                     context.portfolio.positions[stock].last_sale_price)
            #context.short_circuited.append((stock, percent))
            order_target_percent(stock, 0.0)
            order_target_value(context.safe, value)

    index = 0
    while context.short_circuited and index < len(context.short_circuited):
        stock, percent = context.short_circuited[index]
        if get_open_orders(stock):
            index += 1
            continue
        upper, middle, lower = talib.BBANDS(price_history[stock], timeperiod=30)
        if data[stock].price > lower[-1]:
            order_target_percent(stock, percent)
            context.short_circuited.pop(index)
        else:
            index += 1

def trade(context, data):
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

    ############# info on IWM ###############
    iwm_mean = data[sid(21519)].mavg(50)
    iwm_sigma = data[sid(21519)].stddev(50)
    iwm_price = data[sid(21519)].price
    iwm_z = (iwm_price - iwm_mean) / iwm_sigma

    viable_stocks = []
    ########## calculate z for each sector ###########
    for stock in context.secs:
        mean = data[stock].mavg(50)
        sigma = data[stock].stddev(50)
        current_price = data[stock].price
        sect_z = (current_price - mean) / sigma
        #record('sect_z_' + stock.symbol, sect_z)

        if sect_z < iwm_z and (1.0 > sect_z > -1.0):
            viable_stocks.append((stock, sect_z, sect_z + 2))
        else:
            if get_open_orders(stock):
                continue
            order_target_percent(stock, 0.0)
            log.info("Zero weight %s" % (stock.symbol) + " at %s" % str(exchange_time))

    if viable_stocks:
        # sort and sum by the z + 2 offset (third tuple element);
        # sorting/summing the raw tuples raises a TypeError
        viable_stocks = sorted(viable_stocks, key=lambda x: x[2], reverse=True)
        total_weight = sum(item[2] for item in viable_stocks)
        order_target_percent(context.safe, 0.00)
        for stock, sect_z, sect_z_offset in viable_stocks:
            stock_allocation = sect_z_offset / total_weight
            if get_open_orders(stock):
                continue
            else:
                order_target_percent(stock, stock_allocation)
                log.info("Allocate %s" % (stock.symbol) + " at %s" % str(exchange_time))
    else:
        #if iwm_z <= -1:
        if not get_open_orders(context.safe):
            order_target_percent(context.safe, 1.0)
        #else:
        #    order_target_percent(context.safe, 0.00)

    record(iwm_z=iwm_z)

There was a runtime error.

For the Dec 5, 2015 algo that appears to return 122% ...
2015-10-30 pvr:253 INFO Profited 1220468 on 3047493 activated/transacted for PvR of 40.0%
2015-10-30 pvr:256 INFO QRet 122.05 PvR 40.05 CshLw -2047493.00 MxLv 2.17 RskHi 3047493

Why does this algorithm (any of the backtests) trade when the z-score of the equity is between 1.0 and -1.0? What's the significance of that range?
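For what it's worth, the `(1.0 > sect_z > -1.0)` condition only admits securities trading within one standard deviation of their 50-day mean, so both strongly extended and deeply broken-down names are skipped; whether that was Jamie's intent, only he can say. A small illustration of the filter (the names are hypothetical):

```python
# Illustration of the (1.0 > sect_z > -1.0) filter from the algo above:
# only z-scores strictly inside one sigma of the moving average pass.

def within_band(z, lo=-1.0, hi=1.0):
    return lo < z < hi

candidates = {'near_mean': 0.4, 'extended': 1.7, 'broken_down': -1.2}
viable = [name for name, z in candidates.items() if within_band(z)]
# only 'near_mean' survives the filter
```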