Improved Minimum Variance Portfolio

After a nudge in the right direction from Grant Kiehne, I made a new version of a minimum variance portfolio. It uses a Lagrangian to solve for the weights that minimize the variance. I used returns on the VWAP for everything, and the portfolio was randomly generated.

I'm not a huge fan of shorts, so I added context.allow_shorts; when it's False, the algo uses non-negative least squares to solve the Lagrangian. I also added the ability to re-invest cash, but I'm thinking there's a cleaner way to do it. Ideas?

I'm pretty happy with the results I've seen so far; I'm gonna have to port it over to minute data at some point to start paper trading. Does anybody have any idea how to stop that negative cash dip on the first day it invests? I commented out what I tried (line 51).

Dave

[Backtest attached: 241% total returns]
import numpy as np
import pandas as pd
from scipy.optimize import nnls

def initialize(context):
    context.stocks = [
        sid(438), sid(9693), sid(6704), sid(6082), sid(3971), sid(1882),
        sid(7543), sid(4080), sid(2427), sid(20394), sid(20277), sid(10649),
        sid(7800), sid(2293), sid(6683), sid(21757), sid(17800), sid(19658),
        sid(14518), sid(22406)
    ]
    # Cash buffer: reduces investment on a per-trade basis
    context.cash_buffer = 0

    # Days between rebalances
    context.rebal_days = 10

    # Number of observations used in calculations
    context.nobs = 300

    context.trading_days = 0  # day counter
    context.re_invest_cash = True
    context.allow_shorts = False

    context.data = {i: [] for i in context.stocks}

def handle_data(context, data):
    context.trading_days += 1
    P = context.portfolio
    record(cash=P.cash, positions=P.positions_value)

    for i in data.keys():
        context.data[i].append(data[i].vwap(1))
        if len(context.data[i]) > context.nobs:
            context.data[i] = context.data[i][-context.nobs:]

    n = context.trading_days
    if n > 50 and not n % context.rebal_days:
        vwaps = pd.DataFrame(context.data)
        context.df = vwaps.pct_change().dropna()

        weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)
        # logs = {sym.symbol: weights[sym] for sym in context.stocks if weights[sym] != 0}
        # log.info(logs)
        for sym in context.stocks:
            old_pos = P.positions[sym].amount
            if context.re_invest_cash:  # and context.trading_days != context.rebal_days:
                new_pos = re_invest_order(sym, weights, context, data)
            else:
                new_pos = int(
                    (weights[sym] * P.starting_cash / data[sym].price) * (1 - context.cash_buffer)
                )
            order(sym, new_pos - old_pos)

def re_invest_order(sym, weights, context, data):
    P = context.portfolio
    if P.cash > 0:
        new_pos = int(
            (weights[sym] * (P.starting_cash + P.cash) / data[sym].price) * (1 - context.cash_buffer))
    else:
        new_pos = int(
            (weights[sym] * P.starting_cash / data[sym].price) * (1 - context.cash_buffer))
    return new_pos

def min_var_weights(returns, allow_shorts=False):
    '''
    Returns a dictionary of sid:weight pairs.

    allow_shorts=True  --> minimum variance weights returned
    allow_shorts=False --> least squares regression finds non-negative
                           weights that minimize the variance
    '''
    cov = 2 * returns.cov()
    x = np.ones(len(cov) + 1)
    x[-1] = 1.0
    p = lagrangize(cov)
    if allow_shorts:
        weights = np.linalg.solve(p, x)[:-1]
    else:
        weights = nnls(p, x)[0][:-1]
    return {sym: weights[i] for i, sym in enumerate(returns)}

def lagrangize(df):
    '''
    Utility function to format a DataFrame
    in order to solve a Lagrangian system.
    '''
    df = df.copy()  # avoid mutating the caller's frame
    df['lambda'] = np.ones(len(df))
    z = np.ones(len(df) + 1)
    z[-1] = 0.0
    m = list(df.as_matrix())
    m.append(z)
    return pd.DataFrame(np.array(m))


This fixes the crazy oscillations in the positions and cash. It performs a lot better too: over 400% on the same portfolio.

def re_invest_order(sym, weights, context, data):
    P = context.portfolio
    if P.cash > 0:
        new_pos = int(
            (weights[sym] * (P.positions_value + P.cash) / data[sym].price) * (1 - context.cash_buffer))
    else:
        new_pos = int(
            (weights[sym] * P.positions_value / data[sym].price) * (1 - context.cash_buffer))
    return new_pos
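A quick numeric illustration of why this helps (all numbers hypothetical): sizing off starting_cash keeps targeting the original equity, so once the portfolio grows, every rebalance sells winners back down toward the old base, which is what drove the oscillations.

```python
# Hypothetical portfolio state after some growth
starting_cash = 100000.0
positions_value = 150000.0
cash = 2000.0
weight = 0.10    # target weight for one security
price = 50.0

# Old sizing: anchored to the original $100k, ignores growth
old_target = int(weight * starting_cash / price)
print(old_target)    # 200 shares

# New sizing: anchored to current equity, so targets scale with the portfolio
new_target = int(weight * (positions_value + cash) / price)
print(new_target)    # 304 shares
```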


Very interesting result with the non-negative least squares! I noticed that there seems to be a lot of churn in the portfolio with the NNLS approach; perhaps a slower recalibration period would save on transaction costs/slippage if you included those, especially if you are considering paper trading. I would be interested to see some measure of portfolio turnover comparing the two methods.

Just to note, I adjusted your function min_var_weights a bit where you called lagrangize; it's not really necessary, since the version with short sales that I posted originally has the closed form built into it. If you look at the difference in the calculated weights, you can see that it is within machine precision.

from numpy.linalg import inv  # needed for the precision-matrix comparison below

def min_var_weights(returns, allow_shorts=False):
    '''
    Returns a dictionary of sid:weight pairs.

    allow_shorts=True  --> minimum variance weights returned
    allow_shorts=False --> least squares regression finds non-negative
                           weights that minimize the variance
    '''
    cov = 2 * returns.cov()
    x = np.ones(len(cov) + 1)
    x[-1] = 1.0
    p = lagrangize(cov)
    if allow_shorts:
        # this is exactly the same as the closed form below (weights = ...) with less mess
        weights2 = np.linalg.solve(p, x)[:-1]
    else:
        weights2 = nnls(p, x)[0][:-1]
    # closed form: w = (Σ^-1 1) / (1' Σ^-1 1)
    precision = np.asmatrix(inv(returns.cov()))
    oned = np.ones((len(returns.columns), 1))
    weights = precision * oned / (oned.T * precision * oned)  # these are nearly equal, check the logs
    weights2 = np.asmatrix(weights2).T
    log.info(weights - weights2)
    return {sym: weights2[i] for i, sym in enumerate(returns)}
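The equivalence can also be checked outside the backtester: the closed form w = Σ⁻¹1 / (1'Σ⁻¹1) and the Lagrangian linear solve agree to machine precision. A toy check with a random covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_normal((300, 5))
cov = np.cov(returns, rowvar=False)
n = len(cov)
ones = np.ones(n)

# Closed form: w = Σ⁻¹1 / (1'Σ⁻¹1)
s = np.linalg.solve(cov, ones)   # Σ⁻¹1 without forming the inverse
w_closed = s / (ones @ s)

# Lagrangian system: [[2Σ, 1], [1', 0]] [w; λ] = [0, ..., 0, 1]
A = np.block([[2 * cov, ones[:, None]],
              [ones[None, :], np.zeros((1, 1))]])
b = np.zeros(n + 1)
b[-1] = 1.0
w_lagrange = np.linalg.solve(A, b)[:-1]

print(np.max(np.abs(w_closed - w_lagrange)))  # tiny, on the order of machine epsilon
```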


Excellent coding style, by the way; much neater than mine. Nice job!

Nice, they are definitely within machine precision; that's good confirmation that they're both working as intended. I separated out the Lagrange function because Lagrangian systems show up in other problems, and I thought the portability might help at some point down the road. Thanks for the style props; I borrowed the enumerates and a couple of other things from you, so you're complimenting yourself too.

I fixed the churn in the portfolio by using the positions value rather than the starting cash to rebalance. It smoothed everything out and made the returns a lot higher. I also added a couple of lines to invest evenly on the first day while the number of observations builds up. That way there's no lag getting into the market, and it fixed the initial negative cash dip I was getting before. This test has those changes with the same portfolio and settings.

Something worth noting is that I ran a test where everything went horribly wrong with the non-negative approach. It was caused by the algo going 0.999999 into a single security and 1e-15 or so into some others. I'm thinking a ceiling needs to be put on the weight that can be given to any one security, and any weights within machine precision of 0 need to be set to 0. I would be a poor man due to an aberration if that happened in a live situation.
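A sketch of that cleanup step; the eps and the 0.25 cap are illustrative values, not tuned. Leftover weight is held as cash rather than renormalized, since renormalizing can push a weight back over the cap:

```python
import numpy as np

def clean_weights(weights, eps=1e-8, cap=0.25):
    """Zero out weights within eps of 0 and cap any single weight.
    The leftover (1 - sum) is held as cash rather than redistributed."""
    w = np.asarray(weights, dtype=float)
    w[np.abs(w) < eps] = 0.0   # kill machine-precision dust
    w = np.minimum(w, cap)     # ceiling on any one security
    return w

# The degenerate NNLS outcome described above: ~all-in on one name
raw = np.array([0.999999, 1e-15, 1e-15, 1e-6])
w = clean_weights(raw)
print(w)             # [0.25, 0.0, 0.0, 1e-06]
print(1 - w.sum())   # remainder held as cash
```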

[Backtest attached: 135% total returns]
import numpy as np
import pandas as pd
from scipy.optimize import nnls

def initialize(context):
    context.stocks = [
        sid(438), sid(9693), sid(6704), sid(6082), sid(3971), sid(1882),
        sid(7543), sid(4080), sid(2427), sid(20394), sid(20277), sid(10649),
        sid(7800), sid(2293), sid(6683), sid(21757), sid(17800), sid(19658),
        sid(14518), sid(22406)
    ]
    # Cash buffer: reduces investment on a per-trade basis
    context.cash_buffer = 0.

    # Days between rebalances
    context.rebal_days = 10

    # Number of observations used in calculations
    context.nobs = 250

    context.trading_days = 0  # day counter
    context.re_invest_cash = True
    context.allow_shorts = False

    context.data = {i: [] for i in context.stocks}

def handle_data(context, data):
    context.trading_days += 1
    P = context.portfolio
    if context.trading_days == 1:
        # Invest evenly on day one while the observations build up;
        # removes the lag getting into the market and the negative cash dip
        w = 1. / len(context.stocks)
        for sym in context.stocks:
            shares = (P.starting_cash * w // data[sym].price) * (1 - context.cash_buffer)
            order(sym, shares)
    record(cash=P.cash, positions=P.positions_value)

    for i in data.keys():
        context.data[i].append(data[i].vwap(1))
        if len(context.data[i]) > context.nobs:
            context.data[i] = context.data[i][-context.nobs:]

    n = context.trading_days
    if n > 50 and not n % context.rebal_days:
        vwaps = pd.DataFrame(context.data)
        context.df = vwaps.pct_change().dropna()

        weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)
        logs = {sym.symbol: weights[sym] for sym in context.stocks if weights[sym] != 0}
        log.info(logs)
        for sym in context.stocks:
            old_pos = P.positions[sym].amount
            if context.re_invest_cash:  # and context.trading_days != context.rebal_days:
                new_pos = re_invest_order(sym, weights, context, data)
            else:
                new_pos = int(
                    (weights[sym] * max(P.positions_value, P.starting_cash) / data[sym].price) * (1 - context.cash_buffer)
                )
            order(sym, new_pos - old_pos)

def re_invest_order(sym, weights, context, data):
    P = context.portfolio
    if P.cash > 0:
        new_pos = int(
            (weights[sym] * (P.positions_value + P.cash) / data[sym].price) * (1 - context.cash_buffer))
    else:
        new_pos = int(
            (weights[sym] * P.positions_value / data[sym].price) * (1 - context.cash_buffer))
    return new_pos

def min_var_weights(returns, allow_shorts=False):
    '''
    Returns a dictionary of sid:weight pairs.

    allow_shorts=True  --> minimum variance weights returned
    allow_shorts=False --> least squares regression finds non-negative
                           weights that minimize the variance
    '''
    cov = 2 * returns.cov()
    x = np.ones(len(cov) + 1)
    x[-1] = 1.0
    p = lagrangize(cov)
    if allow_shorts:
        weights = np.linalg.solve(p, x)[:-1]
    else:
        weights = nnls(p, x)[0][:-1]
    return {sym: weights[i] for i, sym in enumerate(returns)}

def lagrangize(df):
    '''
    Utility function to format a DataFrame
    in order to solve a Lagrangian system.
    '''
    df = df.copy()  # avoid mutating the caller's frame
    df['lambda'] = np.ones(len(df))
    z = np.ones(len(df) + 1)
    z[-1] = 0.0
    m = list(df.as_matrix())
    m.append(z)
    return pd.DataFrame(np.array(m))



How was the list of stocks chosen for this portfolio? Is it a random list? Has anyone tried other sets of symbols?
Is there a process for picking a stock list for this algorithm if I wanted to try other portfolios?

Sarvi

Ya, it's a random list; I'm not sure what they are. You can just replace the context.stocks list with whatever securities you want. Just make sure they were traded throughout the time period you're testing. There's no method to it; just change that one list.
Also, I think there's a mistake in this version.

# in min_var_weights
x = np.ones(len(cov) + 1)

# should be
x = np.array([0.] * (len(cov) + 1))


The vector being solved for should be all zeros and one 1. For some reason this doesn't seem to change the outcome of the solutions, but it's worth mentioning.
I would clone the updated version above; it got rid of the oscillations in the cash and positions.
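There is a likely reason the fix doesn't change the outcome, at least for the shorts-allowed linear solve: the spurious extra ones in the right-hand side are absorbed entirely by the Lagrange multiplier. Solving 2Σw' + λ'1 = 1 with 1'w' = 0 forces λ' = 1 and w' = 0, so only λ shifts, never the weights. A quick check with a random covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
cov = np.cov(rng.standard_normal((300, 6)), rowvar=False)
n = len(cov)
ones = np.ones(n)

# Lagrangian system matrix [[2Σ, 1], [1', 0]]
A = np.block([[2 * cov, ones[:, None]],
              [ones[None, :], np.zeros((1, 1))]])

b_ones = np.ones(n + 1)     # the right-hand side as originally coded
b_fixed = np.zeros(n + 1)   # corrected: all zeros and one trailing 1
b_fixed[-1] = 1.0

w_ones = np.linalg.solve(A, b_ones)
w_fixed = np.linalg.solve(A, b_fixed)

print(np.max(np.abs(w_ones[:-1] - w_fixed[:-1])))  # weights: essentially 0
print(w_ones[-1] - w_fixed[-1])                    # multiplier differs by exactly 1
```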

I want a huge global universe of stocks, i.e. 1000, not just a cute handful of 15 stocks!

Backtests let you use up to 100. Look at DollarVolumeUniverse in the docs; it gives you a sample of the market.

Has anyone tried modifying this to run on minute data to replicate the results of daily data?
You can't do paper trading until the algo can handle minute data. The simple tip I got about converting was to add this to the beginning:

exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
log.info('{hour}:{minute}'.format(hour=exchange_time.hour, minute=exchange_time.minute))
if exchange_time.hour != 10 or exchange_time.minute != 0:
    return

But that just reproduces the results of the daily data.

Sarvi

I have a version that runs on minute data; however, the backtester tells me the sids I used with daily data are not available in minute mode. I'll run a full test on minute data, then replicate it with a daily test afterwards to avoid this. I'll post the results later.

Marcus, I have warned about this before here, but I'll repeat myself. Let's assume you even wanted to use 43 stocks:

One pitfall of this Markowitz type of analysis is the curse of dimensionality. You have 43 stocks, which means that you are estimating a covariance matrix containing 43*42/2 + 43 = 946 parameters utilizing just 252 observations. This is complete statistical nonsense; the parameters are not even uniquely identified. You are going to want to increase the length of your observation window to tighten the standard errors on the covariance estimates. 90,000 days would probably be sufficient for nice tight estimation with 43 assets. Obviously, that size of estimation window is unreasonable, which is why the next step would be to implement dimension-reduction techniques. An exogenous factor model could work nicely, the principal components approach is another methodology, or there is the factors-on-demand methodology of Meucci. Asymptotic principal components from Connor and Korajczyk (1986) could be an excellent solution with a very large number of assets.
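One cheap version of the idea, sketched as simple shrinkage of the sample covariance toward its diagonal; the 0.5 intensity is an arbitrary illustrative choice (Ledoit-Wolf gives a data-driven one, and factor models go further):

```python
import numpy as np

def shrunk_cov(returns, delta=0.5):
    """Shrink the sample covariance toward its diagonal to tame
    estimation noise when parameters outnumber observations."""
    S = np.cov(returns, rowvar=False)
    target = np.diag(np.diag(S))             # structured target: diagonal only
    return delta * target + (1 - delta) * S  # convex combination

# 43 assets, 252 observations, as in the example above (synthetic data)
rng = np.random.default_rng(2)
R = rng.standard_normal((252, 43)) * 0.01
S = np.cov(R, rowvar=False)
S_shrunk = shrunk_cov(R)

# Shrinkage improves the conditioning of the matrix the optimizer inverts
print(np.linalg.cond(S), np.linalg.cond(S_shrunk))
```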

Wayne, I forgot that you wanted to see a comparison of the negative vs. non-negative results. I have found that the non-negative is a little more consistent, but neither one is conclusively better. For example, this backtest does 473% while allowing shorts and about 153% without, but the test above does 400% with the non-negative approach and about 180% when allowing negative weights.

The code for this test works with minute and daily data, and the shorting can be toggled as well. I made another version that solves the problem on page 12 of the paper in your comment above, but it is more erratic, especially with negative weights, and it goes all-in on one security without them.

[Backtest attached: 135% total returns]
import numpy as np
import pandas as pd
from scipy.optimize import nnls
from pytz import timezone
#
# This is a version of the min variance algo without history().
# It can run on daily or minute data; just change context.period to
# anything but 'daily'
#

def initialize(context):
    context.period = 'daily'
    context.stocks = [
        sid(12662), sid(2174), sid(5787), sid(4963), sid(21612), sid(1062),
        sid(5719), sid(7580), sid(8857), sid(2000), sid(5938), sid(19917),
        sid(12267), sid(3695), sid(14520), sid(2663), sid(24074), sid(12350)
    ]
    context.cash_buffer = 0.0
    context.EPSILON = 1e-8
    context.max_weight = 2
    context.order_cushion = .2
    # Days between rebalances
    context.rebal_days = 30
    context.today = None
    context.trading_days = 0  # counts days (not bars), so minute mode works
    context.invested = False  # set True after the initial equal-weight buy
    # Number of observations used in calculations
    context.nobs = 250
    context.min_nobs = 50

    context.re_invest_cash = 1
    context.allow_shorts = 1
    context.invest_position = 1  # uses starting cash if False; only used if not re-investing cash

    context.data = {i: [] for i in context.stocks}

def handle_data(context, data):
    dt = get_datetime().astimezone(timezone('US/Eastern'))
    P = context.portfolio
    if not dt.day == context.today:
        context.trading_days += 1
        append_data(context, data)
        context.today = dt.day

    if not context.invested:
        # Invest evenly on the first bar while the observations build up
        context.invested = True
        w = 1. / len(context.stocks)
        for sym in context.stocks:
            shares = (P.starting_cash * w // data[sym].price) * (1 - context.cash_buffer)
            order(sym, shares)
    record(cash=P.cash, positions=P.positions_value)

    if not time_to_rebalance(context, context.trading_days):
        return

    vwaps = pd.DataFrame(context.data)
    context.df = vwaps.pct_change().dropna()

    weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)
    for i in list(weights):
        if weights[i] > context.max_weight:
            weights = catch_max_weight(weights, context, data)
        if abs(weights[i]) < context.EPSILON:
            weights[i] = 0
    # logs = {sym.symbol: weights[sym] for sym in context.stocks if weights[sym] != 0}
    # log.info(logs)
    orders = {}
    for sym in context.stocks:
        old_pos = P.positions[sym].amount
        if context.re_invest_cash:
            new_pos = re_invest_order(sym, weights, context, data)
        else:
            if context.invest_position:
                new_pos = int((1 - context.cash_buffer) * (
                    weights[sym] * max(P.positions_value, P.starting_cash, P.cash)) / data[sym].price
                )
            else:
                new_pos = int((1 - context.cash_buffer) * (
                    weights[sym] * max(P.starting_cash, P.cash)) / data[sym].price
                )
        cost = abs(data[sym].price * new_pos)
        orders[sym] = (old_pos, new_pos, data[sym].price, weights[sym], cost)

    if check_orders(orders, context, data):
        for sym in orders:
            order_target(sym, orders[sym][1])

def time_to_rebalance(context, n):
    # (the def line for this check was dropped in the original post;
    #  the function name here is a reconstruction)
    if n < context.min_nobs or n % context.rebal_days:
        return False
    if context.period == 'daily':
        return True
    # Convert all time zones into US/Eastern to avoid confusion
    loc_dt = get_datetime().astimezone(timezone('US/Eastern'))
    return loc_dt.hour == 12 and loc_dt.minute == 0

def check_orders(orders, context, data):
    P = context.portfolio
    total_cost = sum([orders[i][-1] for i in orders])
    if total_cost > (P.positions_value + P.cash) * (1 + context.order_cushion):
        return False
    log.info('\nTotal Order: $%s\n' % total_cost)
    log.info({i.symbol: orders[i] for i in orders})
    return True

def catch_max_weight(weights, context, data):
    log.debug("\nOVER WEIGHT LIMIT\n%s" % {x.symbol: weights[x] for x in data.keys()})
    return {i: 1. / len(context.stocks) for i in weights}

def append_data(context, data):
    for i in context.stocks:
        context.data[i].append(data[i].vwap(1))
        if len(context.data[i]) > context.nobs:
            context.data[i] = context.data[i][-context.nobs:]

def re_invest_order(sym, weights, context, data):
    P = context.portfolio
    if P.cash > 0:
        new_pos = int((1 - context.cash_buffer) * (
            weights[sym] * (P.positions_value + P.cash)) / data[sym].price
        )
    else:
        new_pos = int((1 - context.cash_buffer) * (
            weights[sym] * P.positions_value) / data[sym].price
        )
    return new_pos

def min_var_weights(returns, allow_shorts=False):
    '''
    Returns a dictionary of sid:weight pairs.

    allow_shorts=True  --> minimum variance weights returned
    allow_shorts=False --> least squares regression finds non-negative
                           weights that minimize the variance
    '''
    cov = 2 * returns.cov()
    x = np.array([0.] * (len(cov) + 1))
    x[-1] = 1.0
    p = lagrangize(cov)
    if allow_shorts:
        weights = np.linalg.solve(p, x)[:-1]
    else:
        weights = nnls(p, x)[0][:-1]
    return {sym: weights[i] for i, sym in enumerate(returns)}

def lagrangize(df):
    '''
    Utility function to format a DataFrame
    in order to solve a Lagrangian system.
    '''
    df = df.copy()  # avoid mutating the caller's frame
    df['lambda'] = np.ones(len(df))
    z = np.ones(len(df) + 1)
    z[-1] = 0.0
    m = list(df.as_matrix())
    m.append(z)
    return pd.DataFrame(np.array(m))

@ Wayne Nilsen

"One pitfall of this Markowitz type of analysis is the curse of dimensionality. You have 43 stocks, which means that you are estimating a covariance matrix containing 43*42/2 + 43 = 946 parameters utilizing just 252 observations. This is complete statistical nonsense; the parameters are not even uniquely identified. You are going to want to increase the length of your observation window to tighten the standard errors on the covariance estimates. 90,000 days would probably be sufficient for nice tight estimation with 43 assets. Obviously, that size of estimation window is unreasonable, which is why the next step would be to implement dimension-reduction techniques. An exogenous factor model could work nicely, the principal components approach is another methodology, or there is the factors-on-demand methodology of Meucci. Asymptotic principal components from Connor and Korajczyk (1986) could be an excellent solution with a very large number of assets."

Interesting. I have to read it again to understand it :-) Could you please have a look at my paper:

Davidsson, M (2013) The Use of Least Squares in the Optimization of Investment Portfolios,
International Journal of Management , Vol 30, No 10, pp 310 – 321
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2366298

Let me know what you think?!

I agree with Wayne: the dimension of the covariance matrix is O(n^2), but the number of observations is O(n). However, I don't agree that the dimension of the covariance matrix is exactly n*(n-1)/2 + n; there is a hidden relation there. So when there are too many equities in the portfolio, we need more, or many more, observations for it to have statistical meaning.

On the other hand, consider that in a time series model, too many observations means using outdated data to do the prediction, which does not make sense either. Thus we cannot use arbitrarily many observations.

The final conclusion: we need to restrain the number of equities in the portfolio, or this Markowitz type of analysis suffers the curse of dimensionality.
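For reference, the usual count of free parameters in an n-asset sample covariance matrix, ignoring any extra structure, is n variances plus n(n-1)/2 distinct covariances:

```python
def cov_params(n):
    # n variances on the diagonal + n*(n-1)/2 off-diagonal covariances
    return n * (n + 1) // 2

for n in (9, 20, 43):
    print(n, cov_params(n))   # 9 -> 45, 20 -> 210, 43 -> 946
```

So even the nine SPDR sectors used below need 45 parameters estimated from the observation window, and 43 assets need 946.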

Also, I rewrote part of the code to fix the negative cash dip on the first day.

And I implemented the SPDR sectors as the portfolio picks, since they have low variance, which matches our trading idea. I was going to use the set_universe feature of Quantopian, but we only have one implementation, DollarVolumeUniverse, which picks stocks by liquidity. However, if we pick high-liquidity equities and they all have high volatility, that works against our strategy of minimizing the variance. Second, some of these liquid names are newly listed, such as EEM and QID, and some were heading toward default, such as MER and OIH; they strongly influence our portfolio picks and rebalancing.

[Backtest attached: 14% total returns]
import numpy as np
import pandas as pd
from scipy.optimize import nnls
import math

def initialize(context):
    # The nine SPDR sector ETFs
    context.stocks = [sid(19654), sid(19655), sid(19656), sid(19657), sid(19658),
                      sid(19659), sid(19660), sid(19661), sid(19662)]
    # Cash buffer: reduces investment on a per-trade basis
    context.cash_buffer = 0

    # Days between rebalances
    context.rebal_days = 10

    # Number of observations used in calculations
    context.nobs = 100

    context.trading_days = 0  # day counter
    context.re_invest_cash = True
    context.allow_shorts = False

    context.data = {i: [] for i in context.stocks}

def handle_data(context, data):
    context.trading_days += 1
    P = context.portfolio
    record(cash=P.cash, positions=P.positions_value)

    for i in data.keys():
        context.data[i].append(data[i].vwap(1))
        if len(context.data[i]) > context.nobs:
            context.data[i] = context.data[i][-context.nobs:]

    n = context.trading_days
    if n > context.nobs and not n % context.rebal_days:
        vwaps = pd.DataFrame(context.data)
        context.df = vwaps.pct_change().dropna()

        weights = min_var_weights(context.df, allow_shorts=context.allow_shorts)

        for sym in context.stocks:
            old_pos = P.positions[sym].amount
            if context.re_invest_cash:
                new_pos = re_invest_order(sym, weights, context, data)
            else:
                new_pos = math.floor((weights[sym] * P.starting_cash / data[sym].price) * (1 - context.cash_buffer))
            order(sym, new_pos - old_pos)

def re_invest_order(sym, weights, context, data):
    P = context.portfolio
    if context.trading_days > context.nobs + 10:
        new_pos = math.floor((weights[sym] * (P.cash + P.starting_cash) / data[sym].price) * (1 - context.cash_buffer))
    else:
        new_pos = math.floor((weights[sym] * P.starting_cash / data[sym].price) * (1 - context.cash_buffer))
    return new_pos

def min_var_weights(returns, allow_shorts=False):
    cov = returns.cov()
    x = np.ones(len(cov) + 1)
    x[-1] = 1.0
    p = lagrangize(cov)
    if allow_shorts:
        weights = np.linalg.solve(p, x)[:-1]
    else:
        weights = nnls(p, x)[0][:-1]
    return {sym: weights[i] for i, sym in enumerate(returns)}

def lagrangize(df):
    df = df.copy()  # avoid mutating the caller's frame
    df['lambda'] = np.ones(len(df))
    z = np.ones(len(df) + 1)
    z[-1] = 0.0
    m = list(df.as_matrix())
    m.append(z)
    return pd.DataFrame(np.array(m))


One last word: I don't believe this trading strategy can print money, since it is very conservative, but it should beat the market.

The original poster only gets such a good result because the picks were lucky. Say I long these 20 equities evenly on the first day and do nothing; my return is still 251%. :)

[Backtest attached: 12% total returns]
def initialize(context):
    context.stocks = [
        sid(438), sid(9693), sid(6704), sid(6082), sid(3971), sid(1882),
        sid(7543), sid(4080), sid(2427), sid(20394), sid(20277), sid(10649),
        sid(7800), sid(2293), sid(6683), sid(21757), sid(17800), sid(19658),
        sid(14518), sid(22406)
    ]
    context.day = 1

def handle_data(context, data):
    if context.day == 1:
        for s in context.stocks:  # `s`, not `sid`, to avoid shadowing the sid() function
            pos = context.portfolio.cash / 20 / data[s].close_price
            order(s, pos)
    context.day += 1


I agree that this is a long strategy. The game would be to put promising securities in there and just leave it. It has flaws but is a good start towards something better.

There are new versions of this on this post. One is minute data with history(), and the other works like this one but can run on minute or daily data. They also don't have the churn in the cash and positions. There were a couple of mistakes in this version; see my earlier comments for the fixes.

I think there is a lot of promise in combining this with something like Frank Grossman's process of selecting a portfolio on the basis of relative strength and volatility. See http://02f27c6.netsolhost.com/papers/darwin-adaptive-asset-allocation.pdf for some qualitative ideas.

I have played around with this strategy and found that when I include assets that are uncorrelated or negatively correlated, the weighting algo overallocates to those assets. I adjusted the max weight, but it only seems to register a log output. Any suggestions on how to restrict the overallocation?