simple OLMAR w/ optimizer & get_fundamentals

Not sure if I shared this already. I cleaned it up, and got the performance to look really good. I'd appreciate feedback, particularly on how to ensure that the optimizer is working properly. --Grant
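
One way to gain some confidence in the optimizer is to exercise the same SLSQP setup offline with made-up inputs and check the constraints directly. The sketch below is not part of the algo (and writes the objective with an explicit b_t argument rather than *args, though the math is the same); it feeds a hypothetical current portfolio and price-relative vector into optimize.minimize and asserts that the result sums to one and meets the expected-return constraint:

import numpy as np
from scipy import optimize

def norm_squared(b, b_t):
    # squared Euclidean distance from the current portfolio (same objective as below)
    delta_b = b - b_t
    return 0.5*np.dot(delta_b, delta_b)

eps = 1.01                                    # chosen so the constraint binds in this toy case
b_t = np.array([0.4, 0.3, 0.2, 0.1])          # hypothetical current weights
x_tilde = np.array([1.02, 0.99, 1.01, 1.00])  # hypothetical mean-price / last-price ratios
b_0 = np.ones(len(b_t))/len(b_t)

cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},
        {'type': 'ineq', 'fun': lambda x: np.dot(x, x_tilde) - eps})
bnds = tuple((0, 1) for _ in b_t)

res = optimize.minimize(norm_squared, b_0, args=(b_t,), method='SLSQP',
                        constraints=cons, bounds=bnds, options={'ftol': 1e-6})

assert res.success
assert abs(np.sum(res.x) - 1.0) < 1e-4        # weights sum to one
assert np.dot(res.x, x_tilde) >= eps - 1e-4   # expected-return constraint satisfied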

# Adapted from:
# Li, Bin, and Steven HOI. "On-Line Portfolio Selection with Moving Average Reversion." The 29th International Conference on Machine Learning (ICML2012), 2012.
# http://icml.cc/2012/papers/168.pdf

import numpy as np
from scipy import optimize
import pandas as pd

def initialize(context):
    
    context.spy = sid(8554)
    
    context.eps = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    set_benchmark(symbol('QQQ'))
 
def before_trading_start(context): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(20)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]

def handle_data(context, data):
    
    record(leverage = context.account.leverage)

def trade(context,data):
    
    # drop stocks with no data this bar; iterate over a copy so removals don't skip items
    for stock in list(context.stocks):
        if stock not in data:
            context.stocks.remove(stock)
            
    record(num_stocks = len(context.stocks))
        
    # check for de-listed stocks & leveraged ETFs
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock.security_end_date < get_datetime():  # de-listed?
            context.stocks.remove(stock)
        elif stock in security_lists.leveraged_etf_list:  # leveraged ETF?
            context.stocks.remove(stock)
    
    prices = history(8*390,'1m','price')
    prices = pd.ewma(prices,span=390).as_matrix(context.stocks)
    
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
    
    b_t = []
    
    for stock in context.stocks:
        b_t.append(context.portfolio.positions[stock].amount*data[stock].price)
         
    m = len(b_t)
    b_0 = np.ones(m) / m
    denom = np.sum(b_t)

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:     
        context.b_t = np.divide(b_t,denom)
    
    x_tilde = []
    
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        x_tilde.append(mean_price/prices[-1,i]) 
        
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
        
    bnds = tuple(tuple(x) for x in bnds)
     
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,x_tilde) - context.eps})
    
    res= optimize.minimize(norm_squared, b_0, args=b_t,jac=norm_squared_deriv,method='SLSQP',constraints=cons,bounds=bnds, options={'disp': False,  'maxiter': 100, 'iprint': 1, 'ftol': 1})
    
    record(norm_sq = norm_squared(res.x,b_t))
    
    allocation = res.x
    allocation[allocation<0] = 0 
    allocation = allocation/np.sum(allocation)
    
    if res.success and (np.dot(allocation,x_tilde)-context.eps > 0):
        # print 'Success'
        allocate(context,data,allocation)
    else:
        return

def allocate(context, data, desired_port):
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, desired_port[i])
    
    for stock in data:
        if stock not in context.stocks:
            order_target_percent(stock,0)
    
def norm_squared(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
     
    return 0.5*np.dot(delta_b,delta_b.T)

def norm_squared_deriv(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
        
    return delta_b

Hey Grant, thanks for sharing this exceptionally clean implementation. I remember the original OLMAR paper had two different versions. Is this the basic one or the BuyAndHold one?

Alex,

This is the simple version. There may be an advantage to the so-called BAH(OLMAR) version, since it weights the portfolio using a range of trailing window lengths for computing the mean. It might smooth out the returns, and has the added benefit of effectively eliminating a parameter that can be over-fit.
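
Roughly, the blending step would compute a single-window OLMAR allocation for each of several trailing window lengths and then average them, weighting each window's allocation by its predicted portfolio return. A plain-numpy sketch of just that step (get_allocation is a stand-in for the single-window OLMAR computation, and the window range is illustrative):

import numpy as np

def bah_olmar_blend(get_allocation, windows=range(3, 9)):
    # get_allocation(n) is assumed to return (allocation, predicted_return)
    # for a trailing window of n days; both names are placeholders here.
    total = None
    weight_sum = 0.0
    for n in windows:
        alloc, pred = get_allocation(n)
        if total is None:
            total = np.zeros_like(alloc, dtype=float)
        total += pred*alloc         # weight each window's allocation by its predicted return
        weight_sum += pred
    blended = total/weight_sum
    return blended/np.sum(blended)  # renormalize so the weights sum to one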

Note that the optimize function may be too inefficient to complete within the 50 seconds allotted per minute bar, if it has to be run 20 or 30 times to implement the BAH(OLMAR) approach. Something to watch out for, if you start to play around with it.

Grant

Very nice work, thanks for that. Please note that the algorithm seems to correlate with the market: during turbulent periods such as Sep 2008 to Jan 2010 and Mar 2004 to July 2004, it shows big drawdowns and results that are worse than the benchmark.

Here's an update, to fix a bug (sorta). I was not normalizing the current-state portfolio properly:

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:
        context.b_t = np.divide(b_t,denom)

I changed it to:

    if denom == 0.0:  
        b_t = np.copy(b_0)  
    else:  
        b_t = np.divide(b_t,denom)  

I also tweaked context.eps to 1.005 to get a decent return, and the optimizer settings had to be tweaked as well (ftol is now 1e-6, which I think is the default). There were a few other minor changes that don't affect the results.

# Adapted from:
# Li, Bin, and Steven HOI. "On-Line Portfolio Selection with Moving Average Reversion." The 29th International Conference on Machine Learning (ICML2012), 2012.
# http://icml.cc/2012/papers/168.pdf

import numpy as np
from scipy import optimize
import pandas as pd

def initialize(context):
    
    context.spy = sid(8554)
    
    context.eps = 1.005
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    set_benchmark(symbol('QQQ'))
 
def before_trading_start(context): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(20)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]

def handle_data(context, data):
    
    record(leverage = context.account.leverage)

def trade(context,data):
    
    # check if data exists
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock not in data:
            context.stocks.remove(stock)
        
    # check for de-listed stocks & leveraged ETFs
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock.security_end_date < get_datetime():  # de-listed?
            context.stocks.remove(stock)
        elif stock in security_lists.leveraged_etf_list:  # leveraged ETF?
            context.stocks.remove(stock)
            
    # check for open orders      
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
        
    record(num_stocks = len(context.stocks))
    
    prices = history(8*390,'1m','price')
    prices = pd.ewma(prices,span=390).as_matrix(context.stocks)
    
    b_t = []
    
    for stock in context.stocks:
        b_t.append(context.portfolio.positions[stock].amount*data[stock].price)
         
    m = len(b_t)
    b_0 = np.ones(m) / m  # equal-weight portfolio
    denom = np.sum(b_t)

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:     
        b_t = np.divide(b_t,denom)
    
    x_tilde = []
    
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        x_tilde.append(mean_price/prices[-1,i]) 
        
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
        
    bnds = tuple(tuple(x) for x in bnds)
     
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,x_tilde) - context.eps})
    
    res= optimize.minimize(norm_squared, b_0, args=b_t,jac=norm_squared_deriv,method='SLSQP',constraints=cons,bounds=bnds, options={'disp': False,  'maxiter': 100, 'iprint': 1, 'ftol': 1e-6})
    
    record(norm_sq = norm_squared(res.x,b_t))
    
    allocation = res.x
    allocation[allocation<0] = 0 
    allocation = allocation/np.sum(allocation)
    
    if res.success and (np.dot(allocation,x_tilde)-context.eps > 0):
        allocate(context,data,allocation)
    else:
        return

def allocate(context, data, desired_port):
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, desired_port[i])
    
    for stock in data:
        if stock not in context.stocks:
            order_target_percent(stock,0)
    
def norm_squared(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
     
    return 0.5*np.dot(delta_b,delta_b.T)

def norm_squared_deriv(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
        
    return delta_b

Here's a long-term backtest using the code I posted immediately above. Obviously, there are some things to understand, but there appears to be "workiness."

# Adapted from:
# Li, Bin, and Steven HOI. "On-Line Portfolio Selection with Moving Average Reversion." The 29th International Conference on Machine Learning (ICML2012), 2012.
# http://icml.cc/2012/papers/168.pdf

import numpy as np
from scipy import optimize
import pandas as pd

def initialize(context):
    
    context.spy = sid(8554)
    
    context.eps = 1.005
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    set_benchmark(symbol('QQQ'))
 
def before_trading_start(context): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(20)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]

def handle_data(context, data):
    
    record(leverage = context.account.leverage)

def trade(context,data):
    
    # check if data exists
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock not in data:
            context.stocks.remove(stock)
        
    # check for de-listed stocks & leveraged ETFs
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock.security_end_date < get_datetime():  # de-listed?
            context.stocks.remove(stock)
        elif stock in security_lists.leveraged_etf_list:  # leveraged ETF?
            context.stocks.remove(stock)
            
    # check for open orders      
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
        
    record(num_stocks = len(context.stocks))
    
    prices = history(8*390,'1m','price')
    prices = pd.ewma(prices,span=390).as_matrix(context.stocks)
    
    b_t = []
    
    for stock in context.stocks:
        b_t.append(context.portfolio.positions[stock].amount*data[stock].price)
         
    m = len(b_t)
    b_0 = np.ones(m) / m  # equal-weight portfolio
    denom = np.sum(b_t)

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:     
        b_t = np.divide(b_t,denom)
    
    x_tilde = []
    
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        x_tilde.append(mean_price/prices[-1,i]) 
        
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
        
    bnds = tuple(tuple(x) for x in bnds)
     
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,x_tilde) - context.eps})
    
    res= optimize.minimize(norm_squared, b_0, args=b_t,jac=norm_squared_deriv,method='SLSQP',constraints=cons,bounds=bnds, options={'disp': False,  'maxiter': 100, 'iprint': 1, 'ftol': 1e-6})
    
    record(norm_sq = norm_squared(res.x,b_t))
    
    allocation = res.x
    allocation[allocation<0] = 0 
    allocation = allocation/np.sum(allocation)
    
    if res.success and (np.dot(allocation,x_tilde)-context.eps > 0):
        allocate(context,data,allocation)
    else:
        return

def allocate(context, data, desired_port):
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, desired_port[i])
    
    for stock in data:
        if stock not in context.stocks:
            order_target_percent(stock,0)
    
def norm_squared(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
     
    return 0.5*np.dot(delta_b,delta_b.T)

def norm_squared_deriv(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
        
    return delta_b

Grant,

Many thanks for posting this updated version -- looks great and much leaner.

I'd like to highlight one trick in case anyone else is interested. Often the minute price can be quite noisy because of sub-minute price swings (Ernie Chan gave a great talk on this at QuantCon: https://vimeo.com/122492697). Here, Grant is instead using an exponentially weighted average price:

prices = history(8*390,'1m','price')  
prices = pd.ewma(prices,span=390).as_matrix(context.stocks)  
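
(A quick aside for anyone trying this with a current pandas release: pd.ewma and DataFrame.as_matrix have since been removed, so the smoothing step would now be written roughly as

prices = history(8*390, '1m', 'price')
prices = prices.ewm(span=390).mean()[context.stocks].values

with the same span, and the default adjust=True matching the old pd.ewma behavior.)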

Thomas


Here's a further revision of the code, adding the weighting over a range of trailing window lengths (the so-called "BAH(OLMAR)" mentioned by Alex above). I also included a means to adjust the leverage, and the ability to include the inverse ETF, SH, in the portfolio. At some point, I can add a bunch of comments, if there is interest. --Grant

# Adapted from:
# Li, Bin, and Steven HOI. "On-Line Portfolio Selection with Moving Average Reversion." The 29th International Conference on Machine Learning (ICML2012), 2012.
# http://icml.cc/2012/papers/168.pdf

import numpy as np
from scipy import optimize
import pandas as pd

def initialize(context):
    
    context.eps = 1.005
    context.pct_index = 0.0 # max percentage of inverse ETF
    context.leverage = 1.0
    
    print 'context.eps = ' + str(context.eps)
    print 'context.pct_index = ' + str(context.pct_index)
    print 'context.leverage = ' + str(context.leverage)
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
 
def before_trading_start(context): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(20)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
    context.stocks.append(symbols('SH')[0]) # add inverse ETF to universe

def handle_data(context, data):
    
    record(leverage = context.account.leverage)

def get_allocation(context,data,n):
      
    prices = history(8*390,'1m','price').tail(n*390)
    prices = pd.ewma(prices,span=390).as_matrix(context.stocks)
    
    b_t = []
    
    for stock in context.stocks:
        b_t.append(context.portfolio.positions[stock].amount*data[stock].price)
         
    m = len(b_t)
    b_0 = np.ones(m) / m  # equal-weight portfolio
    denom = np.sum(b_t)

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:     
        b_t = np.divide(b_t,denom)
    
    x_tilde = []
    
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        x_tilde.append(mean_price/prices[-1,i]) 
        
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
    
    bnds[-1] = [0,context.pct_index] # limit exposure to index
        
    bnds = tuple(tuple(x) for x in bnds)
     
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,x_tilde) - context.eps})
    
    res= optimize.minimize(norm_squared, b_0, args=b_t,jac=norm_squared_deriv,method='SLSQP',constraints=cons,bounds=bnds, options={'disp': False,  'maxiter': 100, 'iprint': 1, 'ftol': 1e-6})
    
    allocation = res.x
    allocation[allocation<0] = 0 
    allocation = allocation/np.sum(allocation)
    
    if res.success and (np.dot(allocation,x_tilde)-context.eps > 0):
        return (allocation,np.dot(allocation,x_tilde))
    else:
        return (b_t,1)

def trade(context,data):
    
    # check if data exists
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock not in data:
            context.stocks.remove(stock)
        
    # check for de-listed stocks & leveraged ETFs
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock.security_end_date < get_datetime():  # de-listed?
            context.stocks.remove(stock)
        elif stock in security_lists.leveraged_etf_list:  # leveraged ETF?
            context.stocks.remove(stock)
    
    # check for open orders      
    if get_open_orders():
        return
    
    # find average weighted allocation over range of trailing window lengths
    a = np.zeros(len(context.stocks))
    w = 0
    for n in range(3,9):
        (a_n, w_n) = get_allocation(context,data,n)  # allocation & weight for this window
        a += w_n*a_n  # weight each window's allocation by its predicted return
        w += w_n
    
    allocation = a/w
    allocation = allocation/np.sum(allocation)
    
    allocate(context,data,allocation)

def allocate(context, data, desired_port):
    
    record(long = sum(desired_port[0:-1]))
    record(inverse = desired_port[-1])
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, context.leverage*desired_port[i])
    
    for stock in data:
        if stock not in context.stocks:
            order_target_percent(stock,0)
    
def norm_squared(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
     
    return 0.5*np.dot(delta_b,delta_b.T)

def norm_squared_deriv(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
        
    return delta_b

Grant, thanks so much for sharing. I'm currently studying the OLMAR paper and your implementations and hope to contribute back with variations of my own once I understand things a little better. I'd be very interested in your adding more comments to your code if you have time.

One thing I'd like to understand better is why you schedule trade() for 60 minutes after market open. I tried modifying that to 5 minutes and got similar results. When I dropped it down to 1 minute or 0 (no minutes parameter), I got a runtime error:

KeyError: Security(44933, symbol='FOXA_V', security_name='NEWS CORP', exchange='NASDAQ GLOBAL SELECT MARKET', start_date=Timestamp('2013-06-19 00:00:00+0000', tz='UTC'), end_date=Timestamp('2013-06-28 00:00:00+0000', tz='UTC'), first_traded=None)  

Sorry if this is too much of a beginner question -- I'm still getting up to speed on everything.

Hello A. Roy,

I may get the chance tomorrow to add some comments. If you don't hear back in a day or two, just re-post here to pester me.

Interesting that you are getting an error; I get the same thing, so I sent the fine folks at Quantopian an e-mail with access to the code so that they can have a look. I thought I'd wrung out all of the problems!

Grant

@ A. Roy,

Well, I made a change and managed to avoid the error you found, but I can't say that I fully understand it. In initialize(context), I added:

context.data = []  

and then in before_trading_start(context), I added:

    # check if data exists  
    for stock in context.stocks:  
        if stock not in context.data:  
            context.stocks.remove(stock)  

In handle_data(context, data), I now have:

context.data = data  

I think the algo gets tripped up on stocks that are added to the universe via the code in before_trading_start() if a stock doesn't trade first thing in the day, but I'm not sure -- I thought I'd guarded against that elsewhere.

Eventually, I'll add comments to the code, but I'd like to understand this weird problem first, since there may be some restructuring of the code.

Grant

# Adapted from:
# Li, Bin, and Steven HOI. "On-Line Portfolio Selection with Moving Average Reversion." The 29th International Conference on Machine Learning (ICML2012), 2012.
# http://icml.cc/2012/papers/168.pdf

import numpy as np
from scipy import optimize
import pandas as pd

def initialize(context):
    
    context.eps = 1.005
    context.pct_index = 0.0 # max percentage of inverse ETF
    context.leverage = 1.0
    
    print 'context.eps = ' + str(context.eps)
    print 'context.pct_index = ' + str(context.pct_index)
    print 'context.leverage = ' + str(context.leverage)
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=1))
    
    context.data = []
 
def before_trading_start(context): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(20)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
    context.stocks.append(symbols('SH')[0]) # add inverse ETF to universe
    
    # keep only stocks seen in the latest handle_data bar
    # (skip this filter on the first day, before any bar data has been recorded)
    if context.data:
        for stock in list(context.stocks):
            if stock not in context.data:
                context.stocks.remove(stock)

def handle_data(context, data):
    
    record(leverage = context.account.leverage)
    
    context.data = data

def get_allocation(context,data,n):
      
    prices = history(8*390,'1m','price').tail(n*390)
    prices = pd.ewma(prices,span=390).as_matrix(context.stocks)
    
    b_t = []
    
    for stock in context.stocks:
        b_t.append(context.portfolio.positions[stock].amount*data[stock].price)
         
    m = len(b_t)
    b_0 = np.ones(m) / m  # equal-weight portfolio
    denom = np.sum(b_t)

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:     
        b_t = np.divide(b_t,denom)
    
    x_tilde = []
    
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        x_tilde.append(mean_price/prices[-1,i]) 
        
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
    
    bnds[-1] = [0,context.pct_index] # limit exposure to index
        
    bnds = tuple(tuple(x) for x in bnds)
     
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,x_tilde) - context.eps})
    
    res= optimize.minimize(norm_squared, b_0, args=b_t,jac=norm_squared_deriv,method='SLSQP',constraints=cons,bounds=bnds, options={'disp': False,  'maxiter': 100, 'iprint': 1, 'ftol': 1e-6})
    
    allocation = res.x
    allocation[allocation<0] = 0 
    allocation = allocation/np.sum(allocation)
    
    if res.success and (np.dot(allocation,x_tilde)-context.eps > 0):
        return (allocation,np.dot(allocation,x_tilde))
    else:
        return (b_t,1)

def trade(context,data):
    
    # check if data exists
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock not in data:
            context.stocks.remove(stock)
        
    # check for de-listed stocks & leveraged ETFs
    for stock in list(context.stocks):  # iterate over a copy while removing
        if stock.security_end_date < get_datetime():  # de-listed?
            context.stocks.remove(stock)
        elif stock in security_lists.leveraged_etf_list:  # leveraged ETF?
            context.stocks.remove(stock)
    
    # check for open orders      
    if get_open_orders():
        return
    
    # find average weighted allocation over range of trailing window lengths
    a = np.zeros(len(context.stocks))
    w = 0
    for n in range(3,9):
        (a_n, w_n) = get_allocation(context,data,n)  # allocation & weight for this window
        a += w_n*a_n  # weight each window's allocation by its predicted return
        w += w_n
    
    allocation = a/w
    allocation = allocation/np.sum(allocation)
    
    allocate(context,data,allocation)

def allocate(context, data, desired_port):
    
    record(long = sum(desired_port[0:-1]))
    record(inverse = desired_port[-1])
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, context.leverage*desired_port[i])
    
    for stock in data:
        if stock not in context.stocks:
            order_target_percent(stock,0)
    
def norm_squared(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
     
    return 0.5*np.dot(delta_b,delta_b.T)

def norm_squared_deriv(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
        
    return delta_b

Grant - try something like this. Instead of placing orders immediately, queue them up and then clear the queue as the stocks become tradeable.

In initialize

context.order_queue = {}  

Make a new function, clear_orders:

def clear_orders(context, data):
    for stock in context.order_queue.keys():
        if context.order_queue[stock]:
            if stock in data:
                context.order_queue[stock]['func'](stock, context.order_queue[stock]['amt'])
                context.order_queue[stock] = None

In your allocate function, instead of order_target_percent:

context.order_queue[stock] = {'func': order_target_percent, 'amt': context.leverage*desired_port[i]}
clear_orders(context, data)

In handle_data
clear_orders(context, data)

Don't know why my post formatting is so wacky - sorry about that
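
For anyone piecing the fragments above together, a consolidated sketch might look like the following (order_queue and clear_orders come from the snippets above; the rest of Grant's algorithm is assumed unchanged, and the zeroing-out of dropped positions is omitted for brevity):

def initialize(context):
    # ... existing initialize code ...
    context.order_queue = {}  # stock -> {'func': ..., 'amt': ...}, set to None once placed

def clear_orders(context, data):
    # submit any queued orders whose securities are now tradeable
    for stock in context.order_queue.keys():
        entry = context.order_queue[stock]
        if entry and stock in data:
            entry['func'](stock, entry['amt'])
            context.order_queue[stock] = None

def allocate(context, data, desired_port):
    for i, stock in enumerate(context.stocks):
        # queue the order instead of placing it immediately
        context.order_queue[stock] = {'func': order_target_percent,
                                      'amt': context.leverage*desired_port[i]}
    clear_orders(context, data)

def handle_data(context, data):
    record(leverage = context.account.leverage)
    clear_orders(context, data)  # retry queued orders every minute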