Sector ETF with OLMAR

Hi everyone,

I've recently been playing around with applying the OLMAR algorithm, which is pretty well known in the Quantopian community by now. As a first approach I thought, why not apply it to the market as a whole and see how it fares with the nine ETFs that represent the major S&P sectors? Any thoughts or ideas for improving this algo? It seems to show some promise but sometimes falls flat.

Any experienced OLMAR users out there want to chime in? Feel free to clone it if you want to use it. And if anyone wants to collaborate, feel free to send an invite.

import numpy as np
from scipy import optimize
import pandas as pd

def record_vars(context, data):
    record(total_positions = len(context.portfolio.positions))
    record(leverage = context.account.leverage)

def initialize(context):
    context.eps = 1.001
    context.long_lev = 2.7
    context.trade_frequency = 6
    context.month_counter = context.trade_frequency
    context.stocks_long =  [sid(19662),  # XLY Consumer Discretionary SPDR Fund
                            sid(19656),  # XLF Financial SPDR Fund
                            sid(19658),  # XLK Technology SPDR Fund
                            sid(19655),  # XLE Energy SPDR Fund
                            sid(19661),  # XLV Health Care SPDR Fund
                            sid(19657),  # XLI Industrial SPDR Fund
                            sid(19659),  # XLP Consumer Staples SPDR Fund
                            sid(19654),  # XLB Materials SPDR Fund
                            sid(19660) ] # XLU Utilities SPDR Fund

    # The schedule_function calls were truncated when posting; this is a
    # plausible reconstruction of the scheduling.
    schedule_function(record_vars, date_rules.every_day(),
                      time_rules.market_close())
    schedule_function(month_counter, date_rules.month_start(),
                      time_rules.market_open())
    schedule_function(master, date_rules.month_start(),
                      time_rules.market_open(hours=1, minutes=30))
def month_counter(context, data):
    context.month_counter += 1

def master(context, data):
    #if context.month_counter % context.trade_frequency != 0:
    #    return
    rebalance(context, data, context.stocks_long, context.long_lev)

def get_allocation(context, data, n, stock_list):
    # EWMA-smoothed minute prices over the trailing n days
    prices = data.history(stock_list, 'price', 8*390, '1m').tail(n*390)
    prices = pd.ewma(prices, span=390).as_matrix(stock_list)

    # current portfolio weights
    b_t = []
    for stock in stock_list:
        b_t.append(context.portfolio.positions[stock].amount*data.current(stock, 'price'))
    b_t = np.asarray(b_t)
    m = len(b_t)
    b_0 = np.ones(m) / m  # equal-weight portfolio
    denom = np.sum(b_t)

    if denom == 0.0:
        b_t = np.copy(b_0)
    else:
        b_t = np.divide(b_t, denom)

    # predicted price relatives: mean price over the window / latest price
    x_tilde = []
    for i, stock in enumerate(stock_list):
        mean_price = np.mean(prices[:, i])
        x_tilde.append(mean_price / prices[-1, i])
    x_tilde = np.asarray(x_tilde)

    # long-only bounds on each weight
    bnds = []
    limits = [0, 1]
    for stock in stock_list:
        bnds.append(limits)
    # bnds[-1] = [0, 0]  # limit exposure to index -- only relevant when an
    #                      index fund is appended to stock_list; with the 9
    #                      sector ETFs alone this would zero out XLU
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x:, x_tilde) - context.eps})
    res = optimize.minimize(norm_squared, b_0, args=b_t,
                            jac=norm_squared_deriv, method='SLSQP',
                            constraints=cons, bounds=bnds,
                            options={'disp': False, 'maxiter': 100,
                                     'iprint': 1, 'ftol': 1e-6})
    allocation = res.x
    allocation[allocation < 0] = 0
    allocation = allocation / np.sum(allocation)
    if res.success and, x_tilde) - context.eps > 0:
        return (allocation,, x_tilde))
    else:
        return (b_t, 1)

def rebalance(context, data, stock_list, leverage):
    # check if data exists
    for stock in stock_list:
        if not data.can_trade(stock):
            return
    # check for de-listed stocks & leveraged ETFs
    for stock in stock_list:
        if stock.security_end_date < get_datetime():  # de-listed ?
            return
        if stock in security_lists.leveraged_etf_list: # leveraged ETF?
            print 'removing leveraged ETF'
            return
    # skip if there are still open orders
    if get_open_orders():
        return
    # find average weighted allocation over a range of trailing window lengths
    a_sum = np.zeros(len(stock_list))
    w_sum = 0
    for n in range(3, 9):
        (a, w) = get_allocation(context, data, n, stock_list)
        a_sum += w * a
        w_sum += w
    allocation = a_sum / w_sum
    allocation = allocation / np.sum(allocation)
    allocate(context, data, allocation, stock_list, leverage)

def allocate(context, data, desired_port, stock_list, leverage):
    for i, stock in enumerate(stock_list):
        if data.can_trade(stock):
            print 'placing an order: %s %s' % (stock, desired_port[i])
            order_target_percent(stock, leverage * desired_port[i])

def norm_squared(b, *args):
    b_t = np.squeeze(np.asarray(args))  # unwrap the tuple scipy passes in
    delta_b = b - b_t
    return 0.5 *, delta_b)

def norm_squared_deriv(b, *args):
    b_t = np.squeeze(np.asarray(args))
    delta_b = b - b_t
    return delta_b


Hi Richard,

Looks like my hacked-together code. You might have a look at the original and contact its author, Paul Perry, to see if he has any insights.

Up front, you may want to consider your objective. Are you trying to "beat" SPY? To see whether you can improve performance with mean reversion, without adding risk and with transaction costs included? Or something else?

You could also consider how to incorporate the relative risk. For example, see

One suggestion is to compute the optimum portfolio weights every day (or every minute, if you want). Store the weights versus time, smooth them, and periodically rebalance (e.g. weekly/monthly/quarterly) using the smoothed weighting.
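A rough sketch of that store-and-smooth idea, assuming you log the raw optimizer weights into a DataFrame each day (`smoothed_targets` and the 10-day span are my own illustration, not from the post):

```python
import numpy as np
import pandas as pd

def smoothed_targets(daily_weights, span=10):
    """daily_weights: DataFrame indexed by date, one column per ETF,
    holding the optimizer's raw weights each day.  Returns the
    EMA-smoothed, renormalized weights to use at the next rebalance."""
    smoothed = daily_weights.ewm(span=span).mean()  # smooth each column
    latest = smoothed.iloc[-1]                      # most recent smoothed row
    return latest / latest.sum()                    # renormalize to sum to 1
```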

For development, I also recommend starting with a gross leverage of 1.0, and always running the backtest starting as far back as possible. If you go the equal-weight route, maybe use RSP as your benchmark, which would let you go back to 4/24/03?