Risk Parity // All Weather Portfolio

This is my rendition of the "All Weather Portfolio," a portfolio intended to perform well through recession, expansion, inflation, and deflation. This version uses Roncalli et al.'s equally-weighted risk contribution (ERC) calculation to set the weight of each asset. The assets I selected are the ones used in Harry Browne's Permanent Portfolio (cash, stocks, bonds, and gold). The idea here is to have an equal amount of total risk in each asset class.

Additionally, I use 2x leverage to target equity-like returns.

http://thierry-roncalli.com/download/erc.pdf
https://en.wikipedia.org/wiki/Permanent_Portfolio_Family_of_Funds
http://orcamgroup.com/wp-content/uploads/2012/10/pmpt-engineering-targeted-returns-and-risks.pdf
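
For anyone new to ERC, here is a minimal standalone sketch of the idea (the covariance matrix and weights are made-up numbers, purely for illustration): asset i's fractional risk contribution is w_i * (Cov·w)_i / (wᵀ·Cov·w), and the equally-weighted risk contribution portfolio chooses w so that these contributions are all equal.

import numpy as np

cov = np.array([[0.040, 0.005, 0.002],   # illustrative covariance only
                [0.005, 0.020, 0.001],
                [0.002, 0.001, 0.030]])
w = np.array([0.2, 0.5, 0.3])            # illustrative weights only

port_var = w.dot(cov).dot(w)             # total portfolio variance w' C w
rc = w * cov.dot(w) / port_var           # fractional risk contributions, sum to 1
print(rc)                                # at the ERC solution these all equal 1/N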

import numpy as np
import scipy.optimize


def initialize(context):
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=0),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=3),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    
    
    context.stocks = [ sid(8554), #SPY  
                       sid(23921),#TLT 
                     sid(26807), #GLD
                      sid(23911)]  #Shy
                  
    context.x0 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
def handle_data(context, data):  
    if 'mx_lvrg' not in context:             # Max leverage  
        context.mx_lvrg = 0                  # Init this instead in initialize() for better efficiency  
    if context.account.leverage > context.mx_lvrg:  
        context.mx_lvrg = context.account.leverage  
        record(mx_lvrg = context.mx_lvrg)    # Record maximum leverage encountered      
    record(leverage=context.account.leverage)

    
def getin(context, data):
     
    prices = data.history(context.stocks,'price',22,'1d').as_matrix(context.stocks) #22 = 1 month
    ret = np.diff(prices,axis=0) # daily price changes
    ret = np.divide(ret,np.amax(np.absolute(ret))) # scale by the largest absolute move
    
    bnds = ((0,1),(0,1),(0,1),(0,1)) #bounds for weights (number of bounds  = to number of assets)
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0})
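    # Solve for the ERC weights with SLSQP: minimize the dispersion of risk
    # contributions (fitnessERC below), subject to the weights summing to 1
    # and each weight staying within its bounds.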
    
    res= scipy.optimize.minimize(fitnessERC, context.x0, args=ret,method='SLSQP',constraints=cons,bounds=bnds)
    
    if res.success:
        allocation = res.x
        allocation[allocation<0]=0
        denom = np.sum(allocation)
        if denom != 0:         #normalization process
            allocation = allocation/denom
    else:
        allocation = context.x0
 
    context.x0 = allocation
    
    total=allocation[0]+allocation[1]+allocation[2]+allocation[3]
    w1=allocation[0]/total
    w2=allocation[1]/total
    w3=allocation[2]/total
    w4=allocation[3]/total
    
    leverage = 2
    
    order_target_percent(sid(8554),w1*leverage)
    order_target_percent(sid(23921),w2*leverage)
    order_target_percent(sid(26807),w3*leverage)
    order_target_percent(sid(23911),w4*leverage)
    



def variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return np.dot(x,np.dot(Acov,x))

def fitnessERC(x, *args):
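    # ERC-style objective (after Roncalli): y_i = x_i * (Acov x)_i is asset i's
    # contribution to portfolio variance. The loss penalizes the gap between each
    # scaled contribution y_i/var and the equal-risk budget b, so the minimizer
    # pushes the contributions toward one another.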
    N = x.shape[0]
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    Acov = np.matrix(Acov)
    x = np.matrix(x)
    y = np.array(x) * ( np.array( Acov * x.T ).T )
    var = x * Acov * x.T
    b = var/N
    fval = 0 
    y = np.squeeze(np.asarray(y))
    for i in range(0,N):
        xij  = (y[i]/var - b) * (y[i]/var - b)
        fval = fval + xij*xij
    return fval


          
8 responses

This is really nice - thanks for sharing.

Unfortunately, it is the sort of algo that would do great in the contest but totally fail in real life, because the interest paid on the 2x leverage would kill any returns made.

I tried to use leveraged ETFs to see if anything could be done to reduce the actual leverage, but I was not successful.

Nice algo - thanks for sharing.

"I use 2x leverage to target equity-like returns."

Just wondering: is there any reason to borrow $1,000,000 and put $1,302,882.84 into the cash-equivalent SHY?

2005-01-25
Ticker    Price      Shares    Value
GLD        $42.24     7,097    $299,777.28
SHY        $81.42    16,002    $1,302,882.84
SPY       $116.87     1,418    $165,721.66
TLT        $90.51     2,460    $222,654.60
Cash                           ($997,983.15)

Try AGG, BIV, or IEF instead of SHY.
That may help you beat the market without leverage and borrowing costs.
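For example (a hypothetical one-line change using Quantopian's symbols() lookup; the bounds tuple and the per-asset ordering logic would also need updating for the new asset count):

context.stocks = symbols('SPY', 'TLT', 'GLD', 'AGG', 'BIV', 'IEF')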

Mohammad, here is an example with no leverage. In this version I show how you can cap weights in the optimization problem: I cap SHY at 10% and remove the leverage.
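
The only change to the optimizer setup is the last bound, which caps SHY, roughly:

bnds = ((0, 1), (0, 1), (0, 1), (0, .1))  # SPY, TLT, GLD; SHY capped at 10%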

Vladimir, SHY is not exactly cash; it has a longer duration. The goal was to capture the "low-volatility anomaly."

import numpy as np
import scipy.optimize


def initialize(context):
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=0),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=3),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    
    
    context.stocks = [ sid(8554), #SPY  
                       sid(23921),#TLT 
                     sid(26807), #GLD
                      sid(23911)]  #Shy
                  
    context.x0 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
def handle_data(context, data):  
    if 'mx_lvrg' not in context:             # Max leverage  
        context.mx_lvrg = 0                  # Init this instead in initialize() for better efficiency  
    if context.account.leverage > context.mx_lvrg:  
        context.mx_lvrg = context.account.leverage  
        record(mx_lvrg = context.mx_lvrg)    # Record maximum leverage encountered      
    record(leverage=context.account.leverage)

    
def getin(context, data):
     
    prices = data.history(context.stocks,'price',22,'1d').as_matrix(context.stocks) #22 = 1 month
    ret = np.diff(prices,axis=0) # daily price changes
    ret = np.divide(ret,np.amax(np.absolute(ret))) # scale by the largest absolute move
    
    bnds = ((0,1),(0,1),(0,1),(0,.1)) #bounds for weights (number of bounds  = to number of assets)
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0})
    
    res= scipy.optimize.minimize(fitnessERC, context.x0, args=ret,method='SLSQP',constraints=cons,bounds=bnds)
    
    if res.success:
        allocation = res.x
        allocation[allocation<0]=0
        denom = np.sum(allocation)
        if denom != 0:         #normalization process
            allocation = allocation/denom
    else:
        allocation = context.x0
 
    context.x0 = allocation
    
    total=allocation[0]+allocation[1]+allocation[2]+allocation[3]
    w1=allocation[0]/total
    w2=allocation[1]/total
    w3=allocation[2]/total
    w4=allocation[3]/total
    
    leverage = 1
    
    order_target_percent(sid(8554),w1*leverage)
    order_target_percent(sid(23921),w2*leverage)
    order_target_percent(sid(26807),w3*leverage)
    order_target_percent(sid(23911),w4*leverage)
    



def variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return np.dot(x,np.dot(Acov,x))

def fitnessERC(x, *args):
    N = x.shape[0]
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    Acov = np.matrix(Acov)
    x = np.matrix(x)
    y = np.array(x) * ( np.array( Acov * x.T ).T )
    var = x * Acov * x.T
    b = var/N
    fval = 0 
    y = np.squeeze(np.asarray(y))
    for i in range(0,N):
        xij  = (y[i]/var - b) * (y[i]/var - b)
        fval = fval + xij*xij
    return fval


          

Georges,

The margin rate is several times higher than the total return on SHY (0.99% over the last year).
It is more reasonable to remove it.

                      SPY TLT GLD SHY      SPY TLT GLD
                      (leverage = 1.00)    (leverage = 0.97)
Total Returns         159.4%               161%
Benchmark Returns     111.1%               111.1%
Alpha                 0.11                 0.11
Beta                  0.13                 0.13
Sharpe                1.51                 1.51
Sortino               2.22                 2.22
Information Ratio     0.53                 0.54
Volatility            0.08                 0.08
Max Drawdown          14.8%                15%

import numpy as np
import scipy.optimize


def initialize(context):
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=0),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=3),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    
    
    context.stocks = [ sid(8554), #SPY  
                       sid(23921),#TLT 
                     sid(26807), #GLD
                     # sid(23911)
                     ]  #Shy
                  
    context.x0 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
def getin(context, data):
     
    prices = data.history(context.stocks,'price',22,'1d').as_matrix(context.stocks) #22 = 1 month
    ret = np.diff(prices,axis=0) # daily price changes
    ret = np.divide(ret,np.amax(np.absolute(ret))) # scale by the largest absolute move
    
    bnds = ((0,1),(0,1),(0,1)) #bounds for weights (number of bounds  = to number of assets)
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0})
    
    res= scipy.optimize.minimize(fitnessERC, context.x0, args=ret,method='SLSQP',constraints=cons,bounds=bnds)
    
    if res.success:
        allocation = res.x
        allocation[allocation<0]=0
        denom = np.sum(allocation)
        if denom != 0:         #normalization process
            allocation = allocation/denom
    else:
        allocation = context.x0
 
    context.x0 = allocation
    
    total=allocation[0]+allocation[1]+allocation[2]#+allocation[3]
    w1=allocation[0]/total
    w2=allocation[1]/total
    w3=allocation[2]/total
    # w4=allocation[3]/total
    
    leverage = 0.97
    
    order_target_percent(sid(8554),w1*leverage)
    order_target_percent(sid(23921),w2*leverage)
    order_target_percent(sid(26807),w3*leverage)
    # order_target_percent(sid(23911),w4*leverage)
    
    record(leverage=context.account.leverage)


def variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return np.dot(x,np.dot(Acov,x))

def fitnessERC(x, *args):
    N = x.shape[0]
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    Acov = np.matrix(Acov)
    x = np.matrix(x)
    y = np.array(x) * ( np.array( Acov * x.T ).T )
    var = x * Acov * x.T
    b = var/N
    fval = 0 
    y = np.squeeze(np.asarray(y))
    for i in range(0,N):
        xij  = (y[i]/var - b) * (y[i]/var - b)
        fval = fval + xij*xij
    return fval



'''
SPY TLT GLD SHY
leverage = 1.00

Total Returns
159.4%
Benchmark Returns
111.1%
Alpha
0.11
Beta
0.13
Sharpe
1.51
Sortino
2.22
Information Ratio
0.53
Volatility
0.08
Max Drawdown
14.8%

SPY TLT GLD
leverage = 0.97

Total Returns
161%
Benchmark Returns
111.1%
Alpha
0.11
Beta
0.13
Sharpe
1.51
Sortino
2.22
Information Ratio
0.54
Volatility
0.08
Max Drawdown
15%


'''
          

Here I further expand on the concept of "risk budgeting" by cutting weights when an asset in the portfolio runs too hot or too cold. I think it does a nice job of managing the risk of the total portfolio while, at some points, keeping leverage well under 1.

Sharpe Ratio = 2
Max DD < 11%

import numpy as np
import scipy.optimize


def initialize(context):
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=0),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    schedule_function(func= getin,date_rule=date_rules.week_start(days_offset=3),
                      time_rule=time_rules.market_open(hours=1, minutes=1))
    
    
    context.stocks = [ sid(8554), #SPY  
                       sid(23921),#TLT 
                     sid(26807) #GLD
                      ]  
                  
    context.x0 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
def handle_data(context, data):  
    if 'mx_lvrg' not in context:             # Max leverage  
        context.mx_lvrg = 0                  # Init this instead in initialize() for better efficiency  
    if context.account.leverage > context.mx_lvrg:  
        context.mx_lvrg = context.account.leverage  
        record(mx_lvrg = context.mx_lvrg)    # Record maximum leverage encountered      
    record(leverage=context.account.leverage)

    
def getin(context, data):
     
    prices = data.history(context.stocks,'price',22,'1d').as_matrix(context.stocks) #22 = 1 month
    ret = np.diff(prices,axis=0) # daily price changes
    ret = np.divide(ret,np.amax(np.absolute(ret))) # scale by the largest absolute move
    
    bnds = ((0,1),(0,1),(0,1)) #bounds for weights (number of bounds  = to number of assets)
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0})
    
    res= scipy.optimize.minimize(fitnessERC, context.x0, args=ret,method='SLSQP',constraints=cons,bounds=bnds)
    
    if res.success:
        allocation = res.x
        allocation[allocation<0]=0
        denom = np.sum(allocation)
        if denom != 0:         #normalization process
            allocation = allocation/denom
    else:
        allocation = context.x0
 
    context.x0 = allocation
    
    total=allocation[0]+allocation[1]+allocation[2]
    w1=allocation[0]/total
    w2=allocation[1]/total
    w3=allocation[2]/total
    
    
    #########################################################
    current_spy = data.current(sid(8554), 'price')
    current_tlt = data.current(sid(23921), 'price')
    current_gld = data.current(sid(26807), 'price')
    
    
    spy_hist = data.history(sid(8554),'close',10,'1d')
    tlt_hist = data.history(sid(23921),'close',10,'1d')
    gld_hist = data.history(sid(26807),'close',10,'1d')
    
    
    spy_change = (spy_hist.iloc[-5] - current_spy) / current_spy
    tlt_change = (tlt_hist.iloc[-5] - current_tlt) / current_tlt
    gld_change = (gld_hist.iloc[-5] - current_gld) / current_gld
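
    # Risk-budget cut: the *_change values compare the price five sessions ago
    # with the current price; if a move breaches either threshold below (each
    # scaled by 1 minus the asset's ERC weight), that asset's weight is halved.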
    
    
    risk=.03
    reward=.05
    
    if spy_change < -risk*(1-w1) or spy_change > reward*(1-w1):
        l = .5
    else:
        l = 1

    if tlt_change < -risk*(1-w2) or tlt_change > reward*(1-w2):
        m = .5
    else:
        m = 1

    if gld_change < -risk*(1-w3) or gld_change > reward*(1-w3):
        n = .5
    else:
        n = 1

    #################
    
    
    
    leverage = 1
    
    order_target_percent(sid(8554),w1*leverage*l)
    order_target_percent(sid(23921),w2*leverage*m)
    order_target_percent(sid(26807),w3*leverage*n)

    



def variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return np.dot(x,np.dot(Acov,x))

def fitnessERC(x, *args):
    N = x.shape[0]
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    Acov = np.matrix(Acov)
    x = np.matrix(x)
    y = np.array(x) * ( np.array( Acov * x.T ).T )
    var = x * Acov * x.T
    b = var/N
    fval = 0 
    y = np.squeeze(np.asarray(y))
    for i in range(0,N):
        xij  = (y[i]/var - b) * (y[i]/var - b)
        fval = fval + xij*xij
    return fval


          

Guys, great posts. So the question remains: should you spend more time on "algo development," or should you spend that time marketing your black-box hedge fund? I am only half kidding. Check my post called "The Simplest Algorithm"; it contains a similar "algo" ;)

@Georges - the new algo is good, but it does lose that 'holy grail' smooth upward incline ;)

My critique would be the down years in 2013 and 2015.
2013 was an anomaly year in many algo terms, and this algo fails to remain profitable in it.
2015 was a down year, and the algo again fails to remain profitable.

Although, after re-examining the original algo, it had losses in those years too; they were just much smaller.

Hey, thanks a lot guys! Your suggestions have led to some great improvements.