Keller/Butler - A Century of Generalized Momentum: Elastic Asset Allocation (EAA)

[edit: newer versions at the bottom, with dividend-adjusted prices from Yahoo]

Hey everyone,

While I'm waiting for QuantCon, I've implemented another recent asset allocation algorithm floating around the quant blogosphere. This one is taken from W. Keller and A. Butler's 2014 paper "A Century of Generalized Momentum - From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)".

As the name implies, their approach dynamically changes asset classes and allocations, and includes a crash-protection mechanism. An asset's allocation is based on a customizable scoring function they call "generalized momentum". As a guideline, the authors suggest two "golden" weight settings, offensive and defensive. Both templates are included in the source code, and you can switch between them by un-commenting the one you want.

Specifically, the generalized momentum relies on the following factors:
- Price-momentum (wR)
- Correlation (wC)
- Volatility (wV)
- Elasticity (wS)
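
To make the scoring concrete, here is a minimal sketch of the generalized-momentum score z = ( r^wR * (1-c)^wC / v^wV )^(wS+eps) for a single asset. The helper name and the example inputs are made up for illustration; they are not the paper's data:

```python
def generalized_momentum(r, c, v, wR=2.0, wC=1.0, wV=0.0, wS=1.0, eps=1e-6):
    """Score one asset from excess-return momentum r, correlation c to the
    equal-weight index, and volatility v.

    With the 'Golden Offensive' weights (wR=2, wC=1, wV=0, wS=1) this
    reduces to z ~ (1 - c) * r**2.
    """
    if r <= 0:  # negative excess momentum is truncated to a zero score
        return 0.0
    return ((r ** wR) * ((1 - c) ** wC) / (v ** wV)) ** (wS + eps)

# Example: strong momentum with low correlation outranks weak momentum
# with high correlation.
a = generalized_momentum(r=0.10, c=0.2, v=0.15)
b = generalized_momentum(r=0.05, c=0.8, v=0.15)
assert a > b
```

The tiny eps exponent keeps the score well-defined when wS is set to zero (the equal-weight variants), which is why the templates below carry it along.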

The approach has been thoroughly investigated and reproduced by several bloggers. Two outstanding posts are on QuantStrat TradeR and TrendXplorer; I highly recommend checking them out. My implementation here on Quantopian gets close, but still has a number of limitations, as noted in the source code.

EAA clearly has the performance edge over last week's "CSSAnalytics - A Simple Tactical Asset Allocation Portfolio with Percentile Channels", but that comes at the cost of increased volatility and drawdown (also note the shorter backtest window). Nevertheless, given EAA's generic framework, I expect there to be plenty of room for optimization and exploration with other asset mixes, such as industry sectors or countries. Let me know what you find.

Happy hacking. Cheers!

# Keller/Butler - Elastic Asset Allocation (EAA)
#
# Source:
#   Wouter J. Keller and Adam Butler
#   "From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)"
#   December 30, 2014 (v0.90), revised January 16, 2015 (v0.92)
#
# Ported by:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 6, 2015 (v0.5)
#
# Additional links:
#   http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979
#   http://indexswingtrader.blogspot.com/2015/01/a-primer-on-elastic-asset-allocation.html
#   https://quantstrattrader.wordpress.com/2015/01/03/for-a-new-year-a-new-asset-allocation-system-just-published-in-ssrn/
#
# Known issues:
#   No correction for dividend payout
#   Using SHY momentum instead of risk free rate
#   Negative excess returns truncated to 0
#

import pandas as pd
import math

def initialize(context):
    #
    # Configuration
    #

    # Assets (N=7) from Paper:
    #   SP500, EAFE, EEM, US Tech, Japan Topix, and two bonds: US Gov10y, and US HighYield. Cash = SHY
    #context.active = [sid(8554), sid(22972), sid(24705), sid(19658), sid(14520), sid(23870), sid(33655)]
    #context.cash = sid(23911)
    
    # Assets from Index Swing Trader Blog:
    #   $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT. Cash = $IEF
    context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    context.cash = sid(23870)
    
    context.bill = sid(23911)
    context.assets = set(context.active + [context.cash, context.bill])
    
    # Weights:
    #   [wR, wC, wV, wS, eps]
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    #context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6)
    #   Scoring Function Test:
    #context.score_weights = (1.0, 1.0, 1.0, 1.0, 0.0)

    context.leverage = 1.0
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)

    schedule_function(
        reallocate,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    schedule_function(
        rebalance,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )


def handle_data(context, data):
    pass

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)

def reallocate(context, data):
    h = history(280, '1d', 'price')
    hm = h.resample('M', how='last')[context.active]
    hb = h.resample('M', how='last')[context.bill]
    ret = hm.pct_change().ix[-12:]
    
    N = len(context.active)
    
    print "***************************************************************"
    
    #
    # Scoring
    #
    # excess return momentum
    mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \
           hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \
           hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \
           hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22
    mom[mom < 0] = 0
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pd.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    #
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV )^wS
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV)) ** (wS + eps)
    
    #
    # Crash Protection
    #
    
    num_neg = mom[mom <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    
    #
    # Security selection
    #
    # TopN = Min( 1 + roundup( sqrt( N ) ), rounddown( N / 2 ) )
    top_n = min(math.ceil(N ** 0.5) + 1, N / 2)
    
    #
    # Allocation
    #
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum()).dropna()
    w = pd.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w
37 responses

Thanks for sharing such clean code.
I think context.leverage is reserved as a variable; better to use a different name.

Alex, I am especially grateful, having asked you personally about this. And, again, think it's so generous of you to freely share such valuable work with others.

Bravo!

On first blush, the algo looks great. (Going to play with it now!)

Stephen

Alex,

I'm told the model works with Total Return instead of Excess Return, effectively setting the risk free rate to zero. You might consider that implementation:

# Scoring  
#  
# excess return momentum  
mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \  
       hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \  
       hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \  
       hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22  
mom[mom < 0] = 0

Stephen

Stephen,

You're right, we'd need a Total Return time series. AFAIK the Q neither provides TR data nor programmatic access to dividend payouts yet. When this becomes available we can back-fit this algorithm.

I think we need the total excess return over risk-free for the price momentum score. Using short-term bond momentum is kind of a hack. In the paper, section 4.6 provides a condensed version of the "EAA recipe":

ri is computed as the average total excess return over the last 1, 3, 6 and 12 months (not 9 months) where excess return is defined relative to the (13 week) TBill yield.
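
As a sketch of that recipe, the 1/3/6/12-month average excess return could look like the following. The helper name is hypothetical; it assumes a list of month-end closes (oldest first) and a flat annualized T-bill yield, and it uses a plain average rather than the /22 weighting the posted code applies:

```python
def avg_excess_return(closes, tbill_annual=0.0):
    """Average total return over 1, 3, 6 and 12 months, each in excess
    of a pro-rata T-bill return over the same horizon (simple sketch)."""
    months = [1, 3, 6, 12]
    last = closes[-1]
    excess = []
    for m in months:
        asset_ret = last / closes[-1 - m] - 1.0
        rf_ret = tbill_annual * m / 12.0  # crude pro-rata T-bill return
        excess.append(asset_ret - rf_ret)
    return sum(excess) / len(excess)

# 13 month-end closes: a steady 1%-per-month uptrend with zero T-bill yield
closes = [100.0 * 1.01 ** i for i in range(13)]
r = avg_excess_return(closes)
assert r > 0
```

With a real T-bill series, rf_ret would be replaced by the bill's actual return over each lookback, which is what the hm/hb arithmetic in the posted code approximates via SHY.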

Actually, it should be possible to fetch and integrate dividend data from Yahoo. This would be a great improvement over the existing code base. Maybe someone feels adventurous. :-)

What about using SHV or SHY dividend-adjusted prices from Yahoo as an approximation?

VY

I think they're both good options. SHV, since its inception in 2007, has certainly lived up to its safe haven status. Its price has been very stable and SHV should be minimally vulnerable in a rising rate environment. Either one makes good sense.

This is interesting--the strategies with the best equity curves are the ones from my blog =P. Thanks for the advertising once again ^_^

-Ilya (author of QuantStrat TradeR)

VY, I'm sure SHY or SHV would work fine. I've looked at the Fetcher docs, but didn't have the time to dig into them yet. If you have experience getting Yahoo data with the fetcher, I'd appreciate your input.

Alex and VY,

I'm more of a database programmer ...a mere fledgling wanna-be in python. But here's code I wrote to bring over yahoo data for a database when I wanted to snag a LAST price for a ticker I wrote into a field:

PHP_Execute( "

$getfield = 'GetField ( \"Asset Components::Ticker\" )'; $ticker = fm_evaluate($getfield);

$link = \"http://download.finance.yahoo.com/d/quotes.csv?s=\" . $ticker . \"&f=l1gh&e=.csv\";

$set = ini_set('memory_limit', -1);
$curl = curl_init();
curl_setopt ($curl, CURLOPT_URL, $link);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec ($curl);
curl_close ($curl);


echo $result; " )

You guys might look at that and go, "Ahhh."

Hope it helps.

Stephen

As many of you know, it’s the ability to adhere to a strategy that counts …in the long run. And what makes it easier to adhere to a strategy? Minimizing drawdowns.

Look at the relative performance of EAA from 2008 to early 2009 and the latter half of 2011. I think this is where the EAA recipe really shines.

I believe this may help:
https://www.quantopian.com/posts/yahoo-and-fetch-comparing-q-and-y-for-12-mth-rolling-return

and a strategy based on either yahoo or quantpian data: https://www.quantopian.com/posts/yahoo-slash-quantopian-using-y-adj-as-signal-for-swtich

Note that the fetcher only allows five fetches, so you may need to write an external fetch consolidator.

Florent,

That’s a really useful implementation and a great article on paired-switching. Now, if we could snag implied volatility data out of yahoo, we might exploit iv’s predictive value. (…lots of articles supporting this in SSRN eLibrary under the search terms “implied volatility predictive”)

Your algo also has a handsome performance plot. Do you think this approach could be expanded to three risky assets, drawing from a small universe and rotating around a volatility forecast?

Stephen

Florent, nice! I'll incorporate your fetcher code when I have some more time. It might mean cutting down the number of assets to 5 for now, but let's see.

Alex,
Hope you won't mind the intrusion here, but I read through your script and a few of the papers you posted on EAA. I must say, what you have here is rather remarkable.
May I suggest introducing slippage into the model? It would more accurately reflect past performance. I'm working on doing that myself (but I'm rather new to Python).

Corey - Quantopian includes a volume-driven slippage model by default unless instructed otherwise. See https://www.quantopian.com/help#ide-slippage

This is a great defensible, robust algorithm. I really like it. The code is clean and very well documented. Thanks very much for spending the time to do so.

Since Quantopian offers us minute-resolution data, I tried using the same strategy and rebalancing the assets every 30 minutes. The preliminary result is not at all satisfactory; perhaps the interplay between momentum, volatility and correlation is a whole different ball game for intraday data.


Edit 14MAR2015: the constant negative return is caused by slippage; once slippage and commission are neglected, backtest performance improves.

# Keller/Butler - Elastic Asset Allocation (EAA)
#
# Source:
#   Wouter J. Keller and Adam Butler
#   "From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)"
#   December 30, 2014 (v0.90), revised January 16, 2015 (v0.92)
#
# Ported by:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 6, 2015 (v0.5)
#
# Additional links:
#   http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979
#   http://indexswingtrader.blogspot.com/2015/01/a-primer-on-elastic-asset-allocation.html
#   https://quantstrattrader.wordpress.com/2015/01/03/for-a-new-year-a-new-asset-allocation-system-just-published-in-ssrn/
#
# Known issues:
#   No correction for dividend payout
#   Using SHY momentum instead of risk free rate
#   Negative excess returns truncated to 0
#

import pandas as pd
import math

# trade after 10:29am
TIME_THRES = [10,29]
# rebalance every 30 minutes
TRADE_FREQ = 30
# get 60 bars of minute data
DATA_SIZE = 60
DATA_RESOLUTION = '1m'
# resample frequency
RESAMPLE_FREQ = '5min'
RESAMPLE_SIZE = 13
lb = [-4,-6,-12,22]

def initialize(context):
    set_commission(commission.PerTrade(cost=0.0))
    #
    # Configuration
    #

    # Assets (N=7) from Paper:
    #   SP500, EAFE, EEM, US Tech, Japan Topix, and two bonds: US Gov10y, and US HighYield. Cash = SHY
    #context.active = [sid(8554), sid(22972), sid(24705), sid(19658), sid(14520), sid(23870), sid(33655)]
    #context.cash = sid(23911)
    
    # Assets from Index Swing Trader Blog:
    #   $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT. Cash = $IEF
    context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    context.cash = sid(23870)
    
    context.bill = sid(23911)
    context.assets = set(context.active + [context.cash, context.bill])
    
    # Weights:
    #   [wR, wC, wV, wS, eps]
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    #context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6)
    #   Scoring Function Test:
    #context.score_weights = (1.0, 1.0, 1.0, 1.0, 0.0)

    context.leverage = 1.0
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)

    # schedule_function(
    #     reallocate,
    #     date_rules.month_end(days_offset=0),
    #     time_rules.market_close(minutes=5)
    # )
    
    # schedule_function(
    #     rebalance,
    #     date_rules.month_end(days_offset=0),
    #     time_rules.market_close(minutes=5)
    # )


def handle_data(context, data):
    
    # trade after TIME_THRES
    data_datetime = data[context.active[0]].datetime
    if data_datetime.time().hour <= TIME_THRES[0] and data_datetime.time().minute <= TIME_THRES[1]:        
        return
    
    # rebalance every TRADE_FREQ minutes
    if data_datetime.time().minute % TRADE_FREQ != 0:
        return
    
    # ensure there is enough, proper data
    h = history(DATA_SIZE, DATA_RESOLUTION, 'price')    
    if h[context.active[0]].size < DATA_SIZE:
        return  
    # continue only when data within 1 day
    if len(h.resample('D', how='last')[context.active[0]].values) != 1:
        return
    if h.resample(RESAMPLE_FREQ, how='last')[context.active[0]].size != RESAMPLE_SIZE:
        return
    
    reallocate(context,data)
    rebalance(context,data)

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)

def reallocate(context, data):
    # get data (h size checked at handle_data)
    h = history(DATA_SIZE, DATA_RESOLUTION, 'price')
    hm = h.resample(RESAMPLE_FREQ, how='last')[context.active]
    hb = h.resample(RESAMPLE_FREQ, how='last')[context.bill]
    ret = hm.pct_change().ix[lb[2]:]
    
    N = len(context.active)
    
    print "***************************************************************"
    
    #
    # Scoring
    #
    # excess return momentum 
    mom = (hm.ix[-1] / hm.ix[lb[0]]  - hb.ix[-1] / hb.ix[lb[0]] + \
           hm.ix[-1] / hm.ix[lb[1]]  - hb.ix[-1] / hb.ix[lb[1]] + \
           hm.ix[-1] / hm.ix[lb[2]]  - hb.ix[-1] / hb.ix[lb[2]] ) / lb[3]
    mom[mom < 0] = 0
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pd.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    #
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV )^wS
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV)) ** (wS + eps)
    
    #
    # Crash Protection
    #
    
    num_neg = mom[mom <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    
    #
    # Security selection
    #
    # TopN = Min( 1 + roundup( sqrt( N ) ), rounddown( N / 2 ) )
    top_n = min(math.ceil(N ** 0.5) + 1, N / 2)
    
    #
    # Allocation
    #
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum()).dropna()
    w = pd.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w

@Yoyo T., you'll find that commissions are eating into that strategy. Trade too often without much profit and you'll just piddle your account away. The original poster's monthly rebalance periodicity reduces these costs to a fraction of yours.
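
A back-of-the-envelope illustration of that point (all numbers made up, helper hypothetical): at the same per-trade cost, moving from monthly to 30-minute rebalancing multiplies the number of rebalances by a few hundred, so even modest per-rebalance turnover compounds into a large annual drag:

```python
def annual_cost_drag(turnover_per_rebalance, cost_bps, rebalances_per_year):
    """Rough annual cost drag: turnover x cost per trade x frequency."""
    return turnover_per_rebalance * cost_bps / 1e4 * rebalances_per_year

# Monthly rebalance: assume 50% turnover each time, 10 bps round-trip cost
monthly = annual_cost_drag(0.5, 10, 12)
# Every 30 minutes: even at only 10% turnover, ~13 rebalances x 252 days
every_30min = annual_cost_drag(0.1, 10, 13 * 252)
assert every_30min > monthly
```

The exact turnover figures depend on the strategy, but the frequency multiplier alone explains why the intraday variant bleeds while the monthly one does not.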

The strat runs fine on daily data too, thanks to the EOD rebalance schedule. Try starting the strategy in early March 2009 -- you might get conflicting results.

# Ported by:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 6, 2015 (v0.5)
#

import pandas
import math

def initialize(context):
    context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    context.cash = sid(23870)
    
    context.bill = sid(23911)
    context.assets = set(context.active + [context.cash, context.bill])
    context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6)
    context.leverage = 1.0
    context.alloc = pandas.Series([0.0] * len(context.assets), index=context.assets)
    schedule_function( reallocate, date_rules.month_end(days_offset=0),
                      time_rules.market_close(minutes=5))
    schedule_function(rebalance,   date_rules.month_end(days_offset=0),
                      time_rules.market_close(minutes=5))

def handle_data(context, data):
    pass

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)

def reallocate(context, data):
    h = history(280, '1d', 'price')
    hm = h.resample('M', how='last')[context.active]
    hb = h.resample('M', how='last')[context.bill]
    ret = hm.pct_change().ix[-12:]
    
    N = len(context.active)

    # Scoring, excess return momentum
    mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \
           hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \
           hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \
           hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22
    mom[mom < 0] = 0
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pandas.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    # Generalized Momentum
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV )^wS
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV)) ** (wS + eps)
    
    # Crash Protection
    num_neg = mom[mom <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    
    # Security selection
    # TopN = Min( 1 + roundup( sqrt( N ) ), rounddown( N / 2 ) )
    top_n = min(math.ceil(N ** 0.5) + 1, N / 2)
    
    # Allocation
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum()).dropna()
    w = pandas.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w

@Market Tech. Thanks for your insight! I thought about commission too and had set it to zero in the code I posted earlier. I was hoping to get results at least as good as yours; however, the return was negative. I think the framework of the algorithm is really nice, I just have to figure out an appropriate definition for z.

I spent some time experimenting with Florent's fetcher code and found a way to get up-to-date adjusted data from Yahoo into the regular history data-frame format. Using adjusted prices makes quite a difference depending on the instruments used, e.g. with REITs. You can switch between Quantopian data and Yahoo data with the "use_adjusted" flag. If you use adjusted data, the fetcher limits you to at most 5 different instruments.

(I hope the traffic from repeated back-testing doesn't get The Q blocked from Yahoo servers)

# Keller/Butler - Elastic Asset Allocation (EAA)
#
# Source:
#   Wouter J. Keller and Adam Butler
#   "From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)"
#   December 30, 2014 (v0.90), revised January 16, 2015 (v0.92)
#
# Implementation:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 10, 2015 (v0.6)
#
# Additional links:
#   http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979
#   http://indexswingtrader.blogspot.com/2015/01/a-primer-on-elastic-asset-allocation.html
#   https://quantstrattrader.wordpress.com/2015/01/03/for-a-new-year-a-new-asset-allocation-system-just-published-in-ssrn/
#
# Known issues:
#   Dividend adjustment requires len(assets) <= 5 (Quantopian Fetcher limit)
#   CAUTION: (slight) look-ahead bias with adjusted EOD price
#
# Version Log:
#   v0.6 - Dividend adjustment via Yahoo data (Thanks to F. Chandelier)
#          Truncating z-score at 0 rather than momentum
#          Add leverage logging
#   v0.5 - initial open-source release

import pandas as pd
import datetime as dt
import math

def initialize(context):
    #
    # Configuration
    #

    # Assets (N=7) from Paper (unadjusted):
    #   SP500, EAFE, EEM, US Tech, Japan Topix, and two bonds: US Gov10y, and US HighYield. Cash = SHY, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(8554), sid(22972), sid(24705), sid(19658), sid(14520), sid(23870), sid(33655)]
    #context.cash = sid(23911)
    #context.bill = sid(23911)
    
    # Assets from Index Swing Trader Blog (unadjusted):
    #   $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT. Cash = $IEF, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    #context.cash = sid(23870)
    #context.bill = sid(23911)
    
    # Custom assets (N=5) (adjusted yahoo data):
    #   VTI, EFA, ICF, TLT, IEF. Cash = IEF, Bill = IEF (!)
    context.use_adjusted = True
    context.active = [sid(22739), sid(22972), sid(22446), sid(23921), sid(23870)]
    context.cash = sid(23870)
    context.bill = sid(23870)
    
    context.leverage = 1.0
    
    # Weights:
    #   [wR, wC, wV, wS, eps]
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    #context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6)
    #   Scoring Function Test:
    #context.score_weights = (1.0, 1.0, 1.0, 1.0, 0.0)

    context.assets = set(context.active + [context.cash, context.bill])
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)

    schedule_function(
        reallocate,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    schedule_function(
        rebalance,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    #
    # Yahoo fetcher
    # (inspired by F. Chandelier)
    # https://www.quantopian.com/posts/yahoo-and-fetch-comparing-q-and-y-for-12-mth-rolling-return
    #
    if context.use_adjusted:
        start_year = 2002
        end_year = dt.datetime.today().year + 1
        url_template = "http://real-chart.finance.yahoo.com/table.csv?s=%s&a=0&b=1&c=%d&d=0&e=1&f=%d&g=d&ignore=.csv"

        for sym in context.active:
            url = url_template % (sym.symbol, start_year, end_year)
            print "Fetching %s adjusted prices: %s" % (sym.symbol, url)

            fetch_csv(
                url,
                date_column='Date',
                date_format='%Y-%m-%d',
                symbol=sym,
                usecols=['Adj Close'],
                pre_func=fetch_pre,
                post_func=fetch_post
            )


def handle_data(context, data):
    record(leverage = context.portfolio.positions_value / context.portfolio.portfolio_value)

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)

def reallocate(context, data):
    h = make_history(context, data).ix[-280:]
    hm = h.resample('M', how='last')[context.active]
    hb = h.resample('M', how='last')[context.bill]
    ret = hm.pct_change().ix[-12:]
    
    N = len(context.active)
    
    non_cash_assets = list(context.active)
    non_cash_assets.remove(context.cash)
    
    print "***************************************************************"
    
    #
    # Scoring
    #
    # excess return momentum
    mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \
           hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \
           hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \
           hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pd.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    #
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV )^wS
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV)) ** (wS + eps)
    z[mom < 0.] = 0.0
    
    #
    # Crash Protection
    #
    
    num_neg = z[z <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    
    #
    # Security selection
    #
    # TopN = Min( 1 + roundup( sqrt( N ) ), rounddown( N / 2 ) )
    top_n = min(math.ceil(N ** 0.5) + 1, N / 2)
    
    #
    # Allocation
    #
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum()).dropna()
    w = pd.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w
    
#
# Quantopian/Yahoo history switch
#
def make_history(context, data):
    if context.use_adjusted:
        df = pd.DataFrame(index=data[context.active[0]]['aclose_hist'].index, columns=context.active)
        for s in context.active:
            df[s] = data[s]['aclose_hist']
        return df
    else:
        return history(300, '1d', 'price')

def fetch_pre(df):
    df = df.rename(columns={'Adj Close': 'aclose'})
    df['aclose_hist'] = pd.Series([[]] * len(df.index), index=df.index)
    return df
    
def fetch_post(df):
    #
    # Workaround for history() not providing access to external fields
    # Populate data[] with past 300 adjusted close prices
    #
    for i in xrange(0, len(df.index)):
        df['aclose_hist'].ix[-i-1] = df['aclose'][-i-300:][:300]
    return df

Alex, They say, “The enemy of good is better.” Well, you just disproved that aphorism.

Implementing Florent’s lead towards dividend and split adjusted prices, v0.6 shows a substantial reduction in risk without sacrificing returns: Sharpe improves by 34%, max drawdown decreases 33%, and portfolio level volatility comes in at a very respectable 10%. This version is also a truer estimate for a real-world application. Nice work Alex and Florent!

Great post Alex.
I've added a new term to the momentum score (z) which tries to capture intraday volatility (iv),[1] in what I will call the Golden Lama EAA:
w ~ z = ( r^w_r * (1-c)^w_c ) / ( v^w_v * iv^w_iv )
where iv is defined as:
iv = mean(daily_high[-10:]-daily_low[-10:])/mean(daily_high[-280:]-daily_low[-280:])
The return improved a bit as compared to the Golden Offensive EAA, while volatility and max drawdown remained relatively unchanged.
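
As a minimal sketch of that iv term (hypothetical helper, assuming equal-length lists of daily highs and lows with the most recent bar last):

```python
def intraday_vol_ratio(highs, lows, short=10, long=280):
    """iv = mean daily range over the last `short` days divided by the
    mean daily range over the last `long` days."""
    ranges = [h - l for h, l in zip(highs, lows)]
    recent = sum(ranges[-short:]) / float(short)
    base = sum(ranges[-long:]) / float(min(long, len(ranges)))
    return recent / base

# A quiet recent tape against a noisier history gives iv below 1,
# which boosts the score of the asset when iv sits in the denominator.
highs = [101.0] * 270 + [100.3] * 10
lows = [99.0] * 270 + [100.0] * 10
iv = intraday_vol_ratio(highs, lows)
assert iv < 1.0
```

Dividing z by iv^w_iv therefore penalizes assets whose recent intraday ranges have widened relative to their long-run norm.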

-- Golden Lama --
Total Returns: 217.2%
Sharpe: 1.71
Volatility: 0.10
Max Drawdown: 11.9%

-- Golden Offensive EAA --
Total Returns: 197.5%
Sharpe: 1.65
Volatility: 0.10
Max Drawdown: 11.7%

[1] https://cssanalytics.wordpress.com/2015/03/13/using-a-self-similarity-metric-with-intraday-data-to-define-market-regimes/

Clone Algorithm (181)
# Keller/Butler - Elastic Asset Allocation (EAA)
#
# Source:
#   Wouter J. Keller and Adam Butler
#   "From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)"
#   December 30, 2014 (v0.90), revised January 16, 2015 (v0.92)
#
# Implementation:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 10, 2015 (v0.6)
# Modified:
#   Yoyoteng (https://www.quantopian.com/users/54b8a7b4f44758bf5c000aea)
#   March 13, 2015
#
# Additional links:
#   http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979
#   http://indexswingtrader.blogspot.com/2015/01/a-primer-on-elastic-asset-allocation.html
#   https://quantstrattrader.wordpress.com/2015/01/03/for-a-new-year-a-new-asset-allocation-system-just-published-in-ssrn/
#
# Known issues:
#   Dividend adjustment requires len(assets) <= 5 (Quantopian Fetcher limit)
#   CAUTION: (slight) look-ahead bias with adjusted EOD price
#
# Version Log:
#   v0.7 - Add additional intraday-volatility term to the z-score
#   v0.6 - Dividend adjustment via Yahoo data (Thanks to F. Chandelier)
#          Truncating z-score at 0 rather than momentum
#          Add leverage logging
#   v0.5 - initial open-source release

import pandas as pd
import datetime as dt
import math

def initialize(context):
    #
    # Configuration
    #

    # Assets (N=7) from Paper (unadjusted):
    #   SP500, EAFE, EEM, US Tech, Japan Topix, and two bonds: US Gov10y, and US HighYield. Cash = SHY, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(8554), sid(22972), sid(24705), sid(19658), sid(14520), sid(23870), sid(33655)]
    #context.cash = sid(23911)
    #context.bill = sid(23911)
    
    # Assets from Index Swing Trader Blog (unadjusted):
    #   $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT. Cash = $IEF, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    #context.cash = sid(23870)
    #context.bill = sid(23911)
    
    # Custom assets (N=5) (adjusted yahoo data):
    #   VTI, EFA, ICF, TLT, IEF. Cash = IEF, Bill = IEF (!)
    context.use_adjusted = True
    context.active = [sid(22739), sid(22972), sid(22446), sid(23921), sid(23870)]
    context.cash = sid(23870)
    context.bill = sid(23870)
    
    context.leverage = 1.0
    
    # Weights:
    #   [wR, wC, wV, wS, eps, wIV]
    #   wi ~ zi = ( ri^wR * (1-ci)^wC / (vi^wV * ivi^wIV) )^(wS+eps)
    
    #   Golden Lama EAA :
    context.score_weights = (2.0, 1.0, 0.25, 1.0, 1e-6, 4.0)
    #   IntradayVolatility EAA : wi ~ zi = (1-ci) / (intraday_vol)
    #context.score_weights = (0.0, 0.0, 0.0, 1.0, 1e-6, 1.0) 
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    #context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6, 0.0)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    #context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6, 0.0)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6, 0.0)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6, 0.0)
    #   Scoring Function Test:
    #context.score_weights = (1.0, 1.0, 1.0, 1.0, 0.0, 1.0)

    context.assets = set(context.active + [context.cash, context.bill])
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)

    schedule_function(
        reallocate,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    schedule_function(
        rebalance,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    #
    # Yahoo fetcher
    # (inspired by F. Chandelier)
    # https://www.quantopian.com/posts/yahoo-and-fetch-comparing-q-and-y-for-12-mth-rolling-return
    #
    if context.use_adjusted:
        start_year = 2002
        end_year = dt.datetime.today().year + 1
        url_template = "http://real-chart.finance.yahoo.com/table.csv?s=%s&a=0&b=1&c=%d&d=0&e=1&f=%d&g=d&ignore=.csv"

        for sym in context.active:
            url = url_template % (sym.symbol, start_year, end_year)
            print "Fetching %s adjusted prices: %s" % (sym.symbol, url)

            fetch_csv(
                url,
                date_column='Date',
                date_format='%Y-%m-%d',
                symbol=sym,
                usecols=['Adj Close'],
                pre_func=fetch_pre,
                post_func=fetch_post
            )


def handle_data(context, data):
    record(leverage = context.portfolio.positions_value / context.portfolio.portfolio_value)

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)

def reallocate(context, data):
    h = make_history(context, data).ix[-280:]
    h_low = history(300, '1d', 'low').ix[-280:]
    h_high = history(300, '1d', 'high').ix[-280:]
    
    hm = h.resample('M', how='last')[context.active]
    hb = h.resample('M', how='last')[context.bill]
    ret = hm.pct_change().ix[-12:]
    
    N = len(context.active)
    
    non_cash_assets = list(context.active)
    non_cash_assets.remove(context.cash)
    
    print "***************************************************************"
    
    #
    # Scoring
    #
    # excess return momentum
    mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \
           hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \
           hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \
           hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pd.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    # normalized intraday volatility. 
    # inspired by Cssanalytics, https://cssanalytics.wordpress.com/2015/03/13/using-a-self-similarity-metric-with-intraday-data-to-define-market-regimes/
    ivol = (h_high[context.active].ix[-10:]-h_low[context.active].ix[-10:]).mean()/(h_high[context.active]-h_low[context.active]).mean()
    
    #
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV / ivoli^wIV )^wS
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    wIV = context.score_weights[5]
       
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV) / (ivol ** wIV)) ** (wS + eps)
    z[mom < 0.] = 0.0
    
    #
    # Crash Protection
    #
    
    num_neg = z[z <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    
    #
    # Security selection
    #
    # TopN = Min( 1 + roundup( sqrt(N) ), rounddown( N/2 ) )
    top_n = int(min(math.ceil(N ** 0.5) + 1, N / 2))
    
    #
    # Allocation
    #
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum(axis=1)).dropna()
    w = pd.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w
    
#
# Quantopian/Yahoo history switch
#
def make_history(context, data):
    if context.use_adjusted:
        df = pd.DataFrame(index=data[context.active[0]]['aclose_hist'].index, columns=context.active)
        for s in context.active:
            df[s] = data[s]['aclose_hist']
        return df
    else:
        return history(300, '1d', 'price')

def fetch_pre(df):
    df = df.rename(columns={'Adj Close': 'aclose'})
    df['aclose_hist'] = pd.Series([[]] * len(df.index), index=df.index)
    return df
    
def fetch_post(df):
    #
    # Workaround for history() not providing access to external fields
    # Populate data[] with past 300 adjusted close prices
    #
    for i in xrange(0, len(df.index)):
        df['aclose_hist'].ix[-i-1] = df['aclose'][-i-300:][:300]
    return df

Hey Ted, nice mod. I looked at replacing daily "vol" with intraday data as well, with mixed results. Your "Golden Lama" score seems to be quite a bit more stable than my initial attempts. Thanks!

Alex, the cash proxy is IEF in this implementation. Last night, ahead of the Fed's announcement, I noticed that flows into IEI were up 535%. Recently, it looks like institutions use SHY and IEI as cash equivalents when moving out of equities.

The algorithm seems to break after the platform upgrade; when backtesting it responds with the following:

Something went wrong. Sorry for the inconvenience. Try using the built-in debugger to analyze your code. If you would like help, send us an email.
ValueError: No axis named 1 for object type
There was a runtime error on line 143.

Did anyone implement the improvements described here:

http://indexswingtrader.blogspot.co.il/2015/01/a-primer-on-elastic-asset-allocation.html

It's the Hedged Offensive setting for the Keller algorithm, which provides higher returns and, more importantly, has no losing years; the biggest monthly drawdown is -6.5% versus ~-11% for the Offensive EAA.

http://a.disquscdn.com/uploads/mediaembed/images/1618/9084/original.jpg?w=800&h

@Novice TAI
I ran into the same problem, as well as with all iterations of this program when forward testing. I've been working on it for the last few days, to no avail.

@Corey, Novice TAI: Looks like the upgrade broke my sketchy workaround (and one of my contest algos). I'll look into this when I have some time for TheQ in the next couple of days.

@Joe Lee: When you look at the source code, you'll find the "Weights" part of the configuration section. "Equal Weighted Hedged" is the scoring setting you're looking for.

A plot of the cumulative return of the ETFs used in the default strategy, along with SPY.
This demonstrates how well the strategy performs, i.e. minimal drawdown with a final return (198% or 217%) close to that of the best-performing ETF (250%). Note the difference between the cumulative return for SPY shown in the image (~200%) and in Quantopian (~130%), which is likely because mine is a simple cumulative return while Quantopian presumably accounts for slippage/commission...?
image here

%matplotlib inline
import matplotlib.pyplot as plt  
import numpy as np  
import pandas as pd  
from zipline.utils.factory import load_bars_from_yahoo

start = '2003-12-31'  
end = '2014-12-31'  
ticks = ('MDY','IEV','EEM','QQQ','XLV','IEF','TLT','SPY')  
data = load_bars_from_yahoo(  
    stocks=ticks,  
    start=pd.Timestamp(start, tz='utc'),  
    end=pd.Timestamp(end, tz='utc'),#pd.Timestamp.utcnow(),  
    indexes={})  
prices = data.minor_xs('price')

returns = prices.pct_change()  
compound_returns = (1 + returns).cumprod()  
compound_returns.plot(figsize=(14, 6))

Changed the code so that the non-fetcher version works again.

Clone Algorithm (332)
# Keller/Butler - Elastic Asset Allocation (EAA)
#
# Source:
#   Wouter J. Keller and Adam Butler
#   "From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)"
#   December 30, 2014 (v0.90), revised January 16, 2015 (v0.92)
#
# Implementation:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 10, 2015 (v0.6)
# Modified:
#   Yoyoteng (https://www.quantopian.com/users/54b8a7b4f44758bf5c000aea)
#   March 13, 2015
#
# Additional links:
#   http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979
#   http://indexswingtrader.blogspot.com/2015/01/a-primer-on-elastic-asset-allocation.html
#   https://quantstrattrader.wordpress.com/2015/01/03/for-a-new-year-a-new-asset-allocation-system-just-published-in-ssrn/
#
# Known issues:
#   Dividend adjustment requires len(assets) <= 5 (Quantopian Fetcher limit)
#   CAUTION: (slight) look-ahead bias with adjusted EOD price
#
# Version Log:
#   v0.7 - Add additional intraday-volatility term to the z-score
#   v0.6 - Dividend adjustment via Yahoo data (Thanks to F. Chandelier)
#          Truncating z-score at 0 rather than momentum
#          Add leverage logging
#   v0.5 - initial open-source release

import pandas as pd
import datetime as dt
import math

def initialize(context):
    #
    # Configuration
    #

    # Assets (N=7) from Paper (unadjusted):
    #   SP500, EAFE, EEM, US Tech, Japan Topix, and two bonds: US Gov10y, and US HighYield. Cash = SHY, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(8554), sid(22972), sid(24705), sid(19658), sid(14520), sid(23870), sid(33655)]
    #context.cash = sid(23911)
    #context.bill = sid(23911)
    
    # Assets from Index Swing Trader Blog (unadjusted):
    #   $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT. Cash = $IEF, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    #context.cash = sid(23870)
    #context.bill = sid(23911)
    
    # Custom assets (N=5) (adjusted yahoo data):
    #   VTI, EFA, ICF, TLT, IEF. Cash = IEF, Bill = IEF (!)
    context.use_adjusted = False
    context.active = [sid(22739), sid(22972), sid(22446), sid(23921), sid(23870)]
    context.cash = sid(23870)
    context.bill = sid(23870)
    
    context.leverage = 1.0
    
    # Weights:
    #   [wR, wC, wV, wS, eps, wIV]
    #   wi ~ zi = ( ri^wR * (1-ci)^wC / (vi^wV * ivi^wIV) )^(wS+eps)
    
    #   Golden Lama EAA :
    context.score_weights = (2.0, 1.0, 0.25, 1.0, 1e-6, 4.0)
    #   IntradayVolatility EAA : wi ~ zi = (1-ci) / (intraday_vol)
    #context.score_weights = (0.0, 0.0, 0.0, 1.0, 1e-6, 1.0) 
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    #context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6, 0.0)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    #context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6, 0.0)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6, 0.0)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6, 0.0)
    #   Scoring Function Test:
    #context.score_weights = (1.0, 1.0, 1.0, 1.0, 0.0, 1.0)

    context.assets = set(context.active + [context.cash, context.bill])
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)

    schedule_function(
        reallocate,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    schedule_function(
        rebalance,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    #
    # Yahoo fetcher
    # (inspired by F. Chandelier)
    # https://www.quantopian.com/posts/yahoo-and-fetch-comparing-q-and-y-for-12-mth-rolling-return
    #
    if context.use_adjusted:
        start_year = 2002
        end_year = dt.datetime.today().year + 1
        url_template = "http://real-chart.finance.yahoo.com/table.csv?s=%s&a=0&b=1&c=%d&d=0&e=1&f=%d&g=d&ignore=.csv"

        for sym in context.active:
            url = url_template % (sym.symbol, start_year, end_year)
            print "Fetching %s adjusted prices: %s" % (sym.symbol, url)

            fetch_csv(
                url,
                date_column='Date',
                date_format='%Y-%m-%d',
                symbol=sym,
                usecols=['Adj Close'],
                pre_func=fetch_pre,
                post_func=fetch_post
            )


def handle_data(context, data):
    record(leverage = context.portfolio.positions_value / context.portfolio.portfolio_value)

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)

def reallocate(context, data):
    h = make_history(context, data).ix[-280:]
    h_low = history(300, '1d', 'low').ix[-280:]
    h_high = history(300, '1d', 'high').ix[-280:]
    
    hm = h.resample('M', how='last')[context.active]
    hb = h.resample('M', how='last')[context.bill]
    ret = hm.pct_change().ix[-12:]
    
    N = len(context.active)
    
    non_cash_assets = list(context.active)
    non_cash_assets.remove(context.cash)
    
    print "***************************************************************"
    
    #
    # Scoring
    #
    # excess return momentum
    mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \
           hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \
           hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \
           hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pd.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    # normalized intraday volatility. 
    # inspired by Cssanalytics, https://cssanalytics.wordpress.com/2015/03/13/using-a-self-similarity-metric-with-intraday-data-to-define-market-regimes/
    ivol = (h_high[context.active].ix[-10:]-h_low[context.active].ix[-10:]).mean()/(h_high[context.active]-h_low[context.active]).mean()
    
    #
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV / ivoli^wIV )^wS
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    wIV = context.score_weights[5]
       
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV) / (ivol ** wIV)) ** (wS + eps)
    z[mom < 0.] = 0.0
    
    #
    # Crash Protection
    #
    
    num_neg = z[z <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    
    #
    # Security selection
    #
    # TopN = Min( 1 + roundup( sqrt(N) ), rounddown( N/2 ) )
    top_n = int(min(math.ceil(N ** 0.5) + 1, N / 2))
    
    #
    # Allocation
    #
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum()).dropna()
    w = pd.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w
    
#
# Quantopian/Yahoo history switch
#
def make_history(context, data):
    if context.use_adjusted:
        df = pd.DataFrame(index=data[context.active[0]]['aclose_hist'].index, columns=context.active)
        for s in context.active:
            df[s] = data[s]['aclose_hist']
        return df
    else:
        return history(300, '1d', 'price')

def fetch_pre(df):
    df = df.rename(columns={'Adj Close': 'aclose'})
    df['aclose_hist'] = pd.Series([[]] * len(df.index), index=df.index)
    return df
    
def fetch_post(df):
    #
    # Workaround for history() not providing access to external fields
    # Populate data[] with past 300 adjusted close prices
    #
    for i in xrange(0, len(df.index)):
        df['aclose_hist'].ix[-i-1] = df['aclose'][-i-300:][:300]
    return df

Hey Peter, thanks for the quick fix.

Looks like there might be a pure pandas solution for the fetcher problem with multi-indexing. If you're feeling adventurous: http://pandas.pydata.org/pandas-docs/stable/advanced.html#advanced

I wrote a small app so you can read multiple tickers via one fetcher call, so we're not limited to one fetcher per ticker.
The output looks something like this:

Date    open_vti    high_vti    low_vti close_vti   volume_vti  adj_close_vti   open_efa    high_efa    low_efa close_efa   volume_efa  adj_close_efa   open_icf    high_icf    low_icf close_icf   volume_icf  adj_close_icf   open_tlt    high_tlt    low_tlt close_tlt   volume_tlt  adj_close_tlt   open_ief    high_ief    low_ief close_ief   volume_ief  adj_close_ief  
2/01/02 106 106.34  104.61  106.25  1627600 41.55   119.85  119.96  118.96  119.96  1215900 28.74   84.5    84.85   83.09   84.85   35200   24.44  
3/01/02 106.46  107.39  106.35  107.39  354400  42  120.1   121.04  120 121.04  971400  29  85.23   85.3    85.07   85.22   35400   24.55  
4/01/02 107.8   108.41  107.29  108.1   487200  42.28   122 122.07  121 121.84  377400  29.19   85.3    85.32   84.53   84.76   9600    24.41  

We only have to extract adj_close_SID in the algo

Don't call it every minute though ;)

http://peterbakker.pythonanywhere.com/?stocks=VTI,EFA,ICF,TLT,IEF
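On the consuming side, extracting one ticker's adjusted close from the combined CSV could look like this (a hypothetical `pre_func_for` factory; the `adj_close_<ticker>` column names follow the sample output above):

```python
import pandas as pd

def pre_func_for(ticker):
    # Hypothetical fetch_csv pre_func factory: keep only the
    # adj_close_<ticker> column and rename it to 'aclose', so the
    # rest of the algo can stay ticker-agnostic.
    col = 'adj_close_%s' % ticker.lower()
    def pre(df):
        return df[[col]].rename(columns={col: 'aclose'})
    return pre

# Illustration with an in-memory frame shaped like the app's output:
df = pd.DataFrame({'adj_close_vti': [41.55, 42.0],
                   'adj_close_efa': [28.74, 29.0]})
out = pre_func_for('VTI')(df)
```

The returned `pre` closure would then be passed as `pre_func=pre_func_for(sym.symbol)` in a single `fetch_csv` call.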

The Flask code to generate the CSV:

from flask import Flask, make_response,request  
import pandas as pd  
import pandas.io.data as web  
import datetime as dt  
import io

app = Flask(__name__)

# Route '/' and '/index' to `index`  
@app.route('/')  
@app.route('/index')  
def index():  
    #output = request.args.get('stocks')  
    # Get data from fields  
    stockstr = request.args.get('stocks')  
    stocklist = stockstr.split(",")  
    StockData = []  
    startdate = dt.datetime(2002,1,1)  
    enddate   = dt.datetime.utcnow()  
    DataCollection2 = None  
    i=0  
    for stock in stocklist:  
        StockDF =  web.DataReader(stock, "yahoo",startdate , enddate )  
        #print >> sys.stderr, StockDF  
        StockDF = StockDF.rename(columns = {'Open':'Open_'+stock,'High':'High_'+stock,'Low':'Low_'+stock,'Close':'Close_'+stock,'Volume':'Volume_'+stock,'Adj Close':'Adj_Close_'+stock})  
        StockDF.columns = map(str.lower, StockDF.columns)  
        StockData.append(StockDF)  
        if i==0:  
            DataCollection2=StockDF  
        else:  
            DataCollection2= pd.merge(DataCollection2, StockDF, left_index=True, right_index=True, how='outer')  
        i+=1  
    buffers = io.BytesIO()  
    DataCollection2.to_csv(buffers, encoding='utf-8')

    csv = buffers.getvalue()

    if not csv:  
        csv='error generating content'  
    response = make_response(csv)  
    file_name = 'DATA_'+stockstr.replace(',', '_');  
    response.headers["Content-Disposition"] = 'attachment; filename="'+file_name+'.csv"'  
    return response

I was playing a lot with in-sample testing of the algo with various parameters. There is a great weakness in the year 2004: the algo's return is minus 11% for almost a year while the benchmark goes down only ~3.5%. The main risk is that the algo drops 10% in just 14 days, between April 2 and April 14; that is great volatility and risk. It seems the algo, due to its monthly rebalance, didn't catch the turmoil in its ETFs. Attached is the backtest with charts of the algo's ETFs, so you can see they weren't going down in step with the algo.

If anyone has an idea how to detect and avoid this fast turmoil, please help.

Clone Algorithm (40)
# Keller/Butler - Elastic Asset Allocation (EAA)
#
# Source:
#   Wouter J. Keller and Adam Butler
#   "From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)"
#   December 30, 2014 (v0.90), revised January 16, 2015 (v0.92)
#
# Implementation:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 10, 2015 (v0.6)
# Modified:
#   Yoyoteng (https://www.quantopian.com/users/54b8a7b4f44758bf5c000aea)
#   March 13, 2015
#
# Additional links:
#   http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979
#   http://indexswingtrader.blogspot.com/2015/01/a-primer-on-elastic-asset-allocation.html
#   https://quantstrattrader.wordpress.com/2015/01/03/for-a-new-year-a-new-asset-allocation-system-just-published-in-ssrn/
#
# Known issues:
#   Dividend adjustment requires len(assets) <= 5 (Quantopian Fetcher limit)
#   CAUTION: (slight) look-ahead bias with adjusted EOD price
#
# Version Log:
#   v0.7 - Add additional intraday-volatility term to the z-score
#   v0.6 - Dividend adjustment via Yahoo data (Thanks to F. Chandelier)
#          Truncating z-score at 0 rather than momentum
#          Add leverage logging
#   v0.5 - initial open-source release

import pandas as pd
import datetime as dt
import math

def initialize(context):
    #
    # Configuration
    #

    # Assets (N=7) from Paper (unadjusted):
    #   SP500, EAFE, EEM, US Tech, Japan Topix, and two bonds: US Gov10y, and US HighYield. Cash = SHY, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(8554), sid(22972), sid(24705), sid(19658), sid(14520), sid(23870), sid(33655)]
    #context.cash = sid(23911)
    #context.bill = sid(23911)
    
    # Assets from Index Swing Trader Blog (unadjusted):
    #   $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT. Cash = $IEF, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    #context.cash = sid(23870)
    #context.bill = sid(23911)
    
    # Custom assets (N=5) (adjusted yahoo data):
    #   VTI, EFA, ICF, TLT, IEF. Cash = IEF, Bill = IEF (!)
    #context.use_adjusted = False
    #context.active = [sid(22739), sid(22972), sid(22446), sid(23921), sid(23870)]
    #context.cash = sid(23870)
    #context.bill = sid(23870)
    
    # Custom assets (N=6) (5 adjusted yahoo data) + Emerging Markets
    #   VTI, EFA, ICF, TLT, IEF, EEM. Cash = IEF, Bill = IEF (!)
    context.use_adjusted = False
    context.active = [sid(22739), sid(22972), sid(22446), sid(23921), sid(23870),sid(24705)]
    context.cash = sid(23870)
    context.bill = sid(23870)
    
    context.leverage = 1.0
    
    # Weights:
    #   [wR, wC, wV, wS, eps, wIV]
    #   wi ~ zi = ( ri^wR * (1-ci)^wC / (vi^wV * ivi^wIV) )^(wS+eps)
    
    #   Golden Lama EAA :
    #context.score_weights = (2.0, 1.0, 0.25, 1.0, 1e-6, 4.0)
    #   IntradayVolatility EAA : wi ~ zi = (1-ci) / (intraday_vol)
    #context.score_weights = (0.0, 0.0, 0.0, 1.0, 1e-6, 1.0) 
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    #context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6, 0.0)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6, 0.0)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6, 0.0)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6, 0.0)
    #   Scoring Function Test:
    #context.score_weights = (1.0, 1.0, 1.0, 1.0, 0.0, 1.0)

    context.assets = set(context.active + [context.cash, context.bill])
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)

    schedule_function(
        reallocate,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    schedule_function(
        rebalance,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    #
    # Yahoo fetcher
    # (inspired by F. Chandelier)
    # https://www.quantopian.com/posts/yahoo-and-fetch-comparing-q-and-y-for-12-mth-rolling-return
    #
    if context.use_adjusted:
        start_year = 2002
        end_year = dt.datetime.today().year + 1
        url_template = "http://real-chart.finance.yahoo.com/table.csv?s=%s&a=0&b=1&c=%d&d=0&e=1&f=%d&g=d&ignore=.csv"

        for sym in context.active:
            url = url_template % (sym.symbol, start_year, end_year)
            print "Fetching %s adjusted prices: %s" % (sym.symbol, url)

            fetch_csv(
                url,
                date_column='Date',
                date_format='%Y-%m-%d',
                symbol=sym,
                usecols=['Adj Close'],
                pre_func=fetch_pre,
                post_func=fetch_post
            )


def handle_data(context, data):
    pass
    #record(leverage = context.portfolio.positions_value / context.portfolio.portfolio_value, cash = context.portfolio.cash)

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)
    print "cash=%s" % context.portfolio.cash
    print "positions value=%s" % context.portfolio.positions_value
    print "pnl=%s" % context.portfolio.pnl
    print "returns=%s" % context.portfolio.returns

def reallocate(context, data):
    h = make_history(context, data).ix[-280:]
    h_low = history(300, '1d', 'low').ix[-280:]
    h_high = history(300, '1d', 'high').ix[-280:]
    
    hm = h.resample('M', how='last')[context.active]
    hb = h.resample('M', how='last')[context.bill]
    ret = hm.pct_change().ix[-12:]
    
    N = len(context.active)
    
    non_cash_assets = list(context.active)
    non_cash_assets.remove(context.cash)
    
    print "***************************************************************"
    
    #
    # Scoring
    #
    # excess return momentum
    mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \
           hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \
           hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \
           hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pd.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    # normalized intraday volatility. 
    # inspired by Cssanalytics, https://cssanalytics.wordpress.com/2015/03/13/using-a-self-similarity-metric-with-intraday-data-to-define-market-regimes/
    ivol = (h_high[context.active].ix[-10:]-h_low[context.active].ix[-10:]).mean()/(h_high[context.active]-h_low[context.active]).mean()
    
    #
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV / ivoli^wIV )^wS
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    wIV = context.score_weights[5]
       
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV) / (ivol ** wIV)) ** (wS + eps)
    z[mom < 0.] = 0.0
    
    #
    # Crash Protection
    #
    
    num_neg = z[z <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    #record(cpf = cpf)
    record(TLT = data[sid(23921)].price) 
    record(IEF = data[sid(23870)].price)     
    record(VTI = data[sid(22739)].price) 
    record(EFA = data[sid(22972)].price) 
    record(ICF = data[sid(22446)].price) 

    #
    # Security selection
    #
    # TopN = Min( 1 + roundup( sqrt(N) ), rounddown( N/2 ) )
    top_n = int(min(math.ceil(N ** 0.5) + 1, N / 2))
    
    #
    # Allocation
    #
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum()).dropna()
    w = pd.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w
   
    
#
# Quantopian/Yahoo history switch
#
def make_history(context, data):
    if context.use_adjusted:
        df = pd.DataFrame(index=data[context.active[0]]['aclose_hist'].index, columns=context.active)
        for s in context.active:
            df[s] = data[s]['aclose_hist']
        return df
    else:
        return history(300, '1d', 'price')

def fetch_pre(df):
    df = df.rename(columns={'Adj Close': 'aclose'})
    df['aclose_hist'] = pd.Series([[]] * len(df.index), index=df.index)
    return df
    
def fetch_post(df):
    #
    # Workaround for history() not providing access to external fields
    # Populate data[] with past 300 adjusted close prices
    #
    for i in xrange(0, len(df.index)):
        df['aclose_hist'].ix[-i-1] = df['aclose'][-i-300:][:300]
    return df

Extremely nice concept. Would it be possible to explain the crash protection mechanism?

@Bharath: The existing crash protection is quite elegant: it allocates more to 'cash' as more of the 'z' scores get set to zero due to negative 'mom' values (the weighted excess returns).
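To make that concrete, here is a minimal standalone pandas sketch of the mechanism, using made-up toy scores (not taken from the backtest):

```python
import pandas as pd

def crash_protection_fraction(z):
    """Fraction of the universe whose generalized-momentum score z was
    truncated to zero (i.e. assets with negative excess-return momentum)."""
    num_neg = (z <= 0).sum()
    return float(num_neg) / len(z)

# Hypothetical scores for a 4-asset universe; two assets had negative momentum
z = pd.Series([0.8, 0.0, 0.3, 0.0], index=['A', 'B', 'C', 'D'])
cpf = crash_protection_fraction(z)  # 2 of 4 scores are zero -> cpf = 0.5

# The cash sleeve then receives cpf of the portfolio, and the remaining
# (1 - cpf) is split across the top-ranked assets in proportion to z.
```

So when every asset shows negative momentum, cpf hits 1.0 and the portfolio goes fully to cash.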

@Joe: That is a good point. You can always use schedule_function to proactively rebalance the portfolio with your preferred trigger and frequency; below is a quick, not-so-elegant example. Volatility may be another option for the trigger. It would be great if you shared your results if you figure something out!

# your trigger here  
def mama_bear_is_here(context, h_price):  
    ret = h_price.pct_change().ix[-1]  
    num_neg = ret[ret <= 0].count()  
    N = len(context.active)  
    cpf = float(num_neg) / N  
    if cpf > 0.95:  
        print "look at that bear."  
        return True  
    return False

# call the function below via schedule_function for proactive portfolio management  
def rebalance_now(context, data):  
    h_price = history(5, '1d', 'price')[context.active]  
    if mama_bear_is_here(context, h_price):  
        reallocate(context, data)  
        rebalance(context, data)  

Thanks Ted

Thanks Ted and Bharath for responding. I adapted the solution above and ran extensive tests.
Observations:

  1. This type of CPF wasn't able to eliminate the 12% downturn that occurred in less than 14 days in 2014 and lasted the whole year. I played with various CPF thresholds and also tried checking the CPF and rebalancing daily, weekly, and bi-weekly.

  2. Checking and rebalancing more often than bi-weekly causes a significant reduction in overall performance and gain (11-year period). Regardless, the CPF system didn't help.

  3. Something might be wrong with the history function; the values I am getting for the active stocks don't match what Yahoo/Google shows.

Conclusions:
1. The CPF system counts how many of the stocks in the portfolio were negative, but it doesn't know whether the decrease was very small or very large in percentage terms. It seems we should try a momentum indicator like RSI or ADX to detect large downturns.
2. Need to check the validity of the history function, and why more frequent rebalancing causes such a big reduction in overall performance (10-year period).

Attached is the up-to-date version with the new crisis rebalance and logs. If someone can help me and continue the work, that would be great.
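As a starting point for conclusion 1 above, here is a hypothetical magnitude-aware alternative to the count-based trigger: a standalone pandas sketch with made-up prices and an assumed -8% threshold, not tested in the backtest:

```python
import pandas as pd

def drawdown_bear_trigger(h_price, dd_threshold=-0.08):
    """Hypothetical magnitude-aware trigger: fire only when the average
    current peak-to-last drawdown across the universe breaches the
    threshold, instead of merely counting how many assets are negative."""
    peak = h_price.cummax()                # running peak per asset
    drawdown = h_price / peak - 1.0        # <= 0, per asset per day
    avg_dd = drawdown.iloc[-1].mean()      # mean current drawdown
    return bool(avg_dd < dd_threshold)

# Toy 3-asset price history (hypothetical values)
idx = pd.date_range('2014-09-01', periods=5, freq='D')
prices = pd.DataFrame({
    'A': [100, 99, 95, 92, 90],
    'B': [50, 50, 48, 46, 45],
    'C': [30, 31, 30, 29, 28],
}, index=idx)

# A and B are each down 10% from their peaks, C about 9.7%, so the
# average drawdown (~ -9.9%) breaches the -8% threshold and we fire.
fire = drawdown_bear_trigger(prices)
```

Unlike the CPF count, a small dip across all assets would not fire this, while one large broad drawdown would.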

# Keller/Butler - Elastic Asset Allocation (EAA)
#
# Source:
#   Wouter J. Keller and Adam Butler
#   "From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)"
#   December 30, 2014 (v0.90), revised January 16, 2015 (v0.92)
#
# Implementation:
#   Alex (https://www.quantopian.com/users/54190a3694339e5d3a0000af)
#   March 10, 2015 (v0.6)
# Modified:
#   Yoyoteng (https://www.quantopian.com/users/54b8a7b4f44758bf5c000aea)
#   March 13, 2015
#
# Additional links:
#   http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979
#   http://indexswingtrader.blogspot.com/2015/01/a-primer-on-elastic-asset-allocation.html
#   https://quantstrattrader.wordpress.com/2015/01/03/for-a-new-year-a-new-asset-allocation-system-just-published-in-ssrn/
#
# Known issues:
#   Dividend adjustment requires len(assets) <= 5 (Quantopian Fetcher limit)
#   CAUTION: (slight) look-ahead bias with adjusted EOD price
#
# Version Log:
#   v0.7 - add intraday volatility as an additional term in the z-score
#   v0.6 - Dividend adjustment via Yahoo data (Thanks to F. Chandelier)
#          Truncating z-score at 0 rather than momentum
#          Add leverage logging
#   v0.5 - initial open-source release

import pandas as pd
import datetime as dt
import math

def initialize(context):
    #
    # Configuration
    #

    # Assets (N=7) from Paper (unadjusted):
    #   SP500, EAFE, EEM, US Tech, Japan Topix, and two bonds: US Gov10y, and US HighYield. Cash = SHY, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(8554), sid(22972), sid(24705), sid(19658), sid(14520), sid(23870), sid(33655)]
    #context.cash = sid(23911)
    #context.bill = sid(23911)
    
    # Assets from Index Swing Trader Blog (unadjusted):
    #   $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT. Cash = $IEF, Bill = SHY
    #context.use_adjusted = False
    #context.active = [sid(12915), sid(21769), sid(24705), sid(19920), sid(19661), sid(23870), sid(23921)]
    #context.cash = sid(23870)
    #context.bill = sid(23911)
    
    # Custom assets (N=5) (adjusted yahoo data):
    #   VTI, EFA, ICF, TLT, IEF. Cash = IEF, Bill = IEF (!)
    #context.use_adjusted = False
    #context.active = [sid(22739), sid(22972), sid(22446), sid(23921), sid(23870)]
    #context.cash = sid(23870)
    #context.bill = sid(23870)
    
    # Custom assets (N=6): the 5 above plus Emerging Markets
    #   VTI, EFA, ICF, TLT, IEF, EEM. Cash = IEF, Bill = IEF (!)
    context.use_adjusted = False
    context.active = [sid(22739), sid(22972), sid(22446), sid(23921), sid(23870),sid(24705)]
    context.cash = sid(23870)
    context.bill = sid(23870)
    
    context.leverage = 1.0
    
    # Weights:
    #   [wR, wC, wV, wS, eps, wIV]
    #   wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV / ivi^wIV )^(wS+eps)
    
    #   Golden Lama EAA :
    #context.score_weights = (2.0, 1.0, 0.25, 1.0, 1e-6, 4.0)
    #   IntradayVolatility EAA : wi ~ zi = (1-ci) / (intraday_vol)
    #context.score_weights = (0.0, 0.0, 0.0, 1.0, 1e-6, 1.0) 
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    #context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6, 0.0)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6, 0.0)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6, 0.0)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6, 0.0)
    #   Scoring Function Test:
    #context.score_weights = (1.0, 1.0, 1.0, 1.0, 0.0, 1.0)

    context.assets = set(context.active + [context.cash, context.bill])
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)
    
    schedule_function(
        reallocate,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    schedule_function(
        rebalance,
        date_rules.month_end(days_offset=0),
        time_rules.market_close(minutes=5)
    )
    
    # special check for crash during the month will cause crash protection rebalance
    schedule_function(
        rebalance_if_downtrend,
        date_rules.week_start(days_offset=3),
        time_rules.market_close(minutes=5)
    )
        
    
    #
    # Yahoo fetcher
    # (inspired by F. Chandelier)
    # https://www.quantopian.com/posts/yahoo-and-fetch-comparing-q-and-y-for-12-mth-rolling-return
    #
    if context.use_adjusted:
        start_year = 2002
        end_year = dt.datetime.today().year + 1
        url_template = "http://real-chart.finance.yahoo.com/table.csv?s=%s&a=0&b=1&c=%d&d=0&e=1&f=%d&g=d&ignore=.csv"

        for sym in context.active:
            url = url_template % (sym.symbol, start_year, end_year)
            print "Fetching %s adjusted prices: %s" % (sym.symbol, url)

            fetch_csv(
                url,
                date_column='Date',
                date_format='%Y-%m-%d',
                symbol=sym,
                usecols=['Adj Close'],
                pre_func=fetch_pre,
                post_func=fetch_post
            )

#def before_trading_start(context):
#    update_universe(context.active)
    
def handle_data(context, data):
    pass
    #record(leverage = context.portfolio.positions_value / context.portfolio.portfolio_value, cash = context.portfolio.cash)

def rebalance(context, data):
    for s in context.alloc.index:
        if s in data:
            order_target_percent(s, context.alloc[s] * context.leverage)
    print "cash=%s" % context.portfolio.cash
    print "positions value=%s" % context.portfolio.positions_value
    print "pnl=%s" % context.portfolio.pnl
    print "returns=%s" % context.portfolio.returns
    
#call below function with schedule_function for proactive portfolio management.  
def rebalance_if_downtrend(context, data):  
    stocks_price_h = history(bar_count=6, frequency='1d', field='price')

    if mama_bear_is_here(context,data,stocks_price_h) is True:  
        reallocate(context, data)  
        rebalance(context, data)  
                
                
def reallocate(context, data):
    h = make_history(context, data).ix[-280:]
    h_low = history(300, '1d', 'low').ix[-280:]
    h_high = history(300, '1d', 'high').ix[-280:]
    
    hm = h.resample('M', how='last')[context.active]
    hb = h.resample('M', how='last')[context.bill]
    ret = hm.pct_change().ix[-12:]
    
    N = len(context.active)
    
    non_cash_assets = list(context.active)
    non_cash_assets.remove(context.cash)
    
    print "***************************************************************"
    
    #
    # Scoring
    #
    # excess return momentum
    mom = (hm.ix[-1] / hm.ix[-2]  - hb.ix[-1] / hb.ix[-2] + \
           hm.ix[-1] / hm.ix[-4]  - hb.ix[-1] / hb.ix[-4] + \
           hm.ix[-1] / hm.ix[-7]  - hb.ix[-1] / hb.ix[-7] + \
           hm.ix[-1] / hm.ix[-13] - hb.ix[-1] / hb.ix[-13]) / 22
    
    # nominal return correlation to equi-weight portfolio
    ew_index = ret.mean(axis=1)
    corr = pd.Series([0.0] * N, index=context.active)
    for s in corr.index:
      corr[s] = ret[s].corr(ew_index)
    
    # nominal return volatility
    vol = ret.std()
    
    # normalized intraday volatility. 
    # inspired by Cssanalytics, https://cssanalytics.wordpress.com/2015/03/13/using-a-self-similarity-metric-with-intraday-data-to-define-market-regimes/
    ivol = (h_high[context.active].ix[-10:] - h_low[context.active].ix[-10:]).mean() / \
           (h_high[context.active] - h_low[context.active]).mean()
    
    #
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV / ivoli^wIV )^(wS+eps)
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    wIV = context.score_weights[5]
       
    z = ((mom ** wR) * ((1 - corr) ** wC) / (vol ** wV) / (ivol ** wIV)) ** (wS + eps)
    z[mom < 0.] = 0.0
    
    #
    # Crash Protection
    #
    
    num_neg = z[z <= 0].count()
    cpf = float(num_neg) / N
    print "cpf = %f" % cpf
    #record(cpf = cpf)
    record(TLT = data[23921].price) 
    record(IEF = data[23870].price)     
    record(VTI = data[22739].price) 
    record(EFA = data[22972].price) 
    record(ICF = data[22446].price) 

    #
    # Security selection
    #
    # TopN = Min( 1 + Roundup( sqrt(N) ), Rounddown( N / 2 ) )
    top_n = int(min(math.ceil(N ** 0.5) + 1, N / 2))
    
    #
    # Allocation
    #
    top_z = z.order().index[-top_n:]
    print "top_z = %s" % [i.symbol for i in top_z]
    
    w_z = ((1 - cpf) * z[top_z] / z[top_z].sum()).dropna()
    w = pd.Series([0.0] * len(context.assets), index=context.assets)
    for s in w_z.index:
        w[s] = w_z[s]
    w[context.cash] += cpf
    print "Allocation:\n%s" % w
    
    context.alloc = w

# Check within the month whether a bear market requires a mid-month adjustment
def mama_bear_is_here(context, data, stocks_price_h):  
    print 'stock history first day'
    print stocks_price_h.ix[0]
    print 'stock history last day'
    print stocks_price_h.ix[-1] 
    pct_change = (stocks_price_h.ix[-1] - stocks_price_h.ix[0]) / stocks_price_h.ix[0]
    print 'check if bear market...'
    print 'pct_change in stocks: {0}'.format(pct_change)
    num_neg = pct_change[pct_change <= 0].count()  
    N = len(context.active)  
    cpf = float(num_neg) / N  

   
    if cpf > 0.79:  
        print "looks like it's a bear market in the middle of the month... let's rebalance..."  
        return True  
    else:  
        return False
  
 

#
# Quantopian/Yahoo history switch
#
def make_history(context, data):
    if context.use_adjusted:
        df = pd.DataFrame(index=data[context.active[0]]['aclose_hist'].index, columns=context.active)
        for s in context.active:
            df[s] = data[s]['aclose_hist']
        return df
    else:
        return history(300, '1d', 'price')

def fetch_pre(df):
    df = df.rename(columns={'Adj Close': 'aclose'})
    df['aclose_hist'] = pd.Series([[]] * len(df.index), index=df.index)
    return df
    
def fetch_post(df):
    #
    # Workaround for history() not providing access to external fields
    # Populate data[] with past 300 adjusted close prices
    #
    for i in xrange(0, len(df.index)):
        df['aclose_hist'].ix[-i-1] = df['aclose'][-i-300:][:300]
    return df

