Adaptive Asset Allocation algorithms

Hi all,

I’ve been experimenting with some of the adaptive asset allocation strategies published around the blogosphere, and I thought I would share some of my results. I used the excellent paper ‘Adaptive Asset Allocation: A Primer’ as the basis of my analysis and implemented the following algorithms:

  1. Volatility Weighted Momentum (VWM) – implemented as described in Exhibit 4 of ‘Adaptive Asset Allocation: A Primer’.
    • Note – results reported with 12-month lookback
  2. Minimum Variance Algorithm (MVA) – as described in this post on the cssanalytics blog.
    • Note – results reported with 12-month lookback
  3. EAA – implemented as described in the paper ‘A Century of Generalized Momentum: From Flexible Asset Allocations (FAA) to Elastic Asset Allocation (EAA)’, by Keller and Butler
    • Note – there is an excellent implementation of this algorithm already on Quantopian here. After I found it, I adapted parts of Alex's implementation into my own; hopefully I have given appropriate credit.

Next I implemented a simple equal weight algorithm that rebalances monthly. I use this algorithm as my benchmark.
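For completeness, here is roughly what that benchmark looks like; just a minimal sketch using the same Quantopian scheduling conventions as the algorithms further down (the symbols shown are asset set #2 - swap in whichever set you want to test):

# Equal Weight benchmark - rebalance to 1/N weights at each month end
def initialize(context):
    set_symbol_lookup_date('2005-01-01')
    context.etfs = symbols('SPY','EFA','EEM','IEF','GLD','IYR')
    schedule_function(rebalance, date_rules.month_end(days_offset=0),
                      time_rules.market_close(minutes=60))

def rebalance(context, data):
    #equal weight across the universe, rebalanced monthly
    w = 1.0/len(context.etfs)
    for s in context.etfs:
        order_target_percent(s, w)

def handle_data(context, data):
    record(leverage = context.account.leverage)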

I tested the above algorithms against the following sets of asset classes:

  1. Asset set #1 – inspired by the N=7 universe discussed in ‘A Century of Generalized Momentum’ (Keller/Butler), the paper that introduces the EAA algorithm
    • SPY (S&P 500 ETF), EFA (Developed world ex US/Canada ETF), EEM (Emerging Market ETF), QQQ (NASDAQ ETF), EWJ (Japan ETF), IEF (US Govt Bond ETF)
    • Note – High Yield was removed from the set because I could not find an appropriate ETF in Quantopian that covers the date ranges tested
  2. Asset set #2 – distribution of assets across various equity markets, bonds, and real assets
    • SPY (S&P 500 ETF), EFA (Developed world ex US/Canada ETF), EEM (Emerging Market ETF), IEF (US Govt Bond ETF), GLD (Gold ETF), IYR (US Real Estate ETF)
    • Note – this test was designed to see how the strategies performed with a significant allocation to real assets.
  3. Asset set #3 – S&P sectors
    • XLY (US Consumer Discretionary ETF), IYR (US Real Estate ETF), IYG (US Financial Services ETF), XLF (US Financials ETF), XLK (US Technology ETF), XLE (US Energy ETF), XLV (US Health Care ETF), XLI (US Industrials ETF), XLP (US Consumer Staples ETF), XLB (US Materials ETF), XLU (US Utilities ETF)
    • Note – this test was designed to see how well the strategies could capture market upside.

Lastly, I ran the tests over two time periods: 2005-2015 YTD and 2010-2015 YTD. The first period was chosen to show how the algorithms behave over a full market cycle, including the 2008 recession. The second was chosen to show how the algorithms have performed during the last few years. My observation is that any market timing algorithm that manages to avoid the core damage of the 2008 recession does well over a 10-year backtest, but the performance often doesn't hold up particularly well when measured over just the last few years.

Here are my results:

VWM (Volatility Weighted Momentum)
Asset set #1
Time Period 1
Total Return = 137.79%, CAGR = 8.26%, Sharpe = 0.81, Sortino = 1.03, Volatility = 0.13, Max DD = 16.60%
Time Period 2
Total Return = 54.60%, CAGR = 7.64%, Sharpe = 0.53, Sortino = 0.65, Volatility = 0.13, Max DD = 15.70%

Asset set #2
Time Period 1
Total Return = 160.80%, CAGR = 9.18%, Sharpe = 1.09, Sortino = 1.48, Volatility = 0.11, Max DD = 14.90%
Time Period 2
Total Return = 44.90%, CAGR = 6.47%, Sharpe = 0.5, Sortino = 0.65, Volatility = 0.11, Max DD = 10.90%

Asset set #3
Time Period 1
Total Return = 156.00%, CAGR = 8.99%, Sharpe = 0.96, Sortino = 1.22, Volatility = 0.13, Max DD = 17.70%
Time Period 2
Total Return = 101.29%, CAGR = 12.55%, Sharpe = 1.08, Sortino = 1.38, Volatility = 0.14, Max DD = 17.70%

EAA (Elastic Asset Allocation)
Asset set #1
Time Period 1
Total Return = 92.70%, CAGR = 6.19%, Sharpe = 0.4, Sortino = 0.51, Volatility = 0.16, Max DD = 29.80%
Time Period 2
Total Return = 18.20%, CAGR = 2.87%, Sharpe = 0.06, Sortino = 0.08, Volatility = 0.12, Max DD = 14.60%

Asset set #2
Time Period 1
Total Return = 147.50%, CAGR = 8.66%, Sharpe = 0.64, Sortino = 0.82, Volatility = 0.17, Max DD = 37.70%
Time Period 2
Total Return = 35.00%, CAGR = 5.20%, Sharpe = 0.28, Sortino = 0.35, Volatility = 0.13, Max DD = 21.80%

Asset set #3
Time Period 1
Total Return = 142.40%, CAGR = 8.45%, Sharpe = 0.61, Sortino = 0.8, Volatility = 0.18, Max DD = 19.80%
Time Period 2
Total Return = 57.60%, CAGR = 7.99%, Sharpe = 0.47, Sortino = 0.62, Volatility = 0.16, Max DD = 19.80%

EW (Equal Weight - Rebalanced Monthly)
Asset set #1
Time Period 1
Total Return = 87.50%, CAGR = 5.93%, Sharpe = 0.32, Sortino = 0.42, Volatility = 0.18, Max DD = 48.80%
Time Period 2
Total Return = 54.90%, CAGR = 7.68%, Sharpe = 0.51, Sortino = 0.68, Volatility = 0.14, Max DD = 15.00%

Asset set #2
Time Period 1
Total Return = 99.40%, CAGR = 6.53%, Sharpe = 0.41, Sortino = 0.53, Volatility = 0.17, Max DD = 44.10%
Time Period 2
Total Return = 41.00%, CAGR = 5.98%, Sharpe = 0.38, Sortino = 0.52, Volatility = 0.12, Max DD = 13.70%

Asset set #3
Time Period 1
Total Return = 105.10%, CAGR = 6.80%, Sharpe = 0.35, Sortino = 0.43, Volatility = 0.21, Max DD = 58.40%
Time Period 2
Total Return = 101.69%, CAGR = 12.59%, Sharpe = 0.94, Sortino = 1.23, Volatility = 0.16, Max DD = 19.90%

It seems that the MVA and the simple VWM algorithms hold up well across all asset groupings and time periods, while, somewhat disappointingly, EAA does not do as well (perhaps there is an error in my implementation).

Backtests and algos to follow.


Here is my implementation of the VWM algorithm.

# Implementation of - Volatility Weighted Momentum
# as described in the paper Adaptive Asset Allocation: A Primer
# which can be found here -  http://bpgassociates.com/docs/Adaptive-Asset-Allocation-A-Primer.pdf
# the Volatility Weighted Momentum algorithm is described in Exhibit 4 of the paper

import math

def initialize(context):
    set_symbol_lookup_date('2005-01-01')
    
    #Asset #1 - test 7 asset scenario from paper - the small global multi asset universe
    # from Keller/Butler - Elastic Asset Allocation (EAA)
    #removed HYG from sample set to allow for longer backtest
    '''
    context.etfs = symbols('SPY', #proxy for S&P
                           'EFA', #proxy for EAFE
                           'EEM', #proxy for EEM
                           'QQQ', #proxy for US Tech
                           'EWJ', #proxy for Japan Topix
                           'IEF') #proxy for US Govt 10 yr
    '''
    
    #Asset #2 - test a more generalized distribution of assets
    #US stocks, European stocks, Emerging market stocks, US bonds, gold, real estate
    #context.etfs = symbols('SPY','EFA','EEM','IEF','GLD','IYR')
    
    #Asset #3 - test against S&P sectors to estimate upside extraction
    
    context.etfs = symbols('XLY',  # XLY Consumer Discretionary SPDR Fund
                           'IYR',  # Real Estate ETF in place of XLFS
                           'IYG',  # Financial Services ETF in place of 
                           'XLF',  # XLF Financial SPDR Fund
                           'XLK',  # XLK Technology SPDR Fund
                           'XLE',  # XLE Energy SPDR Fund
                           'XLV',  # XLV Health Care SPDR Fund
                           'XLI',  # XLI Industrial SPDR Fund
                           'XLP',  # XLP Consumer Staples SPDR Fund
                           'XLB',  # XLB Materials SPDR Fund
                           'XLU')  # XLU Utilities SPDR Fund
    
    context.mom_lookback = 240
    context.risk_free_asset = symbol('SHY')
    context.target_vol = 0.01 #target 1% volatility
    context.vol_lookback = 60
    
    schedule_function(rebalance,date_rules.month_end(days_offset=0), time_rules.market_close(minutes=60))
    
#calculate volatility for etf universe    
def get_vol(context):
    h = history(context.vol_lookback*2, '1d', 'price')[context.etfs]
    hs = h.ix[-1*context.vol_lookback:]

    context.hvol = hs.pct_change().std()
    
#calculate weights based on relative momentum ranking and absolute momentum test
def get_mom_weights(context):    
    
    #init weights
    context.weights = {}
    for s in context.etfs:
        context.weights[s] = 0
    
    h = history( context.mom_lookback,'1d','price')[context.etfs]
    h = h.resample('M',how='last')
    #compute the momentum over the lookback window as the percent change
    #between the first and the second-to-last month-end price
    #drop first row because it is nan
    pct_change = h.iloc[[0,-2]].pct_change()[1:]
    #drop any other nan values
    pct_change = pct_change.dropna(axis=1)
    #convert dataframe to series for sorting. Then sort in descending order
    context.pct_change_series =  pct_change.squeeze().order(ascending=False)
    
    l = int(math.floor(len(context.etfs)/2.0))
    context.weight = 1.0/l
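    #allocate the equal base weight only to top-half ETFs whose momentum over
    #the lookback is positive (a simple absolute-momentum filter); everything
    #else stays at zero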
    for i in range(l) :
        if context.pct_change_series.ix[i] > 0.0 :
            s = context.pct_change_series.index[i]
            context.weights[s] = context.weight

            
def rebalance(context,data):
    
    get_vol(context)
    get_mom_weights(context)
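    #scale each base weight by target_vol / realized daily vol, capped at the
    #base weight so positions are only ever scaled down; any weight left
    #unallocated is parked in the risk-free asset (SHY)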
    total_weight = 0.0
    for s in context.etfs:
        mom_vol_weight = context.weights[s]*(context.target_vol/context.hvol[s])
        if mom_vol_weight > context.weight:
            mom_vol_weight = context.weight
        order_target_percent(s,mom_vol_weight)
        total_weight += mom_vol_weight
    if total_weight < 1.0:
        order_target_percent(context.risk_free_asset,1.0-total_weight)
    else:
        order_target_percent(context.risk_free_asset,0.0)
    
    
# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
   
    record(leverage = context.account.leverage)

Here is my implementation of the MVA algorithm.

# Implementation of - Minimum Variance Algorithm
# as described here - https://cssanalytics.wordpress.com/2013/04/01/minimum-variance-algorithm-mva/

import math
import scipy.stats as stats
import numpy as np
import pandas as pd

def initialize(context):
    set_symbol_lookup_date('2005-01-01')
    
    #Asset #1 - test 7 asset scenario from paper - the small global multi asset universe
    # from Keller/Butler - Elastic Asset Allocation (EAA)
    #removed HYG from sample set to allow for longer backtest
    '''
    context.etfs = symbols('SPY', #proxy for S&P
                           'EFA', #proxy for EAFE
                           'EEM', #proxy for EEM
                           'QQQ', #proxy for US Tech
                           'EWJ', #proxy for Japan Topix
                           'IEF') #proxy for US Govt 10 yr
    '''
    
    #Asset #2 - test a more generalized distribution of assets
    #US stocks, European stocks, Emerging market stocks, US bonds, gold, real estate
    #context.etfs = symbols('SPY','EFA','EEM','IEF','GLD','IYR')
    
    #Asset #3 - test against S&P sectors to estimate upside extraction
    
    context.etfs = symbols('XLY',  # XLY Consumer Discretionary SPDR Fund
                           'IYR',  # Real Estate ETF in place of XLFS
                           'IYG',  # Financial Services ETF in place of 
                           'XLF',  # XLF Financial SPDR Fund
                           'XLK',  # XLK Technology SPDR Fund
                           'XLE',  # XLE Energy SPDR Fund
                           'XLV',  # XLV Health Care SPDR Fund
                           'XLI',  # XLI Industrial SPDR Fund
                           'XLP',  # XLP Consumer Staples SPDR Fund
                           'XLB',  # XLB Materials SPDR Fund
                           'XLU')  # XLU Utilities SPDR Fund
    
    
    context.invest_num = int(math.floor(len(context.etfs)/2.0))
    context.lookback = 240
    context.risk_free_asset = symbol('SHY')
    context.target_vol = 0.01 #target 1% volatility
    context.atr_bars_back = 60
    schedule_function(rebalance,date_rules.month_end(days_offset=0), time_rules.market_close(minutes=60))
    

def get_minvar_weights(context,toplist):
    context.weights = {}
    if len(toplist) > 0:
        #if length of list is 1 then weight should be 100% there is no min variance
        if len(toplist) == 1:
            context.weights[toplist[0]]=1.0
        else:
            h = history( context.atr_bars_back,'1d','price')[toplist]
            pct_change = h.pct_change()
            #calculate covariance matrix
            cov = pct_change.cov()
            #avg pairwise covariance
            avg_cov = cov.mean()
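            #MVA heuristic: favour assets with a low average covariance to the
            #rest of the selection (mapped through 1 - normal CDF below) and a
            #low own variance; the two measures are multiplied and normalized
            #into the final weights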
            i=0
            gauss_conv = {}
            inv_var = {}
            
            for s in toplist:
                #gaussian conversion
                gauss_conv[s] = 1-stats.norm.cdf((avg_cov[s]-avg_cov.mean())/avg_cov.std())
                #inverse variance
                inv_var[s] = 1.0/cov.ix[i,i]
                i += 1
    
            gc = pd.Series(gauss_conv,name='Symbol')
            iv = pd.Series(inv_var, name='Symbol')
            #inverse variance weight
            inv_var_weight = iv/iv.sum()
            #proportional average covar weight
            avg_covar_weight = gc/gc.sum()
            #product of proportional average covar weight and inverse variance weight
            prod_avg_covar_inv_var = avg_covar_weight * inv_var_weight
            #final weights
            for s in toplist:
                context.weights[s] = prod_avg_covar_inv_var[s] / prod_avg_covar_inv_var.sum()
                log.info(" symbol: " + str(s.symbol) + " w: " + str(context.weights[s]) )
    
#rank symbols by total return momentum (TMOM) over the lookback and keep the top half of the universe
def mom_rank(context,data):    
    #get TMOM
    h = history( context.lookback,'1d','price')[context.etfs]
    h = h.resample('M',how='last')
    #compute the momentum over the lookback window as the percent change
    #between the first and the second-to-last month-end price
    #drop first row because it is nan
    pct_change = h.iloc[[0,-2]].pct_change()[1:]
    #drop any other nan values
    pct_change = pct_change.dropna(axis=1)
    #convert dataframe to series for sorting. Then sort in descending order
    context.pct_change_series =  pct_change.squeeze().order(ascending=False)
    
    #get top mom list
    context.toplist = []
    for i in range(context.invest_num) :
        s = context.pct_change_series.index[i]
        context.toplist.append(s)
            
            
def rebalance(context,data):
    mom_rank(context,data)
    get_minvar_weights(context,context.toplist)
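    #invest only in top-list ETFs whose momentum over the lookback is positive;
    #any remaining weight goes to the risk-free asset (SHY)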
    invest_weight = 0.0

    for s in context.etfs:
        if s in context.toplist and context.pct_change_series[s] > 0.0:
            order_target_percent(s,context.weights[s])
            invest_weight += context.weights[s]
        else:
            order_target_percent(s,0.0)
    
    w = 1.0 - invest_weight
    order_target_percent(context.risk_free_asset,w)


def handle_data(context, data):
    record(leverage = context.account.leverage)
There was a runtime error.

Here is my implementation of the EAA algorithm.

# Implementation of - Elastic Asset Allocation (EAA)
# as described here - http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2543979

# Implementation adapted from Alex's implementation here
#https://www.quantopian.com/posts/keller-slash-butler-a-century-of-generalized-momentum-elastic-asset-allocation-eaa#55029240b09017a8bf0001e1

import math
import scipy.stats as stats
import numpy as np
import pandas as pd
from operator import itemgetter

def initialize(context):
    set_symbol_lookup_date('2005-01-01')
    
    #Asset #1 - test 7 asset scenario from paper - the small global multi asset universe
    # from Keller/Butler - Elastic Asset Allocation (EAA)
    #removed HYG from sample set to allow for longer backtest
    '''
    context.etfs = symbols('SPY', #proxy for S&P
                           'EFA', #proxy for EAFE
                           'EEM', #proxy for EEM
                           'QQQ', #proxy for US Tech
                           'EWJ', #proxy for Japan Topix
                           'IEF') #proxy for US Govt 10 yr
    '''
    
    #Asset #2 - test a more generalized distribution of assets
    #US stocks, European stocks, Emerging market stocks, US bonds, gold, real estate
    #context.etfs = symbols('SPY','EFA','EEM','IEF','GLD','IYR')
    
    #Asset #3 - test against S&P sectors to estimate upside extraction
    
    context.etfs = symbols('XLY',  # XLY Consumer Discretionary SPDR Fund
                           'IYR',  # Real Estate ETF in place of XLFS
                           'IYG',  # Financial Services ETF in place of 
                           'XLF',  # XLF Financial SPDR Fund
                           'XLK',  # XLK Technology SPDR Fund
                           'XLE',  # XLE Energy SPDR Fund
                           'XLV',  # XLV Health Care SPDR Fund
                           'XLI',  # XLI Industrial SPDR Fund
                           'XLP',  # XLP Consumer Staples SPDR Fund
                           'XLB',  # XLB Materials SPDR Fund
                           'XLU')  # XLU Utilities SPDR Fund
    
    context.lookback = 280 
    context.risk_free_asset = symbol('SHY')
    context.treasury = symbol('SHY')
    
    # Weights:
    #   [wR, wC, wV, wS, eps]
    #   Golden Offensive EAA: wi ~ zi = (1-ci) * ri^2
    context.score_weights = (2.0, 1.0, 0.0, 1.0, 1e-6)
    #   Golden Defensive EAA: wi ~ zi = squareroot( ri * (1-ci) )
    #context.score_weights = (1.0, 1.0, 0.0, 0.5, 1e-6)
    #   Equal Weighted Return: wi ~ zi = ri ^ eps
    #context.score_weights = (1.0, 0.0, 0.0, 0.0, 1e-6)
    #   Equal Weighted Hedged: wi ~ zi = ( ri * (1-ci) )^eps
    #context.score_weights = (1.0, 1.0, 0.0, 0.0, 1e-6)
    
    context.N = len(context.etfs)
    schedule_function(rebalance,date_rules.month_end(days_offset=0), time_rules.market_close(minutes=60))

def gen_mom_score(context):
    
    # Generalized Momentum
    #
    # wi ~ zi = ( ri^wR * (1-ci)^wC / vi^wV )^wS
    
    wR  = context.score_weights[0]
    wC  = context.score_weights[1]
    wV  = context.score_weights[2]
    wS  = context.score_weights[3]
    eps = context.score_weights[4]
    
    #scale returns and (1 - correlation) up by 100 before exponentiation so the bases are typically greater than 1, then scale back down
    context.z  = ((((context.avg_excess_returns*100.0) ** wR)/100.0) * ((((1 - context.corr)*100.0) ** wC)/100.0) / ((context.vol*100.0) ** wV)/100.0) ** (wS + eps)
    
  
def get_tmom(context):    
    h = context.h
    #TODO get return of treasury bill proxy
    h_treasury = history( context.lookback,'1d','price')[context.treasury]
    h = h.resample('M',how='last')
    h_treasury = h_treasury.resample('M',how='last')
   
    #calculate total excess returns over the last 1,3, 6 and 12 months
    r_1mth =h.iloc[[-2,-1]].pct_change()[1:] - h_treasury.iloc[[-2,-1]].pct_change()[1:] # 1 month return
    r_3mth =h.iloc[[-4,-1]].pct_change()[1:] - h_treasury.iloc[[-4,-1]].pct_change()[1:] # 3 month return
    r_6mth =h.iloc[[-7,-1]].pct_change()[1:] - h_treasury.iloc[[-7,-1]].pct_change()[1:] # 6 month return
    r_12mth =h.iloc[[-13,-1]].pct_change()[1:] - h_treasury.iloc[[-13,-1]].pct_change()[1:] # 12 month return
   
    context.avg_excess_returns = (r_1mth.squeeze().add(r_3mth.squeeze()).add(r_6mth.squeeze()).add(r_12mth.squeeze()))/4.0

    #set negative returns to 0
    context.avg_excess_returns[context.avg_excess_returns < 0] = 0

    
def get_vol(context):
    
    h = context.h.resample('M',how='last')
    monthly_returns = h.pct_change()[-12:]
    context.vol = monthly_returns.std()
    
def get_avg_corr(context):
    h = context.h.resample('M',how='last')
    
    #calculate the correlation between the last 12 monthly total nominal returns and the index
    #where the index is computed as the equal weighted monthly total nominal return over all N assets
    monthly_returns = h.pct_change()[-12:]
    index_returns = monthly_returns.mean(axis=1)
    context.corr = pd.Series([0.0] * context.N, index=context.etfs)
    for s in context.etfs:
        context.corr[s] = index_returns.corr(monthly_returns[s])

#calculate crash protection, (the number of assets with negative returns/the number of assets)
def calc_cp(context):
   
    context.cp_c = context.avg_excess_returns[context.avg_excess_returns<=0].count()
    #use float division so the crash-protection weight is a true fraction
    #(integer division would floor this to 0 or 1)
    context.cp_w = float(context.cp_c)/context.N

#calculate top N assets 
def calc_topN(context):
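    #hold at most 1 + ceil(sqrt(N)) assets, capped at floor(N/2)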
    context.topN = int(min(1 + math.ceil(math.sqrt(context.N)),math.floor(context.N/2)))
    
    
def rebalance(context,data):
    context.h = history( context.lookback,'1d','price')[context.etfs]
    get_tmom(context)
    get_avg_corr(context)
    get_vol(context)
    calc_cp(context)
    
    calc_topN(context)
    gen_mom_score(context)
    
    #top_M = context.N - context.cp_c
    #top_z = context.z.order().index[-min(context.topN,top_M):]
    
    top_z = context.z.order().index[-context.topN:]
    
    #get weights for topN
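    #split the non-crash-protection fraction (1 - cp_w) across the top-ranked
    #assets in proportion to their generalized momentum scores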
    w = ((1 - context.cp_w) * context.z[top_z] / context.z[top_z].sum())
    
    for s in context.etfs:
        if s in w and w[s] > 0.0 :
            order_target_percent(s, w[s])
            log.info("invest: " +s.symbol+ " w: " + str(w))
        else:
            order_target_percent(s, 0)

    order_target_percent(context.risk_free_asset, context.cp_w)
    log.info("invest: " +context.risk_free_asset.symbol+ " w: " + str(context.cp_w))
    
        
def handle_data(context, data):
    record(leverage = context.account.leverage)

Sorry, I noticed that I didn't provide results for the MVA algorithm; here they are:

MVA ( Minimum Variance Algorithm)
Asset set #1
Time Period 1
Total Return = 104.30%, CAGR = 6.76%, Sharpe = 0.52, Sortino = 0.66, Volatility = 0.14, Max DD = 22.70%
Time Period 2
Total Return = 42.80%, CAGR = 6.21%, Sharpe = 0.36, Sortino = 0.43, Volatility = 0.14, Max DD = 18.90%

Asset set #2
Time Period 1
Total Return = 252.80%, CAGR = 12.24%, Sharpe = 1.65, Sortino = 2.21, Volatility = 0.13, Max DD = 16.60%
Time Period 2
Total Return = 64.70%, CAGR = 8.80%, Sharpe = 0.77, Sortino = 0.95, Volatility = 0.11, Max DD = 15.00%

Asset set #3
Time Period 1
Total Return = 184.40%, CAGR = 10.05%, Sharpe = 1.09, Sortino = 1.36, Volatility = 0.13, Max DD = 18.10%
Time Period 2
Total Return = 119.90%, CAGR = 14.25%, Sharpe = 1.22, Sortino = 1.54, Volatility = 0.15, Max DD = 18.10%

Cool

Thanks for sharing, but these algorithms show some differences when backtested in minute mode versus daily mode.

Hi Novice,

Thanks for pointing this out; I was not aware of the discrepancy in performance between running the algorithms on daily versus minute data. The issue is discussed somewhat in this post:

https://www.quantopian.com/posts/differences-between-minute-and-daily-backtests

However, it is not clear to me why a trade that is executed the next day would have such a large positive effect compared with a trade that is executed the next minute. Shouldn't the overall effect be net neutral, i.e., sometimes the next-day price is better than the next-minute price and sometimes it is worse? Perhaps someone from the community can help explain this.

Nevertheless, I did some testing, and it seems that if you change the starting balance from $1,000,000 to $10,000 the performance goes back (for the most part) to the numbers I originally posted. So it seems this behavior is partly attributable to the way Quantopian simulates order execution.
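One way to test that hypothesis (and this is just my assumption about where the gap comes from) would be to pin the slippage and commission models in initialize() and rerun the same backtest in both modes. With fixed slippage the order size relative to bar volume no longer affects the fill price, so any remaining daily/minute difference should come from timing rather than fill simulation. A minimal sketch of the two calls:

# sketch only - these calls would sit inside initialize() alongside the existing setup
def initialize(context):
    #fill at the trade price with no spread, so order size relative to bar
    #volume cannot move the fill price
    set_slippage(slippage.FixedSlippage(spread=0.0))
    #flat per-share commission so costs scale the same way in both modes
    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1.0))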

However, it is not clear to me why a trade that is executed the next day would have such a large positive effect compared with a trade that is executed the next minute. Shouldn't the overall effect be net neutral, i.e., sometimes the next-day price is better than the next-minute price and sometimes it is worse? Perhaps someone from the community can help explain this.

I have found that many algorithms in the Quantopian community have the same issue, and like you it is not clear to me why a trade executed the next day would have such a large positive effect over a trade executed the next minute.
I like momentum algorithms more than other quant algorithms. Thanks for sharing.

In your VWM algorithm, what is this code doing exactly?
Can someone explain?

drop first row because it is nan

pct_change =h.iloc[[0,-2]].pct_change()[1:]  

Nice Post!