Accelerating Dual Momentum: 150 Year Backtest

Many of you may have heard of Gary Antonacci's strategy called Global Equities Momentum (GEM), in which he uses a "dual momentum" signal to compare US stocks, global stocks, and bonds over the trailing 12 months. The strategy has done quite well historically, but I felt that a 12-month signal was too long and too rigid. I wanted to pick up not only on the direction of an asset but also on the rate of change of that direction.

In a strategy I'm calling "Accelerating Dual Momentum" we measure this "acceleration" of an asset by simply adding its 6-month, 3-month, and 1-month returns. We buy whichever of US stocks or global small-cap stocks scores higher; if both scores are negative, then we buy bonds instead. Hold for the next month and repeat a month later.
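The scoring rule can be sketched in a few lines of plain pandas. This is just an illustration using hypothetical month-end price series, not the Quantopian algorithm attached below; the function and asset names are mine:

```python
import pandas as pd

def accel_momentum(prices: pd.Series) -> float:
    """Sum of trailing 1-, 3-, and 6-month returns from month-end closes."""
    latest = prices.iloc[-1]
    # prices.iloc[-1 - m] is the close m months before the latest observation
    return sum(latest / prices.iloc[-1 - m] - 1 for m in (1, 3, 6))

def pick_asset(us: pd.Series, intl_small: pd.Series) -> str:
    """Buy the stronger equity asset; fall back to bonds if both scores are negative."""
    us_score = accel_momentum(us)
    intl_score = accel_momentum(intl_small)
    if us_score < 0 and intl_score < 0:
        return "bonds"
    return "us_stocks" if us_score >= intl_score else "intl_small"
```

Each series needs at least seven month-end observations so the 6-month return can be computed.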

The algorithm below provides a backtest over the last 14 years. I also wrote an extensive article on my blog further explaining the strategy and providing performance data going back to 1871. Here's the link: Accelerating Dual Momentum.

import pandas as pd
import math
import numpy as np
import datetime

#Constants referenced below; their definitions were missing from this listing, so these values are assumed
MAX_ASSETS = 1          #hold only the single strongest asset
MIN_BUY = 0             #minimum dollar amount for any single order
ROBINHOOD_GOLD = 0      #extra buying power beyond cash, if any

def initialize(context):
    """Called once at the start of the algorithm."""
    schedule_function(set_allocation, date_rules.month_start(), time_rules.market_open())
    schedule_function(my_rebalance, date_rules.month_start(days_offset=0), time_rules.market_open(hours=1))
    schedule_function(buy_longs, date_rules.month_start(days_offset=0), time_rules.market_open(hours=2))
    context.sp500 =         sid(8554)    #S&P 500                             SPY
    context.midvalue =      sid(21770)   #US Midcap Value                     IJJ, VOE after  7/1/2008
    context.allworld =      sid(33486)   #All World ex-US Stocks              EFA, VEU after  1/1/2010
    context.world_small =   sid(22972)   #All World ex-US Small Cap Stocks    EFA, SCZ after  7/1/2008, VSS after  1/1/2012
    context.outofmarket =   sid(23921)   #Long Term Treasuries                TLT
    context.tips =          sid(25801)   #Inflation Protected (TIPS)          TIP
    context.started = 0

def set_allocation(context, data):
    """Get our portfolio allocation"""
    if get_datetime('US/Eastern').date() >= datetime.date(2008, 7, 1):
        context.world_small =   sid(35248)   #SCZ
        context.allworld =      sid(22972)   #EFA
        context.midvalue =      sid(32521)   #VOE
    if get_datetime('US/Eastern').date() >= datetime.date(2012, 4, 1):
        context.world_small =   sid(38272)   #VSS
        context.allworld =      sid(33486)   #VEU
    assets = [context.sp500, context.midvalue, context.allworld, context.world_small, context.outofmarket, context.tips]
    #Get best asset class within each subcategory
    df = pd.DataFrame(columns=['Weight','Score1','Score2','1Year','6Mon','3Mon','1Mon']) 
    #Calculate Momentum Ratios
    for stock in assets:
        his = data.history(stock, "price", 252, frequency="1d")
        df.loc[stock, '1Mon'] = his[-1] / his[-21] - 1
        df.loc[stock, '3Mon'] = his[-1] / his[-63] - 1
        df.loc[stock, '6Mon'] = his[-1] / his[-126] - 1
        df.loc[stock, '1Year'] = his[-1] / his[0] - 1
    #Check Term Trend is Positive
    df = df.astype(float)
    df['Score1'] = df['1Mon'] + df['3Mon'] + df['6Mon']
    df['Score2'] = df['3Mon'] + df['6Mon'] + df['1Year']
    df['Weight'] = df['Score1']
    df.loc[df['Weight'] < 0, 'Weight'] = 0.0
    #Set allworld, outofmarket, midvalue, and tips weights to 0
    df.loc[context.allworld, 'Weight'] = 0.0
    df.loc[context.outofmarket, 'Weight'] = 0.0
    df.loc[context.midvalue, 'Weight'] = 0.0
    df.loc[context.tips, 'Weight'] = 0.0
    #Get only top assets
    df.loc[~df.index.isin(df['Weight'].nlargest(MAX_ASSETS).index.tolist()),'Weight'] = 0.0
    #Add Safe if none others are positive
    if len(df[df.Weight > 0]) == 0:
        if df.loc[context.tips, '1Mon'] > df.loc[context.outofmarket, '1Mon']:
            df.loc[context.tips, 'Weight'] = 1.0
        else:
            df.loc[context.outofmarket, 'Weight'] = 1.0
    #Determine Weights
    sum_weight = sum(df['Weight'])
    df['Weight'] = df['Weight']/sum_weight
    context.good = df
    record(sp500 = df.Weight.loc[context.sp500],
           world_small = df.Weight.loc[context.world_small]*0.75,
           treasuries = df.Weight.loc[context.outofmarket]*0.5,
           tips = df.Weight.loc[context.tips]*0.25,
           leverage = context.account.leverage)

def buy_longs(context, data):
    """Determine how much of each asset to buy and place orders, making sure no extra cash is used"""
    stocks = context.good.index.tolist()
    weight = context.good['Weight'].values.tolist()      
    n = len(weight)
    if n < 1:
        return
    #Determine necessary contribution
    for x in range(0, n):
        desired_balance = context.good.loc[stocks[x], 'Weight']*context.portfolio.portfolio_value
        curr_price = data.current(stocks[x],'price')
        current_balance = context.portfolio.positions[stocks[x]].amount*curr_price
        context.good.loc[stocks[x], 'Need'] = desired_balance-current_balance
        context.good.loc[stocks[x], 'Price'] = curr_price*1.005
    #Determine how much to get of each (truncate by share price)
    context.good['Get'] = context.good['Need']
    context.good.loc[context.good.Get < 0,'Get'] = 0 #set all gets less than 0 to 0
    get_sum = context.good['Get'].sum()
    if get_sum == 0:
        get_sum = 1
    cash = + ROBINHOOD_GOLD
    context.good['Get'] = context.good['Get']*cash/get_sum #scale gets by available cash
    context.good.loc[context.good.Get < MIN_BUY,'Get'] = 0 #set all gets less than 0 to 0
    context.good['Shares'] = np.trunc(context.good['Get']/context.good['Price']) #determine number of shares to buy
    context.good['Get'] = context.good['Shares'] * context.good['Price'] #recalculate how much will be bought from truncated shares
    #Figure out remaining cash and buy more of the stock that needs it most
    cash = cash - context.good['Get'].sum()
    context.good.loc[context.good['Need'].idxmax(),'Get'] += cash #use up all cash
    context.good['Shares'] = np.trunc(context.good['Get']/context.good['Price']) #recalculate number of shares after adding left over cash back in
    context.good['Get'] = context.good['Shares'] * context.good['Price'] #recalculate how much will be bought from truncated shares
    #place orders for each asset
    for x in range(0, n):   
        if data.can_trade(stocks[x]):         
            order(stocks[x], context.good.loc[stocks[x], 'Shares'], style=LimitOrder(context.good.loc[stocks[x], 'Price']))
    print(context.good[['Weight','Need','Get']].sort_values(by='Need', ascending=False))

def my_rebalance(context,data):
    """Scale down stocks held that are in portfolio, sell any that aren't"""
    context.started = 1
    context.long_turnover = 0
    good_stocks = context.good.index.tolist()
    print_out = ''
    #Sell stocks that are not in our lists
    for security in context.portfolio.positions:
        cost = context.portfolio.positions[security].cost_basis
        price = context.portfolio.positions[security].last_sale_price
        amount = context.portfolio.positions[security].amount
        gain = (price-cost)*amount
        if security not in good_stocks and data.can_trade(security):
            order_target(security, 0)  #assumed: the sell order itself appears lost from this listing
            print_out += '\nSell: ' + security.symbol + ' | Gains: $' + '{:06.2f}'.format(gain) + ' | Gain: ' + '{:04.2f}'.format((price/cost-1)*100) + '%'
            context.long_turnover += 1
    #Determine weights and trim good stocks
    n = len(good_stocks)
    curr_weights = np.zeros(n)
    weight = context.good['Weight'].values.tolist()        
    for x in range(0, n):   
        security = good_stocks[x]
        curr_weights[x] = context.portfolio.positions[security].amount * context.portfolio.positions[security].last_sale_price / context.portfolio.portfolio_value
        if curr_weights[x] > weight[x]:
            order_target_percent(security, weight[x])  #assumed: trim the position back to its target weight
            print_out += '\nTrim: ' + security.symbol
    print(print_out)

Attached is a notebook with some more research information. It also includes a cell that can be used to calculate the signal now or on any given day.


Hi Stephen!

When you said "compute 1+3+6 month momentum signal," did you mean we must add these values?

Cool post! Haven't read it 100% yet, but I think this is kind of what I'm looking for nowadays strategy wise.

Yep, just add the signals. The backtest and the attached notebook show the calculation in more detail but it's super simple!

Hi @Stephen, very nice > 100 year LONG-term perspective & expansion on Gary's (excellent) work. I love it!!

Wow great work! What do you think of adding leveraged ETFs instead (SPXL, TMF, etc.)? They have a much higher expense ratio but the dynamic nature of this algo might make up for it. I am a newcomer so please relieve me of my ignorance if I am way off with this idea. Thank you!

Gary has some comments on this approach:

Govind, thanks for sharing, I did see this! I'm not entirely sure Gary read my full blog (I don't blame him if he didn't; it was long!) because I directly addressed his concerns before he voiced them - because I shared them! I admit to first doing some guess-and-check on the ~20-year period I initially had in Portfolio Visualizer. But I recognized that what happened over the past 20 years could have been a total fluke, and before posting the strategy I wanted to test it on data I hadn't seen before. So I went and compiled and shared data going back to 1871. And this "accelerating dual momentum" strategy worked better than the base GEM over that period too, without any tweaking or data mining done by me there. I fully discussed it here, although I probably need to write a more focused and shorter blog post on the topic.

I also computed the rolling 30-year returns to show that the ADM strategy beats or at least ties GEM in virtually every 30-year period. I can concede that data mining might have crept into the more recent history, but I'm not sure how one can objectively look at the 100-plus-year backtest and say it's the result of data mining.

I'd also like to remind Gary and others that he acknowledged in his book the potential of looking at accelerating momentum signals. Here's his quote:

Accelerating momentum as either curvature or trend salience might be effective with stock indexes and other assets, in addition to its use with individual stocks.

So I'm not sure why he's knocking the strategy on grounds that aren't entirely accurate, when he has admitted in the past to the potential that accelerating momentum offers. I suppose in his book he only gave it a few paragraphs and was dismissive there. But again, the strategy has worked better than regular GEM in all time periods, and it makes a decent amount of intuitive sense to me (and hopefully others!).

BTW Andrew, avoid leveraged ETFs like the plague. Because of daily compounding, they don't perform as you'd initially expect over horizons longer than a single day. I should probably write my own post on the subject, but some googling will turn up good articles on the pitfalls of leveraged ETFs for investment horizons longer than a day.
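To make the compounding point concrete, here is a toy arithmetic example with hypothetical returns: a 2x daily-reset fund on an underlying that alternates +10% and -10% days loses considerably more than twice the underlying's period return:

```python
# Hypothetical path: the underlying alternates +10% and -10% daily returns
daily_returns = [0.10, -0.10, 0.10, -0.10]

underlying = 1.0   # growth of $1 in the underlying index
lev_2x = 1.0       # growth of $1 in a 2x daily-reset ETF (fees ignored)
for r in daily_returns:
    underlying *= 1 + r
    lev_2x *= 1 + 2 * r       # the leverage resets against each day's return

period_return = underlying - 1   # about -2.0%
naive_2x = 2 * period_return     # about -4.0%, what "2x" naively suggests
actual_2x = lev_2x - 1           # about -7.8%, the daily-reset result
```

The gap between naive_2x and actual_2x is the volatility drag of daily resetting, and it widens with both the holding period and the volatility of the path.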

Stephen - While you give reasons to justify their use based on the last 20 years of data, I think Gary's point is that your use of shorter look-back periods, long-term bonds, and small-cap international stocks may all come from data mining. He shows they do not hold up so well on longer-term data. Haven't both strategies performed the same over a trailing 30 year period since the 1990s according to your chart? Prior to 1970, isn't your strategy no longer dual momentum, since you have no international stocks to compare to the S&P 500? In other words, you are just looking at the trend of the S&P 500. I'm also unclear why you call your strategy Accelerating Dual Momentum. In Gary's book, he refers to 2 studies of accelerating momentum. The first uses a linear regression of daily returns against the square of time to see if momentum has been accelerating. The second uses the slope of performance relative to the 12-month geometric average rate of return. It looks to me like you have just dual momentum with shorter look-back periods from 1970 forward and simple trend following prior to 1970.

Can this algo run in a live account?

Hi Stephen

I noticed that you are using price momentum vs total returns momentum (including dividends) in your algorithm. I believe Gary's algorithm works with total returns. Can you please confirm that my understanding is correct about how your algorithm is calculating momentum?

Update - Stephen - never mind; it looks like Quantopian price data is already dividend adjusted.

A second question: Gary's algo compares each asset's total return to the risk-free asset's total return before rank-ordering them. The algo in your pipeline simply looks for positive momentum rather than comparing to a risk-free asset, per my understanding. Am I getting this correct?


The attached Quantopian algorithm did not explicitly include dividends; I always had a tough time understanding how Quantopian handled dividends in the price data.

I don't use the risk-free asset's total return; I just look to see if the average of the 1-, 3-, and 6-month returns is positive. My testing showed that measuring absolute return relative to the risk-free asset just confused the matter and marginally reduced returns (it's another way of trying to keep you out of the market, and in the long run it's better to be in than out!).

I've actually gone away from using the Quantopian algorithm and now go directly to Portfolio Visualizer (see the "Timing Periods" tab). I found that my Quantopian algorithm uses a fixed number of trading days for its lookback, while a calendar-month lookback can actually differ quite a bit. I've just found it simpler to use the Portfolio Visualizer tool, but this Quantopian algorithm should agree 99% of the time (except during highly volatile times and inconsistent-length months... like the beginning of 2019).
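The distinction can be seen in a quick sketch on synthetic data (the dates and series here are illustrative, not taken from the algorithm): a 21-trading-day lookback and a calendar month-end lookback generally give slightly different 1-month returns:

```python
import numpy as np
import pandas as pd

# Synthetic daily price series over business days (illustrative only)
idx = pd.bdate_range("2018-06-01", "2019-01-31")
prices = pd.Series(np.linspace(100.0, 120.0, len(idx)), index=idx)

# Fixed lookback in trading days, as the Quantopian algorithm does
fixed_1mo = prices.iloc[-1] / prices.iloc[-22] - 1   # 21 trading days back

# Calendar-month lookback, as Portfolio Visualizer does: compare month-end closes
# (groupby on year/month avoids version-specific resample frequency aliases)
month_end = prices.groupby([prices.index.year, prices.index.month]).last()
calendar_1mo = month_end.iloc[-1] / month_end.iloc[-2] - 1
```

The two numbers usually agree closely, but months with an unusual number of trading days (or sharp moves near month boundaries) can push them apart and flip a close signal.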

After Portfolio Visualizer introduced paid plans and stopped giving forward-looking signal results, Quantopian is back to being the only free option left (besides manually calculating the signal). Attached is a super simple notebook to provide that calculation.


It looks great! Also, you can easily run this strategy on your own computer with IBridgePy, because it does not need Q's pipeline.
IBridgePy is an easy-to-use Python platform for running Quantopian strategies on your own computer, and there are a lot of tutorials on YouTube.
For example: Intro of IBridgePy: backtest and live trading

Disclaimer: This is Dr. Hui Liu, who created IBridgePy

I cloned your algorithm and backtested it but got drastically lower results. Not sure where the discrepancy lies.


Has this strategy been tested with limit-price entries and exits, and possibly with a target sell price (stop-gain rule)?

Since the strategy uses so few ETFs, it is probably well suited to investigating whether positions can be entered and exited at better levels than the end-of-month closing price.

For example:
Entering 0.5% above yesterday's low
Exit 0.5% below yesterday's high
otherwise, a simple position stop-gain rule with a threshold around 10% -15%

Has anyone tested the strategy with trading costs?

I don't know exactly why, but I can't reproduce the same signals as you for SPY, VSS, and TLT.

As you can see from the attached note, when I use the sum of the 1-month, 3-month, and 6-month returns (from Barchart), VSS is the purchase signal for July, but the print screen from your script shows SPY as the signal for July.

What's right here? Is there anything about the strategy I haven't understood?

Barchart Script

Gary changed his blog. His comments on this strategy are now at

I've submitted this twice on Stephen's Engineered Portfolio blog, but Stephen hasn't allowed it to appear there.