Mebane Faber Relative Strength Strategy with MA Rule

This is a synthesis of two methods:

Relative Strength Strategies for Investing
Asset Class Momentum - Rotational System
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1585517

A Quantitative Approach to Tactical Asset Allocation
Asset Class Trend Following
Mebane Faber's MA Rule
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=962461

I think it's pretty common, and Mebane has probably published this combination somewhere.

1. Measure the M-month trailing returns of a basket of stocks.
2. Rank the stocks and buy the top-K, but only if the monthly price is above the 10-month SMA.
3. Otherwise, hold cash.

The MA filter reduces the drawdown of the pure relative-strength approach.
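The three steps can be sketched in plain Python (hypothetical price lists; M=3 and top_k=1 here, with a 10-bar SMA standing in for the 10-month rule -- nothing below is from the Quantopian algo itself):

```python
def trailing_return(prices, m):
    """M-period trailing return from a list of prices."""
    return prices[-1] / prices[-1 - m] - 1

def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def select(basket, m=3, n=10, top_k=1):
    """Rank by trailing return; keep the top-K names only if each
    trades above its n-period SMA (the Faber MA rule)."""
    ranked = sorted(basket, key=lambda s: trailing_return(basket[s], m),
                    reverse=True)
    picks = []
    for s in ranked[:top_k]:
        if basket[s][-1] > sma(basket[s], n):
            picks.append(s)
    return picks  # an empty list means hold cash
```

With an uptrending series 'A' and a downtrending series 'B', `select` picks 'A'; if everything is below its SMA, it picks nothing and the portfolio sits in cash.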

[Attached backtest: performance summary (total returns, alpha, beta, Sharpe, Sortino, max drawdown, benchmark returns, volatility) and rolling 1/3/6/12-month metrics not reproduced here.]
# http://papers.ssrn.com/sol3/papers.cfm?abstract_id=962461
# SPY EFA AGG VNQ GLD

import numpy as np

def initialize(context):
    # SPY, EFA, AGG, VNQ, GLD
    context.secs = [sid(8554), sid(22972), sid(25485), sid(26669), sid(26807)]
    set_commission(commission.PerShare(cost=.005))
    leverage = 1.0
    context.top_k = 1
    context.weight = leverage / context.top_k

@batch_transform(refresh_period=20, window_length=61)
def trailing_return(datapanel):
    # Log return over the full 61-day window (roughly 3 months of sessions)
    if datapanel['price'] is None:
        return None
    pricedf = np.log(datapanel['price'])
    return pricedf.ix[-1] - pricedf.ix[0]

def reweight(context, data, wt, min_pct_diff=0.1):
    # Order only the difference between current and target positions,
    # and only when the total drift exceeds min_pct_diff of liquidity.
    liquidity = context.portfolio.positions_value + context.portfolio.cash
    orders = {}
    pct_diff = 0
    for sec in wt.keys():
        target = liquidity * wt[sec] / data[sec].price
        current = context.portfolio.positions[sec].amount
        orders[sec] = target - current
        pct_diff += abs(orders[sec] * data[sec].price / liquidity)
    if pct_diff > min_pct_diff:
        for sec in orders.keys():
            order(sec, orders[sec])

def handle_data(context, data):
    ranks = trailing_return(data)
    # Absolute-momentum filter: 20-day SMA above 200-day SMA
    abs_mom = lambda x: data[x].mavg(20) - data[x].mavg(200)

    if ranks is None:
        return
    ranked_secs = sorted(context.secs, key=lambda x: ranks[x], reverse=True)
    top_secs = ranked_secs[0:context.top_k]
    wt = dict((sec, context.weight if sec in top_secs and abs_mom(sec) > 0 else 0.0)
              for sec in context.secs)
    reweight(context, data, wt)

That's a very nifty example of the batch transform, too. I have to look more into this one! Thanks.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

The idiom of placing orders to close the gap between the current portfolio and the target portfolio is a great one as well.

Oh, you guys

(blushing)

Thanks

That's really interesting, Simon. I wonder if the peak moves, or if there have been any changes over the years. I would probably expect to see something change after Faber published the paper in 2009.

Also, I think I posted this to the wrong system -- those results were for the simple MA crossover, nothing to do with the relative strength. In any case, yeah, interesting stuff. I'll run some more tests tonight to see how a (lower-resolution) heatmap changes over rolling time periods, to see if there was a measurable difference after publication.

Simon, how does the color scale in the heat map work? Does blue mean better returns?


In this case red is better. I am going to double-check and do some more of these tonight...

I double checked the numbers and reimplemented a couple of different ways to be sure. Never got around to the rolling analysis, probably tomorrow.

I tried briefly to reproduce Quantopian's risk numbers, but I posted a separate question about that!

Nice. It seems like the optimum has shifted to 50/100 (from 20/200).

Yeah -- also note that returns in the 2006-2009 period were bad as well; it's just hard to tell because the color scale changes from subplot to subplot. The new ones I do will use a common scale, but I had to go to bed before I got that working properly!

Hi - I just tried playing around with this and discovered that a recent performance optimization has caused a problem with np.log(datapanel['prices']). We are working on a fix!
thanks,
fawce

I simply "cloned" and then built it and am getting an error I can't understand:

AttributeError: log
... USER ALGORITHM:16, in trailing_return
pricedf = np.log(datapanel['price'])

Hi Dan,

We just found and fixed the issue and are now getting the fix to production. What happened is that we made a performance optimization after this algo was shared, and our change inadvertently lost some of the type information for the data sent to the batch transform during simulation. It should be sorted very soon.

thanks,
fawce

https://www.leinenbock.com/cheese-and-etfs/

independent backtest (in python)

Thanks, John, for sharing this algo. It's really well implemented, with some great programming features. I almost feel a bit self-conscious now about my own version (see Simon's post above) that I just coughed up quickly for a friend. What I found is that this strategy gives quite stable and consistent returns with a range of different securities, which really surprised me. Among slow-moving strategies, this is one of the better ones I've come across so far. Considering how slow it is, it wouldn't even require automated execution. The perfect superannuation strategy :-)
Because of the initial spec, in my code I used weeks rather than days as look-back periods, which increased the complexity of the coding enormously. Any ideas on how to simplify that?
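One way to take the weekday bookkeeping out of a weekly look-back (a sketch, not from the algo above; the price series here is synthetic) is to resample daily prices to weekly bars with pandas, after which an N-week trailing return is just an N-row difference:

```python
import pandas as pd

# Synthetic daily price series on business days (stand-in for real data).
idx = pd.bdate_range('2013-01-01', periods=120)
daily = pd.Series(range(100, 220), index=idx, dtype=float)

# Collapse to weekly closes: the last daily price of each Friday-ending week.
weekly = daily.resample('W-FRI').last()

# A 10-week trailing return is now a simple positional lookup.
trailing_10w = weekly.iloc[-1] / weekly.iloc[-11] - 1
```

The same idea works for any weekly indicator: compute it on `weekly` instead of counting trading days by hand.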

Hi John,
Thank you for sharing this algo. I like the way you implemented it, very compact and efficient. It does make it difficult for us newbies to understand, but then efficiency is more important in my mind.

Can you implement a disaster stop loss in the system? Say, if a stock drops more than 15% from its purchase price, it gets sold and won't get bought again unless another stock is bought first. That way, if the whole market is tanking and that stock is still the leader, the system won't keep buying and selling it.

Thank you,
Maji
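For what it's worth, the disaster stop Maji describes can be sketched as a small state machine (plain Python, independent of the Quantopian API; the 15% threshold and the "blocked until another name is bought" rule come from the post, everything else is assumed):

```python
class DisasterStop:
    """Sell a holding that falls more than max_loss below its entry price,
    and block re-entry until a different name has been bought."""

    def __init__(self, max_loss=0.15):
        self.max_loss = max_loss
        self.cost = {}        # entry price per held name
        self.blocked = set()  # names stopped out and not yet unblocked

    def buy(self, name, price):
        if name in self.blocked:
            return False      # re-entry blocked until another name is bought
        self.cost[name] = price
        self.blocked.clear()  # buying a different name lifts the block
        return True

    def check(self, name, price):
        """Return True if the stop fires and the name should be sold."""
        entry = self.cost.get(name)
        if entry is not None and price < entry * (1 - self.max_loss):
            del self.cost[name]
            self.blocked.add(name)
            return True
        return False
```

In the algo above, `check` would run each rebalance before `reweight`, and `buy` would gate the top-ranked pick.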

Question for you all. Seeing this perform so well in backtesting, what are your hesitations about actually taking this algo live?

Chase,
The market is really efficient. Have you seen the plethora of sector rotation systems out there now? It means the market will digest these strategies soon, if it hasn't already, and the advantages will diminish or go away. So I think the return going forward will be less than the backtest shows. You can already see some of the sector rotation systems plateauing.

A classic example is the Opening Range Breakout (ORB) system. When it was first disclosed, it did great. Then the market absorbed it, and the opening range slowly expanded from half an hour to an hour, and sometimes until lunchtime, depending on which system you were looking at. Once people get tired of such systems and abandon them, they slowly regain their effectiveness.

That is the reason why I want a stop loss built into this system so that if it collapses, we don't lose everything.

I hope we have some good discussion on your question.

Hi John:
I have just read the original paper. For a long time I have been a dollar-cost-averaging investor -- basically buying the same stock for over 10 years -- and I have always stayed well above the SMA. Have you come across any comparison between DCA and the relative strength strategy? I am very curious to know before changing my strategy.
San

How do you implement this strategy with set_universe()? I tried, but it didn't seem to work because of the way the data is formatted in the @batch_transform.

Thanks.

I made a version of this that uses the history function so it can be used with minute data. It seems to work but has some long drawdown periods. I'm sure it can be improved with a few adjustments. The results are pretty close to the daily data version.

[Attached backtest: performance summary (total returns, alpha, beta, Sharpe, Sortino, max drawdown, benchmark returns, volatility) and rolling 1/3/6/12-month metrics not reproduced here.]
# http://papers.ssrn.com/sol3/papers.cfm?abstract_id=962461
# SPY EFA AGG VNQ GLD

import numpy as np
import pandas as pd
from pytz import timezone
from zipline.utils import tradingcalendar as calendar

class EventManager(object):
    '''
    Manager for timing the entry point of periodic events.
    '''
    def __init__(self,
                 period=1,
                 max_daily_hits=1,
                 rule_func=None):
        '''
        :Parameters:
            period: integer <default=1>
                number of business days between events

            max_daily_hits: integer <default=1>
                upper limit on the number of times per day the event is
                triggered (trading controls could work for this too in
                some cases)

            rule_func: function (returns a boolean) <default=None>
                decision function for timing an intraday entry point
        '''
        self.period = period
        self.max_daily_hits = max_daily_hits
        self.remaining_hits = max_daily_hits
        self._rule_func = rule_func
        self.next_event_date = None
        self.market_open = None
        self.market_close = None

    @property
    def todays_index(self):
        # Position of the current session in the trading-day calendar
        dt = calendar.canonicalize_datetime(get_datetime())
        return calendar.trading_days.searchsorted(dt)

    def open_and_close(self, dt):
        return calendar.open_and_closes.T[dt]

    def signal(self, *args, **kwargs):
        '''
        Entry point for the rule_func.
        All arguments are passed to rule_func.
        '''
        now = get_datetime()
        dt = calendar.canonicalize_datetime(now)
        if self.next_event_date is None:
            self.next_event_date = dt
            times = self.open_and_close(dt)
            self.market_open = times['market_open']
            self.market_close = times['market_close']
        if now < self.market_open:
            return False
        if now == self.market_close:
            self.set_next_event_date()
        decision = self._rule_func(*args, **kwargs)
        if decision:
            self.remaining_hits -= 1
            if self.remaining_hits <= 0:
                self.set_next_event_date()
        return decision

    def set_next_event_date(self):
        self.remaining_hits = self.max_daily_hits
        idx = self.todays_index + self.period
        self.next_event_date = calendar.trading_days[idx]
        times = self.open_and_close(self.next_event_date)
        self.market_open = times['market_open']
        self.market_close = times['market_close']

def entry_func(dt):
    '''
    Decision function for the intraday entry point: fire between
    11:00 and 11:30 Eastern.
    '''
    dt = dt.astimezone(timezone('US/Eastern'))
    return dt.hour == 11 and dt.minute < 30

################################################################
################################################################

def initialize(context):
    context.secs = [sid(8554), sid(22972), sid(25485), sid(26669), sid(26807)]
    leverage = 1.0
    context.top_k = 1
    context.weight = leverage / context.top_k
    context.e_manager = EventManager(period=20, rule_func=entry_func)

def reweight(context, data, wt, min_pct_diff=0.1):
    liquidity = context.portfolio.positions_value + context.portfolio.cash
    orders = {}
    pct_diff = 0
    for sec in wt.keys():
        target = liquidity * wt[sec] / data[sec].price
        current = context.portfolio.positions[sec].amount
        orders[sec] = target - current
        pct_diff += abs(orders[sec] * data[sec].price / liquidity)
    if pct_diff > min_pct_diff:
        for sec in orders.keys():
            if orders[sec]:
                log.info('\nOrdering %s of %s' % (int(orders[sec]), sec))
                order(sec, orders[sec])

def handle_data(context, data):
    if not context.e_manager.signal(get_datetime()):
        return
    prices = np.log(history(61, '1d', 'price'))
    ranks = prices.ix[-1] - prices.ix[0]
    abs_mom = lambda x: data[x].mavg(20) - data[x].mavg(200)
    ranked_secs = sorted(context.secs, key=lambda x: ranks[x], reverse=True)
    top_secs = ranked_secs[0:context.top_k]
    wt = dict((sec, context.weight if sec in top_secs and abs_mom(sec) > 0 else 0.0)
              for sec in context.secs)
    reweight(context, data, wt)



How can this be coded to buy a 2x leveraged ETF when the fund is above the moving average and then sell when it is below?

I've been looking into this for ages now and am really hoping someone can help.

I'm trying the FTSE 100 with MAs (I have tried many).

Using daily bars, I'm just trying to find a consistent MA, or a few indicators, that can help me exit the market and then try to re-enter at a lower point.

I've tried so many MAs and crossovers and keep getting whipsawed, or even having to enter back in at a higher price.

I'm happy to time my entry back in, but I'm looking for something reliable to exit -- can anyone help?
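On the 2x-ETF question a few posts up: a minimal sketch of the switch (plain Python, synthetic prices; the '2X_ETF'/'CASH' labels and the 200-bar window are illustrative assumptions, not from the thread):

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def target_position(fund_prices, n=200, leveraged_etf='2X_ETF', cash='CASH'):
    """Hold the leveraged ETF while the underlying fund trades above its
    n-period SMA; otherwise hold cash."""
    if len(fund_prices) < n:
        return cash  # not enough history to compute the SMA yet
    return leveraged_etf if fund_prices[-1] > sma(fund_prices, n) else cash
```

Each day, compare the new target against the current holding and trade only on a change; that is what keeps whipsaw turnover down relative to re-entering on every bar.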