CSSAnalytics - A Simple Tactical Asset Allocation Portfolio with Percentile Channels

[EDIT: see bottom of thread for most recent versions]

Hey everyone,

I just secured my Quantcon ticket, so I'll take the opportunity to say 'Hi!' to the community and give back a little with an implementation from the quant blogosphere.

The attached algorithm is an adaptation of a recent tactical asset allocation portfolio from David Varadi @ CSSAnalytics: "A Simple Tactical Asset Allocation Portfolio with Percentile Channels". I took the liberty of modifying it slightly from the original, as noted in the code.

  • ASSETS: US stocks, EAFE stocks, US REIT, US Corporate Bonds, US Government Bonds (= cash)
  • BUY RULE: Go long if the current price >= the 0.75 quantile (75th percentile) of each of the [60, 120, 180, 252]-day channels
  • SELL RULE: Sell the position and move to cash if the current price <= the 0.25 quantile (25th percentile) of each of the [60, 120, 180, 252]-day channels
  • ACTIVATION: Once a month, 5 minutes before the close of the last trading day.
  • SIZING: proportional to the number of long channels and to each asset's inverse-volatility share of the universe (20-day volatility); see the sketch after this list
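To make the rules above concrete, here is a minimal standalone sketch of the channel logic and sizing in plain pandas. It is illustrative only and sits outside the Quantopian API; the DataFrame layout and the CASH label are assumptions for the example, not the exact code attached below.

    # Minimal standalone sketch of the percentile-channel rules (pandas/numpy only).
    import numpy as np
    import pandas as pd

    CHANNELS = [60, 120, 180, 252]
    ENTRY_Q, EXIT_Q = 0.75, 0.25

    def channel_signals(prices, modes):
        """prices: DataFrame of daily closes (rows = dates, cols = active assets).
        modes: dict {(asset, lookback): 0 or 1} carrying the current in/out state."""
        latest = prices.iloc[-1]
        for lookback in CHANNELS:
            window = prices.iloc[-lookback:]
            upper = window.quantile(ENTRY_Q)   # 75th-percentile channel
            lower = window.quantile(EXIT_Q)    # 25th-percentile channel
            for asset in prices.columns:
                if modes[(asset, lookback)] == 0 and latest[asset] >= upper[asset]:
                    modes[(asset, lookback)] = 1        # enter long
                elif modes[(asset, lookback)] == 1 and latest[asset] <= lower[asset]:
                    modes[(asset, lookback)] = 0        # exit to cash
        return modes

    def size_positions(prices, modes, cash_label='CASH'):
        """Each (asset, channel) slot gets an inverse-volatility share of
        1/len(CHANNELS); slots that are 'out' send their share to cash."""
        rets = prices.iloc[-20:].pct_change()
        inv_vol = 1.0 / rets.std()
        share = inv_vol / inv_vol.sum() / len(CHANNELS)
        alloc = pd.Series(0.0, index=list(prices.columns) + [cash_label])
        for (asset, lookback), mode in modes.items():
            alloc[asset if mode == 1 else cash_label] += share[asset]
        return alloc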

The strategy is simple and consistent, and I like it for its use of percentile channels. Of course, it isn't optimized, but for a monthly approach it has fairly low volatility and drawdown (less than 10 percent). Notably, when backtesting with our Quantopian data, there isn't a single down year and it still provides the performance of a balanced 60/40 portfolio. As a caveat, I'll point out that it often relies on corporate and government bonds, and I wonder whether it will keep performing in an environment of rising interest rates.

I hope you'll find this implementation useful or, at least, fun to play with. I've certainly learned an interesting spin on percentile channels from it. In the same vein of learning, I'd appreciate your feedback on my implementation as I'm just getting warmed up with the Quantopian platform.

See you at Quantcon. Cheers!

[Attached backtest: 54f017c719f3cc0f1164dfc6]

35 responses

I actually did this yesterday as well, although I'm still working out some of the bugs. I shamelessly borrowed your code and plugged in the securities I was working with:

[Attached backtest: 54f0a69bba336a0f10eb3da5]

Hi Ilya, it's good to see you in a pythonic forum ;-)

Oh, hey. Yeah, well, I can't write a line of this zipline stuff. Would love to learn someday, but doing my own thing at the moment >_<. Wish there was a MOOC on it instead of just a tiny bit of documentation. Maybe I can barter some R consulting for help getting this thing off the ground one day >_<

Hey Ilya, thanks for the link. Your evaluation convinced me to give the strategy a shot in the first place. Blog's awesome, keep it going. Also, I'd be glad to barter some pythonic zipline for R one day. ;-)

Tartarus, let me know when you complete your implementation. I'd be glad to see some of your results for cross-validation.

Hello,
What is context.cash (Line 24)?
Also, can you change the codes under context.stocks to change what the algorithm is buying?
Thanks.

Hi Nick, cash is IEF (7-10 yr US Govt bonds ETF). You can clone the algorithm and switch out stocks. This is what it looks like with a bunch of high-flying tech companies:

[Attached backtest: 54f1640f1f52430f0c062589]

Thanks so much, Alex!

Why do I get an error when I run the code from your last reply, Alex?

# CSSAnalytics - Tactical Percentile Channels  
#  
# Source:  
#   David Varadi  
#   "A Simple Tactical Asset Allocation Portfolio with Percentile Channels"  
#   (https://cssanalytics.wordpress.com/2015/01/26/a-simple-tactical-asset-allocation-portfolio-with-percentile-channels/)  
#  
# Implementation notes:  
#   - Removed commodity asset  
#   - Added Intl. Equity asset  
#   - Cash asset is 10 year US Govt Bond  
#

import pandas as pd  
import numpy as np

def initialize(context):  
    # original active: VTI, IYR, LQD, DBC  
    # original cash:   SHY  
    # active: VTI, EFA, ICF, LQD  
    # cash:   IEF  
    context.active = [sid(24), sid(39840), sid(16841), sid(26578), sid(3951), sid(5061), sid(42950)]  
    context.cash = sid(23911)  
    context.assets = set(context.active + [context.cash])  
    context.channels = [60, 120, 180, 252]  
    context.entry = 0.75  
    context.exit = 0.25  
    context.leverage = 1.00  
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)  
    context.modes = {}  
    for s in context.active:  
        for l in context.channels:  
            context.modes[(s, l)] = 0

    schedule_function(  
        reallocate,  
        date_rules.month_end(days_offset=0),  
        time_rules.market_close(minutes=6)  
    )  
    schedule_function(  
        rebalance,  
        date_rules.month_end(days_offset=0),  
        time_rules.market_close(minutes=5)  
    )

def handle_data(context, data):  
    pass

def rebalance(context, data):  
    for s in context.alloc.index:  
        if s in data:  
            order_target_percent(s, context.alloc[s] * context.leverage)  
def reallocate(context, data):  
    h = history(300, '1d', 'price')[context.active]  
    hs = h.ix[-20:]  
    p = h.ix[-1]

    hvol = 1.0 / (hs / hs.shift(1)).apply(np.log).std()  
    hvol_all = hvol.sum()  
    r = (hvol / hvol_all) * 1.0 / len(context.channels)  
    alloc = pd.Series([0.0] * len(context.assets), index=context.assets)  
    for l in context.channels:  
        values = h.ix[-l:]  
        entry = values.quantile(context.entry)  
        exit = values.quantile(context.exit)  
        for s in context.active:  
            m = (s, l)  
            if context.modes[m] == 0 and p[s] >= entry[s]:  
                print "entry: %s/%d" % (s.symbol, l)  
                context.modes[m] = 1  
            elif context.modes[m] == 1 and p[s] <= exit[s]:  
                print "exit: %s/%d" % (s.symbol, l)  
                context.modes[m] = 0  
            if context.modes[m] == 0:  
                alloc[context.cash] += r[s]  
            elif context.modes[m] == 1:  
                alloc[s] += r[s]  
    context.alloc = alloc  
    print "Allocation:\n%s" % context.alloc  

Thanks,
Nick

[Attached backtest: 54f1e066bd1f2e0f23437e9b]

I'm not 100% sure why it's getting a NaN, but it can effectively be treated as wanting to invest 0%, because the other allocations still add up to 100%.

Alex, I broke it ...and can't figure out how.

Just tweaked David's default assets with my own picks. Keep getting "ValueError: cannot convert float NaN to integer. There was a runtime error on line 57." (I never got close to line 57.)

Here's the only mod I made (just the symbol lists):

def initialize(context):
    # original active: VTI, IYR, LQD, DBC
    # original cash:   SHY

    # active: VTI, EFA, ICF, LQD  
    # cash:   IEF  
    context.active = [symbol('GOOG'), symbol('AAPL'), symbol('HASI'), symbol('XLP'), symbol('XLU'), symbol('SNH'), symbol('FBT')]  
    context.cash = symbol('IEF')

Anyone see the source of the error?

(Suspect this would make a nice portfolio ...in the intermediate-term.)

Thanks...

NaNs from missing data for your particular stocks are probably getting into r. To verify that, import isnan:

from math import isnan  

and then test for NaN when looping over s:

        for s in context.active:  
            if isnan(r[s]):  
                log.info('{s}: found NaN in r'.format(s=s))  

If you set a breakpoint on the log line, you can examine things in more detail.

If the problem is indeed in r, you could try replacing the NaNs with a default right after calculating r, something like this:

    channel_weight = 1.0 / len(context.channels)  
    r = (hvol / hvol_all) * channel_weight  
    active_asset_channel_weight = (1.0 / len(context.active)) * channel_weight  
    r.fillna(value=active_asset_channel_weight, inplace=True)  

Michael,

I really appreciate your reply. However, I'm a Q newb and you sound like a python-meister. (Spent this morning watching YouTube tutorials!) Is there any way you could integrate those changes into the original code with my tickers? That way I could see exactly what corrected the error and connect the dots with your explanation above. (If not, I understand.)

Again ...grateful for any help.
Stephen

Here it is with your symbols and the NaN replacement but without the extra debugging code. I tried it with daily data from 1/1/2013 to 6/1/2013, and there were no errors.

# CSSAnalytics - Momentum Channels  
#  
# Source:  
#   David Varadi  
#   "A Simple Tactical Asset Allocation Portfolio with Percentile Channels"  
#   (https://cssanalytics.wordpress.com/2015/01/26/a-simple-tactical-asset-allocation-portfolio-with-percentile-channels/)  
#  
# Implementation notes:  
#   - Removed commodity asset  
#   - Added Intl. Equity asset  
#   - Cash asset is 10 year US Govt Bond  
#

import pandas as pd  
import numpy as np

def initialize(context):  
    # original active: VTI, IYR, LQD, DBC  
    # original cash:   SHY  
    context.active = [symbol('GOOG'), symbol('AAPL'), symbol('HASI'),  
                      symbol('XLP'), symbol('XLU'), symbol('SNH'), symbol('FBT')]  
    context.cash = symbol('IEF')  
    context.assets = set(context.active + [context.cash])  
    context.channels = [60, 120, 180, 252]  
    context.entry = 0.75  
    context.exit = 0.25  
    context.leverage = 1.00  
    context.alloc = pd.Series([0.0] * len(context.assets), index=context.assets)  
    context.modes = {}  
    for s in context.active:  
        for l in context.channels:  
            context.modes[(s, l)] = 0

    schedule_function(  
        reallocate,  
        date_rules.month_end(days_offset=0),  
        time_rules.market_close(minutes=6)  
    )  
    schedule_function(  
        rebalance,  
        date_rules.month_end(days_offset=0),  
        time_rules.market_close(minutes=5)  
    )


def handle_data(context, data):  
    pass

def rebalance(context, data):  
    for s in context.alloc.index:  
        if s in data:  
            order_target_percent(s, context.alloc[s] * context.leverage)  
def reallocate(context, data):  
    h = history(300, '1d', 'price')[context.active]  
    hs = h.ix[-20:]  
    p = h.ix[-1]

    hvol = 1.0 / hs.pct_change().std()  
    hvol_all = hvol.sum()  
    channel_weight = 1.0 / len(context.channels)  
    r = (hvol / hvol_all) * channel_weight  
    active_asset_channel_weight = (1.0 / len(context.active)) * channel_weight  
    r.fillna(value=active_asset_channel_weight, inplace=True)  
    alloc = pd.Series([0.0] * len(context.assets), index=context.assets)  
    for l in context.channels:  
        values = h.ix[-l:]  
        entry = values.quantile(context.entry)  
        exit = values.quantile(context.exit)  
        for s in context.active:  
            m = (s, l)  
            if context.modes[m] == 0 and p[s] >= entry[s]:  
                print "entry: %s/%d" % (s.symbol, l)  
                context.modes[m] = 1  
            elif context.modes[m] == 1 and p[s] <= exit[s]:  
                print "exit: %s/%d" % (s.symbol, l)  
                context.modes[m] = 0  
            if context.modes[m] == 0:  
                alloc[context.cash] += r[s]  
            elif context.modes[m] == 1:  
                alloc[s] += r[s]  
    context.alloc = alloc  
    print "Allocation:\n%s" % context.alloc  

Runs like a charm.

Michael, I really appreciate your help on this and the quick reply. I'll look into what you did to fix it. (No need to reply.)

Owe ya!
Stephen

I’d like to make one change in a clone of the algorithm. The original version (above) would serve to backtest different mixes of instruments, e.g. for in-sample testing. The new, modified copy would serve solely to provide me with rebalancing weights, but only on the day I wish to rebalance, i.e. not automatically at the end of every month.

The ability to run it “on demand” would allow it to act as a decision support tool for live trading in a real-money account.

Could someone tell me what change I make to the code? I’m guessing the change is in this segment:

schedule_function(  
    rebalance,  
    date_rules.month_end(days_offset=0),  
    time_rules.market_close(minutes=5)  
)

I’m perfectly happy to go into the IDE, on the day I want to rebalance, and manually put a date in a line of code.

Thanks everyone,

Stephen

Hi Stephen,

Although I'd recommend against this, one way to go about implementing this would be to:

1) Move all your code from def rebalance() into def handle_data():

def handle_data(context, data):  
    for s in context.alloc.index:  
        if s in data:  
            order_target_percent(s, context.alloc[s] * context.leverage)  

2) Set up a Fetcher file - Tutorial here - that contains the dates you want to rebalance and a signal that tells handle_data() when to place orders
CSV file format:
Date | Only_order_on

1/27/2014 | 1/27/2014
2/28/2014 | 2/28/2014


#fetch the CSV  
fetch_csv(file, symbol='order_date', date_column='Date')

#in handle_data  
if get_datetime() == data['order_date']['Only_order_on']:  
    # rebalance here  

3) Update the Fetcher file daily, e.g. keep the historical rows but append new ones on top. This would be done outside of Quantopian, on your own.
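As a rough, platform-agnostic illustration of the gating idea in steps 2 and 3 (the CSV file name and column name here are hypothetical, and inside Quantopian you would still need Fetcher rather than a local file):

    # Sketch: only act on dates listed in a small, hand-maintained CSV.
    import datetime
    import pandas as pd

    def should_rebalance_today(csv_path='rebalance_dates.csv'):
        # CSV with a 'date' column listing the days you want to trade.
        dates = pd.read_csv(csv_path, parse_dates=['date'])['date'].dt.date
        return datetime.date.today() in set(dates)

    # In a daily driver you would then do something like:
    # if should_rebalance_today():
    #     place_rebalancing_orders()   # your order_target_percent calls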

If you were to take the other route and manually choose which day to rebalance by entering in some variable in the environment e.g. context.order_date='1/2/2015', you'd have to stop and start your algorithm every time you make a change.


Thanks Seong. Although not recommended, three good options to choose from.

pretty good!

Couple of things

1) I believe the intent of the model is to check price against the percentile channels every day, not just on the rebalance day, and then to trade based on those continuous checks every month. Meaning that if the price dipped below the 25% mark a few days ago, then moved back up to somewhere around the 50% mark by the rebalance day, that would still be a "sell" signal. At least that is what I get from reading the follow-up posts by Varadi, Kipnis, and others.

2) I also think that the volatility weighting is supposed to be done without the cash asset, and then the cash asset gets whatever is left over (see the sketch after point 3). Again, just based on what I've read on Varadi's blog.

3) The results of the model appear less robust to the choice of rebalance date than one might like. Try offsetting the monthly rebalance by 5, 10 or 15 trading days. You will notice significant differences in CAGR and MDD, which suggests that some portion of the return/risk element here is due to a lucky choice of optimal start date. That suggests one of two tactics: a) run this as 4 concurrent portfolios, each trading a week apart, to average out the effect, or b) try running it weekly (which, as it turns out, seems to work well with portfolio size >= $10K since transaction costs here are minimal).
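To illustrate point 2 only (this is not the attached rework; the asset labels and numbers are placeholders): weight the active assets by inverse volatility without the cash asset, scale by the fraction of channels that are long, and let cash absorb whatever is left.

    # Sketch of point 2: inverse-volatility weights over active assets only,
    # scaled by the fraction of channels currently long; cash gets the remainder.
    import pandas as pd

    def allocate(inv_vol, channels_on, n_channels=4, cash_label='CASH'):
        """inv_vol: Series of 1/vol for the active (non-cash) assets.
        channels_on: Series with the number of long channels (0..n_channels) per asset."""
        base = inv_vol / inv_vol.sum()                   # weights ignore the cash asset
        active = base * (channels_on / float(n_channels))
        alloc = active.copy()
        alloc[cash_label] = 1.0 - active.sum()           # cash absorbs the leftover
        return alloc

    # Toy example: one asset fully long, one half long, one flat.
    inv_vol = pd.Series({'VTI': 0.8, 'ICF': 0.5, 'LQD': 1.7})
    channels_on = pd.Series({'VTI': 4, 'ICF': 2, 'LQD': 0})
    print(allocate(inv_vol, channels_on))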

Attached is my own rework of Alex's algorithm with the changes above implemented (I think - I'm still a Python novice).

[Attached backtest: 54fc8801d9a9200f1092d856]

Agree on all three points.

In a paper titled “Adaptive Asset Allocation: A Primer,” David Varadi and colleagues (Butler, Philbrick, and Wolter) concluded with a series of models rebalanced weekly. The return plot comes very close to a straight line, with a worst monthly loss of just 8%.

Reference: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2328254

In contrast, Walter Sun, et al. (Massachusetts Institute of Technology) examined optimal rebalancing schedules and concluded:

(1) Ad hoc methods of periodic rebalancing are simple but suboptimal. (2) Optimal rebalancing is achieved when rebalancing is employed solely to handle adverse moves. In their paper, the dynamic policy minimized transaction costs, improved returns, and improved utility under quadratic, log-wealth, and power utility functions.

Reference: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=639284

Two final considerations:

(1) Rebalancing beyond a 30-day period avoids the wash-sale rule. (2) Holding commission-free ETFs longer than 30 days avoids short-term trading fees (some brokers).

Matt and Stephen, very good. Thanks for checking the algorithm. I've incorporated the changes and cleaned up the original code.

There are two things I'm wondering about:
(1) My original code allocates to 4 separate systems/channels, whereas the proposed one has cross-talk, i.e. one to-cash channel cancels out one to-asset channel. Is this on purpose? (2) I've swapped out the assets to include the exact ones from the CSSA blog. It seems the performance does not quite hold up. Do we need to use adjusted close prices, as proposed by Florent here?

Optimal rebalancing and tax-loss harvesting is another interesting aspect. Given the algorithm is fairly simple, we should be able to improve that as well.

[Attached backtest: 54fcce3faa421c0f0e2b2d06]

So for those who don't read my blog:

Yes, the channels cross-talk. That's the entire point. Also, the signal is looked at every single day, not just at the rebalancing points.
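In scheduling terms, one way to separate the daily signal check from the monthly trade, using the same Quantopian calls that appear in the code earlier in this thread (the function bodies below are stubs, not Varadi's exact rules):

    # Sketch: check the percentile channels every day, but only trade monthly.
    def update_signals(context, data):
        pass  # recompute context.modes against the 25th/75th percentile channels

    def rebalance(context, data):
        pass  # order_target_percent(...) based on context.alloc

    def initialize(context):
        # Look at the channels every trading day...
        schedule_function(update_signals,
                          date_rules.every_day(),
                          time_rules.market_close(minutes=10))
        # ...but only place orders at month end.
        schedule_function(rebalance,
                          date_rules.month_end(days_offset=0),
                          time_rules.market_close(minutes=5))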

I see. And M. Kapler's implementation answers any remaining questions.

That's where I improved my implementation from.

Alex,

Deconstruct the algorithm into three parts: (1) selection, (2) timing, (3) position sizing. With all due respect, a “simple” tactical asset allocation procedure - elegant by nature - selects from a universe whose performance is affected by far more than channel habitat. Take, e.g., the inclusion of IYR. Real estate investments do well when rates are low and equity markets are surging. But there’s no Python code that pulls IYR out of the mix when those conditions are absent.

Or take a “top-down” approach. When market stress is rising (e.g. geopolitical events occurring or tension within banks’ intermarket funding), correlations will be increasing, volatility will be increasing - so fixed income and volatility assets should outperform equities and commodities. No code for that. And there’s one more macro layer: “defense.” Recently utilities, healthcare, and consumer staples were getting large institutional money flows, within a defensive context. No code for real money flows.

In a slightly broader paradigm, you’d pull assets from a universe whose constituents might look like this:

Stress Falling

Equities (VTI)
Commodities (DBC)

Stress Rising

Fixed Income (TLT or IEF based on what’s happening with rates)
Volatility (UVXY)

Defensive

Utilities (XLU)
Healthcare (XLV)
Consumer Staples (XLP)

Then the top-down questions become, “Where are stress levels at? Are imbalances in the financial system sufficient to move me into fixed income …get long volatility? Is it the season to minimize correlation or position size on risk? Should I get defensive? Stress is rising; is the 10-year outperforming the 30-year?”

Alex, I think the interplay of these forces accounts for the underperformance, that is, using a small universe that simply selects by channel position. Feeling strong? Could we write an algorithm - call it “Complex Tactical Risk Allocation”? And then for version 2.0, how about forecasting, rather than following, based upon metrics with statistically valid predictive value - e.g. implied correlation, implied volatility, and non-block institutional money flows?

The overarching question for me, as I try to answer these questions while managing my own life savings, is, “How valuable is an algorithm as a decision support tool?” Despite my greatest respect for simplicity and the empirical advantage of algorithmic trading, I am undecided.

Thought for the day!

Well, this thread is going to get philosophical quickly. Perhaps I need something stronger than coffee to continue.

Stephen - I think you may find a number of people here who are looking to algorithms precisely because they doubt their own ability to make the sorts of subjective decisions you're referencing as consistently and correctly as an algorithm can. I certainly count myself as a member of that group. O'Shaughnessy's "What Works On Wall Street" has a great summary of some of the literature on this topic (Chapter 2). He references a number of examples in the scientific literature that suggest rules-based statistical models represent an upper bound on what humans can accomplish, and that the addition of subjective reasoning to the models consistently worsens performance. That said, I'm sure there are thousands of examples where the opposite is true.

Alex - seems to me that the only asset allocation algorithms which have not underperformed in the last 2 years are the ones which somehow decided to go 100% long SPY at just the right moment. I haven't seen one of those which doesn't also have max drawdowns in the 20s or more. I wonder what we will say about today's "underperformers" after the next market correction.

Matt,

Oh, for sure. Understand. (Try Red Bull?)

Algorithmic trading systems have been shown to significantly outperform common forecasting methods. Algorithms tend not to hold onto losing trades nor hold winning trades until they move adversely. My sense, however, is that measures such as non-block institutional money flows, implied correlation, and implied volatility provide objective (not subjective) support to decision-making. My read on the financial academic literature leaves me thinking that intelligent human decision-making plus machine decision support holds the key to superior returns.

All, I've stumbled through the myriad blog posts on this topic and have yet to find a comprehensive strategy definition outlined in either English or pseudocode. Does one exist? It seems that the logic is stretched over half a dozen posts, yet none are fully explanatory.

BUY RULE: Long position if current price >= 0.75 percentile of [60, 120, 180, 252] days channel

What exactly is the formula for a "channel"? A Donchian channel is assumed; what is the English representation of the formula for said channel?

Well, the objective of this post was: "The attached algorithm is an adaptation of a recent tactical asset allocation portfolio from David Varadi @ CSSAnalytics."
And I believe Alex is doing a wonderful job at it, sharing his code and updating it accordingly.

The rest should be in a different post, to keep things tight and focused... and most specifically with clearly defined rules to follow and implement. No magic, only hard, boring, systematic rules.

Touche! I'll shut up. Agree Alex is doing an awesome job on the algo.

This thread sparked some great discussion indeed. I'm glad it's inspiring. I hope The Q reworks their "forum" soon to accommodate algos and these sorts of related threads.

@Stephen, swapping out assets is straightforward. I'd be glad to see a backtest with these kinds of systematically chosen asset classes.

@Market Tech, the "channels" are the X past closing prices of an asset for X = [60, 120, 180, 252] days. Take the 0.25 and 0.75 quantiles of these prices and compare them to today's close.

Here's the current state of the implementation. Except for adjusted close prices, everything is there. Happy hacking.

[Attached backtest: 54fe9a30aa421c0f0e37c478]

Hi Alex,
I'm just getting warmed up with this strategy, checking here and there whether it's worth incorporating into my portfolio. I have not checked your code yet.
Nonetheless, I believe the current code does not reproduce the original strategy, either in terms of returns or in terms of drawdown - and the interest of this strategy for me is a CAGR > 10% with a max DD < 6% over 20 years.

Using: VTI, ICF, LQD, DBC, cash = SHY

              original | current
Max DD (%)        6    |   18

Annual Returns (%)
2006             12    |   13
2007              5    |   11
2008             10    |   17
2009             13    |   31
2010             14    |   20
2011             12    |    7
2012              8    |    0
2013              7    |   14
2014              7    |   11

What led me to look at this was the fact that the current implementation has flat periods that are not reflected in the original strategy.

I will probably get back to this, as I'd like to see the impact on the CAGR/MaxDD pair when using a leverage of 2 and hedging SPY by shorting TMV (or shorting SPXS instead of being long SPY).

@Alex Greatly appreciated.

I think that this algorithm is great!

It would be interesting to see how you can use the same criteria as filters in Pipeline.
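As a rough, untested sketch of how the entry condition might be expressed with Pipeline (this assumes Quantopian's CustomFactor and USEquityPricing API and is not part of the attached version; the 252-day window is just an example):

    # Sketch: percentile-channel entry condition as a Pipeline factor/filter.
    import numpy as np
    from quantopian.pipeline import Pipeline, CustomFactor
    from quantopian.pipeline.data.builtin import USEquityPricing

    class UpperChannel(CustomFactor):
        """75th percentile of the trailing closes over window_length days."""
        inputs = [USEquityPricing.close]
        window_length = 252

        def compute(self, today, assets, out, close):
            out[:] = np.percentile(close, 75, axis=0)

    def make_pipeline():
        upper = UpperChannel()
        latest = USEquityPricing.close.latest
        above_channel = latest >= upper   # Filter: price at/above the 75th-pct channel
        return Pipeline(columns={'above_channel': above_channel},
                        screen=above_channel)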
Updated version attached.

Best,

[Attached backtest: 58c93c9fb8b56017809cf509]