New Strategy - Presenting the “Quality Companies in an Uptrend” Model

We wanted to share with the Quantopian community an algorithm named “Quality Companies in an Uptrend”.

This non-optimized, long-only strategy has produced returns of 18.0% on the Q500US universe since 2003, with a Sharpe ratio of 1.05, 12% alpha, and a beta of 0.53.

We’d appreciate your input and feedback on the strategy.

Combining Quality With Momentum

This is a “quantamental” strategy, combining both fundamental factors (in this case, the quality factor) with technical factors (in this case, the cross-sectional momentum factor) in a quantitative, rules-based way.

The idea of this strategy is to first identify high-quality companies, then tactically rotate into those with the best momentum.

What is Quality?

The characteristics of “quality” companies are rather broad. Quality companies are typically defined as those with some combination of:

  • stable earnings
  • strong balance sheets (low debt)
  • high profitability
  • high earnings growth
  • high margins.

How Will We Measure Quality?

For our strategy, we focus on companies with a high return on equity (ROE) ratio.

ROE is calculated by dividing a company’s net income by its average shareholder equity. A higher ROE indicates a higher-quality company, and high-ROE companies have historically produced strong returns.
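As a quick illustration (this helper is not part of the strategy code below, just the arithmetic spelled out):

```python
def return_on_equity(net_income, equity_begin, equity_end):
    """ROE = net income / average shareholder equity over the period."""
    avg_equity = (equity_begin + equity_end) / 2.0
    return net_income / avg_equity

# A company earning $20M on beginning/ending equity of $90M/$110M
# (average equity $100M) has an ROE of 20%.
roe = return_on_equity(20e6, 90e6, 110e6)  # → 0.20
```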

Rules for The “Quality Companies in an Uptrend” Strategy:

  1. Universe = Q500US
  2. Quality (ROE) Filter. We then take the 50 stocks (the top decile) with the highest ROE. This is our quality screen; we are now left with 50 high-quality stocks.
  3. Quality Stocks With Strong Momentum. We then buy the 20 stocks (of our 50 quality stocks) with the strongest relative momentum, skipping the last 10 days (to account for mean reversion over this shorter time frame).
  4. Trend Following Regime Filter. We only enter new positions if the trailing 6-month total return for the S&P 500 is positive. This is measured by the trailing 6-month total return of “SPY”.
  5. This strategy is rebalanced once a month, at the end of the month. We sell any stocks we currently hold that are no longer in our high ROE/high momentum list and replace them with stocks that have since made the list. We only enter new long positions if the trend-following regime filter is passed (SPY’s 6-month momentum is positive).
  6. Any cash not allocated to stocks is allocated to IEF (7-10yr US Treasuries).
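Rules 2 and 3 above can be sketched in plain pandas. This is a standalone illustration on synthetic data; the variable names (`roe`, `prices`, `quality`, `holdings`) are illustrative only, not the algorithm's actual code:

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for pipeline output (ROE per stock) and daily closes.
rng = np.random.default_rng(0)
tickers = ["S%d" % i for i in range(500)]
roe = pd.Series(rng.normal(0.15, 0.05, 500), index=tickers)
prices = pd.DataFrame(
    rng.lognormal(0.0005, 0.01, size=(180, 500)).cumprod(axis=0) * 100,
    columns=tickers,
)

# Rule 2: keep the 50 highest-ROE names (the quality screen).
quality = roe.nlargest(50)

# Rule 3: 126-day momentum, skipping the most recent 10 days,
# then keep the 20 strongest.
momentum = prices[quality.index][:-10].pct_change(126).iloc[-1]
holdings = momentum.nlargest(20)
```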

Potential Improvements?

What potential improvements do you think could be made to this strategy?

Some of our ideas include:

  • A composite to measure Quality, not just ROE
  • Adding a value component
  • Another way to measure momentum?
  • A better/different trend following filter?

We’d love to see what you guys come up with. Given the simple nature of this strategy, the performance is strong over the last 16+ years, and it should provide a good base for further testing.

Christopher Cain, CMT & Larry Connors
Connors Research LLC

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(trade, date_rules.month_end() , time_rules.market_close(minutes=30))
    schedule_function(trade_bonds, date_rules.month_end(), time_rules.market_close(minutes=20))
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roe = Fundamentals.roe.latest

    pipe = Pipeline(columns={'roe': roe},screen=universe)
    return pipe

def before_trading_start(context, data):
    
    context.output = algo.pipeline_output('pipeline')
    context.security_list = context.output.index
        
def trade(context, data):

    ############Trend Following Regime Filter############
    TF_hist = data.history(context.spy , "close", 140, "1d")
    TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    if TF_check > 0.0:
        context.TF_filter = True
    else:
        context.TF_filter = False
    ############Trend Following Regime Filter End############
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(context.security_list,"close", 180, "1d")      
    #DF here is the output of our pipeline, contains 500 rows (for 500 stocks) and one column - ROE
    df = context.output  
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[top_n_roe.index][:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)
            

            
            
def trade_bonds(context , data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Wow, what a great strategy! Thank you for sharing this! I've been meaning to create something similar for trading in my own account (paper only initially).

I made the below quick modifications:

  1. Use ROIC instead of ROE, as ROIC includes debt as well (high returns on equity with little leverage is high quality in my book)
  2. Added a low long-term-debt-to-equity rank to the 'quality' ranking as, again, low leverage is high quality in my book. This results in lower total returns, but also lower volatility and lower drawdowns, so a slightly higher Sharpe ratio.
  3. Also added two 'value' metrics and added to the ranking. I prefer to buy 'quality' when it's on sale, but you can easily comment this out.
  4. Changed rebalance to 6 days before month end. My 'hypothesis' is that most people get paid around this time (25th) so more money might be flowing into the market at this time, pushing it up (I have nothing to back this up, just my theory).

Will try to improve it further when I have more time.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(trade, date_rules.month_end(days_offset=6) , time_rules.market_close(minutes=30))
    schedule_function(trade_bonds, date_rules.month_end(days_offset=6), time_rules.market_close(minutes=20))
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    quality = (roic + 
               ltd_to_eq +
               value +
               0
               )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe

def before_trading_start(context, data):
    
    context.output = algo.pipeline_output('pipeline')
    context.security_list = context.output.index
        
def trade(context, data):

    ############Trend Following Regime Filter############
    TF_hist = data.history(context.spy , "close", 140, "1d")
    TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    if TF_check > 0.0:
        context.TF_filter = True
    else:
        context.TF_filter = False
    ############Trend Following Regime Filter End############
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(context.security_list,"close", 180, "1d")      
    #DF here is the output of our pipeline, contains 500 rows (for 500 stocks) and one column - ROE
    df = context.output  
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[top_n_roe.index][:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)
            

            
            
def trade_bonds(context , data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Another quality rank I would look at possibly adding is consistently high ROIC over, say, the last 5 years, e.g. (mean 5yr ROIC) / (std dev of 5yr ROIC).

Interesting!

Could someone please help me understand lines 58 & 59 of the algorithm? I am new to coding, and I can only understand that line 58 creates a DataFrame of daily close prices for 140 days, and that line 59 calculates the 126-day return and gives that value. Is that correct? I don't understand what iloc[-1] does and why it is required.

Is my understanding correct that the trend filter basically means: if the 126-day return is positive then True, if not then False?
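To illustrate what those two lines do, here is a toy pandas example (independent of the algorithm's own data):

```python
import pandas as pd

closes = pd.Series([100.0, 102.0, 101.0, 105.0, 110.0])

# pct_change(3) computes the 3-period return at every position where a
# value 3 steps back exists; earlier positions are NaN.
returns = closes.pct_change(3)

# .iloc[-1] selects the last element of the Series: the most recent
# 3-period return, which is what the trend filter compares against 0.
latest = returns.iloc[-1]  # (110 - 102) / 102 ≈ 0.0784
```

So yes: the filter takes the most recent 126-day return of SPY and is True when it is positive.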

In good times, smaller companies grow faster than big companies; in bad times, it is better to invest in large companies. You might want to adjust your universe filter based on some metric that shifts allocation between small and large caps.

Great additions Joakim thank you.

I have found that "composite" methods to measure factors such as value and quality tend to work better. This is consistent with the research I have read. One that immediately comes to mind is Jim O'Shaughnessy's book "What Works on Wall Street", where he shows that a composite value factor outperforms each individual value factor. I have found the same with quality.

To be honest, I was surprised by how well this tests out given the simple nature of the original algo. I have also done a lot of robustness testing with this algo, changing the trend-following filter, the momentum lookback, the days skipped, etc., and it holds up well.

Interested to see what others come up with as far as improvements.

Chris

I don't know if it is intentional, but the algo frequently runs at higher leverage. I tried this, and the average leverage over the same period is 1.19. Is there any way to restrict it to the range 1.00-1.02?

@Guy, thank you very much for presenting us with screenshots instead of code of what you have managed to do with another's IP that they very kindly shared on the forums for us all to work on. Sure was useful...

I have to agree with Jamie here. If you are going to modify the strategy please be transparent about what you did and provide the source code in the spirit of collaboration.

Thanks,
Chris

Chris (Cain), I have done a great deal of work on these types of strategies, and I think it essential to test using different rebalance dates, e.g. first of month, 13th, 21st, whatever. I found that huge and rather disturbing differences could result, which made me feel uncomfortable with the robustness of the strategy. The effect was particularly noticeable where the filters resulted in small numbers of stocks in the portfolio. Nonetheless, I will clone your code (for which many thanks) and look more closely with interest.

Incidentally, it is good to see some more ideas coming through which do not follow the stifling criteria for the Quantopian competitions. It makes for a much more interesting forum. I was getting very fed up with the "neutral everything" approach.

Here's an update of my modified version of your strategy. Not sure it's much of an improvement, but posting nonetheless.

The main change is that this one uses SPY MA50 > MA200 as the bull/bear market trend check, rather than trailing positive 6-month returns of SPY. Either way seems to work quite well.

I also added 3-year high-and-'stable_roic' and high-and-'stable_margins' ranks, but these are commented out as they seem to bring down performance, possibly due to making the model too complex. Or maybe I've made a mistake with them?

"One that immediately comes to mind is Jim O'Shaughnessy's book 'What Works on Wall Street', where he shows that a composite value factor outperforms each individual value factor. I have found the same with quality."

^Indeed! I keep hoping they will release a 5th edition, with updates on how their value composites have performed since the last edition. Value factors have struggled in recent years, I believe.

FYI, I won't be sharing any more updates unless others start to contribute as well.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
            out[:] =  np.nanmean(value) / np.nanstd(value)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(trade, date_rules.month_end(days_offset=6) , time_rules.market_close(minutes=30))
    schedule_function(trade_bonds, date_rules.month_end(days_offset=6), time_rules.market_close(minutes=20))
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank() ).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (roic + 
               # stable_roic +
               # stable_margins +
               ltd_to_eq +
               value +
               0
               )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe

def before_trading_start(context, data):
    
    context.output = algo.pipeline_output('pipeline')
    context.security_list = context.output.index
        
def trade(context, data):

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(context.security_list,"close", 180, "1d")      
    #DF here is the output of our pipeline, contains 500 rows (for 500 stocks) and one column - ROE
    df = context.output  
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[top_n_roe.index][:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)
            

            
            
def trade_bonds(context , data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Thanks @Guy, will you contribute any of your spectacular secret sauce here? :)

@All, looks like I did make a mistake in my CustomFactor. I believe this is the correct way of doing it:

class Mean_Over_STD(CustomFactor):  
    window_length = 756  
    def compute(self, today, assets, out, value):  
            out[:] =  np.nanmean(value[0:-1]) / np.nanstd(value[0:-1])  

Let me know if I still got it wrong.

I didn't make any changes to the strategy, but in the spirit of collaborating to improve on the algorithm, I tried to clean up the style and efficiency of the code a bit. Some of the changes include:
- Changed the custom factor definition to use the axis argument in the np.nanmean and np.nanstd functions.
- Moved the pipeline_output into the scheduled function instead of before_trading_start. It used to be best practice to call pipeline_output in before_trading_start, but last year, we made a change such that pipelines are computed in their own special event and calling pipeline_output just reads the output, so you no longer need to put it in before_trading_start.
- Condensed some of the code in trade.
- Cleaned up some of the spacing and indentation to match common Python style guides.

Again, nothing material, and I don't think it perfectly follows Python style conventions, but hopefully others can learn from some of the changes!
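For those following the CustomFactor fix: the axis argument is what makes the mean and standard deviation per column (one value per asset) instead of collapsing the whole window matrix to a single scalar. A toy example with a hypothetical 3-day, 2-asset window:

```python
import numpy as np

# Rows are days, columns are assets (shape: window_length x n_assets),
# mirroring the `value` array a CustomFactor's compute() receives.
value = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])

# Without axis, numpy reduces over the entire matrix: one scalar,
# the same number written into every asset's output slot.
scalar = np.nanmean(value) / np.nanstd(value)

# With axis=0, we reduce down each column: one mean/std per asset,
# which is what the stability rank needs.
per_asset = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)
# Here both columns scale identically, so per_asset ≈ [2.449, 2.449].
```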

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
            out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

I tried a few different known quality factors. Return on assets makes a tiny improvement in alpha and drawdown.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
            out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    """
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    """
    
    quality = (
        roa +
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

This version reduces risk: the max drawdown is -10%, beta is much lower, and the Sharpe ratio is higher.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 10 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    quality = (
        roa +
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

If you only care about returns, this version is for you.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 10.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    quality = (
        roa +
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

There is flawed logic in the algo above: it takes on 3x or more leverage. In fact, there are leverage spikes even in the earlier version. Could someone please fix the earlier version so the leverage stays at 1 or below?

Thanks @Jamie for fixing the CustomFactor. It seems to be working as I intended now, and I've included 'stable_roic' in the ranking composite in the attached update.

Other changes I made:

  • Changed the trading universe from Q500US to Q1500US, effectively a proxy for the S&P 1500 (S&P 500 large caps + S&P 400 mid caps + S&P 600 small caps).
  • Excluded stocks in the Financial Services sector from the universe, since 'quality' for financial companies tends to be measured differently than for stocks in other sectors, e.g. due to their larger balance sheets.

I also kept latest ROIC rather than using latest ROA (ROIC makes more intuitive sense to me, but I could be wrong).

Leverage is somewhat controlled in this one, but if anyone could help bring it down so it stays consistently closer to 1.0 (without using the Q Optimizer), I think that would be a great contribution. It might require a daily leverage check followed by a rebalance?
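A rough sketch of the arithmetic behind such a daily leverage check (plain Python; the helper names are hypothetical, not part of the algo above): given the current position exposures, compute gross leverage and the factor each position would need to be scaled by to bring it back to a target of 1.0.

```python
# Hypothetical sketch of a daily leverage guard: compute gross leverage
# from position values and the scaling factor needed to hit a target.

def gross_leverage(position_values, portfolio_value):
    """Gross leverage = sum of absolute position exposures / portfolio value."""
    return sum(abs(v) for v in position_values) / portfolio_value

def rescale_factor(position_values, portfolio_value, target_leverage=1.0):
    """Factor to multiply every position by so gross leverage hits the target."""
    lev = gross_leverage(position_values, portfolio_value)
    return target_leverage / lev if lev > 0 else 1.0

# Example: 20 positions of $10k each on a $100k portfolio -> leverage 2.0,
# so each position would need to be halved.
positions = [10_000.0] * 20
print(gross_leverage(positions, 100_000.0))   # 2.0
print(rescale_factor(positions, 100_000.0))   # 0.5
```

In a scheduled function, the rescale factor could then be applied to each position's target weight before re-ordering.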

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 
from quantopian.pipeline.classifiers.morningstar import Sector

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    financials = Sector().eq(103)
    utilities = Sector().eq(207)
    # Base universe set to the Q500US
    universe = Q1500US() & ~financials #& ~utilities

    roic = Fundamentals.roic.latest.rank()
    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
                # roa +
        stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Indigo Monkey, perhaps this was your intention, but all you are doing here is changing the leverage (for the stock positions, anyway).

The "reduced risk" version is just running the strategy at 0.5x leverage (again, just for the equity positions).

The version with huge returns runs the strategy at elevated leverage.

In the code, "context.Target_securities_to_buy" and "context.top_n_relative_momentum_to_buy" need to be equal to keep the leverage around 1.

These two variables control the amount we are buying (context.Target_securities_to_buy) and our final momentum sort (context.top_n_relative_momentum_to_buy).

For the reduced-risk version, this is hard to see, since we are putting unused cash into bonds. It will show leverage around 1, but that is half bonds (as you have it coded).
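The arithmetic here can be sketched in plain Python (a toy illustration, not part of the algo): each new position is sized at 1 / Target_securities_to_buy, and up to top_n_relative_momentum_to_buy positions get opened, so gross equity exposure is the ratio of the two.

```python
# Sketch of the position-sizing arithmetic: N positions, each sized at
# 1 / target, gives gross equity exposure of N / target.

def equity_exposure(n_bought, target_securities_to_buy):
    per_position_weight = 1.0 / target_securities_to_buy
    return n_bought * per_position_weight

print(equity_exposure(20, 20.0))  # 1.0  (original version)
print(equity_exposure(10, 20.0))  # 0.5  ("reduced risk" version)
print(equity_exposure(20, 10.0))  # 2.0  ("huge returns" version)
```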

Chris

@Joakim thank you great contributions as always

I know Joel Greenblatt and others have also excluded Financials, as they have a much different capital structure, making some value and quality metrics not comparable across sectors.

As for the leverage, I think one thing that we can do is change the rebalance logic.

As currently coded, if we are holding a stock for multiple months, we don't rebalance it back to the target allocation. My thought here was to let our winners run, and not make the position size smaller just because it had good performance.

If we change this logic to rebalance each position in the portfolio back to its target weight every month, I believe that will go a long way.
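The suggested change can be sketched as a pure function (names are hypothetical, for illustration only): every selected stock is reset to the same equal target weight each month, rather than only sizing brand-new entries.

```python
# Sketch of monthly rebalancing back to target: emit an equal target
# weight for every selected stock, so winners get trimmed back instead
# of being left to drift above their original allocation.

def monthly_target_weights(selected, target_count):
    """Equal target weight for every selected stock, not just new entries."""
    w = 1.0 / target_count
    return {stock: w for stock in selected}

weights = monthly_target_weights(['AAPL', 'MSFT', 'NVDA'], target_count=20)
print(weights['AAPL'])               # 0.05
print(round(sum(weights.values()), 2))  # 0.15 (the remainder goes to bonds)
```

Each weight would then be passed to order_target_percent every month, including for names already held.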

Chris

I deleted my earlier post as there was no improvement. I thought I would just backtest with Joakim's trend filter and a shorter-term value factor. Improved returns.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 
from quantopian.pipeline.classifiers.morningstar import Sector

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    financials = Sector().eq(103)
    utilities = Sector().eq(207)
    # Base universe set to the Q500US
    universe = Q1500US() & ~financials #& ~utilities

    roic = Fundamentals.roic.latest.rank()
    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank()
    value = (Fundamentals.free_cash_flow.latest / Fundamentals.enterprise_value.latest).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # roa +
        stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()
 
    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Thanks @Chris,

As currently coded, if we are holding a stock for multiple months, we
don't rebalance it back to the target allocation. My thought here was
to let our winners run, and not make the position size smaller just
because it had good performance.

^This makes a lot of sense to me. Why penalize your winners? As you said, [cut your losses and] let your winners run! Or to paraphrase the Oracle: "The best holding period for a great [quality] company is forever." :)

@Nadeem, thanks for your contribution! I wonder how your way of defining value differs from Morningstar's 'cash_return', which is also FCF / EV:

cash_return
Refers to the ratio of free cash flow to enterprise value. Morningstar calculates the ratio by using the underlying data reported in the company filings or reports: FCF /Enterprise Value.

Here's another slightly 'improved' version (during this backtest period, at least). The only change is that 'stable_roic' is now measured over 5 years instead of 3. This won't fully kick in until 2007, as there's no data on Q from before 2002.

Does any kind soul out there want to help me set this up in IB's paper trading environment, using Quandl price and fundamental data (if available)? Would iBridgePy be the way to go, or something else?

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 
from quantopian.pipeline.classifiers.morningstar import Sector

class Mean_Over_STD(CustomFactor):
    window_length = 1260
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    financials = Sector().eq(103)
    utilities = Sector().eq(207)
    # Base universe set to the Q500US
    universe = Q1500US() & ~financials #& ~utilities

    roic = Fundamentals.roic.latest.rank()
    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    stable_quality = ( stable_roic + stable_margins  ).rank() 
    
    quality = (
        roic + 
                # stable_quality +
                # roa +
        stable_roic +
                # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Joakim - I read somewhere that Morningstar's cash_return uses TTM (trailing-twelve-month) FCF, while I use the latest FCF, which covers only the most recent quarter. My thinking is to use the most recent data (perhaps the market has a short memory). Just my opinion.
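For illustration (with made-up numbers), the difference between the two definitions looks like this: TTM FCF sums the last four quarters, while a single latest quarter reacts faster but is noisier, and could also be annualized (x4) for comparability.

```python
# Illustration of TTM FCF yield vs latest-quarter FCF yield.
# All figures below are hypothetical.

quarterly_fcf = [2.0, 3.0, 1.0, 4.0]   # oldest -> newest, in $bn
enterprise_value = 100.0               # in $bn

ttm_yield = sum(quarterly_fcf) / enterprise_value             # 0.1
latest_yield = quarterly_fcf[-1] / enterprise_value           # 0.04
latest_annualized = 4 * quarterly_fcf[-1] / enterprise_value  # 0.16
print(ttm_yield, latest_yield, latest_annualized)
```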

Fair enough, I didn't know that. Makes sense, thanks Nadeem.

@Joakim. I have a question; maybe I'm a bit confused here. You are using (ascending=True) in debt-to-equity, and this ascending order is the default. That means you are using high ROIC, high cash return, high total yield, and high debt-to-equity. Wasn't the original thought to use the lowest debt-to-equity? If so, shouldn't the parameter be set to ascending=False? Please help - maybe I'm confused about the logic here.

@Nadeem, good catch! Yes, my thought was indeed that low debt to equity companies were high quality companies, so I should have set ascending order to False. That doesn't work nearly as well obviously, but rather than keep this one as is, I would remove it and possibly replace it with some other quality factor.
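For anyone following along, the rank semantics are easy to check in pandas (Pipeline's .rank() follows the same convention): with ascending=True the largest value receives the highest rank, so summing that rank into the quality score rewards high leverage. A minimal sketch with made-up ratios:

```python
import pandas as pd

# Hypothetical long-term debt/equity ratios for five tickers
ltd_to_eq = pd.Series({"AAA": 0.1, "BBB": 0.5, "CCC": 1.2, "DDD": 2.0, "EEE": 3.5})

# ascending=True (the default): the MOST levered firm gets the top rank,
# so a "quality" score built from this rewards high debt.
print(ltd_to_eq.rank(ascending=True)["EEE"])   # 5.0

# ascending=False: the LEAST levered firm gets the top rank,
# matching the original "low debt = high quality" intent.
print(ltd_to_eq.rank(ascending=False)["AAA"])  # 5.0
```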

@Chris Cain,

The description of the strategy you presented fits my personal investment goals.
Thanks for sharing.
I backtested your original algorithm with line 9 commented out. Why cheat myself?
The results metrics are good.
When I tried to use order_optimal_portfolio(), the results got worse.
I checked some positions (Backtest -> Activity -> Positions) and have some questions about your ordering engine:
If TF_filter==False, should all positions in top_n_by_momentum be sold, or only part of them?
I have seen the number of stock positions slowly change from 20 to 0 over several months in a market downtrend.
Why, with initial capital of 100,000, was there:
on 2003-03-31, negative cash of 68,000 (that is leverage of 1.68);
on 2007-07-31, negative cash of 50,000 ...
In one of Joakim Arvidsson's long-only strategies I have seen a negative position in the bond fund (-80%) together with 20 stock positions.
Maybe we need to fix the ordering engine first, before we start sending long-only strategies to the sky?

First of all, thank you to Chris Cain. It's good of you to share your algo, and it's a very tempting proposition, although it probably needs a little more investigation. Now that I am back at my computer, I have been running a number of tests with different re-balancing dates, and, as I have always found with this type of algorithm, the differences in performance are worrying.

I have been using the tidied up code kindly provided by Jamie McCorriston un-amended save for the monthly re balance date.

Perhaps using the maximum of 22 days offset from the month end is foolish - I imagine there are months with fewer than this number of trading days. Nonetheless the results were interesting.

Here are some of the total returns I got by varying the re-balance date:

1574%
455%
1674%
1825%
1477%

Leverage needs looking at - it reaches 2 on occasion, with a corresponding net dollar exposure of 200%. Uncomfortable for the lily-livered such as myself.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
            out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=22), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=22), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by the composite quality score (pipeline column named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 quality stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Effectively, as I have always felt with these types of strategies, one would be best off hedging one's bets and splitting the portfolio into a few different parts, each part using a different re-balance date.

Hey @Zenothestoic. Yes, I think date_rules.month_end(days_offset=22) doesn't make much sense. What happens in the months that have a holiday and fewer than 22 trading days?
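To put a number on that: even before exchange holidays are removed, several months have fewer than 22 weekday sessions, so a days_offset=22 schedule simply cannot fire in those months. A quick pandas check (plain Mon-Fri weekdays, not a real exchange calendar):

```python
import pandas as pd

# Mon-Fri weekdays per calendar month of 2019 (holidays not even removed yet)
bdays = pd.bdate_range("2019-01-01", "2019-12-31")
per_month = bdays.to_series().groupby(bdays.month).size()

print(per_month.min(), per_month.max())  # 20 23
# February 2019 has only 20 weekdays, so month_end(days_offset=22)
# has no session to land on that far back from month end.
```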

Thanks to Chris for posting this algorithm. It would be a good candidate to trade in one's IRA or another account that is restricted to long-only. The issue of leverage over 1.00 will have to be solved before I would actually trade this. The code that I suspect is causing excess leverage is the "GETTING IN" section. There doesn't seem to be any consideration of the current positions. It will add to the complexity, but the bonds/equities should be rebalanced together.
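One way to express that "rebalance together" idea is a single target-weight computation in which stocks and bonds always sum to exactly 100% gross exposure. This is a hypothetical helper (names and structure are mine, not from the algo):

```python
def combined_targets(stock_list, target_n=20, bond="IEF"):
    """Equal-weight up to target_n stocks at 1/target_n each and put the
    remainder in the bond fund, so total exposure is always 1.0 and the
    bond allocation can never go negative."""
    stocks = list(stock_list)[:target_n]
    stock_w = 1.0 / target_n
    weights = {s: stock_w for s in stocks}
    weights[bond] = max(0.0, 1.0 - stock_w * len(stocks))
    return weights

# Three qualifying stocks -> 5% each, 85% in bonds, 100% total
w = combined_targets(["AAPL", "MSFT", "XOM"])
print(w)
```

Handing the full weight dict to one ordering step avoids the window where stocks are bought before bonds are sold (or vice versa), which is where the excess leverage appears.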

@Zenothestoic I share your concern with different starting dates, and I have put together a list of the results starting at different times during 2003. Although the total returns vary by 500%, this is largely due to compounding; once you adjust for the different time spans, the CAGR is very similar. I am more concerned with the starting year.


Peter
Yes, it would be interesting to see what would have happened in the tech crash. But good point on CAGR (and of course DD) being very close.
Mind you, the system rode through 2008 very well, but of course each crash is different.

It is likely for instance that a severe drawdown would have occurred in 1987 - no trend following system could have reacted with the swiftness required at that date, certainly not six month MOM or a 50/250 MA crossover.

But it looks very tempting otherwise if a few kinks can be ironed out.

Chris, yes - offset = 22 probably does not make much sense. But I always find it difficult to drill down and find out why on these online backtesters. It's the sort of stuff I need to have my own data for, with which I can fiddle to my heart's content.

Also I need to check whether Q's standard commission and slippage is included.

Peter
Did you have to run those tests one by one or does Q let you automate that sort of testing now?


@Peter Harrington,

Are the backtest results you posted from the original (Chris Cain) algo or from other versions?
They have different trend filters and factors.
The original (Chris Cain) algo with date_rules.month_end() has total returns of 1521.59%.

@Guy Fleury: Multiple participants in this thread have expressed frustration with the sharing of screenshots instead of attaching a backtest. Please refrain from sharing screenshots built on top of the shared work in this thread. You are entitled to keep your work private, so if you don't want to share, that's fine. But please don't share screenshots in this thread as it seems the intent of the thread is to collaborate on improving the algorithm.

@Jamie, understood. I have erased all my posts in this thread since my notes without screenshots become simple opinions without corroborating evidence.

(Added)

@Jamie, as you said: I have no obligation to share anything. I thought it was a forum where anything innovative or reasonably pertaining to the subject at hand would have been more welcomed in whatever form it was presented. My bad.

For those few who might be interested, this thing can exceed 30,000%. But that is now just an opinion; I can't show a screenshot or the program itself to corroborate it. It is nonetheless a roughly 40% CAGR over the 16+ years, giving a total profit of some $2.3 billion.

Of note, Jim Simons (Medallion Fund) has managed a 39% CAGR after fees for years. It required a 66% CAGR to make that happen. The fees were 5/44, a little bit more than the usual hedge fund 2/20. In case some are looking for objectives.

For me, this strategy is still not enough even though it could be pushed higher. I have other strategies that can go further without depending on what I consider an internal procedural bug. But, a program bug, if it is consistent, dependable and profitable, then it could come to be considered as some added “feature”.

No strategy change here -- just stylistic change (working off Jamie McCorriston's version). Moved the selection logic out of the rebalance function and into pipeline via a progressive mask, thinking it might be faster and that some might be more accustomed to doing the filtering via pipeline masks.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor, Returns
import numpy as np 


TOP_N_ROE_TO_BUY = 50 #First sort by ROE
RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
MOMENTUM_SKIP_DAYS = 10
TOP_N_RELATIVE_MOMENTUM_TO_BUY = 20 #Number to buy


class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
            out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

            
def initialize(context):
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)            
    
 
def make_pipeline():
    # Base universe set to the Q500US
    universe = Q500US()
    m = universe

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )
    m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)
    
    quality_momentum = Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p()
    m &= quality_momentum.top(TOP_N_RELATIVE_MOMENTUM_TO_BUY, mask=m)

    pipe = Pipeline(columns={},screen=m)
    return pipe
        
    
def trade(context, data):
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_hist = data.history(symbol('SPY'), "close", 200, "1d")
    spy_ma50 = spy_hist[-50:].mean()
    spy_ma200 = spy_hist.mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in security_list:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in security_list:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Quality Companies in an Uptrend (original by Chris Cain): long-short count and TF check, with line 9 commented out.
You may see the number of stock positions slowly change from 20 to 0 over several months in a market downtrend.

@Chris Cain,
Is it by design?


Quality Companies in an Uptrend (original by Chris Cain): leverage.

The best ways to fix the problem:

1. Use order_optimal_portfolio().
2. Change the execution time.
3. Follow @Peter Harrington's recommendation that the bonds/equities be rebalanced together.


@Vladimir, Yes this is by design.

Here is the logic:

If the trend following filter is not passed (6-month momentum is negative, 50 SMA < 200 SMA, whatever), then we sell stocks that fall out of our final buy list (in the original algo, that was the stocks with the best ROE, then the best momentum).

Since the TF filter is not passed, those stocks are not replaced.

If the TF filter is not passed and a stock remains in our final buy list, it is held.

The design is to scale out of positions if the market is trending down, instead of getting out of all of them at once. This is evident in the graphs you posted.

Thanks for the great question,
Chris
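Chris's hold/sell/buy rules can be condensed into a small pure function (a sketch of the logic described above; names are mine, not code from the algo):

```python
def decide(held, buy_list, tf_ok):
    """Return the action for each currently-held stock plus any new buys.

    - A held stock that fell out of the buy list is sold, regardless of trend.
    - A held stock still in the buy list is kept, even when the trend filter fails.
    - New entries only happen when the trend filter passes, so positions
      scale down gradually (20 -> 0) through a downtrend.
    """
    actions = {}
    for s in held:
        actions[s] = "hold" if s in buy_list else "sell"
    if tf_ok:
        for s in buy_list:
            actions.setdefault(s, "buy")
    return actions

# Downtrend: the survivor stays held, the dropout is sold,
# and the new candidate is NOT bought.
print(decide(held={"AAPL", "XOM"}, buy_list={"AAPL", "MSFT"}, tf_ok=False))
```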

Here is a version that mostly fixes the leverage problem.

The starting point is the modified code posted by Viridian Hawk.

There is a problem in the 2006-2007 time frame where a security, BR, is purchased but then cannot be sold for many months.
Eventually the sell order does fill, a year later, at exactly 4 pm (I wonder if the system forced the sale?).
This led to a problem with bonds going short, I think because the max number of stock positions was exceeded.

Anyway I made the following changes:

Changed the bond trading logic so that the allocation would not go negative.

Changed the stock trading logic so that it re-balances winning positions that carry forward to the next month. Maybe better would be to let the winners run and reduce the size of new positions constrained by available cash, but it's a bit more complicated to implement.

Changed the stock trading logic so that it re-balances high quality-momentum stock positions that are held when the trend is negative. I think this helps to balance the bond/stock allocation to reduce leverage during these times.

I was NOT able to find a way to avoid buying BR, so there still is slightly elevated leverage during the 2006-2007 time frame.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor, Returns
import numpy as np 


TOP_N_ROE_TO_BUY = 50 #First sort by ROE
RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
MOMENTUM_SKIP_DAYS = 10
TOP_N_RELATIVE_MOMENTUM_TO_BUY = 20 #Number to buy


class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
            out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

            
def initialize(context):
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)            
    

def make_pipeline():
    # Base universe set to the Q500US
    universe = Q500US()
    m = universe

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )
    m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)
    
    quality_momentum = Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p()
    m &= quality_momentum.top(TOP_N_RELATIVE_MOMENTUM_TO_BUY, mask=m)

    pipe = Pipeline(columns={},screen=m)
    return pipe
        
    
def trade(context, data):
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_hist = data.history(symbol('SPY'), "close", 200, "1d")
    spy_ma50 = spy_hist[-50:].mean()
    spy_ma200 = spy_hist.mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in security_list:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
        elif x in security_list and context.TF_filter==False:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('REBALANCING',x)
    
    for x in security_list:
        if context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN OR REBALANCING',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = min(len(context.portfolio.positions),context.Target_securities_to_buy)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = min(len(context.portfolio.positions),context.Target_securities_to_buy) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@steve

I tried your algo - it is now always holding one extra position. Try recording the number of positions and you will see that it holds 21 instead of 20. I'm not sure what is going on; perhaps it is not selling the bond fund during an uptrend and is keeping it in the portfolio. This might be a factor that is reducing returns. Not sure though.

The 50/200-day SMA crossover filter on SPY is the weakest point in this strategy. It saves the strategy during 2008, so it's basically a switch designed, in hindsight, to save the strategy during one historical market catastrophe. Who knows if it will work in the future -- we don't have enough data points to draw any statistically meaningful conclusions about that signal and how it correlates with "quality." So that bit of the code is likely an overfit.

@Steve Jost,

To avoid buying BR you may try this code

from quantopian.pipeline.filters import Q500US, StaticAssets

universe = Q500US() & ~StaticAssets(symbols('BR'))

I think this will not solve the problem completely.

@Nadeem,

You are right; in Steve's algo, IEF exists all the time, at least at the beginning.

"So it's basically a switch designed to in hindsight save the strategy during one historical market catastrophe."

Then cut it out. The strategy is still far from shabby without it, and you have the comfort that it is no longer curve-fit. The drawdown has increased from 20% to 40% in the attached test - still way lower than the S&P drawdown in 2008.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
            out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    # Note: both averages below use the same 50-day window and the comparison
    # is >=, so TF_filter is always True - the trend filter is intentionally
    # disabled here, per the "cut it out" change described in the post above.
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 50, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by the composite quality score (pipeline column named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 quality stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Jamie McCorriston,

Can you advise on how to make this exclusion filter work?

universe = Q500US() &~StaticAssets(symbols('BR','PD'))  

2006-04-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-05-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-06-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-07-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-08-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-09-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-10-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-11-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-12-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-01-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-02-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-03-22 12:30 WARN Cannot place order for PD, as it has de-listed. Any existing positions for this asset will be liquidated on 2007-03-22 00:00:00+00:00.
2007-03-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.

@Vladimir, What are the mods? Will you post the algo?

@Vladimir, @Jamie, it looks like those stocks might have been halted or delisted around that time?

@Vladimir, be ready to add to the list as you increase the number of stocks to be traded.

universe = Q1500US() & ~StaticAssets(symbols('CE', 'CFBX', 'DL', 'GPT', 'INVN', 'WLP', 'ADVP', 'IGEN', 'MME', 'MWI'))  

Why are you guys introducing lookahead bias by filtering specific stocks from the universe? Stocks get halted and delisted all the time -- it's just part of trading. If you have found evidence of data errors, perhaps best to just report them to Quantopian so they can fix them. Otherwise, I think it's best to make a strategy's logic robust enough that it doesn't trip up when positions get halted or delisted.

@Viridian, in these cases, stocks are delisted, halted, or have gone bankrupt, but their positions stay open, meaning your bet is still on the table and might not be accounted for in the final result. Excluding them frees up those bets. A quick and dirty method, I agree, but in development it becomes acceptable, since your interest is at a much higher level than solving trivia.

As I have said before, you can push this strategy beyond a 30,000% total return. You will be able to do so by putting more stocks in play and improving the strategy design.

Leaving in those delisted stocks would require added code to track them yourself and somehow get rid of them as you go (in order to be more realistic). Then again, Quantopian could have their engine take care of it by automatically closing those positions as they appear. But they would have to distinguish between halted stocks and permanently delisted ones.

(ADDED)

@Viridian, even if you put in an exclude list, some come back anyway. Go figure.

@Guy

Guy, I can understand that you have chosen not to share code on this forum, but I am intrigued by the idea of a 30,000% return since 2003. Would you consider telling us exactly how it is achieved? I am assuming you use no leverage?

Can you also tell us the max drawdown and volatility of the system extended to these lofty levels?

I would love to invest for that sort of return, but lack those sort of skills.

@Chris Cain,

It looks like I was able to bring the leverage in your algorithm to an acceptable level using order_optimal_portfolio().
I hope you will comment on whether the backtest results are consistent with the design before I post the code snippet.

[Attached notebook: preview unavailable.]

Long-Short Count, and TF Check.

[Attached notebook: preview unavailable.]

@Vladimir: The symbols method assumes you are asking for the assets whose ticker symbols are BR and PD as of today. So the ~StaticAssets(symbols('BR', 'PD')) filter is excluding two stocks that picked up the tickers BR and PD in 2007 and 2019, respectively. You can specify a reference date with set_symbol_lookup_date, or use sid to specify the assets that delisted in 2006.

That said, I agree with Viridian Hawk. When an asset gets delisted, the backtester automatically closes out any shares held in that asset, 3 days after the delist date, at the last known price of the asset. It's probably best to let the backtester handle that position so as to avoid lookahead bias as much as possible.

@Jamie McCorriston

set_symbol_lookup_date('2006-01-01') worked, but it cost about 30% of the profit.

Thank you.

Properties of This Trading Strategy

I got interested in the above strategy, first for its longevity (16+ years) and second for its built-in alpha.

My first step is always to find the limitations and then see whether I can improve the thing or not. The initial strategy used \(\$\)100k across 20 stocks, putting its initial bet size at \(\$\)5k.

A portfolio will have to live by the following payoff matrix equation: $$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H} \cdot \Delta \mathbf{P}) = F_0 \cdot (1 + g(t) - \exp_t(e))^t$$The total return of the original scenario was 1551.58\(\%\), a total profit of \(\$\)1,551,580 on the \(\$\)100k stake; it surely demonstrated that even with 16+ years it did not get that far in dollar terms. Nonetheless, that is a 17.64\(\%\) compounded rate (CAGR) over the period. It starts to be interesting since it outperforms the majority of its peers, most of whom sit at a 10.00\(\%\) CAGR or less. Therefore, we could say there is approximately a 7.6\(\%\) alpha in the initial design.
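As a quick sanity check on conversions like these, total return, dollar profit, and CAGR are related by two one-line formulas. A minimal sketch in plain Python (the helper names are mine, not from the post):

```python
def cagr(total_return_pct, years):
    """Compounded annual growth rate implied by a total return (in percent) over `years`."""
    growth = 1.0 + total_return_pct / 100.0   # e.g. 1551.58% -> 16.5158x the stake
    return growth ** (1.0 / years) - 1.0

def total_profit(initial_capital, total_return_pct):
    """Dollar profit implied by a total return on the initial stake."""
    return initial_capital * total_return_pct / 100.0

# A 1551.58% total return on $100k is a $1,551,580 profit;
# doubling your money (100%) over 2 years is about a 41.4% CAGR.
profit = total_profit(100_000, 1551.58)
rate = cagr(100.0, 2.0)
```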

The structure of the program allows for more. First, the design is scalable; that was my first acid test. I upped the initial stake to \(\$\)10M, which makes the bet size jump to \(\$\)500,000 per initial bet. Due to the structure of the scheduled rebalance, these bets would catch most of their returns from common return (about 70\(\%\)) rather than from specific returns. But the strategy still generated alpha over and above its benchmark (SPY), and that was the point of interest.

I raised the number of stocks to be traded in order to reduce the bet size, knowing that doing so would reduce the average portfolio CAGR. The reason is simple: the stocks are ranked by expected performance, and the more you take in, the more the lower-ranked stocks with their lower expected CAGR tend to lower the overall average. This could be compensated elsewhere and could even help produce higher returns.

There is a slight idiosyncrasy in the original program that gave it a 1.04 average gross leverage. Its cost would have been about \(0.04 \times 0.04 = 0.0016\) if we use IB's leveraging fee, for instance: a negligible effect on the 17.64\(\%\) CAGR.

The Basic Equation

The equation illustrated above is all you can play with. When you break it down into its components, the only thing that matters for raising the overall CAGR of almost any stock trading strategy is \(\mathbf{H}\), the behavior of the trading strategy itself: how you handle and manage the ongoing inventory over time.

The price matrix \(\mathbf{P}\) is the same for everyone. In this case, the original stock universe was Q500US. To get a better selection, I jumped to Q1500US, since my intention was to raise the number of stocks to 100 and beyond. The \(\Delta \mathbf{P}\) is simply the price variation from period to period, and therefore is also the same for everyone. The differences come from the holding matrix \(\mathbf{H}\), which is the game at play. If the inventory is at zero, there is no money coming in, nor is there any money going out. To win, you have to play, and that is where you also risk losing.
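That payoff identity can be rendered in a few lines of numpy. All holdings and prices below are invented, purely to illustrate the bookkeeping:

```python
import numpy as np

# Toy payoff-matrix computation: F(T) = F_0 + sum(H * delta_P).
# Rows are periods, columns are stocks; all numbers are made up.
prices = np.array([[10.0, 20.0],
                   [11.0, 19.0],
                   [12.0, 21.0]])
delta_p = np.diff(prices, axis=0)        # period-to-period price variations
holdings = np.array([[100.0, 50.0],      # shares held over each interval
                     [100.0, 50.0]])
f0 = 10_000.0                            # initial capital F_0
final_equity = f0 + (holdings * delta_p).sum()
```

If the holding matrix were all zeros, final_equity would stay at f0: no inventory, no gain, and no loss.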

The first chart I presented had an overall 2,405.92\(\%\) total return on a \(\$\)10M initial stake with 40 stocks. That resulted in overall profits of \(\$\)240M over the 16+ years, already over 100 times the original trading script's, most of it coming from the 100-fold increase in initial capital, demonstrating the program's scalability.

By accepting a marginal increase in volatility and drawdown, I raised the bar to a 3,803.87\(\%\) total return, which is a 24.26\(\%\) CAGR equivalent for the period.

@Joakim's Version of The Program

I next switched to Joakim's version of the program because it accentuated an idiosyncrasy of the original program and pushed on involuntary leveraging. But I did not see it as a detriment. The more I studied its impact, the more I started to appreciate this "feature," even though it was not intended. If a program anomaly is persistent, dependable, and can generate money, it might stop being considered a "potential bug" and be viewed as an "added feature."

Using Joakim's version as a base, I pushed on some of the buttons: increased the strategy's stock count again, changed the trading dates and timing, made the strategy more responsive to market swings, and tried to capture more trades and a higher average net profit per trade. The impact was to raise the overall total return to 10,126.6\(\%\). On the same \(\$\)10M, this translates to a 31.74\(\%\) CAGR with total profits in excess of \(\$\)1B, a far cry from the original strategy.

I kept on improving the design by adding new features and more stocks to be traded, with the result that the total return jumped to 13,138.85\(\%\), which is a 33.8\(\%\) CAGR over the 16+ years. To achieve those results, I also put the "Financials" back in play, since there was no way of knowing in 2003 that the financial crisis would unfold and be as bad as it was.

But you could do even more by accepting a little more leverage, as long as the strategy can pay for it all and remain consistent in its general behavior, thereby exploiting the anomaly found in Joakim's and the original strategy. Here you could really push, and not by that much either: a leverage of 1.4 was sufficient to bring the total return to 32,143.38\(\%\) with a total profit of \(\$\)3.2B and a CAGR of 41.1\(\%\). Quantopian once said they were ready to leverage some strategies up to 6 times, so 1.4 might not look that high, especially if the trading strategy can afford it.

You could do even more by accepting a leverage of 1.5, raising the total return to 50,921.98\(\%\) with a CAGR equivalent of 45.0\(\%\). In total profit that would be \(\$\)5.09B.

At 1.5 leverage, you would be charged on the 0.5 excess; at IB's rate that gives \(0.5 \times 0.04 = 0.02\), reducing the 45.0\(\%\) CAGR to 43.0\(\%\). That still costs some \(\$\)1.056B over the period and leaves some \(\$\)4.045B as net total profit in the account.
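The financing arithmetic quoted above (charging the margin rate only on the borrowed excess) can be sketched as a first-order approximation that ignores compounding of the interest itself:

```python
def net_cagr_after_margin(gross_cagr, leverage, margin_rate):
    """First-order drag of margin interest: only the excess over 1.0x is financed."""
    excess = max(leverage - 1.0, 0.0)
    return gross_cagr - excess * margin_rate

# The post's numbers: a 45% CAGR at 1.5x leverage, 4% financing rate.
net = net_cagr_after_margin(0.45, 1.5, 0.04)   # 0.45 - 0.5 * 0.04 = 0.43
```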

An earlier version tried \(\$\)20M as the initial stake and achieved a 43,795.04\(\%\) total return. It could have been pushed higher, but that was not my main interest at the time. Nonetheless, in CAGR terms that was 43.79\(\%\), and in total profits, \(\$\)8.76B.

I think the strategy could be improved even further, but I have not tried. My next step would be to scale it down, now that I know how far it can go, and install better protective measures, which would tend to increase overall performance while reducing drawdowns.

As part of my acid tests, I want to know how far a trading strategy can go. Therefore, I push on the strategy's pressure points in the first equation, knowing that the inventory management procedures are where all the effort should be concentrated. Once you know what your trading strategy can do, it is easy to scale it down to whatever level you are more comfortable with, be it by using lower leverage, reducing overall CAGR, or installing more downside protection. It becomes a matter of choice.

But once you have pushed the limits of your trading strategy, you at least know that those limits can be reached, and even if you scale down a bit, you also know that your trading strategy could scale back up if you desired. It would not come as a surprise; you would have planned for higher performance and would know how to deliver if need be.

It is all so simple; it is all in the first equation above. It is by injecting new equations into the mix that you can transform your trading strategy. In this case, the above equation was changed to:$$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H} \cdot (1+\hat g(t)) \cdot \Delta \mathbf{P}) = F_0 \cdot (1 + g(t) - \exp_t(e))^t$$where \(\hat g(t)\) is partly the result of a collection of functions of your own design.

I would usually have shown screenshots as corroborating evidence for the numbers presented above, but it appears that such charts are not desired in this forum.

To me, that turns all the numbers above into claims, unsubstantiated claims at best, since no evidence is presented to support them; they become just opinions. Nonetheless, I do have those screenshots on my machine, but they will stay private for the moment.

Of note, explanations of these equations, which can be considered innovative for what they can do even though they have been around for quite a while, can be found all over my website.

I changed a single number in the last program, the one that generated the 50,921.98\(\%\) total return with a 45.0\(\%\) CAGR equivalent. The result was a total return of 76,849.31\(\%\), a 48.8\(\%\) CAGR, and total profits of \(\$\)7.68B. I would still need to deduct the leveraging fees, which would exceed \(\$\)1B compared to the previous scenario.

For a single digit, it increased the total outcome from \(\$\)5.09B to \(\$\)7.68B. Now that is a digit worth changing... Sorry, no screenshot to display as evidence that those numbers were actually reached. But they still happened.

Guy

In broad terms, the above tells us that you increased the universe from 500 to 1500 stocks and that leverage went to 1.4. But little else.

I can understand that the cost of leverage would not be high at current interest rates and that such a level of leverage is indeed modest. As to Q's 6x leverage, that of course was to be used on their "zero everything" strategy. I have never understood Quantopian's approach, to be honest, and I refuse to believe that such neutrality would hold under all conditions. I suspect it would get its comeuppance at some stage, as do most strategies. But what do I know, to be honest.

The real problem people have with your posts is not the screenshots themselves but the lack of detail they contain as to how the results were achieved. And your post above does the same (to some extent!)

I now understand the increased portfolio size and the leverage, for which many thanks. But of course most of the detail is still hidden. And it is the detail people would like you to share, which I also would like you to share. If you would be willing, of course.

You mention a starting capital of $20m, but I'm not sure that a huge starting capital is so relevant for stocks. With futures I can readily understand it: the contract sizes are enormous for the humble retail investor such as you or me.

But with stocks, even 1500 of them (whittled down to 100, or 50, or whatever), it's surely a different matter. Stocks are not all the price of Berkshire Hathaway and the lot sizes are not huge, so surely you could trade small capital up to dizzying levels?

If you would like to share more details I would be happy to put up some capital to try and shoot the lights out if I can make sense of it. Perhaps you might prefer to discuss this in private.

Great post, @Chris Cain. Thank you for sharing your original algo! Also, kudos to everyone pitching in with comments and improvements. I've never felt quant trading to be a zero-sum game. Publishing, peer review, and building on previous ideas have worked in the sciences and serve as a model for moving from 'quant arts' to 'quant science'. A rising tide can raise all ships.

Anyway, in that spirit, here is a version of the original algo with several changes. The goals were 1) to separate the data from the logic, 2) to separate the security-selection logic from order execution, and 3) to use order_optimal_portfolio. This was in an effort to make modifications easier, add flexibility, and allow for using the various optimize constraints.

While the logic is faithful to the original, the execution differs a bit. The positions are the same; however, the quantity of shares purchased varies by a few at times, which seems to account for the performance numbers not matching exactly.
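The hand-off between the two scheduled functions boils down to concatenating the stock and bond weight Series before passing them to the optimizer. A standalone pandas sketch of that step (tickers and weights invented for illustration):

```python
import pandas as pd

# select_stocks_and_set_weights() leaves behind two Series...
stock_weights = pd.Series({'AAPL': 0.05, 'MSFT': 0.05})
bond_weights = pd.Series({'IEF': 0.90})

# ...and trade() merges them into the single target-weights Series
# that a TargetWeights objective would receive.
total_weights = pd.concat([stock_weights, bond_weights])
```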

# import the base algo class. Not entirely needed but a good practice
import quantopian.algorithm as algo

# import things need to run pipeline
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage as SMA
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution
import quantopian.optimize as opt
 
# import numpy and pandas because they rock
import numpy as np 
import pandas as pd


def initialize(context):
    # Set algo 'constants'...
    
    # List of bond ETFs when market is down. Can be more than one.
    context.BONDS = [symbol('IEF')]

    # Set target number of securities to hold and top ROE qty to filter
    context.TARGET_SECURITIES = 20
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter
    context.SPY = symbol('SPY')
    context.TF_LOOKBACK = 200
    context.TF_CURRENT_LOOKBACK = 50

    # This is for the determining momentum
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback
    context.MOMENTUM_SKIP_DAYS = 10
        
    # Initialize any other variables before being used
    context.stock_weights = pd.Series()
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage setting and use the default
    set_slippage(slippage.FixedSlippage(spread = 0.0))
    
    # Create and attach pipeline for fetching all data
    algo.attach_pipeline(make_pipeline(context), 'pipeline')    
    
    # Schedule functions
    # Separate the stock selection from the execution for flexibility
    schedule_function(
        select_stocks_and_set_weights, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    
 
def make_pipeline(context):
    # Base universe set to the Q500US
    universe = Q500US()
    
    # Fetch SPY returns for our trend following condition
    # Use SimpleMovingAverage (SMA) to broadcast spy averages to all assets (sort of a hack)
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close], 
                          window_length=context.TF_LOOKBACK)[context.SPY]
    
    spy_ma50 = SMA(inputs=[spy_ma50_slice], window_length=1)
    spy_ma200 = SMA(inputs=[spy_ma200_slice], window_length=1)
    trend_up = spy_ma50 > spy_ma200
    
    # Get the fundamentals we are using. 
    # Rank relative to others in the base universe (not entire universe)
    # Rank allows for convenient way to scale values with different ranges
    cash_return = ms.cash_return.latest.rank() #(mask=universe)
    fcf_yield = ms.fcf_yield.latest.rank() #(mask=universe)
    roic = ms.roic.latest.rank() #(mask=universe)
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(ascending=True) #, mask=universe)

    # Create value and quality 'scores'
    value = (cash_return + fcf_yield).rank() #(mask=universe)
    quality = roic + ltd_to_eq + value
    
    # Create a 'momentum' factor. Could also have been done with a custom factor.
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)
    momentum = returns_overall.log1p() - returns_recent.log1p()
    
    # Filters for top quality and momentum to use in our selection criteria
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)
    
    # Only return values we will use in our selection criteria
    pipe = Pipeline(columns={
                        'trend_up': trend_up,
                        'top_quality_momentum': top_quality_momentum,
                        },
                    screen=universe
                   )
    return pipe

def select_stocks_and_set_weights(context, data):
    """
    Select the stocks to hold based upon data fetched in pipeline.
    Then determine weight for stocks.
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested
    Sets context.stock_weights and context.bond_weights used in trade function
    """
    # Get pipeline output and select stocks
    df = algo.pipeline_output('pipeline')
    current_holdings = context.portfolio.positions
    
    # Define our rule to open/hold positions
    # top momentum and don't open in a downturn but, if held, then keep
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'
    stocks_to_hold = df.query(rule).index
    
    # Set desired stock weights 
    # Equally weight
    stock_weight = 1.0 / context.TARGET_SECURITIES
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)
    
    # Set desired bond weight
    # Open bond position to fill unused portfolio balance
    # But always have at least 1 'share' of bonds
    bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)
        
def trade(context, data):
    """
    Execute trades using optimize.
    Expects securities (stocks and bonds) with weights to be in context.weights
    """
    # Create a single series from our stock and bond weights
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint
    order_optimal_portfolio(
        objective = target_weights,
        constraints = []
        )
    
    # Record our weights for insight into stock/bond mix and impact of trend following
    record(stocks=context.stock_weights.sum(),
           bonds=context.bond_weights.sum()
          )

Guy

I took a look at your paper here: https://alphapowertrading.com/papers/AlphaPowerImplementation.pdf

To those of us who are not privy to the underlying methods, your formulas provide no real information. You mention trend following, re-investment of profits, and covered calls.

And your formulas state that compounding your strategies will lead to outsized profits.

But the problem people here find with your posts is that we all know the benefits of compounding. What we do not know is how you achieve that compounding.

Much of your website repeats this basic message. But nowhere do you state how you achieve that compounding, except to mention "boosters and enhancers", which are never explicitly explained.

I think you would achieve much kudos here by providing a precise declaration of exactly what these boosters and enhancers are. And if you would be willing to provide the code here then that would be a great step forward.

Thanks and regards

@Dan - Thank you very much for posting your version of the code.

I have a question though: in the record pane of your algo, it seems like the leverage is always at 1.05 (check stock weight + bond weight). It seems like a bond weight of 0.05 is always in the portfolio. Is that intentional? If not, how can we make the leverage 1.00?

Thank You in advance for your help.

@Nadeem - You could make the following change to reduce the leverage from 1.04 to 1.00:
comment out the first line and replace it with the second.

# bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)
bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)

@Nadeem Yes, @Steve Jost's code above will reduce the 'target leverage' to 1.0 by letting the bond weight go to zero. In the original algo, the bond weight was always a minimum of 0.05 (specifically 1.0 / context.TARGET_SECURITIES) and the leverage would go to 1.05. This was a 'feature' of the original algo, so I left it in. It actually helps the Sharpe ratio and returns.
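The bond-weight floor is easy to reproduce with plain arithmetic: when all 20 stock slots are filled, the floor pushes the target weights to 1.05, while letting the bond weight fall to zero keeps them at 1.00. A minimal sketch outside the Quantopian API:

```python
TARGET_SECURITIES = 20
stock_weight = 1.0 / TARGET_SECURITIES              # 0.05 per position
stock_sum = TARGET_SECURITIES * stock_weight        # fully invested: 1.0

# Original rule: always hold at least one 'share' of bonds.
bond_original = max(1.0 - stock_sum, stock_weight)  # floored at 0.05
# Steve's fix: let the bond weight fall to zero.
bond_fixed = max(1.0 - stock_sum, 0.0)

target_leverage_original = stock_sum + bond_original  # 1.05
target_leverage_fixed = stock_sum + bond_fixed        # 1.00
```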

Even with this change, the leverage spikes to 1.05 in 2006. This is because the weights are set assuming all the orders fill. During that time, the algo tries to sell BR but cannot, so it essentially 'over buys' and the leverage goes above 1.0. The way to keep that from happening is to place all the sell orders, cancel any still open after a set time, and then place buys equal to the amount of cash left. Basically, don't buy until all the sell orders fill or are canceled.

Good catch.

@Dan,

You probably took as the template for your trading logic not the original @Chris Cain algo but somebody else's, together with its bugs, which create a max of 21-23 positions and leverage of 1.05-1.1 (see attached notebook).

Not selling BR for 11 months in 2006-2007 is more likely an engine problem.

What is this for?

set_slippage(slippage.FixedSlippage(spread = 0.0))  

Try to do the same with the original @Chris Cain algo's trading logic.

I solved more or less everything except BR in an 11-line trade().

PS. I will attach the backtest when Quantopian lets me do that.

[Attached notebook: preview unavailable.]

Here are the results:

[Attached notebook: preview unavailable.]

@vladimir

Those are some crazy coding skills: you have shrunk the original code from 90+ lines to 38 lines and yet improved the performance. Awesome work. Great to have you here and learn from you.

I have one question for you. In the algo posted by Joakim 4 days ago with the 2811% return, what is the purpose of mask=universe in the mean_over_std factor? Won't there be a mismatch if we rank the other factors without the mask and rank mean_over_std with it? And why does it even matter if we have a mask in the pipeline screen?

I am asking because if we remove mask=universe from line 90, the results are hugely different. Please see the attached backtest; the results with the mask are roughly similar to what Joakim had (though a little different because of the leverage fix in the attached version).
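The effect of the mask can be illustrated in plain pandas (toy tickers and values; Pipeline's rank(mask=...) behaves analogously by ranking only within the masked set):

```python
import pandas as pd

# Made-up factor values for four tickers; only three are in the universe.
scores = pd.Series({'AAA': 3.0, 'BBB': 1.0, 'CCC': 2.0, 'DDD': 0.5})
in_universe = pd.Series({'AAA': True, 'BBB': True, 'CCC': True, 'DDD': False})

rank_unmasked = scores.rank()               # DDD shifts every other stock's rank
rank_masked = scores[in_universe].rank()    # ranks computed within the universe only
```

Here AAA ranks 4th of 4 without the mask but 3rd of 3 with it, so summing masked and unmasked ranks into one score mixes two different scales, even when the final pipeline screen later drops the out-of-universe names.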

# import the base algo class. Not entirely needed but a good practice
import quantopian.algorithm as algo

# import things need to run pipeline
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage as SMA
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data.morningstar import Fundamentals as ms
from quantopian.pipeline.classifiers.morningstar import Sector

# import optimize for trade execution
import quantopian.optimize as opt
 
# import numpy and pandas because they rock
import numpy as np 
import pandas as pd


def initialize(context):
    # Set algo 'constants'...
    
    # List of bond ETFs when market is down. Can be more than one.
    context.BONDS = [symbol('IEF')]

    # Set target number of securities to hold and top ROE qty to filter
    context.TARGET_SECURITIES = 20
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter
    context.SPY = symbol('SPY')
    context.TF_LOOKBACK = 200
    context.TF_CURRENT_LOOKBACK = 50

    # This is for the determining momentum
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback
    context.MOMENTUM_SKIP_DAYS = 10
        
    # Initialize any other variables before being used
    context.stock_weights = pd.Series()
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage setting and use the default
    set_slippage(slippage.FixedSlippage(spread = 0.0))
    
    # Create and attach pipeline for fetching all data
    algo.attach_pipeline(make_pipeline(context), 'pipeline')    
    
    # Schedule functions
    # Separate the stock selection from the execution for flexibility
    schedule_function(
        select_stocks_and_set_weights, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    
def make_pipeline(context):
    
    financials = Sector().eq(103)
    universe = Q1500US() & ~financials
    
    # Fetch SPY returns for our trend following condition
    # Use SimpleMovingAverage (SMA) to broadcast spy averages to all assets (sort of a hack)
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close], 
                          window_length=context.TF_LOOKBACK)[context.SPY]
    
    spy_ma50 = SMA(inputs=[spy_ma50_slice], window_length=1)
    spy_ma200 = SMA(inputs=[spy_ma200_slice], window_length=1)
    trend_up = spy_ma50 > spy_ma200
    
    # Get the fundamentals we are using. 
    # Rank relative to others in the base universe (not entire universe)
    # Rank allows for convenient way to scale values with different ranges
    cash_return = ms.cash_return.latest.rank() #(mask=universe)
    fcf_yield = ms.fcf_yield.latest.rank() #(mask=universe)
    roic = ms.roic.latest.rank() #(mask=universe)
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank() #, mask=universe)
    stable_roic = Mean_Over_STD(inputs=[ms.roic]).rank()

    # Create value and quality 'scores'
    value = (cash_return + fcf_yield).rank() #(mask=universe)
    quality = roic + ltd_to_eq + value + stable_roic
    
    # Create a 'momentum' factor. Could also have been done with a custom factor.
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)
    momentum = returns_overall.log1p() - returns_recent.log1p()
    
    # Filters for top quality and momentum to use in our selection criteria
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)
    
    # Only return values we will use in our selection criteria
    pipe = Pipeline(columns={
                        'trend_up': trend_up,
                        'top_quality_momentum': top_quality_momentum,
                        },
                    screen=universe
                   )
    return pipe

def select_stocks_and_set_weights(context, data):
    """
    Select the stocks to hold based upon data fetched in pipeline.
    Then determine weight for stocks.
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested
    Sets context.stock_weights and context.bond_weights used in trade function
    """
    # Get pipeline output and select stocks
    df = algo.pipeline_output('pipeline')
    current_holdings = context.portfolio.positions
    
    # Define our rule to open/hold positions
    # top momentum and don't open in a downturn but, if held, then keep
    rule = 'top_quality_momentum & (trend_up | (index in @current_holdings))'
    stocks_to_hold = df.query(rule).index
    
    # Set desired stock weights 
    # Equally weight
    stock_weight = 1.0 / context.TARGET_SECURITIES
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)
    
    # Set desired bond weight
    # Open bond position to fill unused portfolio balance
    # But always have at least 1 'share' of bonds
    # bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)
        
def trade(context, data):
    """
    Execute trades using optimize.
    Expects securities (stocks and bonds) with weights to be in
    context.stock_weights and context.bond_weights
    """
    # Create a single series from our stock and bond weights
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint
    order_optimal_portfolio(
        objective = target_weights,
        constraints = []
        )

    # Record our weights for insight into stock/bond mix and impact of trend following
    record(stocks=context.stock_weights.sum(),
           bonds=context.bond_weights.sum())
    record(lev=context.account.leverage)
    record(pos=len(context.portfolio.positions))
    
class Mean_Over_STD(CustomFactor):
    window_length = 1260
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)
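The momentum expression in make_pipeline() works because log returns compose additively: subtracting the recent-window log return from the overall-window log return leaves exactly the log return of the earlier, skip-adjusted window. A minimal sketch on a synthetic price array (plain numpy, not pipeline code):

```python
import numpy as np

def skip_momentum_log(prices, lookback=126, skip=10):
    """Mirror the pipeline expression
        Returns(lookback + skip).log1p() - Returns(skip).log1p()
    on a plain 1-D price array: the result is the log return from
    (lookback + skip) days back up to `skip` days back.
    """
    overall = prices[-1] / prices[-(lookback + skip)] - 1.0  # Returns(lookback + skip)
    recent = prices[-1] / prices[-skip] - 1.0                # Returns(skip)
    return np.log1p(overall) - np.log1p(recent)

# Synthetic uptrending price series
prices = np.linspace(100.0, 150.0, 200)
m = skip_momentum_log(prices)
direct = np.log(prices[-10] / prices[-136])  # the same skip-window log return
```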

@Nadeem

rank() -> ranks a security's factor among all traded securities in the Quantopian database

rank(mask=Q500US()) -> ranks a security's factor among Q500US() only

Example:

In my code I changed only the ranking in make_pipeline(), adding mask=m to all fundamental factors, and got some improvements in metrics.
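The difference is easy to see with plain pandas on made-up factor values: ranking across the full database and then filtering to a sub-universe is not the same as ranking within the sub-universe, because out-of-universe names still consume rank slots.

```python
import pandas as pd

# Hypothetical factor values for four names; only two are in our universe
factor = pd.Series({'AAA': 5.0, 'BBB': 3.0, 'CCC': 9.0, 'DDD': 1.0})
universe = ['AAA', 'BBB']              # stand-in for Q500US membership

rank_all = factor.rank()               # rank among ALL names, filter later
rank_all_filtered = rank_all[universe]

rank_masked = factor[universe].rank()  # rank WITHIN the universe, like rank(mask=...)

# 'AAA' is 3rd of 4 in the full database but 2nd of 2 inside the universe,
# so the two approaches feed different numbers into any factor sum.
```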


Thank you @Vladimir. I am still confused as to why it should matter when we already have a mask in the pipeline. We should still end up with only the stocks in the pipeline that are in Q500US, because we have a mask in the pipeline. Can't get my head around it. Could you please help me understand it?

Dan, very well said "A rising tide can raise all ships". I still remember one of your earlier quotes "Communities are built from collaboration, not competition".

It's an interesting strategy. However, it did not outperform the S&P 500 in the last 2 years. Do you guys think the alpha is gone?

@Vladimir - I agree with @Nadeem - mad coding skills. Glad you found a workaround to not being able to post backtests.

@Nadeem - Thanks for pursuing the issue of mask in pipeline with @Vladimir.
Using rank(mask=universe or sub-universe) turns out to be very important; otherwise ranks from the larger universe can skew the factor weightings.

Very interesting! Thanks for posting. For measuring "quality," it would be good to see how adding (i) positive insider buying activity and (ii) positive analyst ratings affect the results.

Do you guys think the alpha is gone?

Yes. Significant drop-off in alpha from 2014 onwards. I would think these factors have been discovered and arbitraged out of the market.

"Yes. Significant drop-off in alpha from 2014 onwards. I would think these factors have been discovered and arbitraged out of the market."

Rather reminds me of Keats' Ode on Melancholy and the concerns of the romantic poets regarding the temporary nature of our world: the fleetingness of life, the impermanence of the flower and all else in our temporal world.

At a time when many major hedge funds are struggling or closing their shutters, we may do well to dwell on impermanence.

Moved make_pipeline() to initialize(), removed unnecessary masks, changed the month-end offset to 7, and made some cosmetic changes.
Got some more improvements in metrics.


The attached notebook is based on Vladimir's program version which used the optimizer for trade execution (order_optimal_portfolio).

It is hard to "force" the optimizer in the direction you want. It is a "black box" with a mindset of its own. Nonetheless, by changing the structure and objectives of the program, one can push the strategy to higher levels. Some leverage and shorts have been used to reach that 34.1% CAGR. However, the strategy, at that level, can afford the extra leveraging fees.

Evidently, the strategy looked for more volatility and as a consequence suffered a higher max drawdown while keeping a relatively low beta. I have not improved on the protective measures as of yet. Currently, the trend definition is still the moving average crossover thingy which will alleviate the financial crisis drawdown but will also whipsaw a lot more than desired or necessary.

A total return of 14,059% will turn a $10M initial cap into a $1.4B account.

Still more work to do.


@Guy. Not trying to criticize or anything, but whatever you did with the strategy clearly didn't work. If you look closely, the highest returns were achieved in 2015, and at the end of 2019 it is still at the same level. In other words, the strategy didn't make any money after 2015 if you stayed invested. Clearly, the alpha has gone from the "objectives of the program".

In contrast, Vladimir's algo is consistently making money: an ever-increasing, upward-sloping equity curve.

@Nadeem, let me see. You are playing a money game and the final result is inconsequential. You like a smoother equity curve even if it is $1.2B lower over the trading interval. Well, to each his own, as they say. And as I have said, there is still work to be done, especially in the protective-measure department.

Vladimir does have a good trading strategy and I do admire his coding skills. They are a lot higher than mine.

Maybe you would prefer the following notebook. Who knows?


@Guy, I'm curious if you've allocated any real capital to any of your strategies, and if so, what the result has been in the live market?

@Guy Fleury

To my mind,

order_optimal_portfolio(opt.TargetWeights(wt), [opt.MaxGrossExposure(LEV)])  

does not produce any weights optimization by any criterion; it just executes the requested weights more accurately and constrains the target leverage.

When writing my version of the strategy, I added the LEV parameter specifically for you.
Here are the results with only two changes to the setup: LEV = 2.0, initial capital = 10000.

It is not possible to trade this strategy with LEV > 1.0 in IRA accounts.
Results do not include margin expenses.
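A rough numpy stand-in for the "no optimization" point above (an illustration, not the Quantopian optimizer itself): a gross-exposure constraint only caps the sum of absolute weights, it does not reshuffle them.

```python
import numpy as np

def cap_gross_exposure(weights, lev):
    """Scale weights down proportionally when gross exposure exceeds `lev`.
    A rough stand-in for opt.MaxGrossExposure: no optimization, just a cap."""
    gross = np.abs(weights).sum()
    if gross > lev:
        weights = weights * (lev / gross)
    return weights

w = np.array([0.5, 0.5, 0.5, 0.5, -0.5])   # gross exposure = 2.5
capped = cap_gross_exposure(w, lev=2.0)    # scaled by 0.8, relative mix unchanged
```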


@Guy Fleury

Guy, I am sorry to say that from my point of view you continue to make the same mistakes you have always made. I simply do not see the point of your posts if you are not willing to share your code.

Or indeed to elucidate on the mysterious "pressure points" you refer to in producing the equity curves you come up with. I am well aware I have made no contribution to this thread either, but I have at least taken the trouble to look through your website trying to find out exactly what your "alpha power" is based on.

Sadly it is more of the same - many formulae, many equity charts, and much obfuscation.

It is of course entirely your prerogative to present your trading systems in this way. But to my way of thinking the exercise is entirely pointless.

I am genuinely interested in your point of view but by refusing to provide details, what could have been an interesting contribution to the obscure and arcane arts you portray is rendered entirely without meaning.

Once again may I respectfully request that you provide the code behind your alterations to this system kindly provided by and improved upon by others.

And I repeat the words "respectfully" and "request" lest we get into the same sort of dispute we have so often fallen into in the past.

@Vladimir
In various posts above @Guy quotes leverage of 1.4 to achieve one of his more impressive equity curves. He also states that he has used the Q1500 universe rather than the Q500. I have tried the Q1500 as well as the Q3000 and the difference is underwhelming.

Here is a "pressure point". Use the Q3000 and reduce the number of stocks invested in to 10. Probably very unstable but I have not bothered to run it over different re-balance dates.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 10.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)
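The trade_bonds logic above boils down to slot arithmetic: every equal-weight slot not currently filled by a stock position goes to bonds. As standalone Python:

```python
def bond_fill_weight(target_slots, stock_positions_held):
    """Fraction of the portfolio to park in bonds: one equal-weight slot
    (1 / target_slots) for every slot not currently filled by a stock."""
    return (target_slots - stock_positions_held) * (1.0 / target_slots)

# With 10 target slots and 6 stock positions held, 4 slots' worth goes to bonds
partial = bond_fill_weight(10, 6)   # 0.4
full = bond_fill_weight(10, 10)     # 0.0 -- fully invested in stocks
```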

Here is another pressure point for you....and hey, I'm going to tell you what it is! Q3000 and invest in 5 stocks. See we can all do it eh? And of course you can make it even better with a little extra secret sauce....like reducing the max DD and vol.....but that is for another day. Don't want to over-excite myself.

If you take a look at the code you will see I have made a total mess of the parameters. But that is the point isn't it? If you just present pretty pictures you have absolutely no idea how they were created. And no interest either.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Hey All,

Thank you so much for the contributions to this Algo. This was what I had in mind when I shared this strategy.

Special thanks to those that made the code more efficient (Jamie, Joakim, Dan, Vladimir)

As you can see, many versions of this idea work well. On a historical basis, we can say this tweak or that tweak worked "better", but we have no way of knowing which tweak will have the best performance in the future. As such, I look for strategies that are simple, explainable, and robust, and that show good performance across a wide variety of parameter changes. To me, this strategy does that.

I’ll address a few comments I read in this thread.

Some have said that perhaps the Alpha has gone away b/c the last couple years have had a bit lower performance. While this is always a possibility, I certainly don’t think this is the case. First of all, things we are using here (value, quality, momentum, trend following) have been around for decades. All of these have been used well before this backtest even started. Factors go in and out of favor (this is especially true with Value’s bad performance over the last 5 years). To me, that certainly doesn’t mean the Alpha is gone. Those that have had that opinion in the past (such as with Value’s underperformance in the late 1990s) were very mistaken.

I view these factors as rooted in human behavior (too long of an explanation to get into now). I am of the opinion that human behavior will not change.

Some have questioned the validity of the trend following filter. The original algo used a ROC over the last 6 months. Joakim’s versions used the 50 and 200-day moving averages. Both of these techniques essentially do the same thing (though at slightly different speeds). I think both are logical and will provide value in the future.

As far as the validity of this rule in general, the question becomes: do you believe in trend following (time-series momentum) or not? I certainly do. There are 100+ year backtests that prove its value (see AQR), not to mention decades of real-world practitioner results.

Over the last 9 years, we have only had equity pullbacks in the 10-20% range. In these types of shallow pullbacks, our trend-following regime filter will be a drag on performance. What happens is you get out of the market, then have to buy back in at a higher price. The question then becomes: do you think these shallow pullbacks are the new normal, or that we will eventually see a 30-50% pullback, which has happened many times in history? In a 30-50% pullback, our trend-following regime filter will add a ton of value (such as what happened in 2008).

You can always mess with the different speeds of the lookback for trend following. Academic research has shown that 3-12 month lookbacks work. Instead of trying to pick the optimal lookback, I am in favor of diversifying amongst several lookbacks (this is not implemented in the current algo).
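Diversifying across lookbacks (which, as noted, is not implemented in the current algo) could be sketched as averaging cross-sectional momentum ranks computed at several horizons; the tickers and data below are hypothetical:

```python
import numpy as np
import pandas as pd

def blended_momentum(prices, lookbacks=(63, 126, 189, 252), skip=10):
    """Average of skip-adjusted momentum ranks across several lookbacks.

    prices: DataFrame of daily closes, one column per asset (hypothetical).
    Each horizon's momentum is cross-sectionally ranked before averaging,
    so longer horizons don't dominate purely by magnitude.
    """
    scores = []
    for lb in lookbacks:
        mom = prices.iloc[-skip - 1] / prices.iloc[-lb - skip - 1] - 1.0
        scores.append(mom.rank())
    return pd.concat(scores, axis=1).mean(axis=1)

# Toy data: 'UP' trends up, 'DOWN' trends down over 300 days
prices = pd.DataFrame({'UP': np.linspace(50, 100, 300),
                       'DOWN': np.linspace(100, 50, 300)})
blend = blended_momentum(prices)   # 'UP' outranks 'DOWN' at every horizon
```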

Anyway, those are some of my thoughts. Thank you all for checking out my algo, I am happy with the great response from the wonderful Quantopian community!

Chris Cain, CMT

Employing rotation into a mere 5 stocks monthly is somewhat more dependent on roll date - as I believed would be the case. 4 different roll dates produce CAGRs of between 19 and 29% when you don't employ any leverage. The code still needs correcting for the occasional unintended leverage.

But for what it is worth here is one such back test where I have not played fast and loose with the parameters. Pretty impressive.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=0), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=0), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 5 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Chris, very well put. I 100% agree with your comments.

Antony,

in your last 3 posts you probably used somebody's broken algo with bugs and added your own.
By Christopher Cain's definition this algo is long only with no leverage.
You set context.Target_securities_to_buy = 5.0.
Initial capital $100,000.
Just check the positions on the first day of your algo trading, 5-19-2003.
It had 20 positions in stocks at $20,000 each, $400,000 in total, and a short position in bonds of -$315,000.
That is leverage of more than 7; sometimes it reaches more than 13.

Did the algorithm realize your intentions?
Is it appropriate to use the results of a broken algo in argumentation?

I have tested your parameter setting in my algo.

QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('IEF', 'TLT'); D = 8;
MOM = 126; EXCL = 10; N_Q = 50; N = 5; MA_F = 20; MA_S = 200; LEV = 1.0;

The results are not perfect, but for somebody they may be acceptable.
Another proof of Christopher Cain's concept.


Hi Vladimir thanks for the comment.

The first couple of tests were simply to show the futility of posting charts without the code. I deliberately mucked up the parameters.

For the third test I used Jamie Corriston's code (I believe?) and none of my own. I simply set the number of stocks to 5 in both relevant lines. Leverage is 1 most of the time, with the occasional spike to 2 which I have not investigated.

I am going to look much further if I decide to trade this thing, and will now download/clone your version of Jamie's code as amended.

With many thanks to you.

I was originally attracted to the idea since I had drafted a monthly momentum re-balance system here on the website a couple of years ago, which attracted much attention until Guy Fleury started commenting and the whole thread went off the rails.

This system is much better - I had failed to add any fundamentals filter which certainly helps with the variability over different rebalance dates.

Actually Vladimir, I don't think you posted a version of the code? Whose or which version are you using?

Employing rotation into a mere 5 stocks monthly is somewhat more dependent on roll date, as I believed would be the case. Four different roll dates produce CAGRs of between 19% and 29%.

I know you're joking around/making a point, but my recommendation (and I believe Quantopian's guidance as well) is to eliminate day-of-the-month overfit noise by putting a 20-day SMA on the signal and trade every day instead of monthly. Typically this gives your backtest a lot more data points, allowing you to hold more positions at once without diluting your alpha signal (just the hits, no deep cuts, so to speak) or increasing turnover, and it is likely to improve your Sharpe ratio (via lower volatility and slippage). This algo trades on such low-frequency data that it may not benefit as much from day-of-the-month diversification, but I have used this technique to great effect in the past.

Chris, thanks for posting this. Has anyone been able to convert this algorithm to work with Alpaca? Or, if not, can someone point me in the right direction for how to get started (with converting this or any other Quantopian algorithm)?

Hey Viridian,

"I know you're joking around/making a point, but my recommendation (and I believe Quantopian's guidance as well) is to eliminate day-of-the-month overfit noise by putting a 20-day SMA on the signal and trade every day instead of monthly."

Can you expand on this? How would the logic work to implement this technique with this Algo?

@Viridian Hawk
Thank you for that. It certainly sounds an excellent idea to average the signal.

@Mike Burke
I'm not sure there would be much point averaging the fundamental factors, unless they are ratios to share price, since their frequency is so low. But you could easily average the momentum factor, which is the second leg of this algo's filter.

I'm a bit rusty with the Q API, but I believe all one has to do is add the momentum provisions as a custom factor. Custom, since the built-in momentum factor does not adjust for excluding the last ten mean-reverting days.

Then add it to the pipeline, find the top x, and use it as a filter as per the existing algo.

When I get around to it I will post an example.
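A minimal sketch in plain NumPy (outside the Quantopian API; the function name and window constants mirror the parameters discussed in this thread) of the skip-period momentum described above, i.e. the 126-day return measured after dropping the most recent 10 days:

```python
import numpy as np

MOMENTUM_LOOKBACK = 126  # ~6 months of trading days
SKIP_DAYS = 10           # drop the most recent, mean-reverting days

def skip_momentum(prices):
    """Return p[t-10] / p[t-136] - 1: 126-day momentum ignoring the last 10 days.

    `prices` is a 1-D series of daily closes, oldest first; it must contain at
    least MOMENTUM_LOOKBACK + SKIP_DAYS + 1 observations. This matches the
    algo's prices[:-10].pct_change(126).iloc[-1] computation.
    """
    p = np.asarray(prices, dtype=float)
    return p[-SKIP_DAYS - 1] / p[-MOMENTUM_LOOKBACK - SKIP_DAYS - 1] - 1.0

# sanity check on a synthetic ramp of 140 prices (1.0, 2.0, ..., 140.0)
print(skip_momentum(np.arange(1.0, 141.0)))  # 130/4 - 1 = 31.5
```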

I'm still puzzling over the occasional spike in leverage, or rather how to correct it. I do not want to use optimise since I don't want equal weighting. Therefore you need to allocate slightly differently from the current algo: you need to allocate only a percentage of unused capital, which is not the way it is currently done.

@Mike Burke -- Sometimes it's as easy as putting a SimpleMovingAverage on the pipeline output, but for this algorithm you'd have to refactor the execution side of it. Basically, start by creating a dictionary of target weights (including the bond allocation) based on the current day's pipeline output and bull-market crossover. Instead of ordering those weights, append them to a list on context, e.g. context.daily_weights.append(today_target_weights). Then prevent overflow by popping off the oldest entry once you're over 20, e.g. if len(context.daily_weights) > 20: context.daily_weights.pop(0). Finally, combine the weights and normalize them. That gives you the 20-day average portfolio, which you can then order (most easily via the optimizer with TargetWeights(combined_weights)).
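A standalone sketch of that bookkeeping in plain pandas (the function name and toy tickers are invented for illustration; in the algo the history list would live on context):

```python
import pandas as pd

WINDOW = 20  # number of daily weight snapshots to average

def rolling_average_weights(history, today_target_weights, window=WINDOW):
    """Append today's target weights to `history`, trim the list to `window`
    entries, and return the normalized average portfolio."""
    history.append(pd.Series(today_target_weights, dtype=float))
    if len(history) > window:
        history.pop(0)  # drop the oldest snapshot
    # Union of all tickers seen in the window; missing days count as zero weight
    combined = pd.concat(history, axis=1).fillna(0.0).mean(axis=1)
    return combined / combined.sum()

# two days of hypothetical target weights
history = []
rolling_average_weights(history, {'AAA': 0.5, 'BBB': 0.5})
w = rolling_average_weights(history, {'AAA': 0.5, 'IEF': 0.5})
print(w.to_dict())  # AAA 0.5, BBB 0.25, IEF 0.25
```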

@Viridian Hawk.
Yes, the portfolio management aspect is the key. Nice solution and simple. You just add the new stocks at a weight of 1/20th and existing stocks in the portfolio at whatever percentage of equity they have reached and then normalize.

This is what you should be doing even if you do not intend to average the signal or trade daily.

I find it very difficult to analyse the output on Q. Loathe it, in fact. But I suppose what is happening is that most of the time enough big-hitting stocks drop out to make room for new entrants, and then sometimes they don't, and you get huge leverage because you have failed to normalize the allocations.

At least with your suggestion new entrants get a fair crack of the whip and strongly trending stocks still retain an overweight position.

I was stupidly thinking of just dividing the un-allocated capital amongst the new entrants at a roll date, but I like your solution better.

@Viridian Hawk.
Another thing I have been pondering is the running of this strategy now that you can no longer trade through Quantopian. If you were willing to take the risk of monthly allocation it would be no effort to run it manually, although whether you would take your signals from Quantopian, buy your own data, or look for online screeners I am not too sure. Perhaps Morningstar offers a free or cheap screener on the fundamentals.

If you wanted to run it daily, automation would be the better option, and I suppose you could convert the algo to run on QuantConnect.

I suppose there must be other solutions but the last thing I would want is to have to run the system on my own server.

What do you do?

@Viridian Hawk.

I have become so used to designing my own systems on my own software and it is so very much better to be able to analyse a spreadsheet of your results which contains all the prices, all the signals, all the trades and so forth. You can turn them inside out and upside down and really get to the bottom of why the system is doing what it is doing.

In that respect I find Quantopian so very difficult - I can never grasp the full picture clearly enough.

I suppose logging is one option, although it is restricted. I suppose running in debug is another although so slow and tedious.

I understand the need to protect their data suppliers but for me at least it does make life difficult.

I suppose the research environment may be a better option since I think (?) you can use pipeline there now.

How best do you analyse your systems on Quantopian?

Incidentally, for those who insist on using leverage by design, it is worth considering leveraging the bond portfolio using futures rather than IEF. I did a great deal of work a while ago on the all weather portfolio concept and it might be worth looking at replacing IEF with the relevant future on US Government bonds. I have no idea yet whether Quantopian allows you to mix futures and equities within one system, but by way of example you could allocate 90% of your cash to stocks and 100% equivalent of your cash to bonds. Or whatever.

tenquant.io looks to be a promising source of high-quality free fundamental data. In theory it is faster than Morningstar, which can have up to a three-day delay, whereas tenquant.io claims to scrape the financial data as soon as it goes public. As far as automation goes, I've been using Alpaca. I couldn't get their version of zipline to run on my computer, so I just rolled my own barebones trading framework using the REST API, which was pretty simple. So maybe somebody with more experience with Alpaca's version of zipline (which I think is called pylivetrader) can chime in on whether there's any incompatibility that would stop this algo from running, but my impression is that it shouldn't be too hard to get it to work.
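For anyone rolling their own REST wrapper as described, here is a minimal sketch of assembling an Alpaca v2 market-order request. The endpoint path and header names follow Alpaca's published REST API, but double-check them against the current docs before live use; the function name and placeholder keys are invented, and nothing is actually sent here:

```python
import json

ALPACA_BASE_URL = "https://paper-api.alpaca.markets"  # paper-trading host

def build_order_request(symbol, qty, side, key_id, secret_key):
    """Assemble the URL, headers, and JSON body for an Alpaca v2 market order.

    Nothing is sent; pass the pieces to e.g. requests.post(url, headers=headers, data=body).
    """
    url = ALPACA_BASE_URL + "/v2/orders"
    headers = {
        "APCA-API-KEY-ID": key_id,
        "APCA-API-SECRET-KEY": secret_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "symbol": symbol,
        "qty": qty,
        "side": side,            # "buy" or "sell"
        "type": "market",
        "time_in_force": "day",
    })
    return url, headers, body

url, headers, body = build_order_request("SPY", 10, "buy", "MY_KEY", "MY_SECRET")
```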

Thank you, most useful.

I have by no means finished my work on this excellent algo, but as an interim report, I have made progress reducing leverage without using "optimise". Certain stocks were repeatedly not getting sold at or around the close, so I moved the stock transactions to the open. By the time the bond trades happened at the close, all stock sales (in tests so far) had been processed, and hence the allocations were more accurate.

To combat negative allocations to bonds, I simply reduced any negative allocations to zero.

Imperfect and doubtless I shall improve on it once I get to the bottom of the matter.

I'm not at all sure about trading every day and averaging the momentum signal. A similar effect could be achieved (so far as avoiding the dangers of using a single monthly re-allocation date) by trading weekly on an un-averaged signal.

My concern is that trading once a month is very convenient, if potentially risky. Trading every day or even every week would be impossible for me unless I automated.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=0), 
        #date_rules.week_end(),
        time_rules.market_open(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=0), 
        #date_rules.week_end(),
        time_rules.market_close(minutes=20)
    )
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 5 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    log.info(get_datetime(tz=None))
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if x == context.bonds:  # compare assets directly (x.sid is an int, context.bonds an asset)
            pass
        elif x not in top_n_by_momentum.index:
            order_target_percent(x, 0)
            print('GETTING OUT OF', x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x, 'percentage to buy',(1.0 / context.Target_securities_to_buy))

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    if percent_bonds_to_buy < 0:
        percent_bonds_to_buy = 0
    order_target_percent(context.bonds , percent_bonds_to_buy)
    print('percent_bonds_to_buy',percent_bonds_to_buy)

@Zenothestoic

The thing that has always bothered me about the pipeline implementation (starting with Joakim's version, I think) is that the ranking is done against all stocks (QTradableStocksUS?). In my mind the correct way to rank the factors is to use a mask to limit the comparison to those that are in your universe, i.e. rank(mask=universe), in this case Q3000US. However, if you do it this way, the cumulative return drops to less than half. Can you make an argument (other than that it works better) for using rank() and not rank(mask=universe)?

@Steve Jost
To be honest I am very unfamiliar with the Quantopian API especially as it seems to have changed somewhat since I last visited it.

I find working with the Quantopian IDE about as difficult an experience as engaging in carpentry where you are only allowed to look through a keyhole at your hands and the workbench.

Looks like you are right; how very bizarre. I added a mask separately to each of the fundamental rankings and came up with different and, as it happens, worse results. Live and learn, eh?

@Zenothestoic

I'm not an expert on Quantopian API, but to understand how rank(mask=universe) differs from rank(), it can help to construct two versions of the pipeline in a notebook (see attached). From this, it seems that rank() is ranking against a larger population than Q3000US and using screen = universe does not change the numerical ranks.

(Notebook attached; preview unavailable.)
ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(mask=universe,ascending=True)  

Yes, the wrong way round of course if you are seeking low debt-to-equity. As it stands, however, high debt-to-equity creates exceptional profits, presumably because the company's debt produces leveraged earnings.

If you are a leverage junkie then this might actually suit your purposes - a highly profitable system (at least for the test period) and no need to take on leverage in your trading account.

A sort of no-recourse borrowing where your account can not go below zero.

@ Steve Jost
No, now that you have pointed out the error, I cannot put forward any argument for using rank() as opposed to rank(mask=universe).

@ Zenothestoic

It seems the prudent approach is to use rank(mask=universe) even though the return is less.
Without the mask, I worry that the exceptional return may have been a happy accident and not likely to repeat going forward.

Regarding the debt-to-equity ratio: I've also found that high 'financial_leverage' gives good results.
I think the two metrics are more or less equivalent.

That said, it's most likely the combination of large debt and high 'roic' that does the trick.
A company that generates a high return on capital will do well to leverage its capital at low (and historically decreasing) interest rates.

@ Steve Jost
Your logic sounds right. Lots of debt, but used profitably.

@ Steve Jost

For the private punter that sort of algo makes a lot of sense. Leverage is built in, but in such a way that it cannot bankrupt you. The return is high enough that you can devote a small amount of capital to it and still have it make you a decent amount of money over 5 or 10 years. And your capital employed is small enough that if it all goes horribly wrong it won't be a catastrophe.

I'm glad this sort of algo has made a return here. So much more interesting than what Big Steve wants for his Billions.

@steve

Thank you for posting the notebook. Using mask=universe is something I have been experimenting with a lot lately, trying to understand why choosing mask results in different returns; for a single factor it should not. Consider a scenario: say you rank against the whole universe of 9000 securities (i.e. without a mask). Pick one security in that universe, X, and say X has the highest fcf and hence a rank of 9000, so it is on top. Now assume you rank against Q500US but X is not in Q500US; according to our screen it will be excluded. Continuing further, say a stock Y is ranked 8999 in the whole universe and also happens to be in Q500US, so it ends up in our selection. Had you used mask=universe instead, its rank would be 500 and it would still end up in our selection. Therefore, using the mask should not change the results.

But the question is why it happens in the above algo. I think the answer lies in this line: quality = (roic + ltd_to_eq + value)

Adding together rank scores computed over different populations messes up the combined ranking. If one instead used quality = (roic + ltd_to_eq + value).rank(mask=universe), the result would be exactly the same, since the final rank depends only on the ordering of the combined score. Hence, using mask in that final ranking does not change the result.

This is the most plausible explanation I can come up with. I might be missing something; please let me know if this sounds logical.
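A concrete toy illustration of the point being debated (tickers and factor values invented, plain pandas): summing ranks computed over the full population can crown a different top stock than summing ranks computed within the universe, because the outside stocks stretch the rank gaps unevenly across factors.

```python
import pandas as pd

# Five stocks; only A, B, C are in our tradable universe.
f1 = pd.Series({'A': 10, 'B': 5, 'C': 1, 'D': 9,   'E': 8})
f2 = pd.Series({'A': 1,  'B': 3, 'C': 2, 'D': 100, 'E': 200})
universe = ['A', 'B', 'C']

# rank() over the whole population, sum the ranks, then screen to the universe
combined_full = (f1.rank() + f2.rank())[universe]

# the rank(mask=universe) analogue: rank within the universe only, then sum
combined_masked = f1[universe].rank() + f2[universe].rank()

# D and E sit between A and B on f1, inflating A's rank gap there
print(combined_full.idxmax())    # 'A'
print(combined_masked.idxmax())  # 'B'
```

With a single factor the two orderings always agree; it is the summation across factors that makes the population matter.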

@ Nadeem
I agree: if it's just one factor, it doesn't matter whether you rank over the universe or over the entire population.
If you combine several factors (as in this script), then it seems to me that it does matter which way you do it.
You don't want stocks outside of your universe to influence the weighting placed on the factors.

I think Dan Whitnable may have said it best in the comments he added to his source code.
Note that he included (mask=universe) for the various factors but commented out that part of the code.
I think this was commented out only so that his backtest would match the results of previous versions.

    # Get the fundamentals we are using.  
    # Rank relative to others in the base universe (not entire universe)  
    # Rank allows for convenient way to scale values with different ranges  
    cash_return = ms.cash_return.latest.rank() #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank() #(mask=universe)  
    roic = ms.roic.latest.rank() #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(ascending=True) #, mask=universe)  
    # Create value and quality 'scores'  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  

In the spirit of co-operation, I wanted to report that I am having better results by doing away with the skipped period of 10 days in the momentum calculation (which people leave out because stocks are claimed to mean-revert over that period). Frankly, I have never been impressed by the argument. Yes, I tend to use 10 days in my mean-reversion system tests, but the benefits of NOT leaving out the past 10 days in trend-following calculations seem sound.

The other change I have made is to use a more sensitive MA crossover of 10 / 100 for the SPY permission filter.

The algo is now reaching Guy Fleury proportions.

For a really ritzy and risky shoot the lights out type system, I'm just using debt to equity as a factor - the higher the better.

Using 5 stocks produces a 38% CAGR over the 2003 to 2019 test period. There is still the occasional leverage spike to iron out; average leverage 1.06, volatility 30, max drawdown 37%. Universe = Q3000US(). Weekly rebalancing.
I won't bother to post the algo at this stage since I still have to deal with a few matters, such as the leverage problem.

But it is certainly all beginning to look highly amusing.

This trading strategy has about the same structure as many others on Quantopian: select some stocks, rank them on some criteria, and rebalance periodically. Use some minimal (lagging) protection, i.e. a 50-200 SMA crossover, for this stock-to-bond switcher. A simple technique that has been around for ages.

Would it have been reasonable in 2003 to do so? Definitely, we were just getting out of the aftermath of the Dot.com bubble. A lot of developers (and portfolio managers) had bad memories of that debacle. So, yes, going forward, they would have put some kind of protection which could have very well been some variant and serving the same purpose: to limit the impact of drawdowns and volatility. It would also appear that it is easier to sell to management an automated trading strategy with some kind of protection than without.

This trading strategy is very hard to play with high stakes and a limited number of stocks. However, for a smaller account, it should do quite fine, as long as it stays limited.

The strategy has built-in scalability by design (up to a limit).

Playing 5 stocks on a $10M account is not that reasonable. It starts with a $2M bet size. I do not think that many here are ready for that, whatever the results of some backtests. However, as you increase the number of stocks you see a decline in overall performance. This is quite reasonable too. The strategy tries to pick stocks that are already performing above market averages, and therefore should provide, on average, an above-average performance. The thing is that as you add new stocks, they have a lower-ranked expectancy than those already selected, and this will tend to lower bet size and overall performance.

The question becomes: is this still acceptable over the long term?

It is all a question of confidence. In which scenario would you put your money on the table for some 16+ years? A backtest can give you an indication of what could have been. It does not give you what will be. However, based on the behavior of a particular trading strategy, you can ascertain that going forward would be much like what the trading strategy did in the past. You will not have the exact numbers, but you can still make some reasonable approximations. Sometimes, relatively accurate, considering that, otherwise, you would not even have a clue as to what is coming your way.

It is only Quantopian who is looking to place £10m into a strategy. I will be placing a mere £10k.

So yes, I entirely agree. And yes, as you expand to 20 stocks and beyond the returns decrease but that is what you would have to do as the capital grew.
Trend following is old as the hills. The only difference with this strategy is the accidental discovery of the "wrong" use of the debt to equity ratio.

The big advantage of this strategy is that for a small amount of capital, large gains may be possible for some period without the use of leverage in the trading account.

The leverage is applied by the corporates themselves and is therefore not a direct risk to your trading account ~ it is non-recourse borrowing as regards the trader.

Incidentally, and not surprisingly, you will find returns are also pretty high using 10 and 20 stocks, so for the smaller account as the equity grows, you could employ the greater capital in this way.

All in all, this is a strategy for the small player looking for large returns from a leveraged play without the risks of taking on borrowings on his own balance sheet.

And agreed as to the future. Quo vadis. As in life, so in the markets.

Now, Guy – I have shared an exact strategy with the community. How about you share the code to one of your adaptations of this strategy so that we can see how you achieve your outsize returns?

Alpha Decay Compensation

The following chart is based on Dan Whitnable's version of the program at the top of this thread. All the tests were done using $10M as initial capital. The only thing I wanted to demonstrate was that the structure of the program itself will dictate some of its long-term behavior, and as such, one thing we could do was make an estimate as to the number of trades that will be taken based on how many stocks will be traded.

There are 17 consecutive tests presented on that chart, each test having the number of stocks incremented as the BT number increased. The Q3000US universe was used instead of Q500US; there was a 2% CAGR advantage in doing so. No leverage was used. Nonetheless, the strategy did use some at times (up to 1.6) for short periods, mainly due to the slippage factor. On average the leverage was at 1.0 some 95+% of the time.

An analysis of the data can help better understand the overall behavior of the trading strategy and plan for what you would like to see or might prefer as initial settings. I have not made any change to the logic of Whitnable's version, except that from BT # 24 I commented out the no-slippage line of code and moved the rebalance function to the beginning of the month and the beginning of the day to allow more trades to be executed. On the last test there were 240 stocks, yet a rebalance generated 16,486 transactions in one day; that is about 3 hours of average trade execution. So, yes, there is a lot of slippage.

I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for.

The strategy, as you increase the number of stocks, does generate a lot of slippage and there is a cost to that. I prefer having a picture net of costs. Therefore, the Quantopian default settings for commissions and slippage were in force.

Some observations on the above chart.

It takes about 30+ stocks to have a diversified portfolio. The more stocks you add, the more the portfolio's average price movement will start to resemble the market average indices. It is only if a trading strategy can generate some positive alpha that it can exceed market averages.

As the number of stocks increases, we see the total return increase up to BT # 25 with its 17.8% CAGR. After that, the total CAGR decreases as the number of stocks increases.

As explained in my previous post, this is expected since as we increase the number of stocks we are adding lower-ranked stocks having lower CAGR expectancies. The result is reducing the overall portfolio performance.

The more you want to make this a diversified portfolio (having it trade more stocks), the more we have a reduction in the overall performance. It is still positive, it is still above market averages and it does generate some positive alpha.

What I find interesting is the actual vs. estimated number-of-trades columns. The actual number of trades comes from the tearsheets, whereas the estimate is just that: an estimate based on the behavior of this type of portfolio-rebalancing strategy. There is a direct relationship between the number of trades and the number of stocks traded, as the following chart illustrates.

Participation Prize

A subject that is not discussed very often here: there is a participation prize just for playing the game. Since the rebalance is on a monthly basis, as the number of stocks grows, the average holding duration will grow too, jumping by close to month multiples. Some attribute the gains to their alpha generation when part of it is simply the market-average background.

The estimated free x_bar is a measure of what the market offers just for holding some positions over some time interval. If you hold SPY for 20 years, you should expect to get SPY's CAGR over that period. The same goes for holding stocks months at a time. And over the past 10 years, in an up market, just participating would have generated a profit, on the condition that you made a reasonable stock selection.

One cannot call the estimated free x_bar alpha generation. It is a source of profit for sure, but it is not alpha per se. Alpha is what is above the market average: the stuff that exceeds the benchmark's average total return. As the number of stocks increases, we can see the proportion of the estimated free x_bar increase as a percentage of the actual x_bar. This is understandable: the average net profit per trade (the actual x_bar) is decreasing, while the average duration increases and the turnover rate decreases (x_bar is the average net profit per trade; refer to the long equation in a previous post).

Having the total return decrease after BT # 25 can also be interpreted as alpha decay, due to the very structure of the program itself. No compensation is applied for this return degradation, and it will continue simply by adding more stocks to the portfolio, or adding more time. And this becomes a rather limiting factor: the more you want to scale this thing up by adding more stocks or more time, the more the alpha will disintegrate. All that is needed is to compensate for the phenomenon.

I have this free and old 2014 paper, which is still valid today, that deals with how to compensate for this return decay (see https://alphapowertrading.com/index.php/publications/papers/263-fix-fraction-2). It should help anyone solve that problem, and thereby, achieve higher returns. The solution does not require much. However, the first step is to understand the problem, and then apply the solution. The paper does provide the explanations and equations needed to address the return decay problem.

Thank You, Zenothestoic for your continued work on this. I look forward to seeing your final version.

@Nadeem Ahmed
Thank you for your kind words!

@Guy Fleury
"I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for."

Then just for once Guy, why don't you do so? It would be most interesting. I am sure everyone would enjoy your adaptation of this system.

Here is "Quality Companies in an Uptrend" (Dan Whitnable's version with fixed bond weights) and some other improvements.

# Quality companies in an uptrend (Dan Whitnable version with fixed bond weights)  
import quantopian.algorithm as algo

# import things need to run pipeline  
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd


def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]

    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 20  
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 200  
    context.TF_CURRENT_LOOKBACK = 20

    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10  
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())  

def make_pipeline(context):  
    universe = Q500US()  
    # Slice out SPY's fast/slow moving averages, then wrap each slice in a
    # 1-day SMA to turn the single-asset slice back into a usable factor
    spy_ma_fast_slice = SMA(inputs=[USEquityPricing.close],  
                            window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma_slow_slice = SMA(inputs=[USEquityPricing.close],  
                            window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma_fast_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma_slow_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow

    cash_return = ms.cash_return.latest.rank(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe)  
    value = (cash_return + fcf_yield).rank()  
    quality = roic + ltd_to_eq + value  
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    # Alternative using log returns: momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)  
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe

def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Rule to open/hold positions: top momentum names, opening new positions
    # only in an uptrend, but keeping existing holdings either way.
    # Note: parenthesize carefully -- in query strings 'not' binds looser than
    # '&', so the original 'trend_up or (not trend_up & ...)' form always
    # evaluated True and silently disabled the regime filter.
    rule = 'top_quality_momentum and (trend_up or (index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open a bond position to fill the unused portfolio balance  
    # (the commented-out variant always held at least one 'slot' of bonds)  
    # bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  

def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint  
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = []  
        )  
    # Record our weights for insight into stock/bond mix and impact of trend following  
    # record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())  

def record_vars(context, data):  
    record(leverage = context.account.leverage)  
    longs = shorts = 0  
    for position in context.portfolio.positions.values():  # Python 3: values(), not itervalues()  
        if position.amount > 0: longs += 1  
        elif position.amount < 0: shorts += 1  
    record(long_count = longs, short_count = shorts)  

And here is its performance with the parameters of my code; you may compare it to the results of my version.
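For anyone who wants to sanity-check the momentum construction outside of Quantopian's pipeline, here is a minimal pandas sketch of the same idea: the return from `t - (lookback + skip)` to `t - skip`, so the most recent `skip` days are excluded from the signal. The tickers and prices below are made up for illustration.

```python
import numpy as np
import pandas as pd

def skip_recent_momentum(prices, lookback=126, skip=10):
    """Momentum measured from t-(lookback+skip) to t-skip, skipping
    the most recent `skip` days to sidestep short-term mean reversion."""
    return prices.shift(skip) / prices.shift(skip + lookback) - 1.0

# Made-up example: one steadily uptrending name and one flat name.
idx = pd.date_range("2020-01-01", periods=200, freq="B")
prices = pd.DataFrame({
    "UP": np.linspace(100.0, 150.0, 200),  # steady uptrend
    "FLAT": np.full(200, 100.0),           # no trend
}, index=idx)

mom = skip_recent_momentum(prices).iloc[-1]
# The uptrending name ranks above the flat one.
```

Note that the pipeline version approximates this with `returns_overall - returns_recent`; the ratio form above is the exact equivalent of skipping the recent window.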


@Vladimir
Wonderful, thank you for that; I will look closely tomorrow. I too have been working on putting all the ranking and masking into Pipeline, but so far it does not seem to relate at all to my former versions. I shall look to correct the errors over the next few days, and in the meantime will look with great interest at your code.