New Strategy - Presenting the “Quality Companies in an Uptrend” Model

We wanted to share with the Quantopian community an algorithm named “Quality Companies in an Uptrend”.

This non-optimized, long-only strategy has produced returns of 18.0% on the Q500US universe since 2003, with a Sharpe Ratio of 1.05, 12% alpha, and a beta of 0.53.

We’d appreciate your input and feedback on the strategy.

Combining Quality With Momentum

This is a “quantamental” strategy, combining both fundamental factors (in this case, the quality factor) with technical factors (in this case, the cross-sectional momentum factor) in a quantitative, rules-based way.

The idea of this strategy is to first identify high-quality companies, then tactically rotate into those with the best momentum.

What is Quality?

The characteristics of “quality” companies are rather broad. Quality companies are typically defined as those with some combination of:

  • stable earnings
  • strong balance sheets (low debt)
  • high profitability
  • high earnings growth
  • high margins

How Will We Measure Quality?

For our strategy, we focus on companies with a high return on equity (ROE) ratio.

ROE is calculated by dividing a company’s net income by its average shareholder equity. A higher ROE indicates a higher-quality company, and high-ROE companies have historically produced strong returns.
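As a toy illustration of the calculation (all numbers below are made up):

```python
# Toy illustration of the ROE calculation (all numbers hypothetical).
net_income = 120.0    # trailing twelve-month net income, $M
equity_start = 950.0  # shareholder equity at period start, $M
equity_end = 1050.0   # shareholder equity at period end, $M

avg_equity = (equity_start + equity_end) / 2.0
roe = net_income / avg_equity
print(round(roe, 3))  # 0.12, i.e. a 12% return on equity
```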

Rules for The “Quality Companies in an Uptrend” Strategy:

  1. Universe = Q500US
  2. Quality (ROE) Filter. We then take the 50 stocks (top decile) with the highest ROE. This is our quality screen; we are now left with 50 high-quality stocks.
  3. Quality Stocks With Strong Momentum. We then buy the 20 stocks (of our 50 quality stocks) with the strongest relative momentum, skipping the last 10 days (to account for mean reversion over this shorter time frame).
  4. Trend Following Regime Filter. We only enter new positions if the trailing 6-month total return for the S&P 500 is positive. This is measured by the trailing 6-month total return of “SPY”.
  5. This strategy is rebalanced once a month, at the end of the month. We sell any stocks we currently hold that are no longer in our high ROE/high momentum list and replace them with stocks that have since made the list. We only enter new long positions if the trend-following regime filter is passed (SPY’s 6-month momentum is positive).
  6. Any cash not allocated to stocks is allocated to IEF (7-10yr US Treasuries).
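For anyone who wants to tinker outside of Quantopian, the stock-selection rules above can be sketched in plain pandas. All tickers, ROE values, and prices below are synthetic, and the trend-following regime filter is omitted for brevity:

```python
import numpy as np
import pandas as pd

# Synthetic universe: 500 fake tickers with random ROE and 180 days of prices.
np.random.seed(0)
stocks = [f"S{i}" for i in range(500)]
roe = pd.Series(np.random.rand(500), index=stocks)
prices = pd.DataFrame(np.random.rand(180, 500) + 50.0, columns=stocks)

TOP_ROE, TOP_MOM, SKIP, LOOKBACK = 50, 20, 10, 126

# Rule 2: quality screen -- top 50 stocks by ROE.
quality = roe.nlargest(TOP_ROE)

# Rule 3: 126-day momentum, skipping the most recent 10 days.
momentum = prices[quality.index][:-SKIP].pct_change(LOOKBACK).iloc[-1]

# Buy list: the 20 quality stocks with the strongest momentum.
buys = momentum.nlargest(TOP_MOM)
print(len(buys))  # 20
```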

Potential Improvements?

What potential improvements do you think we can add to this strategy?

Some of our ideas include:

  • A composite to measure Quality, not just ROE
  • Adding a value component
  • Another way to measure momentum?
  • A better/different trend following filter?

We’d love to see what you guys come up with. Given the simple nature of this strategy, its performance over the last 16+ years is strong, and it should provide a good base for further testing.

Christopher Cain, CMT & Larry Connors
Connors Research LLC

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(trade, date_rules.month_end() , time_rules.market_close(minutes=30))
    schedule_function(trade_bonds, date_rules.month_end(), time_rules.market_close(minutes=20))
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roe = Fundamentals.roe.latest

    pipe = Pipeline(columns={'roe': roe},screen=universe)
    return pipe

def before_trading_start(context, data):
    
    context.output = algo.pipeline_output('pipeline')
    context.security_list = context.output.index
        
def trade(context, data):

    ############Trend Following Regime Filter############
    TF_hist = data.history(context.spy , "close", 140, "1d")
    TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    if TF_check > 0.0:
        context.TF_filter = True
    else:
        context.TF_filter = False
    ############Trend Following Regime Filter End############
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(context.security_list,"close", 180, "1d")      
    #DF here is the output of our pipeline, contains 500 rows (for 500 stocks) and one column - ROE
    df = context.output  
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[top_n_roe.index][:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)
            

            
            
def trade_bonds(context, data):
    # Put any capacity not used by stock positions into the bond ETF.
    amount_of_current_positions = 0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds, percent_bonds_to_buy)
390 responses

Wow, what a great strategy! Thank you for sharing this! I've been meaning to create something similar for trading in my own account (paper only initially).

I made the below quick modifications:

  1. Use ROIC instead of ROE, as ROIC includes debt as well (high returns on equity with little leverage is high quality in my book)
  2. Added low ltd to equity ranking to the 'quality ranking' as again, low leverage is high quality in my book. This results in lower total returns, but also lower volatility and lower drawdowns, so a slightly higher Sharpe Ratio.
  3. Also added two 'value' metrics and added to the ranking. I prefer to buy 'quality' when it's on sale, but you can easily comment this out.
  4. Changed rebalance to 6 days before month end. My 'hypothesis' is that most people get paid around this time (the 25th), so more money might be flowing into the market then, pushing it up (I have nothing to back this up, just my theory).

Will try to improve it further when I have more time.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(trade, date_rules.month_end(days_offset=6) , time_rules.market_close(minutes=30))
    schedule_function(trade_bonds, date_rules.month_end(days_offset=6), time_rules.market_close(minutes=20))
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()

    quality = (roic +
               ltd_to_eq +
               value
               )

    pipe = Pipeline(columns={'roe': quality}, screen=universe)  # column kept as 'roe' so trade() needs no changes
    return pipe

def before_trading_start(context, data):
    
    context.output = algo.pipeline_output('pipeline')
    context.security_list = context.output.index
        
def trade(context, data):

    ############Trend Following Regime Filter############
    TF_hist = data.history(context.spy , "close", 140, "1d")
    TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    if TF_check > 0.0:
        context.TF_filter = True
    else:
        context.TF_filter = False
    ############Trend Following Regime Filter End############
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(context.security_list,"close", 180, "1d")      
    #DF here is the output of our pipeline, contains 500 rows (for 500 stocks) and one column - ROE
    df = context.output  
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[top_n_roe.index][:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)
            

            
            
def trade_bonds(context , data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Another quality rank I would possibly look at adding is consistently high ROIC over, say, the last 5 years, e.g. (mean 5yr ROIC) / (std dev of 5yr ROIC).
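That consistency idea could be sketched, for instance, as a simple mean-over-std ratio; the ROIC histories below are made up purely for illustration:

```python
import numpy as np

# Hypothetical 5 years of annual ROIC for two companies.
steady = np.array([0.18, 0.19, 0.20, 0.18, 0.19])   # high and stable
choppy = np.array([0.40, -0.05, 0.25, 0.02, 0.33])  # similar mean, volatile

def stability(roic):
    # Higher score = consistently high ROIC relative to its own variability.
    return np.nanmean(roic) / np.nanstd(roic)

print(stability(steady) > stability(choppy))  # True
```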

Interesting!

Could someone please help me understand line numbers 58 & 59? I am new to coding, and I can only understand that line number 58 creates a dataframe of daily close prices for 140 days, and line number 59 calculates the 126-day return and gives that value. Is that correct? I don't understand what iloc[-1] does and why it is required.

Is my understanding correct that the trend filter basically means: if the 126-day return is positive then True, if not then False?
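For reference, a minimal reproduction of what those two lines compute, on a synthetic price series (iloc[-1] simply picks the last, i.e. most recent, value of the rolling-return series):

```python
import numpy as np
import pandas as pd

# Synthetic 140-day close series standing in for SPY (values are made up).
close = pd.Series(np.linspace(100.0, 110.0, 140))

# 126-day percent change for every day where it is defined...
returns = close.pct_change(126)

# ...and iloc[-1] selects the most recent (last) of those values,
# i.e. "today's" trailing 126-day return.
tf_check = returns.iloc[-1]
print(tf_check > 0.0)  # True for this rising series
```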

In good times smaller companies will grow faster than the big companies; in bad times it is better to invest in large companies. You might want to adjust your universe filter based on some metric that allows allocation to small versus big.

Great additions Joakim thank you.

I have found that "composite" methods to measure factors such as value and quality tend to work better. This is consistent with the research I have read. One that immediately comes to mind is Jim O'Shaughnessy's book "What Works on Wall Street", where he shows that a composite value factor outperforms each individual value factor. I have found the same with quality.

To be honest, I was surprised by how well this tests out given the simple nature of the original algo. I have also done a lot of robustness testing with this algo, changing the trend following filter, the momentum lookback, the days skipped, etc., and it holds up well.

Interested to see what others come up with as far as improvements.

Chris

I don't know if it is intentional, but the algo frequently runs at higher leverage. I tried this and the average leverage over the same period is 1.19. Is there any way to restrict it to the range 1.00-1.02?

@Guy, thank you very much for presenting us with screenshots instead of code of what you have managed to do with another's IP that they very kindly shared on the forums for us all to work on. Sure was useful...

I have to agree with Jamie here. If you are going to modify the strategy please be transparent about what you did and provide the source code in the spirit of collaboration.

Thanks,
Chris

Chris (Cain), I have done a great deal of work on these types of strategies and I think it is essential to test using different rebalance dates, e.g. first of month, 13th, 21st... whatever. I found that huge and rather disturbing differences could result, which made me feel uncomfortable with the robustness of the strategy. The effect was particularly noticeable where the filters resulted in small numbers of stocks in the portfolio. Nonetheless I will clone your code (for which many thanks) and look more closely with interest.

Incidentally, it is good to see some more ideas coming through which do not follow the stifling criteria for the Quantopian competitions. It makes for a much more interesting forum. I was getting very fed up with the "neutral everything" approach.

Here's an update of my modified version of your strategy. Not sure it's much of an improvement, but posting nonetheless.

The main change is that this one uses SPY MA50 > MA200 as the bull/bear market trend check, rather than the trailing positive 6-month return of SPY. Either way seems to work quite well.

I also added 3yr 'stable_roic' and 'stable_margins' ranks (rewarding consistently high ROIC and margins), but these are commented out as they seem to bring down performance, possibly due to making the model too complex. Or maybe I've made a mistake with them?

One that immediately comes to mind is Jim O'Shaughnessy's book "What Works on Wall Street", where he shows that a composite value factor outperforms each individual value factor. I have found the same with quality.

^Indeed! I keep hoping they will release a 5th edition, with updates on how their value composites have performed since the last edition. Value factors have struggled in recent years, I believe.

FYI, I won't be sharing any more updates unless others start to contribute as well.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value) / np.nanstd(value)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(trade, date_rules.month_end(days_offset=6) , time_rules.market_close(minutes=30))
    schedule_function(trade_bonds, date_rules.month_end(days_offset=6), time_rules.market_close(minutes=20))
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank() ).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (roic + 
               # stable_roic +
               # stable_margins +
               ltd_to_eq +
               value +
               0
               )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe

def before_trading_start(context, data):
    
    context.output = algo.pipeline_output('pipeline')
    context.security_list = context.output.index
        
def trade(context, data):

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############
    
    #DataFrame of Prices for our 500 stocks
    prices = data.history(context.security_list,"close", 180, "1d")      
    #DF here is the output of our pipeline, contains 500 rows (for 500 stocks) and one column - ROE
    df = context.output  
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[top_n_roe.index][:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)
            

            
            
def trade_bonds(context , data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Thanks @Guy, will you contribute any of your spectacular secret sauce here? :)

@All, looks like I did make a mistake in my CustomFactor. I believe this is the correct way of doing it:

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value[0:-1]) / np.nanstd(value[0:-1])

Let me know if I still got it wrong.

I didn't make any changes to the strategy, but in the spirit of collaborating to improve on the algorithm, I tried to clean up the style and efficiency of the code a bit. Some of the changes include:
- Changed the custom factor definition to use the axis argument in the np.nanmean and np.nanstd functions.
- Moved the pipeline_output into the scheduled function instead of before_trading_start. It used to be best practice to call pipeline_output in before_trading_start, but last year, we made a change such that pipelines are computed in their own special event and calling pipeline_output just reads the output, so you no longer need to put it in before_trading_start.
- Condensed some of the code in trade.
- Cleaned up some of the spacing and indentation to match common Python style guides.

Again, nothing material, and I don't think it perfectly follows Python style conventions, but hopefully others can learn from some of the changes!
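To illustrate the axis change in the custom factor: np.nanmean(value) with no axis argument collapses the whole (days x assets) window into a single scalar, so every asset gets the same score, whereas axis=0 reduces over the time dimension only, producing one value per asset. A small demo with an arbitrary toy array:

```python
import numpy as np

# A toy (window_length x n_assets) block like pipeline hands to compute():
value = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])

whole_block = np.nanmean(value)        # one scalar for the entire window
per_asset = np.nanmean(value, axis=0)  # one mean per asset (column)

print(whole_block)  # 11.0
print(per_asset)    # [ 2. 20.]
```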

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

I tried a few different known quality factors. Return on assets makes a tiny improvement in alpha and drawdown.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    """
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    """
    
    quality = (
        roa +
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by our composite quality score (pipeline column is named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

This version reduces risk: the max drawdown is -10%, beta is much lower, and the Sharpe ratio is higher.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 10 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    quality = (
        roa +
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by our composite quality score (pipeline column is named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

If you only care about returns, this version is for you.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 10.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    quality = (
        roa +
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by our composite quality score (pipeline column is named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

There is flawed logic in the algo above: it takes on 3x or more leverage. In fact, there are leverage spikes even in the earlier version. Could someone please fix the earlier version so that leverage is no more than 1?

Thanks @Jamie for fixing the CustomFactor. It seems to be working as I had intended now, and I've included 'stable_roic' in the ranking composite in the attached update.

Other changes I made:

  • Changed the trading universe from Q500US to Q1500US, effectively a proxy for the S&P 1500 (S&P 500 large caps + S&P 400 mid caps + S&P 600 small caps).
  • Excluded stocks in the Financial Services sector from the universe, since 'Quality' for financial companies tends to be measured differently than for stocks in other sectors, e.g. due to their larger balance sheets.

I also kept the latest ROIC rather than using the latest ROA (to me, ROIC makes more intuitive sense, but I could be wrong).

Leverage is somewhat controlled in this one, but if anyone could help bring it down to consistently closer to 1.0 (without using the Q Optimizer), I think that would be a great contribution. It might require a daily check of leverage --> rebalance?
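One shape such a daily check could take (a sketch only, not from the algo above): if gross exposure exceeds a cap, scale every target weight down proportionally. On Quantopian this would be driven by a daily `schedule_function` reading `context.account.leverage`; the helper below is plain Python, and the function name `trim_weights` is hypothetical.

```python
# Hypothetical daily leverage guard: scale target weights down
# proportionally whenever their gross exposure exceeds a cap.
def trim_weights(weights, max_gross=1.0):
    """Return a copy of `weights` scaled so gross exposure <= max_gross."""
    gross = sum(abs(w) for w in weights.values())
    if gross <= max_gross or gross == 0:
        return dict(weights)
    scale = max_gross / gross
    return {asset: w * scale for asset, w in weights.items()}
```

Each trading day one would recompute the trimmed weights and re-issue `order_target_percent` for any position whose weight changed.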

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 
from quantopian.pipeline.classifiers.morningstar import Sector

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    financials = Sector().eq(103)
    utilities = Sector().eq(207)
    # Base universe set to the Q500US
    universe = Q1500US() & ~financials #& ~utilities

    roic = Fundamentals.roic.latest.rank()
    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
                # roa +
        stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by our composite quality score (pipeline column is named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Indigo Monkey, perhaps this was your intention, but all you are doing here is changing the leverage (for the stock positions, anyway).

The "risk reduced" version is just running the strategy at 0.5 leverage (again, just for the equity positions).

The version with huge returns runs the strategy at elevated leverage.

In the code, "context.Target_securities_to_buy" and "context.top_n_relative_momentum_to_buy" need to be the same to keep the leverage around 1.

These two variables control the amount we are buying (context.Target_securities_to_buy) and our final momentum sort (context.top_n_relative_momentum_to_buy).

For the reduced-risk version, this is hard to tell since we are putting unused cash into bonds. It will show leverage around 1, but that is half bonds (as you have it coded).
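The arithmetic behind this can be sketched in a couple of lines: each new position is sized at 1/`Target_securities_to_buy`, and up to `top_n_relative_momentum_to_buy` positions get opened, so the implied gross equity exposure is roughly their ratio. This is a simplification that ignores the bond sleeve and subsequent price drift.

```python
# Rough gross equity exposure implied by the two parameters:
# n_buys positions are opened, each sized at 1/target of the portfolio.
def implied_gross_exposure(n_buys, target):
    return n_buys * (1.0 / target)
```

With the original settings (20 buys, target 20) exposure is ~1.0; the "risk reduced" version (10 buys, target 20) runs at ~0.5; the high-return version (20 buys, target 10) runs at ~2.0.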

Chris

@Joakim, thank you for your great contributions, as always.

I know Joel Greenblatt and others have also excluded Financials, as they have a much different capital structure, making some value and quality metrics not comparable across sectors.

As for the leverage, I think one thing that we can do is change the rebalance logic.

As currently coded, if we are holding a stock for multiple months, we don't rebalance it back to the target allocation. My thought here was to let our winners run, and not make the position size smaller just because it had good performance.

If we change this logic to rebalance each position in the portfolio back to its target weight every month, that will go a long way, I believe.
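The proposed change can be sketched as follows: instead of only sizing new entries, compute equal target weights for every selected name each month (winners included) and zero for dropped holdings, then order each to target. The helper name `monthly_target_weights` is hypothetical; in the algo's `trade()` this would amount to calling `order_target_percent` on every member of `top_n_by_momentum`, held or not.

```python
# Sketch of monthly rebalance-to-target: reset every selected name to
# equal weight and flag stale holdings for exit.
def monthly_target_weights(selected, held, target_count):
    """Equal weights for all selected names; zero for dropped holdings."""
    w = 1.0 / target_count
    weights = {asset: w for asset in selected}   # winners reset to target too
    for asset in held:
        if asset not in weights:
            weights[asset] = 0.0                 # exit names no longer selected
    return weights
```

The trade-off Chris notes still applies: this caps leverage drift, but it also trims winners back each month rather than letting them run.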

Chris

I deleted my earlier post as there was no improvement. I thought I would just backtest with Joakim's trend filter and a shorter-horizon value factor. Improved returns.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 
from quantopian.pipeline.classifiers.morningstar import Sector

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    financials = Sector().eq(103)
    utilities = Sector().eq(207)
    # Base universe set to the Q500US
    universe = Q1500US() & ~financials #& ~utilities

    roic = Fundamentals.roic.latest.rank()
    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank()
    value = (Fundamentals.free_cash_flow.latest / Fundamentals.enterprise_value.latest).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # roa +
        stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()
 
    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by our composite quality score (pipeline column is named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Thanks @Chris,

As currently coded, if we are holding a stock for multiple months, we
don't rebalance it back to the target allocation. My thought here was
to let our winners run, and not make the position size smaller just
because it had good performance.

^This makes a lot of sense to me. Why penalize your winners? As you said, [cut your losses and] let your winners run! Or to paraphrase the Oracle: "The best holding period for a great [quality] company is forever." :)

@Nadeem, thanks for your contribution! I wonder how your way of defining value is different from Morningstar's 'cash_return', which is also FCF / EV:

cash_return
Refers to the ratio of free cash flow to enterprise value. Morningstar calculates the ratio by using the underlying data reported in the company filings or reports: FCF /Enterprise Value.
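The ratio itself is simple arithmetic; a toy illustration (the figures below are made up, not Morningstar data) shows FCF / EV and how it ranks, where a higher ratio means more free cash flow per unit of enterprise value:

```python
import pandas as pd

# Toy illustration of the cash_return ratio (FCF / Enterprise Value).
# Figures are invented for illustration only.
fcf = pd.Series({'AAA': 12.0, 'BBB': 5.0, 'CCC': 9.0})      # free cash flow
ev = pd.Series({'AAA': 150.0, 'BBB': 40.0, 'CCC': 200.0})   # enterprise value

cash_return = fcf / ev       # higher = cheaper per unit of cash flow
ranks = cash_return.rank()   # ascending, as with pipeline .rank(): 1 = lowest yield
```

Whether the numerator is trailing-twelve-month or latest-quarter FCF is exactly the difference being discussed here.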

Here's another slightly 'improved' version (during this backtest period, at least). The only change is that I calculated 'stable_roic' over 5 years instead of just 3. This won't fully kick in until 2007, as there's no data on Q from before 2002.

Any kind soul out there want to help me set this up in IB's paper trading environment, using Quandl price and fundamental data (if available)? Would iBridgePy be the way to go? Or something else?

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 
from quantopian.pipeline.classifiers.morningstar import Sector

class Mean_Over_STD(CustomFactor):
    window_length = 1260
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    financials = Sector().eq(103)
    utilities = Sector().eq(207)
    # Base universe set to the Q500US
    universe = Q1500US() & ~financials #& ~utilities

    roic = Fundamentals.roic.latest.rank()
    roa = Fundamentals.roa.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    stable_quality = ( stable_roic + stable_margins  ).rank() 
    
    quality = (
        roic + 
                # stable_quality +
                # roa +
        stable_roic +
                # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by our composite quality score (pipeline column is named 'roe')
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for those top 50 stocks
    prices = data.history(top_n_roe.index, "close", 180, "1d")
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Joakim - I read somewhere that Morningstar's cash_return uses trailing-twelve-month FCF, while I use the latest FCF, which is for the latest quarter only. My thinking is to use the latest data (perhaps the market has a short memory). Just my opinion.

Fair enough, I didn't know that. Makes sense, thanks Nadeem.

@Joakim, I have a question; maybe I'm a bit confused here. You are using (ascending=True) in the debt-to-equity ranking, and ascending order is the default. That means you are rewarding high ROIC, high cash return, high total yield, and high debt-to-equity. Wasn't the original thought to favor the lowest debt-to-equity? Shouldn't the ascending parameter therefore be set to False? Please help; maybe I'm confused about the logic here.

@Nadeem, good catch! Yes, my thought was indeed that low debt to equity companies were high quality companies, so I should have set ascending order to False. That doesn't work nearly as well obviously, but rather than keep this one as is, I would remove it and possibly replace it with some other quality factor.
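For anyone wanting to sanity-check the ranking direction outside the platform, here is a minimal pandas sketch (the tickers and debt-to-equity values are made up) showing why ascending=True rewards the most leveraged names when the top summed ranks are bought:

```python
import pandas as pd

# Hypothetical long-term debt-to-equity values for four made-up tickers
ltd_to_eq = pd.Series({'AAA': 0.1, 'BBB': 0.5, 'CCC': 1.2, 'DDD': 3.0})

# ascending=True (the default): the most leveraged name gets the HIGHEST rank,
# so a screen that buys the top summed ranks treats high debt as high quality (the bug)
buggy = ltd_to_eq.rank(ascending=True)

# ascending=False: the least leveraged name gets the highest rank,
# matching the intended "low debt = high quality" logic
intended = ltd_to_eq.rank(ascending=False)
```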

@Chris Cain,

The description of the strategy you presented fits my personal investment goal.
Thanks for sharing.
I backtested your original algorithm with line 9 commented. Why cheat myself.
Results metric is good.
When I tried to use order_optimal_portfolio() results got worse.
I checked some positions (Backtest -> Activity -> Positions) and have some questions about your ordering engine:
If TF_filter==False, should all positions in top_n_by_momentum be sold, or only part of them?
I have seen the number of stock positions slowly changing from 20 to 0 over several months in a market downtrend.
Why, at an initial capital of 100000, was there negative cash of 68000 on 2003-03-31 (that is, leverage of 1.68)?
On 2007-07-31 there was negative cash of 50000 ...
In one of Joakim Arvidsson's long-only strategies I have seen a negative position in the bond (-80%) together with 20 stock positions.
Maybe we need to fix the engine first before we start sending long-only strategies to the sky?

First of all, thank you to Chris Cain. It's good of you to share your algo, and it's a very tempting proposition, although it probably needs a little more investigation. Now that I am back at my computer I have been running a number of tests with different re-balancing dates, and, as I have always found with this type of algorithm, the differences in performance are worrying.

I have been using the tidied up code kindly provided by Jamie McCorriston un-amended save for the monthly re balance date.

Perhaps using the maximum offset of 22 days from the month end is foolish - I imagine there are months with fewer than that number of trading days. Nonetheless the results were interesting.

Here are some of the total returns I got by varying the re-balance date:

1574%
455%
1674%
1825%
1477%

Leverage needs looking at - it reaches 2 on occasion with a corresponding net dollar exposure of 200%. Uncomfortable for the lily livered such as myself.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=22), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=22), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by the composite quality score (column named 'roe' for legacy reasons)
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 quality stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Effectively, as I have always felt with these types of strategies, one would be best off hedging one's bets and splitting the portfolio into a few different parts, each part using a different re-balance date.

Hey @Zenothestoic. Yes, I think date_rules.month_end(days_offset=22) doesn't make much sense. What happens in the months that have a holiday and fewer than 23 business days?
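Whether a given month can honor such a large offset is easy to estimate offline. A rough pandas sketch (plain weekdays, ignoring exchange holidays; 2019 is chosen arbitrarily) counts how many months even have 23 or more business days, the minimum needed for days_offset=22 to land inside the month:

```python
import pandas as pd

# All weekdays in 2019 (exchange holidays are ignored for simplicity)
days = pd.bdate_range('2019-01-01', '2019-12-31')

# Business days per calendar month
per_month = days.to_series().groupby(days.month).size()

# Months with fewer than 23 weekdays cannot honor days_offset=22;
# subtracting market holidays would make even more months fall short
short_months = (per_month < 23).sum()
```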

Thanks to Chris for posting this algorithm. It would be a good candidate to trade in one's IRA or another account that is restricted to long-only. The issue of leverage over 1.00 will have to be solved before I would actually trade this. The code that I suspect is causing excess leverage is the "GETTING IN" section. There doesn't seem to be any consideration of the current positions. It will add to the complexity, but the bonds/equities should be rebalanced together.

@Zenothestoic I share your concern with different starting dates, and I have put together a list of the results starting at different times during 2003. Although the total returns vary by 500%, this is due to compounding, and once you adjust for the time differences the CAGR is very similar. I am more concerned with the starting year.
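For reference, the adjustment Peter describes is just the standard CAGR conversion. A small helper makes the comparison concrete (the figures below are illustrative, not taken from his notebook):

```python
def cagr(total_return, years):
    """Convert a total return (e.g. 15.74 for 1574%) over `years`
    into a compound annual growth rate."""
    return (1.0 + total_return) ** (1.0 / years) - 1.0

# Illustrative only: a 1574% return over 16.5 years and an 1174% return
# over 15.75 years compound to similar annual rates
a = cagr(15.74, 16.5)
b = cagr(11.74, 15.75)
```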


Peter
Yes, it would be interesting to see what would have happened in the tech crash. But good point on CAGR (and of course DD) being very close.
Mind you the system rode through 2008 very well, but of course each crash is different.

It is likely for instance that a severe drawdown would have occurred in 1987 - no trend following system could have reacted with the swiftness required at that date, certainly not six month MOM or a 50/250 MA crossover.

But it looks very tempting otherwise if a few kinks can be ironed out.

Chris yes - offset = 22 probably does not make much sense. But I always find it difficult to drill down and find out why on these online back testers. It's the sort of stuff I need to have my own data for, with which I can fiddle to my heart's content.

Also I need to check to see whether Q's standard commission and slippage is included.

Peter
Did you have to run those tests one by one or does Q let you automate that sort of testing now?


@Peter Harrington,

Are the backtest results you posted from the original (Chris Cain) algo or from others?
They have different Trend Filters and Factors.
The original (Chris Cain) algo with date_rules.month_end() has Total Returns of 1521.59%.

@Guy Fleury: Multiple participants in this thread have expressed frustration with the sharing of screenshots instead of attaching a backtest. Please refrain from sharing screenshots built on top of the shared work in this thread. You are entitled to keep your work private, so if you don't want to share, that's fine. But please don't share screenshots in this thread as it seems the intent of the thread is to collaborate on improving the algorithm.

@Jamie, understood. I have erased all my posts in this thread since my notes without screenshots become simple opinions without corroborating evidence.

(Added)

@Jamie, as you said: I have no obligation to share anything. I thought it was a forum where anything innovative or reasonably pertaining to the subject at hand would have been more welcomed in whatever form it was presented. My bad.

For those few that might be interested, this thing can exceed 30,000%. But that is now just an opinion; I can't show a screenshot or the program itself to corroborate it. It is nonetheless a 40% CAGR over the 16+ years, giving a total profit of some $2.3 billion.

Of note, Jim Simons (Medallion Fund) has managed a 39% CAGR after fees for years. It required a 66% CAGR to make that happen. The fees were 5/44, a little bit more than the usual hedge fund 2/20. In case some are looking for objectives.

For me, this strategy is still not enough even though it could be pushed higher. I have other strategies that can go further without depending on what I consider an internal procedural bug. But a program bug, if it is consistent, dependable and profitable, could come to be considered an added “feature”.

No strategy change here -- just stylistic change (working off Jamie McCorriston's version). Moved the selection logic out of the rebalance function and into pipeline via a progressive mask, thinking it might be faster and that some might be more accustomed to doing the filtering via pipeline masks.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor, Returns
import numpy as np 


TOP_N_ROE_TO_BUY = 50 #First sort by ROE
RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
MOMENTUM_SKIP_DAYS = 10
TOP_N_RELATIVE_MOMENTUM_TO_BUY = 20 #Number to buy


class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

            
def initialize(context):
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)            
    
 
def make_pipeline():
    # Base universe set to the Q500US
    universe = Q500US()
    m = universe

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )
    m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)
    
    quality_momentum = Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p()
    m &= quality_momentum.top(TOP_N_RELATIVE_MOMENTUM_TO_BUY, mask=m)

    pipe = Pipeline(columns={},screen=m)
    return pipe
        
    
def trade(context, data):
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_hist = data.history(symbol('SPY'), "close", 200, "1d")
    spy_ma50 = spy_hist[-50:].mean()
    spy_ma200 = spy_hist.mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in security_list:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in security_list:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)
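A side note on the momentum expression in the pipeline version above: the difference of log1p returns is just the log of the skip-adjusted gross return, a monotone transform of the pct_change momentum computed in the original trade() function, so the top-N selection comes out the same (up to the usual off-by-one in window conventions and pipeline using the prior day's close). A quick numpy check on synthetic prices, using the algo's 126-day lookback and 10-day skip:

```python
import numpy as np

np.random.seed(0)
# Synthetic daily closes: enough history for a 126-day lookback plus a 10-day skip
prices = 100 * np.cumprod(1 + np.random.normal(0, 0.01, 200))

LOOKBACK, SKIP = 126, 10

# Original trade() version: percent change over LOOKBACK days, ending SKIP days ago
direct = prices[-1 - SKIP] / prices[-1 - SKIP - LOOKBACK] - 1

# Pipeline version: log1p of the long-window return minus log1p of the skip-window return
long_ret = prices[-1] / prices[-1 - SKIP - LOOKBACK] - 1
short_ret = prices[-1] / prices[-1 - SKIP] - 1
via_logs = np.log1p(long_ret) - np.log1p(short_ret)
# via_logs == log(1 + direct): a monotone transform, so rankings are identical
```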

Quality Companies in an Uptrend (original by Chris Cain) Long-Short Count and TF Check with line 9 commented.
You may see the number of stock positions slowly changing from 20 to 0 over several months in a market downtrend.

@Chris Cain,
Is it by design?


Quality Companies in an Uptrend (original by Chris Cain) Leverage.

The best ways to fix the problem:

1. Use order_optimal_portfolio().
2. Change the execution time.
3. Use @Peter Harrington's recommendation: rebalance the bonds and equities together.


@Vladimir, Yes this is by design.

Here is the logic:

If the trend following filter is not passed (6-month momentum is negative, 50SMA<200SMA, whatever) then we sell stocks that fall out of our final buy list (in the original algo, that was stocks with best ROE then best momentum).

Since the TF filter is not passed, those stocks are not replaced.

If the TF filter is not passed and a stock remains in our final buy list, it is held.

The design is to scale out of positions if the market is trending down, instead of getting out of all of them at once. This is evident in the graphs you posted.

Thanks for the great question,
Chris
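Chris's rules can be boiled down to a few lines of plain Python. This is only an illustrative sketch of the scale-out logic (the helper name and structure are mine, not the platform API):

```python
def rebalance_targets(holdings, buy_list, tf_filter):
    """Sketch of the scale-out rules described above:
    - positions that fell out of the buy list are always sold
    - positions still on the buy list are held regardless of the filter
    - new names are only opened when the trend filter is passed
    """
    to_sell = [s for s in holdings if s not in buy_list]
    to_buy = [s for s in buy_list if s not in holdings] if tf_filter else []
    return to_sell, to_buy
```

In a downtrend (tf_filter False) the book shrinks one exit at a time rather than liquidating all at once, which matches the position counts Vladimir observed.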

Here is a version that mostly fixes the leverage problem.

The starting point is the modified code posted by Viridian Hawk.

There is a problem in the 2006-2007 time frame where a security, BR, is purchased but then cannot be sold for many months.
Eventually the sell order does fill a year later at exactly 4 pm (I wonder if the system forced the sale?).
This led to a problem with bonds going short, I think because the max number of stock positions was exceeded.

Anyway I made the following changes:

Changed the bond trading logic so that the allocation would not go negative.

Changed the stock trading logic so that it re-balances winning positions that carry forward to the next month. Maybe better would be to let the winners run and reduce the size of new positions constrained by available cash, but it's a bit more complicated to implement.

Changed the stock trading logic so that it re-balances high quality-momentum stock positions that are held when the trend is negative. I think this helps to balance the bond/stock allocation to reduce leverage during these times.

I was NOT able to find a way to avoid buying BR, so there still is slightly elevated leverage during the 2006-2007 time frame.
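The bond-sleeve fix amounts to clipping the counted stock positions at the target, so the computed bond weight always stays in [0, 1]. A standalone sketch of that calculation (the function name and default are illustrative):

```python
def bond_weight(n_positions, holds_bonds, target_n=20):
    """The bond sleeve gets the fraction of target slots not filled by stocks.
    Clipping the stock count at target_n keeps the weight within [0, 1],
    so the bond position can never be ordered short."""
    stock_count = n_positions - 1 if holds_bonds else n_positions
    stock_count = min(stock_count, target_n)
    return (target_n - stock_count) / target_n
```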

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor, Returns
import numpy as np 


TOP_N_ROE_TO_BUY = 50 #First sort by ROE
RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
MOMENTUM_SKIP_DAYS = 10
TOP_N_RELATIVE_MOMENTUM_TO_BUY = 20 #Number to buy


class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

            
def initialize(context):
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)            
    

def make_pipeline():
    # Base universe set to the Q500US
    universe = Q500US()
    m = universe

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )
    m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)
    
    quality_momentum = Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p()
    m &= quality_momentum.top(TOP_N_RELATIVE_MOMENTUM_TO_BUY, mask=m)

    pipe = Pipeline(columns={},screen=m)
    return pipe
        
    
def trade(context, data):
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_hist = data.history(symbol('SPY'), "close", 200, "1d")
    spy_ma50 = spy_hist[-50:].mean()
    spy_ma200 = spy_hist.mean()

    if spy_ma50 > spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in security_list:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
        elif x in security_list and context.TF_filter==False:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('REBALANCING',x)
    
    for x in security_list:
        if context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN OR REBALANCING',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = min(len(context.portfolio.positions),context.Target_securities_to_buy)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = min(len(context.portfolio.positions),context.Target_securities_to_buy) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Steve,

I tried your algo; it is now always holding one extra position. Try recording the number of positions and you will see 21 instead of 20. Not sure what is going on. Perhaps it is not selling the bond during an uptrend and keeping it in the portfolio. This might be a factor that is reducing returns. Not sure though.

The 50/200-day SMA crossover filter on SPY is the weakest point in this strategy. It saves the strategy during 2008. So it's basically a switch designed, in hindsight, to save the strategy during one historical market catastrophe. Who knows if it will work in the future -- we don't have enough data points to draw any statistically meaningful conclusions on that signal and how it correlates to "quality." So, that bit of the code is likely an overfit.

@Steve Jost,

To avoid buying BR you may try this code

from quantopian.pipeline.filters import Q500US, StaticAssets

universe = Q500US() & ~StaticAssets(symbols('BR'))

I think this will not solve the problem completely.

@Nadeem,

You are right; in Steve's algo, IEF exists all the time, at least at the beginning.

"So it's basically a switch designed to in hindsight save the strategy during one historical market catastrophe."

Then cut it out. The strategy is still far from shabby without it, and you have the comfort that it is no longer curve-fit. The drawdown has increased from 20% to 40% in the attached test. Still way lower than the S&P DD in 2008.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 20.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q500US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    # NB: both averages below use the same 50-day window and the comparison is >=,
    # so TF_filter is always True here -- the trend filter has been deliberately disabled.
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 50, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab the top 50 stocks by the composite quality score (column named 'roe' for legacy reasons)
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 quality stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Jamie McCorriston,

Can you advise on how to make this exclusion filter work?

universe = Q500US() & ~StaticAssets(symbols('BR','PD'))  

2006-04-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-05-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-06-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-07-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-08-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-09-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-10-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-11-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-12-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-01-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-02-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-03-22 12:30 WARN Cannot place order for PD, as it has de-listed. Any existing positions for this asset will be liquidated on 2007-03-22 00:00:00+00:00.
2007-03-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.

@Vladimir, What are the mods? Will you post the algo?

@Vladimir, @Jamie, it looks like those stocks might have been halted or delisted around that time?

@Vladimir, be ready to add to the list as you increase the number of stocks to be treated.

universe = Q1500US() & ~StaticAssets(symbols('CE', 'CFBX', 'DL', 'GPT', 'INVN', 'WLP', 'ADVP', 'IGEN', 'MME', 'MWI'))  

Why are you guys introducing lookahead bias by filtering specific stocks from the universe? Stocks get halted and delisted all the time -- it's just part of trading. If you have found evidence of data errors, perhaps best to just report them to Quantopian so they can fix them. Otherwise, I think it's best to make a strategy's logic robust enough that it doesn't trip up when positions get halted or delisted.

@Viridian, in these cases, stocks are delisted, halted, or have gone bankrupt, but their positions stay open, meaning that your bet is still on the table and might not be accounted for in the final result. Excluding them should liberate those bets. A quick and dirty method, I agree. But in development it becomes acceptable, since your interest is at a much higher level than solving trivia.

As I have said before, you can push this strategy beyond 30,000% total return. You will be able to do so by putting more stocks at play and improving on the strategy design.

Leaving in those delisted stocks would require added code to track them yourself and somehow get rid of them as you go (in order to be more realistic). Then again, Quantopian could have its engine take care of it by automatically closing those positions as they appear. But it would have to distinguish between halted stocks and permanently delisted ones.

(ADDED)

@Viridian, even if you put an exclude list, some come back anyway. Go figure.

@Guy

Guy, I can understand that you have chosen not to share code on this forum, but I am intrigued by the idea of a 30,000% return since 2003. Would you consider telling us exactly how it is achieved? I am assuming you use no leverage?

Can you also tell us the max DD and volatility on the system extended to these lofty levels?

I would love to invest for that sort of return, but lack those sort of skills.

@Chris Cain,

it looks like I was able to bring the leverage in your algorithm to an acceptable level using order_optimal_portfolio().
I hope you will comment on the consistency of the backtest results with the design before I post the code snippet.

Click to load notebook preview

Long-Short Count, and TF Check.

Click to load notebook preview

@Vladimir: The symbols method assumes you are asking for the asset whose ticker symbols are BR and PD as of today. So the ~StaticAssets(symbols('BR', 'PD')) filter is excluding two stocks that picked up the tickers BR and PD in 2007 and 2019, respectively. You can specify a reference date with set_symbol_lookup_date or use sid to specify the assets that delisted in 2006.

That said, I agree with Viridian Hawk. When an asset gets delisted, the backtester automatically closes out any shares held in that asset, 3 days after the delist date, at the last known price of the asset. It's probably best to let the backtester handle that position so as to avoid lookahead bias as much as possible.

@Jamie McCorriston

set_symbol_lookup_date('2006-01-01') worked, but it cost 30% of the profit.

Thank you.

Properties of This Trading Strategy

I got interested in the above strategy, first for its longevity (16+ years) and second, for its built-in alpha.

My first steps are always to look at the limitations and then see if I can improve the thing or not. The initial strategy used \(\$\)100k and 20 stocks, putting its initial bet size at \(\$\)5k.

A portfolio will have to live by the following payoff matrix equation: $$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H} \cdot \Delta \mathbf{P}) = F_0 \cdot (1 +g(t) - exp_t(e))^t$$

The total return of the original scenario was 1551.58\(\%\), for total profits of about \(\$\)1,551,580, which surely demonstrates that even with 16+ years it did not get that far. Nonetheless, in CAGR terms, that is a 17.64\(\%\) compounded rate over the period. It starts to be interesting since it outperforms the majority of its peers, most of which sit at a 10.00\(\%\) CAGR or less. Therefore, we could say there is approximately a 7.6\(\%\) alpha in the initial design.
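
For reference, the conversion between total return and CAGR in these figures is mechanical. A small helper (my own sketch, not from the post) makes such numbers easy to check:

```python
def cagr(total_return, years):
    """Compound annual growth rate implied by a fractional total return.

    e.g. total_return=15.5158 means a 1,551.58% total return.
    """
    return (1.0 + total_return) ** (1.0 / years) - 1.0

# A 100% total return over 2 years compounds to about 41.4% per year.
print(round(cagr(1.0, 2), 4))  # 0.4142
```

Against the post's numbers: a 1,551.58\(\%\) total return over roughly 17 years works out to about 17.6-18\(\%\) compounded, depending on the exact period length, consistent with the 17.64\(\%\) quoted.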

The structure of the program can allow more. First, the design is scalable; that was my first acid test. I upped the initial stake to \(\$\)10M, which makes the bet size jump to \(\$\)500,000 per initial bet. Due to the structure of the scheduled rebalance, these bets would catch most of their returns from common returns (about 70\(\%\)) and not from specific returns. But the strategy still generated alpha over and above its benchmark (SPY), and that was the point of interest.

I raised the number of stocks to be treated in order to reduce the bet size, knowing that doing so would reduce the average portfolio CAGR. The reason is simple: the stocks were ranked by expected performance level, and the more you take in, the more the lower-ranked stocks, with their lower expected CAGR, tend to pull down the overall average. This could be compensated elsewhere and could even help produce higher returns.

There is a slight idiosyncrasy in the original program which gave it a 1.04 average gross leverage. Its cost would have been about \(0.04 \times 0.04 = 0.0016 \) if we consider IB's leveraging fee, for instance. A negligible effect on the 17.64\(\%\) CAGR.

The Basic Equation

The equation illustrated above is all you can play with. However, when you break it down into its components, the only thing that matters in order to raise the overall CAGR of almost any stock trading strategy is \(\mathbf{H}\), the behavior of the trading strategy itself. It is how you handle and manage the ongoing inventory over time.

The price matrix \(\mathbf{P}\) is the same for everyone. In this case, the original stock universe was Q500US. To get a better selection, I jumped to Q1500US since my intention was to raise the number of stocks to 100 and over. The \(\Delta \mathbf{P}\) is simply the price variation from period to period and, therefore, is also the same for everyone. The differences will come from the holding matrix \(\mathbf{H}\), which is the game at play. If the inventory is at zero, there is no money coming in, nor is there any money going out. To win, you have to play, and that is also where you risk losing.
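
The bookkeeping in that first equation can be made concrete with a toy numpy example (hypothetical holdings and price changes, purely illustrative): the profit is the element-wise product of the holding matrix \(\mathbf{H}\) and the price-change matrix \(\Delta \mathbf{P}\), summed over all periods and assets.

```python
import numpy as np

# Rows are periods, columns are assets (toy numbers).
H = np.array([[10.0, 0.0],     # shares held entering each period
              [10.0, 5.0],
              [ 0.0, 5.0]])
dP = np.array([[1.00, -0.50],  # per-share price change over each period
               [0.50,  1.00],
               [0.25,  0.00]])

F0 = 1000.0                    # initial capital
profit = (H * dP).sum()        # the sum of H . delta-P terms
F = F0 + profit

print(profit, F)  # 20.0 1020.0
```

Note the zero entries in \(\mathbf{H}\) contribute nothing, which is the point above: with no inventory, no money comes in or goes out.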

The first chart I presented had an overall 2,405.92\(\%\) total return on a \(\$\)10M initial stake with 40 stocks. That resulted in overall profits of about \(\$\)240M over the 16+ years, already more than 100 times the profits of the original trading script, most of it coming from the 100-fold increase in initial capital, demonstrating the program's scalability.

By accepting a marginal increase in volatility and drawdown, I raised the bar to 3,803.87\(\%\) total return which is a 24.26\(\%\) CAGR equivalent for the period.

@Joakim's Version of The Program

I next switched to Joakim's version of the program because it accentuated an idiosyncrasy of the original program and pushed on involuntary leveraging. But I did not see it as a detriment. The more I studied the impact, the more I started to appreciate this "feature" even though it was not intended. If a program anomaly can become persistent, dependable, and can generate money, it might stop being considered a "potential bug" and be viewed as an "added feature".

Using Joakim's program version as a base, I pushed on some of the buttons: increased the strategy's stock count again, changed the trading dates and timing, made the strategy more responsive to market swings, and tried to capture more trades and a higher average net profit per trade. The impact was to raise the overall total return to 10,126.6\(\%\). On the same \(\$\)10M, this translates to a 31.74\(\%\) CAGR with total profits in excess of \(\$\)1B. It is a far cry from the original strategy.

I kept on improving the design by adding new features and more stocks to be traded, with the result that the total return jumped to 13,138.85\(\%\), which is a 33.8\(\%\) CAGR over the 16+ years. To achieve those results, I also put the "Financials" back in play, since there was no way of knowing in 2003 that the financial crisis would unfold and be as bad as it was.

But you could do even more by accepting a little more leverage, as long as the strategy is able to pay for it all and remain consistent in its general behavior, thereby exploiting the anomaly found in Joakim's and the original strategy. Here you could really push, and not by that much either. A leverage of 1.4 was sufficient to bring the total return to 32,143.38\(\%\) with a total profit of \(\$\)3.2B and a CAGR of 41.1\(\%\). Quantopian once said they were ready to leverage some strategies up to 6 times, so 1.4 might not look that high, especially if the trading strategy can afford it.

You could do even more by accepting a leverage of 1.5, raising the total return to 50,921.98\(\%\) with a CAGR equivalent of 45.0\(\%\). In total profit that would be \(\$\)5.09B.

At 1.5 leverage, you would be charged on the 0.5 excess, and at IB's rate that would give \(0.5 \times 0.04 = 0.02\), reducing the 45.0\(\%\) CAGR to 43.0\(\%\). Still, that costs some \(\$\)1.056B over the period and leaves some \(\$\)4.045B as net total profit in the account.
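
That financing arithmetic generalizes; here is a sketch (assuming, as above, a flat 4% IB-style rate charged only on the excess over 1.0x):

```python
def leverage_drag(leverage, borrow_rate=0.04):
    """Annual financing drag: only the borrowed excess over 1.0x is charged."""
    return max(leverage - 1.0, 0.0) * borrow_rate

def net_cagr(gross_cagr, leverage, borrow_rate=0.04):
    """Rough net CAGR after subtracting the financing drag."""
    return gross_cagr - leverage_drag(leverage, borrow_rate)

print(leverage_drag(1.5))             # 0.02
print(round(net_cagr(0.45, 1.5), 4))  # 0.43
```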

A prior version of the one above tried \(\$\)20M as the initial stake and achieved a 43,795.04\(\%\) total return. It could have been jacked up higher, but that was not my main interest at the time. Nonetheless, in CAGR terms that was 43.79\(\%\), and in total profits, \(\$\)8.76B.

I think the strategy could be improved even further, but I have not tried. My next steps would be to scale it down now that I know how far it can go and install better protective measures which would tend to increase overall performance while reducing drawdowns.

As part of my acid tests, I want to know how far a trading strategy can go. Therefore, I push on the strategy's pressure points in the first equation, knowing that the inventory management procedures are where all the efforts should be concentrated. Once you know what your trading strategy can do, it is easy to scale it down to whatever level you feel more comfortable with, whether by using lower leverage, reducing the overall CAGR, or installing more downside protection. It becomes a matter of choice.

But once you have pushed the limits of your trading strategy, you at least know that those limits can be reached, and even if you scale down a bit, you also know that your trading strategy could scale back up if you desired. It would not come as a surprise; you would have planned for higher performance and would know how to deliver it if need be.

It is all so simple; it is all in the first equation above. It is by injecting new equations into the mix that you can transform your trading strategy. In this case, the above equation was changed to: $$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H}\cdot (1+\hat g(t)) \cdot \Delta \mathbf{P}) = F_0 \cdot (1 +g(t) - exp_t(e))^t$$ where \(\hat g(t)\) is partly the result of a collection of functions of your own design.

I would usually have shown screenshots as corroborating evidence for the numbers presented above. But it appears that such charts are not desired in this forum.

To me, that turns all the numbers above into claims, unsubstantiated claims at best, since no evidence is presented to support them. They become just opinions. Nonetheless, I do have those screenshots on my machine, but they will stay private for the moment.

Of note, the explanations for these equations, which can be considered innovative for what they can do even though they have been around for quite a while, can be found all over my website.

I changed a single number in the last program, the one that generated the 50,921.98\(\%\) total return with a 45.0\(\%\) CAGR equivalent. It resulted in a total return of 76,849.31\(\%\), a 48.8\(\%\) CAGR, and \(\$\)7.68B in total profits. I would still need to deduct the leveraging fees, which will exceed \(\$\)1B compared to the previous scenario.

For a single digit, it increased the total outcome from \(\$\)5.09B to \(\$\)7.68B. Now that is a digit that is worthwhile... Sorry, no screenshot to display as some kind of evidence that those numbers were actually reached. But, they still happened.

Guy

In broad terms, the above tells us that you increased the universe from 500 to 1500 stocks and that leverage went to 1.4. But little else.

I can understand that the cost of leverage would not be high at current interest rates and that such a level of leverage is indeed modest. As to Q's 6x leverage, that of course was to be used on their "zero everything" strategy. I have never understood Quantopian's approach, to be honest, and I refuse to believe that such neutrality would hold under all conditions. I suspect it would get its comeuppance at some stage, as do most strategies. But what do I know.

The real problem people have with your posts is not the screenshots themselves but the lack of detail they contain as to how the results were achieved. And your above post does the same (to some extent!)

I now understand the increased portfolio size and the leverage, for which many thanks. But of course most of the detail is still hidden. And it is the detail people would like you to share, which I also would like you to share. If you would be willing, of course.

You mention a starting capital of $20m, but I'm not sure that a huge starting capital is so relevant for stocks. With futures I can readily understand it: the contract sizes are enormous for a humble retail investor such as you or I.

But with stocks, even 1500 of them (whittled down to 100, or 50, or whatever), it's surely a different matter. Stocks are not all the price of Berkshire Hathaway, and the lot sizes are not huge, so surely you could trade small capital up to dizzy levels?

If you would like to share more details I would be happy to put up some capital to try and shoot the lights out if I can make sense of it. Perhaps you might prefer to discuss this in private.

Great post, @Chris Cain. Thank you for sharing your original algo! Also, kudos to everyone pitching in with comments and improvements. I've never felt quant trading to be a zero-sum game. Publishing, peer review, and building on previous ideas has worked in the sciences and serves as a model for moving from 'quant arts' to 'quant science'. A rising tide can raise all ships.

Anyway, in that spirit, here is a version of the original algo with several changes. The goals were 1) to separate the data from the logic, 2) to separate the security selection logic from order execution, and 3) to use order_optimal_portfolio. This was in an effort to make modifications easier, add flexibility, and allow for using the various optimize constraints.

While the logic is faithful to the original, the execution differs a bit. The positions are the same; however, the quantity of shares purchased varies by a few at times, which seems to account for the performance numbers not matching exactly.
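
For anyone porting the momentum factor out of pipeline: the difference of the two `Returns(...).log1p()` terms telescopes to the log return from `lookback+skip` bars ago up to `skip` bars ago. A pandas equivalent (a hypothetical helper mirroring the pipeline logic, not Quantopian API) looks like:

```python
import numpy as np
import pandas as pd

def skip_momentum(prices, lookback=126, skip=10):
    """Momentum over `lookback` bars, skipping the most recent `skip` bars.

    Mirrors returns_overall.log1p() - returns_recent.log1p() from the
    pipeline: the difference telescopes to log(P[t-skip] / P[t-lookback-skip]).
    """
    r_overall = prices.iloc[-1] / prices.iloc[-(lookback + skip)] - 1.0
    r_recent = prices.iloc[-1] / prices.iloc[-skip] - 1.0
    return np.log1p(r_overall) - np.log1p(r_recent)

# On a steady 1%-per-bar series, the factor is exactly 126 * log(1.01).
prices = pd.Series(1.01 ** np.arange(200))
print(bool(np.isclose(skip_momentum(prices), 126 * np.log(1.01))))  # True
```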

Clone Algorithm
# import the base algo class. Not entirely needed but a good practice
import quantopian.algorithm as algo

# import things needed to run pipeline
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage as SMA
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution
import quantopian.optimize as opt
 
# import numpy and pandas because they rock
import numpy as np 
import pandas as pd


def initialize(context):
    # Set algo 'constants'...
    
    # List of bond ETFs when market is down. Can be more than one.
    context.BONDS = [symbol('IEF')]

    # Set target number of securities to hold and top ROE qty to filter
    context.TARGET_SECURITIES = 20
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter
    context.SPY = symbol('SPY')
    context.TF_LOOKBACK = 200
    context.TF_CURRENT_LOOKBACK = 50

    # This is for determining momentum
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback
    context.MOMENTUM_SKIP_DAYS = 10
        
    # Initialize any other variables before being used
    context.stock_weights = pd.Series()
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and use the default
    set_slippage(slippage.FixedSlippage(spread = 0.0))
    
    # Create and attach pipeline for fetching all data
    algo.attach_pipeline(make_pipeline(context), 'pipeline')    
    
    # Schedule functions
    # Separate the stock selection from the execution for flexibility
    schedule_function(
        select_stocks_and_set_weights, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    
 
def make_pipeline(context):
    # Base universe set to the Q500US
    universe = Q500US()
    
    # Fetch SPY returns for our trend following condition
    # Use SimpleMovingAverage (SMA) to broadcast spy averages to all assets (sort of a hack)
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close], 
                          window_length=context.TF_LOOKBACK)[context.SPY]
    
    spy_ma50 = SMA(inputs=[spy_ma50_slice], window_length=1)
    spy_ma200 = SMA(inputs=[spy_ma200_slice], window_length=1)
    trend_up = spy_ma50 > spy_ma200
    
    # Get the fundamentals we are using. 
    # Rank relative to others in the base universe (not entire universe)
    # Rank allows for convenient way to scale values with different ranges
    cash_return = ms.cash_return.latest.rank() #(mask=universe)
    fcf_yield = ms.fcf_yield.latest.rank() #(mask=universe)
    roic = ms.roic.latest.rank() #(mask=universe)
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(ascending=True) #, mask=universe)

    # Create value and quality 'scores'
    value = (cash_return + fcf_yield).rank() #(mask=universe)
    quality = roic + ltd_to_eq + value
    
    # Create a 'momentum' factor. Could also have been done with a custom factor.
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)
    momentum = returns_overall.log1p() - returns_recent.log1p()
    
    # Filters for top quality and momentum to use in our selection criteria
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)
    
    # Only return values we will use in our selection criteria
    pipe = Pipeline(columns={
                        'trend_up': trend_up,
                        'top_quality_momentum': top_quality_momentum,
                        },
                    screen=universe
                   )
    return pipe

def select_stocks_and_set_weights(context, data):
    """
    Select the stocks to hold based upon data fetched in pipeline.
    Then determine weight for stocks.
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested
    Sets context.stock_weights and context.bond_weights used in trade function
    """
    # Get pipeline output and select stocks
    df = algo.pipeline_output('pipeline')
    current_holdings = context.portfolio.positions
    
    # Define our rule to open/hold positions
    # top momentum and don't open in a downturn but, if held, then keep
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'
    stocks_to_hold = df.query(rule).index
    
    # Set desired stock weights 
    # Equally weight
    stock_weight = 1.0 / context.TARGET_SECURITIES
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)
    
    # Set desired bond weight
    # Open bond position to fill unused portfolio balance
    # But always have at least 1 'share' of bonds
    bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)
        
def trade(context, data):
    """
    Execute trades using optimize.
    Expects securities (stocks and bonds) with weights to be in context.weights
    """
    # Create a single series from our stock and bond weights
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint
    order_optimal_portfolio(
        objective = target_weights,
        constraints = []
        )
    
    # Record our weights for insight into stock/bond mix and impact of trend following
    record(stocks=context.stock_weights.sum(),
           bonds=context.bond_weights.sum()
          )

Guy

I took a look at your paper here: https://alphapowertrading.com/papers/AlphaPowerImplementation.pdf

To those of us who are not privy to the underlying methods, your formulas provide no real information. You mention trend following, re-investment of profits, and covered calls.

And your formulas state that compounding your strategies will lead to outsized profits.

But the problem people here find with your posts is that we all know the benefits of compounding. What we do not know is how you achieve that compounding.

Much of your website repeats this basic message. But nowhere do you state how you achieve that compounding, except to mention "boosters and enhancers", which are never explicitly explained.

I think you would earn much kudos here by providing a precise declaration of exactly what these boosters and enhancers are. And if you were willing to provide the code here, that would be a great step forward.

Thanks and regards

@Dan - Thank You very much for posting the familiar version of the code.

I have a question though: in the record pane of your algo, it seems like the leverage is always 1.05. Please check stock weight + bond weight; it seems a bond weight of 0.05 is always in the portfolio. Is it intentional? If not, how can we make the leverage 1.00?

Thank You in advance for your help.

@ Nadeem - You could make the following change to reduce the leverage from 1.04 to 1.00.
Comment out the first line and replace it with the second line.

bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)
bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)
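
To see what the one-line change does, the weight arithmetic can be checked outside the algo (assuming the 20-stock target, so each stock weight is 0.05):

```python
TARGET_SECURITIES = 20
stock_weight = 1.0 / TARGET_SECURITIES  # 0.05 per stock
n_bonds = 1                             # one bond ETF (IEF)

def bond_weight_original(stock_weight_sum):
    # always holds at least one 'share' of bonds -> can push leverage to 1.05
    return max(1.0 - stock_weight_sum, stock_weight) / n_bonds

def bond_weight_fixed(stock_weight_sum):
    # lets the bond weight go to zero -> target leverage capped at 1.0
    return max(1.0 - stock_weight_sum, 0.0) / n_bonds

# Fully invested in 20 stocks: stock weights sum to 1.0.
print(1.0 + bond_weight_original(1.0))  # 1.05  (the leverage Nadeem observed)
print(1.0 + bond_weight_fixed(1.0))     # 1.0
```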

@Nadeem Yes, @Steve Jost's code above will reduce the 'target leverage' to 1.0. It will let the bond weight go to zero. In the original algo, the bond weight was always a minimum of .05 (specifically 1.0 / context.TARGET_SECURITIES) and the leverage would go to 1.05. This was a 'feature' of the original algo so I left it in. It actually helps the sharpe ratio and returns.

Even with this change the leverage spikes to 1.05 in 2006. This is because the weights are set assuming all the orders fill. During that time, the algo tries to sell BR but cannot. The algo essentially 'over buys' and the leverage goes above 1.0. The way to keep that from happening is to place all the sell orders. Cancel any open orders after a set time. Then place buys equal to the amount of cash left. Basically don't buy until all the sell orders fill or are canceled.
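
The sell-first staging described above can be sketched as pure functions (hypothetical helpers, not Quantopian API): split the rebalance into a sell phase and a buy phase, then size the buys only from the cash actually freed up.

```python
def staged_rebalance(current, target):
    """Split a rebalance (dicts of asset -> dollar value) into two phases.

    Phase 1: place all the sells. Phase 2 (after fills/cancels): place buys,
    so an unfillable sell (like BR above) can no longer push leverage past 1.0.
    """
    sells = {a: v - target.get(a, 0.0) for a, v in current.items()
             if target.get(a, 0.0) < v}
    buys = {a: v - current.get(a, 0.0) for a, v in target.items()
            if v > current.get(a, 0.0)}
    return sells, buys

def size_buys(buys, cash_available):
    """Scale the buy orders down to the cash on hand after the sell phase."""
    wanted = sum(buys.values())
    scale = min(1.0, cash_available / wanted) if wanted else 1.0
    return {a: v * scale for a, v in buys.items()}

# Rotating $100 out of BR into IEF; if the BR sell never fills, no cash is
# realized and the IEF buy is sized to zero instead of over-buying.
sells, buys = staged_rebalance({'BR': 100.0}, {'IEF': 100.0})
print(sells, size_buys(buys, 0.0))  # {'BR': 100.0} {'IEF': 0.0}
```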

Good catch.

@Dan,

You probably took as a template for the trading logic not the original @Chris Cain algo but somebody else's, together with its bugs, which create
a max of 21-23 positions and leverage of 1.05 - 1.1... (see attached notebook)

Not selling BR for 11 months in 2006-2007 is more likely an engine problem.

What is this for?

set_slippage(slippage.FixedSlippage(spread = 0.0))  

Try to do the same with the original @Chris Cain algo trading logic.

I solved more or less everything except BR in an 11-line trade().

PS. I will attach the backtest when Quantopian lets me do that.

Click to load notebook preview

Here are the results :

Click to load notebook preview

@vladimir

Those are some crazy coding skills. You have shrunk the original code from 90+ lines to 38 lines and yet improved the performance. Awesome work. Great to have you here and to learn from you.

I have one question for you. In the algo posted by Joakim 4 days ago with a 2811% return, what is the purpose of mask=universe in the mean_over_std factor? Won't there be a mismatch if we rank the other factors without a mask and rank mean_over_std with one? And why does it even matter if we have a mask in the pipeline screen?

I am asking because if we remove mask=universe from line 90, the results are hugely different. Please see the attached backtest. The results with the mask are somewhat similar to what Joakim had (even though a little different because of the leverage fix in the attached version).
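
Nadeem's intuition is right: `rank()` without a mask ranks against every asset in the database, not just the screened universe, so mixing masked and unmasked ranks puts the factors on different scales. A minimal pandas illustration (toy values, not the actual factor data):

```python
import pandas as pd

values = pd.Series({'A': 3.0, 'B': 2.0, 'C': 1.0, 'D': 0.5})
universe = ['A', 'B', 'C']           # 'D' is outside the screen

rank_unmasked = values.rank()        # D still influences the ranks
rank_masked = values[universe].rank()

print(rank_unmasked['A'], rank_masked['A'])  # 4.0 3.0
```

Combining a factor ranked over thousands of assets with one ranked over the 1500-stock universe skews the composite score, which is why removing mask=universe changes the results so much: the pipeline screen only filters the output rows, it does not change how rank() was computed.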

Clone Algorithm
# import the base algo class. Not entirely needed but a good practice
import quantopian.algorithm as algo

# import things needed to run pipeline
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage as SMA
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data.morningstar import Fundamentals as ms
from quantopian.pipeline.classifiers.morningstar import Sector

# import optimize for trade execution
import quantopian.optimize as opt
 
# import numpy and pandas because they rock
import numpy as np 
import pandas as pd

# NOTE (assumed): the Mean_Over_STD custom factor referenced below was omitted
# from this paste. A plausible definition -- mean of trailing ROIC over its
# standard deviation, i.e. an ROIC-stability score -- would be:
class Mean_Over_STD(CustomFactor):
    window_length = 252  # assumed one-year trailing window

    def compute(self, today, assets, out, values):
        out[:] = np.nanmean(values, axis=0) / np.nanstd(values, axis=0)


def initialize(context):
    # Set algo 'constants'...
    
    # List of bond ETFs when market is down. Can be more than one.
    context.BONDS = [symbol('IEF')]

    # Set target number of securities to hold and top ROE qty to filter
    context.TARGET_SECURITIES = 20
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter
    context.SPY = symbol('SPY')
    context.TF_LOOKBACK = 200
    context.TF_CURRENT_LOOKBACK = 50

    # This is for determining momentum
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback
    context.MOMENTUM_SKIP_DAYS = 10
        
    # Initialize any other variables before being used
    context.stock_weights = pd.Series()
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and use the default
    set_slippage(slippage.FixedSlippage(spread = 0.0))
    
    # Create and attach pipeline for fetching all data
    algo.attach_pipeline(make_pipeline(context), 'pipeline')    
    
    # Schedule functions
    # Separate the stock selection from the execution for flexibility
    schedule_function(
        select_stocks_and_set_weights, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=6), 
        time_rules.market_close(minutes=30)
    )
    
def make_pipeline(context):
    
    financials = Sector().eq(103)
    universe = Q1500US() & ~financials
    
    # Fetch SPY returns for our trend following condition
    # Use SimpleMovingAverage (SMA) to broadcast spy averages to all assets (sort of a hack)
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close], 
                          window_length=context.TF_LOOKBACK)[context.SPY]
    
    spy_ma50 = SMA(inputs=[spy_ma50_slice], window_length=1)
    spy_ma200 = SMA(inputs=[spy_ma200_slice], window_length=1)
    trend_up = spy_ma50 > spy_ma200
    
    # Get the fundamentals we are using. 
    # Rank relative to others in the base universe (not entire universe)
    # Rank allows for convenient way to scale values with different ranges
    cash_return = ms.cash_return.latest.rank() #(mask=universe)
    fcf_yield = ms.fcf_yield.latest.rank() #(mask=universe)
    roic = ms.roic.latest.rank() #(mask=universe)
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank() #, mask=universe)
    stable_roic = Mean_Over_STD(inputs=[ms.roic]).rank()

    # Create value and quality 'scores'
    value = (cash_return + fcf_yield).rank() #(mask=universe)
    quality = roic + ltd_to_eq + value + stable_roic
    
    # Create a 'momentum' factor. Could also have been done with a custom factor.
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)
    momentum = returns_overall.log1p() - returns_recent.log1p()
    
    # Filters for top quality and momentum to use in our selection criteria
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)
    
    # Only return values we will use in our selection criteria
    pipe = Pipeline(columns={
                        'trend_up': trend_up,
                        'top_quality_momentum': top_quality_momentum,
                        },
                    screen=universe
                   )
    return pipe

def select_stocks_and_set_weights(context, data):
    """
    Select the stocks to hold based upon data fetched in pipeline.
    Then determine weight for stocks.
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested
    Sets context.stock_weights and context.bond_weights used in trade function
    """
    # Get pipeline output and select stocks
    df = algo.pipeline_output('pipeline')
    current_holdings = context.portfolio.positions
    
    # Define our rule to open/hold positions:
    # top momentum, and don't open new positions in a downturn, but keep what's already held
    rule = 'top_quality_momentum & (trend_up | (index in @current_holdings))'
    stocks_to_hold = df.query(rule).index
    
    # Set desired stock weights 
    # Equally weight
    stock_weight = 1.0 / context.TARGET_SECURITIES
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)
    
    # Set desired bond weight
    # Open bond position to fill unused portfolio balance
    # (the commented alternative below would always keep at least one stock-weight's worth of bonds)
    # bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)
        
def trade(context, data):
    """
    Execute trades using optimize.
    Expects stocks and bonds with weights in context.stock_weights and context.bond_weights
    """
    # Create a single series from our stock and bond weights
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint
    order_optimal_portfolio(
        objective = target_weights,
        constraints = []
        )

    # Record our weights for insight into stock/bond mix and impact of trend following
    record(stocks=context.stock_weights.sum(),
           bonds=context.bond_weights.sum())
    record(lev=context.account.leverage)
    record(pos=len(context.portfolio.positions))
    
class Mean_Over_STD(CustomFactor):
    window_length = 1260
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

@Nadeem

rank() -> ranks the factor across all traded securities in the Quantopian database

rank(mask=Q500US()) -> ranks the factor only among the securities in Q500US()

Example:

In my code I changed only the ranking in make_pipeline(), adding mask=m to all the fundamental factors, and got some improvement in the metrics.


Thank you @Vladimir. I am still confused as to why it should matter when we already have a mask in the pipeline screen. We should still end up with only those stocks that are in Q500US, because of that screen. I can't get my head around it. Could you please help me understand?

Dan, very well said "A rising tide can raise all ships". I still remember one of your earlier quotes "Communities are built from collaboration, not competition".

It's an interesting strategy. However, it did not outperform the S&P 500 in the last two years. Do you think the alpha is gone?

@Vladimir - I agree with @Nadeem - mad coding skills. Glad you found a workaround to not being able to post backtests.

@Nadeem - Thanks for pursuing the issue of masks in the pipeline with @Vladimir.
Using rank(mask=universe) (or a sub-universe) turns out to be very important; otherwise ranks carried over from the larger universe can skew the factor weightings.
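The effect can be illustrated outside Quantopian with plain pandas (hypothetical data): ranking across a large universe and then filtering leaves rank gaps, whereas ranking within the sub-universe gives compact 1..N ranks. Since the composite score sums these ranks, the gaps skew the weighting:

```python
import pandas as pd

# Hypothetical factor values for five stocks; only A, B, C are in our sub-universe
factor = pd.Series({'A': 0.9, 'B': 0.2, 'C': 0.5, 'D': 0.7, 'E': 0.1})
in_universe = pd.Series({'A': True, 'B': True, 'C': True, 'D': False, 'E': False})

# Rank over everything, then filter (analogous to rank() without a mask)
rank_then_filter = factor.rank()[in_universe]

# Filter first, then rank (analogous to rank(mask=universe))
filter_then_rank = factor[in_universe].rank()

print(rank_then_filter.to_dict())  # {'A': 5.0, 'B': 2.0, 'C': 3.0} - gaps left by D and E
print(filter_then_rank.to_dict())  # {'A': 3.0, 'B': 1.0, 'C': 2.0} - compact ranks
```

A is "worth" 5 points in one scheme and 3 in the other, so adding several such ranks together produces different composite scores depending on where the ranking happened.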

Very interesting! Thanks for posting. For measuring "quality," it would be good to see how adding (i) positive insider buying activity and (ii) positive analyst ratings affect the results.

Do you guys think, that the alpha is gone?

Yes. Significant drop-off in alpha from 2014 onwards. I would think these factors have been discovered and arbitraged out of the market.

"Yes. Significant drop-off in alpha from 2014 onwards. I would think these factors have been discovered and arbitraged out of the market."

Rather reminds me of Keats' "Ode on Melancholy" and the concerns of the Romantic poets about the temporary nature of our world: the fleetingness of life, the impermanence of the flower, and all else in our temporal world.

At a time when many major hedge funds are struggling or closing their shutters, we may do well to dwell on impermanence.

Moved make_pipeline() to initialize(), removed unnecessary masks, changed the month-end days_offset to 7, and made some cosmetic changes.
Got some more improvement in the metrics.


The attached notebook is based on Vladimir's program version which used the optimizer for trade execution (order_optimal_portfolio).

It is hard to "force" the optimizer in the direction you want. It is a "black box" with a mind of its own. Nonetheless, by changing the structure and objectives of the program, one can push the strategy to higher levels. Some leverage and shorts have been used to reach that 34.1% CAGR. However, the strategy, at that level, can afford the extra leveraging fees.

Evidently, the strategy looked for more volatility and, as a consequence, suffered a higher max drawdown while keeping a relatively low beta. I have not improved on the protective measures as of yet. Currently, the trend definition is still the moving-average crossover thingy, which will alleviate the financial-crisis drawdown but will also whipsaw a lot more than desired or necessary.

A total return of 14,059% will turn a $10M initial capital into a $1.4B account.

Still more work to do.


@Guy. Not trying to criticize, but whatever you did with the strategy clearly didn't work. If you look closely, the highest returns were achieved in 2015, and at the end of 2019 it is still at the same level. In other words, the strategy didn't make any money after 2015 if you stayed invested. Clearly, the alpha has gone from the "objectives of the program".

In contrast, Vladimir's algo is consistently making money - an ever-increasing, upward-sloping curve.

@Nadeem, let me see. You are playing a money game and the final result is inconsequential? You like a smoother equity curve even if it is $1.2B lower over the trading interval. Well, to each his own, as they say. And as I have said, there is still work to be done, especially in the protective-measure department.

Vladimir does have a good trading strategy and I do admire his coding skills. They are a lot higher than mine.

Maybe you would prefer the following notebook. Who knows?


@Guy, I'm curious if you've allocated any real capital to any of your strategies, and if so, what the result has been in the live market?

@Guy Fleury

To my mind,

order_optimal_portfolio(opt.TargetWeights(wt), [opt.MaxGrossExposure(LEV)])  

does not produce any weights optimization by any criteria; it just executes the requested weights more accurately and constrains the target leverage.
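The constraint's behavior can be sketched in plain Python. This is a simplified stand-in for what opt.MaxGrossExposure enforces, not Quantopian's implementation: if the sum of absolute weights exceeds the cap, all weights are scaled down proportionally; otherwise they pass through unchanged.

```python
def cap_gross_exposure(weights: dict, max_gross: float = 1.0) -> dict:
    """Scale weights down proportionally when gross exposure exceeds the cap.
    A simplified sketch of MaxGrossExposure-style behavior (hypothetical helper)."""
    gross = sum(abs(w) for w in weights.values())
    if gross <= max_gross:
        return dict(weights)          # already within the cap: no change
    scale = max_gross / gross         # proportional shrink factor
    return {k: w * scale for k, w in weights.items()}

print(cap_gross_exposure({'AAPL': 0.8, 'TLT': 0.8}))  # {'AAPL': 0.5, 'TLT': 0.5}
```

Note there is no objective being maximized here: the "optimizer" just maps requested weights to feasible weights, which is Vladimir's point.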

When writing the code of my version of the strategy, I added the LEV parameter specifically for you.
Here are the results with only two changes in the setup: LEV = 2.0, initial capital = 10000.

It is not possible to trade this strategy with LEV > 1.0 in IRA accounts.
The results do not include margin expenses.


@Guy Fleury

Guy, I am sorry to say that from my point of view you continue to make the same mistakes you have always made. I simply do not see the point of your posts if you are not willing to share your code.

Or indeed to elucidate the mysterious "pressure points" you refer to in order to produce the equity curves you come up with. I am well aware I have made no contribution to this thread either, but I have at least taken the trouble to look through your website trying to find out exactly what your "alpha power" is based on.

Sadly it is more of the same - many formulae, many equity charts, and much obfuscation.

It is of course entirely your prerogative to present your trading systems in this way. But to my way of thinking the exercise is entirely pointless.

I am genuinely interested in your point of view but by refusing to provide details, what could have been an interesting contribution to the obscure and arcane arts you portray is rendered entirely without meaning.

Once again may I respectfully request that you provide the code behind your alterations to this system kindly provided by and improved upon by others.

And I repeat the words "respectfully" and "request" lest we get into the same sort of dispute we have so often fallen into in the past.

@Vladimir
In various posts above, @Guy quotes leverage of 1.4 to achieve one of his more impressive equity curves. He also states that he used the Q1500 universe rather than the Q500. I have tried the Q1500 as well as the Q3000, and the difference is underwhelming.

Here is a "pressure point": use the Q3000 and reduce the number of stocks invested in to 10. Probably very unstable, but I have not bothered to run it over different re-balance dates.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 10.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        # positions keys are Equity objects, so compare x directly to the bond asset
        # (the original x.sid == context.bonds compared an int to an Equity and never matched)
        if x == context.bonds:
            pass
        elif x not in top_n_by_momentum.index:
            order_target_percent(x, 0)
            print('GETTING OUT OF', x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Here is another pressure point for you... and hey, I'm going to tell you what it is! Q3000 and invest in 5 stocks. See, we can all do it, eh? And of course you can make it even better with a little extra secret sauce... like reducing the max DD and vol... but that is for another day. Don't want to over-excite myself.

If you take a look at the code you will see I have made a total mess of the parameters. But that is the point, isn't it? If you just present pretty pictures, you have absolutely no idea how they were created. And no interest either.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=8), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 20 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        # positions keys are Equity objects, so compare x directly to the bond asset
        # (the original x.sid == context.bonds compared an int to an Equity and never matched)
        if x == context.bonds:
            pass
        elif x not in top_n_by_momentum.index:
            order_target_percent(x, 0)
            print('GETTING OUT OF', x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

Hey All,

Thank you so much for the contributions to this Algo. This was what I had in mind when I shared this strategy.

Special thanks to those that made the code more efficient (Jamie, Joakim, Dan, Vladimir)

As you can see, many versions of this idea work well. On a historical basis, we can say this tweak or that tweak worked "better", but we have no way of knowing which tweak will perform best in the future. As such, I look for strategies that are simple, explainable, robust, and show good performance across a wide variety of parameter changes. To me, this strategy does that.

I’ll address a few comments I read in this thread.

Some have said that perhaps the alpha has gone away because the last couple of years have had somewhat lower performance. While this is always a possibility, I certainly don't think it is the case. First of all, the things we are using here (value, quality, momentum, trend following) have been around for decades. All of these were in use well before this backtest even started. Factors go in and out of favor (this is especially true of Value's bad performance over the last 5 years). To me, that certainly doesn't mean the alpha is gone. Those who held that opinion in the past (such as during Value's underperformance in the late 1990s) were very mistaken.

I view these factors as rooted in human behavior (too long of an explanation to get into now). I am of the opinion that human behavior will not change.

Some have questioned the validity of the trend following filter. The original algo used a ROC over the last 6 months. Joakim’s versions used the 50 and 200-day moving averages. Both of these techniques essentially do the same thing (though at slightly different speeds). I think both are logical and will provide value in the future.

As far as the validity of this rule in general, the question becomes: do you believe in trend following (time-series momentum) or not? I certainly do. There are 100+ year backtests that demonstrate its value (see AQR), not to mention decades of real-world practitioner results.

Over the last 9 years, we have only had equity pullbacks in the 10-20% range. In these types of shallow pullbacks, our trend-following regime filter will be a drag on performance: you get out of the market, then have to buy back in at a higher price. The question then becomes: do you think these shallow pullbacks are the new normal, or will we eventually see a 30-50% pullback, which has happened many times in history? In a 30-50% pullback, our trend-following regime filter will add a ton of value (as it did in 2008).
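The regime filter's behavior in the two scenarios can be sketched with plain pandas on a synthetic price path (hypothetical data; 126 trading days approximates the 6-month lookback used in the original rules):

```python
import numpy as np
import pandas as pd

def regime_filter(spy_close: pd.Series, lookback: int = 126) -> pd.Series:
    """True (risk-on) when the trailing `lookback`-day total return is positive.
    A sketch of the original ROC-style rule, not the exact algo code."""
    return spy_close.pct_change(lookback) > 0

# Synthetic path: a year of steady gains followed by a deep six-month decline
idx = pd.date_range('2020-01-01', periods=378, freq='B')
prices = pd.Series(np.r_[np.linspace(100, 130, 252),
                         np.linspace(130, 80, 126)], index=idx)

risk_on = regime_filter(prices)
print(bool(risk_on.iloc[200]))  # True  - filter keeps us invested in the uptrend
print(bool(risk_on.iloc[-1]))   # False - filter has us out after the deep decline
```

In a shallow 10-20% dip the same filter can flip off near the bottom and back on at higher prices, which is the whipsaw drag described above; in a 30-50% decline it sidesteps most of the damage.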

You can always mess with the different speeds of the lookback for trend following. Academic research has shown that 3-12 month lookbacks work. Instead of trying to pick the optimal lookback, I am in favor of diversifying amongst several lookbacks (this is not implemented in the current algo).
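One way to diversify across lookbacks, as suggested above, is to rank momentum at each horizon and average the ranks. This is a hypothetical helper (the function name, lookbacks, and test data are illustrative, not from the posted algo); it keeps the 10-day skip from the original rules:

```python
import numpy as np
import pandas as pd

def blended_momentum(prices: pd.DataFrame,
                     lookbacks=(63, 126, 252), skip=10) -> pd.Series:
    """Average skip-adjusted momentum ranks over several lookbacks
    instead of betting on a single horizon (illustrative sketch)."""
    scores = []
    for lb in lookbacks:
        past = prices.shift(skip)                    # skip the last `skip` days
        mom = past.iloc[-1] / past.iloc[-1 - lb] - 1.0
        scores.append(mom.rank())                    # ranks make horizons comparable
    return sum(scores) / len(lookbacks)

# Deterministic toy data: one rising, one flat, one falling stock
prices = pd.DataFrame({'UP': np.linspace(50, 150, 300),
                       'FLAT': np.full(300, 100.0),
                       'DOWN': np.linspace(150, 50, 300)})
print(blended_momentum(prices).idxmax())  # UP
```

Averaging ranks rather than raw returns avoids the longest lookback dominating the blend simply because its returns are larger in magnitude.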

Anyway, those are some of my thoughts. Thank you all for checking out my algo, I am happy with the great response from the wonderful Quantopian community!

Chris Cain, CMT

Employing rotation into a mere 5 stocks monthly is somewhat more dependent on the roll date - as I believed would be the case. Four different roll dates produce CAGRs of between 19% and 29% when you don't employ any leverage. The code still needs a fix for the occasional unintended leverage.

But for what it is worth, here is one such backtest where I have not played fast and loose with the parameters. Pretty impressive.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=0), 
        time_rules.market_close(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=0), 
        time_rules.market_close(minutes=20)
    )
    
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 5 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        # positions keys are Equity objects, so compare x directly to the bond asset
        # (the original x.sid == context.bonds compared an int to an Equity and never matched)
        if x == context.bonds:
            pass
        elif x not in top_n_by_momentum.index:
            order_target_percent(x, 0)
            print('GETTING OUT OF', x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x)

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    order_target_percent(context.bonds , percent_bonds_to_buy)

@Chris, very well put. I 100% agree with your comments.

Anthony,

in your last 3 posts you probably used somebody's broken algo with bugs and added your own.
By Christopher Cain's definition, this algo is long-only with no leverage.
You set context.Target_securities_to_buy = 5.0.
Initial capital: $100,000.
Just check the positions on the first day of your algo's trading, 2003-05-19.
It had 20 stock positions of $20,000 each, $400,000 in total, and a short bond position of -$315,000.
That is leverage of more than 7, and sometimes it reaches more than 13.

Did the algorithm realize your intentions?
Is it appropriate to use the results of a broken algo in an argument?

I have tested your parameter setting in my algo.

QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('IEF', 'TLT'); D = 8;
MOM = 126; EXCL = 10; N_Q = 50; N = 5; MA_F = 20; MA_S = 200; LEV = 1.0;

The results are not perfect, but for somebody they may be acceptable.
Another proof of Christopher Cain's concept.


Hi Vladimir thanks for the comment.

The first couple of tests were simply to show the futility of posting charts without the code. I deliberately mucked up the parameters.

The third test used Jamie Corriston's code (I believe?) and none of my own. I simply set the number of stocks to 5 in both relevant lines. Leverage is 1 most of the time, with occasional spikes to 2 that I have not investigated.

I am going to look much further if I decide to trade this thing, and will now download/clone your version of Jamie's code as amended.

With many thanks to you.

I was originally attracted to the idea since I had drafted a monthly momentum re-balance system here on the website a couple of years ago. Which attracted much attention until Guy Fleury started commenting and the whole thread then went off the rails.

This system is much better - I had failed to add any fundamentals filter which certainly helps with the variability over different rebalance dates.

Actually Vladimir, I don't think you posted a version of the code? Whose or which version are you using?

Employing rotation into a mere 5 stocks monthly is somewhat more dependent on roll date - as i believed would be the case. 4 different roll dates produce CAGRs of between 19 and 29%

I know you're joking around/making a point, but my recommendation (and I believe Quantopian's guidance as well) is to eliminate day-of-the-month overfit noise by putting a 20-day SMA on the signal and trading every day instead of monthly. Typically this gives your backtest a lot more data points, allowing you to hold more positions at once without diluting your alpha signal (just the hits, no deep cuts, so to speak) or increasing turnover, and it is likely to improve your Sharpe ratio (via lower volatility and slippage). Though this algo trades on such low-frequency data that it's not likely to benefit much from day-of-the-month diversification, I have used this technique to great effect in the past.

Chris, thanks for posting this. Has anyone been able to convert this algorithm to work with Alpaca? If not, can someone point me in the right direction for how to get started (with converting this or any other Quantopian algorithm)?

Hey Viridian,

"I know you're joking around/making a point, but my recommendation (and I believe Quantopian's guidance as well) is to eliminate day-of-the-month overfit noise by putting a 20-day SMA on the signal and trade every day instead of monthly."

Can you expand on this? How would the logic work to implement this technique with this Algo?

@Viridian Hawk
Thank you for that. It certainly sounds an excellent idea to average the signal.

@Mike Burke
I'm not sure there would be much point averaging the fundamental factors, unless they are ratios to shareprice, since their frequency is so low. But you could easily average the momentum factor which is the second leg of this algo's filter.

I'm a bit rusty with the Q API, but I believe all one has to do is add the momentum provisions as a custom factor. Custom, since the built-in momentum factor does not adjust for excluding the last ten mean-reverting days.

Then add it to the pipeline, find the top x, and use it as a filter as per the existing algo.

When I get around to it I will post an example.
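In the meantime, here is a rough sketch, in plain numpy, of what such a custom factor would compute per stock: the return over the lookback window ending skip days ago. It mirrors the algo's prices[:-skip].pct_change(lookback).iloc[-1] line; the function name is my own, not a Quantopian API.

```python
import numpy as np

def skip_momentum(closes, lookback=126, skip=10):
    """Return over `lookback` days, ending `skip` days ago.

    closes: 1-D sequence of daily closes, most recent last.
    Equivalent to prices[:-skip].pct_change(lookback).iloc[-1].
    """
    closes = np.asarray(closes, dtype=float)
    end = closes[-(skip + 1)]               # close `skip` days ago
    start = closes[-(skip + lookback + 1)]  # close lookback+skip days ago
    return end / start - 1.0
```

Inside a CustomFactor's compute() you would apply the same arithmetic column-wise to the price matrix.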

I'm still puzzling over the occasional spike in leverage - or rather how to correct it. I do not want to use Optimize since I don't want equal weighting, so the allocation needs to differ slightly from the current algo: you should only allocate a percentage of unused capital, which is not the way it is currently done.

@Mike Burke -- Sometimes it's as easy as putting a SimpleMovingAverage on the pipeline output, but for this algorithm you'd have to refactor the execution aspect of it. Basically you'll want to start by creating a dictionary of target weights (including the bond allocation) based on the current day's pipeline output and bull-market crossover. But instead of ordering those weights, you'd add them to a context list, e.g. context.daily_weights.append(today_target_weights), then prevent overflow by popping off any entries once you're over 20, e.g. if len(context.daily_weights) > 20: context.daily_weights.pop(0). Then you just combine the weights and normalize them. That'll give you the 20-day average portfolio, which you can then order (most easily via the optimizer with TargetWeights(combined_weights)).
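A framework-free sketch of that rolling-average technique (class name and structure are mine, not Quantopian API): keep the last 20 days of target-weight dictionaries and serve their normalized mean.

```python
from collections import deque

class WeightAverager:
    """Keep the last `window` days of target weights and serve their
    normalized average (the 20-day portfolio-averaging idea)."""
    def __init__(self, window=20):
        self.history = deque(maxlen=window)  # old entries auto-pop

    def update(self, today_weights):
        """today_weights: dict mapping asset -> target weight.
        Returns the combined, normalized average portfolio."""
        self.history.append(dict(today_weights))
        combined = {}
        for day in self.history:
            for asset, w in day.items():
                combined[asset] = combined.get(asset, 0.0) + w
        total = sum(combined.values())
        # Normalize so the averaged weights sum to 1 (fully invested).
        return {a: w / total for a, w in combined.items()} if total else {}
```

Each day you would pass the freshly computed target weights (stocks plus the bond allocation) to update() and order the dictionary it returns.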

@Viridian Hawk.
Yes, the portfolio management aspect is the key. Nice solution and simple. You just add the new stocks at a weight of 1/20th and existing stocks in the portfolio at whatever percentage of equity they have reached and then normalize.

This is what you should be doing even if you do not intend to average the signal or trade daily.

I find it very difficult to analyse the output on Q. Loathe it in fact. But I suppose what is happening is that for most of the time enough big hitting stocks drop out to make room for new entrants. And then sometimes they don't and you get huge leverage because you have failed to normalize the allocations.

At least with your suggestion new entrants get a fair crack of the whip and strongly trending stocks still retain an overweight position.

I was stupidly thinking of just dividing the un-allocated capital amongst the new entrants at a roll date, but I like your solution better.

@Viridian Hawk.
Another thing I have been pondering is the running of this strategy now that you can no longer trade through Quantopian. If you were willing to take the risk of monthly allocation it would be no effort to run it manually, although whether you would take your signals from Quantopian, buy your own data, or look for online screeners I am not too sure. Perhaps Morningstar offers a free or cheap screener on the fundamentals.

If you wanted to run it daily, automation would be the better option and I suppose you could convert the algo to run on Quantconnect.

I suppose there must be other solutions but the last thing I would want is to have to run the system on my own server.

What do you do?

@Viridian Hawk.

I have become so used to designing my own systems on my own software and it is so very much better to be able to analyse a spreadsheet of your results which contains all the prices, all the signals, all the trades and so forth. You can turn them inside out and upside down and really get to the bottom of why the system is doing what it is doing.

In that respect I find Quantopian so very difficult - I can never grasp the full picture clearly enough.

I suppose logging is one option, although it is restricted. I suppose running in debug is another although so slow and tedious.

I understand the need to protect their data suppliers but for me at least it does make life difficult.

I suppose the research environment may be a better option since I think (?) you can use pipeline there now.

How best do you analyse your systems on Quantopian?

Incidentally, for those who insist on using leverage by design, it is worth considering leveraging the bond portfolio using futures rather than IEF. I did a great deal of work a while ago on the all weather portfolio concept and it might be worth looking at replacing IEF with the relevant future on US Government bonds. I have no idea yet whether Quantopian allows you to mix futures and equities within one system, but by way of example you could allocate 90% of your cash to stocks and the equivalent of 100% of your cash to bonds. Or whatever.

tenquant.io looks to be a promising source of high-quality free fundamental data. In theory it is faster than Morningstar, which can have up to a three-day delay, whereas tenquant.io claims to scrape financial data as soon as it goes public. As far as automation goes, I've been using Alpaca. I couldn't get their version of zipline to run on my computer, so I just rolled my own barebones trading framework using the REST API, which was pretty simple. So maybe somebody with more experience with Alpaca's version of zipline (which I think is called pylivetrader) can chime in on whether there's any incompatibility that would stop this algo from running, but my impression is that it shouldn't be too hard to get it to work.
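For anyone wanting to roll their own bare-bones framework, a sketch of order submission against Alpaca's paper-trading REST endpoint might look like this. Treat the URL, payload fields, and header names as assumptions to verify against Alpaca's current documentation; submit_order performs a live HTTP call and is shown for illustration only.

```python
import json
import urllib.request

# Paper-trading endpoint (verify against Alpaca's docs before use).
ALPACA_BASE = "https://paper-api.alpaca.markets"

def build_order(symbol, qty, side):
    """Assemble a simple market-order payload (day order)."""
    return {
        "symbol": symbol,
        "qty": str(qty),
        "side": side,  # "buy" or "sell"
        "type": "market",
        "time_in_force": "day",
    }

def submit_order(payload, key_id, secret):
    """POST the order to /v2/orders with Alpaca's auth headers."""
    req = urllib.request.Request(
        ALPACA_BASE + "/v2/orders",
        data=json.dumps(payload).encode(),
        headers={
            "APCA-API-KEY-ID": key_id,
            "APCA-API-SECRET-KEY": secret,
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)  # network call
```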

Thank you, most useful.

I have by no means finished my work on this excellent algo, but as an interim report I have made progress reducing leverage without using "optimise". Certain stocks were repeatedly not getting sold at or around the close, so I moved the stock transactions to the open. By the time the bond trades happened at the close, in tests so far, all stock sales were getting processed, and hence the allocations were more accurate.

To combat negative allocations to bonds, I simply reduced any negative allocations to zero.
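For what it is worth, that clamp applied to the algo's bond-slot arithmetic amounts to the following (hypothetical helper in plain Python, mirroring trade_bonds plus the negative-allocation fix):

```python
def bond_weight(target_positions, stock_positions):
    """Weight to give bonds: each slot not filled by a stock gets
    1/target of the book, clamped so it can never go negative."""
    w = (target_positions - stock_positions) / float(target_positions)
    return max(w, 0.0)  # reduce any negative allocation to zero
```

With a 5-slot book, bond_weight(5, 3) gives 0.4, while bond_weight(5, 7), which previously produced a negative (short) bond order, now gives 0.0.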

Imperfect and doubtless I shall improve on it once I get to the bottom of the matter.

I'm not at all sure about trading every day and averaging the momentum signal. A similar effect could be achieved (so far as avoiding the dangers of using a single monthly re-allocation date) by trading weekly on an un-averaged signal.

My concern is that trading once a month is very convenient, if potentially risky. Trading every day or even every week would be impossible for me unless I automated.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor
import numpy as np 

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        trade, 
        date_rules.month_end(days_offset=0), 
        #date_rules.week_end(),
        time_rules.market_open(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        date_rules.month_end(days_offset=0), 
        #date_rules.week_end(),
        time_rules.market_close(minutes=20)
    )
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5.0
    context.bonds = sid(23870)
    
    #Other parameters
    context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 10
    context.top_n_relative_momentum_to_buy = 5 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q3000US
    universe = Q3000US()

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True)
    value = (Fundamentals.cash_return.latest.rank() + Fundamentals.fcf_yield.latest.rank()).rank()
    
    stable_roic = Mean_Over_STD(
        inputs=[Fundamentals.roic],
        mask=universe
    ).rank()
    
    stable_margins = Mean_Over_STD(
        inputs=[Fundamentals.ebit_margin ],
        mask=universe
    ).rank()
    
    quality = (
        roic + 
        # stable_roic +
        # stable_margins +
        ltd_to_eq +
        value
    )

    pipe = Pipeline(columns={'roe': quality},screen=universe)
    return pipe
        
def trade(context, data):
    log.info(get_datetime(tz=None))
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
    spy_ma200 = data.history(context.spy , "close", 200, "1d").mean()

    if spy_ma50 >= spy_ma200:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of prices for our top 50 ROE stocks
    prices = data.history(top_n_roe.index,"close", 180, "1d")     
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    # Exit any stock no longer in the top momentum list (keep the bond ETF)
    for x in context.portfolio.positions:
        if x == context.bonds:
            continue
        elif x not in top_n_by_momentum.index:
            order_target_percent(x, 0)
            print('GETTING OUT OF', x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x, 'percentage to buy',(1.0 / context.Target_securities_to_buy))

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    if percent_bonds_to_buy <0:
        percent_bonds_to_buy=0
    order_target_percent(context.bonds , percent_bonds_to_buy)
    print('percent_bonds_to_buy',percent_bonds_to_buy)

@Zenothestoic

The thing that has always bothered me about the pipeline implementation (starting with Joakim's version I think) is that the ranking is done against all stocks (QTradableStocksUS ?). In my mind the correct way to rank the factors is to use a mask to limit the comparison to those that are in your universe, i.e. rank(mask=universe), in this case Q3000US. However if you do it this way the cumulative return drops to less than half. Can you make an argument (other than it works better) for using rank() and not rank(mask=universe) ?

@Steve Jost
To be honest I am very unfamiliar with the Quantopian API especially as it seems to have changed somewhat since I last visited it.

I find working with the Quantopian IDE about as difficult an experience as engaging in carpentry where you are only allowed to look through a keyhole at your hands and the workbench.

Looks like you are right - how very bizarre. I added a mask separately to each of the fundamental rankings and came up with different, and as it happens worse, results. Live and learn, eh?

@Zenothestoic

I'm not an expert on Quantopian API, but to understand how rank(mask=universe) differs from rank(), it can help to construct two versions of the pipeline in a notebook (see attached). From this, it seems that rank() is ranking against a larger population than Q3000US and using screen = universe does not change the numerical ranks.

ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(mask=universe,ascending=True)  

Yes, the wrong way round of course if you are seeking low debt to equity. As it stands, however, high debt to equity creates exceptional profits, presumably because the company's debt produces leveraged earnings......

If you are a leverage junkie then this might actually suit your purposes - a highly profitable system (at least for the test period) and no need to take on leverage in your trading account.

A sort of no-recourse borrowing where your account can not go below zero.

@ Steve Jost
No, now you have pointed out the error, I can not put forward any argument for using rank() as opposed to rank(mask=universe).

@ Zenothestoic

It seems the prudent approach is to use rank(mask=universe) even though the return is less.
Without the mask, I worry that the exceptional return may have been a happy accident and not likely to repeat going forward.

Regarding the debt to equity ratio - I've also found that high 'financial_leverage' gives good results.
I think the two metrics are more or less equivalent.

That said, it's most likely the combination of large debt and high 'roic' that does the trick.
A company that generates a high return on capital will do well to leverage its capital at low (and historically decreasing) interest rates.

@ Steve Jost
Your logic sounds right. Lots of debt, but used profitably.

@ Steve Jost

For the private punter that sort of algo makes a lot of sense. Leverage is built in, but in such a way it can not bankrupt you. The return is high enough that you can devote a small amount of capital to it and still have it make you a decent amount of money over 5 or 10 years. And your capital employed is small enough that if it all goes horribly wrong it won't be a catastrophe.

I'm glad this sort of algo has made a return here. So much more interesting than what Big Steve wants for his Billions.

@steve

Thank you for posting the notebook. Using mask=universe is something I have been experimenting with a lot lately. I am trying to understand why choosing mask results in different returns - it should not. For example, assume a scenario: say you rank against the whole universe of 9000 securities (i.e. not using mask). Now pick one security in that universe, X, and say X has the highest fcf and hence a rank of 9000, so it will be on top. Now assume you rank against Q500US, but stock X is not in Q500US; according to our screen, it will be excluded. Continuing further, say a stock Y is ranked 8999 in the whole universe and also happens to be in Q500US, so it will end up in our selection. Had you used mask=universe instead, its rank would be 500 and it would still end up in our selection. Therefore, using mask should not change the results.

But the question is why it happens in the above algo - I think the answer lies in this line --> quality = (roic + ltd_to_eq + value)

Adding the different rank scores together messes up the whole ranking. If one instead uses quality = (roic + ltd_to_eq + value).rank(mask=universe), the result will be exactly the same. Hence, using mask in the final ranking does not change the result.

This is the most plausible explanation I can come up with. I might be missing something. Let me know please if this sounds logical.
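That argument can be checked with a toy cross-section in pandas (the numbers below are contrived for illustration): a single factor ranked wide and then screened keeps its in-universe order, but a sum of ranks can crown a different winner depending on the ranking population.

```python
import pandas as pd

# Toy cross-section: our universe is {A, B, C}; X and Y are outside it.
f1 = pd.Series({'A': 1, 'B': 2, 'C': 3, 'X': 2.5, 'Y': 2.6})
f2 = pd.Series({'A': 2, 'B': 3, 'C': 1, 'X': 10,  'Y': -5})
universe = ['A', 'B', 'C']

# Single factor: rank wide then screen vs rank inside the universe --
# the in-universe ordering (A < B < C on f1) is identical either way.
assert list(f1.rank()[universe].sort_values().index) == \
       list(f1[universe].rank().sort_values().index)

# Combined (summed-rank) factor: outsiders stretch the rank gaps
# unevenly, so the winner inside the universe can change.
combo_wide = (f1.rank() + f2.rank())[universe]             # rank() then screen
combo_narrow = f1[universe].rank() + f2[universe].rank()   # rank(mask=universe) analogue

print(combo_wide.idxmax())    # C
print(combo_narrow.idxmax())  # B
```

This matches the observation in the thread: the mask is harmless for one factor, but it changes the outcome once ranks computed over different populations are summed.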

@ Nadeem
I agree: if it's just one factor, it doesn't matter whether you rank over the universe or over the entire population.
If you combine several factors (as in this script) then it seems to me that it does matter which way you do it.
You don't want stocks outside of your universe to influence the weighting placed on the factors.

I think Dan Whitnable maybe said it best in comments that he added to his source code.
Note that he included (mask=universe) for the various factors but commented out that part of the code.
I think this was commented out only so that his back test would match the result of previous versions.

    # Get the fundamentals we are using.  
    # Rank relative to others in the base universe (not entire universe)  
    # Rank allows for convenient way to scale values with different ranges  
    cash_return = ms.cash_return.latest.rank() #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank() #(mask=universe)  
    roic = ms.roic.latest.rank() #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(ascending=True) #, mask=universe)  
    # Create value and quality 'scores'  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  

In the spirit of co-operation I wanted to report that I am having better results by doing away with the skipped period of 10 days in the momentum calculation (which people leave out because stocks are claimed to mean revert over that period). Frankly, I have never been impressed by the argument. Yes, I tend to use 10 days in my mean-reversion system tests, but the benefits of NOT leaving out the past 10 days in TF calculations seem sound.

The other change I have made is to use a more sensitive MA crossover of 10 / 100 for the SPY permission filter.
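For reference, the permission filter itself is just an SMA crossover test; a plain-pandas sketch (helper name my own, not the algo's code):

```python
import pandas as pd

def regime_on(closes, fast=10, slow=100):
    """True when the fast SMA of the index proxy is above the slow SMA,
    i.e. new long entries are permitted (a 10/100 variant of the
    algo's 50/200 SPY filter)."""
    s = pd.Series(closes, dtype=float)
    return bool(s.rolling(fast).mean().iloc[-1] >=
                s.rolling(slow).mean().iloc[-1])
```

The faster 10/100 settings should react to a downturn sooner than 50/200, at the likely cost of more whipsaws.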

The algo is now reaching Guy Fleury proportions.

For a really ritzy and risky shoot the lights out type system, I'm just using debt to equity as a factor - the higher the better.

Using 5 stocks produces a 38% CAGR over the 2003 to 2019 test period. Still the occasional leverage spike to iron out; average leverage 1.06, volatility 30%, max DD 37%. Universe = Q3000US(). Weekly rebalancing.
I won't bother to post the algo at this stage since I still have to deal with a few matters, such as the leverage problem.

But it is certainly all beginning to look highly amusing.

This trading strategy has about the same structure as many others on Quantopian: select some stocks, rank them on some criteria, and rebalance periodically. Use some minimal (lagging) protection - a 50-200 SMA crossover for this stock-to-bond switcher - a simple technique that has been around for ages.

Would it have been reasonable in 2003 to do so? Definitely, we were just getting out of the aftermath of the Dot.com bubble. A lot of developers (and portfolio managers) had bad memories of that debacle. So, yes, going forward, they would have put some kind of protection which could have very well been some variant and serving the same purpose: to limit the impact of drawdowns and volatility. It would also appear that it is easier to sell to management an automated trading strategy with some kind of protection than without.

This trading strategy is very hard to play with high stakes and a limited number of stocks. However, for a smaller account, it should do quite fine, as long as it stays limited.

The strategy has built-in scalability by design (up to a limit).

Playing 5 stocks on a $10M account is not that reasonable. It starts with a $2M bet size. I do not think that many here are ready for that, whatever the results of some backtests. However, as you increase the number of stocks you see a decline in overall performance. This is quite reasonable too. The strategy tries to pick stocks that are already performing above market averages, and therefore should provide, on average, an above-average performance. The thing is that as you add new stocks, they have a lower-ranked expectancy than those already selected, and this will tend to lower bet size and overall performance.

The question becomes: is this still acceptable over the long term?

It is all a question of confidence. In which scenario would you put your money on the table for some 16+ years? A backtest can give you an indication of what could have been; it does not give you what will be. However, based on the behavior of a particular trading strategy, you can surmise that going forward will be much like what the strategy did in the past. You will not have the exact numbers, but you can still make some reasonable approximations - sometimes relatively accurate, considering that otherwise you would not even have a clue as to what is coming your way.

It is only Quantopian who is looking to place £10m into a strategy. I will be placing a mere £10k.

So yes, I entirely agree. And yes, as you expand to 20 stocks and beyond the returns decrease but that is what you would have to do as the capital grew.
Trend following is old as the hills. The only difference with this strategy is the accidental discovery of the "wrong" use of the debt to equity ratio.

The big advantage of this strategy is that for a small amount of capital, large gains may be possible for some period without the use of leverage in the trading account.

The leverage is applied by the corporates themselves and is therefore not a direct risk to your trading account ~ it is non-recourse borrowing as regards the trader.

Incidentally, and not surprisingly, you will find returns are also pretty high using 10 and 20 stocks, so for the smaller account as the equity grows, you could employ the greater capital in this way.

All in all, this is a strategy for the small player looking for large returns from a leveraged play without the risks of taking on borrowings on his own balance sheet.

And agreed as to the future. Quo vadis. As in life, so in the markets.

Now, Guy – I have shared an exact strategy with the community. How about you share the code to one of your adaptations of this strategy so that we can see how you achieve your outsize returns?

Alpha Decay Compensation

The following chart is based on Dan Whitnable's version of the program at the top of this thread. All the tests were done using $10M as initial capital. The only thing I wanted to demonstrate was that the structure of the program itself will dictate some of its long-term behavior. And as such, one thing we could do was make an estimate as to the number of trades that will be taken, based on how many stocks will be traded.

There are 17 consecutive tests presented on that chart, each test having the number of stocks incremented as the BT number increased. The Q3000US was used instead of the Q500US universe; there was a 2% CAGR advantage in doing so. No leverage was used. Nonetheless, the strategy did use some at times (up to 1.6) for short periods of time, mainly due to the slippage factor. On average the leverage was at 1.0 some 95+% of the time.

An analysis of the data can help better understand the overall behavior of the trading strategy and plan for what you would like to see or might prefer as an initial setting. I have not made any change to the logic of Whitnable's version, except that from BT # 24 onward I commented out the no-slippage line of code and moved the rebalance function to the beginning of the month and the beginning of the day to allow more trades to be executed. On the last test you had 240 stocks, yet a rebalance generated 16,486 transactions in one day. That is about 3 hours for average trade execution. So, yes, there is a lot of slippage.

I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for.

The strategy, as you increase the number of stocks, does generate a lot of slippage and there is a cost to that. I prefer having a picture net of costs. Therefore, the Quantopian default settings for commissions and slippage were in force.

Some observations on the above chart.

It takes about 30+ stocks to have a diversified portfolio. The more stocks you add, the more the portfolio's average price movement will start to resemble the market average indices. It is only if a trading strategy can generate some positive alpha that it can exceed market averages.

As the number of stocks increases, we see the total return increase up to BT # 25 with its 17.8% CAGR. After that, the total CAGR decreases as the number of stocks increases.

As explained in my previous post, this is expected since as we increase the number of stocks we are adding lower-ranked stocks having lower CAGR expectancies. The result is reducing the overall portfolio performance.

The more you want to make this a diversified portfolio (having it trade more stocks), the more the overall performance is reduced. It is still positive, it is still above market averages, and it does generate some positive alpha.

What I find interesting is the actual vs. estimated number of trades columns. The actual numbers of trades come from the tearsheets, whereas the estimate is just that: an estimate based on the behavior of this type of portfolio rebalancing strategy. There is a direct relationship between the number of trades and the number of stocks to be traded, as the following chart illustrates.

Participation Prize

A subject that is not discussed very often here. There is a participation prize to play the game. Since the rebalance is on a monthly basis, as the number of stocks grows, the average holding duration will too, jumping by close-to-month multiples. Some attribute the gains to their alpha generation when part of it is simply market-average background.

The estimated free x_bar is a measure of what the market offers just for holding some positions over some time interval. If you hold SPY for 20 years, you should expect to get SPY's CAGR over that period. The same goes for holding stocks months at the time. And over the past 10 years, in an upmarket, just participating would have generated a profit on the condition you made a reasonable stock selection.

One cannot call the estimated free x_bar alpha generation. It is a source of profit for sure, but it is not alpha per se. Alpha is what is above the market average - the stuff that exceeds the benchmark's average total return. As the number of stocks increases, we can see the proportion of the estimated free x_bar increase in percentage over the actual x_bar. It is understandable: the average net profit per trade (actual x_bar) is decreasing while the average duration increases and the turnover rate decreases (x_bar is the average net profit per trade; refer to the long equation in a previous post).

Having the total return decrease after BT # 25 can also be interpreted as alpha decay. This is due to the very structure of the program itself. No compensation is applied for this return degradation, and it will continue as you add more stocks to the portfolio, or more time. And this becomes a rather limiting factor: the more you want to scale this thing up by adding more stocks and more time, the more the alpha will disintegrate. All that is needed is to compensate for the phenomenon.

I have this free and old 2014 paper, which is still valid today, that deals with how to compensate for this return decay (see https://alphapowertrading.com/index.php/publications/papers/263-fix-fraction-2). It should help anyone solve that problem, and thereby, achieve higher returns. The solution does not require much. However, the first step is to understand the problem, and then apply the solution. The paper does provide the explanations and equations needed to address the return decay problem.

Thank You, Zenothestoic for your continued work on this. I look forward to seeing your final version.

@Nadeem Ahmed
Thank you for your kind words!

@Guy Fleury
"I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for."

Then just for once Guy, why don't you do so? It would be most interesting. I am sure everyone would enjoy your adaptation of this system.

Here is Quality Companies in an Uptrend (Dan Whitnable's version, with fixed bond weights) and some other improvements.

# Quality companies in an uptrend (Dan Whitnable's version with bond weights fixed by Vladimir)  
import quantopian.algorithm as algo

# import things need to run pipeline  
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd


def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]

    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 20  
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 200  
    context.TF_CURRENT_LOOKBACK = 20

    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10  
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())  

def make_pipeline(context):  
    universe = Q500US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow

    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)  
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe

def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint  
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = []  
        )  
    # Record our weights for insight into stock/bond mix and impact of trend following  
    # record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())  
def record_vars(context, data):  
    record(leverage = context.account.leverage)  
    longs = shorts = 0  
    for position in context.portfolio.positions.values():  # .values() works on both Python 2 and 3  
        if position.amount > 0: longs += 1  
        elif position.amount < 0: shorts += 1  
    record(long_count = longs, short_count = shorts)  

And here is its performance with the parameters of my code.
You may compare it to the results of my 37-line code.


@Vladimir
Wonderful, thank you for that; I will look closely tomorrow. I too have been working on putting all the ranking and masking into Pipeline, but so far it does not seem to relate at all to my former versions. I shall look to correct the errors over the next few days, and in the meantime will look with great interest at your code.

@Vladimir
By the way, I quite like the idea of NOT equal-weighting each month, but instead adding up the percentages of the existing holdings and the new 20% weightings, then normalizing if they come to over 1, as suggested by @Viridian Hawk. So as to let profits run.
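A minimal sketch of that normalization idea (the tickers and drifted weights below are hypothetical, purely for illustration):

```python
import pandas as pd

# Hypothetical state: existing holdings keep their drifted weights,
# new entries come in at the fixed 20% target weight
existing = pd.Series({"AAA": 0.28, "BBB": 0.22, "CCC": 0.35})
new_entries = pd.Series({"DDD": 0.20, "EEE": 0.20})

weights = pd.concat([existing, new_entries])
total = weights.sum()          # 1.25 in this example
if total > 1.0:                # scale down only when over-invested,
    weights = weights / total  # otherwise let profits run
print(round(weights.sum(), 6))  # -> 1.0
```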

Dear All,

I have been following this thread since the start and thank you all for contributing.

I am a web developer wanting to learn Python and algo trading.

I am new to Quantopian and want to learn as much as possible.

I have backtested the Dan Whitnable version with fixed bond weights strategy posted by Vladimir.

Here are the results.

Thanks
Ashish

# Quality companies in an uptrend (Dan Whitnable version with fixed bond weights)  
import quantopian.algorithm as algo

# import things need to run pipeline  
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd


def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]

    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 20  
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 200  
    context.TF_CURRENT_LOOKBACK = 20

    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10  
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and use the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())  

def make_pipeline(context):  
    universe = Q500US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow

    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)  
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe

def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint  
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = []  
        )  
    # Record our weights for insight into stock/bond mix and impact of trend following  
    # record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())  
def record_vars(context, data):  
    record(leverage = context.account.leverage)  
    longs = shorts = 0  
    for position in context.portfolio.positions.values():  # .values() works on both Python 2 and 3  
        if position.amount > 0: longs += 1  
        elif position.amount < 0: shorts += 1  
    record(long_count = longs, short_count = shorts)  
There was a runtime error.

I'm finding a huge difference between using the momentum calculation within pipeline:

ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest  
indebted = ltd_to_eq_rank.top(50,mask=universe)  
mom =Returns(inputs=[USEquityPricing.close],window_length=126,mask=indebted)  

and Pandas used on the output of the pipeline:

quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]  

I suppose it could be the fill method and the limit on the fill method: I guess I need to look at Zipline to see exactly what is going on. And it has nothing to do with the deduction of the mean-reversion 10 days... I have accounted for that.

There again I could be overlooking something simple and obvious.

I will revert to the research notebook to compare the outputs and understand the differences.

The source code tells me that Returns uses:

window_safe = True  

Whereas the Pandas interpretation is looking at the adjusted close, I believe I am right in saying the Returns factor is looking at some sort of normalized prices. Which I confess puzzles me. Hey ho. More work to be done. And then of course the necessity to re-code and move to QuantConnect or get my own data.

A trading strategy is just that: a trading strategy. It can easily be expressed as a payoff matrix:
$$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H} \cdot \Delta \mathbf{P}) = F_0 + n \cdot \bar x = F_0 \cdot (1 + r_m(t) + \alpha_t(t) - exp_t(t))^t$$ where \(g(t) = r_m(t) + \alpha_t(t) - exp_t(t)\).

If you intend to play small, you could look at the equation this way: $$\mathsf{E}[\hat F(T)] = F_0 + n \cdot \bar x = F_0 \cdot (1 + g(t))^t$$ where \(F_0\), the initial capital, can play a major role.

As long as a trading strategy is scalable, sustainable, and marketable, it does not care so much about how much you put on the table (\(F_0\)). However, because it is a compounding return game that can be made to last, \(F_0\) will matter a lot.

Using a small stake, even if the strategy is profitable, really wastes a strategy's potential. Say you get a 20\(\%\) long-term CAGR; you might get something like this depending on the initial capital:

\(\mathsf{E}[\hat F(T)] = 10,000 \cdot (1 + 0.20)^{20} = 383,376\)

\(\mathsf{E}[\hat F(T)] = 100,000 \cdot (1 + 0.20)^{20} = 3,833,760\)

\(\mathsf{E}[\hat F(T)] = 1,000,000 \cdot (1 + 0.20)^{20} = 38,337,600\)

\(\mathsf{E}[\hat F(T)] = 10,000,000 \cdot (1 + 0.20)^{20} = 383,375,999\)
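These terminal values follow directly from the compound-growth formula \(F_0 (1 + g)^t\); a quick sketch to reproduce them:

```python
def terminal_value(f0, cagr, years):
    """Expected terminal capital: F0 * (1 + g)^t."""
    return f0 * (1.0 + cagr) ** years

# The four initial-capital cases above, at a 20% CAGR over 20 years
for f0 in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f0, round(terminal_value(f0, 0.20, 20)))  # e.g. 10000 -> 383376
```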

Putting up more initial capital does not require any trading skills, none at all.

However, it does require finding ways to either raise the cash or have it allocated to the strategy in some way. Nonetheless, 20 years wasted can also be considered as non-productive time, and that time has no reset button.

If a trading strategy has good long-term prospects, why waste its potential by constraining it to a small initial capital? Doesn't your strategy deserve better? And don't you?

@Guy Fleury
I give up. I just give up.
For heavens sake either produce some code or just let it be.
No offence meant but what is SO difficult about coming up with some code like the rest of us do?

@Anthony, there is no need to post the algo. As I said in my previous post, I used Dan Whitnable's version that is already posted. I changed the Q500US to Q3000US (anybody can do that) and changed the scheduled rebalancing to month_start and market_open. For each test, context.TARGET_SECURITIES and context.TOP_ROE_QTY were incremented as per the # of stocks column in the chart above. Almost forgot: starting with BT #24, I commented out the set_slippage line due to high slippage. That's it. No other changes.

Here is Vladimir's latest version with the same changes.

# Quality companies in an uptrend (Dan Whitnable version with fixed bond weights)  
# From Vladimir's corrected version  
# Made adjustments to resemble the last test using 240 stocks.
import quantopian.algorithm as algo

# import things need to run pipeline  
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd


def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]

    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 240  
    context.TOP_ROE_QTY = 250 #First sort by ROE

    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 200  
    context.TF_CURRENT_LOOKBACK = 20

    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10  
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and use the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_start(days_offset = 1),  
        time_rules.market_open(minutes = 5)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_start(days_offset = 1),  
        time_rules.market_open(minutes = 5)  
    )  
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())  

def make_pipeline(context):  
    universe = Q3000US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow

    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)  
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe

def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint  
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = []  
        )  
    # Record our weights for insight into stock/bond mix and impact of trend following  
    # record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())  
def record_vars(context, data):  
    record(leverage = context.account.leverage)  
    longs = shorts = 0  
    for position in context.portfolio.positions.values():  # .values() works on both Python 2 and 3  
        if position.amount > 0: longs += 1  
        elif position.amount < 0: shorts += 1  
    record(long_count = longs, short_count = shorts)
There was a runtime error.

@Guy Fleury

"I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for."

So what is the point in simply re-posting someone else's code unaltered?

I'm really not trying to be difficult Guy but you talk of enhancers and yet we never see your code for any enhancements. So perhaps you could post one of your 90% CAGR tests with code.

Unless the only enhancement is leverage over and above the few changes you refer to above? Perhaps you do not have any further enhancements to the code other than upping the leverage?

Somehow there is a disconnect here. Perhaps I am simply misunderstanding you.

But if you DO have further enhancements to Dan's code (or Vladimir's), perhaps you could post them?

If you do NOT have any further enhancements to contribute, and your 90% CAGR was simply based on leverage, then would you be so kind as to say so?

Skip_days.

The argument is that stocks are mean-reverting over 10 days, hence the following code:

 context.momentum_skip_days = 10  
 prices = data.history(df.index, "close", 180, "1d")  
 # Calculate the momentum of our top ROE stocks  
 quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]  
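Outside the Quantopian API, that skip-days momentum calculation can be sketched with plain pandas on synthetic data (the ticker names and prices below are made up for illustration):

```python
import numpy as np
import pandas as pd

lookback, skip = 126, 10

# Synthetic daily closes for two hypothetical tickers
idx = pd.date_range("2019-01-02", periods=180, freq="B")
prices = pd.DataFrame({
    "AAA": np.linspace(100.0, 150.0, 180),  # steady uptrend
    "BBB": np.linspace(100.0, 90.0, 180),   # steady downtrend
}, index=idx)

# Drop the most recent `skip` rows, then take the trailing
# `lookback`-day percent change: momentum excluding the last 10 days
momentum = prices[:-skip].pct_change(lookback).iloc[-1]
print(momentum.idxmax())  # -> AAA
```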

The argument does not seem to stand up in a few of the tests I have been running.

Using 10 skip_days and 5 stocks gives a 2261% total return.
Using 2 skip_days and 5 stocks gives a 25000% total return.

So the return increases as you reduce the number of excluded days. In other words, in my tests using my parameters, it is best NOT to exclude a period in the momentum calculation. Here is an example using only 2 skipped days.

Of course my tests have used 5 stocks only.

But using 20 stocks with the attached code provides similar evidence.
10 skipped days = total return of 3845%
1 skipped day = total return of 5121%

So what's with the much-talked-of mean-reversion period of 10 days?

Is it nonsense or am I missing something?

You will notice that leverage has not yet been sorted out in this algo. It averages just above 1 but sometimes goes up to 1.2 ish. I will get around to normalizing the weights at some stage. Which will presumably reduce returns.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US, Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor,Returns
import numpy as np 


def initialize(context):  
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')  
    
    #Schedule Functions
    schedule_function(
        trade, 
        #date_rules.month_end(days_offset=0), 
        date_rules.week_end(),
        time_rules.market_open(minutes=30)
    )
    schedule_function(
        trade_bonds, 
        #date_rules.month_end(days_offset=0), 
        date_rules.week_end(),
        time_rules.market_close(minutes=20)
    )
 
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5
    context.bonds = sid(23870)
    
    #Other parameters
    #context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 2
    context.top_n_relative_momentum_to_buy = 5 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q3000US()
    #ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest.rank(mask=universe,ascending=True)
    ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest
    indebted = ltd_to_eq_rank.top(50,mask=universe)
    mom =Returns(inputs=[USEquityPricing.close],window_length=126,mask=indebted)
    mom_av = SimpleMovingAverage(inputs=[mom],window_length=20,mask=indebted)    
    strong = mom.top(5)

    pipe = Pipeline(columns={'ltd_to_eq_rank': ltd_to_eq_rank, 'mom': mom,'mom_av':mom_av},screen=indebted)
    return pipe
        
def trade(context, data):
    log.info(get_datetime(tz=None))
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    #top_n_by_momentum  = algo.pipeline_output('pipeline')
    #security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma10 = data.history(context.spy , "close", 10, "1d").mean()
    spy_ma100 = data.history(context.spy , "close", 100, "1d").mean()

    if spy_ma10 >= spy_ma100:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    #top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    #prices = data.history(top_n_roe.index,"close", 180, "1d")    
    prices = data.history(df.index,"close", 180, "1d")  
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #quality_momentum = prices[:].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x, 'percentage to buy',(1.0 / context.Target_securities_to_buy))

            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    if percent_bonds_to_buy <0:
        percent_bonds_to_buy=0
    order_target_percent(context.bonds , percent_bonds_to_buy)
    print('percent_bonds_to_buy',percent_bonds_to_buy)
There was a runtime error.

Anthony,

The latest algo you have used in your research still has an uncontrolled leverage problem.
Here are the results of your latest setup in my version of the algo:

----------------------------------------------------------------------------

QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('IEF');
MOM = 126; EXCL = 2; N_Q = 50; N = 5; MA_F = 10; MA_S = 100; LEV = 1.0;

----------------------------------------------------------------------------

with # set_slippage(slippage.FixedSlippage(spread = 0.0)) commented out.
Why cheat yourself?


Hi old & new Q friends. For me, coming back after 6 months away, it is nice to see people sharing & contributing to this algo. Thanks Chris for starting the thread. The idea of "Quality Stocks in Uptrend" is exactly what I like to try to achieve in my own very small-time personal investing.

Now I ask if we can add a useful additional dimension besides "Quality" and "Uptrend" for individual stocks. Good-quality stocks will KEEP going up if demand for them continues from the overall investment community, and this is related in part to what the investing "fashion" is at the time. Sometimes it is big caps, sometimes small caps, sometimes high-earnings-growth stocks, sometimes value such as low-PE or low-debt stocks, and sometimes particular industries, market segments, or some other specific factors.

Now anyone who has looked at Markov chains & transition probabilities for different market regimes or different types of stocks will have noticed that there is generally a high degree of persistence in market behavior. If we identify the currently dominant "investment fashion" in the market at any given time, we can observe that it usually tends to persist, at least for a while. As the saying goes, "a rising tide tends to lift all boats", so I now propose adding an additional component or dimension to the mix when seeking the best investment opportunity, based on THREE legs: 1) Quality, 2) Uptrend (as you already have), and now also 3) the leading "investment fashion" group at any given time.

As i see it, "Investment fashion" could potentially be any of the following:
a) Any of the "Investing Styles" as defined in Q, namely: Momentum, Size, Value, Short Term Reversal, Volatility.
b) Any of the 11 different Economic Sectors as defined & used in Q.
c) Other possibilities such as correlation with interest rates or with those commodities showing the greatest price appreciation, etc.

So, without going to code just yet (my Python skills are not great), I envisage using what we already have in this thread, then defining & ranking the currently leading "Investment Fashion" as per a), b), c) above and adding it in to further enhance the selection criteria for the best potential returns.

Comments, please. Anyone care to try coding up this additional component or dimension of "Investment Fashion" (in an adaptive way but with some lag), to improve the mix even further?
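
Leg 3 could start as something very simple: rank candidate "fashion" groups by trailing return and keep the leader. A minimal sketch of that idea only; the sector tickers and return figures below are purely illustrative and not taken from any algo in this thread:

```python
# Hypothetical sketch of leg 3: rank candidate "fashion" groups by recent
# return and keep the leader. Tickers and numbers are illustrative only.
sector_returns = {
    'XLK': 0.12,   # technology
    'XLV': 0.08,   # health care
    'XLF': 0.04,   # financials
    'XLE': -0.06,  # energy
}
leading = max(sector_returns, key=sector_returns.get)
# 'leading' would then become an extra mask on the quality/momentum screen.
```

An adaptive-but-lagged version, as requested, might compute these returns over a trailing window that excludes the most recent days, in the same spirit as the momentum skip already used in this thread.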

Vladimir

Many thanks for that. There is no specific setting for "leverage" in your code, is there? I have spent most of my time trying to work out the Q API, to be honest, and intended to fix leverage later by normalizing the weights.
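
For what it's worth, the normalization mentioned above can be done in one pass over a weights dict. A minimal sketch; the helper name and the lev argument are my own, not from Vladimir's code:

```python
def normalize_weights(weights, lev=1.0):
    # Scale weights so gross exposure (sum of absolute weights) equals lev.
    gross = sum(abs(w) for w in weights.values())
    if gross == 0.0:
        return dict(weights)
    return {s: w * lev / gross for s, w in weights.items()}

# e.g. two 60% positions get scaled back to 50% each at lev=1.0
scaled = normalize_weights({'A': 0.6, 'B': 0.6})
```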

I need to work through your code to understand it.

I can't match your results using your code. Could you post the full version? I get a total return of around 7000% using those settings on your code.

I am nervous about using Optimise to be honest: I would prefer to see exactly how it is done. But I daresay it's harmless enough.

Yes, quite agree re slippage.

Is your MOM calculation being done within Pipeline as per the code you posted above, or outside it? I cannot achieve those results calculating MOM within pipeline; my results come from calculating MOM outside pipeline.

Also, have you adjusted your code to rebalance weekly as I did, or are you rebalancing with "date_rules.month_end(days_offset = 7)" as per your code above?
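
For reference, this is the outside-pipeline style of momentum calculation, following the original algo's pattern of slicing off the skip days and then taking the trailing percent change. A standalone sketch on a synthetic price series (in the algo the prices come from data.history):

```python
import numpy as np
import pandas as pd

# Synthetic daily closes standing in for data.history(...) output.
prices = pd.Series(np.linspace(100.0, 120.0, 140))

EXCL = 2    # days to skip (mean-reversion buffer)
MOM = 126   # momentum lookback

# Drop the most recent EXCL days, then measure the MOM-day return
# ending at the last remaining bar.
momentum = prices.iloc[:-EXCL].pct_change(MOM).iloc[-1]
```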

Anthony,

Happy backtesting.

Anthony,

I am nervous about using Optimise to be honest...

order_optimal_portfolio(opt.TargetWeights(wt), [opt.MaxGrossExposure(LEV)])  

does almost the same as this:

for sec, weight in wt.items(): order_target_percent(sec, weight)  

Vladimir
Very generous and thank you
A

This is how I would approach the over-leverage/shorting issue. I've also simplified the code, while (hopefully) keeping the original logic intact. (It does briefly over-leverage, probably due to a halted stock, but it's not bad.) I'm not sure why the performance lags Zenothestoic's 7301% return version so much, since I tried to match the parameters from that one. Can anyone spot the difference?

"""
Original by: Christopher Cain, CMT & Larry Connors
Posted here: https://www.quantopian.com/posts/new-strategy-presenting-the-quality-companies-in-an-uptrend-model-1

Refactored by Viridian Hawk:
- Moved all stock selection logic into pipeline.
- Simplified bond allocation logic and fixed over-leveraging issue.
"""

import quantopian.algorithm as algo
from quantopian.pipeline              import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data         import Fundamentals
from quantopian.pipeline.filters      import Q3000US
from quantopian.pipeline.factors      import SimpleMovingAverage, Returns
import numpy as np 


TOP_N_ROE_TO_BUY = 50 #First sort by ROE
RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
MOMENTUM_SKIP_DAYS = 10
TOP_N_RELATIVE_MOMENTUM_TO_BUY = 5 #Number to buy

BONDS_FUND = sid(23870) # 7-10yr Treasuries

#HOLD_POSITIONS_FOR_X_DAYS = 20 ## Not implemented yet.
    
    
def initialize(context):
    set_slippage(slippage.FixedSlippage(spread = 0.0))
    
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        rebalance, 
        date_rules.month_end(days_offset=0), 
        time_rules.market_open(minutes=30)
    )
    schedule_function(
        record_vars, 
        date_rules.every_day(), 
        time_rules.market_close()
    )
    
    # Rolling portfolios
    context.daily_weights = []    
    
 
def make_pipeline():
    # Base universe set to the Q3000US
    universe = Q3000US()
    m = universe

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True, mask=m)
    value = (Fundamentals.cash_return.latest.rank(mask=m) + Fundamentals.fcf_yield.latest.rank(mask=m)).rank()
    
    quality = (
        roic + 
        ltd_to_eq +
        value
    )
    m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)
    
    quality_momentum = Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p()
    m &= quality_momentum.top(TOP_N_RELATIVE_MOMENTUM_TO_BUY, mask=m)
    
    return Pipeline(columns={},screen=m)


def rebalance(context, data):
    # Get daily pipeline output (screened stocks)
    df = algo.pipeline_output('pipeline')
    security_list = df.index
    
    # Trend Following Regime Filter (50-day/200-day crossover)
    spy_hist = data.history(symbol('SPY'), "close", 200, "1d")
    spy_ma50 = spy_hist[-50:].mean()
    spy_ma200 = spy_hist.mean()
    TF_filter = spy_ma50 >= spy_ma200
    
    # Construct today's weights
    todays_weights = {}
    ## Equities
    for s in security_list:
        ### If we're in a bear market, do not add new equities positions,
        ### only continue to hold current ones if they continue to satisfy screener.
        if TF_filter or s in context.portfolio.positions:
            todays_weights[s] = 1.0 / TOP_N_RELATIVE_MOMENTUM_TO_BUY
    ## Bonds
    todays_weights[BONDS_FUND] = 1.0 - sum(todays_weights.values())
    
    # Rebalance to today's target weights
    for s, w in todays_weights.items():
        order_target_percent(s, w)
    for s in context.portfolio.positions:
        if s not in todays_weights:
            order_target(s, 0)
            
            
def record_vars(context, data):
    longs = shorts = 0
    for stock in context.portfolio.positions:
        if context.portfolio.positions[stock].amount > 0:
            longs += 1
        elif  context.portfolio.positions[stock].amount < 0:
            shorts += 1
    record(longs = longs)
    record(shorts = shorts)
    record(l = context.account.leverage)

Anyways, the reason I did the above was to illustrate how if you approach portfolio construction as the creation of a todays_weights dictionary, then it becomes very easy to take it one step further and control the holding period via rolling portfolios. Here I show how to do 20 rolling portfolios that are each held for 20-days, thereby maintaining the same turnover rate as a monthly-rebalance while diversifying away from day-of-the-month overfit noise risk.

As I suspected, in the case of this strategy, it doesn't make much difference because the quarterly data is super low frequency and turnover was already ridiculously low. However, this will create more uniform turnover, instead of monthly spikes. It also allows you to hold more positions at once without resorting to holding positions with weaker alpha signals. Generally, this should lower volatility (via diversification and lower signal-name risk) without sacrificing alpha.

This technique is super useful if you are dealing with an alpha signal that fluctuates faster than the ideal holding period. It also gives you much more granularity than rebalance_weekly and rebalance_monthly. You can do 10-day or 30-day holding periods.

The only drawback is that it doesn't keep track of a position's gains/losses and always rebalances back to its original target weight. So on the long side it'll work against you on momentum stocks but to your advantage if the positions tend to mean revert.
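
As an aside, the averaging loop asks for a more pythonic alternative; collections.Counter can do the per-security summing. A standalone sketch of the same rolling-average idea (the function name is mine):

```python
from collections import Counter

def average_rolling_weights(daily_weights):
    # Sum each security's weight across the rolling portfolios,
    # then divide by the number of portfolios in the window.
    total = Counter()
    for weights in daily_weights:
        total.update(weights)  # adds values for matching keys
    n = len(daily_weights)
    return {s: w / n for s, w in total.items()}

# Two rolling portfolios: 'A' fully weighted in one, split 50/50 in the other.
combined = average_rolling_weights([{'A': 1.0}, {'A': 0.5, 'B': 0.5}])
```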

"""
Original by: Christopher Cain, CMT & Larry Connors
Posted here: https://www.quantopian.com/posts/new-strategy-presenting-the-quality-companies-in-an-uptrend-model-1

Refactored by Viridian Hawk:
- Moved all stock selection logic into pipeline.
- Simplified bond allocation logic and fixed over-leveraging issue.
"""

import quantopian.algorithm as algo
from quantopian.pipeline              import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data         import Fundamentals
from quantopian.pipeline.filters      import Q3000US
from quantopian.pipeline.factors      import SimpleMovingAverage, Returns
import numpy as np 


TOP_N_ROE_TO_BUY = 50 #First sort by ROE
RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
MOMENTUM_SKIP_DAYS = 10
TOP_N_RELATIVE_MOMENTUM_TO_BUY = 5 #Number to buy

BONDS_FUND = sid(23870) # 7-10yr Treasuries

HOLD_POSITIONS_FOR_X_DAYS = 20
    
    
def initialize(context):
    set_slippage(slippage.FixedSlippage(spread = 0.0))
    
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        rebalance, 
        date_rules.every_day(), 
        time_rules.market_open(minutes=30)
    )
    schedule_function(
        record_vars, 
        date_rules.every_day(), 
        time_rules.market_close()
    )
    
    # Rolling portfolios
    context.daily_weights = []    
    
 
def make_pipeline():
    # Base universe set to the Q3000US
    universe = Q3000US()
    m = universe

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True, mask=m)
    value = (Fundamentals.cash_return.latest.rank(mask=m) + Fundamentals.fcf_yield.latest.rank(mask=m)).rank()
    
    quality = (
        roic + 
        ltd_to_eq +
        value
    )
    m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)
    
    quality_momentum = Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p()
    m &= quality_momentum.top(TOP_N_RELATIVE_MOMENTUM_TO_BUY, mask=m)
    
    return Pipeline(columns={},screen=m)


def rebalance(context, data):
    # Get daily pipeline output (screened stocks)
    df = algo.pipeline_output('pipeline')
    security_list = df.index
    
    # Trend Following Regime Filter (50-day/200-day crossover)
    spy_hist = data.history(symbol('SPY'), "close", 200, "1d")
    spy_ma50 = spy_hist[-50:].mean()
    spy_ma200 = spy_hist.mean()
    TF_filter = spy_ma50 >= spy_ma200
    
    # Construct today's weights
    todays_weights = {}
    ## Equities
    for s in security_list:
        ### If we're in a bear market, do not add new equities positions,
        ### only continue to hold current ones if they continue to satisfy screener.
        if TF_filter or s in context.portfolio.positions:
            todays_weights[s] = 1.0 / TOP_N_RELATIVE_MOMENTUM_TO_BUY
    ## Bonds
    todays_weights[BONDS_FUND] = 1.0 - sum(todays_weights.values())
    
    # Rolling portfolios
    ## Add today's weights to rolling list
    context.daily_weights.append(todays_weights)
    if len(context.daily_weights) > HOLD_POSITIONS_FOR_X_DAYS:
        context.daily_weights.pop(0)
    ## Average the rolling portfolios (if there is a more pythonic way to accomplish this, please share)
    combined_weights = {}
    for weights in context.daily_weights:
        for s, w in weights.items():
            if s not in combined_weights:
                combined_weights[s] = 0 # initialize
            combined_weights[s] += w / len(context.daily_weights)
    
    # Rebalance to target weights
    for s, w in combined_weights.items():
        order_target_percent(s, w)
    for s in context.portfolio.positions:
        if s not in combined_weights:
            order_target(s, 0)
            
            
def record_vars(context, data):
    longs = shorts = 0
    for stock in context.portfolio.positions:
        if context.portfolio.positions[stock].amount > 0:
            longs += 1
        elif  context.portfolio.positions[stock].amount < 0:
            shorts += 1
    record(longs = longs)
    record(shorts = shorts)
    record(l = context.account.leverage)

The following is based on @Anthony's latest version, where I changed the number of stocks to 240 and moved the rebalancing to week_start at market open to match @Vladimir's version. I also commented out the no-slippage line.

The surprise is the smoothness of the equity curve and how it passed the financial crisis with barely a dip, considering that period's market turmoil.

For some reason, the strategy did not trade the 240 stocks as requested but only about 50-53. Nonetheless, for a long only strategy, it is a remarkable equity curve with low volatility and low drawdowns.

Now, the problem becomes keeping the smoothness of the equity curve and raising its outcome. But first, there is a need to know why it did not trade the 240 stocks.

# Quality companies in an uptrend (Dan Whitnabl version with fixed bonds weights)  
# From Vladimir's corrected version
# Made adjustments to resemble the last test using 240 stocks.
# From BT#40 Zero's version
import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US, Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor,Returns
import numpy as np 
 
 
def initialize(context):  
    
    #set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')  
    
    #Schedule Functions
    schedule_function(
        trade, 
        #date_rules.month_end(days_offset=0), 
        date_rules.week_start(),
        time_rules.market_open(minutes=5)
    )
    schedule_function(
        trade_bonds, 
        #date_rules.month_end(days_offset=0), 
        date_rules.week_start(),
        time_rules.market_open(minutes=5)
    )
 
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 240
    context.bonds = sid(23870)
    
    #Other parameters
    #context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 2
    context.top_n_relative_momentum_to_buy = 240 #Number to buy
    
 
def make_pipeline():
 
    # Base universe set to the Q3000US
    universe = Q3000US()
    #ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest.rank(mask=universe,ascending=True)
    ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest
    indebted = ltd_to_eq_rank.top(50,mask=universe)
    mom =Returns(inputs=[USEquityPricing.close],window_length=126,mask=indebted)
    mom_av = SimpleMovingAverage(inputs=[mom],window_length=20,mask=indebted)    
    strong = mom.top(5)
 
    pipe = Pipeline(columns={'ltd_to_eq_rank': ltd_to_eq_rank, 'mom': mom,'mom_av':mom_av},screen=indebted)
    return pipe
        
def trade(context, data):
    log.info(get_datetime(tz=None))
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    #top_n_by_momentum  = algo.pipeline_output('pipeline')
    #security_list = df.index
 
    ############Trend Following Regime Filter############
    
    spy_ma10 = data.history(context.spy , "close", 10, "1d").mean()
    spy_ma100 = data.history(context.spy , "close", 100, "1d").mean()
 
    if spy_ma10 >= spy_ma100:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]
 
    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    #top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    #prices = data.history(top_n_roe.index,"close", 180, "1d")    
    prices = data.history(df.index,"close", 180, "1d")  
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #quality_momentum = prices[:].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    for x in context.portfolio.positions:
        if (x.sid == context.bonds):
            pass
        elif x not in top_n_by_momentum:
            order_target_percent(x, 0)
            print('GETTING OUT OF',x)
    
    for x in top_n_by_momentum.index:
        if x not in context.portfolio.positions and context.TF_filter==True:
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))
            print('GETTING IN',x, 'percentage to buy',(1.0 / context.Target_securities_to_buy))
 
            
def trade_bonds(context, data):
    amount_of_current_positions=0
    if context.portfolio.positions[context.bonds].amount == 0:
        amount_of_current_positions = len(context.portfolio.positions)
    if context.portfolio.positions[context.bonds].amount > 0:
        amount_of_current_positions = len(context.portfolio.positions) - 1
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
    if percent_bonds_to_buy <0:
        percent_bonds_to_buy=0
    order_target_percent(context.bonds , percent_bonds_to_buy)
    print('percent_bonds_to_buy',percent_bonds_to_buy)

Guy, I believe it's trading only about 50 stocks rather than 240 because of this constraint in the code: indebted = ltd_to_eq_rank.top(50, mask=universe)

Changed N_Q to 60

QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('IEF', 'TLT');
MOM = 126; EXCL = 2; N = 5; N_Q = 60; MA_F = 10; MA_S = 100; LEV = 1.0;


@Vladimir

For my taste, I would not invest in the US long bond. The 7-to-10-year is already risky enough and may or may not offer protection in a stock market crash. In my testing, the long bond has sometimes been 90% correlated to stocks and had a VAST drawdown during the Volcker interest-rate-rise regime of the early 1980s.

In fact I might even downgrade to IEI.

But who am I to say....

Super return anyway!

Could someone please help me in getting this code right? I am trying to weight the portfolio according to the strength of rank rather than weighting the positions equally. I am trying to use the line wt_stk = output.quality / output.quality.sum() instead of the original wt_stk = LEV/len(stocks). But it gives me a runtime error: TargetWeights() expected a value with dtype 'float64' or 'int64' for argument 'weights', but got 'object' instead.

@Nadeem -- I think you need to do something more like wt_stk = output.quality[s] / output.quality.sum() (note the [s]).
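
The underlying idea of the fix is just dividing each stock's score by the column sum, so the weights come out as plain floats summing to one. A small pandas sketch with made-up quality scores:

```python
import pandas as pd

# Hypothetical quality scores for the selected stocks.
quality = pd.Series({'AAA': 3.0, 'BBB': 2.0, 'CCC': 1.0})

# Rank-proportional weights: each weight is that stock's share of the total.
# The result is float64 throughout, which is what TargetWeights expects.
weights = quality / quality.sum()
```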

"""
Original by: Christopher Cain, CMT & Larry Connors
Posted here: https://www.quantopian.com/posts/new-strategy-presenting-the-quality-companies-in-an-uptrend-model-1

Refactored by Viridian Hawk:
- Moved all stock selection logic into pipeline.
- Simplified bond allocation logic and fixed over-leveraging issue.
"""

import quantopian.algorithm as algo
from quantopian.pipeline              import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data         import Fundamentals
from quantopian.pipeline.filters      import Q3000US
from quantopian.pipeline.factors      import SimpleMovingAverage, Returns
import numpy as np 


TOP_N_ROE_TO_BUY = 50 #First sort by ROE
RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
MOMENTUM_SKIP_DAYS = 10
TOP_N_RELATIVE_MOMENTUM_TO_BUY = 5 #Number to buy

BONDS_FUND = sid(23870) # 7-10yr Treasuries

#HOLD_POSITIONS_FOR_X_DAYS = 20 ## Not implemented yet.
    
    
def initialize(context):
    set_slippage(slippage.FixedSlippage(spread = 0.0))
    
    algo.attach_pipeline(make_pipeline(), 'pipeline')    
    
    #Schedule Functions
    schedule_function(
        rebalance, 
        date_rules.month_end(days_offset=0), 
        time_rules.market_open(minutes=30)
    )
    schedule_function(
        record_vars, 
        date_rules.every_day(), 
        time_rules.market_close()
    )
    
    # Rolling portfolios
    context.daily_weights = []    
    
 
def make_pipeline():
    # Base universe set to the Q3000US
    universe = Q3000US()
    m = universe

    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True, mask=m)
    value = (Fundamentals.cash_return.latest.rank(mask=m) + Fundamentals.fcf_yield.latest.rank(mask=m)).rank()
    
    quality = (
        roic + 
        ltd_to_eq +
        value
    )
    m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)
    
    quality_momentum = Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p()
    m &= quality_momentum.top(TOP_N_RELATIVE_MOMENTUM_TO_BUY, mask=m)
    
    return Pipeline(columns={'weight':quality.rank() },screen=m)


def rebalance(context, data):
    # Get daily pipeline output (screened stocks)
    df = algo.pipeline_output('pipeline')
    security_list = df.index
    
    # Trend Following Regime Filter (50-day/200-day crossover)
    spy_hist = data.history(symbol('SPY'), "close", 200, "1d")
    spy_ma50 = spy_hist[-50:].mean()
    spy_ma200 = spy_hist.mean()
    TF_filter = spy_ma50 >= spy_ma200
    
    # Construct today's weights
    todays_weights = {}
    ## Equities
    for s in security_list:
        ### If we're in a bear market, do not add new equities positions,
        ### only continue to hold current ones if they continue to satisfy screener.
        if TF_filter or s in context.portfolio.positions:
            todays_weights[s] = df.weight[s] / df.weight.sum() * len(df) / TOP_N_RELATIVE_MOMENTUM_TO_BUY
    ## Bonds
    todays_weights[BONDS_FUND] = 1.0 - sum(todays_weights.values())
    
    # Rebalance to today's target weights
    for s, w in todays_weights.items():
        order_target_percent(s, w)
    for s in context.portfolio.positions:
        if s not in todays_weights:
            order_target(s, 0)
            
            
def record_vars(context, data):
    longs = shorts = 0
    for stock in context.portfolio.positions:
        if context.portfolio.positions[stock].amount > 0:
            longs += 1
        elif  context.portfolio.positions[stock].amount < 0:
            shorts += 1
    record(longs = longs)
    record(shorts = shorts)
    record(l = context.account.leverage)

@Nadeem Ahmed

I dealt with the weighting as follows. This allows stocks to run but also allows new stocks to come in.

    context.stock_weights = pd.Series(index=top_n_by_momentum.index , data=0.0)  
    context.bond_weights = pd.Series(index=[context.bonds], data=0.0)  

    for x in context.portfolio.positions:  
        if x in top_n_by_momentum and (x.sid != context.bonds):  
            a=context.portfolio.positions[x].amount  
            b=context.portfolio.positions[x].last_sale_price  
            c=context.portfolio.portfolio_value  
            s_w=(a*b)/c  
            context.stock_weights.set_value(x,s_w)  
        if (x not in top_n_by_momentum) and (x.sid != context.bonds):  
            order_target_percent(x, 0)  
    for x in top_n_by_momentum.index:  
        if x not in context.portfolio.positions and context.TF_filter==True:  
            context.stock_weights.set_value(x,1.0 / context.Target_securities_to_buy)

    if context.stock_weights.sum()>1:  
        stocks_norm=(1.00/context.stock_weights.sum())  
        context.stock_weights=context.stock_weights*stocks_norm  
        context.bond_weights.set_value(context.bonds,0.0)  
    else:  
        context.bond_weights.set_value(context.bonds,1-context.stock_weights.sum())  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    for index, value in total_weights.iteritems():  
        order_target_percent(index, value)  

@Viridian Hawk

This is how I would approach the over-leverage/shorting issue. I've also simplified the code, while (hopefully) keeping the original logic intact. (It does briefly over-leverage, probably due to a halted stock, but it's not bad.) I'm not sure why the performance lags Zenothestoic's 7301% return version so much, since I tried to match the parameters from that one. Can anyone spot the difference?

It is because I calculate momentum OUTSIDE pipeline. This whole muck up about Window_Safe as mentioned in one of my posts above is what makes the difference.

Prices are adjusted for splits/consolidations/dividends anyway, so I don't understand why this Window_Safe junket. Also, if Q window-safes Returns(), I don't understand why it does NOT window-safe a moving average.

Anyway life is too short to worry about it. I always base my systems on adjusted prices and don't muck about any further. I have no idea what this additional "normalisation" is all about and I mistrust what I can not see. Hence my dislike of optimise also.

But perhaps I am just an ignorant Luddite.

Since a) I can't be arsed to place weekly trades manually and b) there is no live trading on Quantopian, I'm off to try and replicate this on Quantconnect.

Anyone got any other ideas vis a vis the simplest way to automate?

I absolutely do NOT want to have to muck about obtaining stock and fundamental data and then load it all in the cloud. And then work out how to link it all up to a broker.

I have a habit of disappearing down rabbit holes for months, and I have a feeling if I tried to go it alone, I would never re-emerge.

I understand this may be a novice question, but I am wondering about the quality measure used. Is there some “look forward” bias due to back testing vs. live trading? The quality factor used for the algo gathers the “latest” measure (roe, ltd_to_eq, etc.). Assuming most measures are based on quarterly data the Morningstar database reports the measure for the end of the quarter. Then, the back testing algo uses that data.

For example, when back testing the algo on January 31, 2019, it would pull the measures for December 31, 2018 (the latest end of quarter). However, in live trading we would most likely not know the December 31, 2018 measure on January 31, 2019. The company had probably not reported earnings by January 31. In real time if we screened the stocks on January 31, we would most likely get rankings based on the end of the 3rd quarter 2018.

Again, I am new to the platform, and perhaps the Morningstar database only reports figures that coincide with reporting dates? But that would seem odd. I have very limited Python skills but am wondering what the results would look like if we lagged the fundamental measures by a quarter. If my description of reporting date vs. database date is correct, I believe the way to accurately reflect live trading is to use the "known" measures. In some cases that would be the "latest" as used in the algo, but in other cases the correct fundamental measures would need to be lagged one quarter.
Your feedback is welcome. Thank you.

@John Sawyer, Quantopian timestamps data according to when it was actually available. For historical data that predates Quantopian's real-time collection, I believe they add a conservative delay to ensure there is no look-ahead bias. There is no need to delay the data by one quarter.
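
To make the point-in-time idea concrete, here is a toy sketch (dates and ROE figures are invented): a fiscal-period value only becomes usable from its report date onward, which is how an as-of lookup avoids the look-ahead John describes.

```python
import pandas as pd

# Hypothetical fundamentals: each ROE belongs to a fiscal period but is
# only knowable from its report date onward.
fundamentals = pd.DataFrame({
    'period_end':  pd.to_datetime(['2018-09-30', '2018-12-31']),
    'report_date': pd.to_datetime(['2018-11-01', '2019-02-05']),
    'roe':         [0.18, 0.21],
})

asof = pd.Timestamp('2019-01-31')
# Keep only rows whose report date has already passed.
known = fundamentals[fundamentals['report_date'] <= asof]
latest_roe = known.sort_values('report_date')['roe'].iloc[-1]
# On 2019-01-31 only the Q3 figure is known; Q4 reports on 2019-02-05.
```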

Part of the reason why the original version over-leverages is this logic:

    for x in top_n_by_momentum.index:  
        if x not in context.portfolio.positions and context.TF_filter==True:  
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))  
            print('GETTING IN',x)  

New positions are given a 5% weight, but existing positions are allowed to grow above their original 5% allocation. Since most positions are expected to generate a return (gain weight), the sum of the 20 position weights will be greater than 100%.

The version I posted solves this problem by adjusting all positions back to equal-weight at each rebalance.
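
A toy illustration of that drift, with hypothetical numbers:

```python
# Toy sketch of the over-leverage mechanism: winners drift above their 5%
# entry weight, while replacements for exited names are still ordered at a
# full 5% of the CURRENT portfolio value. All figures are hypothetical.
held = [0.07] * 15   # 15 surviving positions that have drifted to 7% each
fresh = [0.05] * 5   # 5 new entries targeted at the fixed 5% weight
gross = sum(held) + sum(fresh)
# gross comes to 1.30, i.e. 130% of portfolio value requested.
```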

@Zeno, I figured out the discrepancy between my version and yours. Momentum works just as well inside pipeline as out, and there is no window-safe issue. What I missed was the change in the trend filter from the 50/200 SMA crossover to 10/100. That change contributes significant improvement! If in addition I allow over-leveraging as I described in my previous post, my results start to look pretty close to yours.

While I'll generally agree that a bear market filter is a good idea, I'm concerned that as soon as you start tweaking those settings, you're drifting into curve-fitting territory. Finding the ideal historical SMAs probably isn't going to be any more predictive than choosing one at random. Intuitively there are so many variables -- VIX, interest rates, volume, P/E, sentiment -- that simply looking at SMA crossovers is more like looking at the symptom than the cause. The reality is that it's going to behave erratically. Perhaps there is a more fundamental metric we can use to determine when high lt debt-to-equity companies start to underperform.

Weird. I must check the inside-the-pipeline version again. Totally agree on curve fitting. I will trade this but it WILL be very erratic. Pity we can't see what happened in the tech crash of 2000.

As you can see I solved over leverage in a different way. I placed my code in this thread a few entries above. My version was more of a trend following version. No equal weighting.

@Viridian, there are still leverage spikes, even in your version. They occur when exiting from bonds. As if, at rebalancing, not all the bonds are sold. Note, that I tested with the no-slippage line commented out to give it a more realistic outcome. Also, initial capital was at $10M, and the number of stocks at 100.

@Guy -- At $10m book w/slippage enabled it's not surprising that not all orders are getting filled, especially as the account grows. If it's limited liquidity on the bond ETF that's causing problems, you could distribute your bond position between more liquid ETFs like TLT. Also, I'm aware my version over-leverages occasionally due to halted stocks as well, but I figure those aren't worth worrying about, since they aren't inflating the backtest returns in the same way over-leveraging on non-halted stocks does. Another consideration, many companies in the Russell 3000 aren't liquid enough for a $10m-$1t account, so I would do some market-cap weighting if you want to trade such a large book, though of course that will hurt returns. I wouldn't trade $10m with this strategy, but if you were to do so, it would be wise to put in some extra execution logic in order to minimize slippage. Alpha decay is very low, so you can take your time legging in and out of positions if need be.

Moved trade execution to market_open().
QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('TLT', 'IEF'); MIN = 1;
MOM = 126; EXCL = 2; N = 5; N_Q = 60; MA_F = 10; MA_S = 100; LEV = 1.0;


EXCL = 2;

I'm finding Zeno is right, best to eliminate the momentum exclusion altogether.

Moved trade execution to market_open()

Is the idea that in live trading you would use OPG/MOO orders? Otherwise spreads are too wide at open and Quantopian's fill model is going to be much too generous, especially on the small caps.

TBH, yes. I am intending to do just that. I'm currently mucking about trying to get QuantConnect up and running. It's not laziness, but I know I would loathe having to put the orders in manually. I also want to work on a few other filters (other than indebtedness) to see if there is something else I can complement this with. Momentum again, but hopefully with a different filter to ensure I end up with different stocks.

We've been juicing this algorithm's results without a hold-out. I'm going to put it on my calendar to check back in six months to a year and see whether any of what we did improved the OOS performance.

A possible alternative is to go back and year by year work out parameters which produce "average results". In a sense that would be OOS.

Probably as good as looking at the optimal parameters in six months time.

In other words if you want to trade 5 stocks, keep that as a constant and fiddle around to find the average best days to re-balance and so on. Don't pick the worst or the best but somewhere in the middle.

We know well of course that the future will only vaguely resemble the past, and in that regard, no amount of hold-out guarantees future results. Nor does the use of average parameters, of course, but perhaps it may prove a useful exercise.

So you might end up trading parameters which have produced results off the very bottom and off the very top. Somewhere in the middle. You may have noticed that with some of the parameters, gradual changes occasioned quite smooth correlated moves in the equity curve.

For instance, reducing the momentum exclusion from 10 incrementally down to 0 produced a steadily increasing equity curve. So it's probably as worthwhile to fiddle around with this as to wait six months.
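
The "average parameters" approach discussed above can be automated: scan the parameter, then deliberately pick the middle of the pack rather than the optimum. `run_backtest` is a hypothetical stand-in for whatever backtest routine you have, and the toy numbers are invented:

```python
import statistics

def pick_median_parameter(param_values, run_backtest):
    """Backtest each candidate value and return the parameter whose
    result sits in the middle of the pack -- not the best or the worst."""
    results = {p: run_backtest(p) for p in param_values}
    median_result = statistics.median(results.values())
    # Choose the parameter whose result is closest to the median
    return min(results, key=lambda p: abs(results[p] - median_result))

# Toy example: pretend CAGR by momentum-exclusion days (invented numbers)
toy_results = {2: 0.25, 4: 0.19, 6: 0.15, 8: 0.12, 10: 0.10}
best_middle = pick_median_parameter(toy_results, toy_results.get)
print(best_middle)  # prints 6, the median performer
```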

@vladimir - A quick question. Are those results without default slippage and transaction cost? I used same variables with weekly re-balance and commented out 0 slippage cost line. I am getting only half of your return. 15923% to be precise.

@Nadeem,

Are those results without default slippage and transaction cost?
With # set_slippage(slippage.FixedSlippage(spread = 0.0)) commented out. Why cheat yourself?
I did not change anything in trade().
What is your definition of quality in pipeline?

Wow, so I'm getting only half of your result. I must be missing something crucial. I am using the algo from Anthony. It uses only the following:

indebted = ltd_to_eq_rank.top(60,mask=universe)

and then choosing top 5 momentum from those 60.

Amazing to find signal in this sort of noise.


Mr V Hawk
Would you be kind enough to interpret that chart for me? I am not clear on what it is purporting to show.

:-)

Blue dots are forward 60-day gains, red dots are forward 60-day losses. The size of a dot is the size of the gain/loss. The x-axis is the raw debt-to-equity value, and the y-axis is the 125-day return (which this algo has been using as "momentum"). The first chart shows how noisy this data is. The second chart is more in the region the algo acts on, and you can kind of see more blue in the top quadrant. I was just curious whether viewing the data this way would give any insights. I don't think so, but mostly it's just pretty to look at.
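
One practical note for anyone wanting to reproduce those charts: "forward 60-day return" means looking ahead from each date, which in pandas is a negative shift and is easy to get backwards. A minimal sketch on synthetic prices:

```python
import numpy as np
import pandas as pd

def forward_return(prices, horizon=60):
    """At each date, the percentage change over the NEXT `horizon` days."""
    return prices.shift(-horizon) / prices - 1.0

prices = pd.Series(np.linspace(100.0, 160.0, 250))
fwd = forward_return(prices)
# The last `horizon` entries are NaN: their future lies beyond the sample
print(int(fwd.isna().sum()))  # prints 60
```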

Many thanks. Back to square one then I guess and either taking a punt or waiting through an OOS period. No guarantees either way of course!

@Viridian, your two charts are what you should have expected to find. The same kind of results as was found in the late '60s and thereafter, that we still have to contend with today.

However, those two charts do say a lot. The short-term forward mean return is close to 0.0. However, even from a visual inspection, we can see a slight upward edge from the mostly bell-shaped distribution. There is a signal in there, but it is faint. Nonetheless, it does carry the long-term market average upward drift.

The chart also says how difficult it is to capture the positive side of those dots since a lot of randomness will have to be addressed. And by the very nature of the distribution, high hit rates will be difficult to achieve.

Discriminating those dots becomes a statistical problem in a tumultuous and quasi-unpredictable sea of variances.

Why choose a 60 day return? Might not 20 days be more appropriate for monthly trading?

@Viridian, in reference to your post, yes, agree. I had the same observations.

I am testing the limits and uses of this type of trading strategy. Currently, my interest is two-fold. How does it behave when you scale it up? And, a much more interesting question: how much of it can you anticipate?

This strategy can be viewed as fixed-fraction-of-equity position sizing with equal-weighted, monthly scheduled rebalancing, like many on this site. Its move to safety is a switch to bonds on a SPY 50-200 SMA crossover. On small quantities of stocks (5 to 20) and on relatively small initial capital (10k to 100k) it appears to be doing fine. However, most of those simulations here are on the same 5 or 20 stocks, as they are using the same stock selection process. Nonetheless, the bet sizes start relatively small (2k to 20k for the 5-stock scenario and from 5k to 20k for the 100k scenario). The bet size varies from 5% to 20% of equity which, in general, would tend to make the proposition riskier. It could be viewed as a high portfolio concentration, especially at the 20% bet-sizing level.
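
The concentration arithmetic is worth making explicit: with equal weighting, each bet is simply equity times leverage divided by the number of positions, which is where the figures above come from:

```python
def bet_size(equity, n_positions, leverage=1.0):
    """Dollar size of each equal-weighted position."""
    return equity * leverage / n_positions

# 10k and 100k books, 5 and 20 names
for equity in (10_000, 100_000):
    for n in (5, 20):
        print(equity, n, bet_size(equity, n))
```

A 100k book in 5 names puts 20k, i.e. 20% of equity, into each position, which is the high-concentration case.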

The nature of the problem changes as you increase the initial capital further. And if you want to play at higher levels, you will need your trading strategy to adapt to such an environment.

The ability of a trading strategy to scale up should almost be a prerequisite. You want to know if it can scale or not since it is designed to grow, and at an exponential rate at that. Also, since the strategy's stock selection process always results in the same ranking for the same stocks, you want to know how it will handle more stocks and/or a different selection in order to spread market risks to more than just 5 to 20 stocks.

The projected number of trades using the top 100-ranked stocks, as per the chart in a prior post, was 11,320. The backtest using your version of the program came in with 10,517 trades. Almost within its anticipated range of 10,613 to 11,497 over the trading interval (that is within 1% of the lower range's reach). Likewise, the projected CAGR was from 15.10% to 15.94%. The backtest came in with 15.03%.

What is remarkable here is that we can make such a projection (I have formulas and equations for that), not for a week or a month ahead, but for, in this case, a 16.7-year interval. And, these projections fell pretty close to the actual simulation result (look at the cited post above).

Putting these projections at play for big long-term targeted return funds opens up an interesting door for this type of strategy. As a matter of fact, any trading strategy where you could project that far into the future with a reasonable approximation could greatly benefit from those structural scaling capabilities.

Hello, I would like to make my small contribution to this as well, by putting the stock-picking logic (first selection by ROE, then further selection by momentum) inside each individual sector; the sector-selection logic is basically the one below.
It does not make huge money, but I like the idea of sector diversification. I also like comparing fundamentals inside each sector separately, because I think it gives the algo more robustness, and maybe someone could improve this.
I hope this is not too far “off topic” with regard to the original algo.

# Momentum, Value, and a little bit of mean reverting by Giuseppe Moriconi, modified by Vladimir  
# https://www.quantopian.com/posts/momentum-value-and-a-little-bit-of-mean-reverting  
from quantopian.pipeline.factors import SimpleMovingAverage, Returns  
from quantopian.algorithm import attach_pipeline, pipeline_output  
from quantopian.pipeline.classifiers.morningstar import Sector 
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.filters import QTradableStocksUS  
from quantopian.pipeline.data import Fundamentals  
from quantopian.pipeline import CustomFactor  
from quantopian.pipeline import Pipeline  
import pandas as pd  
import numpy as np  
import math

def initialize(context):  
    context.equity_allocation = 0  
    context.hedge_allocation = 0  
    context.market_trend = 0  
    context.sectors_to_buy = []  
    context.sectors_weight = {}  
    context.sectors_number_to_buy = []  
    context.sectors_dict = {  
                        symbol('XLB'): 101,      # XLB Materials  
                        symbol('XLY'): 102,      # XLY Consumer Cyclical  
                        symbol('XLF'): 103,    # XLF Financials  
                        symbol('XLP'): 205,      # XLP Consumer Defensive  
                        symbol('XLV'): 206,      # XLV Healthcare  
                        symbol('XLU'): 207,      # XLU Utilities  
                        symbol('XLE'): 309,      # XLE Energy  
                        symbol('XLI'): 310,      # XLI Industrials  
                        symbol('XLK'): 311,      # XLK Tech  
                        }  
    context.sectors_list = context.sectors_dict.keys()  
    context.bonds = [symbol('TLT'),              # TLT  
                     symbol('TIP'),              # TIP, IEF  
                     ]  
    schedule_function(rebalance, date_rules.month_start(), time_rules.market_open(hours=1),)  
    schedule_function(logging, date_rules.every_day(), time_rules.market_open(hours=1),)  
    schedule_function(record_vars, date_rules.month_start(), time_rules.market_open(hours=1),)

    
    ##########################################
    #########        PIPELINE        #########
    ##########################################
    base_universe = QTradableStocksUS()  
    m = base_universe
    #TOP_N_ROE_TO_BUY = 200 #First sort by ROE
    RELATIVE_MOMENTUM_LOOKBACK = 126 #Momentum lookback
    MOMENTUM_SKIP_DAYS = 10

    sector = Sector()                                            # sector code   
    #returns = Returns(window_length = 120, mask=base_universe)   # returns  
    #volatility = Volatility_Daily_Annual()  
    market_cap = Fundamentals.market_cap.latest
    market_cap_screen = market_cap > 1e9 
    momentum = (Returns(window_length=MOMENTUM_SKIP_DAYS+RELATIVE_MOMENTUM_LOOKBACK, mask=m).log1p() - Returns(window_length=MOMENTUM_SKIP_DAYS, mask=m).log1p())
    
 
    roic = Fundamentals.roic.latest.rank()
    ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(ascending=True, mask=m)
    value = (Fundamentals.cash_return.latest.rank(mask=m) + Fundamentals.fcf_yield.latest.rank(mask=m))
    
    quality = (
        roic + 
        ltd_to_eq +
        value
    )
    #m &= quality.top(TOP_N_ROE_TO_BUY, mask=m)

    columns = {'sector': sector, 'mom': momentum, 'roe': quality}  # note: the 'roe' column actually holds the quality composite
    screen = (base_universe & market_cap_screen)  
    pipe = Pipeline(columns, screen)  
    attach_pipeline(pipe, 'pipeline')

def rebalance(context, data): 

    stocks_to_buy = []  
    sectors_to_buy = []  
    sectors_number_to_buy = []  
    ########## ETF SELECTION #############  
    p = data.history(context.sectors_list, 'close', 252,'1d')  
    mean_20= p.rolling(20).mean().iloc[-1]  
    mean_240 =  p.rolling(240).mean().iloc[-1]  
    ratio = mean_20/mean_240  
    uptrending_sectors = ratio.where(ratio > 1.0).dropna()  
    last_month_performance = p.pct_change(periods= 30).iloc[-1]  
    # ascending=False puts the largest values at the top
    last_month_sorted = last_month_performance.sort_values(ascending = False)  
    last_month_sorted = last_month_sorted[last_month_sorted.index.isin(uptrending_sectors.index)]  
    uptrending_sectors_list = uptrending_sectors.index.tolist()  
    ######### EQUITY ALLOCATION BASED ON TREND ################  
    n = len(uptrending_sectors_list)  
    # -----------  MOST OF THE SECTORS ARE UPTRENDING ------------- #  
    if n >= 8:  

        market_trend = 2  
        equity_allocation = 0.8  
        # worst returns over the last month
        sectors_to_buy = last_month_sorted[-4:].index.tolist() 

    # -----------  MORE THAN HALF OF THE SECTORS ARE UPTRENDING ------------- #  
    elif 5 < n <  8: 

        market_trend = 1  
        equity_allocation = 0.6  
        sectors_to_buy = last_month_sorted[-3:].index.tolist()  
    # -----------  LESS THAN HALF OF THE SECTORS ARE UPTRENDING ------------- #  
    elif 2 < n <= 5:  

        market_trend = 0  
        equity_allocation = 0.4  
        sectors_to_buy = last_month_sorted[:2].index.tolist()  
    # -----------  FEW OF THE SECTORS ARE UPTRENDING ------------- #  
    elif 1 < n <= 2:  

        market_trend = -1  
        equity_allocation = 0.2  
        sectors_to_buy = last_month_sorted.index.tolist()  
    # -----------  NO SECTOR UPTREND  ------------- #  
    elif n <= 1:  
        market_trend = -2  
        equity_allocation = 0  
        sectors_to_buy = []  
    hedge_allocation = 1.0 - equity_allocation  
    for k, v in context.sectors_dict.iteritems():  
        if k in sectors_to_buy:  
            sectors_number_to_buy.append(v)  
    ########## PIPELINE FILTER BASED ON PREVIOUSLY SELECTED SECTORS #########  
    pipe = pipeline_output('pipeline')  

    grouped = pipe.groupby('sector')  
    for sector, group in grouped:  
        if sector in sectors_number_to_buy:  
            # first selection based on fundamentals: ROE
            group_roe = group.nlargest(50, 'roe')
            group_mom = group_roe.nlargest(3, 'mom')
            stocks_to_buy_for_this_sector =group_mom.index.tolist()  
            stocks_to_buy = stocks_to_buy + stocks_to_buy_for_this_sector  
    # Make global variables to plot lines etc  
    context.stocks_to_buy = stocks_to_buy  
    context.hedge_allocation = hedge_allocation  
    context.market_trend = market_trend  
    context.equity_allocation = equity_allocation  
    context.sectors_to_buy = sectors_to_buy  
    

     
    print '-------------REBALANCE-------------'  
    print 'equity allocation %s' %(context.equity_allocation)  
    print 'bonds allocation %s' %(context.hedge_allocation)  
    print 'sectors to buy %s' %( context.sectors_to_buy)  
    print 'portfolio positions %s' %(context.portfolio.positions.keys())  
    print 'stocks to buy: %s' %(context.stocks_to_buy)  
    
    for bond in context.bonds:  
        order_target_percent(bond, context.hedge_allocation/len(context.bonds))

    for stock in context.portfolio.positions :  
        if stock not in context.stocks_to_buy and stock not in context.bonds:  
             order_target_percent(stock, 0)  
    for stock in context.stocks_to_buy :  
        if get_open_orders(stock): continue  
        else:  
           order_target_percent(stock, context.equity_allocation / len(context.stocks_to_buy))  
def record_vars(context, data):  
    
    record(leverage=context.account.leverage, trend = context.market_trend, equity_allocation = context.equity_allocation, n_sectors = len(context.sectors_to_buy))  
    
def logging(context, data):
    cpp = context.portfolio.positions
    for s in cpp:
        print s.symbol
        print s.asset_name
        

class Volatility_Daily_Annual(CustomFactor):  
    inputs = [USEquityPricing.close]  
    window_length = 120  
    def compute(self, today, assets, out, close):  
        # [0:-1] is needed to remove last close since diff is one element shorter  
        daily_returns = np.diff(close, axis = 0) / close[0:-1]  
        out[:] = daily_returns.std(axis = 0) * math.sqrt(252)

Sorry in advance for a newbie question, but I cannot seem to find this answer anywhere. How do I get the current stock symbols for this model? Do I have to write some code to output the current symbols?

Thanks.

@Andy, probably the easiest is to run a "full backtest" through the most recent date and navigate to "Activity" -> "Positions" and you'll see the positions organized by date.

Sorry for some more newbie questions.. When I ran the backtest on the original Algo I noticed:

  1. It is a bit confusing to see that the cash becomes negative (e.g. -$92,363.61 on 2019-11-18). Why is that?
  2. There are a couple of leverage spikes of about 2x (e.g. on 04/04/10). Why does that happen? Does that mean we need to borrow (e.g. on margin) $100,000 for those days?

EDIT: 3. Oh, and one more thing. In actual trading, since the backtest only gives the previous day's data, we need to do the actual rebalancing the next day with our broker. I wonder if the backtest results of the Algo would change if this were incorporated into the Algo? For example: do the evaluation at the close of the last day of the month, then do the actual rebalancing the following day (e.g. at 12 PM, if the backtest data would be available then).
I don't know enough Quantopian/Python yet to test whether this would make a difference in performance.

(p.s. I am used to Tradestation and in strategies with daily data, I would evaluate after closing and place potential orders for the opening at the next day).

I have noticed that if you comment out the market cap filter in the pipeline, the strategy performs better.
What are the main risks of doing that in live trading?

@Chris, @Joakim

According to Bernstein, Style Investing, 1995 (book), the risk/return characteristics of quality measures in stocks favor low quality.
In their example they used the S&P quality rating, A+ to D, where A+ is the highest quality.

From 1986 to 1994:
A+ achieved a mean return of 9.57% (yearly) with a std. dev. of 14.83%.
C/D got a mean return of 19.28% with a std. dev. of 27.66%.
The other ranks behave accordingly (B: 13.91%).
(Source: Merrill Lynch Quantitative Analysis)
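
Dividing those quoted means by their standard deviations (ignoring the risk-free rate) is a quick way to see the risk-adjusted picture; the inputs below are just the figures cited above:

```python
# Mean return / std. dev. pairs as quoted (Bernstein, 1986-1994 sample)
quality_stats = {"A+": (9.57, 14.83), "C/D": (19.28, 27.66)}

for rating, (mean_ret, std_dev) in quality_stats.items():
    print(rating, round(mean_ret / std_dev, 3))  # A+ -> 0.645, C/D -> 0.697
```

So even on this crude risk-adjusted view, low quality edged out high quality in that sample, though by far less than the raw returns suggest.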

This explains why the changes from Joakim boosted the performance.
It might be a good idea to combine low quality with momentum.

@ carsten

Generally speaking, “combine low quality with momentum” is wrong.

It is widely known (research by Novy-Marx, AQR, ...) that the quality factor (long/short) is a source of equity outperformance, and it might work long-only as well. As far as I know, low D/E (leverage) is not the favored choice for the quality factor, and if it is used, then only as part of a larger quality composite. Norges Bank has a study showing that. Most often, profitability ratios are used for quality.

By the way, high D/E in combination with small size and value is used by Rasmussen to replicate a cheaper, public version of private equity investing.

@rogoman

yes, this is what I read a lot.

BUT interestingly, other publications state the opposite.
The book from Richard Bernstein was quite interesting on that point.
It's a bit old, but they show that low quality can be a source of alpha, as long as borrowing costs are low.
It always depends on which market cycle we are in.
(The author was the head of quantitative equity at Merrill Lynch.)

As an example, Tesla is not making any money at the moment but the return for their shareholders was not that bad since 2010.

I got the book because I wanted to understand why my small, value, momentum strategy did not work (it performed great in backtests some years ago, but has not actually worked in recent years). So far I have liked the book; unfortunately it is only in analog format, which is difficult to read... I like to read during my commute.
https://www.amazon.com/Style-Investing-Unique-Insight-Management/dp/047103570X

@ carsten

Factor strategies don't work all the time; they work on average. That might be a reason why they still work. If they had worked all the time, they would have been arbitraged away, and we would not be talking about these factors/styles (or whatever you want to call them) anymore.

Also: I don't know what your small, value, momentum strategy looks like, but if you bought a couple of stocks with these characteristics, don't be surprised if they don't work at all. Again, this is statistics: this stuff works on average, not all the time, and not for all stocks or every couple of stocks.

I would recommend this book as a starter : https://www.amazon.com/Your-Complete-Guide-Factor-Based-Investing/dp/0692783652/ref=sr_1_1?crid=3OPUCZY5UJ5JQ&keywords=factorbased+investing&qid=1577557634&sprefix=factor+based+%2Caps%2C236&sr=8-1

I'm intrigued by this strategy, and started messing around with it a bit in my spare time. Nothing to really improve on, but in the version I am testing, using the Q1500US universe performed better over the long run. Still can't get close to @Vladimir's 40.1% annual return, but closer than it was. (It still seems to have a leverage issue, as cash drifts to -$65,718.98 on 2018-08-22 at one point even though leverage is at 1 [context.portfolio.cash]; still looking into that. I think it is because some orders failed to fill, but I'm not sure.) 2018-07-20 13:00 WARN Your order for -6189 shares of OEC has been partially filled. 4165 shares were successfully sold. 2024 shares were not filled by the end of day and were canceled. But I would think order_optimal_portfolio should realize the sale didn't go through and not over-order... anyone have insight into this?

EDIT: The negative cash is for sure from the unfilled orders. If I use set_slippage(slippage.FixedSlippage(spread = 0.0)) then it only goes to about -$300 starting with $10,000, which is definitely a weak point of only buying into 5 stocks: once you have significant capital, you'll have trouble filling the orders in real life.

I'm interested in looking at optimizing the code with something like this: https://ntguardian.wordpress.com/2017/06/12/getting-started-with-backtrader/ (look at the Optimization section). Does anyone know if that is possible in Quantopian? It would be great to test all the variables and come up with the best outcome (instead of manually changing the variables and backtesting).
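
As far as I know, Quantopian has no built-in equivalent of backtrader's optimization loop, so the usual workaround is a plain grid search, rerunning backtests by hand or driving a simplified simulation from a research notebook. A generic sketch, where `run_backtest` is a hypothetical callable you would supply:

```python
from itertools import product

def grid_search(run_backtest, grid):
    """Call run_backtest(**params) for every combination in `grid`
    (a dict of parameter name -> list of candidates); return results best-first."""
    names = list(grid)
    results = []
    for combo in product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        results.append((run_backtest(**params), params))
    return sorted(results, key=lambda r: r[0], reverse=True)

# Toy objective standing in for a real backtest
toy = lambda lookback, n_stocks: n_stocks / lookback
ranked = grid_search(toy, {"lookback": [63, 126], "n_stocks": [5, 20]})
print(ranked[0][1])  # {'lookback': 63, 'n_stocks': 20}
```

The usual caveat: the top of such a grid is exactly where curve-fitting lives, which is why several posters in this thread prefer "middle of the pack" parameters over the historical best.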

"""
Original by: Christopher Cain, CMT & Larry Connors
Posted here: https://www.quantopian.com/posts/new-strategy-presenting-the-quality-companies-in-an-uptrend-model-1
(Dan Whitnabl version with fixed bonds weights)  
(Nathan Wells modified for performance and logging)
"""
# Quality companies in an uptrend 
import quantopian.algorithm as algo
 
# import things need to run pipeline  
from quantopian.pipeline import Pipeline
 
# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns
 
# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms
 
# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd
 
def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]
 
    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 5
    context.TOP_ROE_QTY = 50 #First sort by ROE
 
    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 200  
    context.TF_CURRENT_LOOKBACK = 20
 
    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()
 
    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())  
 
def make_pipeline(context):  
    universe = Q1500US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow
 
    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)  
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum
                   )  
    return pipe
 
def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum and (trend_up or (index in @current_holdings))'
    stocks_to_hold = df.query(rule).index  
    
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
    #print("Stocks to buy " + str(stocks_to_hold))
    print("Stocks to buy: " + str([ str(s.symbol) for s in stocks_to_hold ]) )
    
    #print("Bonds weight " + str(bond_weight))
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])
 
    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 
 
    # Execute the order_optimal_portfolio method with above objective and any constraint  
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = []  
        )  
    #Log our holdings
    log.info( [ str(s.symbol) for s in sorted(context.portfolio.positions) ] )
    #print("Cash: " + str(context.portfolio.cash))
    # Record our weights for insight into stock/bond mix and impact of trend following  
    record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())  
def record_vars(context, data):  
    record(leverage = context.account.leverage)
    longs = shorts = 0
    for stock in context.portfolio.positions:
        if context.portfolio.positions[stock].amount > 0:
            longs += 1
        elif  context.portfolio.positions[stock].amount < 0:
            shorts += 1
    record(longs = longs)
    record(shorts = shorts)

Here's a notebook, based on @Vladimir's notebook, with some more stats.


@Vladimir.
Sorry for being a novice about this, but did you publish the modified Algorithm to generate the notebook with "Annual return 40.1%"?

Highest I can get so far is 18915.42%; here's the code for anyone else wanting to tinker with it.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q3000US, Q500US, Q1500US
from quantopian.pipeline.factors import SimpleMovingAverage, CustomFactor,Returns
import numpy as np 
import pandas as pd


def initialize(context):  
    
    set_slippage(slippage.FixedSlippage(spread = 0.0)) 
    algo.attach_pipeline(make_pipeline(), 'pipeline')  
    
    #Schedule Functions
    schedule_function(
        trade, 
        #date_rules.month_end(days_offset=0), 
        date_rules.week_end(days_offset=0),
        time_rules.market_open()
    )
    #schedule_function(
    #    trade_bonds, 
    #    #date_rules.month_end(days_offset=0), 
    #    date_rules.week_end(),
    #    time_rules.market_close(minutes=20)
    #)
 
    #This is for the trend following filter
    context.spy = sid(8554)
    context.TF_filter = False
    context.TF_lookback = 126
    
    #Set number of securities to buy and bonds fund (when we are out of stocks)
    context.Target_securities_to_buy = 5
    context.bonds = sid(23870)
    
    #Other parameters
    #context.top_n_roe_to_buy = 50 #First sort by ROE
    context.relative_momentum_lookback = 126 #Momentum lookback
    context.momentum_skip_days = 2
    context.top_n_relative_momentum_to_buy = 5 #Number to buy
    
 
def make_pipeline():

    # Base universe set to the Q500US
    universe = Q3000US()
    #ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest.rank(mask=universe,ascending=True)
    ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest
    indebted = ltd_to_eq_rank.top(60,mask=universe)
    mom = Returns(inputs=[USEquityPricing.close], window_length=126, mask=indebted)
    mom_av = SimpleMovingAverage(inputs=[mom], window_length=20, mask=indebted)
    strong = mom.top(5)  # note: computed but never used below

    pipe = Pipeline(columns={'ltd_to_eq_rank': ltd_to_eq_rank, 'mom': mom,'mom_av':mom_av},screen=indebted)
    return pipe
        
def trade(context, data):
    #log.info(get_datetime(tz=None))
    # Get daily pipeline output
    df = algo.pipeline_output('pipeline')
    #top_n_by_momentum  = algo.pipeline_output('pipeline')
    #security_list = df.index

    ############Trend Following Regime Filter############
    
    spy_ma10 = data.history(context.spy , "close", 10, "1d").mean()
    spy_ma100 = data.history(context.spy , "close", 100, "1d").mean()

    if spy_ma10 >= spy_ma100:
        context.TF_filter = True
    else:
        context.TF_filter = False
    
    # TF_hist = data.history(context.spy , "close", 140, "1d")
    # TF_check = TF_hist.pct_change(context.TF_lookback).iloc[-1]

    # if TF_check > 0.0:
    #     context.TF_filter = True
    # else:
    #     context.TF_filter = False
    
    ############Trend Following Regime Filter End############ 
    
    #Grab top 50 stocks with best ROE
    #top_n_roe = df['roe'].nlargest(context.top_n_roe_to_buy)
    
    #DataFrame of Prices for our 500 stocks
    #prices = data.history(top_n_roe.index,"close", 180, "1d")    
    prices = data.history(df.index,"close", 180, "1d")  
    
    #Calculate the momentum of our top ROE stocks   
    quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]
    #quality_momentum = prices[:].pct_change(context.relative_momentum_lookback).iloc[-1]
    
    #Grab stocks with best momentum    
    top_n_by_momentum = quality_momentum.nlargest(context.top_n_relative_momentum_to_buy)
           
    context.stock_weights = pd.Series(index=top_n_by_momentum.index , data=0.0)  
    context.bond_weights = pd.Series(index=[context.bonds], data=0.0)  

    for x in context.portfolio.positions:  
        if x in top_n_by_momentum and (x.sid != context.bonds):  
            a=context.portfolio.positions[x].amount  
            b=context.portfolio.positions[x].last_sale_price  
            c=context.portfolio.portfolio_value  
            s_w=(a*b)/c  
            context.stock_weights.set_value(x,s_w)  
        if (x not in top_n_by_momentum) and (x.sid != context.bonds):  
            order_target_percent(x, 0)  
    for x in top_n_by_momentum.index:  
        if x not in context.portfolio.positions and context.TF_filter==True:  
            context.stock_weights.set_value(x,1.0 / context.Target_securities_to_buy)

    if context.stock_weights.sum()>1:  
        stocks_norm=(1.00/context.stock_weights.sum())  
        context.stock_weights=context.stock_weights*stocks_norm  
        context.bond_weights.set_value(context.bonds,0.0)  
    else:  
        context.bond_weights.set_value(context.bonds,1-context.stock_weights.sum())  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    for index, value in total_weights.iteritems():  
        order_target_percent(index, value)  
        #Log Buy Order
        print("Buy: " + str(index) + " percentage: " + str(value))
    #Log our holdings
    log.info( [ str(s.symbol) for s in sorted(context.portfolio.positions) ] )

Well, I got there... 32,051.054%!
I think in a different way than @Vladimir, but it's still an interesting algorithm. I still have an issue with leverage: at one point cash went to -$16,016, but given that the portfolio value was at $2.4 million by then (starting with $10k in 2003), I don't think it affected things too much, though I could be wrong. The trading day really does seem to have a large impact, as @Peter Harrington showed.

So while I'm sure this isn't real life, it is still a very interesting algorithm, especially because of how simple it is. Thanks for posting it, @Chris Cain, and to all those who worked on it as well!

(Notebook attached.)

@Nathan,
Would you mind sharing your code for the 32,051.054% return?
TIA!

One more way to evaluate a company's quality is the Piotroski Score: https://www.investopedia.com/terms/p/piotroski-score.asp
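For anyone who wants to experiment with it, a simplified Piotroski-style score might look like the sketch below. This is only a subset of the nine published criteria, and the field names are hypothetical placeholders, not Quantopian/Morningstar identifiers:

```python
# Simplified Piotroski-style quality score (a subset of the nine criteria;
# field names are hypothetical, not actual data-source identifiers).

def piotroski_subset_score(f):
    """f: dict of this year's and last year's fundamentals for one company."""
    score = 0
    score += f["roa"] > 0                        # positive return on assets
    score += f["cfo"] > 0                        # positive operating cash flow
    score += f["roa"] > f["roa_prev"]            # improving ROA
    score += f["cfo"] > f["roa"] * f["assets"]   # cash flow exceeds accruals
    score += f["leverage"] < f["leverage_prev"]  # falling long-term leverage
    score += f["margin"] > f["margin_prev"]      # improving gross margin
    return int(score)

example = {
    "roa": 0.08, "roa_prev": 0.05, "cfo": 120.0, "assets": 1000.0,
    "leverage": 0.4, "leverage_prev": 0.5, "margin": 0.30, "margin_prev": 0.28,
}
print(piotroski_subset_score(example))  # all six criteria pass -> 6
```

A higher score marks a higher-quality company, so it could slot into this strategy's pipeline in place of (or alongside) the ROE rank.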

My latest article, Financing Your Stock Trading Strategy, is about the trading strategy discussed here. I built it based on Dan Whitnable's version as presented above.

I view the 12 simulations presented in my article as an exploration phase of the limits, strengths, and weaknesses of that particular trading strategy. The principles and trading methods used could be applied to many other strategies.

It is not a case of one solution doing it all. I always look at trading strategies as a matter of choices, trading methods, preferences, and risk aversion. But that does not mean that no one can design a trading strategy that can outperform market averages. Trading is like any other business: there is always a cost associated with it, and there are always risks to be taken.

I wanted to share this new article not only for what it does, but for what it conveys as well. There is math behind a strategy's structure. It is expressed concisely in the progression of the payoff matrix equation presented, which served as a backdrop for these simulations.

There are some screenshots to decorate the article. Here are some of its headers:

  • The Basic Portfolio Equation
  • Reengineering For More
  • Financing Your Trading Strategy
  • Some Fundamentals Might Not Do What You Think
  • Scalable Strategy
  • Overriding The Ranking System
  • Increasing Leverage
  • Testing Methods
  • An Extended Strategy Payoff Matrix
  • Stopping Times

Hope it can help some by presenting a different perspective.

Article link: https://alphapowertrading.com/index.php/2-uncategorised/354-financing-your-stock-trading-strategy

Awesome result, @Nathan. Thank you for sharing, and for continuing to motivate members to push the limits of the algo. I think you have achieved the highest and most interesting result so far. The Sharpe ratio of 1.48 is great!

Regarding the selection of companies based upon high debt and high ROIC: I guess the reason why this strategy shows such strong returns, especially in the last few years, is the influx of cheap money caused by the rate cuts. My opinion is that this strategy will probably only work this well in the current type of market environment. The backtest hints at this, since the big returns only started to come in after around 2009. However, this is probably one of the better strategies to exploit the availability of cheap money without taking excess risk. I'm really intrigued by this approach!

@Kristof, a lot of the profit generated in that strategy comes from simply holding, on average, for a duration of about 5 months as if it was more like a participation prize. You get the profit because the portfolio had full exposure and the average 5-month position was positive in a generally up market.

There is some downside protection built in the selection process and the structure of the trading procedures on top of its declared move to safety bond switcher.

It becomes part of the reason why the strategy can benefit over the long term and why applying some leverage can benefit the bottom line as long as it is all paid for by the trading strategy itself which was demonstrated in my versions of this program in my previous posts.

This implies that most of the benefits of this trading strategy come from its structure since all the trades are made by the periodic rebalancing which comes on your preset schedule and not necessarily on the market's high or low price points. In fact, the program does not know the state of the market when rebalancing occurs. Its binary state is determined by an arbitrary and self-defined notion of a trend.

Nonetheless, I see alpha generation that can easily outpace many other trading methods. The 12 simulations presented in my last post were done using a slightly different set of stocks for each one of those tests. And still, the strategy successively outperformed in each of those simulations. One of my next steps will be to change the stock selection process altogether and see how the structure behaves.

For these reasons, I have a different perspective on the following:

“The backtest hints at this, since the big returns only started to come in after around 2009.”

The excess return was available almost from the start and for the duration of these 16.7-year simulations. In all those simulations, most of the equity came from the last few years simply because it is a compounding-return kind of game, and all that shows is the power of compounding over time.

However, the fundamentals on which the stock selection is made could benefit from better criteria. In my simulations, I downgraded the rankings of all fundamentals except one (which I will be attacking soon), and still obtained impressive results.

By assigning True to the Do_More option, the performance increases considerably. That option is part of the trading methodology. It has nothing to do with fundamentals, but everything to do with how you intend to play the game, or how aggressively you accept taking on incremental risks. There, as in any other strategy, it becomes a matter of choice and preference.

I am currently exploring how far this strategy can go. I've increased the number of stocks to trade to 300 over the same 16.7-year trading interval. The purpose is to reduce the impact of each bet. Each stock, while in the portfolio, will account for 1/300 of equity (0.33%). If a stock ever goes bankrupt, all the damage it can do will not exceed 0.33% of the portfolio. Due to the stock selection method, this trading strategy, even limited to the top 300 ranked stocks at any one period, will nonetheless trade over 2,400 different stocks.

In its current state, the strategy makes over 100,000 trades. I could easily double that by setting another program option (not shown) to True. The number of trades, as well as the number of stocks traded, is high enough to show, on average, statistical significance even after paying leveraging fees, which are not negligible but are still part of the expenses of doing more business.

I didn't change much in the algorithm, except that I added another quality filter, ev_to_ebitda (this reflects the fair market value of a company and allows comparability across companies, as it is capital-structure-neutral), which I remember seeing in another algorithm. This combines the Value and Quality factors together (somewhat).

I think a regime switcher (Hidden Markov Model) that cycles between different styles (i.e. Low Risk, Value, Volatility, Momentum, Quality, and Small Cap) would be interesting, as would combining them with each other (e.g. Quality-Momentum, which is pretty much this thread, or Value-Momentum: https://www.quantopian.com/posts/value-momentum-and-trend).

(Backtest attached: Clone Algorithm, 208 clones)
"""
Original by: Christopher Cain, CMT & Larry Connors
Posted here: https://www.quantopian.com/posts/new-strategy-presenting-the-quality-companies-in-an-uptrend-model-1
(Dan Whitnable version with fixed bond weights)
(Nathan Wells modified for performance and logging)
"""
# Quality companies in an uptrend 
import quantopian.algorithm as algo
 
# import things need to run pipeline  
from quantopian.pipeline import Pipeline
 
# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import ExponentialWeightedMovingAverage as EMA  
from quantopian.pipeline.factors import CustomFactor, Returns
 
# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms
 
# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd
 
def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]
 
    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 5
    context.TOP_ROE_QTY = 50 #First sort by ROE
 
    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 200  
    context.TF_CURRENT_LOOKBACK = 20
 
    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()
 
    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())  
 
def make_pipeline(context):  
    universe = Q1500US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],
                          window_length=context.TF_LOOKBACK)[context.SPY]
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)
    trend_up = spy_ma_fast > spy_ma_slow
    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    
    ent_to_eb = ms.ev_to_ebitda.latest.rank(mask=universe)
    value = (cash_return + fcf_yield+ ent_to_eb).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value 
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)  
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum
                   )  
    return pipe
 
def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'   
    stocks_to_hold = df.query(rule).index  
    
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
    #print("Stocks to buy " + str(stocks_to_hold))
    print("Stocks to buy: " + str([ str(s.symbol) for s in stocks_to_hold ]) )
    
    #print("Bonds weight " + str(bond_weight))
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])
 
    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 
 
    # Execute the order_optimal_portfolio method with above objective and any constraint  
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = []  
        )  
    #Log our holdings
    log.info( [ str(s.symbol) for s in sorted(context.portfolio.positions) ] )
    #print("Cash: " + str(context.portfolio.cash))
    # Record our weights for insight into stock/bond mix and impact of trend following  
    record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())  
def record_vars(context, data):  
    record(leverage = context.account.leverage)
    longs = shorts = 0
    for stock in context.portfolio.positions:
        if context.portfolio.positions[stock].amount > 0:
            longs += 1
        elif  context.portfolio.positions[stock].amount < 0:
            shorts += 1
    record(longs = longs)
    record(shorts = shorts)

Hi @Viridian,

"…illustrate how if you approach portfolio construction as the creation of a todays_weights dictionary, then it becomes very easy to take it one step further and control the holding period via rolling portfolios. Here I show how to do 20 rolling portfolios that are each held for 20 days, thereby maintaining the same turnover rate as a monthly rebalance while diversifying away from day-of-the-month overfit noise risk."

Thanks for sharing, very useful!

One question, when you moved the stock selection into Pipeline with progressive masks and used the Returns Factor, why the .log1p()?

Thanks in advance.
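The quoted rolling-portfolio idea can be sketched minimally as follows. This is my own structure and names, not the notebook's actual code: each trading day belongs to one of N tranches, and only that day's tranche is re-selected, so every position is held roughly N days while overall turnover matches a monthly rebalance.

```python
# Sketch of N rolling sub-portfolios (my own minimal structure, not the
# notebook's code). Day d re-selects tranche d % N; the final weights
# blend all tranches, each worth 1/N of capital.

N_TRANCHES = 20

def todays_weights(tranches, day, new_picks):
    """Replace today's tranche with new_picks, then blend all tranches.

    tranches: dict tranche_id -> list of tickers
    new_picks: tickers selected today (equally weighted within the tranche)
    """
    tranches[day % N_TRANCHES] = list(new_picks)
    weights = {}
    for names in tranches.values():
        if not names:
            continue  # tranche not yet deployed
        w = 1.0 / (N_TRANCHES * len(names))  # each tranche is 1/N of capital
        for s in names:
            weights[s] = weights.get(s, 0.0) + w
    return weights

tranches = {i: [] for i in range(N_TRANCHES)}
w = todays_weights(tranches, day=0, new_picks=["AAA", "BBB"])
print(w)                 # {'AAA': 0.025, 'BBB': 0.025}
print(sum(w.values()))   # 0.05 -> only one of 20 tranches deployed so far
```

After 20 trading days every tranche is populated and the portfolio is fully invested; a stock picked on several days simply accumulates weight across tranches.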

why the .log1p()?

Because you can't linearly subtract percentage changes. If something for example goes up 60% ($100+($100*0.6)=$160) and then down 40% ($160-($160*0.4)=$96), it is not the same as being up 20% ($100+($100*0.2)=$120) overall.

I think converting to log solves this problem.
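A quick numeric check of that point (illustrative numbers only):

```python
import math

# Why log returns compose additively: a 60% gain followed by a 40% loss
# is a net -4%. Naively subtracting percentages (0.60 - 0.40 = +0.20)
# gets this wrong; log1p handles it exactly.
up, down = 0.60, -0.40
net = (1 + up) * (1 + down) - 1           # true compounded return
print(round(net, 4))                      # -0.04  ($100 -> $160 -> $96)

log_sum = math.log1p(up) + math.log1p(down)
print(round(math.expm1(log_sum), 4))      # -0.04, recovered from log space

# The same identity is what the momentum factor relies on:
# log1p(returns_overall) - log1p(returns_recent) is the log return over
# the lookback window excluding the recent skip days.
r_overall = 1.30 * 0.95 - 1   # e.g. +30% early, then -5% in the skip window
r_recent = 0.95 - 1           # the skip window's -5% on its own
mom = math.log1p(r_overall) - math.log1p(r_recent)
print(round(math.expm1(mom), 4))          # 0.3, the skip-adjusted +30%
```

The plain subtraction `returns_overall - returns_recent` used in the posted code is an approximation that drifts for large moves, which is presumably why the `.log1p()` variant was kept in a comment.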

I have been trying to modify this algorithm to have a dynamic TARGET_SECURITIES based on the context.portfolio.portfolio_value. For example:

TARGET_SECURITIES = math.floor(context.portfolio.portfolio_value / 100000.00)

I have no idea if this is a good or bad thing to do but I can't seem to make it work because TARGET_SECURITIES is used in the Pipeline code, which appears to only be run once? Anyone know of a way to code this?
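Since the pipeline is constructed once in initialize(), a count baked into `momentum.top(N)` can't change later. One workaround (untested on Quantopian; the function names here are mine) is to keep a generous fixed top-M in the pipeline, e.g. momentum.top(50), and slice the final N from the pipeline output at rebalance time:

```python
import math

# Sketch of a dynamic position count (my own workaround, not verified on
# Quantopian): the pipeline returns a generous, fixed top-M ranked list;
# the scheduled function picks the final N based on portfolio value.

def dynamic_target(portfolio_value, per_slot=100000.0, lo=1, hi=50):
    """One position per $100k of equity, clamped to [lo, hi]."""
    return max(lo, min(hi, int(math.floor(portfolio_value / per_slot))))

def pick(momentum_ranked, portfolio_value):
    """momentum_ranked: tickers sorted best-first from the pipeline output."""
    return momentum_ranked[:dynamic_target(portfolio_value)]

print(dynamic_target(50_000))               # 1  (floor gives 0, clamped up)
print(dynamic_target(730_000))              # 7
print(pick(["A", "B", "C", "D"], 250_000))  # ['A', 'B']
```

The stock weight would then be 1/N with the dynamic N, rather than the constant context.TARGET_SECURITIES.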

Hi @Vladimir (and other contributors that have been playing with this great contribution from @Chris),

I've been playing with your optimized version of Dan Whitnable code. I've used that one as in terms of code structure is super clear.

I'm far from achieving anything closer to your results and I just would like to know if I'm missing something big or it's just a matter of parameter optimization.
I have a doubt about a couple of parameters you mentioned, it might be related to that (among other things I guess):

I'm using:
date_rules.month_end(days_offset = 7)
time_rules.market_open()
QTU = Q3000US()
MKT = symbol('SPY')
BONDS = symbols('TLT', 'IEF')
MOM = 126; Momentum lookback
N = 5; Number of stocks to finally trade
N_Q = 60; Number of stocks filtered by the quality factor
MA_F = 10; Fast moving average
MA_S = 100; Slow moving average
LEV = 1.0; Leverage passed as a constraint to the optimizer

Regarding my doubts, what do these parameters mean?
MIN = 1; Is this related to bonds? How?
EXCL = 2; Is this related to the momentum skip days? If I use 2 instead of 10, performance drops significantly.

I'm attaching the backtest with the code and the notebook.

Thanks in advance!

(Backtest attached: Clone Algorithm, 170 clones)
# Quality companies in an uptrend (Dan Whitnable version with bond weights fixed by Vladimir)  
import quantopian.algorithm as algo

# import things need to run pipeline  
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd


def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]

    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 5 
    context.TOP_ROE_QTY = 60 #First sort by ROE

    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 100  
    context.TF_CURRENT_LOOKBACK = 10

    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10  
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_open()  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_open()  
    )   

def make_pipeline(context):  
    universe = Q3000US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow

    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality) 
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe

def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint  
    constraints = []
    constraints.append(opt.MaxGrossExposure(1.0))
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = constraints
        )  
    # Record our weights for insight into stock/bond mix and impact of trend following  
    record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())

And here is the notebook with my results...

Click to load notebook preview

Thomas Wiecki recently posted about how linearly combining factors produces no additional alpha over the individual factors, which got me to thinking about the hypothesis behind this strategy. Here the authors have used progressive filtering/masking to implement a factor combination hypothesis, and it appears to work -- momentum (which is mostly useless on its own) does seem to significantly improve the quality factor.

I realize that there was no intention here of satisfying the Quantopian Allocation criteria, but I started to wonder how one would go about implementing a combination hypothesis such as "quality companies in an uptrend" in a fashion that ranks the entire QTU universe of stocks. As Thomas explains, quality.zscore + momentum.zscore obviously does not do it (that only gives you the average of the two factors, not the additive quality of them working in symbiosis).

Any ideas?

(My hunch is that in this particular case there is too much noise once you stray from the extreme factor values, so it won't work.)

Hi @Viridian,

Definitely don't have an answer to your question, but I can add that in this book, to combine 2 factors and test a strategy using 5 quintiles, what the author does is:
1. It ranks all companies in the backtest universe by the "Main Factor", from lowest to highest (in case we want the lowest values in the top quintile).
2. It selects the top 20% of this ranked list: the 20% of companies with the lowest "Main Factor" values. If we start with 2,000 companies, this step should select about 400 companies.
3. It then ranks the 400 companies that passed the "Main Factor" test by the "Secondary Factor", again from lowest to highest, supposing lowest is better.
4. It selects the top 20% of this ranked list: those with the lowest "Secondary Factor" values. If we started with 400 companies in step 2, we should end up with about 80 companies at the end of step 4.
5. Steps 1 to 4 are repeated until we have formed portfolios for the top quintile for each month (or whatever time frame we are interested in) to be tested.

This emphasizes the "Main Factor". Somehow, reminded me what is being done in this strategy, the progressive filtering you mentioned.
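Those five steps translate almost line-for-line into pandas. A minimal sketch, assuming lower factor values are better and using hypothetical column names `main` and `secondary`:

```python
import pandas as pd

def sequential_quintile(df, main, secondary, pct=0.20):
    """Two-stage sequential filter: keep the bottom `pct` of the
    universe by `main`, then the bottom `pct` of the survivors by
    `secondary` (lower is assumed better for both factors)."""
    stage1 = df.nsmallest(int(len(df) * pct), main)             # steps 1-2
    return stage1.nsmallest(int(len(stage1) * pct), secondary)  # steps 3-4

# Toy universe: 2,000 companies -> ~400 after stage 1 -> ~80 after stage 2
universe = pd.DataFrame({
    "main": range(2000),
    "secondary": [(i * 7) % 2000 for i in range(2000)],
})
print(len(sequential_quintile(universe, "main", "secondary")))  # 80
```

Because the second sort only ever sees stocks that survived the first, the "Main Factor" dominates the final portfolio, which is the emphasis you mention.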

Hey All,

This is a fantastic discussion and a good topic brought up by Viridian.

What we are discussing here, in essence, is what is sometimes called “sequential” vs “non-sequential” ranking/filtering methodologies when it comes to quant factors.

Non-sequential basically means you use all stocks in a universe and rank them (using some methodology) on all the factors you want to use. An example would be taking the Q3000 universe, ranking each stock by a quality factor and a momentum factor, then taking the top N based on the combined rankings.
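A minimal sketch of that non-sequential combined ranking (factor columns here are hypothetical, and higher values are assumed better):

```python
import pandas as pd

def combined_rank_top(df, factors, n):
    """Non-sequential: rank all stocks on each factor (higher = better),
    sum the per-factor ranks, and keep the top n by combined score."""
    combined = sum(df[f].rank() for f in factors)
    return df.loc[combined.nlargest(n).index]

# Tiny toy universe of four stocks with two factors
stocks = pd.DataFrame(
    {"quality": [3.0, 1.0, 2.0, 4.0], "momentum": [0.1, 0.4, 0.2, 0.3]},
    index=["A", "B", "C", "D"],
)
print(sorted(combined_rank_top(stocks, ["quality", "momentum"], 2).index))  # ['B', 'D']
```

Note that every stock is ranked on every factor here, so no factor dominates the way the first filter does in a sequential approach.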

Sequential is what we are doing here, and what the original algo did as well. Sequential means you rank by one or more factors first, filter the universe using that, then filter it again using the next factor.

In the original algo, we first filtered by quality then by momentum.

Some interesting things to note about this method. First of all, the first factor you filter by will have a larger impact on the portfolio. As such, I view the “Quality Companies in an Uptrend” strategy as mostly a quality strategy. It then uses momentum as a secondary factor, which helps performance. Time-series momentum (trend following) is also applied in our trend-following regime rule; this is mostly to manage risk.

In my research, I have had much better success with sequential factor strategies as opposed to non-sequential.

Sequential strategies are also useful when you are trying to create a portfolio of strategies to take advantage of the diversification this offers. Again, keep in mind that the first factor you filter by in a sequential strategy will have the most impact on the strategy.

For example, maybe you have one sequential strategy that uses value as its first factor and another sequential strategy that uses quality as its first factor. It stands to reason that these should have lower correlation to each other than two sequential strategies that both use quality as the first factor. My research has shown that to be true. Combining them into a portfolio can then lower risk and lead to better risk-adjusted returns (Sharpe).

Great discussion here, thanks again to all that have investigated this algo, made changes, cleaned up code and provided thought leadership.

Christopher Cain, CMT

@Marc,

Try running your code with these parameters:

QTU = Q3000US();  
MKT = symbol('SPY');  
BONDS = symbols('TLT', 'IEF');

MOM = 126;       # Momentum lookback  
EXCL = 2;        # Momentum skip days  
N_Q = 60;        # Number quality stocks  
N = 5;           # Number to buy  
MA_F = 10;       # Fast Moving Average  
MA_S = 100;      # Slow Moving Average  
LEV = 1.0;       # Target Leverage  
MIN = 1;         # Minute to start trading


Hi @Vladimir,

Thanks for the clarification. The difference is impressive, especially when using just long-term debt to equity as the quality factor.


Very interesting conversation. And just want to reiterate what @Zenothestoic said a while back:

"Incidentally it is good to see some [more] ideas coming through which
do not follow the stifling criteria for the Quantopian competitions.
It makes for a much more interesting forum. I was getting very [fed] up
with the 'neutral everything' approach."

Haha. It is a bummer that Quantopian has shut off all live trading and even paper trading. I've tried porting this to QuantConnect, but have had little success in mirroring the results. If anyone is interested in exploring that, shoot me a message.

Hi @Nathan,

Have you checked pylivetrader? It's a port of Zipline for Alpaca. I was messing around with it a while ago for paper trading. However, as far as I know, there were some core version upgrades to the Alpaca API and I'm not sure if the port is still supported.

If you are a US citizen you can use Alpaca for real (live) trading too.

@Marc Thanks, that is interesting. I had seen it but hadn't looked too in depth. I'll try and see if I can come up with something that works.

@Marc Looks like there is no access to the MorningStar Fundamentals?
https://github.com/alpacahq/user-docs/blob/master/content/alpaca-works-with/quantopian-to-pipeline-live.md

The Quantopian platform allows you to retrieve various proprietary
data sources through pipeline, including Morningstar fundamentals.
Previously, IEX was used by pipeline-live to supply equivalents to
these, but recent changes to the IEX API have made this less possible
for most use cases. The alternative at the moment is the Polygon
dataset, which is available to users with funded Alpaca brokerage
accounts and direct subscribers of Polygon's data feed. If you want to
get started with Polygon fundamentals, please see the repository's
readme file for more info on what Polygon information is currently
available through pipeline-live.

Did you find any way around that?

Hi @Nathan, I knew there were several changes since I used it but was not aware of this one.
Apparently with Alpaca API v2 you can access fundamental data, but I haven't checked it.

Durability And Scalability

Two of the most important traits of any stock trading strategy should be durability and scalability: the first so that the strategy does not blow up in your face at any point over the trading interval, and the second so that the portfolio can grow big.

A stock trading strategy should operate in a compounding return environment. The objective is to obtain the highest possible long-term CAGR within your own set of constraints.

The portfolio's payoff matrix equation is quite simple:

\(F(t) = F_0 + \sum (H \cdot \Delta P) = E[F_0 \cdot (1 + r_m)^t]\)

where \(r_m\) is the average expected market return and where the final outcome will be shaped by \(F_0\) and \(t\). One variable says what you started with, while the other says how long you managed it. The equation does not say what strategy you took to get there, only that you needed one. It could be about anything, as long as you participated in the game (\(H \neq 0\)).

This is a crazy concept: you can win, IF you play. You have no control over the price matrix P, but you do have total control over H, the trading strategy itself. You can buy with your available funds any tradable stock at any time at almost any price for whatever reason in almost any quantity you want short of buying the whole company.

You already know that as time increases (20+ years), \(r_m\) will tend to be positive with an asymptotic probability approaching 1.00. The perfect argument for: you play, you win, and in all probability, you get the expected average market return over the period just for your full participation in the game.

The Problem Comes When You Want To Have More!

You might need to reengineer your trading strategy. For example, the same strategy as I last illustrated in this thread was put to the test with the following initial state: $50 million as initial capital, same time interval (16.7 years), and 400 stocks.

The first thing I want to see when doing a backtest analysis is the output of the round_trips=True section, which gives the number of trades and the average net profit per trade. The reason is simple: the payoff matrix equation can also be expressed as:

\(F(t) = F_0 + \sum (H \cdot \Delta P) = F_0 + n \cdot \bar{x} = F_0 \cdot (1 + r_m)^t\)

where \(n\) is the number of trades and \(\bar{x}\) is the average net profit per trade. These two numbers are therefore the most important numbers of a trading strategy. Whatever you can do to increase them will have an impact on your overall performance as long as \(\bar{x}\) is positive. Because \(n\) cannot be negative, a positive \(\bar{x}\) is a sufficient condition for a positive rate of return.
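As a quick arithmetic check of that identity (the inputs below are made up for illustration, not taken from the backtest): given \(F_0\), \(r_m\), \(t\) and a trade count \(n\), the equation pins down the average net profit per trade required to hit that CAGR.

```python
# Payoff identity: F(t) = F0 + n * x_bar = F0 * (1 + r_m)^t
# Solve for the average net profit per trade implied by a target CAGR.
# All inputs are illustrative, not from the backtest discussed above.
F0 = 10_000_000      # initial capital
r_m = 0.10           # average annual rate of return
t = 16.7             # years traded
n = 50_000           # number of round-trip trades

final_value = F0 * (1 + r_m) ** t
x_bar = (final_value - F0) / n   # required average net profit per trade
print(f"F(t) = {final_value:,.0f}, x_bar = {x_bar:,.2f}")
```

Anything that raises \(n\) or \(\bar{x}\) (while keeping \(\bar{x}\) positive) raises the compounded outcome, which is the point being made here.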

The following chart is part of the round_trips=True option of the backtest analysis.

Backtest Section: round_trips=True

The numbers are impressive. First, they show that 400 stocks can indeed make a lot of trades over the years. The average net profit per trade increased with time, ending a lot higher than where it started. That is understandable: the average net profit per trade is on an exponential curve due to the inherent structure of the trading strategy, with the last few years having the most impact.

The number of stocks to trade might have been limited to 400 at any one time (or 0.25% of equity), but due to the nature of the selection process, some 2,698 different stocks were traded over those 16.7 years with an average holding period of 126 days (about 5.7 months' time).

Instead of using bonds, I made the portfolio go short in periods of market turmoil. The strategy still managed, on average, to generate profits on those shorts.

The gross leverage came in at 1.50. However, the net liquidation value, which is a rough estimate net of leveraging costs, was more than enough of a reward to warrant going for it.

Why can such results be achievable? The reason is simple: compounding.

Every dollar of profit made is being compounded repeatedly, again and again. With the skills you brought to the game, you changed the above equation into:

\(F(t) = F_0 + \sum (H_a \cdot \Delta P) = F_0 + n \cdot \bar{x} = F_0 \cdot (1 + r_m + \alpha_a)^t\)

And it is the alpha you added to the game that is making such a difference, especially since it is also compounding over the entire time interval.

If you do not push your trading strategy, how will you ever know its limits? Those limits are the ones you do not want to exceed, and most of them might be due to your preferences and your aversion to losses. If you want to do more than the other guy, then you will have to do more (a recurring mantra in my books).

Even if the numbers are big, it is still not the limit. For instance, increasing gross leverage by 5% in the above strategy would result in a gross leverage of 1.57.

Evidently, there would be progressively higher leveraging fees to pay. However, it would result in higher overall profits (adding about 6B more to net profits compared to the 0.5B more in leveraging fees). Max drawdown increased from -29.05% to -30.21%, while volatility went from 0.27 to 0.28. On the Sharpe ratio front, it stayed the same at 1.46 while the beta went from 0.44 to 0.46. The point being that if you could tolerate a drawdown of -29.05% why would you not stand for a -30.21% drawdown when you have no means to determine with any accuracy how far down the next drawdown will be?
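The leverage figures in the preceding paragraphs reduce to simple arithmetic (the numbers below are the ones quoted in the post):

```python
# A 5% bump on gross leverage of 1.50, as described above.
base_leverage = 1.50
bumped_leverage = base_leverage * 1.05   # ~1.575, quoted as 1.57

# Added profits vs. added financing costs quoted in the post, in $B.
added_profit = 6.0
added_fees = 0.5
profit_to_fee_ratio = added_profit / added_fees
print(bumped_leverage, profit_to_fee_ratio)
```

So the extra profits cover the extra leveraging fees roughly 12 times over, which is the trade-off being argued for.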

The above numbers were marginal incremental moves, except for the added overall profits. All of it was extracted by the same trading strategy, which was simply “requested” to do more. It incurred higher expenses, for sure, but it also delivered much higher profits (12 times more than the added costs).

Now, the strategy does face some problems in need of solutions. One is when going short: it shorts the stocks that the strategy considers the best prospects for profit. That should be changed to a better short-selection process. Another problem is the bet size. It is on an exponential curve, and at some point the strategy will trade a huge number of shares even if each stock only gets 0.25% of equity.

Therefore, as the strategy progresses in time, there will be a need for a procedure for scaling in and out of positions. I would prefer one of the sigmoid type. This has not yet been designed into the strategy, but I do see it will be needed, not so much in the beginning, but it will get there. So I should plan for that too and have the strategy take care of those two potential problems.

However, there is more than ample time (years) to solve the second one.

Understandably, no one should be surprised if I am not providing the code.

@Guy,

Just a friendly reminder of the below post from Jamie McCorriston. I believe the purpose of this post was for collaboration - not for someone to write a long monologue on how they improved the strategy without actually sharing the code.

@Guy Fleury: Multiple participants in this thread have expressed
frustration with the sharing of screenshots instead of attaching a
backtest. Please refrain from sharing screenshots built on top of the
shared work in this thread. You are entitled to keep your work
private, so if you don't want to share, that's fine. But please don't
share screenshots in this thread as it seems the intent of the thread
is to collaborate on improving the algorithm.

@Joakim (and @Guy),

I'm no mathematician, so the stuff @Guy is posting doesn't make much sense to me. But I believe the essence of what he is saying is that this strategy can be scaled to use more than 5 stocks and give amazing returns if someone is willing to use leverage. And while he is talking above me (and maybe most on here), that was interesting information. Liquidity in 5 stocks is going to be an issue in real life, so by adding leverage of 1.5 with 50 stocks, you can still get good returns if, for example, you start with $5k instead of $10k (and keep the other $5k in the bank to cover yourself for a margin call). I'm not really familiar with using leverage, so I might not be understanding all this correctly. That way, with 50 stocks at 1.5 leverage, I can backtest a return of 30,000%, turning $5k into $1.5mil (2003-2019), versus about a 5,000% return on $10k with 50 stocks, resulting in a portfolio of $500k.
That is interesting.

Also, the point about dealing with bear situations in a more intelligent way is true - that would increase the return for this algorithm. So even though he didn't provide code, he did provide an idea that we could add to the strategy and benefit from.

I personally don't like leverage, but maybe others do. So I wouldn't say @Guy hasn't contributed. True, no code (except slightly modified from others - but really that's all I posted as well), but he does have ideas.

Some think that this strategy is operating on ranked-fundamentals. Well, not so much. It is mostly playing market noise.

The stock selection process is just there to pick 400 stocks. Changing the ranking method will give quite different results, as was illustrated in my last article. That is understandable, especially in the scheduled periodic rebalancing procedures, where the 7th decimal of a weight can be the deciding factor. When one of the weights changes, it prompts all other weights to “readjust”, as if in a domino effect. The rebalance occurs not because the fundamentals changed (their values change only about every 3 months), but because the 7th decimal of one of the weights changed. And that is playing on market noise a lot more than playing on ranked fundamentals.

When designing a trading strategy we should look at where it is ultimately going, especially in a compounding environment where time might turn out to be the most important ingredient of all. It is your trading strategy H that is running the show, so do make the most of it.

Such a trading strategy is designed to accommodate large, institutional-sized players and the very rich, to make them even richer. Nonetheless, looking at the equations presented and my latest articles, the strategy can be scaled down just as easily as it was scaled up, as illustrated in the last equation shown in my article.

If you want to find out more (equations, explanations, and charts), follow the link below:

https://alphapowertrading.com/index.php/2-uncategorised/355-durability-and-scalability

Okay I am pulling my hair out, can someone tell me the difference between the following:

def make_pipeline():  
    top_quality = quality.top(N_Q, mask=universe)  
    top_quality_momentum = momentum.top(N, mask=top_quality)  
    pipe = Pipeline(  
        columns={  
            'trend_up': trend_up,  
            'screen': top_quality_momentum  
        },  
        screen=top_quality_momentum  
    )  
    return pipe

def rebalance(context, data):  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    rule = 'screen & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  

VS

def make_pipeline():  
    top_quality = quality.top(N_Q, mask=universe)  
    pipe = Pipeline(  
        columns={  
            'trend_up': trend_up,  
            'score': momentum  
        },  
        screen=top_quality  
    )  
    return pipe

def rebalance(context, data):  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    rule = 'trend_up or (not trend_up & index in @current_holdings)'  
    stocks_to_hold = df.sort_values('score', ascending=False).head(N).query(rule).index  

I have been trying to move the number of stocks to choose, N, into rebalance so that it can be calculated (scaled) on the fly, since I can't do this in make_pipeline. I would think the two are equivalent, and when I test multiple dates in a Notebook they are (i.e., same 5 stocks, just in a different order). However, during a backtest this makes a huge difference, and I can't figure out why.

@Jacob Champlin -- I can't spot what you're doing wrong. It looks correct. If you scroll up in this thread you can see a working example where I did precisely what you're trying to do (move all the filtering logic into pipeline), and it worked correctly.

@Viridian and Everyone

I am almost ashamed to admit this. When I copied the code into a new algorithm, I forgot to change the Initial Capital to 10k. I can't tell you how long I tried to debug the difference. Sorry if I wasted anyone's time.

Hi @Marc and @Vladimir,

Thanks so much for your contribution to this algo and for @Chris Cain for originally posting!

I have been backtesting the code from @Marc's post with the parameters @Vladimir posted, and I can't get it to perform the way you guys could. Is it something in the parameters, or am I missing something else?

I have only modified the quality factor to use just "long term debt to equity":

#quality = roic + ltd_to_eq + value  
quality = ltd_to_eq  

Any help would be much appreciated

# Quality companies in an uptrend (Dan Whitnable version with bond weights fixed by Vladimir)  
import quantopian.algorithm as algo
 
# import things needed to run pipeline  
from quantopian.pipeline import Pipeline
 
# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns
 
# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms
 
# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd
 
 
def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('TLT'), symbol('IEF')]
 
    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 5 
    context.TOP_ROE_QTY = 60 #First sort by ROE
 
    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 100  
    context.TF_CURRENT_LOOKBACK = 10
 
    # This is for determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 2 
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()
 
    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_open(minutes = 1)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_open(minutes = 1)  
    )   
 
def make_pipeline(context):  
    universe = Q3000US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow
 
    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    #quality = roic + ltd_to_eq + value  
    quality = ltd_to_eq 
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality) 
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe
 
def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])
 
    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 
 
    # Execute the order_optimal_portfolio method with above objective and any constraint  
    constraints = []
    constraints.append(opt.MaxGrossExposure(1.0))
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = constraints
        )  
    # Record our weights for insight into stock/bond mix and impact of trend following  
    record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())
There was a runtime error.

@Donald, have a look

# Quality companies in an uptrend (Dan Whitnable version with bond weights fixed by Vladimir)  
import quantopian.algorithm as algo

# import things needed to run pipeline  
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd


def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    set_commission(commission.PerTrade(cost=0.00))
    set_slippage(slippage.FixedSlippage(spread=0.00))
    context.BONDS = [symbol('IEF'), symbol('TLT')]

    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 5 
    context.TOP_ROE_QTY = 60 #First sort by ROE

    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 100  
    context.TF_CURRENT_LOOKBACK = 10

    # This is for determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 2 
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()
    
    MIN = 1

    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.week_start(),  
        time_rules.market_open(minutes = MIN)  
    )  
    schedule_function(  
        trade,  
        date_rules.week_start(),  
        time_rules.market_open(minutes = MIN)  
    )   

def make_pipeline(context):  
    universe = Q3000US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow

    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    #quality = roic + ltd_to_eq + value 
    quality = ltd_to_eq
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality) 
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe

def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdi