Fundamental Mean Reversion Algorithm

Hi All,

Can anyone give me some ideas about how I could achieve positive returns and possibly beat the market using a mean reversion strategy similar to this? I threw this together rather quickly, but essentially I want to use fundamental data (such as the P/E ratio) as a metric for deciding when to go long and when to short. So, for example, take a set number of assets with the highest P/E ratios and short them, while simultaneously going long on assets with low P/E ratios. Again, this isn't the best strategy out there, but I am a bit perplexed as to which fundamental ratios I should use, as well as how to implement them to achieve my desired result. I would appreciate it if anyone would be willing to collaborate on this with me.

Thanks,

Rohit

"""
This is a template algorithm on Quantopian for you to adapt and fill in.
"""
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume
from quantopian.pipeline.data import morningstar

def initialize(context):
    
    # Define leverage variables
    context.long_leverage = 0.5
    context.short_leverage = -0.5
    
    # Rebalance on the first trading day of each week at market open
    schedule_function(rebalance,
                      date_rules.week_start(days_offset=0),
                      time_rules.market_open())
    
    # Record tracking variables at the end of each day
    schedule_function(record_vars,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=1))
    
    attach_pipeline(make_pipeline(), 'fundamentals_pipeline')
    
def make_pipeline():
    
    # Create a dollar_volume factor using default inputs and window_length
    dollar_volume = AverageDollarVolume(window_length=1)
    
    # Define high dollar-volume filter to be the top 5% of stocks by dollar volume.
    high_dollar_volume = dollar_volume.percentile_between(95, 100)
    
    # Latest p/e ratio
    pe_ratio = morningstar.valuation_ratios.pe_ratio.latest
    
    # Top 50 and bottom 50 stocks ranked by p/e ratio
    top_pe_stocks = pe_ratio.top(50, mask=high_dollar_volume)
    bottom_pe_stocks = pe_ratio.bottom(50, mask=high_dollar_volume)
    
    pipe_columns = {
            'pe_ratio':pe_ratio,
            'dollar_volume':dollar_volume,
            'bottom_pe_stocks':bottom_pe_stocks,
            'top_pe_stocks':top_pe_stocks
            }
    
    # List of tradable securities
    pipe_screen = (top_pe_stocks | bottom_pe_stocks)
    
    pipe = Pipeline(columns=pipe_columns,screen=pipe_screen)

    return pipe

def before_trading_start(context, data):
    
    # Pipeline_output returns a pandas DataFrame with the results of our factors
    # and filters
    context.output = pipeline_output('fundamentals_pipeline')
    
    # Sets the list of securities we want to long as the securities with a 'True'
    # value in the bottom_pe_stocks column
    context.longs = context.output[context.output['bottom_pe_stocks']]
    
    # Sets the list of securities we want to short as the securities with a 'True'
    # value in the top_pe_stocks column
    context.shorts = context.output[context.output['top_pe_stocks']]
    
    # A list of the securities that we want to order today
    context.my_stocks = context.longs.index.union(context.shorts.index).tolist()
    
    # A set of the same securities, sets have faster lookup
    context.stock_set = set(context.my_stocks)
    
def compute_weights(context):
    """
    Compute weights to our long and short target positions.
    """

    # Set the allocations to even weights for each long position, and even weights
    # for each short position. Guard against empty lists to avoid dividing by zero.
    long_weight = context.long_leverage / len(context.longs) if len(context.longs) else 0
    short_weight = context.short_leverage / len(context.shorts) if len(context.shorts) else 0
    
    return long_weight, short_weight

def rebalance(context,data):
    """
    This rebalancing function is called according to our schedule_function settings.
    """

    long_weight, short_weight = compute_weights(context)

    # For each security in our universe, order long or short positions according
    # to our context.longs and context.shorts lists.
    for stock in context.my_stocks:
        
        stock_price = data.current(stock, "price")
        
        if data.can_trade(stock):
            if stock in context.longs.index:
                order_target_percent(stock, long_weight, style=StopOrder(stock_price - (stock_price * 0.10)))
            elif stock in context.shorts.index:
                order_target_percent(stock, short_weight, style=StopOrder(stock_price + (stock_price * 0.10)))

    # Sell all previously held positions not in our new context.stock_set.
    for stock in context.portfolio.positions:
        if stock not in context.stock_set and data.can_trade(stock):
            order_target_percent(stock, 0)

def record_vars(context, data):
    """
    This function is called at the end of each day and plots certain variables.
    """

    # Check how many long and short positions we have.
    longs = shorts = 0
    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            longs += 1
        if position.amount < 0:
            shorts += 1

    # Record and plot the leverage of our portfolio over time as well as the
    # number of long and short positions. Even in minute mode, only the end-of-day
    # leverage is plotted.
    record(leverage = context.account.leverage, long_count=longs, short_count=shorts)

16 responses

Here's what I have been able to come up with regarding a long-short strategy based on fundamentals.

The strategy is not fully beta neutral. Making it so crashes the returns, unfortunately.

I would gladly collaborate on developing something that would be potentially Q-fund worthy.

It would be great if you could recast the algo into one using Pipeline. I have never mastered the tool.


import numpy as np
import pandas as pd
import datetime

        
def initialize(context):
    
    set_asset_restrictions(security_lists.restrict_leveraged_etfs)
    
    schedule_function(rebalance,
        date_rule=date_rules.month_start(),
        time_rule=time_rules.market_open(minutes = 20))

    schedule_function(buy,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_open(minutes = 30))

    schedule_function(display,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_close())
    
    context.start = True
    context.last_month = -1

    
def before_trading_start(context, data):
    
    month = get_datetime().month
    if context.last_month == month:
        return
    context.last_month = month

    df = get_fundamentals(
        query(fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.sales_yield,
            fundamentals.operation_ratios.roic,
            fundamentals.valuation_ratios.pcf_ratio)
        .filter(fundamentals.company_reference.primary_exchange_id.in_(["NYSE", "NYS"]))
        .filter(fundamentals.operation_ratios.total_debt_equity_ratio != None)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None) # require sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103) # exclude financial sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207) # exclude utilities sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(((fundamentals.valuation.market_cap*1.0) / (fundamentals.valuation.shares_outstanding*1.0)) > 1.0)  # stock price > $1
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap >= 5.0E+09)
        .filter(~fundamentals.company_reference.standard_name.contains(' LP')) # exclude LPs
        .filter(~fundamentals.company_reference.standard_name.contains(' L P'))
        .filter(~fundamentals.company_reference.standard_name.contains(' L.P'))
        .filter(fundamentals.balance_sheet.limited_partnership == None) # exclude LPs
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(500)).T
    
    context.scorel = pd.Series(np.zeros(len(df.index)), index = df.index)
    
    # EV/EBITDA, in-order (lower is better), nan goes last
    context.scorel += df['ev_to_ebitda'].rank(ascending=True, na_option='bottom')
    
    # sales yield, inverse (higher is better), nan goes last
    context.scorel += df['sales_yield'].rank(ascending=False, na_option='top')
    
    # return on invested capital, inverse (higher is better), nan goes last
    context.scorel += df['roic'].rank(ascending=False, na_option='top')

    # price-to-cash-flow ratio, in-order (lower is better), nan goes last
    # context.scorel += df['pcf_ratio'].rank(ascending=True, na_option='bottom')
    
    df = get_fundamentals(
        query(fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.sales_yield,
            fundamentals.operation_ratios.roic,
            fundamentals.valuation_ratios.pcf_ratio)
    .filter(fundamentals.company_reference.primary_exchange_id.in_(["NYSE", "NYS"]))
        .filter(fundamentals.operation_ratios.total_debt_equity_ratio != None)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None) # require sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103) # exclude financial sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207) # exclude utilities sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(((fundamentals.valuation.market_cap*1.0) / (fundamentals.valuation.shares_outstanding*1.0)) > 1.0)  # stock price > $1
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap >= 5.0E+09)
        .filter(~fundamentals.company_reference.standard_name.contains(' LP')) # exclude LPs
        .filter(~fundamentals.company_reference.standard_name.contains(' L P'))
        .filter(~fundamentals.company_reference.standard_name.contains(' L.P'))
        .filter(fundamentals.balance_sheet.limited_partnership == None) # exclude LPs
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(500)).T
    
    context.scores = pd.Series(np.zeros(len(df.index)), index = df.index)
    
    # EV/EBITDA, in-order (lower is better), nan goes last
    # context.scores += df['ev_to_ebitda'].rank(ascending=True, na_option='bottom')
    
    # sales yield, inverse (higher is better), nan goes last
    # context.scores += df['sales_yield'].rank(ascending=False, na_option='top')
    
    # return on invested capital, inverse (higher is better), nan goes last
    # context.scores += df['roic'].rank(ascending=False, na_option='top')
    
    # price-to-cash-flow ratio, in-order (lower is better), nan goes last
    context.scores += df['pcf_ratio'].rank(ascending=True, na_option='bottom')
    
    
def rebalance(context, data):

    P = data.history(context.scorel.index, 'price', 100, '1d')
    V = data.history(context.scorel.index, 'volume', 100, '1d')
    
    w = P * V
    w = w.mean()
    w = w[w > 10E+06]
    
    context.scorel = context.scorel[w.index]   
    context.longs = context.scorel.dropna().order().head(20).index 
    
    P = data.history(context.scores.index, 'price', 100, '1d')
    V = data.history(context.scores.index, 'volume', 100, '1d')
    
    w = P * V
    w = w.mean()
    w = w[w > 10E+06]
    
    context.scores = context.scores[w.index] 
    context.shorts = context.scores.dropna().order().tail(20).index
    
    
def buy(context,data):
    
    for s in context.portfolio.positions:
        if (s in context.longs) or (s in context.shorts):
            continue
        if not data.can_trade(s):
            continue
        order_target(s, 0)    

    if get_open_orders():
        return  
    
    for s in context.longs:
        if not data.can_trade(s):
            continue
        order_target_percent(s, 0.6 / len(context.longs))

    for s in context.shorts:
        if not data.can_trade(s):
            continue
        order_target_percent(s, -0.4 / len(context.shorts))

            
def display(context,data):
    
    record(leverage = context.account.leverage,
           exposure = context.account.net_leverage)
    
                                 
def handle_data(context, data):
    
    if context.start:
        rebalance(context, data)
        buy(context, data)
        display(context, data)
        context.start = False
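
A rough, untested sketch of how the ranking above might be recast with Pipeline, as requested earlier in the thread. It assumes the same Morningstar fields are available under quantopian.pipeline.data.morningstar, stands in an AverageDollarVolume filter for the $10M dollar-volume cut done in rebalance, and leaves out the exchange, share-class and market-cap filters from the get_fundamentals query, so it is a starting point rather than a drop-in replacement.

from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import AverageDollarVolume

def make_pipeline():
    # Rough liquidity screen, standing in for the $10M average dollar-volume cut
    dollar_volume = AverageDollarVolume(window_length=100)
    liquid = dollar_volume > 10e6

    # Latest values of the fundamentals used in the queries above
    ev_to_ebitda = morningstar.valuation_ratios.ev_to_ebitda.latest
    sales_yield = morningstar.valuation_ratios.sales_yield.latest
    roic = morningstar.operation_ratios.roic.latest
    pcf_ratio = morningstar.valuation_ratios.pcf_ratio.latest

    # Long score: lower EV/EBITDA, higher sales yield, higher ROIC rank better
    long_score = (ev_to_ebitda.rank(ascending=True, mask=liquid) +
                  sales_yield.rank(ascending=False, mask=liquid) +
                  roic.rank(ascending=False, mask=liquid))

    longs = long_score.bottom(20, mask=liquid)   # best (lowest) combined rank
    shorts = pcf_ratio.top(20, mask=liquid)      # most expensive on price-to-cash-flow

    return Pipeline(
        columns={'longs': longs, 'shorts': shorts},
        screen=(longs | shorts),
    )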


In case you are actually interested in trading something like this, I can contribute a long-only version with an added simple market-condition filter, which switches to bonds when times are bad.


import numpy as np
import pandas as pd
import datetime

        
def initialize(context):

    set_asset_restrictions(security_lists.restrict_leveraged_etfs)
    
    schedule_function(rebalance,
        date_rule=date_rules.month_start(),
        time_rule=time_rules.market_open(minutes = 20))

    schedule_function(buy,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_open(minutes = 30))

    schedule_function(display,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_close())

    context.m = sid(24744)
    context.bonds = [sid(23921)]
    
    context.start = True
    context.last_month = -1

    
def before_trading_start(context, data):
    
    month = get_datetime().month
    if context.last_month == month:
        return
    context.last_month = month

    df = get_fundamentals(
        query(fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.sales_yield,
            fundamentals.operation_ratios.roic)
        .filter(fundamentals.company_reference.primary_exchange_id.in_(["NYSE", "NYS"]))
        .filter(fundamentals.operation_ratios.total_debt_equity_ratio != None)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None) # require sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103) # exclude financial sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207) # exclude utilities sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(((fundamentals.valuation.market_cap*1.0) / (fundamentals.valuation.shares_outstanding*1.0)) > 1.0)  # stock price > $1
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap >= 5.0E+09)
        .filter(~fundamentals.company_reference.standard_name.contains(' LP')) # exclude LPs
        .filter(~fundamentals.company_reference.standard_name.contains(' L P'))
        .filter(~fundamentals.company_reference.standard_name.contains(' L.P'))
        .filter(fundamentals.balance_sheet.limited_partnership == None) # exclude LPs
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(500)).T
    
    context.score = pd.Series(np.zeros(len(df.index)), index = df.index)
    
    # EV/EBITDA, in-order (lower is better), nan goes last
    context.score += df['ev_to_ebitda'].\
    rank(ascending = True, na_option = 'bottom')
    
    # sales yield, inverse (higher is better), nan goes last
    context.score += df['sales_yield'].\
    rank(ascending = False, na_option = 'top')
    
    # return on invested capital, inverse (higher is better), nan goes last
    context.score += df['roic'].\
    rank(ascending = False, na_option = 'top')
    
    
def rebalance(context, data):

    P = data.history(context.score.index, 'price', 100, '1d')
    V = data.history(context.score.index, 'volume', 100, '1d')
    
    w = P * V
    w = w.mean()
    w = w[w > 10E+06]
    
    context.score = context.score[w.index]   
    context.longs = context.score.dropna().order().head(20).index   
    
    P = data.history(context.m, 'price', 200, '1d')
    if P.tail(2).median() < P.tail(200).median() or\
       P.tail(10).median() < P.tail(100).median():
        context.longs = context.bonds 

        
def buy(context,data):
    
    for s in context.portfolio.positions:
        if s in context.longs:
            continue
        if not data.can_trade(s):
            continue
        order_target(s, 0)    
    
    if get_open_orders():
        return    
    
    for s in context.longs:
        if not data.can_trade(s):
            continue
        order_value(s, context.portfolio.cash / len(context.longs))

        
def display(context,data):
    
    record(leverage = context.account.leverage,
           exposure = context.account.net_leverage)
    
                                 
def handle_data(context, data):
    
    if context.start:
        rebalance(context, data)
        buy(context, data)
        display(context, data)
        context.start = False


Looks very nice Tim. For preference I would use shorter-dated bonds. An interesting idea would be to trade the Guggenheim equal-weighted S&P 500 (RSP) on a fundamental basis rather than individual stocks. Something I keep meaning to look at more closely, especially as regards ML algos: use fundamental data from the individual stocks to predict movement in the index itself. Personally I couldn't be bothered to trade a multiplicity of individual stocks, but then I trade manually so perhaps that is not so surprising!

Thanks Anthony, your idea of using the fundamentals to switch between an index and a bond is a very interesting one. Here's a first (rather miserable) implementation. It sums up the fundamental scores of the individual stocks to form a score for the RSP. It then compares this value to the one from the previous month.


import numpy as np
import pandas as pd

        
def initialize(context):

    set_asset_restrictions(security_lists.restrict_leveraged_etfs)
    
    schedule_function(rebalance,
        date_rule=date_rules.month_start(),
        time_rule=time_rules.market_open(minutes = 20))

    schedule_function(buy,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_open(minutes = 30))

    schedule_function(display,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_close())

    context.market = sid(24744)
    context.bonds = sid(23921)
    
    context.start = True
    context.last_month = -1

    context.longs = None
    context.oldscore = 0
    
    
def before_trading_start(context, data):
    
    month = get_datetime().month
    if context.last_month == month:
        return
    context.last_month = month

    df = get_fundamentals(
        query(fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.sales_yield,
            fundamentals.operation_ratios.roic)
        .filter(fundamentals.company_reference.primary_exchange_id.in_(["NYSE", "NYS"]))
        .filter(fundamentals.operation_ratios.total_debt_equity_ratio != None)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None) # require sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103) # exclude financial sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207) # exclude utilities sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(((fundamentals.valuation.market_cap*1.0) / (fundamentals.valuation.shares_outstanding*1.0)) > 1.0)  # stock price > $1
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap >= 5.0E+09)
        .filter(~fundamentals.company_reference.standard_name.contains(' LP')) # exclude LPs
        .filter(~fundamentals.company_reference.standard_name.contains(' L P'))
        .filter(~fundamentals.company_reference.standard_name.contains(' L.P'))
        .filter(fundamentals.balance_sheet.limited_partnership == None) # exclude LPs
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(500)).T
    
    context.score = pd.Series(np.zeros(len(df.index)), index = df.index)
    
    # EV/EBITDA, in-order (lower is better), nan goes last
    context.score += df['ev_to_ebitda'].\
    rank(ascending = True, na_option = 'bottom')
    
    # sales yield, inverse (higher is better), nan goes last
    context.score += df['sales_yield'].\
    rank(ascending = False, na_option = 'top')
    
    # return on invested capital, inverse (higher is better), nan goes last
    context.score += df['roic'].\
    rank(ascending = False, na_option = 'top')
    
    
def rebalance(context, data):
       
    score = context.score.sum()
   
    context.longs = [context.market]
    
    if score < context.oldscore:
        context.longs = [context.bonds] 

    context.oldscore = score
    
    
def buy(context,data):
    
    for s in context.portfolio.positions:
        if s in context.longs:
            continue
        if not data.can_trade(s):
            continue
        order_target(s, 0)    
    
    if get_open_orders():
        return    
    
    for s in context.longs:
        if not data.can_trade(s):
            continue
        order_value(s, context.portfolio.cash / len(context.longs))

        
def display(context,data):
    
    record(leverage = context.account.leverage,
           exposure = context.account.net_leverage)
    
                                 
def handle_data(context, data):
    
    if context.start:
        rebalance(context, data)
        buy(context, data)
        display(context, data)
        context.start = False
    



Here's a version of the long-only algo that uses 7-10y bonds (IEF). Is that what you had in mind, Anthony, or something with an even shorter maturity?


import numpy as np
import pandas as pd

        
def initialize(context):

    set_asset_restrictions(security_lists.restrict_leveraged_etfs)
    
    schedule_function(rebalance,
        date_rule=date_rules.month_start(),
        time_rule=time_rules.market_open(minutes = 20))

    schedule_function(buy,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_open(minutes = 30))

    schedule_function(display,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_close())

    context.m = sid(24744)
    context.bonds = [sid(23870)]
    
    context.start = True
    context.last_month = -1

    context.longs = []
    
    
def before_trading_start(context, data):
    
    month = get_datetime().month
    if context.last_month == month:
        return
    context.last_month = month

    df = get_fundamentals(
        query(fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.sales_yield,
            fundamentals.operation_ratios.roic)
        .filter(fundamentals.company_reference.primary_exchange_id.in_(["NYSE", "NYS"]))
        .filter(fundamentals.operation_ratios.total_debt_equity_ratio != None)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None) # require sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103) # exclude financial sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207) # exclude utilities sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(((fundamentals.valuation.market_cap*1.0) / (fundamentals.valuation.shares_outstanding*1.0)) > 1.0)  # stock price > $1
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap >= 5.0E+09)
        .filter(~fundamentals.company_reference.standard_name.contains(' LP')) # exclude LPs
        .filter(~fundamentals.company_reference.standard_name.contains(' L P'))
        .filter(~fundamentals.company_reference.standard_name.contains(' L.P'))
        .filter(fundamentals.balance_sheet.limited_partnership == None) # exclude LPs
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(500)).T
    
    context.score = pd.Series(np.zeros(len(df.index)), index = df.index)
    
    # EV/EBITDA, in-order (lower is better), nan goes last
    context.score += df['ev_to_ebitda'].\
    rank(ascending = True, na_option = 'bottom')
    
    # sales yield, inverse (higher is better), nan goes last
    context.score += df['sales_yield'].\
    rank(ascending = False, na_option = 'top')
    
    # return on invested capital, inverse (higher is better), nan goes last
    context.score += df['roic'].\
    rank(ascending = False, na_option = 'top')
    
    
def rebalance(context, data):

    P = data.history(context.score.index, 'price', 100, '1d')
    V = data.history(context.score.index, 'volume', 100, '1d')
    
    w = P * V
    w = w.mean()
    w = w[w > 10E+06]
    
    context.score = context.score[w.index]   
    context.longs = context.score.dropna().order().head(20).index   
    
    P = data.history(context.m, 'price', 200, '1d')
    if P.tail(2).median() < P.tail(200).median() or\
       P.tail(10).median() < P.tail(100).median():
        context.longs = context.bonds 

        
def buy(context,data):
    
    for s in context.portfolio.positions:
        if s in context.longs:
            continue
        if not data.can_trade(s):
            continue
        order_target(s, 0)    
    
    if get_open_orders():
        return    
    
    for s in context.longs:
        if not data.can_trade(s):
            continue
        order_value(s, context.portfolio.cash / len(context.longs))

        
def display(context,data):
    
    record(leverage = context.account.leverage,
           exposure = context.account.net_leverage)
    
                                 
def handle_data(context, data):
    
    if context.start:
        rebalance(context, data)
        buy(context, data)
        display(context, data)
        context.start = False


Tim
IEF is fine but for my money I would probably use 1 to 3 year bonds. Over a few hundred years there is comparatively little difference in CAGR across the yield curve, so I wouldn't bother with the volatility of long-dated securities.

But it's a nice algo and obviously the fundamental analysis implicit in it makes sense.

I am torn at the moment. I am half tempted to produce algos for Q, Numerai and the rest of the gang and half tempted to say WTF.

As well as my own trading and research of course.

Who cares if I disagree with their analysis of markets - give the punters what they have asked for!

For me Quantopian is all I know, I am afraid. I have only been doing this for a couple of years and never with real money, I must admit. But as a physicist, I find this sort of modelling fascinating. Because physics is essentially model building, and here we have a small, well-defined (financial) universe with its own (undiscovered) rules (laws) and lots of data ....

The shorter the maturity of the bonds, the lower the volatility of the algo's results, it seems, just as you predicted, Anthony. However, at least in the short term the return seems to diminish as well. The use of SHY (1-3y) results in behaviour close to staying in cash during market downturns, it appears.


import numpy as np
import pandas as pd

        
def initialize(context):

    set_asset_restrictions(security_lists.restrict_leveraged_etfs)
    
    schedule_function(rebalance,
        date_rule=date_rules.month_start(),
        time_rule=time_rules.market_open(minutes = 20))

    schedule_function(buy,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_open(minutes = 30))

    schedule_function(display,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_close())

    context.m = sid(24744)
    context.bonds = [sid(23911)]
    
    context.start = True
    context.last_month = -1

    context.longs = []
    
    
def before_trading_start(context, data):
    
    month = get_datetime().month
    if context.last_month == month:
        return
    context.last_month = month

    df = get_fundamentals(
        query(fundamentals.valuation_ratios.ev_to_ebitda,
            fundamentals.valuation_ratios.sales_yield,
            fundamentals.operation_ratios.roic)
        .filter(fundamentals.company_reference.primary_exchange_id.in_(["NYSE", "NYS"]))
        .filter(fundamentals.operation_ratios.total_debt_equity_ratio != None)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.valuation.shares_outstanding != None)  
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCBB") # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None) # require sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103) # exclude financial sector
        .filter(fundamentals.asset_classification.morningstar_sector_code != 207) # exclude utilities sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(((fundamentals.valuation.market_cap*1.0) / (fundamentals.valuation.shares_outstanding*1.0)) > 1.0)  # stock price > $1
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation.market_cap >= 5.0E+09)
        .filter(~fundamentals.company_reference.standard_name.contains(' LP')) # exclude LPs
        .filter(~fundamentals.company_reference.standard_name.contains(' L P'))
        .filter(~fundamentals.company_reference.standard_name.contains(' L.P'))
        .filter(fundamentals.balance_sheet.limited_partnership == None) # exclude LPs
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(500)).T
    
    context.score = pd.Series(np.zeros(len(df.index)), index = df.index)
    
    # EV/EBITDA, in-order (lower is better), nan goes last
    context.score += df['ev_to_ebitda'].\
    rank(ascending = True, na_option = 'bottom')
    
    # sales yield, inverse (higher is better), nan goes last
    context.score += df['sales_yield'].\
    rank(ascending = False, na_option = 'top')
    
    # return on invested capital, inverse (higher is better), nan goes last
    context.score += df['roic'].\
    rank(ascending = False, na_option = 'top')
    
    
def rebalance(context, data):

    P = data.history(context.score.index, 'price', 100, '1d')
    V = data.history(context.score.index, 'volume', 100, '1d')
    
    w = P * V
    w = w.mean()
    w = w[w > 10E+06]
    
    context.score = context.score[w.index]   
    context.longs = context.score.dropna().order().head(20).index   
    
    P = data.history(context.m, 'price', 200, '1d')
    if P.tail(2).median() < P.tail(200).median() or\
       P.tail(10).median() < P.tail(100).median():
        context.longs = context.bonds 

        
def buy(context,data):
    
    for s in context.portfolio.positions:
        if s in context.longs:
            continue
        if not data.can_trade(s):
            continue
        order_target(s, 0)    
    
    if get_open_orders():
        return    
    
    for s in context.longs:
        if not data.can_trade(s):
            continue
        order_value(s, context.portfolio.cash / len(context.longs))

        
def display(context,data):
    
    record(leverage = context.account.leverage,
           exposure = context.account.net_leverage)
    
                                 
def handle_data(context, data):
    
    if context.start:
        rebalance(context, data)
        buy(context, data)
        display(context, data)
        context.start = False


Anthony, allow me a question, please.

Since you prefer to use ETFs in order to trade in a small number of instruments, rather than in individual stocks, do you mostly do some sort of tactical asset allocation with them, then?

Hi Tim,

I really liked the first backtest you posted and I would be interested in collaborating with you on something like that. What is the best way to reach you?

Tim
Yes indeed, simple asset allocation strategies. A classic example would be the old 60/40 stock/bond split with periodic rebalancing, but adding in a momentum element. For instance, if stock momentum has been positive over the last month, allocate 60/40 on the rebalance; if not, then 100% bonds for the following period. In backtesting it usually gives similar performance to the 60/40 split with lower vol and DD.
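
For anyone who wants to try that 60/40-with-momentum idea directly, here is a minimal sketch in the same Quantopian style as the algos above. The stock-ETF sid is an assumption (sid(8554) is taken to be SPY); sid(23921) is the TLT sid already used earlier in the thread, and the lookback is illustrative only.

def initialize(context):
    context.stocks = sid(8554)   # assumed to be SPY
    context.bonds = sid(23921)   # TLT, as used above

    # Rebalance once a month, shortly after the open
    schedule_function(reallocate,
        date_rule=date_rules.month_start(),
        time_rule=time_rules.market_open(minutes = 30))

def reallocate(context, data):
    # One-month momentum of the stock ETF (roughly 21 trading days)
    prices = data.history(context.stocks, 'price', 22, '1d')
    momentum = prices.iloc[-1] / prices.iloc[0] - 1.0

    if momentum > 0:
        # Positive momentum: the classic 60/40 split
        order_target_percent(context.stocks, 0.60)
        order_target_percent(context.bonds, 0.40)
    else:
        # Negative momentum: 100% bonds for the following period
        order_target_percent(context.stocks, 0.0)
        order_target_percent(context.bonds, 1.0)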

"However, at least in the short term the return seems to diminish as well."

Over the long term very little performance on bonds is from price movement and over the cycles that will net out to close to zero. The vast majority of the return is from coupon. People argue that markets change and that back testing over short periods is valid but you won't see what I have just described unless you go into deep history.

Trouble is the HF industry and indeed the active fund management industry is so fixated on the short term. Their ridiculously short term track records are pointless. You really need to look at deep history and even then it might not have much predictive value. But better than a few years back to 2002.

Very short term history is much more applicable to HFT and intraday stuff, where indeed markets have changed dramatically. Over the long, long term I don't believe much matters except economic growth for investment success.

"But as a physicist, I find this sort of modelling fascinating. Because physics is essentially model building, and here we have a small, well-defined (financial) universe with its own (undiscovered) rules (laws) and lots of data ...."

Wish I had been a physicist. But I imagine as a physicist you will be asking yourself some very deep questions as to predictability. I am deeply fascinated by the topic in all senses. Science, in my mind, is probably the only human pursuit worth the bother. The answer to the question "why" .... or at least "how"...

Thank you for your insightful thoughts, Anthony.

Do you think that, in addition to profiting from economic growth, one may nevertheless also be able to avoid major market downturns, either by insight based on fundamental data, or by using some sort of technical analysis, such as the popular 200-day MA -- or is this just an illusion?

Hi Rohit,

I think the easiest way to communicate is by using the Q messaging system.

I'll create a copy of the algorithm and give you collaborator access to it.

"Do you think that, in addition to profiting from economic growth, one may nevertheless also be able to avoid major market downturns, either by insight based on fundamental data, or by using some sort of technical analysis, such as the popular 200-day MA -- or is this just an illusion?"

According to my backtests, such simple methods have indeed worked since 1709 in the UK and at least since 1870 in the US, based on monthly or quarterly rebalancing and a simple 30-day momentum lookback.

I would expect it to continue to work (if one can be bothered). But many would argue against that.
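
For completeness, the sort of regime check being discussed can be sketched as a single scheduled function in the same style as the algos above: stay in the market ETF while its price is above its 200-day moving average or its roughly 30-day momentum is positive, otherwise switch to bonds. The sids are assumptions here (sid(8554) taken as SPY; sid(23870) is the IEF sid used earlier), and this is illustrative only, not a recommendation.

def initialize(context):
    context.market = sid(8554)   # assumed to be SPY
    context.bonds = sid(23870)   # IEF, as in the 7-10y version above

    schedule_function(reallocate,
        date_rule=date_rules.month_start(),
        time_rule=time_rules.market_open(minutes = 30))

def reallocate(context, data):
    prices = data.history(context.market, 'price', 200, '1d')

    above_ma = prices.iloc[-1] > prices.mean()            # 200-day MA test
    momentum = prices.iloc[-1] / prices.iloc[-31] - 1.0   # ~30-day lookback

    if above_ma or momentum > 0:
        order_target_percent(context.market, 1.0)
        order_target_percent(context.bonds, 0.0)
    else:
        order_target_percent(context.market, 0.0)
        order_target_percent(context.bonds, 1.0)

Whether the two conditions are combined with 'or' (slower to exit equities) or 'and' (quicker to exit) noticeably changes how often it sits in bonds, which seems worth backtesting both ways.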