Back to Community
Acquirer's Multiple, Based on "Deep Value" #Fundamentals

This algorithm seeks to replicate the strategy set forth in Tobias Carlisle's "Deep Value." See his Google Talk for a great summary of the strategy.

In "Deep Value" and in his first book, "Quantitative Value", Tobias demonstrates that the Enterprise Multiple or "Acquirer's Multiple", defined as Enterprise Value/EBIT, outperforms all other value ratios. The books and their back-tests show that when ranking stocks by the Enterprise Multiple, Value stocks (those with the lowest ratio of EV to EBIT) outperform Glamour stocks (those with the highest).
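For concreteness, here is the arithmetic behind the multiple, sketched with made-up balance-sheet figures (every number below is invented purely for illustration):

```python
# Hypothetical figures, in millions, purely for illustration
market_cap = 5000.0
total_debt = 1200.0            # current + long-term debt
preferred_stock = 100.0
minority_interest = 50.0
cash_and_equivalents = 800.0
ebit = 900.0

# Enterprise value: roughly what an acquirer would pay for the whole business
enterprise_value = (market_cap + total_debt + preferred_stock
                    + minority_interest - cash_and_equivalents)

# The Acquirer's Multiple; lower means cheaper
acquirers_multiple = enterprise_value / ebit
print(round(acquirers_multiple, 2))  # 6.17
```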

As with many value strategies, this one holds no more than 30 stocks at a time, and there are some additional filters, including a minimum market cap at the 40th percentile. Re-balancing is performed annually, ideally using data available as of the previous month. Hopefully future fundamentals API changes will allow more exact querying of variables on different timescales.
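The portfolio-construction steps above (market-cap floor, rank by the multiple, hold 30 names) can be sketched with pandas; the DataFrame, column names, and random data here are all invented for illustration:

```python
import numpy as np
import pandas as pd

# A fake screening universe: 500 names with random caps and multiples
rng = np.random.default_rng(0)
universe = pd.DataFrame({
    'market_cap': rng.uniform(2e8, 5e10, size=500),
    'ev_ebit': rng.uniform(-5.0, 40.0, size=500),
})

# Drop the smallest names: keep market caps at or above the 40th percentile
floor = universe['market_cap'].quantile(0.40)
eligible = universe[universe['market_cap'] >= floor]

# Rank by the Acquirer's Multiple, cheapest first, and keep 30 names
portfolio = eligible.nsmallest(30, 'ev_ebit')

# Equal-weight the holdings
weights = pd.Series(1.0 / len(portfolio), index=portfolio.index)
```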

I'm not convinced that I am 100% faithfully replicating the strategy outlined in the book, but the results seem promising. One compromise I made was to allow negative EV/EBIT ratios. I think Tobias ruled these out, but they seem to contribute significantly to the performance.
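A toy example of that compromise, with hypothetical tickers and ratios: an ascending sort naturally ranks negative multiples as "cheapest", which is presumably why the book excludes them, since a negative ratio can reflect negative EBIT (an unprofitable business) rather than a bargain:

```python
import pandas as pd

# Hypothetical EV/EBIT ratios for four made-up tickers
ev_ebit = pd.Series({'A': 4.2, 'B': -3.1, 'C': 7.8, 'D': -0.5})

# Stricter variant (as in the book): positive multiples only
strict = ev_ebit[ev_ebit > 0].sort_values()

# Looser variant (as in this algorithm): keep negatives,
# which sort to the front as the "cheapest" names
loose = ev_ebit.sort_values()

print(list(strict.index))  # ['A', 'C']
print(list(loose.index))   # ['B', 'D', 'A', 'C']
```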

Feedback is welcome!

[Backtest attached]
"""
    Acquirer's Multiple Trading Strategy using Fundamental Data
    
    Adapted from Tobias Carlisle's "Deep Value"
    
    1. Screen out the smallest companies by market cap
    2. Rank by EV/EBITDA (lowest is best; a stand-in for EV/EBIT)
    3. Buy the 30 lowest-ranked stocks
    4. Rebalance annually at the end of June
"""

import pandas as pd
import numpy as np
import datetime

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
    
    # Dictionary of orders and order dates
    context.orders = {}
    
    context.fundamental_df = None
    
    context.max_holdings = 30
    context.rebalance_month = 6
    
    # Schedule a check on the last trading day of each month at market open;
    # rebalance() only trades during the rebalance month
    schedule_function(rebalance,
                      date_rule=date_rules.month_end(),
                      time_rule=time_rules.market_open())
    
    
def get_shares_held(context, stock):
    """ Return the number of shares of the given stock that we hold """

    positions = context.portfolio.positions
    for position in positions:
        if stock == position:
            return positions[position].amount
        
    # Else
    return 0
    
def rebalance(context, data):
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != context.rebalance_month:
        # track how many positions we're holding
        record(num_positions = len(context.portfolio.positions))
        record(capital_used=context.portfolio.capital_used)
        return
    
    log.info("Rebalancing")
    
    # Sell present positions:
    for stock in context.portfolio.positions:
        order_target_percent(stock, 0)
    
    # Order up to max_holdings stocks, skipping existing positions
    order_cnt = 0
    for stock in context.fundamental_df:
        if order_cnt >= context.max_holdings:
            break
        if stock not in context.portfolio.positions:
            order_cnt += 1
            try:
                # Equal weight across the target number of holdings
                oid = order_percent(stock, 1.0 / context.max_holdings)
            except Exception as e:
                log.error("Order failed for %s: %s" % (str(stock), e))
                continue
            try:
                log.info("Bought into %s with Acq Mult of %f: %r" %
                         (str(stock), context.fundamental_df[stock]['acq_mult'], oid))
            except KeyError:
                pass
            
            # Save the order info for later
            # FIXME: We are saving the order date, not the fill date, gotta figure that out
            context.orders[oid] = dict(stock=stock, date=exchange_time)
    
    
def before_trading_start(context): 
    """
      Called before the start of each trading day. 
      It updates our universe with the
      securities and values found from get_fundamentals.
    """
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != (context.rebalance_month - 1) and context.fundamental_df is not None:
        # TODO: should also check for the specific rebalance day
        update_universe(context.fundamental_df.columns.values)
        return
    else:
        num_stocks = 30
    
        # Set up a SQLAlchemy-style query to screen stocks by sector,
        # country, exchange, and share class, then filter on market cap
        # and data availability.  Results are ordered by market cap
        # (ascending) and capped at num_stocks*100 rows; the final
        # ranking happens below.
        fundamental_df = get_fundamentals(
            query(
                # put your query in here by typing "fundamentals."
                fundamentals.valuation_ratios.ev_to_ebitda,
                fundamentals.valuation.market_cap,
                fundamentals.valuation.enterprise_value,
                fundamentals.income_statement.ebit,
                fundamentals.balance_sheet.preferred_stock,
                #fundamentals.balance_sheet.debt_total,
                fundamentals.balance_sheet.current_debt,
                fundamentals.balance_sheet.long_term_debt,
                fundamentals.balance_sheet.minority_interest,
                fundamentals.balance_sheet.cash,
                fundamentals.balance_sheet.cash_and_cash_equivalents
            )
            # No Financials (103), Real Estate (104) or Utility (207) Stocks, no ADR or PINK, only USA
            .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 104)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 207)
            .filter(fundamentals.company_reference.country_id == "USA")
            .filter(fundamentals.share_class_reference.is_depositary_receipt == False)
            .filter(fundamentals.share_class_reference.is_primary_share == True)
            .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK")
            
            .filter(fundamentals.valuation.market_cap != None)
            .filter(fundamentals.valuation.market_cap >= 2e9)#200e6) #1.8e9)
            .filter(fundamentals.valuation.shares_outstanding != None)
            .filter(fundamentals.valuation_ratios.ev_to_ebitda != None)
            # Deliberately allow negative EV/EBITDA ratios; filtering them
            # out (e.g. .filter(fundamentals.valuation_ratios.ev_to_ebitda >= 0.0))
            # seems to hurt performance
            .order_by(fundamentals.valuation.market_cap.asc())
            .limit(num_stocks*100)
        )
        
        #log.info(fundamental_df)
                         
        # Add the Acquirer's Multiple score.  The Morningstar ev_to_ebitda
        # ratio stands in for EV/EBIT here.
        temp = fundamental_df.T
        temp['acq_mult'] = temp['ev_to_ebitda']
        temp.sort_values('acq_mult', ascending=True, inplace=True)
        temp = temp.head(200)
        fundamental_df = temp.T
            
        ## Scalp the last 5
        #fundamental_df = fundamental_df.T.head(5).T
            
        #for stock in fundamental_df:
        #    log.info("The %s stock is %f" % (str(stock), fundamental_df[stock]['acq_mult']))

        # Keep the list of securities in our screened universe
        context.stocks = list(fundamental_df.columns)

        # Update context.fundamental_df with the securities that we need
        context.fundamental_df = fundamental_df[context.stocks]
        
        update_universe(context.fundamental_df.columns.values)
    

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    # Implement your algorithm logic here.

    # data[sid(X)] holds the trade event data for that security.
    # context.portfolio holds the current portfolio state.

    # Place orders with the order(SID, amount) method.

    # TODO: implement your own logic here.
    pass
    #order(sid(24), 50)
36 responses

Your choice of starting time and rebalancing month has a big influence on the end result of your backtest.
Don't believe me? Then try starting the backtest while the market is in a (very) bullish stage, and this algorithm's edge will vanish (it will end up behind the benchmark index, or just become a trend-following strategy).

Maybe that's because this algorithm uses a value/fundamental strategy, and as far as I know these models tend to work contrary to the general market condition, although they usually succeed over the long term once the market has gone through a full bull/bear cycle. So, in my opinion, a bear market may be the time to try this algorithm, of course after figuring out the bottoming/rebound phase, and may the FORCE be with us...

This shows a backtest starting from 2 years ago, 100% the same algorithm.

[Backtest attached: the same algorithm with the start date moved two years earlier]

Starting time can have a big influence on value algorithms. They can outperform in bear markets, but they can also lag for long periods of time; that's why many people don't adhere to them. This algorithm also happens to be buying the most unloved companies. The theory is that it will outperform in the long run. The choice of starting month and day is based on the algorithm in the book, which identifies this as an academic standard. You can easily adapt it to monthly re-balancing, or to annual re-balancing on another date.

Your more recent backtest distorts the results a bit because you start so far ahead of the first re-balancing day. So, the algorithm holds cash and then it becomes difficult to compare the results. Here's the same algorithm with the start date moved up to 06/25/2013 so that the start dates of the benchmark and algorithm are much closer. The benchmark still outperforms in the long-run, but the apparent disparity is less egregious. I think this effect has been discussed deeply in another thread: Returns.

[Backtest attached: the same algorithm with the start date moved up to 06/25/2013]

Yes, I understand. Actually my point is that I completely agree with you about #value based algorithms. This algo should be run long term to outperform the market, when the real #value will show itself.

And maybe I'm still influenced by testing this algorithm under the standard Quantopian Open contest, which starts now. That's why I tried simulating a backtest starting exactly 2 years ago (Jan '13 -> Jan '15).

I'm just saying this algo may not look like an attractive choice over a short running period (e.g. several months), especially with the contest's daily judging, which is surely not an advantage for any #value algo.
This is cool. Thanks for coding it up, Dan. AM is absurdly simple (rank on one factor, EV/EBIT, and buy the cheapest 30) but performs remarkably well. In a Russell 1000 universe (roughly the largest 1000 stocks) it has outperformed over the last two years, starting 1/1/2013 and rebalancing after one year, cumulatively returning 85.9% (CAGR 35.3%) vs 44.4% (CAGR 19.9%) for the R1000 TR. These are unusually good results. The last 10 years are more meaningful. Over the last 10 years (starting 1/1/2005), measured on a rolling one-year basis in a Russell 1000 universe, it outperformed by an average of 9.6 percent in 78 of 118 start-date opportunities, but underperformed by an average of 6.3 percent in the following 40 one-year periods (listed by start date):
4/16/12
3/19/12
2/21/12
1/23/12
12/27/11
11/28/11
10/31/11
9/6/11
8/8/11
7/11/11
6/13/11
5/16/11
4/18/11
3/21/11
2/22/11
11/29/10
10/4/10
9/7/10
8/9/10
2/22/10
1/25/10
9/8/08
8/11/08
7/14/08
6/16/08
4/21/08
1/28/08
12/31/07
12/3/07
11/5/07
10/8/07
9/10/07
8/13/07
7/16/07
6/18/07
5/21/07
12/4/06
11/6/06
4/24/06
3/27/06
2/27/06
1/30/06
1/3/06
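Tabulating this kind of rolling one-year comparison is straightforward once you have two monthly return series; the sketch below uses synthetic random data (not the actual Russell 1000 results) just to show the mechanics:

```python
import numpy as np
import pandas as pd

# Synthetic monthly returns for strategy and benchmark, for illustration only
idx = pd.date_range('2005-01-01', periods=120, freq='MS')
rng = np.random.default_rng(1)
strat = pd.Series(rng.normal(0.012, 0.05, 120), index=idx)
bench = pd.Series(rng.normal(0.008, 0.04, 120), index=idx)

def trailing_12m(r):
    # Cumulative return over each trailing 12-month window
    return (1 + r).rolling(12).apply(np.prod, raw=True) - 1

excess = (trailing_12m(strat) - trailing_12m(bench)).dropna()
wins = int((excess > 0).sum())
print("outperformed in %d of %d one-year windows" % (wins, len(excess)))
```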

Tobias,
So, depending on the start date, the algorithm has roughly a 1-in-3 chance of underperforming the index?

Did I get that right?

Over the last 10 years, yes.

Calendar-based rebalancing will always make results sensitive to the start date. That's why performance-based rebalancing is a more reliable way to refresh the portfolio.

Here's a backtest that goes back as far as the Quantopian data goes. Ideally, we'd be able to run backtests over multiple rolling time-frames automatically.
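Quantopian doesn't expose that directly, but offline the idea could be sketched as a loop over rolled start dates; `run_backtest` below is a hypothetical stand-in for a single backtest run, not a real API:

```python
import pandas as pd

def run_backtest(start, end):
    # Hypothetical stand-in: replace the body with a call to your
    # backtesting engine.  Here it just returns a dummy total return
    # proportional to the window length.
    return 0.05 * ((end - start).days / 365.25)

# Roll the start date forward one month at a time over a fixed one-year horizon
starts = pd.date_range('2005-01-01', '2012-01-01', freq='MS')
horizon = pd.DateOffset(years=1)
results = pd.Series({s: run_backtest(s, s + horizon) for s in starts})

# Summarize how sensitive the outcome is to the start date
print(results.describe()[['mean', 'min', 'max']])
```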

Clone Algorithm
692
Loading...
Backtest from to with initial capital
Total Returns
--
Alpha
--
Beta
--
Sharpe
--
Sortino
--
Max Drawdown
--
Benchmark Returns
--
Volatility
--
Returns 1 Month 3 Month 6 Month 12 Month
Alpha 1 Month 3 Month 6 Month 12 Month
Beta 1 Month 3 Month 6 Month 12 Month
Sharpe 1 Month 3 Month 6 Month 12 Month
Sortino 1 Month 3 Month 6 Month 12 Month
Volatility 1 Month 3 Month 6 Month 12 Month
Max Drawdown 1 Month 3 Month 6 Month 12 Month
"""
    Acquirer's Multiple Trading Strategy using Fundamental Data
    
    Adapted from Tobias Carlisle's "Deep Value"
    
    1. Filter the top 500 companies by market cap 
    2. Filter for companies where EV/EBITDA is lowest
    3. Rank by EV/EBITDA (lowest is best)
    4. Buy the top 30 ranked stocks
    5. Rebalance annually at the end of June
"""

import pandas as pd
import numpy as np
import datetime

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
    
    # Dictionary of orders and order dates
    context.orders = {}
    
    context.fundamental_df = None
    
    context.max_holdings = 30
    context.rebalance_month = 6
    
    # Check at the end of each month, at market open, whether this is
    # the annual rebalance month
    schedule_function(rebalance,
                      date_rule=date_rules.month_end(),
                      time_rule=time_rules.market_open())
    
    
def get_shares_held(context, stock):
    """ Return the number of shares of the given stock that we hold """

    positions = context.portfolio.positions
    if stock in positions:
        return positions[stock].amount
    return 0
    
def rebalance(context, data):
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != context.rebalance_month:
        # track how many positions we're holding
        record(num_positions = len(context.portfolio.positions))
        record(capital_used=context.portfolio.capital_used)
        return
    
    log.info("Rebalancing")
    
    # Sell present positions:
    for stock in context.portfolio.positions:
        order_target_percent(stock, 0)
    
    # Order up to max_holdings stocks and don't overlap with existing positions
    order_cnt = 0
    for stock in context.fundamental_df:
        if order_cnt >= context.max_holdings:
            break
        if stock not in context.portfolio.positions:
            order_cnt += 1
            try:
                oid = order_percent(stock, int(100. / float(context.max_holdings)) / 100.)
            except:
                log.info("Order failed for %s" % str(stock))
            try:
                log.info("Bought into %s with Acq Mult of %f: %r" % (str(stock), context.fundamental_df[stock]['acq_mult'], oid))
                
                # Save the order info for later
                # FIXME: We are saving the order date, not the fill date, gotta figure that out
                context.orders[oid] = dict(stock=stock, date=exchange_time)
            except:
                pass
    
    
def before_trading_start(context): 
    """
      Called before the start of each trading day. 
      It updates our universe with the
      securities and values found from get_fundamentals.
    """
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != (context.rebalance_month - 1) and context.fundamental_df is not None:
        # Should check for the specific rebalance day too...
        update_universe(context.fundamental_df.columns.values)
        return
    else:
        num_stocks = 30
    
        # Set up a SQLAlchemy query to screen stocks by sector, then
        # filter the results on market cap and shares outstanding.
        # The number of results is limited to a multiple of num_stocks.
        fundamental_df = get_fundamentals(
            query(
                # put your query in here by typing "fundamentals."
                fundamentals.valuation_ratios.ev_to_ebitda,
                fundamentals.valuation.market_cap,
                fundamentals.valuation.enterprise_value,
                fundamentals.income_statement.ebit,
                fundamentals.balance_sheet.preferred_stock,
                #fundamentals.balance_sheet.debt_total,
                fundamentals.balance_sheet.current_debt,
                fundamentals.balance_sheet.long_term_debt,
                fundamentals.balance_sheet.minority_interest,
                fundamentals.balance_sheet.cash,
                fundamentals.balance_sheet.cash_and_cash_equivalents
            )
            # No Financials (103), Real Estate (104) or Utility (207) Stocks, no ADR or PINK, only USA
            .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 104)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 207)
            .filter(fundamentals.company_reference.country_id == "USA")
            .filter(fundamentals.share_class_reference.is_depositary_receipt == False)
            .filter(fundamentals.share_class_reference.is_primary_share == True)
            .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK")
            
            .filter(fundamentals.valuation.market_cap != None)
            .filter(fundamentals.valuation.market_cap >= 2e9)#200e6) #1.8e9)
            .filter(fundamentals.valuation.shares_outstanding != None)
            .filter(fundamentals.valuation_ratios.ev_to_ebitda != None)
            # Negative EV (and EV/EBITDA) is deliberately allowed; it
            # seems to contribute to performance. Stricter filters
            # (EBIT >= 0, EV >= 0, EV/EBITDA caps) were tried and removed.
            #.order_by(fundamentals.valuation_ratios.ev_to_ebitda.asc())
            .order_by(fundamentals.valuation.market_cap.asc())
            .limit(num_stocks*100)
        )
        
        #log.info(fundamental_df)
                         
        # Add the Acquirer's Multiple score
        temp =  fundamental_df.T
        temp['acq_mult'] = temp['ev_to_ebitda']
        #temp['ev'] = temp['market_cap'] + temp['long_term_debt'] + temp['current_debt'] + temp['minority_interest'] + temp['preferred_stock'] - temp['cash_and_cash_equivalents'] #temp['cash']
        #temp['acq_mult'] = temp['ev'] / temp['ebit']
        #temp['acq_mult'] = temp['enterprise_value'] / temp['ebit']
        #temp = temp[temp['acq_mult'] <= 5.0]
        temp.sort('acq_mult', ascending=True, inplace=True)
        temp = temp.head(200)
        fundamental_df = temp.T
            
        ## Scalp the last 5
        #fundamental_df = fundamental_df.T.head(5).T
            
        #for stock in fundamental_df:
        #    log.info("The %s stock is %f" % (str(stock), fundamental_df[stock]['acq_mult']))

        # Collect the screened stocks
        context.stocks = [stock for stock in fundamental_df]

        # Update context.fundamental_df with the securities that we need
        context.fundamental_df = fundamental_df[context.stocks]
        
        update_universe(context.fundamental_df.columns.values)
    

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    # Implement your algorithm logic here.

    # data[sid(X)] holds the trade event data for that security.
    # context.portfolio holds the current portfolio state.

    # Place orders with the order(SID, amount) method.

    # TODO: implement your own logic here.
    pass
    #order(sid(24), 50)

Nice one, Dan. Almost double the return of the market over the full 13-year period.

Tobias, thank you for the presentation at Google and for sharing your research; that's where I first became aware of you. I'd really like to mix in some x-factor along with the algorithm. By x-factor, I mean parameters derived from the classics (Thornton O'Glove's Quality of Earnings, Schilit's Financial Shenanigans, etc.). My brain tells me that by weeding out frauds and semi-frauds (companies that don't outright fudge earnings but conveniently shift items between operating and investing cash flows), one should be able to minimize obvious risk.
I know this approach goes somewhat against your point of just following the model, but my stubborn nature tells me it would help. (I know it's not easy to tease out earnings quality from financial ratios, but I've started down that path, mixed with an earnest reading of 10-Ks.)
If I can mould that into an algo here, I will.

Hey Sunil,

I just finished Tobias' first book, Quantitative Value, which gets into just those topics (ruling out frauds, etc.). The main algorithm described in the book is much more complex than the AM, but it would be interesting to see some of these additional filters tested against it. Some of this was done in the book, but I don't recall whether the eliminations helped the AM alone or not.

Quantitative Value is one of the best finance books I've read in the last couple of years. (Thanks Tobias and Wes!) Coding the QV algo is beyond my capabilities at the moment, but it's pretty clearly laid out in the book, so I hope someone will be able to do it here. If so, I am capable of breaking it. :-)

Hey Ron,

I haven't tried tackling it yet either, but I suspect the current fundamentals API is too limited to faithfully reproduce many of the figures of merit. There's a lot of averaging of multiple past years' data and such that just isn't possible yet. It sounds like the Quantopian folks are eyeing this for the future, though. I also recall another poster talking about trying to recreate the QV strategy here. Once the data is available, I'm all over it. I think the AM is easier, but QV lays out tons of useful tests that could be applied in other strategies.

I need to get back to working on this some more. In the meantime, here are the best results I've achieved in my hacking so far.

Some of the changes I made:

  • Confirmed that negative EV is good (not really a change)
  • I exclude tickers that end in "_WI" - so-called "When Issued" shares that may indicate that a company is or was recently in the process of a bankruptcy or take-over. It would be good to understand this process better.
  • I try to only purchase a company once in a re-balance round - occasionally, two share classes of the same company show up in the universe, so I limit it to primary shares, as defined by the Morningstar API.
  • Only purchase common stock.
  • Exclude stocks trading under $5
  • I think I've improved re-balancing so that I am maximally utilizing cash available without using leverage. Need to dig on this more to be sure.

Things yet to come (not exhaustive):

  • Eliminate companies with negative EBIT or EBITDA. This may have to wait for a history-equivalent function for the fundamental data API.
  • Add an option to simulate commissions and slippage. This is a very inactive algorithm, so I suspect this will have little effect.
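The when-issued and penny-stock filters from the list above can be sketched outside the Quantopian API as a plain pandas screen (the `symbol`/`price` columns and the `apply_filters` helper are hypothetical):

```python
import pandas as pd

def apply_filters(universe):
    """Drop when-issued share classes ('_WI' suffix) and stocks trading
    under $5, mirroring two of the filters listed above."""
    keep = ~universe["symbol"].str.endswith("_WI") & (universe["price"] >= 5.0)
    return universe[keep]

df = pd.DataFrame({"symbol": ["AAA", "BBB_WI", "CCC"],
                   "price": [10.0, 10.0, 3.0]})
print(list(apply_filters(df)["symbol"]))  # prints ['AAA']
```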
Clone Algorithm
692
"""
    Acquirer's Multiple Trading Strategy using Fundamental Data
    
    Adapted from Tobias Carlisle's "Deep Value"
    
    1. Filter the top 500 companies by market cap 
    2. Filter for companies where EV/EBITDA is lowest
    3. Rank by EV/EBITDA (lowest is best)
    4. Buy the top 30 ranked stocks
    5. Rebalance annually at the end of June
"""

import pandas as pd
import numpy as np
import datetime

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
    
    set_long_only()
    #set_max_order_count(30)
    
    # Dictionary of orders and order dates
    context.orders = {}
    
    context.fundamental_df = None
    
    context.max_holdings = 30
    context.rebalance_month = 6
    
    context.separate_sells = False
    
    # Check at the end of each month, at market open, whether this is
    # the annual rebalance month
    schedule_function(rebalance,
                      date_rule=date_rules.month_end(),
                      time_rule=time_rules.market_open())
    
    if context.separate_sells:
        # Sell a few days early, before buying new stocks
        schedule_function(rebalance_sell,
                          date_rule=date_rules.month_start(days_offset=15),
                          time_rule=time_rules.market_open())
    
    
def get_shares_held(context, stock):
    """ Return the number of shares of the given stock that we hold """

    positions = context.portfolio.positions
    if stock in positions:
        return positions[stock].amount
    return 0

def has_orders(context):
    # Return true if there are pending orders.
    has_orders = False
    #for sec in context.secs:
    orders = get_open_orders() #(sec)
    if orders:
        for oo in orders:                  
            message = 'Open order for {amount} shares in {stock}'  
            #message = 'Open order for {stock}'  
            message = message.format(amount=orders[oo][0].amount, stock=oo.symbol)  
            log.debug(message)

            has_orders = True
            
    return has_orders

def cancel_orders(context):
    # Cancel all open orders
    #for sec in context.secs:
    orders = get_open_orders() #(sec)
    if orders:
        for oo in orders:                  
            cancel_order(oo)
            
    return True

def rebalance_sell(context, data):
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != context.rebalance_month:
        # track how many positions we're holding
        record(num_positions = len(context.portfolio.positions))
        record(capital_used=context.portfolio.capital_used)
        return
    
    log.info("Selling")
    
    # Cancel open orders if they exist
    if has_orders(context):
        log.debug("Cancelling open orders")
        cancel_orders(context)
    
    # Sell present positions:
    for stock in context.portfolio.positions:
        try:
            order_target_percent(stock, 0)
        except:
            log.error("Failed to sell %s maybe it is de-listed?" % (str(stock)))
            # FIXME Maybe show/test the security_end_date value to test if it is de-listed
    
def rebalance(context, data):
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != context.rebalance_month:
        # track how many positions we're holding
        record(num_positions = len(context.portfolio.positions))
        record(capital_used=context.portfolio.capital_used)
        record(pnl=context.portfolio.pnl)
        return
    
    log.info("Rebalancing")
    
    # Cancel open orders if they exist
    if has_orders(context):
        log.debug("Cancelling open orders")
        cancel_orders(context)
    
    if not context.separate_sells:
        # Sell present positions:
        for stock in context.portfolio.positions:
            try:
                order_target_percent(stock, 0)
            except:
                log.error("Failed to sell %s maybe it is de-listed?" % (str(stock)))
                # FIXME Maybe show/test the security_end_date value to test if it is de-listed
    else:
        orders = get_open_orders()  
        if orders:  
            for oo in orders:  
                log.debug(get_order(oo))
    
    # Order up to max_holdings stocks and don't overlap with existing positions
    order_cnt = 0
    orders = []
    for stock in context.fundamental_df:
        if order_cnt >= context.max_holdings:
            break
        if stock not in context.portfolio.positions:
            #oid = order_target_percent(stock, context.max_holdings/100.)#int(100./24.)/100.)
            try:
                if data[stock].price < 5.0:
                    continue
                #oid = order_target_percent(stock, int(100./float(context.max_holdings))/100.)
                oid = order_target_percent(stock, float(1.0/float(context.max_holdings)))
                orders.append(stock)
                order_cnt += 1
                #log.info("Bought into %s with AM of %f: %r at a price of %f" % (str(stock), context.fundamental_df[stock]['acq_mult'], oid, data[stock].price))
                #log.info("%d" % data[stock].price)
            except:
                log.error("Failed to buy into %s with AM of %f" % (str(stock), context.fundamental_df[stock]['acq_mult']))
            try:
                # Save the order info for later
                # FIXME: We are saving the order date, not the fill date
                context.orders[oid] = dict(stock=stock, date=exchange_time)
            except:
                pass
    
    log.info("Bought: %s" % ', '.join([str(stock) for stock in orders]))
    
    
def before_trading_start(context): 
    """
      Called before the start of each trading day. 
      It updates our universe with the
      securities and values found from get_fundamentals.
    """
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != (context.rebalance_month - 1) and context.fundamental_df is not None:
        # Should check for the specific rebalance day too...
        update_universe(context.fundamental_df.columns.values)
        return
    else:
    
        # Set up a SQLAlchemy query to screen stocks by sector and
        # share class, then filter the results on market cap and
        # shares outstanding.
        fundamental_df = get_fundamentals(
            query(
                # put your query in here by typing "fundamentals."
                fundamentals.valuation_ratios.ev_to_ebitda,
                fundamentals.valuation.market_cap,
                fundamentals.valuation.enterprise_value,
                fundamentals.income_statement.ebit
            )
            # No Financials (103), Real Estate (104) or Utility (207) Stocks, no ADR or PINK, only USA
            .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 104)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 207)
            .filter(fundamentals.company_reference.country_id == "USA")
            .filter(fundamentals.share_class_reference.is_depositary_receipt == False)
            .filter(fundamentals.share_class_reference.is_primary_share == True)
            # Only pick active stocks
            .filter(fundamentals.share_class_reference.share_class_status == "A")
            # Only Common Stock
            .filter(fundamentals.share_class_reference.security_type == "ST00000001")
            .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK")
            
            .filter(fundamentals.valuation.market_cap != None)
            .filter(fundamentals.valuation.market_cap >= 2e9)#200e6) #1.8e9)
            .filter(fundamentals.valuation.shares_outstanding != None)
            .filter(fundamentals.valuation_ratios.ev_to_ebitda != None)
            # Negative EV (and EV/EBITDA) is deliberately allowed; it
            # seems to contribute to performance. Stricter filters
            # (EBIT/EBITDA >= 0, EV >= 0, EV/EBITDA caps) were tried and removed.
            #.order_by(fundamentals.valuation_ratios.ev_to_ebitda.asc())
            .order_by(fundamentals.valuation.market_cap.desc())
            #.limit(num_stocks*100)
        )
        
        #log.info(fundamental_df)
        
        # Eliminate possible bankruptcies in progress
        remove = []
        for stock in fundamental_df:
            if str(stock.symbol).endswith("_WI"):
                remove.append(stock)
        for stock in remove:
            del(fundamental_df[stock])
                         
        # Add the Acquirer's Multiple score
        temp =  fundamental_df.T
        temp['acq_mult'] = temp['ev_to_ebitda']
        #temp['ev'] = temp['market_cap'] + temp['long_term_debt'] + temp['current_debt'] + temp['minority_interest'] + temp['preferred_stock'] - temp['cash_and_cash_equivalents'] #temp['cash']
        #temp['acq_mult'] = temp['ev'] / temp['ebit']
        #temp['acq_mult'] = temp['enterprise_value'] / temp['ebit']
        #temp = temp[temp['acq_mult'] <= 5.0]
        temp.sort('acq_mult', ascending=True, inplace=True)
        temp = temp.head(200)
        fundamental_df = temp.T
            
        ## Scalp the last 5
        #fundamental_df = fundamental_df.T.head(5).T
            
        #for stock in fundamental_df:
        #    log.info("The %s stock is %f" % (str(stock), fundamental_df[stock]['acq_mult']))

        # Collect the screened stocks
        context.stocks = [stock for stock in fundamental_df]

        # Update context.fundamental_df with the securities that we need
        context.fundamental_df = fundamental_df[context.stocks]
        
        update_universe(context.fundamental_df.columns.values)
    

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
    # Implement your algorithm logic here.

    # data[sid(X)] holds the trade event data for that security.
    # context.portfolio holds the current portfolio state.

    # Place orders with the order(SID, amount) method.

    # TODO: implement your own logic here.
    pass
    #order(sid(24), 50)

Hey Dan & Tobias!

I've amended your model, Dan, so that it doesn't make purchases when the market is highly priced (Shiller CAPE). This avoids quite a lot of losses (except in 2008). That, plus reducing the portfolio size to only 3 (not a typo), seems to have created an unbelievable result: around 1,388% over the last 14 years (a 21% annualised return). The Sharpe is now 3.26 and the Sortino is 4.5.

I tried this with 4, 5, 6, 7, 8, etc. After around 8 the returns drop quite a lot and the model hews much more closely to the SPY price trend.

Question for Dan/Tobias: would this actually work in real life? Have I made a mistake or missed something? (It seems too good to be true.)

Please let me know your thoughts when you can!
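The CAPE gate can be sketched standalone; `cape_allows_buying` is a hypothetical helper that mirrors the month-offset lookup the algorithm performs against its hardcoded CAPE list:

```python
import datetime

def diff_month(d1, d2):
    """Whole-month difference, as in the algorithm."""
    return (d1.year - d2.year) * 12 + d1.month - d2.month

def cape_allows_buying(now, cape_series, max_cape=26.4,
                       start=datetime.datetime(2000, 1, 1)):
    """Allow purchases only when the CAPE for the current month
    (indexed from `start`) is below the threshold."""
    return cape_series[diff_month(now, start)] < max_cape

# Feb 2000 maps to index 1; a CAPE of 20.0 is below 26.4, so buying is allowed.
print(cape_allows_buying(datetime.datetime(2000, 2, 1), [30.0, 20.0]))  # prints True
```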

Clone Algorithm
557
"""

    Acquirer's Multiple - from Tobias Carlisle's Deep Value
    
    Great talk here with an in-depth explanation of why and how it outperforms other value ratios
    https://youtu.be/1r1vJZ80Z7I?t=35m18s
    
    - Filter the top companies by market cap over 2B or more
    - Rank by EV/EBITDA (lowest is best)
    - Only buy when market is fairly priced based on Shiller CAPE ratio
    - Buy the top 3 ranked stocks
    - Rebalance annually at the end of June
    
"""

import pandas as pd
import numpy as np
import datetime

def diff_month(d1, d2):
    return (d1.year - d2.year)*12 + d1.month - d2.month

def initialize(context):
    
    context.cape = [43.772578146938,42.1856358879173,43.2207484399659,43.5285742885078,41.9660505033243,42.7819715670715,42.7580936182696,42.8695654944195,41.8980079248848,39.3696990442014,38.7821424567848,37.2742380044972,36.9788679970298,35.8346626514313,32.3258372361788,32.1739011683607,34.07464321714,33.0685344111128,32.1630386874444,31.4043187607802,27.6673925868625,28.5773731133601,30.0051038110568,30.4999532550205,30.277204433096,29.0857041520084,30.2921306409187,29.0058832531187,28.1281075086883,26.3876725411834,23.4631204674314,23.5887135288424,22.3650368012243,21.9562338636591,23.3483965027251,23.1014425376856,22.8983485766132,21.2141021234153,21.309719026991,22.4279395777309,23.5910804534815,24.8322232595311,24.8673291012688,24.6422514099322,25.2436867526062,25.6827560705797,25.9467982184201,26.6351705110815,27.6585403557366,27.6508620367402,26.8865303840359,26.9005775084449,25.9028142929438,26.4012853664749,25.6958886462686,25.1744622264778,25.6684067763577,25.4116556654893,26.465310814818,27.1448086947412,26.5872506979704,26.7448631281012,26.3391421310579,25.4089225691145,25.650230187183,26.068394871884,26.2878710912547,26.1043814109362,25.7301229901645,24.876538723648,25.931783309069,26.4438031142924,26.4687026266857,26.2496247635833,26.3278377786677,26.1472809438745,25.6506407087573,24.7495822416464,24.6967867668533,25.051393562011,25.6441564407974,26.5380402821017,26.9280202708565,27.2826897875717,27.2075366568071,27.3151814135166,26.2276055546509,26.9762683141891,27.5484904518513,27.4182627404106,27.4100881672043,26.1486071893123,26.7257430476969,27.320648130462,25.7290535794984,25.9555101052402,24.0223177608368,23.4952634018118,22.6068108422493,23.3560406432016,23.6964321166232,22.4168128022819,20.9072064626616,21.4016173600479,20.3627339460975,16.3873565487898,15.2596594057046,15.3760807474238,15.1746519368797,14.1221818019189,13.3236676568639,14.9818664530393,15.9963557552632,16.3841828162153,16.6946208169956,18.0940698015761,18.8319022648401,19.
3580084434868,19.8127610799661,20.3223765002165,20.5278598014544,19.9205393066004,21.0046012097154,21.8048455996252,20.4800686384234,19.7420398537395,19.6686604707177,19.7702991743586,20.381395233204,21.2401276517594,21.7007238277606,22.3963797730442,22.978299430555,23.4898287032985,22.8993364301436,23.1439294472859,23.0594915060953,22.100831286611,22.6109817011566,20.0498527216605,19.6981145688777,20.1558247866888,20.3452467976458,20.5235754994317,21.2130080918034,21.7974359637175,22.0539439729047,21.7792469068249,20.9414674197435,20.5475040868561,20.9993412933806,21.4104284534429,21.7836903017277,21.5771096545288,20.8981620595737,21.2382611398456,21.9004754138218,22.0527243368619,22.4192071146026,22.5956553961056,23.4118417818424,22.9253331739153,23.4924601771596,23.3566490949161,23.4422871679606,23.8347378876314,24.642077092412,24.8618692964619,24.8596090936327,24.5909308778941,24.9560391539654,24.7863153969626,24.9432741099026,25.5580076235113,25.8175459761587,25.6176064217994,25.9184368926062,25.1627482830832,26.6068171471434,26.7940854825725,26.4922954203831,26.9955136993832,26.7286054529285,26.7913716801923,26.8061113796508,26.4958952927848,26.3811363363997,25.6936584170577,24.4967521704864,25.4914410460668,26.252992956326,26.0160922853194,24.354999690991,24.1926703858433]
    context.max_cape_to_allow_buying = 26.4
    
    set_long_only()
    
    context.orders = {}
    
    context.in_cash = 0.0 
    
    context.fundamental_df = None
    
    context.max_holdings = 3
    context.rebalance_month = 6
    
    # Check at the end of each month, at market open, whether this is
    # the annual rebalance month
    schedule_function(rebalance,
                      date_rule=date_rules.month_end(),
                      time_rule=time_rules.market_open())
    
    
def get_shares_held(context, stock):
    """ Return the number of shares of the given stock that we hold """

    positions = context.portfolio.positions
    if stock in positions:
        return positions[stock].amount
    return 0

def has_orders(context):
    # Return true if there are pending orders.
    has_orders = False
    #for sec in context.secs:
    orders = get_open_orders() #(sec)
    if orders:
        for oo in orders:                  
            message = 'Open order for {amount} shares in {stock}'  
            #message = 'Open order for {stock}'  
            message = message.format(amount=orders[oo][0].amount, stock=oo.symbol)  
            #log.debug(message)

            has_orders = True
            
    return has_orders

def cancel_orders(context):
    # Cancel all open orders
    #for sec in context.secs:
    orders = get_open_orders() #(sec)
    if orders:
        for oo in orders:                  
            cancel_order(oo)
            
    return True

def rebalance_sell(context, data):
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != context.rebalance_month:
        # track how many positions we're holding
        record(num_positions = len(context.portfolio.positions))
        record(capital_used=context.portfolio.capital_used)
        return
    
    # Cancel open orders if they exist
    if has_orders(context):
        cancel_orders(context)
    
    # Sell present positions:
    for stock in context.portfolio.positions:
        try:
            order_target_percent(stock, 0)
        except:
            log.error("Failed to sell %s maybe it is de-listed?" % (str(stock)))
    
def rebalance(context, data):
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    
    if exchange_time.month != context.rebalance_month:
        # track how many positions we're holding
        record(num_positions = len(context.portfolio.positions))
        record(capital_used=context.portfolio.capital_used)
        record(pnl=context.portfolio.pnl)
        return
    
    # Cancel open orders if they exist
    if has_orders(context):
        log.debug("Cancelling open orders")
        cancel_orders(context)

    # Sell present positions:
    for stock in context.portfolio.positions:
        try:
            order_target_percent(stock, 0)
        except:
            log.error("Failed to sell %s maybe it is de-listed?" % (str(stock)))
    
    month_diff = diff_month(get_datetime(),datetime.datetime(2000,1,1))
    current_cape = context.cape[month_diff]
    
    log.info("CAPE %s" % current_cape)
    
    if current_cape < context.max_cape_to_allow_buying:
        # Only order 2 stocks and don't overlap with existing positions
        order_cnt = 0
        orders = []
        for stock in context.fundamental_df:
            if order_cnt >= context.max_holdings:
                break
            if stock not in context.portfolio.positions:
                
                try:
                    if data[stock].price < 5.0:
                        continue
                    
                    oid = order_target_percent(stock, float((1.0-context.in_cash)/float(context.max_holdings)))
                    
                    orders.append(stock)
                    order_cnt += 1
                    
                    log.info("Bought into %s with AM of %f: %r at a price of %f" % (str(stock), context.fundamental_df[stock]['acq_mult'], oid, data[stock].price))
                    
                except:
                    log.error("Failed to buy into %s with AM of %f" % (str(stock), context.fundamental_df[stock]['acq_mult']))
                    
                try:
                    context.orders[oid] = dict(stock=stock, date=exchange_time)
                except:
                    pass
        
        log.info("Bought: %s" % ', '.join([str(stock) for stock in orders]))
    
    
def before_trading_start(context): 
    """
      Called before the start of each trading day. 
      It updates our universe with the
      securities and values found from fetch_fundamentals.
    """
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

    if exchange_time.month != (context.rebalance_month - 1) and context.fundamental_df is not None:
        # Should check for the specific rebalance day too...
        update_universe(context.fundamental_df.columns.values)
        return
    else:
        
        fundamental_df = get_fundamentals(
            query(
                fundamentals.valuation_ratios.ev_to_ebitda,
                fundamentals.valuation.market_cap,
                fundamentals.valuation.enterprise_value,
                fundamentals.income_statement.ebit
            )
            
            # No Financials (103), Real Estate (104) or Utility (207) Stocks, no ADR or PINK, only USA
            .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 104)
            .filter(fundamentals.asset_classification.morningstar_sector_code != 207)
            .filter(fundamentals.company_reference.country_id == "USA")
            .filter(fundamentals.share_class_reference.is_depositary_receipt == False)
            .filter(fundamentals.share_class_reference.is_primary_share == True)
            # Only pick active stocks
            .filter(fundamentals.share_class_reference.share_class_status == "A")
            # Only Common Stock
            .filter(fundamentals.share_class_reference.security_type == "ST00000001")
            .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK")
            .filter(fundamentals.valuation.market_cap != None)
            .filter(fundamentals.valuation.market_cap >= 2e9)
            .filter(fundamentals.valuation.shares_outstanding != None)
            .filter(fundamentals.valuation_ratios.ev_to_ebitda != None)
            .order_by(fundamentals.valuation.market_cap.desc())
        )
        
        # Eliminate possible bankruptcies in progress
        remove = []
        for stock in fundamental_df:
            if str(stock.symbol).endswith("_WI"):
                remove.append(stock)
                
        for stock in remove:
            del(fundamental_df[stock])
                         
        # Add the Acquirer's Multiple score
        temp = fundamental_df.T
        
        ######################################################################
        # https://youtu.be/1r1vJZ80Z7I?t=35m18s
        # Acquirer's multiple is better to use EBIT rather than EBITDA   
        # but the back test has better results with EBITDA

        temp['acq_mult'] = temp['ev_to_ebitda']
        
        # Heres the EBIT calc to test out
        #temp['acq_mult'] = temp['enterprise_value'] / temp['ebit']
        
        #######################################################################
        
        temp.sort('acq_mult', ascending=True, inplace=True)
        
        temp = temp.head(200)
        
        fundamental_df = temp.T

        # Filter out our stocks
        context.stocks = [stock for stock in fundamental_df]

        # Update context.fundamental_df with the securities that we need
        context.fundamental_df = fundamental_df[context.stocks]
        
        update_universe(context.fundamental_df.columns.values)
    
def handle_data(context, data):
    pass

Thanks for sharing. The result looks pretty amazing. I think we need to figure out how to get Shiller CAPE on-the-fly instead of hard-coded in the algo.

Any thought?

I agree - the best source I've found is the XLS on Yale's website. It's constantly updated.

Can we read XLS files directly in Python?

I am using Yale's website as well. One way to work around this might be to use the built-in fetch_csv function to read data from Dropbox, as follows.

fetch_csv('https://dl.dropboxusercontent.com/u/169032081/fetcher_sample_file.csv',  
               date_column = 'Settlement Date',  
               date_format = '%m/%d/%y')  

The only question I have is whether fetcher is supported for real-money trading yet. Another question is how to use fetcher to read in an Excel file.
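On the second question: one possible workaround is to convert the XLS to CSV offline with pandas and host the CSV somewhere fetch_csv can reach. The sketch below assumes the spreadsheet's fractional date convention (e.g. 1881.1 meaning October 1881) and uses made-up column names; verify both against the current Yale file before relying on it:

```python
import pandas as pd

def shiller_month(frac):
    """Decode Shiller's fractional dates: 1881.01 = Jan 1881, 1881.1 = Oct 1881."""
    year, month = ("%.2f" % float(frac)).split(".")
    return pd.Timestamp(int(year), int(month), 1)

# Hypothetical frame standing in for two columns of the Yale 'Data' sheet;
# in practice you would load it first with pd.read_excel(...).
raw = pd.DataFrame({"Date": [2016.01, 2016.1], "CAPE": [24.2, 26.5]})
out = pd.DataFrame({"Settlement Date": raw["Date"].map(shiller_month),
                    "value": raw["CAPE"]})
out.to_csv("cape.csv", index=False)  # host this where fetch_csv can reach it
```

The resulting CSV can then be wired into the fetch_csv call shown above.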

Hi all,

This looked interesting so I added a few things in as my first attempt at an algorithm. I haven’t seen any performance as good as the above results but I wanted to try adding some features. Specifically:
- I switched over to pipeline for the fundamentals data
- I added a filter based on this: https://www.quantopian.com/posts/how-to-filter-equity-types-or-classes-in-pipeline
- I found the CAPE data as a csv on quandl so I added a fetch_csv for it
- I added the Operating Earnings from http://acquirersmultiple.com/faq/
- I added a positivity filter
- It seems that Morningstar's ev_to_ebitda ratio uses the sum of the last 4 reported quarters of ebitda, while their ebitda field is just the last quarterly value, so I approximated annual ebitda by averaging the last year's daily values and multiplying by 4 (see, e.g., https://www.quantopian.com/posts/ev-slash-ebitda-value-then-momentum)
- I added a small check so the universe only updates at the end of the month
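The ttm approximation mentioned in the list above (averaging a year of daily-reported quarterly values and multiplying by 4) can be sanity-checked in plain numpy; the figures below are made up:

```python
import numpy as np

# Four hypothetical quarterly EBITDA figures, each visible daily for ~63 sessions
quarters = [100.0, 110.0, 90.0, 120.0]
daily = np.repeat(quarters, 63)   # 252 daily observations, like the pipeline sees

ttm_exact = sum(quarters)         # true trailing-twelve-month total
ttm_approx = daily.mean() * 4     # the SimpleMovingAverage(...) * 4 trick
```

When the quarters align evenly like this the two agree exactly; in practice uneven reporting windows introduce some drift.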

So far the results aren't nearly as good, and nothing comes close to using the built-in ev_to_ebitda ratio. I suspect it is because the pipeline filter is less capable (e.g. I can't filter out non-US stocks). Also, I still think there are differences between how ev_to_ebitda is reported and my estimate of ttm ebitda that cause very different results for some stocks.

And the backtest is pretty slow; I suspect there are some ways to access pipeline data more efficiently.

This is my first attempt at an algorithm and I'm fairly new to Python. Most of my ideas are based on reading posts from the forum. Hopefully some of this is useful. Let me know any feedback and ideas to improve the results. Thanks.

"""
Original source: https://www.quantopian.com/posts/acquirers-multiple-based-on-deep-value-number-fundamentals
"""

import pandas as pd
import numpy as np
import time
from quantopian.pipeline import Pipeline
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import Latest, SimpleMovingAverage, Returns
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.algorithm import attach_pipeline, pipeline_output


UniverseSize = 500


class UniverseFilter(CustomFactor):  
    """
    Filter out undesired stocks
    Source: https://www.quantopian.com/posts/how-to-filter-equity-types-or-classes-in-pipeline
    """
    window_length = 1
    inputs = [morningstar.share_class_reference.is_primary_share,
              morningstar.share_class_reference.is_depositary_receipt,
              morningstar.asset_classification.morningstar_sector_code,
              # morningstar.company_reference.country_id,
              # morningstar.share_class_reference.share_class_status,
              # morningstar.share_class_reference.security_type
              ]
    def compute(self, today, assets, out, is_primary_share, is_depositary_receipt, sector_code,
                # country_id, share_class_status, security_type
                ):
        criteria = is_primary_share[-1] # Only primary Common Stock
        criteria = criteria & (~is_depositary_receipt[-1]) # No ADR
        # criteria = criteria & (sector_code[-1] != 101) # No Basic Materials
        criteria = criteria & (sector_code[-1] != 103) # No Financials
        criteria = criteria & (sector_code[-1] != 104) # No Real Estate
        criteria = criteria & (sector_code[-1] != 207) # No Utilities
        # Exclude When Distributed(WD), When Issued(WI) and VJ (bankruptcy) and Halted stocks (V, H)
        def accept_symbol(equity):
            symbol = equity.symbol
            if symbol.endswith("_PR") or symbol.endswith("_WI") or symbol.endswith("_WD") or \
               symbol.endswith("_VJ") or symbol.endswith("_V") or symbol.endswith("_H"):
                return False
            else:
                return True
        # Only NYSE, AMEX and Nasdaq
        def accept_exchange(equity):
            exchange = equity.exchange
            if exchange == "NEW YORK STOCK EXCHANGE" or  exchange == "AMERICAN STOCK EXCHANGE" or \
               exchange.startswith("NASDAQ"):
                return True
            else:
                return False
        vsid = np.vectorize(sid)
        equities = vsid(assets)
        vaccept_symbol = np.vectorize(accept_symbol)
        accept_symbol = vaccept_symbol(equities)
        criteria = criteria & (accept_symbol)
        vaccept_exchange = np.vectorize(accept_exchange)
        accept_exchange = vaccept_exchange(equities)
        criteria = criteria & (accept_exchange)
        out[:] = criteria.astype(float)

class AquirersMultiple(CustomFactor):
    inputs = [morningstar.valuation.enterprise_value, morningstar.income_statement.ebitda]  # try ebit or ebitda
    window_length = 1
    def compute(self, today, assets, out, ev, denominator):
        out[:] = 1.0 * ev[-1] / denominator[-1]

class MarketCap(CustomFactor):
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding] 
    window_length = 1
    def compute(self, today, assets, out, close, shares):
        out[:] = close[-1] * shares[-1]


def preview_csv(df):
    # log.info(' \n %s ' % df.head())
    return df
def rename_csv_col(df):
    df.columns = map(str.lower, df.columns)
    # df = df.fillna(method='ffill')
    # log.info(' \n %s ' % df.head())
    return df


def initialize(context):
    
    set_long_only()
    
    context.pool = None
    context.is_month_end = False
    context.orders = {}
    context.in_cash = 0.0 
    context.max_holdings = 3
    context.rebalance_month = 6
    context.CAPE_critical = 26.4
    context.price_threshold = 5.0
    
    criteria = UniverseFilter()
    criteria_filter = (criteria > 0)
    mkt_cap = morningstar.valuation.market_cap.latest
    mkt_cap_calc = MarketCap()
    ev = morningstar.valuation.enterprise_value.latest
    revenue = morningstar.income_statement.total_revenue.latest
    cogs = morningstar.income_statement.cost_of_revenue.latest
    sga = morningstar.income_statement.selling_general_and_administration.latest
    da = morningstar.income_statement.depreciation_and_amortization.latest
    ebitda = morningstar.income_statement.ebitda.latest
    ebit = morningstar.income_statement.ebit.latest
    ev_to_ebitda = morningstar.valuation_ratios.ev_to_ebitda.latest
    ebitda_calc = 1.0 * ev / ev_to_ebitda
    # Operating Earnings: http://acquirersmultiple.com/faq/
    oibdp = revenue - cogs - sga - da
    # Approximate yearly value over trailing 12 months (ttm)
    # Note that start date of backtest must be > 252 days after 1/3/2002, which is when morningstar data starts being available
    revenue_ttm = SimpleMovingAverage(inputs=[morningstar.income_statement.total_revenue], window_length=252) * 4
    cogs_ttm = SimpleMovingAverage(inputs=[morningstar.income_statement.cost_of_revenue], window_length=252) * 4
    sga_ttm = SimpleMovingAverage(inputs=[morningstar.income_statement.selling_general_and_administration], window_length=252) * 4
    da_ttm = SimpleMovingAverage(inputs=[morningstar.income_statement.depreciation_and_amortization], window_length=252) * 4
    ebitda_ttm = SimpleMovingAverage(inputs=[morningstar.income_statement.ebitda], window_length=252) * 4
    ebit_ttm = SimpleMovingAverage(inputs=[morningstar.income_statement.ebit], window_length=252) * 4
    oibdp_ttm = revenue_ttm - cogs_ttm - sga_ttm - da_ttm
    positive_filter = (ebitda_calc > 0)
    am = ev_to_ebitda
    # am = AquirersMultiple()
    # am = 1.0 * ev / ebitda
    # am = 1.0 * ev / ebitda_ttm
    # am = 1.0 * ev / oibdp
    # am = 1.0 * ev / oibdp_ttm
    # mkt_cap_rank = mkt_cap.rank(ascending=False)
    # mkt_cap_filter = (mkt_cap_rank > 0) & (mkt_cap_rank <= 500)
    mkt_cap_filter = (mkt_cap >= 2e9)
    am_rank = am.rank(ascending=True)#, mask=positive_filter)
    am_filter = (am_rank < UniverseSize)
    
    pipe = Pipeline()
    # pipe.add(mkt_cap, 'mkt_cap')
    pipe.add(am, 'am')
    pipe.add(am_rank, 'am_rank')
    # pipe.add(am1, 'am1')
    
    pipe.set_screen(
        mkt_cap_filter
        & am_filter
        & criteria_filter
        # &
    )
    
    pipe = attach_pipeline(pipe, name='AM')


    # Shiller CAPE from quandl
    fetch_csv('https://www.quandl.com/api/v3/datasets/MULTPL/SHILLER_PE_RATIO_MONTH.csv',
              date_column='Date',
              date_format='%m/%d/%Y',
              symbol='SP500_CAPE',
              pre_func=preview_csv,
              post_func=rename_csv_col)
    
    schedule_function(rebalance,
                      # date_rule=date_rules.every_day(),
                      date_rule=date_rules.month_end(),
                      time_rule=time_rules.market_open())
    schedule_function(set_month_end,
                      date_rule=date_rules.month_end(1),
                      time_rule=time_rules.market_close())



def rebalance(context, data):
    context.is_month_end = False
        
    # print(context.pool.iloc[0, :])
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    if exchange_time.month != context.rebalance_month:
        # track how many positions we're holding
        record(num_positions = len(context.portfolio.positions))
        record(capital_used=context.portfolio.capital_used)
        record(pnl=context.portfolio.pnl)
        return
    
    # Cancel open orders if they exist
    if has_orders(context):
        log.debug("Has open orders - doing nothing!")
        cancel_orders(context)

    # Sell present positions
    # sold_stocks = []
    for stock in context.portfolio.positions:
        try:
            order_target_percent(stock, 0)
            # sold_stocks.append(stock)
        except:
            log.error("Failed to sell %s maybe it is de-listed?" % (str(stock)))
    
    # Only buy if CAPE is low enough
    if not can_buy(context, data):
        log.info("CAPE too high (%.2f); not buying" % (data['SP500_CAPE'].value))
        return

    # Only order max_holdings stocks and don't overlap with existing positions
    order_cnt = 0
    orders = []
    for stock in context.pool.index:
        if order_cnt >= context.max_holdings:
            break
        if stock not in context.portfolio.positions:
            try:
                if data[stock].price < context.price_threshold:
                    continue
                # if stock in sold_stocks:
                #     continue
                oid = order_target_percent(stock, float((1.0-context.in_cash)/float(context.max_holdings)))
                orders.append(stock)
                order_cnt += 1
                log.info("Bought into %s with AM of %f: %r at a price of %f" % (str(stock), context.pool['am'][stock], oid, data[stock].price))
            except:
                log.error("Failed to buy into %s with AM of %f" % (str(stock), context.pool['am'][stock]))
            try:
                context.orders[oid] = dict(stock=stock, date=exchange_time)
            except:
                pass
    log.info("Bought: %s" % ', '.join([str(stock) for stock in orders]))

    # log.info("Lowest AM: \n %s" % context.pool.iloc[0:20, :])
    # log.info("sold stocks = %s" % ', '.join([str(stock) for stock in sold_stocks]))
    # if 'value' in data['SP500_CAPE']:
    #    log.info('CAPE = %f' % data['SP500_CAPE'].value)

# This returns the global switch as to whether we can add any new positions,
# or only sell/rebalance positions.
def can_buy(context, data):
    latest = data['SP500_CAPE'].value
    return latest < context.CAPE_critical

def set_month_end(context, data):
    context.is_month_end = True

def has_orders(context):
    # Return true if there are pending orders.
    has_orders = False
    orders = get_open_orders()
    if orders:
        for oo in orders:                  
            message = 'Open order for {amount} shares in {stock}'  
            # message = 'Open order for {stock}'  
            message = message.format(amount=orders[oo][0].amount, stock=oo.symbol)  
            log.debug(message)
        has_orders = True
    return has_orders

def cancel_orders(context):
    # Cancel all open orders
    orders = get_open_orders() #(sec)
    if orders:
        for oo in orders:                  
            cancel_order(oo)
    return True

def before_trading_start(context, data):
    
    exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')
    if not(exchange_time.month == context.rebalance_month and context.is_month_end) \
       and context.pool is not None:
        # Only update universe on months where trading occurs
        return
    
    output = pipeline_output('AM').sort('am_rank')
    context.pool = output
    update_universe(output.index)
    # print("updating universe")

def handle_data(context, data):
    pass

I've run the algorithm (1) between June 2008 and October 2016. Up until June 2014 the algorithm performs as expected, beating the market, but from that point on it declines; from June 2015 onwards it tanks, and the end result underperforms the market by 44%.

(1) I modified the algorithm to calculate ebitda_ttm using the last 4 quarters. I believe the ev_to_ebitda that comes out of the box uses the last quarter's ebitda times four.
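A made-up seasonal example shows how far apart the two conventions can be:

```python
# Hypothetical quarterly EBITDA for a seasonal business, oldest to newest
quarters = [50.0, 80.0, 90.0, 200.0]

ttm = sum(quarters)            # sum of the last four quarters
annualized = quarters[-1] * 4  # last reported quarter times four
# For seasonal earners the two diverge badly, which reshuffles EV/EBITDA ranks
```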

"""
    Acquirer's Multiple Trading Strategy using Fundamental Data
    
    Adapted from Tobias Carlisle's "Deep Value"
    
    1. Rank by EV/EBITDA (lowest is best)
    2. Buy top 30 ranked stocks each month
    3. Rebalance after 366 days
"""

import pandas as pd
import numpy as np
import datetime
import pytz
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import CustomFactor
from quantopian.pipeline.filters import CustomFilter
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters.morningstar import IsDepositaryReceipt

class IsUSAStock(CustomFilter):  
    inputs = [morningstar.company_reference.country_id]  
    window_length = 1
    
    def compute(self, today, assets, out, ref):  
       out[:] = ref[-1] == "USA"  
    
class IsPinkSheetExchange(CustomFilter):
    inputs = [ morningstar.share_class_reference.exchange_id ]
    window_length = 1
    
    def compute(self, today, assets, out, ref):
        out[:] = ref[-1] == "OTCPK"

class MarketCap(CustomFactor):
    inputs = [morningstar.valuation.shares_outstanding, USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, shares, close_price):
        out[:] = shares[-1] * close_price[-1]

class EBITDA(CustomFactor):  
    # Pre-declare inputs and window_length  
    inputs = [morningstar.income_statement.ebitda]  
    window_length = 200  
    # Sum the last four reported quarterly values
    def compute(self, today, assets, out, ebitda):  
        out[:] = ebitda[-1]+ebitda[-64]+ebitda[-127]+ebitda[-190]  

class EnterpriseValue(CustomFactor):  
    # Pre-declare inputs and window_length  
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding, morningstar.balance_sheet.long_term_debt,  
              morningstar.balance_sheet.long_term_debt_and_capital_lease_obligation,  
              morningstar.balance_sheet.cash_cash_equivalents_and_marketable_securities]  
    window_length = 1  
    # Market cap plus debt-like claims, less cash
    def compute(self, today, assets, out, close, shares,  
                long_term_debt, long_term_debt_and_capital_lease_obligation,  
                cash_cash_equivalents_and_marketable_securities):  
        out[:] = close[-1] * shares[-1] + long_term_debt_and_capital_lease_obligation[-1] - cash_cash_equivalents_and_marketable_securities[-1]        
        
# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
    context.max_num_of_securities = 30
    context.target_stocks = []
    context.last_rebalance = datetime.datetime(1900,1,1,0,0,0,0,pytz.utc)
    
    attach_pipeline(make_pipeline(context), 'deep_value')
    schedule_function(func=rebalance,              
                      date_rule=date_rules.week_start(),
                      time_rule=time_rules.market_open(),
                      half_days=True)

def make_pipeline(context):
    pipe = Pipeline()
    
    ebitda = EBITDA()
    ev = EnterpriseValue()
    ev_to_ebitda = ev/ebitda    
        
    pipe.add(ev_to_ebitda, 'ev_to_ebitda')  
  
    market_cap = MarketCap()
    sector = Sector()
    is_depositary_receipt = ~IsDepositaryReceipt()
  
    market_cap_filter = market_cap > 100e6
    ebitda_filter = ebitda > 0
    country_filter = IsUSAStock()
    exchange_filter = ~IsPinkSheetExchange()
    sector_filter = sector.eq(101) or \
                    sector.eq(102) or \
                    sector.eq(205) or \
                    sector.eq(206) or \
                    sector.eq(308) or \
                    sector.eq(309) or \
                    sector.eq(310) or \
                    sector.eq(311)
    
    pipe.set_screen(market_cap_filter and 
                    ebitda_filter and 
                    sector_filter and 
                    is_depositary_receipt and
                    exchange_filter and
                    country_filter)
    
    return pipe
    
def rebalance(context,data):
    now = get_datetime()
    
    record('cash', context.portfolio.cash, 
           'portfolio value', context.portfolio.portfolio_value,
           'positions value', context.portfolio.positions_value, 
           'positions', len(context.portfolio.positions))  
    
    if (now - context.last_rebalance).days < 366:
        return
    
    context.last_rebalance = now
    context.target_stocks = [ stock for stock in context.fundamentals] 
    for stock in context.portfolio.positions:
        oid = order_target(stock, 0)
        #log.info("Sold[%r] %s" % (oid, stock.symbol))
            
    for stock in context.target_stocks:
        for order in get_open_orders(stock):
            cancel_order(order)
        oid = order_target_percent(stock, 1.0/context.max_num_of_securities)
        #log.info("Bought[%r] %s " % (oid, stock.symbol))
   
def before_trading_start(context,data): 
    """
      Called before the start of each trading day. 
      It updates our universe with the
      securities and values found from fetch_fundamentals.
    """
    now = get_datetime()
    
    if (now - context.last_rebalance).days < 366:
        return
    
    stocks = pipeline_output('deep_value') \
             .sort_values('ev_to_ebitda',ascending=True) \
             .head(30)
    
    context.fundamentals = stocks.T
    
    # context.fundamentals = get_fundamentals(
    #         query(
    #              fundamentals.valuation_ratios.ev_to_ebitda,
    #         )
    #         # No Financials (103), Real Estate (104) or Utility (207) Stocks, no ADR or PINK, only USA
    #         .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
    #         .filter(fundamentals.asset_classification.morningstar_sector_code != 104)
    #         .filter(fundamentals.asset_classification.morningstar_sector_code != 207)
    #         .filter(fundamentals.company_reference.country_id == "USA")
    #         .filter(fundamentals.share_class_reference.is_depositary_receipt == False)
    #         .filter(fundamentals.share_class_reference.is_primary_share == True)
    #         .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK")
    #         .filter(fundamentals.valuation.shares_outstanding != None)
    #         .filter(fundamentals.valuation.market_cap != None)
    #         .filter(fundamentals.valuation.market_cap > 2e9)
    #         .filter(fundamentals.income_statement.ebitda != None)
    #         .filter(fundamentals.income_statement.ebitda >= 0.0)
    #         .filter(fundamentals.valuation.shares_outstanding != None)
    #         .filter(fundamentals.valuation_ratios.ev_to_ebitda != None)
    #         .order_by(fundamentals.valuation_ratios.ev_to_ebitda.asc())
    #         .limit(context.max_num_of_securities)
    #     )


# Will be called on every trade event for the securities you specify.
def handle_data(context, data):
    pass

Thanks Adam.

Maybe a little bit late, but for future users who want to use the algos listed above: be aware!

The algos listed above do not select deep-value stocks; they select the lowest EV/EBITDA ratio. The algo therefore (nearly always) selects stocks with a negative EV/EBITDA ratio, which results from negative EBITDA (not deep value, and not negative EV). The highest-ranked stocks have a small negative EBITDA: e.g. EV = 100 and EBITDA = -1 gives a ratio of -100, which ranks first in an ascending sort. The screen will never select good ratios in the 6-10 range.
With a small effort you can filter out this error.
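The ranking pitfall can be reproduced in a few lines of pandas (tickers and figures are made up); masking on positive EBITDA before ranking removes it:

```python
import pandas as pd

# Made-up figures: every firm has EV = 100
ebitda = pd.Series({"A": 8.0, "B": -1.0, "C": 2.0})
ev_to_ebitda = 100.0 / ebitda

# Ascending sort puts B's ratio of -100 first, ahead of the genuine bargains
naive = ev_to_ebitda.sort_values().index.tolist()
# Requiring positive EBITDA before ranking restores the intended order
masked = ev_to_ebitda[ebitda > 0].sort_values().index.tolist()
```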

Maybe it was just luck that these algos turned out with these results. However, it shows that if no analysis is made of the selected stocks, one tends to look only at the algo's performance (price) and not at the intended goal. So you go astray.

Another thing: it would be nice if fundamentals behaved like a law of physics. I was surprised when I looked at the level of company inconsistency in reporting. There are 310 stocks whose ebitda changed only once over the last year according to this. Using FCount() from there, I'd be curious what you find when adding a screen for ebitda value-change frequency (first column here) above 4.0, for example. My stab at it in Mariano's code doubled the returns.

changes  stocks
    5.0    1123
    4.0     714
    3.0       7
    2.0       4
    1.0     310
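A minimal sketch of the change-counting idea behind FCount, using a made-up daily series (here the first diff row is ignored rather than counted):

```python
import pandas as pd

# Made-up daily snapshots of a quarterly fundamental: the first value repeats
# for two quarters, i.e. the company only "changed" its figure twice all year
daily_ebitda = pd.Series([5.0] * 126 + [6.0] * 63 + [7.0] * 63)

# Same idea as the FCount custom factor: count non-zero day-over-day diffs
changes = int((daily_ebitda.diff().fillna(0) != 0).sum())
```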

In filters, instead of 'or' and 'and', use '|' and '&'. When I corrected those, returns went up further.
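Pipeline Filters combine elementwise only via `|` and `&`; Python's `or`/`and` cannot be overloaded to do that. The same trap is easy to demonstrate with plain pandas Series:

```python
import pandas as pd

a = pd.Series([True, False, True])
b = pd.Series([True, True, False])

combined = (a & b).tolist()   # elementwise AND works as intended

# Python's `and` asks for a single truth value and fails on array-likes
failed = False
try:
    a and b
except ValueError:
    failed = True
```

With pipeline Filter objects the failure mode is quieter but equally wrong, so the `|`/`&` correction matters.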

@CarloG mine filters ebitda > 0 and yet the results are the same.

ebitda > 0 and yet the results are the same

It is always positive. Pipeline preview is easy to add and shows column values. Here, I added an ebitda column.

    pipe.add(ev_to_ebitda, 'ev_to_ebitda')  
    pipe.add(ebitda,       'ebitda')  

Log:

                           min              mean               max  
      ebitda        55177000.0     571200633.333      2976000000.0  
ev_to_ebitda     1.66134270942     4.46710351926     5.65439201921  

@BlueSeahawk, made the changes you suggested but it doesn't seem to make much of a difference

"""
    Acquirer's Multiple Trading Strategy using Fundamental Data
    
    Adapted from Tobias Carlisle's "Deep Value"
    
    1. Rank by EV/EBITDA (lowest is best)
    2. Buy top 30 ranked stocks each month
    3. Rebalance after 366 days
"""

import pandas as pd
import numpy as np
import datetime
import pytz
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import CustomFactor
from quantopian.pipeline.filters import CustomFilter
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.data import Fundamentals
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters.morningstar import IsDepositaryReceipt

class FCount(CustomFactor):
    def compute(self, today, assets, out, z):
        df   = pd.DataFrame(z, columns=assets)  
        changes = (df.fillna(0).diff() != 0).sum() 
        out[:] = changes.values                      

class IsUSAStock(CustomFilter):  
    inputs = [morningstar.company_reference.country_id]  
    window_length = 1
    
    def compute(self, today, assets, out, ref):  
       out[:] = ref[-1] == "USA"  
    
class IsPinkSheetExchange(CustomFilter):
    inputs = [ morningstar.share_class_reference.exchange_id ]
    window_length = 1
    
    def compute(self, today, assets, out, ref):
        out[:] = ref[-1] == "OTCPK"

class MarketCap(CustomFactor):
    inputs = [morningstar.valuation.shares_outstanding, USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, shares, close_price):
        out[:] = shares[-1] * close_price[-1]

class EBITDA(CustomFactor):  
    inputs = [morningstar.income_statement.gross_profit, morningstar.income_statement.general_and_administrative_expense,
             morningstar.income_statement.ebitda]  
    window_length = 200  

    def compute(self, today, assets, out, gross_profit, sga, ebitda): 

        out[:] = ebitda[-1]+ebitda[-64]+ebitda[-127]+ebitda[-190]

class EnterpriseValue(CustomFactor):  
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding, morningstar.balance_sheet.total_debt,  
morningstar.balance_sheet.cash_cash_equivalents_and_marketable_securities, morningstar.balance_sheet.preferred_stock, morningstar.balance_sheet.minority_interest]  
    window_length = 1  

    def compute(self, today, assets, out, close, shares,  
                debt, cash, preferred_stock, minority_interest):  
        out[:] = (close[-1] * shares[-1]) + preferred_stock[-1] + debt[-1] + minority_interest[-1] - cash[-1]        
        
def initialize(context):
    context.max_num_of_securities = 30   
    context.target_stocks = []
    context.last_rebalance = datetime.datetime(1900,1,1,0,0,0,0,pytz.utc)
    
    attach_pipeline(make_pipeline(context), 'deep_value')
    schedule_function(func=rebalance,              
                      date_rule=date_rules.week_start(),
                      time_rule=time_rules.market_open(),
                      half_days=True)

def make_pipeline(context):
    pipe = Pipeline()
    
    ebitda = EBITDA()
    ev = EnterpriseValue()
    ev_to_ebitda = ev/ebitda    
    
    m = QTradableStocksUS() 
    f = Fundamentals.roic
    fcount_filter = FCount(inputs=[f], window_length=252, mask=m) > 4
    
    pipe.add(ev,'ev')
    pipe.add(ebitda, 'ebitda')
    pipe.add(ev_to_ebitda, 'ev_to_ebitda')  
  
    market_cap = MarketCap()
    sector = Sector()
    is_depositary_receipt = ~IsDepositaryReceipt()
  
    market_cap_filter = market_cap > 100e6
    ebitda_filter = (ebitda > 0) & (ebitda < 30e9)
    country_filter = IsUSAStock()
    exchange_filter = ~IsPinkSheetExchange()
    sector_filter = sector.eq(101) | \
                    sector.eq(102) | \
                    sector.eq(205) | \
                    sector.eq(206) | \
                    sector.eq(308) | \
                    sector.eq(309) | \
                    sector.eq(310) | \
                    sector.eq(311)
    
    pipe.set_screen(ebitda_filter &
                    market_cap_filter & 
                    fcount_filter &
                    sector_filter &
                    is_depositary_receipt &
                    exchange_filter &
                    country_filter)
    
    return pipe
    
def rebalance(context,data):
    now = get_datetime()
    
    record('cash', context.portfolio.cash, 
           'portfolio value', context.portfolio.portfolio_value,
           'positions value', context.portfolio.positions_value, 
           'positions', len(context.portfolio.positions))  
    
    if (now - context.last_rebalance).days < 366:
        return
    
    context.last_rebalance = now
    context.target_stocks = [ stock for stock in context.fundamentals] 
    for stock in context.portfolio.positions:
        oid = order_target(stock, 0)
        #log.info("Sold[%r] %s" % (oid, stock.symbol))
            
    for stock in context.target_stocks:
        for order in get_open_orders(stock):
            cancel_order(order)
        oid = order_target_percent(stock, 1.0/context.max_num_of_securities)
        #log.info("Bought[%r] %s" % (oid, stock.symbol))
   
def before_trading_start(context,data): 

    now = get_datetime()
    
    if (now - context.last_rebalance).days < 366:
        return
    
    stocks = pipeline_output('deep_value') \
             .sort_values('ev_to_ebitda',ascending=True) \
             .head(context.max_num_of_securities)
    
    log.info(stocks)
    context.fundamentals = stocks.T
    
def handle_data(context, data):
    pass
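The EBITDA factor above approximates trailing-twelve-month EBITDA by sampling the 200-day window at roughly one-quarter (63 trading day) offsets and summing. A minimal sketch of why that works, on a toy daily series where the quarterly value changes four times over the window:

```python
import numpy as np

# Toy daily series: four quarterly EBITDA values, each repeated over ~50 trading days.
daily_ebitda = np.repeat([10.0, 20.0, 30.0, 40.0], 50)  # 200 "days"

# Sample one day per quarter, as the factor does, and sum the four samples.
ttm = daily_ebitda[-1] + daily_ebitda[-64] + daily_ebitda[-127] + daily_ebitda[-190]
print(ttm)  # 100.0 == 10 + 20 + 30 + 40
```

If a quarter spans more or fewer trading days than assumed, a sample can land in the wrong quarter and double-count one value, which is one reason the TrailingTwelveMonths approach later in this thread is more robust.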

Try replacing roic (from the original you copied) with ebitda. You made other changes too; I've merged some of them here, though not all. In particular, you might want to re-add your new ebitda and ev factors. Also try FCount > 3, which would let another 714 stocks in. Note that the FCount mask is QTradableStocksUS, so that is surely a major change too.
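For reference, FCount just counts how many times each column's value changed over the window (NaNs are filled with zero first, so a column's first diff row also registers as a change). A toy reproduction of its compute step, with hypothetical assets A and B:

```python
import numpy as np
import pandas as pd

# A 5-day window for two assets: A reports three distinct values, B starts as NaN.
z = np.array([[1.0, np.nan],
              [1.0, 5.0],
              [2.0, 5.0],
              [2.0, 5.0],
              [3.0, 5.0]])

df = pd.DataFrame(z, columns=['A', 'B'])
changes = (df.fillna(0).diff() != 0).sum()  # same expression as FCount.compute
print(changes['A'], changes['B'])  # 3 2
```

Because the first row always counts as one change, a threshold of `> 4` effectively demands at least four genuine value changes over the window.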

"""
    Acquirer's Multiple Trading Strategy using Fundamental Data
    
    Adapted from Tobias Carlisle's "Deep Value"
    
    1. Rank by EV/EBITDA (lowest is best)
    2. Buy top 30 ranked stocks each month
    3. Rebalance after 366 days
"""

import pandas as pd
import numpy as np
import datetime
import pytz
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import CustomFactor
from quantopian.pipeline.filters import CustomFilter, QTradableStocksUS
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar, Fundamentals
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters.morningstar import IsDepositaryReceipt

class FCount(CustomFactor):
    def compute(self, today, assets, out, z):
        df = pd.DataFrame(z, columns=assets)                   # wrap the window in a DataFrame
        changes = (df.fillna(0).diff() != 0).sum()             # count value changes per column (NaNs filled with 0)
        out[:] = changes.values                                # back to an ndarray

class IsUSAStock(CustomFilter):  
    inputs = [morningstar.company_reference.country_id]  
    window_length = 1
    
    def compute(self, today, assets, out, ref):  
        out[:] = ref[-1] == "USA"
    
class IsPinkSheetExchange(CustomFilter):
    inputs = [ morningstar.share_class_reference.exchange_id ]
    window_length = 1
    
    def compute(self, today, assets, out, ref):
        out[:] = ref[-1] == "OTCPK"

class MarketCap(CustomFactor):
    inputs = [morningstar.valuation.shares_outstanding, USEquityPricing.close]
    window_length = 1

    def compute(self, today, assets, out, shares, close_price):  
        out[:] = shares * close_price

class EBITDA(CustomFactor):  
    inputs = [morningstar.income_statement.ebitda]  
    window_length = 200  
    def compute(self, today, assets, out, ebitda):  
        out[:] = ebitda[-1]+ebitda[-64]+ebitda[-127]+ebitda[-190]  

class EnterpriseValue(CustomFactor):  
    inputs = [USEquityPricing.close, morningstar.valuation.shares_outstanding, morningstar.balance_sheet.long_term_debt,  
              morningstar.balance_sheet.long_term_debt_and_capital_lease_obligation,  
              morningstar.balance_sheet.cash_cash_equivalents_and_marketable_securities]  
    window_length = 1  
    def compute(self, today, assets, out, close, shares,  
                long_term_debt, long_term_debt_and_capital_lease_obligation,  
                cash_cash_equivalents_and_marketable_securities):  
        out[:] = close[-1] * shares[-1] + long_term_debt_and_capital_lease_obligation[-1] - cash_cash_equivalents_and_marketable_securities[-1]        
        
def initialize(context):
    context.max_num_of_securities = 30
    context.target_stocks = []
    context.last_rebalance = datetime.datetime(1900,1,1,0,0,0,0,pytz.utc)
    
    attach_pipeline(make_pipeline(context), 'deep_value')
    schedule_function(func=rebalance,              
                      date_rule=date_rules.week_start(),
                      time_rule=time_rules.market_open(),
                      half_days=True)

def make_pipeline(context):
    pipe = Pipeline()
    
    ebitda = EBITDA()
    ev = EnterpriseValue()
    ev_to_ebitda = ev/ebitda    
        
    pipe.add(ev_to_ebitda, 'ev_to_ebitda')  
    pipe.add(ebitda,       'ebitda')  
    pipe.add(ev,'ev')
  
    market_cap = MarketCap()
    sector = Sector()
    is_depositary_receipt = ~IsDepositaryReceipt()
  
    market_cap_filter = market_cap > 100e6
    ebitda_filter = ebitda > 0 
    #ebitda_filter = (ebitda > 0) & (ebitda < 30e9)

    country_filter = IsUSAStock()
    exchange_filter = ~IsPinkSheetExchange()
    fives = FCount(
        inputs=[ Fundamentals.ebitda ], 
        window_length=252, 
        mask=QTradableStocksUS()
    ) > 4
    sector_filter = sector.eq(101) | \
                    sector.eq(102) | \
                    sector.eq(205) | \
                    sector.eq(206) | \
                    sector.eq(308) | \
                    sector.eq(309) | \
                    sector.eq(310) | \
                    sector.eq(311)
    
    pipe.set_screen(market_cap_filter     & 
                    ebitda_filter         & 
                    sector_filter         & 
                    is_depositary_receipt &
                    exchange_filter       &
                    country_filter        &
                    fives
                    )
    
    return pipe
    
def rebalance(context,data):
    now = get_datetime()
    
    record('cash', context.portfolio.cash, 
           'portfolio value', context.portfolio.portfolio_value,
           'positions value', context.portfolio.positions_value, 
           'positions', len(context.portfolio.positions))  
    
    if (now - context.last_rebalance).days < 366:
        return
    
    context.last_rebalance = now
    context.target_stocks = [ stock for stock in context.fundamentals] 
    for stock in context.portfolio.positions:
        oid = order_target(stock, 0)
        #log.info("Sold[%r] %s" % (oid, stock.symbol))
            
    for stock in context.target_stocks:
        for order in get_open_orders(stock):
            cancel_order(order)
        oid = order_target_percent(stock, 1.0/context.max_num_of_securities)
        oid = oid  # suppress unused-variable warning
        #log.info("Bought %s ev_to_ebitda: %s " % (oid, stock.symbol))
   
def before_trading_start(context,data): 
    now = get_datetime()
    
    if (now - context.last_rebalance).days < 366:
        return
    
    stocks = pipeline_output('deep_value') \
             .sort_values('ev_to_ebitda',ascending=True) \
             .head(context.max_num_of_securities)
    
    #log.info(stocks)
    context.fundamentals = stocks.T
    
    if 'log_pipe_done' not in context:    # show pipe info once
        log_pipe(context, data, stocks, 4)
    
    
def log_pipe(context, data, z, num, details=None):
    ''' Log info about pipeline output or, z can be any DataFrame or Series
    https://www.quantopian.com/posts/overview-of-pipeline-content-easy-to-add-to-your-backtest
    '''
    # Options
    log_nan_only = 0          # Only log if nans are present
    show_sectors = 0          # If sectors, do you want to see them or not
    show_sorted_details = 1   # [num] high & low securities sorted, each column

    if 'log_init_done' not in context:
        log.info('${}    {} to {}'.format('%.0e' % (context.portfolio.starting_cash), 
                get_environment('start').date(), get_environment('end').date()))
    context.log_init_done = 1

    if not len(z):
        log.info('Empty')
        return

    # Series ......
    context.log_pipe_done = 1 ; padmax = 6
    if 'Series' in str(type(z)):    # is Series, not DataFrame
        nan_count = len(z[z != z])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(z)) if nan_count else ''
        if (log_nan_only and nan_count) or not log_nan_only:
            pad = max(6, len(str(z.max())))
            log.info('{}{}{}   Series {}  len {}'.format('min' .rjust(pad+5),
                'mean'.rjust(pad+5), 'max' .rjust(pad+5),  z.name, len(z)))
            log.info('{}{}{} {}'.format(str(z.min()) .rjust(pad+5),
                str(z.mean()).rjust(pad+5), str(z.max()) .rjust(pad+5), nan_count
            ))
        return

    # DataFrame ......
    content_min_max = [ ['','min','mean','max',''] ] ; content = ''
    for col in z.columns:
        if col == 'sector' and not show_sectors: continue
        nan_count = len(z[col][z[col] != z[col]])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(z)) if nan_count else ''
        padmax    = max( padmax, 6, len(str(z[col].max())) )
        content_min_max.append([col, str(z[col] .min()), str(z[col].mean()), str(z[col] .max()), nan_count])
    if log_nan_only and nan_count or not log_nan_only:
        content = 'Rows: {}  Columns: {}'.format(z.shape[0], z.shape[1])
        if len(z.columns) == 1: content = 'Rows: {}'.format(z.shape[0])

        paddings = [6 for i in range(4)]
        for lst in content_min_max:    # set max lengths
            i = 0
            for val in lst[:4]:    # value in each sub-list
                paddings[i] = max(paddings[i], len(str(val)))
                i += 1
        headr = content_min_max[0]
        content += ('\n{}{}{}{}{}'.format(
             headr[0] .rjust(paddings[0]),
            (headr[1]).rjust(paddings[1]+5),
            (headr[2]).rjust(paddings[2]+5),
            (headr[3]).rjust(paddings[3]+5),
            ''
        ))
        for lst in content_min_max[1:]:    # populate content using max lengths
            content += ('\n{}{}{}{}     {}'.format(
                lst[0].rjust(paddings[0]),
                lst[1].rjust(paddings[1]+5),
                lst[2].rjust(paddings[2]+5),
                lst[3].rjust(paddings[3]+5),
                lst[4],
            ))
        log.info(content)

    if not show_sorted_details: return
    if len(z.columns) == 1:     return     # skip detail if only 1 column
    if details == None: details = z.columns
    for detail in details:
        if detail == 'sector': continue
        hi = z[details].sort_values(by=detail, ascending=False).head(num)
        lo = z[details].sort_values(by=detail, ascending=False).tail(num)
        content  = ''
        content += ('_ _ _   {}   _ _ _'  .format(detail))
        content += ('\n\t... {} highs\n{}'.format(detail, str(hi)))
        content += ('\n\t... {} lows \n{}'.format(detail, str(lo)))
        if log_nan_only and not len(lo[lo[detail] != lo[detail]]):
            continue  # skip if no nans
        log.info(content)


@Mariano: Sorry Mariano.
I may not have been exact in my post. All of the algos except yours do not filter for positive EBITDA (a filter you first introduced in Oct. 2016). The older algos in this thread did not select deep value stocks, but essentially random stocks (not what the original intent was) with negative EV/EBITDA ratios.
Of course your algo can still select a negative ratio, but that stems from a negative EV.

Here is my attempt, based on the comments in this thread. I'm using pipeline, the QTradeableStocksUS universe, and Q's Optimize API.

It picks the 30 stocks with the lowest EV/EBIT. Negative EBIT stocks are filtered out. Rebalances annually.

Backtest CAGR since 2003 is around 13.5%, pretty good for a strategy that just picks one number and does nothing the whole year. I could almost trade this.

Tobias' results in "Quantitative Value" are 15.95% p.a. (1974 to 2011).
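The selection step itself is small: drop negative-EBIT names, then take the lowest EV/EBIT ratios. A pandas sketch with made-up numbers (3 picks instead of 30):

```python
import pandas as pd

# Hypothetical universe: EV and TTM EBIT per ticker (made-up values).
df = pd.DataFrame({'ev':   [90.0, 200.0, 50.0, 300.0, 80.0],
                   'ebit': [10.0,  -5.0,  4.0,  50.0, 16.0]},
                  index=['A', 'B', 'C', 'D', 'E'])

picks = (df[df['ebit'] > 0]                       # negative EBIT filtered out
         .assign(ev_over_ebit=df['ev'] / df['ebit'])
         .nsmallest(3, 'ev_over_ebit'))           # lowest multiple = cheapest

print(list(picks.index))  # ['E', 'D', 'A']
```

Ticker B is excluded by the positive-EBIT filter before ranking, which mirrors the `negative_earnings` screen in the pipeline.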

"""
Rank stocks by 'The Acquirer's Multiple'

Picks the top 30 stocks with the lowest EV/EBIT. Equal weight for each stock.  Rebalances every year.

Instructions to Use
--------------------
Backtest start date should be on the first day of a month:
  - It will buy stocks on that day (or the first trading day after, if a holiday).
  - On the same date next year, it will sell all those stocks and buy new ones.

eg: 
  Start on 1st Jan, it will rebalance on 1st Jan (or first trading day of every Jan).
  Start on 1st April, it will rebalance on 1st April (or first trading day of every April).



Algorithm
-----------
1) Operates on the QTradableStocksUS universe.
2) Filters for tradable stocks (https://www.quantopian.com/posts/pipeline-trading-universe-best-practice).
3) Filters out financial companies.
4) Gets TTM EBIT from the past 4 quarterly results.
5) Filters out any stocks with negative EBIT.
6) Sorts the stocks from lowest EV/EBIT to highest; buys the first NUM_STOCKS_TO_BUY stocks.
7) Rebalances every year.


"""
import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline, CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.filters.morningstar import IsPrimaryShare  
from quantopian.pipeline.data import Fundamentals  
import numpy as np
import pandas
import scipy.stats as stats
import quantopian.optimize as opt
from quantopian.optimize import TargetWeights

def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    # Call rebalance() every month, 1 hour after market open.
    algo.schedule_function(
        rebalance,
        algo.date_rules.month_start(days_offset=0),
        algo.time_rules.market_open(minutes=60),
    )

    # Record tracking variables at the end of each day.
    algo.schedule_function(
        record_vars,
        algo.date_rules.every_day(),
        algo.time_rules.market_close(),
    )

    # Stores which month we do our rebalancing.  Will be assigned the month of the first day we are run.
    context.month_to_run = -1
    
    context.NUM_STOCKS_TO_BUY = 30
    
    # Create our dynamic stock selector.
    algo.attach_pipeline(make_pipeline(context), 'pipeline')
    


        
# From Doug Baldwin: https://www.quantopian.com/posts/trailing-twelve-months-ttm-with-as-of-date
#
# Takes a quarterly factor and changes it to TTM.
class TrailingTwelveMonths(CustomFactor):  
    window_length=400  
    window_safe = True  # OK as long as we dont use per share fundamentals: https://www.quantopian.com/posts/how-to-make-factors-be-the-input-of-customfactor-calculation
    outputs=['factor','asof_date'] 
    
    # TODO: what if there are more than 4 unique elements.
    
    def compute(self, today, assets, out, values, dates):  
        out.factor[:] = [    # Boolean masking of arrays
            (v[d + np.timedelta64(52, 'W') > d[-1]])[  
                np.unique(  
                    d[d + np.timedelta64(52, 'W') > d[-1]],  
                    return_index=True  
                )[1]  
            ].sum()  
            for v, d in zip(values.T, dates.T)  
        ] 
        out.asof_date[:] = dates[-1]
 
def make_pipeline(context): 
    
    #  Convert quarterly fundamental data to TTM.
    (  
        ebit_ttm,  
        ebit_ttm_asof_date  
    ) = TrailingTwelveMonths(  
        inputs=[  
            Fundamentals.ebit,  
            Fundamentals.ebit_asof_date,  
        ]  
    )

    ev = Fundamentals.enterprise_value.latest
    
    # Base universe set to the QTradableStocksUS
    # Around 1600-2100 stocks
    # https://www.quantopian.com/posts/working-on-our-best-universe-yet-qtradablestocksus
    #
    # If we dont use it, there are around 4000 stocks.  But we must 
    # screen for price & liquidity ourselves. And I think Quantopian 
    # may only support their own stock universe(s) later.
    base_universe = QTradableStocksUS()
    
    # Filter tradable stocks.
    # https://www.quantopian.com/posts/pipeline-trading-universe-best-practice

    # Filter for primary share equities. IsPrimaryShare is a built-in filter.  
    primary_share = IsPrimaryShare()

    # Equities listed as common stock (as opposed to, say, preferred stock).  
    # 'ST00000001' indicates common stock.  
    common_stock = Fundamentals.security_type.latest.eq('ST00000001')

    # Non-depositary receipts. Recall that the ~ operator inverts filters,  
    # turning Trues into Falses and vice versa  
    not_depositary = ~Fundamentals.is_depositary_receipt.latest

    # Equities not trading over-the-counter.  
    not_otc = ~Fundamentals.exchange_id.latest.startswith('OTC')

    # Not when-issued equities.  
    not_wi = ~Fundamentals.symbol.latest.endswith('.WI')

    # Equities without LP in their name, .matches does a match using a regular  
    # expression  
    not_lp_name = ~Fundamentals.standard_name.latest.matches('.* L[. ]?P.?$')

    # Equities with a null value in the limited_partnership Morningstar  
    # fundamental field.  
    not_lp_balance_sheet = Fundamentals.limited_partnership.latest.isnull()

    # Equities whose most recent Morningstar market cap is not null have  
    # fundamental data and therefore are not ETFs.  
    have_market_cap = Fundamentals.market_cap.latest.notnull()

    ############################
    # Filters specific to EV.
    ############################
    
    # We cannot have negative EBIT: it would give a negative EV/EBIT, which is meaningless.
    negative_earnings = ebit_ttm < 0
    
    # Screen out financials.  EV is meaningless for companies that lend money out as part of their business.
    is_a_financial_company = (Fundamentals.morningstar_sector_code.latest.eq(103) )
    
    # Filter for stocks that pass all of our previous filters.  
    tradeable_stocks = (  
    primary_share  
    & common_stock  
    & not_depositary  
    & not_otc  
    & not_wi  
    & not_lp_name  
    & not_lp_balance_sheet  
    & have_market_cap  
    & ~negative_earnings
    & ~is_a_financial_company
    )
    
    # EV/EBIT ratio.  Lower values are better, like a PE ratio.
    # I use this instead of EBIT/EV (earnings yield) because 
    # earnings yield will not make sense if EV is -ve.
    ev_over_ebit = ev / ebit_ttm
    
    Lowest_Ev_over_ebit_stocks = ev_over_ebit.bottom(context.NUM_STOCKS_TO_BUY, mask=base_universe & tradeable_stocks)
    
    return Pipeline( 
        columns={  
            'ebit_ttm': ebit_ttm,  
            'ebit_ttm_asof_date': ebit_ttm_asof_date,
            'ev': ev,
            'ev_over_ebit': ev_over_ebit,
        },
        screen = Lowest_Ev_over_ebit_stocks
    )  

# Shorten my output dataframe, for logging.
def pretty_print_output(df, startRow, endRow):
    
    if endRow is None:
        endRow = len(df.index)
        
    columnNames = "Ticker   ebit date         ev  ev/ebit rank"
    ret = "Output:\n"
    ret = ret + columnNames + '\n'
    # Ticker Names.  From row index.
    tickerNames = []
    for t in list(df.index):
        t1 = str(t)
        tickerNames.append(t1[t1.find("[")+1:t1.find("]")])

    for i in range(startRow, endRow):
        ret = ret + "{:5}".format(tickerNames[i])
        ret = ret + "{:8.2f}".format( df.iat[i,0]/1000000.0 )  # ebit
        ret = ret +  ' ' + str(df.iat[i,1])[0:10] # asof date
        ret = ret + "{:10.2f}".format( df.iat[i,2]/1000000.0 ) # ev
        ret = ret + "{:6.2f}".format( df.iat[i,3] ) # ev over ebit
        ret = ret + "{:3.0f}".format( df.iat[i,4] ) # rank
        ret = ret + '\n'
    
    return ret
    
def before_trading_start(context, data):
    """
    Called every day before market open.
    """
    
    # Get pipeline output
    context.output = algo.pipeline_output('pipeline')
    context.output['ebit_ttm_asof_date'] = context.output['ebit_ttm_asof_date'].astype('datetime64[ns]')

    # Sort: lowest ev/ebit first
    context.output = context.output.sort_values('ev_over_ebit')

    
    # These are the securities that we are interested in trading each day.
    context.security_list = context.output.index

        
def rebalance(context, data):
    """
    Execute orders according to our schedule_function() timing.
    """
    
    
    # We will rebalance on the 1st day of the month that we started running on.
    rebalanceNow = False

    today = get_datetime('US/Eastern')
    if context.month_to_run == -1:
        context.month_to_run = today.month
        print("Rebalancing will be done on the 1st day of month %d" % context.month_to_run)
        rebalanceNow = True
    elif context.month_to_run == today.month:
        rebalanceNow = True

    if rebalanceNow:
        print("REBALANCING.")
        
        # Rank the stocks with the lowest EV/EBIT ratio first.
        context.output['ev_over_ebit_rank'] = context.output['ev_over_ebit'].rank(ascending=True)

        # Print to log.
        # Make sure that the same stocks being selected are being
        # bought, as Q's Optimize API is a black box.
        print(pretty_print_output(context.output, 0, 10))
        print(pretty_print_output(context.output, 10, 20))
        print(pretty_print_output(context.output, 20, None))
        
        # Use Q's order_optimal_portfolio API.
        # Use equal weights for all items in pipeline: https://www.quantopian.com/posts/help-needed-to-improve-this-sample-algo-to-use-the-new-order-optimal-portfolio-function
        context.weights = {}
        for sec in context.security_list:
            if data.can_trade(sec):
                context.weights[sec] = 0.99/context.NUM_STOCKS_TO_BUY
            
        objective=TargetWeights(context.weights)
        algo.order_optimal_portfolio(
            objective=objective,
            constraints=[],
        )

def record_vars(context, data):
    """
    Plot variables at the end of each day.
    """
    pass


def handle_data(context, data):
    """
    Called every minute.
    """
    pass


# To research
#
#    - commission & slippage
#       https://www.quantopian.com/posts/commission-on-trades
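The indexing inside TrailingTwelveMonths is dense; its core move, shown here for a single asset with toy dates, is to mask to rows whose asof date falls within 52 weeks of the latest, then keep one value per unique asof date and sum:

```python
import numpy as np

# Toy daily columns for one asset: each quarterly value/asof-date pair
# is repeated over several "days", as pipeline delivers it.
dates = np.repeat(np.array(['2013-03-31', '2013-06-30',
                            '2013-09-30', '2013-12-31'],
                           dtype='datetime64[D]'), 5)
values = np.repeat([1.0, 2.0, 3.0, 4.0], 5)

recent = dates + np.timedelta64(52, 'W') > dates[-1]          # within 52 weeks of latest
first_per_quarter = np.unique(dates[recent], return_index=True)[1]
ttm = values[recent][first_per_quarter].sum()
print(ttm)  # 10.0 -- one value per quarter: 1 + 2 + 3 + 4
```

`np.unique(..., return_index=True)[1]` returns the first index of each distinct asof date, which is what de-duplicates the repeated daily rows down to one row per quarterly report.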

I was wondering if anyone has tested the strategy with quarterly re-balancing? On one of the forums, Toby said that for someone without tax constraints, quarterly re-balancing may enhance returns.

Couple of suggestions:
1. Run each fundamental in http://quantopian.com/posts/fundamentals-updating-daily-vs-monthly-or-quarterly to see how consistent they are.
2. For viewing the pipeline values easily and catching nans: http://quantopian.com/posts/pipeline-preview-overview-of-pipeline-content-easy-to-add-to-your-backtest
3. In custom factors, try forward filling nans and see whether that appears to be helpful or not. This is one way: http://quantopian.com/posts/forward-filling-nans-in-pipeline-custom-factors
I've always found these things eye-opening.
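On point 3, the forward-fill idea amounts to wrapping a factor's input window in a DataFrame and propagating the last known value down each column before reading the final row. A minimal sketch (not the linked post's exact code):

```python
import numpy as np
import pandas as pd

# A 3-day window for two assets; asset 0 stops reporting after day 1.
window = np.array([[1.0, 5.0],
                   [np.nan, 6.0],
                   [np.nan, np.nan]])

filled = pd.DataFrame(window).ffill().values
print(filled[-1])  # [1. 6.] -- the last row now carries the latest known values
```

Inside a CustomFactor, `out[:]` would then be computed from `filled` instead of the raw window, so a single missing day no longer produces a NaN factor value.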

Thanks @Blue for your pointers. After looking at 1, I see some cases, e.g. Pfizer: from the 11th to the 14th of Feb 2011, Revenue is updated but EBIT is not. I also could not reconcile the EBIT value with the SEC filing.

I've enjoyed my time looking at Q. But when I find missing data for an S&P 500 company that is a household name, I'm not sure if it's worth continuing. One piece of missing quarterly data (the above, for example) means I cannot test Pfizer for a whole year starting from 14th Feb 2011.

I've been adjusting @BlackCat's algo rebalance rate, and noticed that the log often reports that positions sometimes cannot be fully bought into, or fully sold out of. It seems that if the whole order cannot be executed that day, the remaining unfilled shares are not ordered again the next day. Example log output:

2014-02-03 13:00 WARN Your order for 40529 shares of EGY has been partially filled. 36540 shares were successfully purchased. 3989 shares were not filled by the end of day and were canceled.  

Then one month/rebalance later,

2014-03-03 13:00 WARN Your order for -36540 shares of EGY has been partially filled. 20150 shares were successfully sold. 16390 shares were not filled by the end of day and were canceled.  

As it stands now, the algo accrues not insignificant positions of unwanted companies.

I'm relatively new to Quantopian and Python, can someone point me in the right direction of rectifying this?
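One direction, sketched outside the platform: re-issue target orders each day until holdings match the targets, instead of ordering only on the rebalance day. The residual still to be ordered is just the target share count minus what is actually held, e.g. for the partial fill in the log above:

```python
import pandas as pd

# Hypothetical targets vs. actual holdings after a partial fill
# (share counts taken from the EGY log line above; 'OTHER' is made up).
target = pd.Series({'EGY': 40529, 'OTHER': 1000})
held = pd.Series({'EGY': 36540, 'OTHER': 1000})

residual = target - held
still_open = residual[residual != 0]   # names that still need re-ordering
print(still_open.to_dict())  # {'EGY': 3989}
```

In the algo itself this is roughly what scheduling `order_target_percent` daily toward the saved target weights accomplishes, since the `order_target_*` functions compute that residual against current holdings for you.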

That seems to help somewhat (I set it to 60, see below); it cuts it down to a couple of times a year where the algo gets stuck holding a position. That's fine for backtesting purposes, but it seems more a workaround than anything, especially given that over many years you'll still accrue these erroneous positions.

What I would like to do, instead of buying the cheapest X stocks, is buy the bottom half of the cheapest decile (this tends to be 50-70 names), and also cut from the universe all names that are not in the top 40% of market cap (basically from Carlisle's follow-up book, Deep Value). I'm not yet sure if this is within the scope of Quantopian.

Thanks for the reply.