Value Investing with a Multi-factor Fundamentals Template

Hi all,

We thought it would be a good time to create a simple template for using fundamental data to rank and select securities. Our intention is that someone who is focused on value investing and fundamental analysis can clone this algorithm and modify it to suit their interests without a lot of coding skill, as an exercise to help get started. Seong Lee did the bulk of the work writing this algo.

Here’s how it works:
The algorithm selects metrics from our fundamentals database. It uses these metrics to rank stocks for purchase. The algorithm reevaluates the rankings once per month and rebalances the portfolio. So if you query for 5 metrics from the fundamentals database, it will rank each security by each of those factors. In turn, a single ranking of securities is created from the separate rankings. The top stocks from this cumulative ranking are then purchased.
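
In plain pandas, the combination step looks roughly like this (the tickers and metric values below are invented for illustration, and both metrics are treated as higher-is-better):

```python
import pandas as pd

# Toy universe: three made-up stocks scored on two higher-is-better metrics
metrics = pd.DataFrame({
    "roic":        {"AAA": 0.30, "BBB": 0.10, "CCC": 0.20},
    "ebit_margin": {"AAA": 0.25, "BBB": 0.05, "CCC": 0.15},
})

# Rank each metric independently (1 = lowest value, N = highest)
ranks = metrics.rank()

# Combine the separate rankings into one equal-weighted score per stock
combined = ranks.mean(axis=1)

# Buy the top stocks from the cumulative ranking
num_stocks_to_buy = 2
to_buy = combined.nlargest(num_stocks_to_buy).index.tolist()
```

The real algorithm does the same thing with percentile ranks and a per-metric weight, but the idea is identical: rank per metric, average, buy the top.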

It is flexible in that you can switch the metrics you want to use or the number of metrics used overall. You can also easily set the number of stocks purchased with a single simple variable change. You don't need a lot of coding skill to make these kinds of changes.

Want to try yourself? Hit the “Clone Algorithm” button and follow the steps below to modify the algorithm with your own ranking criteria. My sample doesn't perform so well. Maybe you can build one that does.

Instructions for Modifying the Stock Selection Logic

Step 1: Select your metrics for ranking
The selected metrics are defined in the query() function:

    fundamental_df = get_fundamentals(
        query(
            # To add a metric, start by typing "fundamentals."
            fundamentals.operation_ratios.roic,
            fundamentals.valuation_ratios.pe_ratio,
            fundamentals.operation_ratios.ebit_margin
        )
        .filter(fundamentals.valuation.market_cap > 1000e6)
        .filter(fundamentals.asset_classification.morningstar_sector_code != 103)
        .order_by(fundamentals.operation_ratios.roic.desc())
        .limit(num_stocks)
    )


You can change the metrics selected and used for the ranking algorithm by updating the comma-separated list inside the query() function. Our fundamentals DB has 600+ metrics to choose from. Type "fundamentals." and a search box will pop up to help you find the metric you have in mind.

Each stock in the algorithm will be ranked using each of these metrics. In this example case, each stock will be ranked by the three metrics listed. Each metric ranking will contribute equally.

Step 2: Select the size of your ranking pool
By default, 1000 companies are selected. This number is controlled by num_stocks = 1000 on line 29, which sets the maximum number of stocks selected for ranking.

Step 3: Filter the selection of stocks
Which stocks are ranked? The stocks selected for ranking are controlled by two things: how they are filtered and how they are ordered. The filter clauses in this algo template are on lines 45-46. Multiple clauses can be strung together, as in the sample, where we filter on market cap and filter out financial-sector stocks.

Furthermore, you can order the stocks selected based on a metric, as demonstrated in the sample query on line 47. Ordering by a different metric will sort the universe of stocks by that metric. So if there are more than 1000 stocks that fit your criteria, it will select the top 1000 stocks as determined by the metric used in your order_by clause.
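
Conceptually, the filter and order_by clauses behave like a pandas filter-sort-truncate. Here is a sketch with an invented four-stock universe (the tickers, market caps, and sector codes are made up; 103 is the financials sector code used in the template):

```python
import pandas as pd

# Hypothetical universe with market cap, Morningstar sector code, and ROIC
df = pd.DataFrame({
    "market_cap": [5e9, 0.5e9, 2e9, 8e9],
    "sector":     [103, 311, 310, 311],
    "roic":       [0.30, 0.40, 0.25, 0.10],
}, index=["FIN", "SML", "IND", "TEC"])

# Mirror the two .filter() clauses: market cap over $1B, drop financials (103)
eligible = df[(df["market_cap"] > 1000e6) & (df["sector"] != 103)]

# Mirror .order_by(roic.desc()).limit(num_stocks): sort descending, keep top N
num_stocks = 2
universe = eligible.sort_values("roic", ascending=False).head(num_stocks)
```

Note how "SML" survives the sector filter but not the market-cap filter, and "FIN" the reverse; only the survivors get sorted and truncated.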

Step 4: Check your sorting order
By default, each metric contributes to the ranking ordered from highest value to lowest (descending).

Sometimes, you want to rank using a metric going from lowest to highest.

Line 55 lets you set which metrics you’d like to rank lowest value to highest, using their fundamentals database name:
 lower_the_better = ['pe_ratio'] 
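
Under the hood, the lower-is-better handling amounts to flipping the percentile rank. A minimal pandas sketch, with invented percentile values (AAA has a high ROIC percentile and a low P/E percentile, i.e. it looks cheap):

```python
import pandas as pd

# Invented percentile ranks in (0, 1] for three stocks on two metrics
pct = pd.DataFrame({
    "roic":     {"AAA": 0.9, "BBB": 0.3, "CCC": 0.6},
    "pe_ratio": {"AAA": 0.2, "BBB": 0.8, "CCC": 0.5},
})

lower_the_better = ["pe_ratio"]

# Flip the percentile for lower-is-better metrics so a cheap stock
# contributes a high score, then weight each metric equally
score = pd.Series(0.0, index=pct.index)
for metric in pct.columns:
    contribution = 1 - pct[metric] if metric in lower_the_better else pct[metric]
    score += contribution / len(pct.columns)
```

AAA ends up on top because it scores well on both conventions at once: high ROIC and low P/E.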

So go clone this algorithm and experiment with your own variations of a multi-factor ranking model.

"""
Create a multifactor fundamentals weighting strategy - giving each fundamental
factor a ranking (highest number being the highest rank) and weighting each factor
equally
"""

def initialize(context):
# Dictionary of stocks and their respective weights
context.stock_weights = {}

# The algo will go long on the number of stocks set here

# Rebalance monthly on the first day of the month at market open
schedule_function(rebalance,
date_rule=date_rules.month_start(),
time_rule=time_rules.market_open())

"""
Called before the start of each trading day.
It updates our universe with the
securities we want to trade on this particular day
based on our ranking algorithm
"""

# Number of stocks we'll gather data on and rank
num_stocks = 1000

# Setup query to fundamentals db to screen stocks. Query for the
# fields specified (and use them for ranking)
# Filter results based on the fields specified in the .filter() methods
# ROIC and PE ratio are used in this sample as ranking metrics.
# We limit the number of results to num_stocks and return the data
# in descending order, i.e. highest to lowest in terms of the
# ROIC metric.
fundamental_df = get_fundamentals(
query(
# To add a metric. Start by typing "fundamentals."
fundamentals.operation_ratios.roic,
fundamentals.valuation_ratios.pe_ratio,
fundamentals.operation_ratios.ebit_margin
)
.filter(fundamentals.valuation.market_cap > 1000e6)
.filter(fundamentals.asset_classification.morningstar_sector_code != 103)
.order_by(fundamentals.operation_ratios.roic.desc())
.limit(num_stocks)
)

# Initialize for holding rankings
rankings = {}
# List of metrics where lower is better as opposed to higher is better.
# Comma separated
lower_the_better = ['pe_ratio']

# Create a percentile ranking for each security.
# Rank from 1 to N number of securities present in fundamental_df
ranked_df = fundamental_df.fillna(0).rank(axis=1).apply(lambda x: x/len(fundamental_df.columns.values))

# Weight each metric equally, find the sum of rankings, and
# give each stock the weighted sum of its ranks
# Note, it's ranking descending for each metric (a limitation of this simple algo)
weight = 1.0/len(fundamental_df.index) if len(fundamental_df.index) != 0 else 0
for stock in ranked_df:
sum_rank_for_stock = 0
for r in fundamental_df.index:
# For lower_the_better metrics, take the inverse
if r in lower_the_better:
sum_rank_for_stock += weight*(1 - ranked_df[stock].ix[r])
else:
sum_rank_for_stock += weight*ranked_df[stock].ix[r]
rankings[stock] = sum_rank_for_stock

# Order by rank and turn into a list and take only the top num_stocks_to_buy
context.rankings = sorted(rankings, key = lambda x: rankings[x])

# Include the top ranked stocks in our tradeable universe
update_universe(context.rankings)

def create_weights(context, stocks):
"""
Takes in a list of securities and calculates
the portfolio weighting percentage used for each stock in the portfolio
"""
if len(stocks) == 0:
return 0
else:
# Buy only 0.9 of portfolio value to avoid borrowing
weight = .9/len(stocks)
return weight

def handle_data(context, data):
"""
Code logic to run during the trading day.
handle_data() gets called every price bar. In this algorithm,
rather than running through our trading logic every price bar, every day,
we use scheduled_function() in initialize() to execute trades 1x per month
"""
pass

def rebalance(context, data):
# Track cash to avoid leverage
cash = context.portfolio.cash

# Exit all positions that have fallen out of the top rankings
for stock in context.portfolio.positions:
if stock not in context.rankings:
if stock in data:
order_target(stock, 0)
cash += context.portfolio.positions[stock].amount
log.info("Exiting security: %s" % stock)

# Create weights for each stock
weight = create_weights(context, context.rankings)

# Rebalance all stocks to target weight of overall portfolio
for stock in context.rankings:
if weight != 0 and stock in data:
notional = context.portfolio.portfolio_value * weight
price = data[stock].price
numshares = int(notional / price)

# Growth companies could be trading thin: avoid them
if cash > price * numshares and numshares < data[stock].volume * 0.2:
if stock in data:
order_target_percent(stock, weight)
cash -= notional - context.portfolio.positions[stock].amount
log.info("Placing order: %s" % stock)


17 responses

Hi all,

One of the things that bugged me about the initial implementation of the template above was that it rebalanced monthly. I figured an annual holding period would be better, so I grafted on the rebalancing model from the sample ETF rebalancing algo. I also track leverage here.

"""
Create a multifactor fundamentals weighting strategy - giving each fundamental
factor a ranking (highest number being the highest rank) and weighting each factor
equally
"""
import datetime
import pandas as pd

def initialize(context):
# Dictionary of stocks and their respective weights
context.stock_weights = {}

# The algo will go long on the number of stocks set here

# Rebalance annually
context.Rebalance_Days = 365
context.rebalance_date = None
context.rebalance_hour_start = 10
context.rebalance_hour_end = 15

"""
Called before the start of each trading day.
It updates our universe with the
securities we want to trade on this particular day
based on our ranking algorithm
"""

# Number of stocks we'll gather data on and rank
num_stocks = 1000

# Setup query to fundamentals db to screen stocks. Query for the
# fields specified (and use them for ranking)
# Filter results based on the fields specified in the .filter() methods
# ROIC and PE ratio are used in this sample as ranking metrics.
# We limit the number of results to num_stocks and return the data
# in descending order, i.e. highest to lowest in terms of the
# ROIC metric.
fundamental_df = get_fundamentals(
query(
# To add a metric. Start by typing "fundamentals."
fundamentals.operation_ratios.roic,
fundamentals.valuation_ratios.pe_ratio,
fundamentals.operation_ratios.ebit_margin
)
.filter(fundamentals.valuation.market_cap > 1000e6)
.filter(fundamentals.asset_classification.morningstar_sector_code != 103)
.order_by(fundamentals.operation_ratios.roic.desc())
.limit(num_stocks)
)

# Initialize for holding rankings
rankings = {}
# List of metrics where lower is better as opposed to higher is better.
# Comma separated
lower_the_better = ['pe_ratio']

# Create a percentile ranking for each security.
# Rank from 1 to N number of securities present in fundamental_df
ranked_df = fundamental_df.fillna(0).rank(axis=1).apply(lambda x: x/len(fundamental_df.columns.values))

# Weight each metric equally, find the sum of rankings, and
# give each stock the weighted sum of its ranks
# Note, it's ranking descending for each metric (a limitation of this simple algo)
weight = 1.0/len(fundamental_df.index) if len(fundamental_df.index) != 0 else 0
for stock in ranked_df:
sum_rank_for_stock = 0
for r in fundamental_df.index:
# For lower_the_better metrics, take the inverse
if r in lower_the_better:
sum_rank_for_stock += weight*(1 - ranked_df[stock].ix[r])
else:
sum_rank_for_stock += weight*ranked_df[stock].ix[r]
rankings[stock] = sum_rank_for_stock

# Order by rank and turn into a list and take only the top num_stocks_to_buy
context.rankings = sorted(rankings, key = lambda x: rankings[x])

# Include the top ranked stocks in our tradeable universe
update_universe(context.rankings)

def create_weights(context, stocks):
"""
Takes in a list of securities and calculates
the portfolio weighting percentage used for each stock in the portfolio
"""
if len(stocks) == 0:
return 0
else:
# Buy only 0.9 of portfolio value to avoid borrowing
weight = .9/len(stocks)
return weight

def handle_data(context, data):
"""
Code logic to run during the trading day.
handle_data() gets called every price bar. In this algorithm,
rather than running through our trading logic every price bar, every day,
we use scheduled_function() in initialize() to execute trades 1x per month
"""
# Get the current exchange time, in the exchange timezone
exchange_time = pd.Timestamp(get_datetime()).tz_convert('US/Eastern')

# Track leverage
record_leverage(context, data)

if context.rebalance_date == None or exchange_time > context.rebalance_date + datetime.timedelta(days=context.Rebalance_Days):

# Do nothing if there are open orders:
if has_orders(context):
log.info('has open orders - doing nothing!')
return
rebalance(context, data, exchange_time)
else:
return

def rebalance(context, data, exchange_time):
# Only during defined hours.
if exchange_time.hour < context.rebalance_hour_start or exchange_time.hour > context.rebalance_hour_end:
return

# Track cash to avoid leverage
cash = context.portfolio.cash

# Exit all positions that have fallen out of the top rankings
for stock in context.portfolio.positions:
if stock not in context.rankings:
if stock in data:
order_target(stock, 0)
cash += context.portfolio.positions[stock].amount
log.info("Exiting security: %s" % stock)

# Create weights for each stock
weight = create_weights(context, context.rankings)

# Rebalance all stocks to target weight of overall portfolio
for stock in context.rankings:
if weight != 0 and stock in data:
notional = context.portfolio.portfolio_value * weight
price = data[stock].price
numshares = int(notional / price)

if stock in data:
order_target_percent(stock, weight)
cash -= notional - context.portfolio.positions[stock].amount
log.info("Placing order: %s" % stock)

context.rebalance_date = exchange_time

def has_orders(context):
# Return true if there are pending orders.
has_orders = False
for sec in context.rankings:
orders = get_open_orders(sec)
if orders:
for oo in orders:
message = 'Open order for {amount} shares in {stock}'
message = message.format(amount=oo.amount, stock=sec)
log.info(message)

has_orders = True
return has_orders

def record_leverage(context, data):
P = context.portfolio
market_value = sum(data[i].price * abs(P.positions[i].amount) for i in data)
record(leverage=market_value / max(P.portfolio_value, 1))


The algorithm is quite slow: it recomputes the rankings every day, which isn't needed. I do it once a month instead, and it runs much faster.

def before_trading_start(context):
    if get_datetime().month == context.current_month:
        return
    context.current_month = get_datetime().month
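
One caveat with this pattern: context.current_month has to be initialized somewhere (for example in initialize()) before the first comparison runs. Here is a self-contained sketch of the same gating idea, with a hypothetical should_recompute helper standing in for the real before_trading_start:

```python
class Context(object):
    """Stand-in for the platform's context object."""
    pass

def initialize(context):
    # Sentinel so the very first call always recomputes
    context.current_month = None

def should_recompute(context, month):
    """Return True only on the first trading day seen in a new month."""
    if month == context.current_month:
        return False
    context.current_month = month
    return True

ctx = Context()
initialize(ctx)
# Months of six successive trading days: three in month 1, two in 2, one in 3
recomputed = [should_recompute(ctx, m) for m in [1, 1, 1, 2, 2, 3]]
# recomputed == [True, False, False, True, False, True]
```

So the expensive ranking work runs three times instead of six, once per calendar month.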


Regards

Hi all,

Where can I find the detailed code table information for the fundamentals parameters, for example the definitions of the following metrics:

fundamentals.asset_classification.morningstar_sector_code
fundamentals.company_reference.primary_exchange_id
fundamentals.company_reference.country_id

Thanks icare. A few questions:

1. Once-a-year rebalancing gives better results but is very sensitive to downturns; it needs to be protected somehow from such timing, maybe by rebalancing monthly into defensive stocks during such periods. Also, I haven't seen anything that measures or ensures that the 10-stock portfolio is diversified for risk/volatility reduction.

2. Why does a daily backtest give no results, while only a minute backtest works? The minute run takes much more time for no visible reason.

@joe

1. Go for it! I built this just as a simple starting point. I'm sure there are a hundred ways to improve it. For sure it is sensitive: it loses something close to $1M in a 2-week period in 2008.

2. Some of the code in the rebalancing function relies on time of day. I was lazy and stole the code from one of our existing samples that implements it in that fashion (https://www.quantopian.com/posts/rebalance-algo-9-sector-etfs):

def rebalance(context, data, exchange_time):
    # Only during defined hours.
    if exchange_time.hour < context.rebalance_hour_start or \
            exchange_time.hour > context.rebalance_hour_end:
        return

    # Track cash to avoid leverage
    cash = context.portfolio.cash

    # Exit all positions that have fallen out of the top rankings
    for stock in context.portfolio.positions:
        if stock not in context.rankings:
            if stock in data:
                order_target(stock, 0)
                cash += context.portfolio.positions[stock].amount
                log.info("Exiting security: %s" % stock)


@Novice,
You can find some more details here: https://www.quantopian.com/help/fundamentals

If you have further questions not covered by the docs, feel free to submit a ticket.

@Josh

morningstar_sector_code
Industry groups are consolidated into 11 sectors. See appendix for mappings.

But I cannot find the appendix ...

country_id
3 Character ISO code of the country where the firm is domiciled. See separate reference document for Country Mappings.

But where can I find the Country Mappings?

Ah, sorry. I should have read more carefully.

The sector codes need to be better documented. Here's what we've generated in the past:

    # Sector mappings
    context.sector_mappings = {101.0: "Basic Materials",
                               102.0: "Consumer Cyclical",
                               103.0: "Financial Services",
                               104.0: "Real Estate",
                               205.0: "Consumer Defensive",
                               206.0: "Healthcare",
                               207.0: "Utilities",
                               308.0: "Communication Services",
                               309.0: "Energy",
                               310.0: "Industrials",
                               311.0: "Technology"}


The country IDs follow the ISO standard for three-letter codes, documented on Wikipedia here: http://en.wikipedia.org/wiki/ISO_3166-1#Current_codes

Q's

• During backtest is the fundamentals data returned based on the current backtest date?
• Is get_fundamentals now available for live trading?

Thanks!

@Charlie,

The fundamentals data is stored using a "point in time" database. So we return data to the backtester only if it would be known to an investor at the particular point in time being simulated in the backtester. Since we only have monthly updates of historic data from 2002 - May 2014, we make some conservative assumptions. Not sure if this is the exact question you're asking. Let me know if I'm misunderstanding.
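
The point-in-time idea can be sketched with plain pandas (the dates and values below are invented): for any simulated date, only the most recent value whose as-of date has already passed is visible to the algorithm.

```python
import pandas as pd

# Invented history of one metric, with the dates each value became known
history = pd.DataFrame({
    "known_date": pd.to_datetime(["2013-01-31", "2013-02-28", "2013-03-31"]),
    "pe_ratio":   [14.0, 15.5, 13.2],
})

def as_of(df, sim_date):
    """Return the latest value an investor could have known on sim_date."""
    visible = df[df["known_date"] <= pd.Timestamp(sim_date)]
    return visible.iloc[-1]["pe_ratio"] if not visible.empty else None

# In mid-March 2013 the February figure is the freshest known value;
# the March figure is not yet visible
val = as_of(history, "2013-03-15")
```

Before the first known date there is simply no value, which is why a conservative point-in-time store returns nothing rather than leaking future data backwards.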

get_fundamentals() is available in paper trading (and can be used in the Quantopian Open contest) and will be available for real money trading very, very shortly.

Okay, sounds good. If it is coming to real trading it is definitely worth integrating (doing it now). Thanks, this is a really nice capability.

@Josh

Since we only have monthly updates of historic data from 2002 - May 2014,
so if we use get_fundamentals in a backtest, should we only backtest within this period?

I'm not recommending that, no.
After May 2014, those metrics update daily in our data set. I simply want you to be aware of the different update frequency.

That's pretty neat. But the drawdown seems to be more than 50%

For sure! I just meant this as a starting point algo for folks. I'd love to see if folks can improve it to smooth things out a bit.

Hi Josh, thanks for the extensive material you've provided on value strategies.

I see your algo's leverage keeps creeping up over time -- probably due to "dead" positions, something I've experienced in some of my own experiments as well. Is there a best-practice solution to get rid of these positions?

@Alex,

Here's a thread that has discussed various ways of handling this: https://www.quantopian.com/posts/when-a-company-gets-acquired-my-portfolio-still-owns-shares-of-the-original-company-is-that-right

If you put something in place in this algo, I'd love to see it here to compare.

Thanks!