Assigning Valuation Instead of Filtering

I sometimes find that fundamental filtering can really limit the number of stocks available to purchase when you have multiple criteria.

Is there a way to assign a "value" to each criterion, add those values up for each stock, and then sort by the total?

For example:

If pe_ratio had the following values:

0 - 10 = 5
10 - 12 = 4
12 - 15 = 3
15 - 20 = 2
20 - 30 = 1
30+ = 0
NA = 0

And pb_ratio had the following values:

0 - 1 = 5
1 - 2 = 4
2 - 3 = 3
3 - 4 = 2
4 - 5 = 1
5+ = 0

Could we then add each stock's "pe_ratio value" and "pb_ratio value" together and sort from highest to lowest?
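Something like this, perhaps? (A minimal pandas sketch with made-up numbers; the bins mirror the brackets above.)

import numpy as np
import pandas as pd

# Hypothetical ratios for a handful of stocks.
df = pd.DataFrame({
    'pe_ratio': [8.0, 11.0, 14.0, 25.0, np.nan],
    'pb_ratio': [0.8, 1.5, 2.5, 4.5, 6.0],
}, index=['A', 'B', 'C', 'D', 'E'])

# Bucket each ratio into its bracket score; NA falls through to 0.
pe_score = pd.cut(df['pe_ratio'], bins=[0, 10, 12, 15, 20, 30, np.inf],
                  labels=[5, 4, 3, 2, 1, 0]).astype(float).fillna(0)
pb_score = pd.cut(df['pb_ratio'], bins=[0, 1, 2, 3, 4, 5, np.inf],
                  labels=[5, 4, 3, 2, 1, 0]).astype(float).fillna(0)

# Add the per-criterion scores and sort from highest to lowest.
total = (pe_score + pb_score).sort_values(ascending=False)
print(total)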


Here's a trick that seems to work. Run two different fundamental queries, each selected and sorted by a different metric, and then find the intersection between them:

[Note: I'm a Python neophyte and admit there must be another way to perform an intersection on the two DataFrames, but it eludes me.]

def before_trading_start(context):
    # 'fundamentals', 'get_fundamentals' and 'update_universe' are provided
    # by the Quantopian environment; 'import pandas' is assumed at the top
    # of the algorithm.
    f = fundamentals
    marketCapFundy = get_fundamentals(
        query(
            f.valuation.market_cap
        )
        .filter(f.valuation.market_cap != None)
        .order_by(f.valuation.market_cap.desc())
        .limit(1000)
    )
    returnOnAssetsFundy = get_fundamentals(
        query(
            f.operation_ratios.roa
        )
        .filter(f.operation_ratios.roa != None)
        .order_by(f.operation_ratios.roa.desc())
        .limit(1000)
    )
    # Iterating over a get_fundamentals result yields its columns (the
    # securities), so the set intersection gives the common securities.
    context.fundamental_df = pandas.Series(list(
        set(marketCapFundy).intersection(set(returnOnAssetsFundy))))
    # Take the first 50 that show up in both lists
    context.fundamental_df = context.fundamental_df[:50]

    #: Update our universe with the intersection of symbols
    update_universe(context.fundamental_df.values)

What I do when I have lots of factors is just not limit my fundamental query at all. The 200-stock limit actually doesn't apply to the fundamental query, but to the update_universe method. So I just query for whatever variables I need with minimal or no filtering, then do the filtering on the resulting pandas DataFrame. Finally, a tail of the resulting DataFrame limits my universe before I set it.
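Roughly like this (a sketch only -- it assumes the classic get_fundamentals API, where the result comes back with metrics as rows and securities as columns, and the ratio cutoffs here are made up):

def before_trading_start(context):
    f = fundamentals
    # Query several metrics with minimal filtering and no limit.
    fundy = get_fundamentals(
        query(
            f.valuation_ratios.pe_ratio,
            f.valuation_ratios.pb_ratio
        )
        .filter(f.valuation.market_cap != None)
    )
    df = fundy.T  # transpose: securities as rows, metrics as columns
    # Do the real filtering in pandas, on as many columns as you like.
    df = df[(df['pe_ratio'] < 20) & (df['pb_ratio'] < 3)]
    # Sort, then take a tail to stay under the 200-security cap.
    df = df.sort('pe_ratio')  # .sort() on the era's pandas; .sort_values() today
    context.fundamental_df = df.tail(200)
    update_universe(context.fundamental_df.index)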

@MarketTech - do you think this strategy would work as well if you had four or five criteria?

@Chris C. -- give'r a try. I've added a third and I'm still getting at least 50 filtered in. At some point the intersection will run out of common members; I don't know when, though, or for which criteria.

How would the following line be adjusted to make it work with three intersections? (Sorry, I'm still very new to Python.)

    context.fundamental_df = pandas.Series(list(set(marketCapFundy).intersection(set(returnOnAssetsFundy))))  

Like this:

def before_trading_start(context):
    f = fundamentals
    marketCapFundy = get_fundamentals(
        query(
            f.valuation.market_cap
        )
        .filter(f.valuation.market_cap != None)
        .order_by(f.valuation.market_cap.desc())
        .limit(1000)
    )
    returnOnAssetsFundy = get_fundamentals(
        query(
            f.operation_ratios.roa
        )
        .filter(f.operation_ratios.roa != None)
        .order_by(f.operation_ratios.roa.desc())
        .limit(1000)
    )
    currentRatioFundy = get_fundamentals(
        query(
            f.operation_ratios.current_ratio
        )
        .filter(f.operation_ratios.current_ratio != None)
        .order_by(f.operation_ratios.current_ratio.desc())
        .limit(1000)
    )
    # Chain the intersections: securities common to all three queries.
    context.fundamental_series = pandas.Series(list(
            set(currentRatioFundy).intersection(
                set(marketCapFundy).intersection(
                    set(returnOnAssetsFundy)))))
    # Take the first 50 that show up in all three lists
    context.fundamental_series = context.fundamental_series[:50]

    #: Update our universe with the intersection of symbols
    update_universe(context.fundamental_series.values)

Thank you MarketTech - I'll definitely play around with this.

Does anyone else know how best to assign values to criteria instead of filtering?

Hi Chris,

I think this is an interesting use case, but I don't fully understand what you're looking to do. Are you attempting to score each security given a range for a criterion (e.g. P/E ratios between 5-10 get 1 point), sort them, and choose the ones with the most points?

Seong


@Chris C. What you're asking for is scale normalization. It's been done and explored here numerous times.

http://codepad.org/haCgJC9x

# Min-max scale a list of values onto the range 0-100.
mylist = [100.0, 42.0, 230.0, 30.0, 115.0]
mylist = sorted(mylist)
mymin = min(mylist)
mymax = max(mylist)
myrange = mymax - mymin
normalizedList = [round((x - mymin) / myrange * 100.0, 0) for x in mylist]

print(mylist)  
print(mymin)  
print(mymax)  
print(myrange)  
print(normalizedList)

#~~~~~~~~~~~~~~~~~~~~~~~~~

[30.0, 42.0, 100.0, 115.0, 230.0]
30.0  
230.0  
200.0  
[0.0, 6.0, 35.0, 43.0, 100.0]

@Seong Lee - yep, I'm looking to do exactly what you mentioned. I'd like to use multiple criteria in such a way that the universe of stocks you end up with hasn't necessarily eliminated a stock just because it doesn't fit one criterion.

For example, suppose there were two companies with the following criteria:

Company A:
P/E Ratio - 12.5
P/B Ratio - 0.9
PEG Ratio - 0.85

Company B:
P/E Ratio - 11.5
P/B Ratio - 1.9
PEG Ratio - 1.8

I'd argue that Company A offers much better value: it has a similar P/E ratio, but a much better earnings growth rate and more book value.

Yet if my filters for defining a "good value" company were:

P/E < 12
P/B < 2
PEG < 2

Then Company A would be eliminated while Company B (which is arguably worse value) would be eligible.

And even if the P/E filter were raised to, say, 15, then as long as I sorted the companies by lowest P/E and had a limit, Company B would still be chosen before Company A.

Whereas if values were instead assigned on the basis of:

P/E = 0 - 5 = 5
P/E = 5 - 7 = 4
P/E = 7 - 10 = 3
P/E = 10 - 13 = 2
P/E = 13 - 16 = 1
P/E = 16 - 20 = 0
P/E = 20 - 25 = -1
P/E = 25+ = -2

P/B = 0 - 0.5 = 5
P/B = 0.5 - 1 = 4
P/B = 1 - 1.5 = 3
P/B = 1.5 - 2 = 2
P/B = 2 - 3 = 1
P/B = 3 - 5 = 0
P/B = 5+ = -1

PEG = 0 - 0.5 = 4
PEG = 0.5 - 1 = 3
PEG = 1 - 1.5 = 2
PEG = 1.5 - 2 = 1
PEG = 2 - 3 = 0
PEG = 3+ = -1

So in the above case the companies would have the following scores:

Company A = 2 + 4 + 3 = 9
Company B = 2 + 2 + 1 = 5

So when the companies are sorted by the total value Company A should be seen as superior.

Is this possible?

Any python pros know how to code this?

This "value score" idea is great tool for an assessment of relative "value" of stocks. .

Perhaps a simple linear formula, something like this?
company value score = round((20 - PE)/5) + round(5.0 - P/B) + round(2*(2.5 - PEG))

Company A:
P/E Ratio - 12.5
P/B Ratio - 0.9
PEG Ratio - 0.85
company A value score = round((20 - 12.5)/5) + round(5.0 - 0.9) + round(2*(2.5 - 0.85)) = 2 + 4 + 3 = 9

Company B:
P/E Ratio - 11.5
P/B Ratio - 1.9
PEG Ratio - 1.8
company B value score = round((20 - 11.5)/5) + round(5.0 - 1.9) + round(2*(2.5 - 1.8)) = 2 + 3 + 1 = 6

Of course, P/E, P/B and PEG may not be directly comparable for stocks in diverse industry sectors, so maybe other factor(s) could be included?
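For reference, here is that formula as a small Python function (a hypothetical helper that just restates the arithmetic above; plain round() reproduces these results in both Python 2 and 3 for these inputs):

def value_score(pe, pb, peg):
    # Linear value score: cheaper ratios score higher.
    return round((20 - pe) / 5.0) + round(5.0 - pb) + round(2 * (2.5 - peg))

print(value_score(12.5, 0.9, 0.85))  # Company A -> 9
print(value_score(11.5, 1.9, 1.8))   # Company B -> 6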

@Chris C. This is where you have to get your hands bloody and unfortunately, as I've found, learning Python requires self-administered brain surgery.

The current Open contest is a perfect example of using multiple scaled metrics, combined into a composite score, to evaluate targets.

  • Take each metric from the fundamentals code above.
  • Create a collection for each: stock : metric value.
  • Scale-normalize the values per the codepad code above.
  • At this point you'll have (from above) three collections whose numbers all range from 0 to 100.
  • Perform the intersection now. This may require nested loops, or the existing pandas intersection technique (don't know).

That's the theory (a toy illustration of the scaling step follows).
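For illustration, here is that scale-normalization step applied to a plain dict of made-up stock : metric values:

metrics = {'AAA': 12.0, 'BBB': 30.0, 'CCC': 21.0}  # hypothetical values
lo, hi = min(metrics.values()), max(metrics.values())
scaled = {stock: (v - lo) / (hi - lo) * 100.0 for stock, v in metrics.items()}
# scaled -> {'AAA': 0.0, 'BBB': 100.0, 'CCC': 50.0}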

@Dave M. From my understanding of Chris's requirements, a single fundamental request can ONLY be sorted by a single metric, which ends up excluding the others as sorted filters. Although, Chris, you could use the filters to sort on a single column while also including only the best-of-breed selections for the other metrics you desire. Then you could use Dave M.'s idea, or you could build separate scale-normalized collections -- either way.

@Chris C. Well, that was fun. What would the software world do without StackOverflow eh?

Below is a method that does the following:

  • selects 3 different fundamental metrics, independently;
  • finds the intersection of the three and uses it as a filter later;
  • scales each metric from 0 to 1;
  • sums the metrics to produce a composite rank.

This is, just like every other technique in Python, just one way to produce a composite, scaled, fundamental-metric-based ranking of securities.

def before_trading_start(context):
    # Assumes 'import pandas' at the top of the algorithm, and that
    # MaxSecuritiesToTrade is a module-level constant defined elsewhere.
    f = fundamentals

    priceToBookFundy = get_fundamentals(
        query(
            f.valuation_ratios.pb_ratio
        )
        .filter(f.valuation_ratios.pb_ratio != None)
        .filter(f.valuation.market_cap >= 100000000)
        .order_by(f.valuation_ratios.pb_ratio.desc())
        .limit(1000)
    )

    priceToEarningsFundy = get_fundamentals(
        query(
            f.valuation_ratios.pe_ratio
        )
        .filter(f.valuation_ratios.pe_ratio != None)
        .filter(f.valuation.market_cap >= 100000000)
        .order_by(f.valuation_ratios.pe_ratio.desc())
        .limit(1000)
    )

    priceToSalesFundy = get_fundamentals(
        query(
            f.valuation_ratios.ps_ratio
        )
        .filter(f.valuation_ratios.ps_ratio != None)
        .filter(f.valuation.market_cap >= 100000000)
        .order_by(f.valuation_ratios.ps_ratio.desc())
        .limit(1000)
    )

    # Perform the intersection, which gives us the securities common to
    # all three query results (the columns of each returned frame).
    fundySeries = pandas.Series(list(
            set(priceToBookFundy).intersection(
                set(priceToEarningsFundy).intersection(
                    set(priceToSalesFundy)))))

    # Scale each fundamental metric from 0.0 -> 1.0: subtract the minimum,
    # then divide by the new (post-shift) maximum.
    try:
        priceToBookFundy     -= min(priceToBookFundy.min())
        priceToBookFundy     /= max(priceToBookFundy.max())
        priceToEarningsFundy -= min(priceToEarningsFundy.min())
        priceToEarningsFundy /= max(priceToEarningsFundy.max())
        priceToSalesFundy    -= min(priceToSalesFundy.min())
        priceToSalesFundy    /= max(priceToSalesFundy.max())
    except Exception:
        # Scaling can fail (e.g. on an empty frame); skip the day.
        return

    # Subselect only the securities that are common to all three frames
    priceToBookFundy     = priceToBookFundy[fundySeries]
    priceToEarningsFundy = priceToEarningsFundy[fundySeries]
    priceToSalesFundy    = priceToSalesFundy[fundySeries]

    # Stack the three one-row frames into a composite DataFrame
    # (metrics as rows, securities as columns)...
    compositeDeck = priceToBookFundy.add(priceToEarningsFundy, fill_value=0).add(priceToSalesFundy, fill_value=0)
    # ...then sum the scaled values down the rows into a single score
    compositeDeck = compositeDeck.sum(axis=0)

    # Rank largest to smallest and take the first X that show up
    # (.order() was the pandas API of the day; .sort_values() now)
    compositeDeck = compositeDeck.order(ascending=False)
    compositeDeck = compositeDeck[:MaxSecuritiesToTrade * 2]

    # Save to context for use in selection later
    context.securityRankDeck = compositeDeck

    #: Update our universe with the ranked intersection of securities
    update_universe(compositeDeck.index)

If I run this code for some start date, I'll get, in context.securityRankDeck, a Series that looks like this:

  context.securityRankDeck: Series
    0_Security(4152  [JNY]):  2.8738128834
    1_Security(7650  [TW]):   2.87175883622
    2_Security(438   [AON]):  2.81025635388
    3_Security(24076 [EZEM]): 2.77695518353
    4_Security(21964 [XEL]):  2.74124254465
    5_Security(6925  [SKYW]): 2.69893516546
    6_Security(2671  [EZPW]): 2.65566543039
...

@Market Tech and @Chris C.,
I've tried to take your discussion above and recreate the idea using the new Pipeline API. If you take a look at the attached backtest, here is what I've done:

  • Filtered down the universe from all 8000+ securities on any given day, to just the top 2000 by market cap
  • Gotten the latest PE, PB and PEG ratios for every remaining security, then ranked all the securities by each of those ratios
  • Calculated a composite rank of the ratio ranks, longed the top 200, shorted the bottom 200.

The thing I am not sure I have right is the rank order of the different ratios. I would love your thoughts.
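For readers without the attached backtest, a minimal sketch of this Pipeline approach might look like the following. This is not Karen's actual code: the field names follow Quantopian's morningstar dataset (peg_ratio availability assumed), and the long/short selection is illustrative.

from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.algorithm import attach_pipeline, pipeline_output

def initialize(context):
    # Top 2000 securities by market cap.
    universe = morningstar.valuation.market_cap.latest.top(2000)

    # Lower ratios read as "cheaper", so rank ascending: rank 1 = best value.
    pe_rank  = morningstar.valuation_ratios.pe_ratio.latest.rank(mask=universe, ascending=True)
    pb_rank  = morningstar.valuation_ratios.pb_ratio.latest.rank(mask=universe, ascending=True)
    peg_rank = morningstar.valuation_ratios.peg_ratio.latest.rank(mask=universe, ascending=True)

    # Composite rank: a simple unweighted sum of the three ordinal ranks.
    composite = pe_rank + pb_rank + peg_rank

    attach_pipeline(Pipeline(columns={'composite': composite}, screen=universe),
                    'value_rank')

def before_trading_start(context, data):
    ranks = pipeline_output('value_rank')['composite'].sort_values()
    context.longs  = ranks.index[:200]   # cheapest composite rank
    context.shorts = ranks.index[-200:]  # most expensive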

[Attached backtest omitted -- Backtest ID: 56146984cf4e4110b0940843]

Coolio Karen.

Whoever came up with this pipeline paradigm, good work. You deserve a raise. This one feature may end up being the star player in Quantopian's lineup. The power you wield here is substantial.

No idea about the rank orders of the details. But here, specifically, I would say that ordinal rank used as a proxy for weight has some problems: scaled weighting can still rank, but it won't hide vast discrepancies between data points. That's probably easily remedied, though. Also, not using the rank to apportion weight leaves a considerable amount of sizing information on the table. But again, easily remedied.
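For instance, a sketch of apportioning weight by scaled score rather than equal-weighting the top N (the scores here are made up):

import pandas as pd

# Hypothetical composite scores for three ranked securities.
scores = pd.Series({'AAA': 2.87, 'BBB': 2.81, 'CCC': 2.74})

# Size positions in proportion to score, not ordinal rank.
weights = scores / scores.sum()
print(weights)  # weights sum to 1.0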

Too bad I've lost interest here, I might be tempted to kick the tires a bit.