Free Cash Flow to Enterprise Value with FactSet Data - Template Fundamental Algo

Many of our funded authors have relied upon price-driven strategies. As we continue to evaluate and add algorithms to our portfolio, we will be especially interested in new strategies that take advantage of a broader range of fundamental factors.

FCF/EV

Free cash flow (FCF) is a measure of how much cash a company has on hand after all expenses are paid. High FCF indicates that more cash is available to the company for reinvestment. Dividing by the company's enterprise value (EV) gives a ratio that shows how much cash is generated per unit of firm value. This implementation tests the idea that companies with a relatively high FCF/EV ratio are likely to outperform companies with a relatively low FCF/EV ratio.
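For instance, with illustrative numbers (not taken from the FactSet data):

# Illustrative only: comparing two hypothetical companies on FCF/EV.
fcf_a, ev_a = 500e6, 10e9   # $500M free cash flow, $10B enterprise value
fcf_b, ev_b = 200e6, 10e9   # same EV, weaker cash generation

ratio_a = fcf_a / ev_a      # 0.05: 5 cents of cash per dollar of firm value
ratio_b = fcf_b / ev_b      # 0.02

# The template below ranks the tradable universe on this ratio and holds
# the relatively high-FCF/EV names long and the low ones short.
print(ratio_a, ratio_b)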

As we look to expand the set of algorithms receiving allocations over the next few months, we expect to give preference to new ideas that take advantage of a broader range of fundamental factors.

To get started, clone this algorithm, improve it with your own ideas, and submit it to the Quantopian Daily Contest.

N.B. As implemented here, this algo doesn't fully meet all of the criteria for entry in the daily contest, so we're leaving that as an "exercise for the reader".

Fundamental Sample Strategies Library

To see all of our fundamental sample strategies, please visit our new library post. We will be adding more templates in the future, so keep an eye on the "Algo Template" tag in the Quantopian forums: https://www.quantopian.com/posts/tag/Algo-Template/newest.

import numpy as np
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import factset

ZSCORE_FILTER = 3 # Maximum number of standard deviations to include before counting as outliers
ZERO_FILTER = 0.001 # Minimum weight we allow before dropping security
        

def initialize(context):
    
    algo.attach_pipeline(make_pipeline(), 'alpha_factor_template')

    # Schedule our rebalance function
    algo.schedule_function(func=rebalance,
                           date_rule=algo.date_rules.week_start(),
                           time_rule=algo.time_rules.market_open(),
                           half_days=True)

    # Record our portfolio variables at the end of day
    algo.schedule_function(func=record_vars,
                           date_rule=algo.date_rules.every_day(),
                           time_rule=algo.time_rules.market_close(),
                           half_days=True)


def make_pipeline():
    # The alpha factor: free cash flow divided by enterprise value (FCF/EV)
    alpha_factor = factset.Fundamentals.free_cf_fcfe_qf.latest / \
                   factset.Fundamentals.entrpr_val_qf.latest

    # Standardized logic for each input factor after this point
    alpha_w = alpha_factor.winsorize(min_percentile=0.02,
                                     max_percentile=0.98,
                                     mask=QTradableStocksUS() & alpha_factor.isfinite())

    alpha_z = alpha_w.zscore()
    # The division by 100 is cosmetic; weights are re-normalized in rebalance()
    alpha_weight = alpha_z / 100.0

    # Screen out z-score outliers and near-zero weights, using the
    # module-level thresholds defined at the top of the file
    outlier_filter = alpha_z.abs() < ZSCORE_FILTER
    zero_filter = alpha_weight.abs() > ZERO_FILTER

    universe = QTradableStocksUS() & \
               outlier_filter & \
               zero_filter

    pipe = Pipeline(
        columns={
            'alpha_weight': alpha_weight
        },
        screen=universe
    )
    return pipe


def before_trading_start(context, data):
    context.pipeline_data = algo.pipeline_output('alpha_factor_template')


def record_vars(context, data):
    # Plot the number of positions over time.
    algo.record(num_positions=len(context.portfolio.positions))
    algo.record(leverage=context.account.leverage)

    
def rebalance(context, data):
    # Retrieve pipeline output
    pipeline_data = context.pipeline_data
    
    alpha_weight = pipeline_data['alpha_weight']
    # Scale so that absolute weights sum to 1: gross exposure of 1.0,
    # i.e. fully invested with no leverage
    alpha_weight_norm = alpha_weight / alpha_weight.abs().sum()

    objective = opt.TargetWeights(alpha_weight_norm)

    # No constraints, want all assets allocated to
    constraints = []
    
    algo.order_optimal_portfolio(
        objective=objective,
        constraints=constraints
    )

24 responses

Interview with Michael Mauboussin on the merits and pitfalls of using EV/EBITDA multiples (close to the inverse of the FCF/EV yield above; I believe FCF is roughly EBITDA minus CapEx).

https://www.forbes.com/sites/kevinharris/2018/11/05/making-sense-of-multiples-mauboussin-on-evebitda/

Specific returns (as calculated by Quantopian at least) are zero over the entire period. Would that I could do further work on the calculation of "specific returns" but the research environment is, alas, not conducive to that.

On a brighter note, the Quantopian video Home Runs is well worth watching. Right or wrong, the takeaway is that there are a mere handful of factors driving stock prices and that relative simplicity is best. Chris Covington of High Vista talks much sense (in my terms at least). But perhaps that is just my own bias.

Incidentally, I realise of course that this algo is merely a very helpful example. Nonetheless, it is instructive to run it as is out of sample.

The other point which is very evident from these examples is that turnover has to be "manufactured" if the algorithm is to meet Quantopian's expectations. Of course it is obvious why they insist on certain turnover levels, and they state their reasoning: it is about sample size.

And yet if you are hoping to base your investment on a few simple fundamental factors, you are going to have to manufacture turnover by using a ratio which changes daily (like the PER) rather than balance-sheet items which change only once a quarter, like debt.

One problem in using ratios such as the PER is that they are bound, by definition, to cause ranking by sector, at least in recent years. Tech companies, for instance, have tended to have high PERs, and if you rank high PER as "long" then your book will be dominated by tech stocks. Rank the reverse and your selection will be dominated by value stocks.

I am tempted to wonder whether one can in fact have it all in one algo:

One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them

A further question I am led to this morning is: if it is possible to isolate and ascertain specific return, can you build an algorithm to express that specific return and nothing else? If you cannot, then I am left wondering whether the calculation of specific return is in fact valid. If you cannot isolate it in actual trading, then I rather doubt that it exists.

Incidentally, while my posts sound negative, I am in fact wholeheartedly in support of the drive to use "fundamentals" in quantitative investment. Price factors seem dominated by "momentum" - does any other price factor really exist? It would be nice to move away from price and towards fundamental factors, although of course price will itself reflect the fundamentals in due course.

"Price factors seem dominated by 'momentum' - does any other price factor really exist?"

"Mean-reversion" is a price factor as well.

I would add volatility as a price factor too, and would further say that it describes the "intensity or magnitude" of the prevailing momentum and mean-reversion conditions.

The way I look at it is: fundamentals describe an individual company's financial health, performance, stability, size, value, etc., and one can score companies on these various characteristics vis-a-vis their industry and/or the stock universe as a whole. Price factors, on the other hand, describe how the company's stock price behaves under the influence of market forces, i.e. the price equilibrium of supply and demand.

Yeah, in my opinion you kinda need both. Fundamentals is a great starting point (where I always start anyway), but you don't want to pay too dearly for great fundamentals. As the great Oracle of Omaha has been known to say: "Price is what you pay; value is what you get" and "It's better to buy a wonderful company [great fundamentals] at a fair price, than a fair company at a wonderful price."

In other words, one can't just look at fundamentals in isolation, but always needs to look at them in relation to the price being paid for them. Note, this is just the way I'm trying to analyze companies - I know it's not the only game in town.

Hi Joakim,

Yes, very true, you need both. What you described as your thought process, together with the quote from the great Oracle of Omaha, is the essence of value investing. And you're right to say that it is not the only game in town, especially with the developments in computing technologies, quantitative techniques and data-driven approaches. And I guess this is the exercise we are all undergoing here at Quantopian: to make new alpha discoveries away from what is already commonly known in the trading world.

Hi James,

Yeah, absolutely. I think it can be applied to 'growth' investing too though. One should be able to find 'value' in growth companies as well, if the 'priced in discounted growth rate' is significantly less (i.e. with a 'margin of safety') than one's own estimated (and hopefully actual) future growth rate (e.g. GARP investing).

Regarding ML, I'm very much a novice in this field so please correct me if I'm wrong. Would you agree though that ML is very good at detecting and acting on 'price patterns' (much better than humans), but that those price patterns can oftentimes be fitted mostly on noise, or on a particular market regime specific to the time series trained on? So, as I gathered from Ernie Chan's recent webinar, the key to using ML on financial time series might not be to predict future prices or returns, but to choose non-price-related features that may indirectly affect future prices (e.g. using ML to try to predict earnings surprises)?

Also, in your opinion, is 'Reinforcement Learning' one of the better types of ML algorithms for financial time series?

Yes, Joakim, ML algos are very good at detecting 'price patterns' but are also very susceptible to overfitting on noise because of the non-stationarity of financial time series, so one has to guard against these pitfalls through holdout data and cross-validation. Ernie Chan's suggestion to use non-price-related features for prediction is probably a valid alternative, but I think one could still use future returns, or transformations thereof, with the same efficacy, provided strict cross-validation processes guard against overfitting.
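(As a minimal sketch of that kind of time-ordered holdout validation, using scikit-learn's TimeSeriesSplit on stand-in data; none of this runs on the Quantopian platform itself:)

# Walk-forward (time-ordered) cross-validation: each fold trains strictly
# on the past and tests on the future, avoiding look-ahead leakage.
# X and y are random placeholders for features and forward returns.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.ensemble import GradientBoostingRegressor

X = np.random.randn(1000, 10)
y = np.random.randn(1000)

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print(np.mean(scores))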

RL is one of the better methods, as one can define the environment, the conditions, and the corresponding reward/penalty as the framework for the learning process. I also like deep-learning neural networks. I am currently experimenting, off the Q platform, with a hybrid technique that combines a convolutional NN, which specializes in static/image data (think of it as a memorizer of stock charts), with a Long Short-Term Memory (LSTM) NN, which specializes in the non-linearity and non-stationarity of the time series. I have had some success with it in Numerai competitions.
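(For the curious, a bare-bones sketch of such a CNN + LSTM hybrid in Keras; the window length, feature count and layer sizes are arbitrary placeholders, not the actual architecture described above:)

# Sketch of a CNN + LSTM hybrid for sequence data. All hyperparameters
# are placeholders, not a tested configuration.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

model = Sequential([
    # Conv1D detects local patterns across a 60-step window of 8 features
    Conv1D(32, kernel_size=5, activation='relu', input_shape=(60, 8)),
    MaxPooling1D(pool_size=2),
    # LSTM models the longer-range temporal structure of the sequence
    LSTM(32),
    Dense(1),  # e.g. a forward-return estimate
])
model.compile(optimizer='adam', loss='mse')
model.summary()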

Thanks James, very much appreciated!

Keen interests of mine as well, Joakim and James. While on the subject, you may find these useful:

Convolutional Neural Network Models for Time Series Forecasting

LSTM Models for Time Series Forecasting

ps: Jason Brownlee is an Australian ML & AI practitioner

Cheers

@ Karl,

Is it difficult to implement these ML methods in Quantopian with the long/short algo template?

Hi CcMm,

I have a page on Machine Learning with posts on applying ML by users on the Quantopian platform, and others.
On that page, there is also an article by Saeed Rahman on LinkedIn about using Deep Reinforcement Learning as "Reward Engineering" for "Alpha Combination" in the Quantopian workflow. Saeed's report and GitHub repository are linked at the end of the LinkedIn article.

There is also a video by Delany MacKenzie with Dr Tom Starke on "How Reinforcement Learning can be Applied to Quantitative Finance".

As for applying these ML methods, I have not implemented them directly in the Quantopian IDE at this point, although it may be possible/desirable to pipe your ML-processed signals in as Self-Serve Data for use in a Quantopian algorithm.

Hope this helps.

Maxwell

Can you explain the many, many transformations of the source of alpha?

Zscore - is this necessary in the case of a single alpha factor, which is in any case a ratio? Or is it built in so as to enable the addition of further factors?

alpha_weight = alpha_z / 100.0 - what is the purpose of dividing the z-score by 100? Just so you can work with very, very small alphas?

Why does the alpha factor need further normalizing, as in alpha_weight_norm = alpha_weight / alpha_weight.abs().sum()? Isn't the z-score already enough to make the alpha factors comparable?

I find it curious that this algo has exposure to only one style category, "volatility". I'm not sure how Quantopian defines this style factor, but perhaps the code is lying about somewhere.

Can anyone shed any light on this?

I think it also skews the calculation of specific return, which comes out as virtually zero. According to Quantopian's calculations, virtually all of this algo's returns are "common returns". For an algo with a standard deviation of 3%, a beta to SPY of 0.1, a minuscule drawdown and no exposure to any factor except volatility, this does not seem right.

Hey Zeno,

The division by 100 is definitely superfluous, more of an aesthetic thing. It isn't necessary, but it doesn't have any negative impact.

I both z-score and divide by the sum because I am using the TargetWeights objective with no constraints. Since I am not using any constraints, I wanted to make sure that my weights summed to 1. The z-scoring is to get weights proportional to my alpha values that lower the impact of outliers and the division by the sum makes sure the weights I have calculated will incur no leverage.
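Concretely, here is a toy illustration of what those two steps buy you (made-up numbers, not the algo's data):

# Z-scoring centers the alphas at zero; dividing by the absolute sum then
# gives weights whose absolute values total 1.0 (gross exposure of 1.0,
# i.e. no leverage), with negative z-scores becoming short positions.
import pandas as pd

alpha = pd.Series([0.08, 0.03, 0.01, -0.02, -0.06],
                  index=['A', 'B', 'C', 'D', 'E'])

alpha_z = (alpha - alpha.mean()) / alpha.std()
weights = alpha_z / alpha_z.abs().sum()

print(weights.abs().sum())  # 1.0 -> fully invested, unlevered
print(weights.sum())        # 0.0 here; in the algo, the outlier and
                            # zero-weight screens can break this exact
                            # dollar neutrality slightly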

Maxwell
Many thanks for this, most helpful. I assume, however, that the optimization procedure will not allocate to every single stock in the universe (hence the guarantee that the sum is 1)? Or perhaps it does? I am also wondering how/why the optimization would arrive at shorts, since I am assuming that would require a negative cashflow?

Can you shed any light on why the optimization routine ascribes all the return to "volatility"? Does this bring you to question your constraints at all? I can't see how this factor relates to volatility or a volatility strategy.

I would also be interested in your views on why this metric, this beta, has failed to produce profit in the past two years after such a spectacular run.

This metric captures the very essence of how to value a stock and works (at least in isolation) very much better than most other fundamental ratios.

Which leads me back to a point few seem to appreciate on this forum:

Stock prices are driven in the long term by fundamentals. Cashflow is one of the most basic and important fundamentals for valuing a stock. Fundamentals don't change: cashflow will always be important and a lack of cashflow will always (eventually) sink a company if left uncorrected.

In that event, may we assume this alpha will continue to perform in the future? Or do you suppose that some change has occurred within the US capital markets which means that the ratio of cashflow to enterprise value will no longer be relevant in valuing a stock?

I both z-score and divide by the sum because I am using the TargetWeights objective with no constraints. Since I am not using any constraints, I wanted to make sure that my weights summed to 1.

Actually, let me make my question on this more precise: does this mean that the optimization merely allocates proportionately if you sum to one?
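In other words, is the following toy picture essentially what happens with TargetWeights and an empty constraints list (my own numbers, not the optimizer's internals)?

# As I understand it, with TargetWeights and no constraints each target
# weight is applied directly to total portfolio value.
import pandas as pd

portfolio_value = 1_000_000
weights = pd.Series({'A': 0.35, 'B': 0.15, 'C': -0.20, 'D': -0.30})

dollar_targets = weights * portfolio_value
# Shorts (C, D) are negative dollar positions: the sale proceeds offset
# the cash spent on the longs, so no extra cash outlay is required.
print(dollar_targets)
print(dollar_targets.abs().sum())  # 1,000,000 of gross exposure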

Hi @James, @Joakim, @Karl, your comments above are all good ones and also all touch on some of my own favorite topics too.

Regarding value, growth and our favorite Oracle of Omaha: it is interesting to note that although Warren B started very much following the ideas of his own predecessor and master, he also expanded on them over time. While he (WB) is seen as predominantly a "Value" rather than a "Growth" investor, it is interesting that, in the details, his analysis method very much takes into account expectations of return on owners' equity and equity growth over time, and this is discussed nicely in Belafonte's book on Buffett.

I think the distinction between "Growth vs Value" stocks is an artificial one, and Joakim implies this as well in his comment about finding "value in growth", which is also similar to the idea of "Growth at a Reasonable Price".

As to exactly how best to do that, of course we all have somewhat different ideas and I think Ernie Chan's comment and Joakim's take on it are definitely good ones: "the key to using ML in financial time-series might NOT be to predict future prices or returns, but to choose non-price related features".

The way that ML is often used for discovering price patterns and making predictions from them has certainly been successful (profitable) in some cases, but it is fraught with problems, especially when ML is used to data-mine the difficult-to-see price patterns that soon get arbitraged away as more people discover them. So this way (the most common way) of using ML on price patterns probably offers little of long-term value.

I'm absolutely NOT against ML, but I do just think that there are better ways to use it, as Ernie implies. It is interesting to compare the market phenomena that lead only to ephemeral profits (such as being the first to see hidden price patterns that actually have no real long-term basis) with the phenomena that do continue to be profitable year after year, decade after decade, irrespective of how many people find them (such as trends based on delays in supply and demand, e.g. in the time it takes to bring new mines into production in response to demand for new tech-metal resources).

I think "ZenoTheStoic" (familiar face with a new name huh? Not the same Zeno as the one with the Paradox right?) also touches on this very well with his comment: "* Stock prices are driven in the long term by fundamentals. Cashflow is one of the most basic and important fundamentals for valuing a stock. Fundamentals don't change: cashflow will always be important and a lack of cashflow will always (eventually) sink a company if left uncorrected.*"

Maybe it is indeed "... a point few seem to appreciate on this forum", but nevertheless I think those of us talking here certainly do get it.

An interesting question is how well Karl (and others of us with an interest in it) can actually use ML in a better-than-usual way, such as by applying it to this all-important topic.

Cheers, all the best, from TonyM

Continuous learning for me, Tony :) and this adds to the steep curve:

Reinforcement Learning in the Presence of Nonstationary Variables posted by Paige Murphy

Conventional reinforcement learning is difficult, perhaps impossible to use "as is" in the context of financial trading, due to the presence of time-varying coefficients and nonstationary variables in the data. Common machine learning techniques assume the data distributions to be stationary, which is almost always false in financial contexts.

This talk explains in detail the nature of this problem, with Python code examples, and then provides a solution based on generative modeling and Monte Carlo simulations in a Bayesian context. By using an imagination-augmented reinforcement learning agent, we are able to train the agent to act in an optimal way even on historically unseen values of these stochastic, nonstationary coefficients and variables.

Thanks Quantopian - really looking forward to the webinar!

Hi Karl. I hope there will be a transcript or a recording of this video available, as unfortunately I can't manage it at the time it's on, but the info about it certainly looks interesting, especially the term "imagination-augmented reinforcement". Please keep us posted. Cheers, best regards.

What's the best way to modify the TargetWeights setup to go long one quantile and short the other four, i.e. long 20% of the universe and short the other 80%, while maintaining dollar neutrality?
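Something like this untested sketch, slotting into the template above, is what I have in mind (percentile_between is a built-in Pipeline factor method; the +0.5/-0.5 scaling is just one possible choice):

# In make_pipeline(): flag the top quintile of the alpha factor as longs
# and the bottom four quintiles as shorts, added as boolean columns
longs = alpha_factor.percentile_between(80, 100, mask=QTradableStocksUS())
shorts = alpha_factor.percentile_between(0, 80, mask=QTradableStocksUS())
# ... columns={'longs': longs, 'shorts': shorts}, screen=longs | shorts

# In rebalance(): equal weight within each side, +0.5 long / -0.5 short,
# so the book is dollar neutral with gross exposure of 1.0
df = context.pipeline_data
weights = (df['longs'] / df['longs'].sum() * 0.5
           - df['shorts'] / df['shorts'].sum() * 0.5)
algo.order_optimal_portfolio(objective=opt.TargetWeights(weights),
                             constraints=[])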

Perhaps that would be a suggestion for future library posts: different types of rebalances for more examples.

Thanks all.