Beta Constraint in Risk Model totally unnecessary

I have long suspected that the beta constraint in the risk model is unnecessary and might actually be detrimental to overall algo performance. I had the chance to confirm this when I stumbled upon a post by Grant and my subsequent findings here. The stock universe generally moves with the market, as represented by the benchmark SPY, and by the very nature of a long/short equity market-neutral framework, the effect of the longs is neutralized by the shorts, so it naturally mitigates getting a high beta to SPY.

A long/short equity market-neutral strategy works by taking long positions in stocks that are expected to outperform their peers and short positions in stocks expected to underperform. The positions are chosen so that the equity market exposure of the long side of the portfolio is offset by the exposure of the short side. This results in a strategy that is hedged to the aggregate stock market, supposedly insulating the fund from the major ups and downs in equities. In the Q framework this is already accomplished by the Dollar Neutral constraint, so adding the beta constraint struck me as overkill. To prove my point, I took Grant's algo, which shows a 0.18 beta that he tried to constrain to +- 0.05 but didn't work, and commented out the beta constraint. The result was an increase in beta from 0.18 to 0.21, still within the contest limits, but with increased returns and a reduced drawdown! This confirms my point that the beta constraint is not only unnecessary but, more importantly, curtails the potential for better returns and an improved drawdown.
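The offsetting-beta argument can be sketched with a toy simulation. Everything below is hypothetical and uses no Quantopian API: betas are drawn around 1.0 to reflect a universe that broadly tracks SPY, and the long/short sides are chosen at random with no beta bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical betas: empirically the universe is highly correlated with SPY,
# so individual stock betas cluster around 1.0.
betas = rng.normal(loc=1.0, scale=0.3, size=500)

# Dollar-neutral book at 1.0 gross exposure: half the capital long, half short,
# with sides picked at random (no deliberate beta bias on either side).
weights = np.zeros(500)
longs = rng.choice(500, size=250, replace=False)
shorts = np.setdiff1d(np.arange(500), longs)
weights[longs] = 0.5 / 250
weights[shorts] = -0.5 / 250

# Portfolio beta is the weighted sum of constituent betas; the long and short
# contributions largely cancel.
portfolio_beta = np.dot(weights, betas)
print(round(portfolio_beta, 3))  # small in magnitude
```

Because each side's average beta is near 1.0, the net beta of such a random dollar-neutral book lands well inside the +- 0.3 contest band, which is the intuition behind the post.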

To further confirm my point, I took Q's model algo that meets all risk model constraint requirements from here, again commented out the beta constraint, and the results are shown below. The beta went from -0.6 to -0.07, returns were about the same, but drawdowns improved. I invite the Quantopian staff or anyone else to disprove my points; maybe I'm missing something.

[Backtest attached; performance metrics table omitted]
import pandas as pd

import quantopian.algorithm as algo
import quantopian.optimize as opt

from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import builtin, Fundamentals, psychsignal
from quantopian.pipeline.factors import AverageDollarVolume, RollingLinearRegressionOfReturns
from quantopian.pipeline.factors.fundamentals import MarketCap
from quantopian.pipeline.classifiers.fundamentals import Sector
from quantopian.pipeline.experimental import QTradableStocksUS, risk_loading_pipeline

# Algorithm Parameters
# --------------------
UNIVERSE_SIZE = 500
MAX_GROSS_LEVERAGE = 1.0
MAX_SHORT_POSITION_SIZE = 0.01  # 1% of portfolio
MAX_LONG_POSITION_SIZE = 0.01   # 1% of portfolio

def initialize(context):
    # Universe Selection
    # ------------------
    base_universe = QTradableStocksUS()

    # From what remains, each month, take the top UNIVERSE_SIZE stocks by average
    # dollar volume traded.
    monthly_top_volume = (
        AverageDollarVolume(window_length=100)  # ~100-day liquidity lookback
        .top(UNIVERSE_SIZE, mask=base_universe)
        .downsample('month_start')
    )
    # The final universe is the monthly top volume &-ed with the original base universe.
    # &-ing these is necessary because the top volume universe is calculated at the start
    # of each month, and an asset might fall out of the base universe during that month.
    universe = monthly_top_volume & base_universe

    # Alpha Generation
    # ----------------
    # Compute Z-scores of free cash flow yield and earnings yield. 
    # Both of these are fundamental value measures.
    fcf_zscore = Fundamentals.fcf_yield.latest.zscore(mask=universe)
    yield_zscore = Fundamentals.earning_yield.latest.zscore(mask=universe)
    sentiment_zscore = psychsignal.stocktwits.bull_minus_bear.latest.zscore(mask=universe)
    # Alpha Combination
    # -----------------
    # Assign every asset a combined rank and center the values at 0.
    # For UNIVERSE_SIZE=500, the range of values should be roughly -250 to 250.
    combined_alpha = (fcf_zscore + yield_zscore + sentiment_zscore).rank().demean()
    beta = 0.66*RollingLinearRegressionOfReturns(
                    target=sid(8554),  # SPY
                    returns_length=5,
                    regression_length=260,
                    mask=combined_alpha.notnull() & Sector().notnull()
                    ).beta + 0.33*1.0

    # Schedule Tasks
    # --------------
    # Create and register a pipeline computing our combined alpha and a sector
    # code for every stock in our universe. We'll use these values in our 
    # optimization below.
    pipe = Pipeline(
        columns={
            'alpha': combined_alpha,
            'sector': Sector(),
            'sentiment': sentiment_zscore,
            'beta': beta,
        },
        # combined_alpha will be NaN for all stocks not in our universe,
        # but we also want to make sure that we have a sector code for everything
        # we trade.
        screen=combined_alpha.notnull() & Sector().notnull() & beta.notnull(),
    )
    # Multiple pipelines can be used in a single algorithm.
    algo.attach_pipeline(pipe, 'pipe')
    algo.attach_pipeline(risk_loading_pipeline(), 'risk_loading_pipeline')

    # Schedule a function, 'do_portfolio_construction', to run twice a week
    # ten minutes after market open.
    # (Scheduling calls reconstructed; the originals were lost in the forum paste.)
    algo.schedule_function(
        do_portfolio_construction,
        date_rule=algo.date_rules.week_start(),
        time_rule=algo.time_rules.market_open(minutes=10),
        half_days=False,
    )
    algo.schedule_function(
        do_portfolio_construction,
        date_rule=algo.date_rules.week_start(days_offset=2),
        time_rule=algo.time_rules.market_open(minutes=10),
        half_days=False,
    )

def before_trading_start(context, data):
    # Call pipeline_output in before_trading_start so that pipeline
    # computations happen in the 5 minute timeout of BTS instead of the 1
    # minute timeout of handle_data/scheduled functions.
    context.pipeline_data = algo.pipeline_output('pipe')
    context.risk_loading_pipeline = algo.pipeline_output('risk_loading_pipeline')

# Portfolio Construction
# ----------------------
def do_portfolio_construction(context, data):
    pipeline_data = context.pipeline_data

    # Objective
    # ---------
    # For our objective, we simply use our naive ranks as an alpha coefficient
    # and try to maximize that alpha.
    # This is a **very** naive model. Since our alphas are so widely spread out,
    # we should expect to always allocate the maximum amount of long/short
    # capital to assets with high/low ranks.
    # A more sophisticated model would apply some re-scaling here to try to generate
    # more meaningful predictions of future returns.
    objective = opt.MaximizeAlpha(pipeline_data.alpha)

    # Constraints
    # -----------
    # Constrain our gross leverage to 1.0 or less. This means that the absolute
    # value of our long and short positions should not exceed the value of our
    # portfolio.
    constrain_gross_leverage = opt.MaxGrossExposure(MAX_GROSS_LEVERAGE)
    # Constrain individual position size to no more than a fixed percentage 
    # of our portfolio. Because our alphas are so widely distributed, we 
    # should expect to end up hitting this max for every stock in our universe.
    constrain_pos_size = opt.PositionConcentration.with_equal_bounds(
        -MAX_SHORT_POSITION_SIZE,
        MAX_LONG_POSITION_SIZE,
    )

    # Constrain ourselves to allocate the same amount of capital to 
    # long and short positions.
    market_neutral = opt.DollarNeutral()
    # Constrain beta-to-SPY to remain under the contest criteria.
    beta_neutral = opt.FactorExposure(
        pipeline_data[['beta']],
        min_exposures={'beta': -0.05},
        max_exposures={'beta': 0.05},
    )

    # Constrain exposure to common sector and style risk
    # factors, using the latest default values. At the time
    # of writing, those are +-0.18 for sector and +-0.36 for
    # style.
    constrain_sector_style_risk = opt.experimental.RiskModelExposure(
        context.risk_loading_pipeline,
        version=opt.Newest,
    )

    # Run the optimization. This will calculate new portfolio weights and
    # manage moving our portfolio toward the target.
    algo.order_optimal_portfolio(
        objective=objective,
        constraints=[
            constrain_gross_leverage,
            constrain_pos_size,
            market_neutral,
            # beta_neutral,  # commented out for the experiment described in the post
            constrain_sector_style_risk,
        ],
    )

Typo: Should read "The beta went from -0.06 to -0.07..."

Grant's algo which shows a 0.18 beta that he tried to constrain to +- 0.05 but didn't work...

To me, this is the larger concern. If the Optimize API has a beta constraint of +/- 0.05 and a 2-year backtest yields a beta of 0.18, then it suggests something is amiss. It would be interesting to see an example of an algo for which the beta constraint causes the Optimize API to fail, due to the constraint not being met in the optimization.

@Grant, thanks for catching the typo. In your particular algo, what concerns me more is you are missing out on increased returns and improved drawdowns without the beta constraint while still being within the limits of +- 0.3.

@Karl, this is what Investopedia has to say about that, and I quote:

What is a 'Zero-Beta Portfolio'
A zero-beta portfolio is a portfolio constructed to have zero systematic risk or, in other words, a beta of zero. A zero-beta portfolio would have the same expected return as the risk-free rate. Such a portfolio would have zero correlation with market movements, given that its expected return equals the risk-free rate or a relatively low rate of return compared to higher-beta portfolios.


Hi Karl -

I'm not necessarily aiming for zero beta; beta of +/-0.3 might be best, as you point out. However, if the beta constraint as implemented by Q isn't doing anything or has a bug or I'm using it incorrectly, then it won't help to have it in the code.

@ James -

In your particular algo, what concerns me more is you are missing out on increased returns and improved drawdowns without the beta constraint...

One current interest of mine is to understand how to manage the drag imposed by:


The effect you are highlighting may simply be due to the additional churn created by the Optimize API beta constraint (although the larger drawdowns would not necessarily be explained by that).

Overall, I think one wants to place lots of small "bets" (e.g. rebalance daily), while managing trading costs. Presently, I'm not sure the Optimize API does this so well.

@Grant, the new slippage model is really a major drag, but if Q research says this is what they get in real terms then we all have to live with it. Placing lots of small bets while managing trading costs is something of an oxymoron, but it seems like the path to take under the Q framework, which seeks low volatility and drawdowns, a very tall order if you also seek returns above the risk-free rate.

Without knowing what's under the hood of the Optimize API, it's hard to tell or test whether it is (1) programmatically accurate and (2) whether the constraints are consistent with the desired objective. Take the case of the beta constraint: the Dollar Neutral constraint, by its very nature, already assures that the beta limits of +- 0.3 will not be breached, assuming a considerable number of stocks are traded long/short (not 2 or 6). Adding the beta constraint on top of that, I think, causes the logic to break down through (1) and/or (2) above.

Long/short equity market-neutral strategies have been around for quite some time now; my concern is whether they are executing it correctly within their framework. This is the second issue I've raised, the first being the computation of the Sharpe and Sortino ratios, and now this. Are there any more?

@ James -

first being the computation of Sharpe

Presumably, you are referring to:

Yes. Not sure why that never gained any traction. It shouldn't be called the Sharpe ratio if the risk-free rate is not incorporated; not referencing it doesn't make sense from an economic standpoint (what little I understand of economics).

@Grant, to determine the cost associated with FixedBasisPointsSlippage and minimum commissions, it is sufficient to run your strategy twice: once with, and once without. The difference will give an inkling as to the cost of having them on.

I did this for your Better Beta strategy over a 2-year period. The difference in performance: $979,318. That is more than half of what it could have made. So the impact is not trivial and should always be accounted for, no matter what kind of trading strategy we might want to use.

There are expenses to trading; better to account for them at the planning stage than to see them massacre a portfolio when it goes live, if it ever gets there.

@Grant, yes, that was exactly what I was referring to. After back-and-forth wrangling with Thomas W., I thought I had finally convinced him, and I quote:

Thanks, James, I get where you're coming from. We'll keep an eye on it and will adjust accordingly.

So for now, I'm just keeping quiet and waiting... I agree with you, they shouldn't call it Sharpe if they insist on not subtracting the risk-free rate. The answer is as I quoted above from Investopedia: "A zero-beta portfolio would have the same expected return as the risk-free rate." So by extrapolation, if you subtract the risk-free rate, you get a donut-hole Sharpe ratio, LOL!

No takers? Just bumping this up. Calling on the Q authors of the Risk Model: what am I missing?

@Luca, I am not mixing the two concepts together. Don't you think I know that these are two different things? READ THE TITLE! From a teaching point of view??? Let me put it in a way a very smart guy like you can understand. Empirical studies show that the stock universe generally moves with the market, here represented by SPY, so they are highly correlated. The strategy employed by Q is what the industry calls long/short market neutral, designed to neutralize exposure to market risk by holding almost equal amounts of longs and shorts, usually by the hundreds (not a two-stock example). The Dollar Neutral constraint achieves this. But since the stock universe is highly correlated, the beta-neutral constraint is no longer needed, because the effect of the longs is offset by the effect of the shorts, which almost always guarantees beta neutrality. There is no need for the redundancy, especially if the consequence is, as shown in Grant's algo example, a curtailment of improved returns and improved drawdowns. And that is the difference between a teacher like you and an industry practitioner like me.

@Luca, regardless of whether I misinterpreted you on the "teaching point of view," you are still missing the point. First of all, this is not in the context of teaching but in the context of the contest and its parameters. If this is how Q teaches structuring a long/short market-neutral strategy, and if I am right, then they are teaching it wrong, capisce! That is why I want them to prove me wrong, so I can shut up already and readily admit that I am wrong. But if I am right, imagine all those algos in the contests that might have been deprived of improved returns and drawdowns, which could have altered the final results, because of this unnecessary beta constraint. I am doing this to actually help the Q model, just as you would report a bug you found in code. It is up to them to verify and report their findings.

@Luca, but the beta constraint as it stands does not work, as illustrated by Grant's algo, where he tried to constrain beta to +- 0.05 and it still gave him 0.18. I also tried this in my personal algos, and it is not constraining beta to what I set it to. Yes, I agree we need more examples. You can start with your own algos: (1) try to constrain beta to +- 0.05 and see if it works; then (2) comment out the beta constraint and see what happens.

@Luca, first of all, removing the beta constraint will never guarantee improved performance in all algos, because a bad algo will always be a bad algo, with or without constraints. What is more important is the prospect of a good algo improving when the beta constraint is removed; this could be a game changer. Yes, there are two parameters in beta, but in the context of the contest these should be fixed (currently one year of daily returns) because the threshold is specified as +- 0.3. Otherwise, there will be inconsistencies if these parameters are loosely changed. As to the optimality of these parameters, your guess is as good as mine. As to whether changing the values of these parameters would make the beta constraint work, that's the job of the Q team, but I highly doubt it.

@Karl, I tried your suggestion, but I already knew the results would be the same as commenting out the beta constraint, because setting the beta bounds to +- 1 is the same as not having the beta constraint at all. But for your viewing pleasure, here it is:

[Backtest attached; performance metrics table omitted]
from quantopian.algorithm import attach_pipeline, pipeline_output, order_optimal_portfolio
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import SimpleBeta, AnnualizedVolatility
import quantopian.optimize as opt
from sklearn import preprocessing
from quantopian.pipeline.experimental import QTradableStocksUS, risk_loading_pipeline
from scipy.stats.mstats import winsorize
import numpy as np
import pandas as pd

# Algo parameters
N_LEADING = 5 # days
N_TRAILING = 30 # days
EPSILON = 1.003
WIN_LIMIT = 0.00
NUM_TOTAL_POSITIONS = 500 # (value elided in the forum paste; assumed)

# Optimize API parameters
MAX_POSITION_SIZE = 0.025 # absolute value
MAX_BETA_EXPOSURE = 1.0 #0.05 # absolute value

def initialize(context):
    # set_commission(commission.PerShare(cost=0, min_trade_cost=0))
    # set_slippage(slippage.FixedSlippage(spread=0))
    schedule_function(record_out, date_rules.every_day(), time_rules.market_close())
    schedule_function(get_weights, date_rules.every_day(), time_rules.market_open(minutes=60))
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(minutes=60))
    attach_pipeline(make_pipeline(context), 'my_pipe')
    attach_pipeline(risk_loading_pipeline(), 'risk_loading_pipeline')
def make_pipeline(context):
    # (The liquidity factor was truncated in the forum paste; AverageDollarVolume
    # is an assumed stand-in.)
    from quantopian.pipeline.factors import AverageDollarVolume
    universe = (
        AverageDollarVolume(window_length=30)
        .top(NUM_TOTAL_POSITIONS, mask=QTradableStocksUS())
    )
    beta = SimpleBeta(target=sid(8554), regression_length=260)

    pipe = Pipeline(columns = {'beta': beta},
                    screen = universe)
    return pipe
def before_trading_start(context,data):
    context.pipeline_data = pipeline_output('my_pipe')
    context.stocks = list(pipeline_output('my_pipe').index.values)
    context.risk_loading_pipeline = pipeline_output('risk_loading_pipeline')
def record_out(context, data):
    num_secs = 0
    for stock in context.portfolio.positions.keys():
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
    record(num_secs = num_secs)
    record(leverage = context.account.leverage)

def get_weights(context,data):
    prices = data.history(context.stocks, 'price', 390*N_TRAILING, '1m').dropna(axis=1)
    context.stocks = list(prices.columns.values)
    prices = prices.ewm(ignore_na=False,min_periods=0,adjust=True,com=78).mean()
    m = len(context.stocks)
    b_t = np.zeros(m)
    for i, stock in enumerate(context.stocks):
        b_t[i] = abs(context.portfolio.positions[stock].amount*data.current(stock,'price'))
    denom = np.sum(np.absolute(b_t))
    # test for divide-by-zero case
    if denom > 0:
        b_t = b_t/denom
    else:
        b_t = 1.0*np.ones(m)/m
    a = np.zeros(m)
    b = np.zeros(m)
    for n in range(5*N_LEADING,5*N_TRAILING+1):
        p = prices.tail(n*78).as_matrix(context.stocks)
        p_mean = np.mean(p,axis=0)
        p_rel = p_mean/p[-1,:]
        p_rel[p_rel<1] = 0
        a += preprocess(get_opt(p_rel,np.sign(p_rel)*b_t))
        p_rel = p[-1,:]/p_mean
        p_rel[p_rel<1] = 0
        b += preprocess(get_opt(p_rel,np.sign(p_rel)*b_t))
    a = preprocess(a)
    a[a<0] = 0
    b = preprocess(b)
    b[b<0] = 0
    context.weights = pd.Series(preprocess(a-b),index=context.stocks)
    context.weights = context.weights - context.weights.mean()
    context.weights = context.weights/context.weights.abs().sum()
def get_opt(x_tilde,b_t):

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod = np.dot(b_t, x_tilde)
    num = EPSILON - dot_prod
    denom = (np.linalg.norm((x_tilde-x_bar)))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0 # no portfolio update
    else:
        lam = max(0, num/denom)
    b = b_t + lam*(x_tilde-x_bar)

    b_norm = simplex_projection(b)
    return b_norm*np.dot(b_norm,x_tilde)

def rebalance(context, data):
    pipeline_data = context.pipeline_data
    objective = opt.TargetWeights(context.weights)
    constraints = []
    beta_neutral = opt.FactorExposure(
        pipeline_data[['beta']],
        min_exposures={'beta': -MAX_BETA_EXPOSURE},
        max_exposures={'beta': MAX_BETA_EXPOSURE},
    )
    constraints.append(beta_neutral)
    risk_model_exposure = opt.experimental.RiskModelExposure(
        context.risk_loading_pipeline,
        version=opt.Newest,
    )
    constraints.append(risk_model_exposure)
    # (Order call reconstructed; it was truncated in the forum paste.)
    order_optimal_portfolio(objective=objective, constraints=constraints)
def simplex_projection(v, b=1):
    """Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT
Optimization Problem: min_{w}\| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = z, w_{i}\geq 0

Input: A vector v \in R^{m}, and a scalar z > 0 (default=1)
Output: Projection vector w

>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
    """

    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p+1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho+1)])
    w = (v - theta)
    w[w<0] = 0
    return w

def preprocess(a):
    a = np.nan_to_num(a - np.nanmean(a))
    a = winsorize(a,limits=(WIN_LIMIT,WIN_LIMIT))
    a = a/np.sum(np.absolute(a))
    return preprocessing.scale(a)

One thing I think I'm seeing is that low-volatility stocks make it easier to achieve a low beta, whereas high-volatility ones make it harder. I've started playing around with:

    # (The volatility factor was elided in the original post; AnnualizedVolatility
    # is an assumed stand-in consistent with the surrounding discussion.)
    vol = AnnualizedVolatility(mask=QTradableStocksUS())
    universe_top = (
        vol.percentile_between(92.5, 100))
    universe_bottom = (
        vol.percentile_between(0, 7.5))
    universe = (universe_top | universe_bottom)

The best of both worlds?

Yes, Grant. Individual stock volatility is directly correlated with its beta to SPY. Again, since the stock universe is highly correlated with SPY, stocks with low volatility will tend not to veer too much from the movement of SPY, and vice versa.
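The volatility-beta link follows from the regression definition: beta = correlation * (sigma_stock / sigma_market). A quick numeric check on simulated returns (all values hypothetical, nothing Quantopian-specific):

```python
import numpy as np

rng = np.random.default_rng(1)

# One year of simulated daily returns: a "market" series and a stock that
# loads on it plus idiosyncratic noise.
n = 252
market = rng.normal(0.0005, 0.01, n)
stock = 1.2 * market + rng.normal(0.0, 0.015, n)

# Beta as the OLS regression slope: cov(stock, market) / var(market).
beta = np.cov(stock, market, ddof=1)[0, 1] / np.var(market, ddof=1)

# Equivalent decomposition: beta = correlation * (sigma_stock / sigma_market).
# For a fixed correlation, a higher-volatility stock necessarily carries a
# higher beta, which is the link noted above.
rho = np.corrcoef(stock, market)[0, 1]
beta_decomposed = rho * np.std(stock, ddof=1) / np.std(market, ddof=1)

print(round(beta, 4), round(beta_decomposed, 4))  # the two forms agree
```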

Base portfolio expectation equation: E[R(p)] = r_f + β∙(E[R(m)] – r_f). It gives a portfolio's expected return as a beta-scaled premium over the risk-free rate.

This translates to: E[R(p)] = r_f + β∙E[R(m)] – β∙r_f = β∙E[R(m)] + (1 – β)∙r_f. A beta of one gives E[R(m)], the expected market return, as expected.

Beta-neutral can be expressed as hedging the portfolio:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) – β∙E[R(m)] = r_f – β∙r_f = (1 – β)∙r_f

Market-neutral has half long and half short:

E[R(p)] = r_f + 0.50∙β∙(E[R(m)] – r_f) – 0.50∙β∙E[R(m)] = r_f – 0.50∙β∙r_f = (1 – 0.50∙β)∙r_f

The added restriction does have an impact, even if it could be minor.

Those are the numbers to expect.

This means that the beta-neutral, market-neutral stuff does not have that high a return expectancy. It will offer equity line stability (low volatility, beta and drawdowns) but at a cost.

@Guy, I believe your formula on beta neutral is incorrect:

Beta-neutral can be expressed as hedging the portfolio:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) – β∙E[R(m)] = r_f – β∙r_f = (1 – β)∙r_f

It should be: E[R(p)] = r_f

This is consistent with Investopedia's definition:

What is a 'Zero-Beta Portfolio'
A zero-beta portfolio is a portfolio constructed to have zero systematic risk or, in other words, a beta of zero. A zero-beta portfolio would have the same expected return as the risk-free rate. Such a portfolio would have zero correlation with market movements, given that its expected return equals the risk-free rate or a relatively low rate of return compared to higher-beta portfolios.

And @Guy, your market neutral formula is also incorrect:

Market-neutral has half long and half short:

E[R(p)] = r_f + 0.50∙β∙(E[R(m)] – r_f) – 0.50∙β∙E[R(m)] = r_f – 0.50∙β∙r_f = (1 – 0.50∙β)∙r_f

It should be:
E[R(p)] = r_f + 0.50∙β∙(E[R(m)] – r_f) – 0.50∙β∙E[R(m)-r_f] = r_f

Ergo, your conclusion, "The added restriction does have an impact, even if it could be minor", is also incorrect.

However, your statement, "This means that the beta-neutral, market-neutral stuff does not have that high a return expectancy. It will offer equity line stability (low volatility, beta and drawdowns) but at a cost" is correct!
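The disputed algebra is easy to check numerically: hedging out β times the market *excess* return, β∙(E[R(m)] – r_f), leaves exactly r_f. The values below are purely illustrative.

```python
# Numeric check of the hedging algebra, with illustrative values only.
r_f = 0.02    # risk-free rate
e_rm = 0.08   # expected market return
beta = 0.9    # portfolio beta before hedging

# Unhedged CAPM expectation: r_f + beta*(E[Rm] - r_f)
unhedged = r_f + beta * (e_rm - r_f)

# Beta-hedged: subtract beta times the market *excess* return,
# beta*(E[Rm] - r_f). Exactly the risk-free rate remains.
beta_hedged = unhedged - beta * (e_rm - r_f)

# Market-neutral, half long and half short: each half hedges its own exposure.
market_neutral = r_f + 0.5 * beta * (e_rm - r_f) - 0.5 * beta * (e_rm - r_f)

print(beta_hedged, market_neutral)  # both reduce to r_f
```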

@ James -

Luca must have deleted his posts (hmm?), but I think his key insight is that beta is a trailing indicator. And so while the Optimize API can satisfy a constraint (e.g. beta = +/- 0.05) based on a trailing window beta estimate, it doesn't stick going forward. Out of the QTradableStocksUS() hypothetically there is a worst-case set of N stocks and a best-case set, for which the SimpleBeta and the corresponding Optimize API constraint will appear to be effective. It is a mirage though, if the beta factor is not predictive; building a better beta requires improving the predictability of the beta factor. One solution might be to formulate a more predictive short-term beta factor based on minute bars and feed it to the Optimize API.

Dollar-neutrality is another matter, since for a large number of N stocks, one would expect to be able to get reasonably close by going long on N/2 of them, and short on the remaining N/2. Assuming daily re-balancing, by the end of the trading day, the probability of dollar-neutrality being off by much would be very low, for large N (e.g. N ~ 100), since the dollar-neutrality calculation is based on current prices. However, it doesn't necessarily guarantee low beta; I think this depends on the universe and how one divides it up, long and short.
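The caveat that dollar-neutrality does not by itself guarantee low beta when the two sides are beta-biased can be illustrated with a toy book (all betas and weights hypothetical):

```python
import numpy as np

# Toy book: dollar-neutral by construction, but the long side deliberately
# holds higher-beta names than the short side.
long_betas = np.full(250, 1.4)    # e.g. high-beta names
short_betas = np.full(250, 0.8)   # e.g. low-beta defensives

weights_long = np.full(250, 1.0 / 250)
weights_short = np.full(250, -1.0 / 250)

net_dollar = weights_long.sum() + weights_short.sum()
net_beta = np.dot(weights_long, long_betas) + np.dot(weights_short, short_betas)

print(round(net_dollar, 12))  # 0.0: perfectly dollar-neutral
print(round(net_beta, 2))     # 0.6: far outside a +-0.3 beta band
```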

Regarding the value of a low-beta strategy, my understanding is that it is important in the hedge fund domain. It basically comes down to the fact that buying beta is dirt-cheap, and so if a strategy relies on it, it is just money flying out the window. Customers will get it elsewhere a lot cheaper.

@Luca, why did you delete your posts? While it is your option to do that, it is not fair to those reading this thread. You come into my thread, engage me in a discussion, and after raising the white flag you delete your posts. I do not purport to have a monopoly on knowledge; that is why I like having these discussions and debates, because it is through them that we get further insights and then discovery. So now I take it that you've been schooled!

@Grant, when Luca starts invoking the standard waiver of "past performance does not guarantee future results," I cringe, because it is a total cop-out. Beta is not designed to be predictive but rather to show the one-year historical relationship between an individual stock's daily returns and those of SPY. The key here is that the measurement window is one year. And based on this measurement, empirical studies have shown that individual stock price movement is highly correlated with that of the market (SPY). On a minute, one-day, or even one-week basis, this relationship can swing wildly, mainly due to time lag and reaction time to market moves, among other things.
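The point about measurement windows can be shown offline: on simulated returns with a fixed true beta, the one-year estimate is comparatively stable while short-window estimates of the very same series swing widely. All numbers below are simulated, not Quantopian data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two years of simulated daily returns with a fixed true beta of 1.0.
n = 504
market = rng.normal(0.0004, 0.01, n)
stock = 1.0 * market + rng.normal(0.0, 0.012, n)

def trailing_beta(s, m):
    """OLS slope of stock returns on market returns over a trailing window."""
    return np.cov(s, m, ddof=1)[0, 1] / np.var(m, ddof=1)

# The standard one-year measurement sits close to the true value...
beta_1y = trailing_beta(stock[-252:], market[-252:])

# ...while one-week estimates of the same series swing wildly, which is one
# reason a constraint applied to a trailing estimate may not "stick" forward.
betas_1w = [trailing_beta(stock[i:i + 5], market[i:i + 5])
            for i in range(0, n - 5, 5)]

print(round(beta_1y, 2), round(float(np.std(betas_1w)), 2))
```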

The other thing to understand is that a long/short market-neutral strategy is not designed to be a standalone strategy, but rather a small component of a multi-asset portfolio, serving as a mitigator/hedge against market exposure risk. In a multi-tiered return/risk spectrum, market-neutral strategies sit one notch above the risk-free rate.

@James: I only read your top comment, so excuse me if this is redundant and already addressed below. If I understand you correctly, your hypothesis is that stocks with similar beta in a balanced long/short portfolio (net-zero) should naturally lead to a beta-neutral portfolio, so why add it as an extra constraint?

I guess I see two related problems with that theory:
1. It's only true if the stocks indeed have similar beta and the long/short sides are not beta-biased.
2. Your examples show that these algos actually exist and by removing the beta constraint in the optimizer, beta goes up. It's not surprising that returns increase if you allow for more beta as the stock-market has mostly gone up. Another example is the ML algo which is short-beta biased. I haven't yet found out how it picks that beta-bias up but the algo is certainly net-zero.

So without the beta-constraint users would be incentivized to submit positively beta-biased long/short algorithms that hit the upper beta limit to bank on the market continuing to go up.

Finally, even if that hypothesis were true, the beta constraint would just be a no-op in that case and thus irrelevant if we included it.

Let me know if I misunderstood / mischaracterized your hypothesis.



@Thomas, first of all, thanks for your quick response. As to your first problem:

  1. It's only true if the stocks indeed have similar beta and the long/short sides are not beta-biased.

This we have to prove with numbers, and I will give guidance on how we can do this later. My theory hinges on empirical studies showing that, by standard beta measurement (daily stock returns over one year), the stock universe is highly correlated with the market as represented by SPY. Given this relationship, the Dollar Neutral constraint will naturally almost always guarantee beta neutrality, or at the very least stay within the +- 0.3 beta threshold, because the betas of the longs cancel or offset those of the shorts.

Now, as to guidance on how we can prove or disprove this theory (I would have done it myself, except my computing power and Python skills are limited), I propose:
1) Run a pipeline on QTradableStocksUS.
2) Randomly select 250 longs and 250 shorts.
3) Compute SimpleBeta (I understand there is a super-fast vectorized version).
4) Run 100 times (or as many as you like) through the Optimize API with all constraints except beta.
5) Finally, see how many times it breaches the +- 0.3 beta threshold.
If this is a significant number, then I am totally wrong.
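As a rough offline approximation of steps 1) through 5): the sketch below uses simulated betas clustered around 1.0 in place of a real SimpleBeta pipeline and skips the Optimize API entirely, so it only tests the random-selection part of the claim.

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1 stand-in: hypothetical SimpleBeta values for a QTradableStocksUS-sized
# universe, clustered around 1.0 (the empirical claim in the post).
universe_betas = rng.normal(loc=1.0, scale=0.3, size=2000)

trials = 100
breaches = 0
for _ in range(trials):
    # Step 2: randomly pick 250 longs and 250 shorts, equal-weighted,
    # dollar-neutral at 1.0 gross exposure (0.5 long, 0.5 short).
    picks = rng.choice(2000, size=500, replace=False)
    longs, shorts = picks[:250], picks[250:]
    # Step 5: net beta of the book is the difference of the side averages,
    # scaled by each side's 0.5 gross weight.
    net_beta = 0.5 * (universe_betas[longs].mean() - universe_betas[shorts].mean())
    if abs(net_beta) > 0.3:
        breaches += 1

print(breaches)  # 0 under these assumptions: random books essentially never breach
```

With 250 names per side, the standard deviation of the net beta is only about 0.013 here, so a +- 0.3 breach is a many-sigma event; the real question is whether actual alpha factors introduce a systematic beta bias that this random selection cannot capture.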

On your second problem:

  2. Your examples show that these algos actually exist and by removing the beta constraint in the optimizer, beta goes up. It's not surprising that returns increase if you allow for more beta as the stock-market has mostly gone up. Another example is the ML algo which is short-beta biased. I haven't yet found out how it picks that beta-bias up but the algo is certainly net-zero.

Yes, it does increase beta, but it does not breach the +-0.3 threshold, at least in my two examples and so far in my personal algos. The prospect alone of improved returns and drawdowns without this beta constraint, as illustrated in the two examples above, is crucial if I'm proven right. Secondly, as Grant and I both found out, the beta constraint does not work, or at least gives you an Infeasible error when the set parameter is breached. In the case of Grant's algo, he tried to constrain beta to within +-0.05 but still got a beta of 0.18 in the end. I tried it too in my personal algos, and it seems that it does not work operationally. Why is this so? Is there a problem programmatically with the Optimize API, or does the logic break down because of inconsistencies among the various constraints as they relate to the desired objective?

On your last comment:

Finally, even if that hypothesis were true, the beta-constraint would just be a noop in that case and thus irrelevant if we included it.

Not if it curtails the prospect of increased returns and improved drawdowns.

Thanks for the thoughtful response.

I agree with your experiment proposal, that would be quite interesting. Do note, however, that it's possible, especially for ML algos, to latch onto alphas that are correlated to the market and essentially find that the market goes up and directly inject a beta bias (I think something like this is happening to the ML algo I posted a while ago).

I feel like the operational problem you point to is much more important, though. And it's possible that there is a bug which we should definitely fix. We'll look into it.

The optimizer constraint also should only kick in if the beta limit would be breached, but not before. If it doesn't do that, this also seems like a bug to be fixed. But if we think about it as a fail-safe in case the portfolio does pick up beta (or gets pushed over the edge due to noise), I think you wouldn't disagree that's useful?

@James, if you look at the base equation again, you have:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) = r_f + β∙E[R(m)] – β∙r_f

and with β = 0, you will have: E[R(p)] = r_f. So Investopedia's, Wikipedia's, the CAPM's, and the academic definitions all stand.

If you set β = 1, you will get: E[R(p)] = E[R(m)], which again stands. It is why buying index funds is so cheap. They hope to match the market's long-term performance. But anybody can do that by just buying a tracking index fund such as SPY.

You change the picture once you want to hedge. Because in that case, you are not hedging the premium, you are hedging the market. You are hedging price movements. And therefore, the equation becomes:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) – β∙E[R(m)], which, when expanded, gives:

E[R(p)] = r_f + β∙E[R(m)] – β∙r_f – β∙E[R(m)], which invariably leads to:

E[R(p)] = r_f – β∙r_f = (1 – β)∙r_f

The outcome will be a low-beta portfolio, and β will only slightly affect the long-term expected return: the term (1 – β)∙r_f approaches r_f as β tends to zero.

Conclusion, every equal sign stands.
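A short numeric check (with illustrative inputs r_f = 2% and E[R(m)] = 8%, assumptions chosen for arithmetic, not figures from this thread) confirms that the market terms cancel identically, whatever market return you assume:

```python
# Numeric check of the hedged-CAPM identity derived above:
# r_f + beta*(E[Rm] - r_f) - beta*E[Rm]  ==  (1 - beta)*r_f
r_f, erm = 0.02, 0.08   # illustrative risk-free rate and market return

for beta in (0.0, 0.3, 1.0):
    hedged = r_f + beta * (erm - r_f) - beta * erm   # long CAPM leg, short beta of market
    assert abs(hedged - (1 - beta) * r_f) < 1e-12    # identity holds for any erm
    print(f"beta={beta:.1f}: E[R(p)] = {hedged:.4f}")
```

Whatever value you pick for E[R(m)], it drops out of the result, which is the whole point of the hedge.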

As you said, such a strategy would be part of a portfolio of strategies, used to dampen part of the volatility excursions of the other strategies. These types of strategies can have a major role to play for the trillions of dollars that sit idle all over the planet, requiring high liquidity while providing some minimum positive return.

@Guy, got you, thanks!

@Thomas, please link me to the ML algo you're referring to, I'd like to have a look at it.

I hope you have a go at the proposed experiment, though I agree the Optimize API issue is more crucial. Actually, they're both interrelated, and exploring them might uncover some very interesting outcomes. Thanks again.

@James: I won't have time to do that experiment. Shouldn't be too difficult though for a user to pick it up.

@Thomas, hopefully a user picks it up. But I'm sure you have numerous research staff and ample computing resources that could spare an hour or two on something that might reveal meaningful results and lead to improvements in the Q model.

I guess the biggest issue I see is the Optimize API beta constraint seemingly not doing its job, keeping beta within limits. I'd been happily over-fitting my way to a stupendous contest entry when I realized that the beta constraint wasn't doing a darn thing. It's a serious matter, if the Q folk want to make money, since the market ain't gonna pay for beta. Better figure out what's goin' on!

@Vladimir, thanks for concurring.

The quest for a beta-neutral market-neutral portfolio has led to the following equation based on the CAPM:

E[R(p)] = r_f + 0.50∙β∙(E[R(m)] – r_f) – 0.50∙β∙E[R(m)] = r_f – 0.50∙β∙r_f = (1 – 0.50∙β)∙r_f

This translates to: the expected outcome for a beta-neutral market-neutral portfolio is to achieve something close to the risk-free rate, since the last term will approach r_f as β tends to zero: (1 – 0.50∙β)∙r_f → r_f as β → 0.

Quantopian interprets this "almost r_f" as alpha. There seems to be some confusion between what could be called a "source of profit" and what has been defined as alpha over the past 50 years. Regardless, to improve on the above formula would require just that: the old formulation of alpha, which is the return over and above the risk premium (E[R(m)] – r_f).

The equation for that is: E[R(p)] = r_f + 0.50∙β∙(E[R(m)] – r_f) – 0.50∙β∙E[R(m)] + α, which would result in: E[R(p)] = (1 – 0.50∙β)∙r_f + α.

The effort needed to achieve an alpha greater than zero might be considerable. As of yet, I have seen little of it being deployed in this type of trading strategy, at least, from those presented to date. But, that does not change the fact that to achieve a higher performance level, alpha will have to be there, and be positive.

To gain some traction, one could go for a positive net beta exposure through the stock selection process itself. But that is not alpha per se; it is simply accepting some risk exposure with its associated rewards. Nonetheless, accepting a positive beta within limits can improve the performance level. For instance, we could have: E[R(p)] = (1 – 0.50∙β)∙r_f + 0.30∙E[R(m)] + α, for a 0.30 net beta exposure.

You would still need methods to extract some added alpha. Note that with this last formula, you will be able to exceed the expected market return: E[R(p)] > E[R(m)], only if: (1 – 0.50∙β)∙r_f + 0.30∙E[R(m)] + α > E[R(m)].

The extracted alpha would need to satisfy: α > E[R(m)] - 0.30∙E[R(m)] - (1 – 0.50∙β)∙r_f . And that is not a trivial task.
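To put a number on how non-trivial, here is the inequality evaluated with illustrative inputs (r_f = 2%, E[R(m)] = 8%, β = 0.3, all assumptions for the sake of arithmetic):

```python
# How much alpha does a 0.30 net-beta book need just to match the market?
# Inputs are illustrative assumptions, not data from the thread.
r_f, erm, beta = 0.02, 0.08, 0.3
net_exposure = 0.30

# required alpha: E[R(m)] - 0.30*E[R(m)] - (1 - 0.50*beta)*r_f
required_alpha = erm - net_exposure * erm - (1 - 0.50 * beta) * r_f
print(f"alpha needed to match E[R(m)]: {required_alpha:.2%}")
```

With these inputs the strategy must manufacture about 3.9% of annual alpha before it even ties the index, which is the point being made.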

@Guy, I agree with your assessments and formulations. If you look around the industry literature of hedge funds deploying long/short market-neutral strategies, you will find that they benchmark returns against 3-month US Treasury Bills (i.e., the risk-free rate), which is why I find it odd that Q uses SPY as the benchmark. Fact is, if they're targeting zero beta, then their expected return should equal the risk-free rate. It is only because there is leeway in the beta threshold (+-0.3) that you can get returns above the risk-free rate, and it is a very tall order to get returns above market returns, even with this leeway, without accepting increased volatility. As per Thomas Wiecki's statement, they want to tighten this leeway further; good luck with that!

Hi James, Guy, & anyone else with the idea that beta ~ 0 will automatically equate to zero return over the risk-free rate,

Perhaps you could provide a simple, intuitive explanation (without mathematical machinations) as to why beta ~ 0 is a really bad idea. I see this echoed in several places:

I gather, without digesting the details, that if one subscribes to something called the capital asset pricing model, then somehow a beta ~ 0 portfolio won't return much. Or is this just another version of the efficient market hypothesis, taken to an extreme? In other words, Quantopian is banking on inefficiencies with a beta ~ 0 constraint?

@Grant, there is no mathematical machination. But, there is an equal sign on the table.

The CAPM has been accepted since the '60s as a reasonable portfolio return expectation formula, and it has borne an equal sign ever since. In a risk-return space chart, it gives an upward-slanting straight line with an r_f intercept.

Quantopian used the same formula in its lecture:

So, I cannot say that I am bringing something new to the discussion.

However, I do look at what the equation implies and where it leads to. Ignoring the math of a problem does not solve the problem. Also, ignoring the shoulders we stand on can at times only lead to reinventing the wheel.

All the “mathematical machinations” presented were limited to additions, subtractions and multiplications. Therefore, there is no “voodoo” in the presented math.

Nonetheless, there are a lot of questions to be answered. Not on the evaluation of the equations, that is simple, but on their interpretation for what they do say.

An equal sign is a powerful statement. It is not a choice, or an opinion. It can only be dismissed with a not equal sign... And in this case, everyone is invited to try!

All my equations, based on what is already there, explore the mathematical model of a zero-beta market-neutral trading strategy. And, what is said is that: such a portfolio will have: E[R(p)] = (1 – 0.50∙β)∙r_f, as long-term expected return. Moreover, as β → 0, you will get E[R(p)] → r_f, the risk-free rate.

Therefore, it becomes reasonable to state that if one wants more than the risk-free rate of return out of this kind of portfolio, then they better bring some positive alpha (some added skills) to the mix whatever its origin because they won't get it as a free lunch from the zero-beta market-neutral strategy.

I gather that is the whole point, to null out the gross market effect and isolate positive returns that are not attributable to the market. Makes sense, so I don't know what all this jazz is about only being able to get the risk-free rate with beta ~ 0; but then I haven't studied the CAPM.

@Grant, you sometimes advocate that we should know the fundamental reasons as to how and why our trading strategies work. It is not only the mechanics of the trade that should be of interest, but also why you should win the game over the long haul. What reasons made your program do what it did, and why should it have been so? More importantly, why should it do so going forward?

You would be the first to say that there is almost no profit in the following except for some fraction of the risk-free rate.

E[R(p)] = r_f + 0.50∙(E[R(SPY)] – r_f) – 0.50∙E[R(SPY)] = (1 – 0.50)∙r_f, where you went long the market and then shorted the same market. R(SPY) is used as a proxy for the market return R(m). Such a strategy has an expected outcome of +0.50∙r_f.

So, all that jazz has some very simple reasoning behind it. Note that SPY has a beta of one in this case, but the construction gives a zero-beta market-neutral portfolio nonetheless.

@Grant, beta neutral and market neutral are two distinct strategic concepts that achieve basically the same or similar outcomes in terms of expected returns (at or close to the risk-free rate), as shown by @Guy through the CAPM equations. We are not saying it is a bad idea, only explaining what should be expected in terms of returns from these kinds of strategies and what they are used for in a multi-asset investment portfolio. For my part, since both are trying to achieve the same expected returns, I want to get rid of the redundancy of the beta constraint in the context of the Optimize API / Risk Model of the Q framework (long/short market-neutral strategy) for the following reasons:
1. Empirical studies conclude that the stock universe generally moves with market price movements as represented by SPY. Translated into the standard beta-to-SPY formula (one year of daily returns), this should give a positive and statistically significant correlation between the stock universe and SPY. Given such significant beta correlations, the market (dollar) neutral constraint, which guarantees fairly equal amounts of longs and shorts, will almost always also guarantee beta neutrality, because the betas of the longs cancel the betas of the shorts. But because of the words "generally moves with the market," my hypothesis could be wrong to some extent, which is why I provided guidance above as to how it can be proven or disproven. Q staff seem indifferent to this proposed test.
2. As your algo, the Q model algo above, and my personal algos have shown, the beta constraint does not operationally do its job of constraining beta to your set thresholds. I suspect it is either a bug in the code or inconsistencies among constraints that cause the logic to break down during optimization of the desired objective.
3. As your algo, the Q model algo above, and my personal algos have shown, commenting out the beta constraint has resulted in improved returns and improved drawdowns while not breaching the +-0.3 beta threshold. The effect might not be universal to all algos, because a bad algo will always be a bad algo, with or without this constraint. But the prospect alone of a good algo getting better returns and better drawdowns, while still staying within the beta thresholds, without this added beta constraint is a game changer for me.

Thanks Guy & James -

I'll noodle on this stuff, as time allows. The observation that the Optimize API beta constraint does nothing (and probably has downside) is valuable.

I really like the mathematical rigor applied to the arguments here. However, I don't think we should take a model to be something more than that. Like with any model, we know it's wrong (but definitely useful). More concretely, CAPM assumes the Efficient Market Hypothesis (EMH) to be true, of which there are several counter-examples (like momentum; but I do realize this is a much larger debate). We are looking for alpha which is by definition unrelated to beta (and not present in the CAPM formulation used here). So I think the underlying argument put forward here is that the EMH is true, which is a bit of a different discussion.

The other thread I see is that applying the risk model too strictly (to reduce beta or other exposures) is detrimental. I definitely agree there. Again, it's a model, and thus noisy, so it should be applied with care so as not to dilute any alpha.

@Thomas, so let's start with what you and I now agree on. Care to update me on what Q is doing to address these issues? What is frustrating for me is that, in my sincere efforts to help Q improve its framework by bringing forward what I think is remiss in it, I am quickly brushed aside. Engage with me, debate with me, disprove my hypothesis, because in the final analysis, it is through this process that we get further insights and discoveries. Let's throw egos aside, since neither you nor I have a monopoly on knowledge.

Sorry you feel that way, but engaging with you and your arguments is what I'm trying to do.

@Thomas, yeah, right, with short and deflective answers. I think it's best that we just totally ignore each other. You have done nothing about the issue I raised earlier regarding the proper calculation of the Sharpe and Sortino ratios (I thought I had convinced you of their merits), and I highly doubt you will do anything with the issues I raised in this thread. So I bid you adieu...

@Thomas, we use mathematical models to give us approximations of what is out there. And we give them an equal sign if, and only if, they do seem as reasonable explanations for what we are appraising for whatever purpose. There is a universe explained in E = mc^2, a simple formula and yet a world of study all by itself.

When we put an equal sign on the table, it had better hold, whether it is an approximation or not. It must absolutely make some sense; otherwise, I would be the first to put a not-equal sign on such a model.

The CAPM is a very basic equation. It only says: there is this part and then there is this other part. Here is an illustration in a portfolio context:

As was said in a previous post, from such a model, even if it is just an approximation, all the parts are accounted for. There is no space in the CAPM for any alpha; not in the sense Quantopian measures it, but in the traditional Jensen definition of alpha.

Should someone want to extract some alpha from the CAPM, then they will have to bring it themselves, and justify its inclusion, as in:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) + α

Including alpha, not as some error term or luck factor, but as an explicit added performance level, no matter how it was or will be achieved. Note that the CAPM equal sign will stand for an alpha of zero, as it has stood for over five decades.

Mr. Buffett, over his 50+ year career, has achieved close to a 20% CAGR, while the market over the same period provided, as an approximation, a little less than 10%.

Whether for Mr. Buffett or anyone else, the market over those 50 years only had a 10% CAGR expectancy: E[R(m)] = 10% CAGR. That is all it had to offer. So, for sure, Mr. Buffett added some skill to the game; read: some Jensen alpha. Are there economic reasons why he could achieve those results over such an extended time horizon that escaped the CAPM aficionados? Yes, plenty of them. He simply adapted his investment methodology to deliberately generate this alpha. One would have to say, in hindsight, that the generated alpha was a direct consequence of his actions. Some 30 years ago, he gave the recipe for his methodology. At the time, he also said that nobody would follow his lead anyway.

From here, guess what we should do? We should also find means to generate our own alpha and, more importantly, know why it will be generated going forward; not as a luck factor, but as a direct consequence of our methods of play.

Two guys, both proud of their cars, call for a race. [edit]

Like with any model, we know it's wrong (but definitely useful).

[edit] when someone is saying, here's something better, sell tickets and beer, on your mark, get set, ...

@Blue Seahawk, oh yes I'm proud of my car but it's not a race car, it's a luxury car. So I wouldn't want to go and race it but I would want others to appreciate it, LOL!

There seem to be two separate issues at play here:

  1. Should Quantopian be targeting beta ~ 0 for the long-short U.S. equity portion of their super-duper crowd-sourced "Q Fund" (formally known as the 1337 Street Fund)? As a side bar, we peon crowd folk didn't get a say in naming it, so I'm gonna call it the "Q Fund"--it rolls off the tongue much more easily.
  2. Does the Optimize API do diddly squat in controlling beta?

As far as #1 goes, it makes sense to me, assuming that it is computed in a relevant way. I think they are basically saying, y'all isolate the alpha, so we don't have to bother (e.g. by nulling it out with a hedging instrument). We'll award bonus points to the amateur quant who can do the best at stomping down correlation to the market, and pay for whatever is left. To me, it makes sense that they wouldn't have customers lining up to pay 2/20 for SPY. No rocket science or fancy math required.

The #2 issue is different. It suggests that there is a gap in the tools available to users to achieve beta ~ 0. So, I'd be interested in hearing more about #2. It smells like a practical issue, versus an academic one over CAPM/EMH which will just land at investing in an index fund with Vanguard or the institutional equivalent.

Can somebody from Q staff, other than you-know-who, please comment on a possible inconsistency and/or vulnerability in the code from the Q model algo above? This snippet of code pertaining to beta (recently changed to SimpleBeta) is exposed for users to change:

    # Alpha Combination
    # -----------------
    # Assign every asset a combined rank and center the values at 0.
    # For UNIVERSE_SIZE=500, the range of values should be roughly -250 to 250.
    combined_alpha = (fcf_zscore + yield_zscore + sentiment_zscore).rank().demean()
    beta = 0.66*RollingLinearRegressionOfReturns(
                    mask=combined_alpha.notnull() & Sector().notnull()
                    ).beta + 0.33*1.0

This beta calculation is used to constrain beta to minimum and maximum levels per the user's settings in the Optimize API:

    # Constrain beta-to-SPY to remain under the contest criteria.
    beta_neutral = opt.FactorExposure(
        min_exposures={'beta': -0.05},
        max_exposures={'beta': 0.05},

The contest threshold for beta is +-0.30. If the user has the option to change the window length in the beta calculation, (1) doesn't this create inconsistencies with the +-0.30 beta threshold, since the beta distribution differs across window lengths, and (2) doesn't this create vulnerabilities, in the sense that it could be used to "game" the algo if the beta constraint in the Optimize API actually worked? I think the beta calculation should be hardwired to standardized one-year daily returns and hidden from user-changeable parameters, perhaps in the risk loadings pipeline. Just asking... This issue is not related to my hypothesis that beta is unnecessary because of redundancy with dollar neutral.
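To see why the window length matters, here is a small synthetic illustration (random data, not Quantopian's pipeline): the same return series produces materially different beta estimates under a 60-day versus a one-year regression window, so estimates from user-chosen windows are not comparable against a single +-0.30 threshold.

```python
import numpy as np

# Synthetic stock whose true beta to the market shifts from 0.5 to 1.5
# halfway through the year; all numbers here are illustrative.
rng = np.random.default_rng(1)
n = 252
mkt = rng.normal(0.0, 0.01, n)                       # daily market returns
true_beta = np.where(np.arange(n) < n // 2, 0.5, 1.5)
stock = true_beta * mkt + rng.normal(0.0, 0.005, n)  # stock returns + noise

def ols_beta(y, x):
    # One-variable OLS slope: cov(x, y) / var(x).
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

beta_1y = ols_beta(stock, mkt)               # full-year window
beta_60d = ols_beta(stock[-60:], mkt[-60:])  # trailing 60-day window

print(f"1-year beta: {beta_1y:.2f}, 60-day beta: {beta_60d:.2f}")
```

The one-year estimate lands near the time-averaged beta of about 1.0, while the 60-day estimate tracks the recent regime near 1.5; an algo free to pick its own window could report whichever value suits it.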

@Karl, "Relativity," a word made famous by Prof. Albert Einstein, is the key to understanding the Information Ratio in the context of market-neutral strategies. What does the hedge fund industry use as the benchmark for market-neutral strategies? Three-month US Treasury Bills (aka the risk-free rate), not SPY. Below I quote a Morningstar whitepaper:

Because market-neutral funds hedge out broad
market risk, a cashlike benchmark (such as Treasury
bills) is more appropriate than a stock or bond
market index.

A brief note, in case folks are dinking around with trying to null out beta. I've found that a bias is required by the Optimize API constraints to see a dramatic effect:


I think what's going on is that for a volatile universe, the trailing beta factor is not predictive for stocks that are tending to run up with the bull market. So, a biased Optimize API constraint on beta is required to compensate for the lack of predictability.

I plan to do a separate post on this topic, since this thread is too convolved with CAPM/EMH stuff (which I'm sure is important and relevant to the Quantopian enterprise, but I'm just interested in understanding how to use the tools).

@Grant, I refer you to my new thread for answers: possible-implementation-bug-with-beta

@Grant, I've already run several similar tests in this thread: possible-implementation-bug-with-beta. Observations led to several findings confirming the hypothesis of this thread that the beta constraint is not only totally unnecessary, but also that (1) the beta constraint in Optimize does not do a good job of constraining beta to the configured levels, (2) there is an implementation bug, in that exposing the beta calculation as user-configurable leads to inconsistencies and vulnerabilities, and (3) doing away with the beta constraint entirely has positive effects in terms of increased returns and improved drawdowns.

@Grant, you said: “... wouldn't have customers lining up to pay 2/20 for SPY”. Yes indeed, even if some hedge funds do about the equivalent of a SPY return, some even less.

I would add, Q won't see that many paying 2/20 for the risk-free rate either. But, their game plan is different. And, if I understand it correctly, it is very serious business.

Why are the foundations as to the structure of capital markets important even in a gaming environment? It is very simple. Whatever the path that might link all that stuff from point a to point z, if it helps us better understand the game we play, then we can find ways to extract from it what we want.

It is like finding that the coin you will be using in what is supposed to be a 50/50 scenario is in fact biased. Knowing that the coin is biased, you have to find the best "betting system" that not only respects the bias you found but also ensures that you will not fail. Can someone lose playing a 90/10 biased coin? Yes, by going all-in all the time. The ride will be great until the first reverse flip.

The point being: it is our job to find the bias. Some call it predicting, but whatever. If you find that, on average, some 60% of your bet selections are favorable, by whatever means you used to find them, then your game plan should be a lot easier to design and implement. You should simply favor this bias, whatever its direction.

It is like when @Thomas says: "...We are looking for alpha which is by definition unrelated to beta (and not present in the CAPM formulation used here)."

To which I say: yes, and no.

True, the CAPM has no alpha: E[R(p)] = r_f + β∙(E[R(m)] – r_f). The EMH states that there is none to be found, that there is no free lunch; whatever alpha appears is arbitraged away. However, markets are not that efficient.

In Q's notebook, they present the SML (security market line) formula as: E[Ri]=RF+ß(E[RM]-RF).

The difference is considering an asset's return R(i) within a portfolio instead of the portfolio itself. And fair enough, in this case the beta will be all over the place; the more volatile a stock, the higher the beta. However, when they picture the SML, they give it an intercept of zero, when in fact it should be r_f, the risk-free rate. Renaming the intercept as alpha is a misnomer; we already have a definition for the SML.

The thing is, when you look at a portfolio, you are looking at an ensemble of stocks. And the more stocks you put in a portfolio, the more something funny happens: the portfolio beta will tend to 1! The CAPM equation will reduce to: E[R(p)] → E[R(m)]. At the limit, buying all the stocks will make it: E[R(p)] = E[R(m)].

But, we all build portfolios with only a few hundred stocks in them. Even there, the portfolio beta is approaching one: β(p) → 1. Therefore, what one should expect is that his/her portfolio will vary in step with the market as a whole. Your portfolio becomes itself a proxy for a market index since being so close to it. In fact, if you keep increasing the number of stocks in your portfolio you will be approaching the expected market return, as they would say in math, almost surely.
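That tendency is easy to simulate. Under a simple one-factor assumption, the beta of an equal-weight portfolio is just the average of its members' betas, so its cross-sectional spread shrinks like 1/sqrt(N). The beta distribution below (mean 1.0, standard deviation 0.5) is an assumption for illustration only:

```python
import numpy as np

# Simulated universe of single-stock betas (illustrative assumption).
rng = np.random.default_rng(42)
universe = rng.normal(1.0, 0.5, size=5000)

results = {}
for n in (10, 100, 1000):
    # Equal-weight portfolio beta = average of the members' betas.
    samples = [rng.choice(universe, size=n, replace=False).mean()
               for _ in range(500)]
    results[n] = (float(np.mean(samples)), float(np.std(samples)))
    print(f"N={n:>4}: portfolio beta ~ {results[n][0]:.2f} +/- {results[n][1]:.3f}")
```

The mean stays near 1 at every size while the dispersion collapses, which is the sense in which a few-hundred-stock portfolio becomes its own market proxy.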

We talk about CAPM, EMH, efficient frontiers, efficient and optimal portfolios, and then want to ignore their consequences. They are not perfect models, but they do reasonably explain what we see as a generalized framework.

If you want to neutralize the beta, then one way to do it is:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) – β∙E[R(m)]

But that too has a conclusion. And again, there is no alpha in it.

If you want some alpha, you will have to grow it, from the outside by bringing more of your skills to the game, (read a better betting system), and make it yourself:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) – β∙E[R(m)] + α

Perhaps it's better to quote from a very informative Morningstar whitepaper about market-neutral strategies; the full content is found here: Market-Neutral Category Handbook. Here are some snippets from the whitepaper, with actual results of market-neutral strategies, so that all will know what to expect in real terms.

Historical Risk and Risk-Adjusted Returns

Separately managed accounts following equity market-neutral strategies have shown the best risk adjusted performance over the past five years, using several measures. The Sharpe ratio measures excess returns (over Treasuries) divided by standard deviation. The Sortino ratio measures downside deviation in the denominator, while Morningstar risk-adjusted return takes into account skew-ness and kurtosis (tail risk). Using these measures, market-neutral mutual funds have shown the worst risk-adjusted returns over the past five years, although they have fared better than the S&P 500 when considering tail risk. Bonds have outperformed all market-neutral strategies on a risk-adjusted basis. Investors should consider, though, that bonds’ outperformance may not continue, as interest rates may rise. The most attractive feature of a market-neutral fund is its low correlation, and therefore low beta, to both stocks and bonds. This means that returns generally do not move with the markets. If an investor adds a market-neutral fund with positive risk-adjusted returns to his portfolio, the fund will improve the portfolio’s overall risk-adjusted returns. Hedge funds show relatively higher correlations and betas to stocks and bonds, as the heavy use of leverage in merger arbitrage and convertible arbitrage funds resulted in large losses in 2008.

And here are real term results:

Figure 4: Risk-Adjusted Returns by Market-Neutral Vehicle (through Sept. 2011)

                              5-Yr Sharpe Ratio    5-Yr Sortino Ratio    5-Yr Morningstar Risk-Adjusted Return

Hedge Funds                         0.38                  0.47                        1.73
Mutual Funds                       -1.01                 -1.15                       -2.26
Separate Accounts                   1.28                  2.16                        3.81
S&P 500 TR                         -0.06                 -0.07                       -6.09
Barcap US Agg Bond TR               1.31                  2.60                        4.74

Figure 5 Five-Year Correlation and Beta of Market-neutral Strategies to Stocks and Bonds*

                                              S&P 500 Correlation    S&P 500 Beta    Barcap US AGG Correlation     Barcap US Agg Beta  
Hedge Funds                                          0.64                   0.21              0.21                           0.33  
Separate Accounts                                    0.03                   0.00              0.02                           0.01  
Mutual Funds                                         0.10                   0.02             -0.22                          -0.18  

Here's another snippet from the same paper; it gives an idea of what managers do to implement this strategy:

Equity Market-Neutral

The goal of an equity market-neutral fund is to profit solely from a manager’s ability to select stocks. Managers may choose stocks through a discretionary or quantitative process, both of which are focused on a stock’s fundamental characteristics. An equity market-neutral fund can be dollar-neutral, in which there is an equal dollar amount of stocks long and short, or beta-neutral, in which the aggregate market beta of the long positions is equal to that of the short positions. Some funds go beyond beta-neutral and attempt to be sector-neutral (where the sector exposure of the longs and shorts offset each other) or neutral in other respects (exposure to value or growth, for example). JP Morgan Research Market-Neutral JPMNX is an example of a fund that is beta neutral in several ways, while American Century Equity Market-Neutral Fund ALHIX aims to be dollar-neutral. Some market-neutral funds (361 Absolute Alpha AAFAX, for example) do not attempt to profit from short positions and instead use short positions in exchange traded-funds or futures contracts to hedge out broad market risk. Because market-neutral funds require frequent rebalancing, turnover is often very high. Often, fund companies use a quantitative program to construct, rebalance, and track the risk characteristics of the portfolio.

@James, there is no bug in the Optimize API. It does what it is asked to do.

The equation to neutralize the beta is:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) – β∙E[R(m)]

This equation can be realized by shorting an index tracker like SPY, giving:

E[R(p)] = r_f + β∙(E[R(m)] – r_f) – β∙E[R(SPY)]

The beta neutrality is respected. It is often used in portfolio hedging.
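In code, the hedge amounts to estimating β by regression and shorting the tracker at that weight. The sketch below is a toy illustration on synthetic return series; the 0.8 beta and all other numbers are made up for the example:

```python
import numpy as np

def estimate_beta(port_rets, mkt_rets):
    """OLS slope of portfolio returns on market returns."""
    x = mkt_rets - mkt_rets.mean()
    y = port_rets - port_rets.mean()
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical daily return series; the portfolio carries a true beta of 0.8.
t = np.arange(260)
mkt = 0.01 * np.sin(t / 7.0)      # stand-in for realized market returns R(m)
port = 0.8 * mkt + 0.0002         # beta of 0.8 plus a small constant alpha

beta = estimate_beta(port, mkt)

# Short an index tracker (SPY in the text) with weight -beta of the book:
hedged = port - beta * mkt
residual_beta = estimate_beta(hedged, mkt)
print(round(beta, 2), residual_beta)  # beta recovered; residual beta ~ 0
```

With the β-sized short in place, the regression slope of the hedged series against the market collapses to zero, which is exactly the beta neutrality the equation expresses.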

Whereas, the equation for market neutrality is:

E[R(p)] = r_f + 0.50∙β∙(E[R(m)] – r_f) – 0.50∙E[R(m)]

It is the same hedging proposition. Here, half the portfolio is long the market and half is short. But this time there is an added concentration limit which will result in using about the same number of stocks for either longs or shorts.

If you go for market neutral, you have also gone beta neutral. The converse is not necessarily true, since the hedging could have been done using only one short, thereby putting half of the growing portfolio in one stock, which at some point might not be sustainable.
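A toy numerical check of that asymmetry (all betas and weights below are made up): a dollar-neutral book with balanced leg betas comes out beta neutral as a by-product, while hedging the same beta with a single short piles half the book into one name.

```python
import numpy as np

# Hypothetical per-stock betas for a six-name book.
betas = np.array([1.1, 0.9, 1.0, 1.2, 0.8, 1.0])

# Dollar-neutral book: equal dollars long the first three, short the last three.
w_neutral = np.array([1, 1, 1, -1, -1, -1]) / 6.0
book_beta = float(np.dot(w_neutral, betas))   # longs' beta offsets shorts' beta

# Same net beta achieved with a single short: half the book in one name.
w_single = np.array([1/6, 1/6, 1/6, 0.0, 0.0, -0.5])
single_beta = float(np.dot(w_single, betas))
max_weight = float(np.abs(w_single).max())

print(book_beta, single_beta, max_weight)  # both ~0 beta, but 50% in one name
```

Both books are beta neutral on paper; only the first would survive a concentration constraint.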

Other propositions could have you gaining some net market exposure, such as:

E[R(p)] = r_f + 0.60∙β∙(E[R(m)] – r_f) – 0.40∙E[R(m)]

which evidently is not market neutral anymore.

You could do a 130/30 hedged portfolio, which would use some leverage:

E[R(p)] = r_f + 1.30∙β∙(E[R(m)] – r_f) – 0.30∙E[R(m)]

which would give you a portfolio beta of one. However, that is not admissible within the contest's rules.
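The net-beta arithmetic behind these exposure mixes reduces to one line. A minimal sketch, assuming (as the equations do) that each leg tracks the market with a beta of one:

```python
# Net market beta of a long/short mix, assuming unit-beta legs as in the text.
def net_beta(long_exposure, short_exposure, leg_beta=1.0):
    return long_exposure * leg_beta - short_exposure * leg_beta

beta_130_30 = net_beta(1.30, 0.30)   # 130/30 book
beta_50_50 = net_beta(0.50, 0.50)    # market-neutral book

print(beta_130_30, beta_50_50)       # ~1.0 and 0.0
print(abs(beta_130_30) <= 0.30)      # False: outside the contest beta bounds
print(abs(beta_50_50) <= 0.30)       # True
```

The 130/30 book lands at a beta of one, a factor of three outside the contest's +-0.30 threshold, while the 50/50 book sits at zero.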

Quantopian has given a set of constraints (rules) to be followed to be a surviving participant in its contests.

What we have to do, should we wish to participate, is follow the requirements and constraints. The rules have been fixed, so everyone has the same.

However, there is no restriction on how you should design your trading strategy to operate within those barriers. If you want to add some skewness to the mix and it is within boundaries, go for it. That is what is requested here. Make the best you can within all the restrictions. Win the contest. But, better yet, win the allocation.

What I do observe, however, is that some of the rules might be like shooting oneself in the foot, as if not fully baked. Over the long term, that will cause a portfolio to deteriorate, meaning it will not be able to sustain its CAGR going forward. But that might be something we can get around with better strategy designs.

What follows is my version of the script at the top of this thread.

I only changed numbers here and there and removed the stocktwits factor. As said before, I give it no value and even consider it detrimental to one's portfolio. No other code logic was changed.

As illustrated, the chart has a nice smooth equity line, and outperforms its benchmark.

The strategy made some 29,395 trades, with an average net profit per trade of $618.99.

However, the strategy's design makes its CAGR unsustainable. Even after 10 years of being stable and reliable, we should see it break down going forward.


@Guy, with all due respect, all I needed was to read your first sentence and look at the beta results of your backtest above to tell you that there is a bug. Your backtest shows a beta of -0.10 that the algo tried to constrain to within +-0.05; it failed in this regard. One symptom of a bug is when code written for a particular purpose does not attain its desired objective. There is another bug, which I call an implementation bug: the beta calculation is exposed to user change. Any user can change the regression length from 260 to any number they want, which makes the contest beta thresholds (+-0.30) relative to the window-length setting, which I don't think is the intention. The problem is actually multi-dimensional, as I discussed more thoroughly in my latest post here.
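The window-length sensitivity is easy to demonstrate on synthetic data (only the 260-day default comes from the platform; every other number below is made up): when a stock's beta drifts over time, the estimate a constraint works against depends directly on the chosen regression length.

```python
import numpy as np

def ols_beta(stock, market):
    """OLS slope of stock returns on market returns."""
    x = market - market.mean()
    y = stock - stock.mean()
    return float(np.dot(x, y) / np.dot(x, x))

# Synthetic daily returns: true beta 0.5 for 200 days, then 1.5 for 60 days.
t = np.arange(260)
market = 0.01 * np.sin(t / 5.0)
stock = np.where(t < 200, 0.5, 1.5) * market

beta_260 = ols_beta(stock, market)             # the 260-day default window
beta_60 = ols_beta(stock[-60:], market[-60:])  # a user-shortened window
print(round(beta_260, 2), round(beta_60, 2))   # long window blends both regimes
```

The same stock, on the same day, reads as two very different betas depending on the window, so a fixed +-0.30 threshold effectively measures different things for different user settings.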

@James, I voluntarily forced a negative beta skewness. I found it desirable in this scenario. And the API did what it was told.

You have a negative beta portfolio that is to be mixed with other strategies. Which is best: to have a zero-beta strategy or a negative-beta strategy?

@Guy, you should have told me that in the first place. If that worked for you and you conclude that it works, then I'm happy for you. But how do you explain that it doesn't work for others? How do you explain why the beta calculation is exposed to user configuration? How do we know the effects of the interaction between the user-exposed beta calculation and the hidden beta calculation in the Optimize API? Aren't these all symptoms of a bug?

Addendum: Guy, what setting did you use to force the negative beta skewness? Your hypothesis will only be valid if you set it within the bounds of the beta thresholds, in this case +-0.30, to be consistent with its desired objective. If you go out of bounds, then you have entirely changed the purpose of the constraint, and your results, desirable or not, do not hold because they no longer reflect the original intent.

@James, you provide some objectives to the API, in effect ordering this black box to get you closer to the given goals, taking an incremental step in one direction or another. All you want is for it to comply, to do what you want it to do.

Then, you add some constraints: do the above, but try to do it within these limits, whatever they may be. Again, you want the API to comply and ignore what is out of limits. Each constraint adds another layer that supersedes its predecessor, rendering the latter partly ineffectual if not redundant. And if a new layer is already outside a preexisting one, it becomes superfluous.

You are thinking code, I am thinking strategy outcome.

I am not that much interested in the code. What I want is to influence the payoff matrix as a bloc from start to finish. So, instead of letting the strategy play out some code, I will give functions to the payoff matrix itself and try to direct it to where I want. A shove here, a shove there, all with the purpose of keeping the swarm of variance within tradable limits.

The above strategy plays a lot on market noise. It did some 29,395 trades, resulting in over 300,000 transactions due to the 2.5%-of-volume rule. That should be considered excessive, and a lot of it unnecessary. But it will make money, and a machine is doing the work.

This strategy might have some Quantopian-alpha. But, in reality, it has very little Jensen-alpha. In fact, practically none. Which is why I am losing interest in it.

Nonetheless, a portfolio holding a half-and-half mix of a -0.10 beta strategy on one side and a +0.10 beta strategy on the other should produce an overall zero-beta scenario. So, this strategy could provide the first half of that pair.
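The mixing arithmetic is just a capital-weighted average of component betas; the +0.10 counterpart here is hypothetical:

```python
# Half-and-half mix of a -0.10 beta strategy and a hypothetical +0.10 one.
weights = [0.5, 0.5]
component_betas = [-0.10, +0.10]
mix_beta = sum(w * b for w, b in zip(weights, component_betas))
print(mix_beta)  # 0.0
```

Any pair of equal-and-opposite betas mixed at equal capital nets out the same way.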

@Guy, precisely, I was referring to a bug in code, not the insect type :) But seriously, I hear what you're saying in terms of strategy outcome.