Survivorship bias in classic Markowitz mean-variance optimization

Hi everyone,

I'm new here, and I wanted to share the results of some quick experiments I've been doing with the classic Markowitz optimization technique. I based my code on these notebooks:

https://blog.quantopian.com/markowitz-portfolio-optimization-2/
https://www.quantopian.com/posts/the-efficient-frontier-markowitz-portfolio-optimization-in-python-using-cvxopt

I made a small modification to make sure the algorithm maximizes the Sharpe ratio (the original code ended up maximizing return instead).

It turns out the classic Markowitz approach is incredibly prone to survivorship bias. I ran the simulation from mid-2006 to 2016. When I use the current top 25 stocks of the S&P 500 (discarding those that did not exist back then, like FB), the algorithm performs, let's say, acceptably: it doesn't beat the benchmark or anything, but it follows it closely, even though I rebalance daily with the default transaction costs.

(See attached backtest)

But as soon as I include the next 20 stocks of the S&P 500, the algorithm plunges. The performance is so terrible that at first I thought it was a code bug, so I double-checked everything, but as far as I can tell it all checks out.

Still, I'd like a second opinion on this. If it is indeed a case of survivorship bias, it's pretty amazing how a small change in the stock universe can dramatically change the results of the Markowitz technique. It's not like I'm comparing the top 10 vs. the bottom 10; I'm just adding a few more good stocks. =S

"""
This algorithm rebalances a portfolio according to the classical mean-variance Markowitz model.
"""
import numpy as np
import pandas as pd
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume
import cvxopt as opt
from cvxopt import blas, solvers

solvers.options['show_progress'] = False


def optimal_portfolio(returns):
    """
    This is the classic quadratic convex portfolio optimization, as suggested by Thomas Wiecki, at:
    https://blog.quantopian.com/markowitz-portfolio-optimization-2/
    https://www.quantopian.com/posts/the-efficient-frontier-markowitz-portfolio-optimization-in-python-using-cvxopt
    """
    n = len(returns)
    returns = np.asmatrix(returns)
    
    N = 100
    mus = [10**(5.0*t/N - 1.0) for t in range(N)]
    
    #Convert to cvxopt matrices
    S = opt.matrix(np.cov(returns))
    pbar = opt.matrix(np.mean(returns, axis=1))
    
    #Create constraint matrices
    G = -opt.matrix(np.eye(n))
    h = opt.matrix(0.0, (n, 1))
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)
    
    # Calculate efficient frontier weights using quadratic programming
    portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x']
                  for mu in mus]
    
    # Calculate risks and returns for frontier
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
    
    ###############################################################################################
    # The original code below is somewhat wrong: instead of finding the portfolio with the
    # highest return/risk ratio (the highest Sharpe ratio), it finds the portfolio with the
    # highest return.
    ## Calculate the 2nd degree polynomial of the frontier curve
    #m1 = np.polyfit(returns, risks, 2)
    #x1 = np.sqrt(m1[2] / m1[0])

    ## Calculate the optimal portfolio
    #wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
    ###############################################################################################

    # Instead, brute-force search the frontier for the portfolio with the highest
    # Sharpe ratio (return/risk; the risk-free rate is omitted here).
    hsharpe = 0
    hmu = 0
    for i in range(len(mus)):
        sharpe = returns[i] / risks[i]
        if sharpe > hsharpe:
            hsharpe = sharpe
            hmu = i

    # Re-solve the QP at the best risk-aversion parameter and return the weights
    wt = solvers.qp(mus[hmu]*S, -pbar, G, h, A, b)['x']
    return np.asarray(wt), returns, risks
 
def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    
    context.assets = [sid(24), sid(5061), sid(8347), sid(3149), sid(4151), #AAPL MSFT XOM GE JNJ
                      sid(11100), sid(16841), sid(8151), sid(26578), sid(6653), #BRK-B AMZN WFC GOOGL T
                      sid(5938), sid(25006), sid(21839), sid(5923), sid(4283), #PG JPM VZ PFE KO
                      sid(23112), sid(3496), sid(2190), sid(5029), sid(3951),  #CVX HD DIS MRK INTC
                      sid(700), sid(5885)] # BAC PEP (FB and V discarded). Uncomment the lines below to include the next 20 stocks and observe the survivorship bias.
                      #sid(1335), sid(3212), sid(1637), sid(1900), sid(3766), #C GILD CMCSA CSCO IBM
                      #sid(368), sid(5692), sid(4954), sid(980), sid(7792),  #AMGN ORCL MO BMY UNH
                      #sid(4707), sid(4799), sid(4758), sid(1406), sid(4922), #MCD CVS MDT CELG MMM
                      #sid(698), sid(6928), sid(6683), sid(5328), sid(4487)] #BA SLB SBUX NKE LLY
                          
    context.tick = 0
    # Rebalance every day, 1 hour after market open.
    schedule_function(my_rebalance, date_rules.every_day(), time_rules.market_open(hours=1))
     
    # Record tracking variables at the end of each day.
    schedule_function(my_record_vars, date_rules.every_day(), time_rules.market_close())
     
    # Create our dynamic stock selector.
    attach_pipeline(make_pipeline(), 'my_pipeline')
         
def make_pipeline():
    """
    A function to create our dynamic stock selector (pipeline). Documentation on
    pipeline can be found here: https://www.quantopian.com/help#pipeline-title
    """
    
     
    # Create a dollar volume factor.
    dollar_volume = AverageDollarVolume(window_length=1)
 
    # Pick the top 1% of stocks ranked by dollar volume.
    high_dollar_volume = dollar_volume.percentile_between(99, 100)
     
    pipe = Pipeline(
        screen = high_dollar_volume,
        columns = {
            'dollar_volume': dollar_volume
        }
    )
    return pipe
 
def before_trading_start(context, data):
    """
    Called every day before market open.
    """
    context.output = pipeline_output('my_pipeline')
  
    # These are the securities that we are interested in trading each day.
    context.security_list = context.output.index
     
def my_assign_weights(context, data):
    """
    Assign weights to securities that we want to order.
    """
    pass
 
def my_rebalance(context,data):
    """
    Rebalance daily
    """
    
    # Only trade once 100 days of history have accumulated. (Not strictly necessary on Quantopian, but it is in Zipline.)
    context.tick += 1
    if context.tick < 100:
        return
    
    prices = data.history(context.assets, 'close', 100, '1d')
    returns = prices.pct_change().dropna()
    
    weights, _, _ = optimal_portfolio(returns.T)
    for stock, weight in zip(prices.columns, weights):
        order_target_percent(stock, weight)
 
def my_record_vars(context, data):
    """
    Plot variables at the end of each day.
    """
    pass
 
def handle_data(context,data):
    """
    Called every minute.
    """
    pass

Here are the results for the same algorithm, now including the current top forty-odd stocks of the S&P 500. Judge for yourselves.

"""
This algorithm rebalances a portfolio according to the classical mean-variance Markowitz model.
"""
import numpy as np
import pandas as pd
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume
import cvxopt as opt
from cvxopt import blas, solvers

solvers.options['show_progress']=False


def optimal_portfolio(returns):
    """
    This is the classic quadratic convex portfolio optimization, as suggested by Thomas Wiecki, at:
    https://blog.quantopian.com/markowitz-portfolio-optimization-2/
    https://www.quantopian.com/posts/the-efficient-frontier-markowitz-portfolio-optimization-in-python-using-cvxopt
    """
    n = len(returns)
    returns = np.asmatrix(returns)
    
    N = 100
    mus = [10**(5.0*t/N - 1.0) for t in range(N)]
    
    #Convert to cvxopt matrices
    S = opt.matrix(np.cov(returns))
    pbar = opt.matrix(np.mean(returns, axis=1))
    
    #Create constraint matrices
    G = -opt.matrix(np.eye(n))
    h = opt.matrix(0.0, (n, 1))
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)
    
    # Calculate efficient frontier weights using quadratic programming
    portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x']
                  for mu in mus]
    
    # Calculate risks and returns for frontier
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
    
    ###############################################################################################
    ###-------------This part is somewhat wrong, because instead of finding the portfolio with the
    ###-------------highest return/risk ratio (highest sharpe ratio), instead it just finds the
    ###-------------portfolio with the highest return.--------------------------------------------    
    ## Calculate the 2nd degree polynomial of the frontier curve
    #m1 = np.polyfit(returns, risk, 2)
    #x1 = np.sqrt(m1[2] / m1[0])
    
    ## Calculate the optimal portfolio
    #wt = solvers.qp(opt.matrix(x1 * S). -pbar, G, h, A, b)['x']
    ###############################################################################################
    
    # So instead of that, we are gonna just try brute force search for the portfolio with the
    #highest sharpe. 
    hsharpe = 0
    hmu = 0
    for mu in range(0, len(mus)):
        sharpe = returns[mu] / risks[mu]
        if sharpe > hsharpe:
            hsharpe = sharpe
            hmu = mu
    
    #and return
    wt = solvers.qp(mus[hmu]*S, -pbar, G, h, A, b)['x']
    return np.asarray(wt), returns, risks
 
def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    
    context.assets = [sid(24), sid(5061), sid(8347), sid(3149), sid(4151), #AAPL MSFT XOM GE JNJ
                      sid(11100), sid(16841), sid(8151), sid(26578), sid(6653), #BRK-B AMZN WFC GOOGL T
                      sid(5938), sid(25006), sid(21839), sid(5923), sid(4283), #PG JPM VZ PFE KO
                      sid(23112), sid(3496), sid(2190), sid(5029), sid(3951),  #CVX HD DIS MRK INTC
                      sid(700), sid(5885), #BAC PEP --- FB and V discarded ##Cut here in order to observe survivorship bias
                      sid(1335), sid(3212), sid(1637), sid(1900), sid(3766), #C GILD CMCSA CSCO IBM
                      sid(368), sid(5692), sid(4954), sid(980), sid(7792),  #AMGN ORCL MO BMY UNH
                      sid(4707), sid(4799), sid(4758), sid(1406), sid(4922), #MCD CVS MDT CELG MMM
                      sid(698), sid(6928), sid(6683), sid(5328), sid(4487)] #BA SLB SBUX NKE LLY
                          

Have you tried using this with the pipeline to dynamically select the top stocks, rather than choosing from a basket?

Markowitz is not a good path to go down.
If it works out in any of your testing, it is due to luck.
Since mean-variance optimization is based on historical returns and variances, it will not be a good predictor of FUTURE returns and variances, which are both unstable and highly volatile.
It is great as a teaching tool and a Nobel-winning breakthrough in financial theory, but in practice it will not outperform the market. It has been widely analyzed over the past decades, and unless you can create a superior forecast of FUTURE returns and variances to optimize against, it is not going to outperform.
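To see how unstable those historical estimates are, here is a minimal sketch (my own illustration, not code from this thread; it uses synthetic data and a closed-form minimum-variance solution instead of the cvxopt setup above). Two adjacent 100-day windows drawn from the very same distribution can still produce noticeably different "optimal" weights:

import numpy as np

def min_variance_weights(returns):
    # Closed-form minimum-variance weights: w = S^-1 1 / (1' S^-1 1)
    S = np.cov(returns, rowvar=False)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
n_assets = 5
# Simulate 200 days of correlated daily returns from one fixed "true" distribution
true_cov = 0.0001 * (np.full((n_assets, n_assets), 0.3) + 0.7 * np.eye(n_assets))
returns = rng.multivariate_normal(np.full(n_assets, 0.0005), true_cov, size=200)

# Estimate weights on two adjacent 100-day windows
w1 = min_variance_weights(returns[:100])
w2 = min_variance_weights(returns[100:])
print("window 1 weights:", np.round(w1, 3))
print("window 2 weights:", np.round(w2, 3))
print("max weight change:", np.round(np.abs(w1 - w2).max(), 3))

Both windows come from the same distribution, yet the weights shift from sampling noise alone; with real, non-stationary market data the shifts are larger still.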

@Cory,
That was a bold statement: "Markowitz is a great teaching tool." Is there any proof or evidence backing it up? Come on, man! I know a few of my professors from my old university who are all about the MVP (mean-variance portfolio) approach; they are beating the market, and they are the ones running the business school's department funds. I'm a big fan of MVP myself; I've spent a lot of time researching it, so your statement left me wondering: are you sure? If you are a CFA charterholder, you should not be making statements like these, which in my opinion won't hold up. You say MVP just uses historical returns, but is there any algo in this forum that does not use historical returns? What would be a good predictor of returns and variance, in your opinion? In fact, I don't think there is any prediction at all in an MVP-type problem. There is no maximum likelihood estimator (MLE), Rao-Blackwell, minimum-variance unbiased estimator, or any other estimator involved when I calculate the weights by hand for MVP. MVP is a first-order differential minimization problem, with or without constraints; there is no predicting here.
For MVP, there are many components to look at, not just mu, sigma, correlation, etc. If you go in depth, as in a master's in financial engineering, there is a ton of research you can do with mean-variance type problems.
@Lino, if you randomly select stocks with high correlation, MVP won't help you. You need to run a correlation analysis and select stocks with minimum correlation (a sketch of one approach follows below). If I were doing it, I would take five each of small caps, mid caps, and large caps, plus a risk-free instrument.
If you select all your stocks from the S&P 500, there is too much correlation in your portfolio. I wish I had time to play with your algo, but unfortunately actuarial exams are kicking my butt right now. So maybe later.
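As a concrete illustration of the correlation screen suggested above (my own sketch, not code from this thread): greedily pick each next stock to be the one least correlated, on average, with the stocks already selected.

import numpy as np
import pandas as pd

def select_low_correlation(returns, k):
    # returns: DataFrame of asset returns (rows = days, columns = tickers)
    corr = returns.corr().abs()
    # Seed with the asset that has the lowest average correlation overall
    selected = [corr.mean().idxmin()]
    while len(selected) < k:
        remaining = corr.columns.difference(selected)
        # Average correlation of each remaining asset to the current selection
        avg_corr = corr.loc[remaining, selected].mean(axis=1)
        selected.append(avg_corr.idxmin())
    return selected

# Toy usage with random data standing in for real return history
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(250, 8)), columns=list("ABCDEFGH"))
print(select_low_correlation(data, 4))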

I just cloned this algo from Thomas's post. There is no leverage checking or anything; it just plugs in stocks and ETFs.
I was looking at this algo yesterday. It beats the benchmark, but I'm not sure whether it uses leverage. If you are into portfolio-type problems, I recommend these books (they're cheap, about five bucks apiece): "Modern Portfolio Theory and Investment Analysis", and as an applied companion using Excel, "Running Money: Professional Portfolio Management". The first one is heavy on math (it's a grad-level book), but the second one is easier and a lot of fun.

import numpy as np

import cvxopt as opt
from cvxopt import blas, solvers
import pandas as pd


solvers.options['show_progress'] = False

def optimal_portfolio(returns):
    n = len(returns)
    returns = np.asmatrix(returns)

    # Convert to cvxopt matrices
    S = opt.matrix(np.cov(returns))
    pbar = opt.matrix(np.mean(returns, axis=1))

    # Determine trial minimum-return levels based on the observed returns
    N = 25
    mus_min = max(min(pbar), 0)
    mus_max = max(pbar)
    mus_step = (mus_max - mus_min) / (N - 1)
    mus = [mus_min + i * mus_step for i in range(N)]

    # Create constraint matrices: the first row of G enforces the minimum
    # expected return; the identity rows enforce long-only weights
    G = opt.matrix(np.concatenate((-np.transpose(pbar), -np.identity(n)), 0))
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)

    # Calculate efficient frontier weights using quadratic programming
    portfolios = []
    for r_min in mus:
        h = opt.matrix(np.concatenate((-np.ones((1, 1)) * r_min, np.zeros((n, 1))), 0))
        sol = solvers.qp(S, -pbar, G, h, A, b)['x']
        portfolios.append(sol)

    # Calculate risks and returns for the frontier
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S * x)) for x in portfolios]

    # Fit a 2nd-degree polynomial to the frontier curve, then solve once more
    # at the implied optimal minimum return
    m1 = np.polyfit(returns, risks, 2)
    x1 = np.sqrt(m1[2] / m1[0])
    h = opt.matrix(np.concatenate((-np.ones((1, 1)) * x1, np.zeros((n, 1))), 0))
    wt = solvers.qp(S, -pbar, G, h, A, b)['x']
    return np.asarray(wt)


def initialize(context):
    # Register history container to keep a window of the last 100 prices.
    context.secs = [sid(24), sid(698), sid(2765), sid(16841), sid(4151), # AAPL MSFT XOM GE JNJ Large cap
                    sid(21508), sid(21771),
                    # sid(8233), sid(40271), sid(26377), sid(27780), # small cap
                    sid(22887)] # TLT
    # Turn off the slippage model
    set_slippage(slippage.FixedSlippage(spread=0.0))
    # Set the commission model (Interactive Brokers commission)
    set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
    schedule_function(trade, date_rules.week_start(), time_rules.market_open(minutes=60))
    
# Scheduled to run weekly, one hour after the market opens.
def trade(context, data):

    # Get rolling window of past prices and compute returns
    prices = data.history(context.secs,'price',100, '1d')
    returns = 100*prices.pct_change().dropna()
    
    NAdj=0      # Number order_target_percent adjustments made
    NCloses=0   # Close outs today
    
    try:
        # Perform Markowitz-style portfolio optimization
        weights = optimal_portfolio(returns.T)
        # Rebalance portfolio accordingly
        for stock, weight in zip(prices.columns, weights):
            if weight > 0.005:    # Need this threshold to purchase
                print 'stock', stock.symbol, 'weight', weight
                order_target_percent(stock, weight)
                NAdj += 1
            else:
                if stock in context.portfolio.positions:
                    order_target(stock, 0)   # Close out our position
                    NCloses += 1
            
    except ValueError as e:
        # Sometimes this error is thrown
        # ValueError: Rank(A) < p or Rank([P; A; G]) < n
        pass
    
    
    NHold=len(context.portfolio.positions)
    record(NHold=NHold,NAdj=NAdj,NCloses=NCloses)

Markowitz's lasting achievement was that he showed mathematically what was well known anecdotally: that there are benefits to diversifying your portfolio, and he showed the importance of correlations to diversification. His work points more toward investing in a diversified market portfolio (a low-cost index) than toward trying to beat the market.

Two points that make a quick case against using historical data to build an optimized portfolio:
1. When you optimize, you are finding the best portfolio to have owned at the START of the optimization window, not the portfolio to own at the end. If you use historical data from 01/01/2016 to 09/30/2016 as your data set, you are finding the portfolio you should have owned on 01/01/2016, not the portfolio you should own on 09/30/2016. Future performance is not likely to resemble historical performance, due to the instability of the variables (the sketch after this list illustrates the resulting out-of-sample decay).
2. Markowitz has been around for a long, long time. If it were the magical answer to beating the market, we wouldn't still be looking for a way to beat the market.
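A minimal sketch of that decay (my own illustration on synthetic data, using an unconstrained tangency solution w proportional to S^-1 mu rather than the thread's cvxopt code). Every asset here has the same true mean, so any in-sample "edge" the optimizer finds is pure noise:

import numpy as np

def tangency_weights(returns):
    # Unconstrained max-Sharpe weights: w proportional to S^-1 mu
    mu = returns.mean(axis=0)
    S = np.cov(returns, rowvar=False)
    w = np.linalg.solve(S, mu)
    return w / np.abs(w).sum()

def sharpe(returns, w):
    p = returns @ w
    return p.mean() / p.std()

rng = np.random.default_rng(2)
n_assets = 10
# All assets share the same true mean and variance
returns = rng.normal(0.0004, 0.01, size=(500, n_assets))

w_opt = tangency_weights(returns[:250])    # optimize on the first half
w_eq = np.full(n_assets, 1.0 / n_assets)   # equal-weight baseline

print("in-sample Sharpe (optimized):", round(sharpe(returns[:250], w_opt), 3))
print("out-of-sample Sharpe (optimized):", round(sharpe(returns[250:], w_opt), 3))
print("out-of-sample Sharpe (equal weight):", round(sharpe(returns[250:], w_eq), 3))

The optimized portfolio looks best on the window it was fit to and loses that advantage on the next window, which is exactly the point above: you found the portfolio you should have owned at the start of the window.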

Valid ways to try to implement Markowitz:
1. Use it at a macro level across asset classes, allocating among equities, bonds, commodities, real estate, alternative investments, etc. The variables are more stable by asset class than at the individual-asset level, but it can only give you a general idea of the best way to allocate.
2. Use it on forecast data. If you can develop a way to model future returns and variances (and that is a very big if), then it can make sense to optimize against the forecast data; even the macro-level optimization should be done on your forecast of returns. You can try to use analyst estimates of forward earnings/returns, but analysts are not great forecasters. If you want to use optimization, focus your learning on making a good forecast; the optimization part is easy (a sketch after this list shows how a forecast plugs into the same solver).
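To make point 2 concrete: in the optimal_portfolio code earlier in this thread, the historical mean vector pbar is the only place expected returns enter the QP, so swapping in a forecast is essentially a one-line change. A hedged sketch follows; the forecast_returns argument is a placeholder for whatever forecasting model you build (that model is the hard part), and everything else mirrors the cvxopt setup above:

import numpy as np
import cvxopt as opt
from cvxopt import solvers

solvers.options['show_progress'] = False

def optimize_on_forecast(hist_returns, forecast_returns, risk_aversion=1.0):
    # hist_returns: (n_assets, n_days) array, used only to estimate risk
    # forecast_returns: (n_assets,) vector from your own model (a placeholder here)
    n = hist_returns.shape[0]
    S = opt.matrix(np.cov(hist_returns))  # covariance still estimated from history
    pbar = opt.matrix(np.asarray(forecast_returns, dtype=float).reshape(-1, 1))

    G = -opt.matrix(np.eye(n))         # long-only: -w <= 0
    h = opt.matrix(0.0, (n, 1))
    A = opt.matrix(1.0, (1, n))        # fully invested: sum(w) == 1
    b = opt.matrix(1.0)

    w = solvers.qp(risk_aversion * S, -pbar, G, h, A, b)['x']
    return np.asarray(w).flatten()

# Toy usage: random history plus a made-up forecast favoring the first asset
rng = np.random.default_rng(3)
hist = rng.normal(0.0005, 0.01, size=(4, 100))
forecast = np.array([0.002, 0.0005, 0.0005, 0.0005])
print(np.round(optimize_on_forecast(hist, forecast), 3))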

In short, feel free to run optimizations and vary the inputs, sample periods, etc. You may manage to mine data that gives you fantastic historical returns, but it is very unlikely to produce future outperformance. At a minimum, you must optimize against a forecast.

Best

An excellent and, I suspect, correct summation. Asset allocation can work, says Markowitz's research. I'm beginning to think that is about all most research can say. If you have been around long enough, you realise backtesting is mostly wishful thinking.
Asset allocation and diversification are about all you can do in the long term. Success in other areas is probably luck and survivorship bias.