Markowitz Portfolio Construction

This algorithm performs a standard mean-variance optimization over a group of large-cap US stocks. It constructs an efficient frontier of allocations and lets the user choose an allocation based on risk preference.

Ryan

# Backtest ID: 52ceedfeb50c4b074c713194

20 responses

http://en.wikipedia.org/wiki/Modern_portfolio_theory

Modern portfolio theory (MPT) is a theory of finance that attempts to maximize portfolio expected return for a given amount of portfolio risk, or equivalently minimize risk for a given level of expected return, by carefully choosing the proportions of various assets. MPT is a mathematical formulation of the concept of diversification in investing, with the aim of selecting a collection of investment assets that has collectively lower risk than any individual asset. This is possible, intuitively speaking, because different types of assets often change in value in opposite ways. For example, to the extent prices in the stock market move differently from prices in the bond market, a collection of both types of assets can in theory face lower overall risk than either individually. But diversification lowers risk even if assets' returns are not negatively correlated—indeed, even if they are positively correlated.

MPT assumes that investors are risk averse, meaning that given two portfolios that offer the same expected return, investors will prefer the less risky one. Thus, an investor will take on increased risk only if compensated by higher expected returns. Conversely, an investor who wants higher expected returns must accept more risk. The exact trade-off will be the same for all investors, but different investors will evaluate the trade-off differently based on individual risk aversion characteristics. The implication is that a rational investor will not invest in a portfolio if a second portfolio exists with a more favorable risk-expected return profile – i.e., if for that level of risk an alternative portfolio exists that has better expected returns.
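The diversification effect described above is easy to verify numerically. Here is a minimal standalone sketch (plain numpy; the 20% volatilities and 0.2 correlation are made-up illustration numbers):

```python
import numpy as np

sigma = 0.20                      # each asset's volatility (made-up)
corr = 0.2                        # correlation between the two assets
C = np.array([[sigma**2, corr * sigma**2],
              [corr * sigma**2, sigma**2]])

w = np.array([0.5, 0.5])          # 50/50 allocation
port_vol = np.sqrt(w @ C @ w)     # portfolio volatility = sqrt(w' C w)
print(port_vol)                   # ~0.155, below either asset's 0.20
```

As long as the correlation is below 1, the combined volatility comes in under that of either asset on its own, which is exactly the point of the passage above.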

See the attached backtest for a few updates. The algo is designed so that anyone can input any group of stocks they like into the allocation engine. Within the initialize method, the field risk_tolerance is added to the context object. One can choose a risk/return profile on a scale of 1 to 20 (1 being the lowest risk and 20 the highest). Higher-risk portfolios will have a higher expected return.

Ryan

# Backtest ID: 52cf126b00d200074532901b

Please see the attached improvement of the algorithm. It now uses history() instead of batch_transform() and rebalances every month. Comments welcome.

# Backtest ID: 5307cd97048e73074df66621

I was playing around with this and made a few changes and some notes. Interesting.

# Backtest ID: 5307fb496f28ca0d595f3197

It seems to work much better when given a good list of stocks/ETFs than when using set_universe.

From what I gather you are implementing an efficient frontier/MPT algo. I don't think that the optimization process is necessary; there are closed-form solutions to most MPT problems. Here is a reference. If nothing else, it would speed up the calculations.

Brian, thank you for your improvements to the algorithm. I noticed you generated 50 different points along the efficient frontier:

for r in linspace(min(R), max(R), num=50):  

while the documentation tells the user to select a risk profile using one of 20 points along the efficient frontier:

# 1) Enter the risk tolerance on a scale of 1 to 20.  
#    1 is the lowest risk tolerance (lowest risk portfolio)  
#    20 is the highest risk tolerance (highest risk portfolio)  

I modified the code to generate only twenty points along the efficient frontier.

David, yes, in most formulations, the portfolio allocation problem of MPT does have a closed-form solution. However, I would like to allow for maximum flexibility when Quantopian community members use the algorithm. The objective function could include a term for tracking error, for example, which may necessitate numerical methods for the problem.
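To illustrate David's point about closed forms: the global minimum-variance portfolio has the well-known solution w = C^-1 1 / (1' C^-1 1), which needs only a linear solve. A standalone sketch (numpy assumed; the covariance numbers are made up):

```python
import numpy as np

def min_variance_weights(C):
    # Global minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1),
    # the closed form for the unconstrained, fully-invested case.
    ones = np.ones(C.shape[0])
    x = np.linalg.solve(C, ones)
    return x / x.sum()

C = np.array([[0.04, 0.006],      # made-up 2-asset covariance matrix
              [0.006, 0.09]])
w = min_variance_weights(C)
print(w)  # more weight on the lower-variance asset; sums to 1
```

This closed form only covers the no-bounds, fully-invested case; once you add box constraints, margin, or a tracking-error term, as discussed above, you are back to a numerical solver.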

In the fitness function for the optimization, the penalty function tends to dominate the optimization. Estimating returns is often far more difficult than estimating volatilities and correlations, so it may be more effective to weight the covariance term more heavily relative to the estimated-return term.

def fitness(W, R, C, r):  
    # For given level of return r, find weights which minimizes portfolio variance.  
    mean_1, var = compute_mean_var(W, R, C)  
    # Penalty for not meeting stated portfolio return effectively serves as optimization constraint  
    # Here, r is the 'target' return  
    penalty = 50**abs(mean_1-r)  
    return var + penalty  

Instead of using exponentiation (**), we can use multiplication.

def fitness(W, R, C, r):  
    # For given level of return r, find weights which minimizes portfolio variance.  
    mean_1, var = compute_mean_var(W, R, C)  
    # Penalty for not meeting stated portfolio return effectively serves as optimization constraint  
    # Here, r is the 'target' return  
    penalty = 0.01*abs(mean_1-r)  # 0.01, not (1/100): integer division gives 0 in Python 2  
    return var + penalty  
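Worth double-checking the multiplicative version under Python 2, which the backtester runs: there, (1/100) is integer division and evaluates to 0, silencing the penalty entirely, whereas 0.01 behaves as intended. A small standalone comparison of the two penalty shapes (Python 3 syntax; the target return of 0.10 is a made-up number):

```python
def penalty_exp(mean_1, r):
    # Original form: floor of 1 even when the target return is met exactly,
    # since 50**0 == 1.
    return 50 ** abs(mean_1 - r)

def penalty_lin(mean_1, r):
    # Multiplicative form; 0.01 rather than (1/100), which is 0 in Python 2.
    return 0.01 * abs(mean_1 - r)

for m in (0.10, 0.12, 0.20):
    print(m, penalty_exp(m, 0.10), penalty_lin(m, 0.10))
```

Note also that a 0.01 coefficient makes the penalty tiny next to typical variances, so it barely binds; a later post in this thread switches to 100*abs(mean_1-r) for that reason.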
# Backtest ID: 53161455f1c48b076b5c4c21

That is an error. I was playing around. I've changed it to 10. That provides enough resolution in my opinion.

Here is the original Ryan Davis version on sector ETF products from State Street, plus TLT.

# Backtest ID: 55b7189a183baf0c57902a27

Brian Vetere's version on sector ETF products from State Street, plus TLT.

# Backtest ID: 55b7205f2949e80c778bb8e2

Just want to point out that the last algorithm trades daily instead of weekly like all the others.

Turns out there's a pretty fundamental problem with the code: the weights are the same for each point on the frontier; just do a log.info(frontier_weights). I'm trying to find the problem; it's somewhere in the scipy.optimize function...

Edit: looks like r remains at 0.015 (rf) throughout the minimization iterations. WIP

Edit 2: I'm kind of baffled, it seems scipy.optimize.minimize doesn't pick up the new iteration values of r here:

def solve_frontier(R, C, rf, context):  
    frontier_mean, frontier_var, frontier_weights = [], [], []  
    n = len(R)      # Number of assets in the portfolio  
    for r in linspace(max(min(R), rf), max(R), num=20): # Iterate through the range of returns on Y axis  
        W = ones([n])/n # Set initial guess for weights  
        b_ = [(0,1) for i in range(n)] # Set bounds on weights  
        c_ = ({'type':'eq', 'fun': lambda W: sum(W)-context.allowableMargin}) # Set constraints  
        optimized = scipy.optimize.minimize(fitness, W, (R, C, r), method='SLSQP', bounds=b_, constraints=c_) #PROBLEM: the r does not increase!  
        #if not optimized.success:  
        #    raise BaseException(optimized.message)  
        # Add point to the efficient frontier  
        frontier_mean.append(r) #OK  
        frontier_var.append(compute_var(optimized.x, C))   # Min-variance based on optimized weights  
        frontier_weights.append(optimized.x)  
    return array(frontier_mean), array(frontier_var), frontier_weights  
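For what it's worth, scipy.optimize.minimize does forward a changing value through the args tuple on every call, which suggests the flat frontier weights come from the fitness/penalty shape rather than from args handling. A minimal standalone reproduction (numpy/scipy; the one-variable objective is a made-up stand-in for fitness):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x, r):
    # Stand-in for fitness: minimized exactly at x = r,
    # so each solution reveals which r the solver actually saw.
    return (x[0] - r) ** 2

solutions = []
for r in np.linspace(0.0, 1.0, num=5):
    res = minimize(objective, np.zeros(1), args=(r,), method='SLSQP')
    solutions.append(res.x[0])

print(solutions)  # each solution tracks its own r, so args does vary per call
```

If the frontier weights still come out identical, the likelier culprit is that var + penalty is nearly flat in r over the feasible region, not that r is stuck.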

Anyone experienced with scipy.optimize.minimize care to weigh in?

Joel, see:
MAD Portfolio an alternative to Markowitz?

Scipy Optimise Minimise is a pig and the documentation is terrible. I am tempted to abandon it and use the critical line algo.

Edit: looks like r remains at 0.015 (rf) throughout the minimization iterations. WIP

I'm not sure I understand. r is merely the loop variable used to step through the range of returns. It has nothing to do with rf (which is the risk-free rate). The risk-free rate will remain the same throughout the test.

Correction to the mean-variance solver: replaced "fitness" with "fitness_sharpe". For my own purposes I discarded the margin provisions: put them back in if you need them.

def solve_weights(R, C, rf,context):  
    n = len(R)  
    W = ones([n])/n # Start optimization with equal weights  
    b_ = [(lower_bound,upper_bound) for i in range(n)] # Bounds for decision variables  
    c_ = ({'type':'eq', 'fun': lambda W: sum(W)-1.0 })  # Constraint: weights must sum to 1  
    optimized = scipy.optimize.minimize(fitness_sharpe, W, (R, C, rf), method='SLSQP', constraints=c_, bounds=b_)  
    if not optimized.success:  
        raise BaseException(optimized.message)  
    return optimized.x  

Changed the penalty in "fitness" to make more sense. It is now a much blunter instrument and actually works to change the weights as planned:

def fitness(W, R, C, r):  
    # For given level of return r, find weights which minimizes portfolio variance.  
    mean_1, var = compute_mean_var(W, R, C)  
    # Penalty for not meeting stated portfolio return effectively serves as optimization constraint  
    # Here, r is the 'target' return  
    penalty = 100*abs(mean_1-r)  
    #print("mean_1",mean_1,"var",var,"penalty",penalty, "var + penalty",var+penalty)  
    return var + penalty  

I'll come back with more details, but I am still finding surprising differences between the scipy.optimize.minimize approach and a Monte Carlo approach using 25,000 randomly generated portfolio weightings.

I take my words back on scipy.optimize - it isn't such a pig after all. But I intuitively prefer the broader, blunter, quasi-brute-force approach of the Monte Carlo method. Perhaps I could overcome that bias if I studied method='SLSQP' line by line... but then again I can't help thinking a blunderbuss approach in general helps to avoid local minima. Not that there are any in this sort of optimisation; but in general...

But of course for very large portfolios I guess the Monte Carlo approach is too cumbersome and time-consuming.

I notice that from first to last you have all commented out the following code:

#if not optimized.success:  
#    raise BaseException(optimized.message)  

In this context:

def solve_frontier(R, C, rf,context):  
    frontier_mean, frontier_var, frontier_weights = [], [], []  
    n = len(R)      # Number of assets in the portfolio  
    for r in linspace(max(min(R), rf), max(R), num=20): # Iterate through the range of returns on Y axis  
        W = ones([n])/n # Set initial guess for weights  
        b_ = [(0,1) for i in range(n)] # Set bounds on weights  
        c_ = ({'type':'eq', 'fun': lambda W: sum(W)-context.allowableMargin }) # Set constraints  
        optimized = scipy.optimize.minimize(fitness, W, (R, C, r), method='SLSQP', constraints=c_, bounds=b_)  
        #if not optimized.success:  
        #    raise BaseException(optimized.message)  
        # Add point to the efficient frontier  
        frontier_mean.append(r)  
        frontier_var.append(compute_var(optimized.x, C))   # Min-variance based on optimized weights  
        frontier_weights.append(optimized.x)  
    return array(frontier_mean), array(frontier_var), frontier_weights  

Can anyone tell me what errors you were encountering and why you could not fix them?

def fitness_sharpe(W, R, C, rf):  
    mean, var = compute_mean_var(W, R, C)  
    utility = (mean - rf)/sqrt(var)  
    return 1/utility  

Likely to lead you into this problem:
scipy.optimize.minimize SLSQP leads to out of bounds solution (scipy issues)
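One way to sidestep the 1/utility inversion in fitness_sharpe is to minimize the negated Sharpe ratio directly; that objective stays finite and well-behaved even when the excess return at an iterate is zero or negative, where 1/utility blows up or flips sign. A standalone sketch (numpy/scipy; the returns, covariance, and rf = 0.015 are illustrative, and compute_mean_var is inlined):

```python
import numpy as np
from scipy.optimize import minimize

def neg_sharpe(W, R, C, rf):
    # Negated Sharpe ratio; minimizing this maximizes Sharpe without
    # the blow-up that "return 1/utility" suffers when utility <= 0.
    mean = W @ R
    var = W @ C @ W
    return -(mean - rf) / np.sqrt(var)

R = np.array([0.08, 0.12])        # illustrative expected returns
C = np.array([[0.04, 0.006],
              [0.006, 0.09]])     # illustrative covariance matrix
n = len(R)
res = minimize(neg_sharpe, np.ones(n) / n, args=(R, C, 0.015),
               method='SLSQP', bounds=[(0, 1)] * n,
               constraints={'type': 'eq', 'fun': lambda W: W.sum() - 1.0})
print(res.x)  # long-only weights that sum to 1
```

The bounds and sum-to-one constraint mirror the solve_weights setup above; only the objective changes.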