MAD Portfolio: an alternative to Markowitz?

Markowitz-style portfolio optimization is ubiquitous and the bedrock of modern portfolio theory. It is not without flaws, though:

  • Assumes normality in returns
  • Requires computation of a covariance matrix
  • Requires inversion of a covariance matrix
  • Often ends up highly concentrated in one security

An alternative presented by Konno and Yamazaki in 1991 suggests that Markowitz-style portfolio optimization could be replaced, or improved upon, by a model using the Mean Absolute Deviation (MAD) as its measure of risk (a small sketch of the MAD measure follows the list below). Some of the benefits they claim for MAD portfolio optimization are that it...

  • Requires no computation or inversion of a covariance matrix
  • Is computationally more efficient as it solves a linear optimization vs a quadratic optimization
  • Does not assume normality of returns
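
For concreteness, here is a minimal sketch of the MAD risk measure (my own illustration, not code from the paper): the expected absolute deviation of the portfolio return from its mean, which is what the _mad helper later in this thread computes.

import numpy as np
import pandas as pd

def portfolio_mad(weights, returns):
    # MAD = E[ |sum_j w_j * (r_jt - mean_j)| ], the L1 analogue of portfolio variance
    deviations = returns - returns.mean()          # per-asset deviations from their mean returns
    return deviations.dot(weights).abs().mean()    # mean absolute deviation of the portfolio

# Toy illustration with random data (illustrative only)
rng = np.random.RandomState(0)
toy_returns = pd.DataFrame(rng.normal(0.0005, 0.01, size=(250, 3)), columns=['A', 'B', 'C'])
print(portfolio_mad(np.array([1.0/3, 1.0/3, 1.0/3]), toy_returns))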

In their paper Portfolio Optimization: MAD vs. Markowitz, Beth Bower and Pamela Wentz show there is little significant difference between Markowitz and MAD portfolios. They went on to compute the tangent portfolio for both Markowitz and MAD. I stopped at the minimum variance portfolio, as there is a non-trivial amount of minimum variance material on Quantopian for comparison.

The code in the notebook is directly copyable into the IDE.

[Notebook attached; preview unavailable.]
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

55 responses

Here is a backtest using the code in the notebook. I chose the stocks by Googling for 'volatile stocks' and randomly picking a few.

[Backtest attached. Backtest ID: 55cb989ba512050c73343471]

Here is a backtest using the same stocks as above but with equal weightings. The MAD portfolio results in a decreased Beta, Max Drawdown, and Volatility, and increased Sharpe and Sortino ratios by comparison.

[Backtest attached. Backtest ID: 55d35962b8463c0c72b553cb]

Here is the same backtest using a Markowitz-style Minimum Variance Portfolio optimization routine I found in the Quantopian Community, written by David Edwards. The MAD portfolio results in a decreased Beta and Max Drawdown, an equal Volatility, and increased Sharpe and Sortino ratios by comparison.

[Backtest attached. Backtest ID: 55d36fdfbf82430c76d4b76c]

Interesting results; it seems like there's not a hugely significant (statistically) difference between MAD and min-var. Interesting use of NNLS; I'd never heard of it, but it seems incredibly efficient (I've always used minimize when it came down to constrained optimization). After a bit of research I think I understand how it's used: NNLS minimizes each specific stock's risk contribution in the portfolio (e.g., footnote #2 and the subsequent paragraph).

For the MAD portfolio, do you know why it rarely shorts any stocks?

I can't speak too much to NNLS, maybe David will jump in this thread and comment.

You know, I'm not sure why it rarely shorts. My _sum_check function probably isn't the best constraint, as the absolute-value sum can lead to a sum of weights less than one, though in the backtest the leverage hovered extremely close to one. Trying this with different securities did yield more short positions, and in the notebook the routine does short FB at 9.5% of the portfolio value.
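
For anyone following along, here is a small illustration (my own, not code from the algorithm) of how the two constraint variants being discussed differ:

import numpy as np

def _sum_check_abs(x):
    # Absolute-value (Lintner-style) budget: gross exposure sum(|w|) must equal 1
    return sum(abs(x)) - 1

def _sum_check_net(x):
    # Conventional budget constraint: net weights must sum to 1
    return sum(x) - 1

w = np.array([0.8, 0.5, -0.3])
print(_sum_check_net(w))   # ~0.0 -> the net constraint is satisfied
print(_sum_check_abs(w))   # ~0.6 -> gross exposure is 1.6, so the absolute-value constraint is violated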

Aha! That is precisely why; I skimmed over that part. An absolute-value sum is considered a "Lintnerian" short-sale constraint. See footnote 4, where he sets the assumption that any short-sale proceeds are not received in cash; rather, the investor has to put up an equal amount as collateral. I've had similar experiences with the Lintner short-sale constraint, where most weights are positive. However, I don't know exactly why it rarely shorts under this constraint, so it might be a suboptimal portfolio (if all information were known).

So I did run a backtest without the absolute-value sum and it still rarely shorts, which makes me think it may be more a function of the securities chosen. I've also noticed that choosing a shorter lookback window results in more shorts.

[Backtest attached. Backtest ID: 55d3806afb55870c73159f05]

Here is MAD using the sector ETFs since '05. It seems to taper out of being all long when volatility increases; I'm guessing that's because the absolute deviations increase, which is its measure of risk. Pretty interesting though.

[Backtest attached. Backtest ID: 55d6309a2c1d840d7734077b]

I am having a problem with the optimizer used in this post. I have posted a notebook in this forum topic to demonstrate my concern.

I glanced over the math and did find one issue: the returns are incorrectly calculated. The optimizing routine looks fine at first glance, but I did not dig into it, to be honest. When using log prices, the returns are the differences of the log prices (equivalently, the log of the price ratios), not the percent change of the log prices. That might be a good place to start investigating further.
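
A quick illustration of that point (my own example, not from the algorithm): once prices have been log-transformed, the return is the difference of consecutive log prices, not their percent change.

import numpy as np
import pandas as pd

prices = pd.Series([100.0, 101.0, 99.0, 102.0])
log_prices = np.log(prices)

simple_returns = prices.pct_change().dropna()   # percent change of raw prices
log_returns = log_prices.diff().dropna()        # correct: difference of log prices = log(P_t / P_{t-1})
not_returns = log_prices.pct_change().dropna()  # incorrect: percent change of the log prices themselves

print(np.log(1 + simple_returns))  # matches log_returns
print(not_returns)                 # depends on the arbitrary level of log(P), not on returns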

@David
I didn't notice that the link to my notebook showing the issue with the optimizer didn't post. I have corrected that with the link in my previous posting. I believe that my notebook demonstrates a problem with the optimizer: two different optimizers give two different results. In the instance that I captured, the results from the current NNLS optimizer appear to be incorrect. It's an intermittent problem dependent upon the characteristics of the windowed security prices used at the time. I believe the failure may occur with the same securities at different points in their price history.

I agree with you that the handling of the math with the logarithms is a concern. That's a good catch.

However, I am also concerned about potential failure of the optimizer if used in an online manner based upon my findings.

@David
I replaced the percent change with the difference of the log prices and ran the backtest. Here are the results. It doesn't beat the benchmark for return any more, but the metrics look better.

[Backtest attached. Backtest ID: 55d726842bb3330d88db4639]

James, David, Rob
Please excuse my laziness. While I am ploughing my way through various guides on constrained optimisation may I ask a few specific questions about this particular MAD Portfolio optimisation?

What is the best way to impose the following additional constraints/provisions:

  1. Maximum and minimum boundaries for weights BY INSTRUMENT. In other words, not an overall boundary of, say, 0 to 1 on all instruments, but 0 to 0.15 on some and, by way of example, 0 to 1 on others.
  2. A target for the object of minimisation itself (i.e. the MAD of the portfolio). E.g., as with Markowitz, you might want to target a specific volatility (or MAD) range, rather than just getting the minimum-variance or minimum-MAD portfolio.
  3. It is probably worth looking at the equivalent of semi-deviation here rather than deviation. In other words, minimize or target a specific MAD for downward deviation only.

I'll probably have some other thoughts as I work through the example but thanks to the Q team for bringing it up and coding it. This is a wonderful forum.

I suppose it is not entirely a surprise that so little attention has been focussed by the community in general on this post or other posts discussing portfolio optimisation.

In general, the vast majority of people in on-line trading forums quite understandably want to ape the hedge fund giants and seek outsize returns from minuscule starting capital. Usually using leverage.

Sadly, history tells us that the vast majority of such "racy" trading schemes are doomed to failure in the long term. Too few of us concentrate on risk control. Too few of us are interested in the dull low volatility returns achieved by back tests of schemes such as MAD.

History tells us that even MAD and other portfolio asset allocation schemes will meet their black swans in due course. But at least a loss of all capital is unlikely, even in a 1929 situation.

Slowly getting there. Here is the code to add bounds to the whole 9-asset portfolio:

    bnds = [(0, 1)]*9  
    cons = {'type':'eq', 'fun': _sum_check}  
    min_mad_results = minimize(_mad, guess, args=returns, bounds=bnds, constraints=cons)  

No doubt tomorrow I'll manage the rest......

Separate bounds for each stock in the portfolio:

    def _mad(x, returns):  
        return (returns - returns.mean()).dot(x).abs().mean()  

    num_assets = len(returns.columns)  
    guess = np.ones(num_assets)  
    bnds = [(0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1)]  
    cons = {'type':'eq', 'fun': _sum_check}  
    min_mad_results = minimize(_mad, guess, args=returns, bounds=bnds, constraints=cons)  

Glad you like it!

I see that you found your way to setting the bounds, so that is great. In answer to your question about setting a target deviation: that is definitely possible; minimize the absolute value of the target deviation minus the current iteration's mean absolute deviation.

# Computes the weights for the portfolio with the smallest Mean Absolute Deviation  
def minimum_MAD_portfolio(returns, deviation_target):  

    def _sum_check(x):  
        return sum(x) - 1  

    # Computes the Mean Absolute Deviation for the current iteration of weights  
    def _mad(x, returns, deviation_target):  
        return abs(deviation_target - (returns - returns.mean()).dot(x).abs().mean())  

    num_assets = len(returns.columns)  
    guess = np.ones(num_assets)  
    bnds= [(0, 1),(0, 1),(0, 1),(0, 1),(0, 1),(0, 1),(0, 1),(0, 1),(0, 1)]  
    cons = {'type':'eq', 'fun': _sum_check}  
    min_mad_results = minimize(_mad, guess, args=(returns, deviation_target), bounds=bnds, constraints=cons)  
    return pd.Series(index=returns.columns, data=min_mad_results.x)  
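
A hypothetical usage sketch for the routine above inside a Quantopian rebalance function (my own, with an arbitrary deviation target; it assumes context.stocks holds the nine securities the hard-coded bounds expect):

def rebalance(context, data):  
    prices = data.history(context.stocks, 'price', 200, '1d')  
    returns = prices.pct_change().dropna()  
    # Target roughly 50 bps of daily mean absolute deviation (an illustrative number)  
    weights = minimum_MAD_portfolio(returns, deviation_target=0.005)  
    for stock in context.stocks:  
        if data.can_trade(stock):  
            order_target_percent(stock, weights[stock])  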

With just the downside deviation I imagine you could do something like returns[returns < 1] and feed that into the minimizer as one of the args.

Here is an implementation using just the downside deviations...

See my response correcting the typo in the code below.

[Backtest attached. Backtest ID: 567348e1414e61117106b7c9]

I like the local functions within a function using the convention of function name starting with underscore.

Hits a maximum leverage of 1.45 early, on 2005-07-15, so it might be fairly easy to address for anyone interested in doing so.
It might have happened on only one frame and then recovered right away, yet it is significant because it means $1,450,000 is needed to achieve that result rather than just $1M, so the return per dollar used is lower than shown.
It's not visible on the custom chart. Ah, I see: not due to smoothing in this case; rather, I think it's because record() is daily level.

Here are 4 lines that can easily be dropped into handle_data() to catch maximum leverage even at minute level, at least within a day:

def handle_data(context, data):

    if 'max_lvrg' not in context: context.max_lvrg = 0 # init this instead in initialize() for better efficiency  
    if context.account.leverage > context.max_lvrg:  
        context.max_lvrg = context.account.leverage  
        record(mx_lvrg = context.account.leverage)      # take time for this new record only when there is a change

Very, very early days for me on this algo, but herewith an amusing backtest of the semi-deviation model incorporating volatility targeting. Thank you James. Thank you garyha; yes, I must get to the bottom of controlling leverage on Quantopian.

You will note that the system shown incorporates bounds of (0, 1) on all instruments and includes two additions to the portfolio; they are bond funds and so help the dash to safety in troubled times.

I far prefer this approach to dampening volatility over using shorting.

[Backtest attached. Backtest ID: 5673c68f32db9e1159b7aeaa]

Isn't the downside deviation part incorrect in the latest posted algos? I mean the returns < 1 part here: returns is the pct_change of history, so it's taking everything under 100% (meaning probably every bar).

min_mad_results = minimize(_mad, guess, args=returns[returns < 1], constraints=cons)

The filter above also leaves the data full of NaNs, even if the threshold were correct.

A value of 1 would be flat; less than one, a negative return.

I'm not sure what you mean by that, but returns is history(...).pct_change().dropna() (a pandas DataFrame), and returns[returns < 1] as used here returns all points where the percentage change is under 100% (per day) and all other points as NaN (it's a 2-dimensional table with dates as the index and securities as the columns). I don't even think the _mad() calculation would work with NaNs all over the DataFrame (numpy's mean() returns NaN if any element of the calculation is NaN).

Then I guess you should redraft it and post a correct solution.

Sure, I just wanted to make sure I hadn't misunderstood some part of the calculation before posting, but here is a version where only downside changes are calculated.

As the whole idea of the calculation is to optimize the volatility around the mean, I'm not totally sure how beneficial it is to calculate only the downside part of the mean, but we'll see in a couple of minutes (the backtest I attached is still running).

[Backtest attached. Backtest ID: 56d21f1c41f8e80df5d8115c]

@ Mikko

Yup. That returns less than 1 is a typo. Here is a version corrected to use less than 0.

[Backtest attached. Backtest ID: 56d235002f76460dffc92ba0]

Here's a working link to the paper:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.176.6746&rep=rep1&type=pdf

The link in the original post doesn't work.

Just curious: is this a viable strategy for the Q hedge fund? James' recent post has beta = 0.1 and Sharpe = 1.02, with decent returns. What's missing? Or is it good to go? I'm wondering if it'd be worth fiddling with it and submitting it to the contest, or running it for 6 months separately and then sending it to the Q fund committee for evaluation. Or, since it uses ETFs, would it be rejected out of hand, since the desire seems to be for long-short algos with lots of individual securities?

Grant,

You should note that the values (Sharpe, etc.) and curves you see here are without commission/slippage. The picture is quite different if you set the algo to the default commission/slippage.

I wasn't aware that using ETFs is a negative thing for the fund; why do you think it is? It's quite understandable that leveraged (over 1x) ETFs are an issue, because the account leverage doesn't reveal the actual leverage anymore.

@Grant

Mikko is right: with slippage taken into account this algo does poorly at this level of capital (it trades lots of capital on a handful of securities every day). Overall, using ETFs doesn't exclude any algorithm from the Quantopian fund.

As drafted it trades daily. No wonder slippage and commissions mount up! Try monthly.

Interesting that slippage kills it. I would think that these are whopping big sector ETFs and that rebalancing a $1M portfolio daily wouldn't be a big deal. Regarding commissions, if the strategy were part of a big pot of hedge fund money, say $50M-$100M, I'd think they'd be negligible, no?

When I get the chance, maybe I'll read the paper and play with the algo, since perhaps it has legs.

Let me re-phrase my question. Assuming that commissions and slippage are negligible, would this be an attractive strategy for the hedge fund?

Note that turnover would be reduced and allocations would be more stable if the optimization were bootstrapped over subsamples.

Simon,

Thanks for pointing that out. I'd started playing around with that approach. The optimization can be run every minute, the resulting portfolio vectors stored, and then averaged on a rolling basis. At least that's my interpretation of bootstrapping over subsamples. Then, when the re-allocation is applied daily/weekly/monthly, it'll be smoothed over a trailing window of allocations. Coolamundo!

Yeah, I meant boot strapping where you draw random samples of the returns vector and optimize just on the samples and average the results, not rolling averaging per se.
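
To make that concrete, here is a rough sketch of the idea (my own; the draw count and sample fraction are arbitrary, and it assumes the single-argument minimum_MAD_portfolio from the notebook): draw random subsamples of the return history, optimize on each, and average the resulting weight vectors.

import numpy as np
import pandas as pd

def bootstrapped_MAD_weights(returns, n_draws=20, sample_frac=0.7, seed=0):
    # Draw random subsets of the return rows, optimize on each subset, and average the weights
    rng = np.random.RandomState(seed)
    draws = []
    for _ in range(n_draws):
        sample = returns.sample(frac=sample_frac, random_state=rng)  # random subsample of days
        draws.append(minimum_MAD_portfolio(sample))                  # MAD-optimal weights on the subsample
    weights = pd.concat(draws, axis=1).mean(axis=1)                  # average across draws
    return weights / weights.abs().sum()                             # renormalize gross exposure to 1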

I'd be hesitant to say this is the kind of algo we want in the fund. I say that because the algo doesn't really demonstrate any predictive capability. It uses a fixed universe and simply adjusts exposure. There isn't really any signal generating power due to some underlying model. The algo demonstrates more of the "how to hold?" vs the "what to hold?" question.

That being said a bootstrapped version of the long only version that Anthony created with the bond ETFs could potentially be a good algo to trade with Robinhood.

Just for informative purposes, here is Anthony's latest algo with default slippage and commission (I just cloned the algo and commented out the commission/slippage commands).

James, by the way, the latest returns[returns < 0] version you posted is still not correct if you allow short positions and want to calculate only negative volatility. If you allow shorts, those returns < 0 will become positive and will be calculated. Also, the returns < 0 check should not be applied before dot(..), as dot doesn't work with the sparse matrices that returns < 0 gives: you will get NaN for any day where the return of any security is > 0, so your final .abs().mean() value will be calculated only from days where the return for every security is < 0.

The NaN problem can be solved by using mul(....).sum(axis=1) instead of dot(), as sum just ignores NaNs, so a whole row doesn't become NaN as it does with dot.

So the whole negative-returns-only calculation would be:

dfp = returns.mul(x)
dfp = dfp[dfp < 0]
(dfp - dfp.mean()).sum(axis=1).abs().mean()
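
Wrapping those three lines into an objective for the minimizer might look something like this (a sketch on my part; the function name and its use with scipy's minimize are illustrative, not from the posted algorithm):

def _mad_downside(x, returns):
    dfp = returns.mul(x)        # each security's contribution to the portfolio return
    dfp = dfp[dfp < 0]          # keep only losing contributions; winners become NaN
    # sum() skips NaNs, so days on which only some securities lost money still count
    return (dfp - dfp.mean()).sum(axis=1).abs().mean()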

[Backtest attached. Backtest ID: 56daed325c42c80f242089f1]

Yeah, I meant boot strapping where you draw random samples of the returns vector and optimize just on the samples and average the results, not rolling averaging per se.

@Simon, you'd mentioned this approach before, I recall. Is there an advantage in drawing random returns samples (picked from a trailing window) compared to running the optimization on a rolling basis, as I describe? Is it just a matter of computational efficiency to do the sampling, yielding essentially the same result as a rolling optimization? Or is there something more subtle?

Mikko Mattila
Many thanks for all your work and comments on this. I'm afraid my work on this algo and comments so far have been very cursory and off the cuff. I do not short ETFs, hence I was unconcerned with the problem you outlined; nonetheless it makes obvious sense to make the corrections you suggested, even if only for the sake of completeness.

I am going to take a closer look at this thing with and without semi deviation. I note Rob Welsh's concern with the optimiser but have not looked into the problem myself yet.

I am also totally perplexed by the idea of DAILY optimisation. Why in god's name would anyone want to take on that sort of commission and slippage for a scheme such as this, which seems designed for long-term investment, not short-term trading? Another factor is that monthly re-allocation would let profits run.

Anyway, enough. I must do some work on this and come back with further thoughts.

Rob Welsh
I believe I am correct in saying that making this change in your notebook ironed out the difference between the two scipy optimize algos?

I replaced the percent change with the difference of the log prices and ran the backtest.

James

I say that because the algo doesn't really demonstrate any predictive capability. It uses a fixed universe and simply adjusts exposure.

I'm not at all sure about that one. The whole point of Markowitz and/or MAD is to make "predictions". Markowitz was expecting punters to make a one-time guess as to the future returns and vol of portfolio components. Modern algos at least use historic returns and vol and then repeat the calculations periodically, as per your algo here.

If you target vol you are predicting that future returns will look somewhat like the past (or at least the past 200 days as in your algo). Agreed of course that you are merely adjusting exposure if you trade this model with a fixed portfolio of ETFs but that is still prediction.

By way of example you target annualised volatility of say 16% and "predict" high returns for relatively high volatility. Or you target 5% vol for "predicted" bond like returns.

I am increasingly perplexed by conversations about including and excluding trading / investment styles and instruments. The distinctions are so often meaningless and counter productive.

At least where "probability" type trading and investment are concerned. And this forum is concerned with "probability" trading.

HFT by contrast uses (or so I am told) many techniques such as arbitrage and legal (?) front running. There you really are talking a very different animal. And haven't they done well?

But what about hedge funds in general? In general they have been a disappointment: for all the hype HFs have not lived up to the promise of absolute return over the past 20 years. With few exceptions, "all weather" portfolios have proved elusive.

I very much doubt that this forum will come up with anything superior (over the long term) to the simple Markowitz type approach typified by the example in this thread.

Call me sceptical but the vast majority of trading talk is so much BS. And curve fitting.

My serious point is that as an investor I would probably look towards a simple approach along the lines of this thread. I don't think in the long run there will be many other HF techniques which will do much better. In general, investment will give you market returns. You can boost those by leverage at the expense of higher volatility. You can cut draw down by techniques outlined in this and other threads.

But you won't come up with any magic bullet, predictive or otherwise. Unless you trade arbitrage or some other game such as front running where the odds are heavily loaded in your favour.

There are definitely further problems with this algorithm. It is simply not working as expected.

Let us take the target deviation model for a start. As we know from a "normal" Markowitz-type optimisation scheme, there will be a whole bullet-shaped set of returns and deviations for any one-period calculation, made up by using many different weight combinations.

Therefore it is no good just setting a target deviation. One has to find the target deviation which actually sits on the efficient frontier. You can do this by turning the optimisation around: maximise return but limit volatility (or MAD) as a constraint.
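
A sketch of that reversed formulation (my own, reusing the thread's _mad-style objective with scipy's SLSQP; names and defaults are illustrative, not the original algorithm's code): maximize mean return subject to an upper bound on MAD.

import numpy as np
import pandas as pd
from scipy.optimize import minimize

def max_return_with_MAD_cap(returns, mad_cap):
    # Maximize mean portfolio return subject to MAD <= mad_cap, long-only, fully invested
    def neg_mean_return(x):
        return -returns.mean().dot(x)
    def mad(x):
        return (returns - returns.mean()).dot(x).abs().mean()
    num_assets = len(returns.columns)
    guess = np.ones(num_assets) / num_assets
    bnds = [(0, 1)] * num_assets
    cons = [{'type': 'eq', 'fun': lambda x: sum(x) - 1},          # fully invested
            {'type': 'ineq', 'fun': lambda x: mad_cap - mad(x)}]  # MAD must stay under the cap
    result = minimize(neg_mean_return, guess, bounds=bnds, constraints=cons)
    return pd.Series(index=returns.columns, data=result.x)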

Leaving that aside, there seems to be a problem with the basic scipy.optimise.minimise routine as drafted.

I used Excel to generate 15,000 random weightings on a single period of returns and MADs for three stocks. The minimum absolute deviation came out around 0.004 on the brute force approach. Even if I had used 100,000 random weightings, the results would be similar.

Using the code in this algo however totally failed to find the minimum: it came up with 0.008.

I guess I ought to code the brute force approach in Python, combine it with the algo in a Notebook, and post it here. One problem may be that MAD is not, strictly speaking, a convex problem? The scatter plot does not show a smooth efficiency curve, or at least not to the extent you get with the normal co-var optimisation. I notice others have had similar problems with scipy optimise in traditional Markowitz set-ups, however. See Joel Hubert's comments here:
Markowitz Portfolio Construction

One problem is that the Scipy documentation is absolutely useless.

Anyway it is worth getting this right.
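
As a point of comparison, here is a hedged Python sketch of the random-weights brute force described above (my own translation of the Excel experiment; the trial count mirrors the 15,000 draws and the seed is arbitrary, fixed for repeatability):

import numpy as np
import pandas as pd

def brute_force_min_MAD(returns, n_trials=15000, seed=42):
    # Sample long-only weight vectors that sum to 1 and keep the one with the smallest MAD
    rng = np.random.RandomState(seed)
    deviations = returns - returns.mean()
    best_mad, best_w = np.inf, None
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(len(returns.columns)))   # random weights on the simplex
        mad = deviations.dot(w).abs().mean()
        if mad < best_mad:
            best_mad, best_w = mad, w
    return best_mad, pd.Series(index=returns.columns, data=best_w)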

If scipy optimise cannot be made to work as expected then either:

a) use a brute force approach but using a random seed for repeatability (costly); or
b) revert to the Critical Line Algo; or
c) use a closed form solution?

Here is a link to the brute force spreadsheet I created:
MAD Spreadsheet

Can you use Scipy Optimise Brute on multiple weights? I'm not sure. The documentation is not very explicit. Anyway this spreadsheet suffices for the purpose and shows the tickers, returns etc used.

Click on "returns" and it will download.

Here is the Notebook code adapted from Rob Welsh's Notebook. Data taken from Yahoo.

import numpy as np  
import pandas as pd  
from scipy.optimize import minimize

Tickers = ['EWC','F','C']  
data = np.log(get(Tickers, start='2014-01-01'))   # 'get' downloads the price data from Yahoo  
returns = (data - data.shift(1)).dropna()

# Computes the weights for the portfolio with the smallest Mean Absolute Deviation.  
def minimum_MAD_portfolio(returns):  
    def _sum_check(x):  
        return sum(abs(x)) - 1  
    # Computes the Mean Absolute Deviation for the current iteration of weights  
    def _mad(x,returns):  
        return (returns - returns.mean()).dot(x).abs().mean()  

    def _performance(x,returns):  
        return returns.mean().dot(x).mean()  
    num_assets = len(returns.columns)  
    guess = np.ones(num_assets)  
    cons = {'type':'eq', 'fun': _sum_check}  
    min_mad_results = minimize(_mad, guess, args=returns, constraints=cons, options={'disp':True})  
    return pd.Series(index=returns.columns, data=min_mad_results.x)

weights = minimum_MAD_portfolio(returns)  
print ('\n', weights)  

And here are the results:

Optimization terminated successfully.    (Exit mode 0)  
            Current function value: 0.00806300687814  
            Iterations: 8  
            Function evaluations: 40  
            Gradient evaluations: 8

ewc    0.669923  
f      0.283561  
c      0.046516  
dtype: float64  

Leaving that aside, there seems to be a problem with the basic scipy.optimise.minimise routine as drafted.

I used Excel to generate 15,000 random weightings on a single period of returns and MADs for three stocks. The minimum absolute deviation came out around 0.004 on the brute force approach. Even if I had used 100,000 random weightings, the results would be similar.

Using the code in this algo however totally failed to find the minimum: it came up with 0.008.

I guess I ought to code the brute force approach in Python, combine it with the algo in a Notebook, and post it here. One problem may be that MAD is not, strictly speaking, a convex problem? The scatter plot does not show a smooth efficiency curve, or at least not to the extent you get with the normal co-var optimisation. I notice others have had similar problems with scipy optimise in traditional Markowitz set-ups, however. See Joel Hubert's comments here: Markowitz Portfolio Construction

One problem is that the Scipy documentation is absolutely useless.

Scipy optimization is not useless per se; you just have to understand how it works. It's a gradient algorithm (I think; it's from Kraft / sequential quadratic programming, if someone wants to look at the paper) that looks for local minima, so it's important to have some idea of what a good initial guess would be.

You can use basinhopping to find the global minimum (i.e. to escape local minima), but for that it would be good to know a few things about the fitness landscape, i.e. a rough estimate of the distance between the peaks. In situations where the fitness landscape is not known, I have used a totally random shuffle to hop from one point to another. It's basically a glorified brute force, as it takes a random point and then uses SLSQP to solve from it. Solving for the global minimum of an N-dimensional space in reasonable time is not exactly the easiest thing in the world to do.

http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.optimize.basinhopping.html

(Some info about optimization and fitness landscapes in general) http://www.turingfinance.com/fitness-landscape-analysis-for-computational-finance/
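
A sketch of what basinhopping over an SLSQP local minimizer might look like for the MAD objective (my own translation of the suggestion; the step size and hop count are guesses, and the random hops may wander off the constraint set before SLSQP pulls them back):

import numpy as np
from scipy.optimize import basinhopping

def basinhopping_min_MAD(returns, n_hops=50):
    def mad(x):
        return (returns - returns.mean()).dot(x).abs().mean()
    num_assets = len(returns.columns)
    guess = np.ones(num_assets) / num_assets
    minimizer_kwargs = {
        'method': 'SLSQP',
        'bounds': [(0, 1)] * num_assets,
        'constraints': {'type': 'eq', 'fun': lambda x: sum(x) - 1},
    }
    # Each hop randomly perturbs the weights, then SLSQP re-solves locally under the constraints
    result = basinhopping(mad, guess, niter=n_hops, stepsize=0.1,
                          minimizer_kwargs=minimizer_kwargs)
    return result.x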

Mikko Mattila
Many thanks indeed for the tips. I am one of those people who is completely obsessive and has to work out precisely how things work! It looks to me as if my quasi brute force approach is rather useful after all but I very much appreciate the explanations and additional reading material.

I take it we are agreed at least that as drafted the algorithm contained in this thread is not "fit for purpose"?

so it's important to have some idea of what a good initial guess would be

Understood. And this is all well and good for a single period optimisation but I fear it would not be possible to make a good guess for each period in a back test.

The major problem here with the MAD (as opposed to the standard Mean Variance optimisation) is perhaps that the "efficient frontier" is jagged in the former and smooth in the latter. In the latter the global minimum is likely to be the same as the single local minimum. Not so perhaps with MAD.

Actually of course what I really need to do is to go back and read the papers quoted in James Christopher's first post above and see what method they used to find the global minimum.

If the "efficient frontier" is jagged then I assume using the Critical Line algo would be a mistake.

Call me anal and slow but I do like to investigate properly.

In their paper Portfolio Optimization: MAD vs. Markowitz, Beth Bower and Pamela Wentz use a hunt-and-peck approach in Excel:

Using the Excel add-in called Solver, we were able to numerically minimize this value by changing our xj's and using constraints (4.2) through (4.4).

Not overly sophisticated!

I've taken a recent interest in Konno's use of the L1 risk measure. To refresh the discussion on this topic, here is some updated code which relies on CVXPY (which I do not believe was whitelisted on Quantopian at the time of James Christopher's initial post). This routine is significantly faster than using scipy.optimize or related brute-force general function solvers and can even handle universe sizes on the order of the Q500US or Q1500US. The alpha_vector could be, for example, ranks from a pipeline expression. The L1 risk in this function is quoted in daily % units (i.e., 0.001 for 10 basis points of daily risk).

import cvxpy as cvx  
import numpy as np

def calc_opt_L1_weights(alpha_vector, hist_returns, max_risk, max_position=0.05):

    num_stocks = len(alpha_vector)  
    num_days = len(hist_returns)  
    A = hist_returns.fillna(0.0).as_matrix()  
    x = cvx.Variable(num_stocks)  
    objective = cvx.Maximize(alpha_vector*x)

    constraints = [  
        sum(cvx.abs(A*x))/num_days <= max_risk,  
        sum(x) == 0,  
        sum(cvx.abs(x)) <= 1,  
        x <= max_position,  
        x >= -max_position  
    ]  
    prob = cvx.Problem(objective, constraints)  
    prob.solve(verbose=False)  
    print prob.value  
    sol = np.asarray(x.value).flatten()  
    return sol  
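
A hypothetical call with random data, just to show the expected shapes (illustrative only; not from Jonathan's post, and assuming the cvxpy version he was using):

import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
hist_returns = pd.DataFrame(rng.normal(0.0, 0.01, size=(252, 50)))   # ~1 year of daily returns, 50 names
alpha_vector = rng.normal(size=50)                                    # e.g. demeaned ranks or z-scores
weights = calc_opt_L1_weights(alpha_vector, hist_returns, max_risk=0.001, max_position=0.05)
# weights is dollar neutral (sums to ~0) with gross exposure at most 1 and per-name caps of 5%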

Hi Jonathan,

You appear to be an expert in optimization. I was wondering if there is any way I can set a minimum variance threshold constraint? I have implemented this paper, but it does not fit into the alpha.T * x paradigm, since it uses SDP and assumes all weights are positive.

Would you know how I can maximize alpha but set a minimum variance threshold? Something like risk > 0.01?

Best regards,
Pravin

Thanks Pravin for sharing that paper! I had not seen it, and upon a quick read it looks novel and promising. It's a dense paper, so I have to spend some real time working through it, and I can't do that in the near term. You are right, though, that the spirit of the paper is not consistent with the objective max alpha'x. This is because max alpha'x s.t. risk > y simply collapses to the unconstrained max alpha'x: the risk is an increasing function of alpha'x. Hence you just optimize to maximize the alpha without the risk constraint and then, post-optimization, if the resulting portfolio risk is greater than y, you have a solution; if it is not, the problem is infeasible.

@Jonathan, alpha_vector is a list of equities or something else?

Hi all,

I'm having real trouble using this algorithm. I want the algorithm to only consider a specific set of stocks when calculating the mean-variance optimal portfolio. The reason is that I want to compare the most basic portfolio strategies with each other and write a paper for a school course. It works until the period 01/01/2009-12/31/2009. I am not trying to beat the market, only to show how the mean-variance optimal portfolio performs on the market. Does anyone have a solution?

[Backtest attached. Backtest ID: 5903891cb5d7f761e170aa47]

I don't understand your problem. Your test seems to have run OK?

Mikkel.

Try this:

def rebalance(context, data):

    returns = data.history(context.stocks, 'price', 200, '1d').pct_change().dropna()  
    weights = pd.Series(min_var_weights(returns, False))  
    weights[weights != weights] = 0           # NaN != NaN, so this zeroes out any NaN weights  
    wt = weights/np.sum(abs(weights))         # normalize by gross exposure  
    wt[wt != wt] = 0                          # guard against NaNs from a zero denominator  
    print wt

    if get_open_orders(): return

    for stock in context.stocks:  
        if not data.can_trade(stock): continue  
        order_target_percent(stock, wt[stock])  

Hi @Vladimir Yevtushenko

Yes, it works just fine now. Thank you!

@Anthony FJ Garner - The problem was that the algorithm worked just fine in every yearly period except 01/01/2009-12/31/2009. But now it works.

@Peter, I realized I never responded to your question above.

@Jonathan, alpha_vector is a list of equities or something else?

Yes - It would be a list of equities with an associated score. There is some discussion on this in another forum post (https://www.quantopian.com/posts/max-long-exposure-and-max-short-exposure-with-a-quadratic-objective). Quoting from there:

The alpha_vector could be anything -- ranks, z-scored values, actual expected returns, etc. -- anything where "higher is better".
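
For example (my own illustration, with hypothetical tickers and ranks), a pipeline-style factor rank could be turned into a z-scored, "higher is better" alpha vector like this:

import pandas as pd

ranks = pd.Series({'AAPL': 4, 'MSFT': 1, 'XOM': 2, 'GE': 3})   # hypothetical factor ranks
alpha_vector = (ranks - ranks.mean()) / ranks.std()            # z-scored, higher is better
print(alpha_vector)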