minimum variance portfolio using get_fundamentals()

Something I finally got to work, so I thought I'd throw it out to the masses for critique. --Grant

[Backtest attached. ID: 550775f29534ff0e3a824c55]
9 responses

Are you sure you are calculating the returns correctly? It looks to me like you are working with price differences, i.e. after taking the diff between consecutive prices, you do not divide by the base price.

Furthermore, the weights do not seem to change from the initial weight (i.e. 0.05). It looks like something is wrong with the scaling of the variance/covariance matrix. Since you are calculating it from minute returns, even the starting weights already produce an extremely low variance (minute-level return variances are tiny, on the order of 1e-7, so small reallocations barely move the objective), and the optimizer seems to stop without actually doing anything useful.

Just my two cents.

I've had that issue with scipy.optimize. You can either adjust the solver's tolerance or scale up the inputs (I prefer the latter). Generally what I do is convert daily variance into annualized volatility multiplied by 100, so that small changes in daily variance become obvious to the solver.
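For example, here is a minimal sketch of that scaling, assuming an SLSQP solve with a fully-invested, long-only constraint (the names min_var_weights and cov_daily are illustrative, not from the original algorithm):

    import numpy as np
    from scipy.optimize import minimize

    def min_var_weights(cov_daily):
        # Scale the objective: annualize the daily portfolio variance
        # (252 trading days) and express it as volatility * 100, so the
        # numbers are large enough for the solver's default tolerance.
        n = cov_daily.shape[0]

        def objective(w):
            daily_var = w.dot(cov_daily).dot(w)
            return 100.0 * np.sqrt(252.0 * daily_var)

        constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
        bounds = [(0.0, 1.0)] * n
        w0 = np.ones(n) / n  # start from equal weights
        result = minimize(objective, w0, method='SLSQP',
                          bounds=bounds, constraints=constraints)
        return result.x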

Nice, there's no better way to learn NumPy/Pandas than reading your code.

I used to compute the returns in an ugly way; this is much better:

ret = np.diff(prices,axis=0) # returns  

How do you organize your code in Quantopian? Whenever I open my algorithms I have an urge to organize them, but with a flat list it is kinda hard. Prefix with parameters?

I agree, I don't think np.diff() is computing the returns, only the change in price (not the percent change).

Instead you can use the .pct_change() method of a pandas DataFrame (as returned by history).
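A toy illustration of the difference (the prices here are made up):

    import numpy as np
    import pandas as pd

    prices = pd.DataFrame({'A': [10.0, 11.0, 12.1]})

    np.diff(prices.values, axis=0)   # price changes:   [[1.0], [1.1]]
    prices.pct_change().dropna()     # percent returns: 0.10, 0.10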


Or of course the diff of the log-prices.
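Continuing the toy example above, log returns fall out of np.diff directly:

    log_ret = np.diff(np.log(prices.values), axis=0)
    # ~0.0953 per step, close to the 0.10 simple returns, since
    # log returns approximate percent returns for small moves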

Thanks for the feedback, all. Here's an update, with:

    prices = history(3*390, '1m', 'price')                  # three days of minute bars
    ret = 1000000*prices.pct_change().dropna()              # minute returns, scaled up for the optimizer
    ret = pd.ewma(ret, span=60).as_matrix(context.stocks)   # exponentially weighted smoothing, as a matrix

I also added leverage tracking.

Some mysteries:

  • Per Matt's advice, the inputs to the optimizer need to be scaled. This is disturbing, since I would have thought the optimizer would normalize its inputs internally.
  • The leverage jumps up during the backtest, which suggests something is not working as hoped. The portfolio allocation should always sum to 1.
  • Is the variance actually being minimized by the optimizer? This could be checked by solving for the minimum analytically (see http://faculty.washington.edu/ezivot/econ424/portfolioTheoryMatrix.pdf), and the code would probably run faster, too; a sketch of the closed form follows this list.
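For reference, the global minimum-variance weights have the closed form w = C^-1 1 / (1' C^-1 1) from the notes linked above. A minimal sketch (note it allows short positions, unlike a long-only bounded optimizer, so the two answers can differ when the bounds bind):

    import numpy as np

    def analytic_min_var(cov):
        # Solve C w = 1, then normalize so the weights sum to one:
        # w = C^-1 1 / (1' C^-1 1)
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()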

Grant

[Backtest attached. ID: 5509460227840e0e4f84d147]

Has anyone managed to get this algorithm to work again after the recent upgrade? My attempts are failing...

Here's an updated version. I changed np.asarray(args) to np.asarray(args[0]) to prevent the creation of a 3-D ndarray.
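A quick illustration of why that matters (the 390x20 returns matrix here is made up): scipy passes the extra arguments as a tuple, and wrapping the whole tuple adds a leading axis:

    import numpy as np

    ret = np.random.randn(390, 20)   # minute returns, one column per stock
    args = (ret,)                    # extra args arrive as a tuple

    np.asarray(args).shape           # (1, 390, 20): an unwanted 3-D array
    np.asarray(args[0]).shape        # (390, 20): the intended 2-D matrix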

[Backtest attached. ID: 550e6e6a50096e2abe640cef]

Thanks Thomas.