minimum variance w/ constraint

For your consideration. I got the list of securities from one of the postings on https://www.quantopian.com/posts/for-robinhood-trading.

[Backtest attached: ID 56b5f3c86144f3129a9eb7e5]

Here's a tweak. Instead of context.eps = 0.01 I set context.eps = 0.05. Seems decent. Comments/criticisms/improvements welcome. --Grant

[Backtest attached: ID 56b60314c12d47129beaadaf]

Here's another tweak. I changed "if denom > 0 and np.dot(allocation,ret_norm) >= 0:" to "if denom > 0:". The former was probably resulting in the optimization not being applied when it should have been.

[EDIT] I also changed the optimization seed to this:

    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)  
    res = scipy.optimize.minimize(variance, context.x1, args=ret, jac=jac_variance, method='SLSQP', constraints=cons, bounds=bnds)  

The seed for the optimization is an equal weight portfolio.

--Grant

[Backtest attached: ID 56b6774073637f12c08c81ea]

Awesome algorithm, Grant.
Could you please give a brief explanation of the constraints in the optimization?
Many thanks,
Andrew

Incredible returns. Now I'm tempted to try it after that original post for Robinhood.

Edit:

Grant, can you explain the purpose of this line in the constraint?

    {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps}

I'm not sure I understand what ret_norm and eps are.

EDIT 2:

Never mind, I figured it out. You are seeking a certain return or greater, based on the normalized returns.

Andrew,

This code computes the mean return normalized by the standard deviation:

    ret_mean = prices.pct_change().mean()  
    ret_std = prices.pct_change().std()  
    ret_norm = ret_mean/ret_std  
    ret_norm = ret_norm.as_matrix(context.stocks)  

Then, the normalized mean return is included as a constraint:

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  

The variable x is the portfolio asset allocation/weight vector. The first constraint is a leverage constraint--the weights need to sum to one. The second constraint is saying that the sum of the normalized returns weighted by the portfolio allocations needs to be equal to or greater than a threshold, context.eps. It should tilt the portfolio toward assets with positive risk-adjusted returns.
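
For anyone who wants to poke at the optimization outside the backtester, here is a minimal, self-contained sketch of the same constrained fit. The variance and jac_variance functions are never shown in the thread, so the quadratic form and its gradient below are my assumption of what they compute, and the return matrix is synthetic:

    import numpy as np
    import scipy.optimize

    def variance(x, ret):
        # portfolio variance x' C x, with C the sample covariance of the returns
        cov = np.cov(ret.T)
        return np.dot(x, np.dot(cov, x))

    def jac_variance(x, ret):
        # gradient of x' C x is 2 C x
        cov = np.cov(ret.T)
        return 2.0 * np.dot(cov, x)

    eps = 0.05
    ret = np.random.randn(250, 4) * 0.01            # stand-in for trailing returns
    ret_norm = ret.mean(axis=0) / ret.std(axis=0)   # mean return / std, per asset
    x0 = np.ones(4) / 4.0                           # equal-weight seed

    cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x: np.dot(x, ret_norm) - eps})
    bnds = [(0.0, 1.0)] * 4                         # long-only weights

    res = scipy.optimize.minimize(variance, x0, args=(ret,), jac=jac_variance,
                                  method='SLSQP', constraints=cons, bounds=bnds)
    # with random data the inequality can easily be infeasible, so check res.success
    print(res.success, res.x)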

By the way, I suspect that there may be a closed-form, analytic solution to the constrained optimization problem solved here iteratively. An attaboy to the first person to post it.

Thanks Grant!
Awesome returns!

The most recent algo's PvR comes out, accounting for the negative cash, to 108.6%, or 0.0833 %/day:

2016-02-04_pvr_:132INFO PvR 0.0833 %/day     2010-12-01 to 2016-02-04  10000  minute  
2016-02-04_pvr_:135INFO  Profited 35134 on 32362 activated/transacted for PvR of 108.6%  
2016-02-04_pvr_:138INFO  QRet 351.34 PvR 108.57 CshLw -22362 MxLv 1.50 RskHi 32362 Shrts 0  

This algo ranks 6th in PvR/day among the 61 tested this week, very good.
It's just that it would be useful to see a version of it without margin, which, as it happens, could maybe bring higher overall profitability.

How do you suggest eliminating margin, other than holding a small positive cash balance (which will just be dead capital and cut into the return)?

Maybe whatever magic it was that Tim V. did with his Robinhood algo to eliminate negative cash; I haven't tried to understand it yet.

Ideally, beyond that, an ordering wrapper for all of the order methods that would monitor fills (including partial and unfilled orders) and adjust weights accordingly, though that's not easy. Orders would first be queued in any frame, then analyzed with respect to current cash, adjusted, and placed.

Thanks. I see the problem now with this code:

def handle_data(context,data):  
    context.leverage.append(context.account.leverage)  
    record(max_leverage = max(context.leverage))  

It is a mystery how the leverage could spike during the day, but settle to 1 by the close.

Here's a backtest indicating that the leverage pops up above 1 significantly. However, as shown above, at the end of the trading day it is always near 1. Any idea what's going on? Maybe something fishy with the way order_target_percent is playing out? I guess if all the orders aren't processed in one minute, leverage can be out of whack temporarily?

[Backtest attached: ID 56b77413684bea1190dc6d77]

You can add your own context variables in track_orders() here to experiment toward solutions, etc.
The number at the beginning of each line is the minute of the day.
Would like to point out that you'll (rarely) see partial fills; they have the slash, like
Bot 50/63 EDV at 80.50 or Sold -250/-285 XIV at 15.32
So if you scroll and copy the output you can search for '/' and find those.
You can also search for 'cash -' to find negative cash.
The order id logging option is turned on.
There is a line like this each time there's a new leverage high:
2010-12-14pvr:346INFO 99 MaxLv 1.01 QRet -0.7 PvR -0.7 RskHi 10068

[Backtest attached: ID 56b7d451bdde091178402604]

I took a closer look: sell orders sometimes go unfilled for numerous minutes while buys go through right away, and that accounts for the leverage spikes and deep negative cash.
So here's one suggestion:
a) In place of order_target_percent(), queue the orders.
b) In handle_data, every minute, if there are any queued orders (if context.queue_list:), call a function that will process those orders only if/when there are no open orders. Wish there were a better way.
c) Process any sells first and wait for them to be filled before buying. Or better, only place buys when there is likely enough breathing room in available cash for slippage and commissions (so as long as one or more sells are done, allow any buys that can fit).

To determine whether a transaction ratio (the "percentage" being fed to order_target_percent()) is a sell or a buy (both are positive and simply adjustments; a lower number than the existing one is a sell), you have to compare the security's current percentage of the portfolio to the new weight, allocation[i], that is stored/queued. A sketch of that check follows.
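
Here is a rough sketch of that comparison (hypothetical helper name, written against the newer data.current API, and assuming the queued target weights are kept in context.allocation):

    def classify_queued_orders(context, data):
        # Compare each security's current fraction of the portfolio to its queued
        # target weight: a lower target is effectively a sell, a higher one a buy.
        pv = context.portfolio.portfolio_value
        sells, buys = [], []
        for i, stock in enumerate(context.stocks):
            amount = context.portfolio.positions[stock].amount
            current_pct = amount * data.current(stock, 'price') / pv if pv else 0.0
            target_pct = context.allocation[i]          # stored/queued weight
            if target_pct < current_pct:
                sells.append((stock, target_pct))
            elif target_pct > current_pct:
                buys.append((stock, target_pct))
        return sells, buys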

I wrote the bit above and then decided to work on it so I'll probably have a backtest approaching that fairly soon.

Thanks. I figured something like you describe could be happening. By the way, I posted a question to https://www.quantopian.com/posts/zero-commission-algorithmic-trading-robinhood-and-quantopian. It seems that order_target_percent() is not gonna work for re-balancing under Robinhood, unless enough cash is sitting around to cover the T+3 rule. If I understand correctly, it'll need to be something like order_target_percent_sell() followed by order_target_percent_buy() three days later when the cash from the sale is available.
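
To make that concrete, here's a rough sketch of the settled-cash bookkeeping such an approach would need (hypothetical helpers, with context.unsettled initialized to an empty list in initialize(); it counts calendar days where real settlement uses trading days, and it is not the emulation attached later in the thread):

    import datetime

    def record_sale(context, proceeds, dt):
        # Remember each sale's proceeds along with the date they become spendable.
        context.unsettled.append((dt + datetime.timedelta(days=3), proceeds))

    def settled_cash(context, dt):
        # Drop entries that have settled; treat the rest as unavailable for buys.
        context.unsettled = [(d, p) for d, p in context.unsettled if d > dt]
        pending = sum(p for d, p in context.unsettled)
        return context.portfolio.cash - pending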

Hi Grant,

Reviewing: thanks to the PvR (Profit vs Risk) code I added, it became apparent that the third algo, appearing to be over 300%, was really 108% in profit per dollar spent, because of -22k in negative cash. It did actually profit 35k; it just took 32k to make that, not 10k.

This new version of the algorithm does not go into negative cash, and the result is pretty interesting.
It spends only 10k and ends with 44k, so instead of 108% it is now a genuine 344.6% return per dollar spent, no margin.

The attached code adds a routine to queue orders, handle sells first, wait for them to be filled, then buy after that.
It also contains a suggested start toward the Robinhood T+3 you mentioned.
In the track_orders function output you'll see unfilled orders. Its output can be toggled off. The first number is the minute of the day.

The genuine 344.6% return makes this the second best algo I've seen so far in PvR per day, kudos Grant Kiehne.
(and me, if I do say so myself)

[Backtest attached: ID 56b95f78dd069a11c714850a]

Fantastic algorithm, guys!!
From July 2015 to January 2016 it seems the algorithm plateaus and declines.
I was wondering if more diagnostics can be run for this period to see whether one instrument is causing the decline or the group is generally moving in a downward trend. Essentially, I'm trying to use these diagnostics to maybe add more constraints and hence improve the algorithm.
Many thanks all,
Best,
Andrew

Thanks garyha, glad you found it interesting. So if this is "the second best algo," what's the first?

@AC, try setting a track_orders start date in that, like the example in the code.
@GK, only revealed to those onboard with PvR, not the majority who look the other way.

Added an emulation of Robinhood's 3-day delay in the availability of the proceeds, following an example in the Q API help.

[Backtest attached: ID 56bb3f8d3ce1db11952648c0]

A tear sheet for the algo from the previous post.

[Notebook attached; preview unavailable]

Hi garyha!
Would you be able to provide some more information as to which algorithm performs better on the PvR metric?
Many thanks,
Best,
Andrew

Interesting that it holds up to the T+3 restriction.

One thought I had is that for such a strategy, it might be better to adjust the weights on a rolling basis, every day/minute, under the T+3 rule, rather than scheduling a function to run periodically. One would also have to consider the problem of not being able to buy fractional shares.
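
On the fractional-share point, a rough sketch of rounding a target weight down to whole shares before ordering might look like this (order_whole_shares is a hypothetical helper, not a Quantopian built-in):

    import math

    def order_whole_shares(context, data, stock, target_weight):
        # Convert a target portfolio weight into an integer share delta and order it.
        price = data.current(stock, 'price')
        if price is None or price <= 0:
            return
        target_value = context.portfolio.portfolio_value * target_weight
        target_shares = int(math.floor(target_value / price))   # round down
        delta = target_shares - context.portfolio.positions[stock].amount
        if delta != 0:
            order(stock, delta)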

Small tweak to make the T+3 restriction dynamic based on the environment.

[Backtest attached: ID 56bc74ed6aaabd12b9523d59]

A tiny correction to the last post: the call of the check_last_sale subroutine should only be done in backtesting, i.e., the corresponding piece of code in the trade subroutine should read

    if arena not in ['IB', 'ROBINHOOD']:
        # Backtesting
        check_last_sale(context)

instead of just

    check_last_sale(context)

Grant,

Interesting that it holds up to the T+3 restriction.

I hope I implemented the 3-day delay correctly ...

I added some log.info's to the allocate routine to see if the optimizer ever had trouble finding a solution, like this:

    if res.success:  
        log.info("AOK scipy.optimize res.success=True")  
        allocation = res.x  
        allocation[allocation<0] = 0  
        denom = np.sum(allocation)  
        if denom > 0:  
            allocation = allocation/denom  
    else:  
        log.info("WRN scipy.optimize res.success=False")  
        allocation = np.copy(context.x0)  

Running a backtest from 11-Jan-2016 to 15-Jan-2016, there seem to be some days where the optimizer is unable to get a solution and moves to the equal-weight default. From the run log, it looks to me like the optimizer found a solution on 2 out of 5 days:

2016-01-11pvr:359INFO2016-01-11 to 2016-01-15  10000  minute  
2016-01-11allocate:178INFOWRN scipy.optimize res.success=False  
2016-01-12allocate:178INFOWRN scipy.optimize res.success=False  
2016-01-12trade:112INFO    61 EDV 0.000 ==> 0.250  
2016-01-12trade:112INFO    61 TLT 0.000 ==> 0.250  
2016-01-12trade:112INFO    61 RSP 0.000 ==> 0.250  
2016-01-12trade:112INFO    61 XIV 0.000 ==> 0.250  
2016-01-12_orders:241INFO  61   Buy 34 RSP at 72.04   cash 10000 d96f007e8b4e456a85483d74d25354af  
2016-01-12_orders:241INFO  61   Buy 20 TLT at 122.81   cash 10000 ad6906801e6a4d09a4ef7a130d22b9ea  
2016-01-12_orders:241INFO  61   Buy 114 XIV at 21.53   cash 10000 d6d2460edd8a470796051c646c5a7188  
2016-01-12_orders:241INFO  61   Buy 21 EDV at 116.57   cash 10000 b566d65685f54fa28e086e1ee23e409e  
2016-01-12_orders:241INFO  62      Bot 34 RSP at 72.04   cash 2645 d96f007e8b4e456a85483d74d25354af  
2016-01-12_orders:241INFO  62         EDV 21 unfilled  b566d65685f54fa28e086e1ee23e409e  
2016-01-12_orders:241INFO  62      Bot 114 XIV at 21.48   cash 2645 d6d2460edd8a470796051c646c5a7188  
2016-01-12_orders:241INFO  62      Bot 20 TLT at 122.82   cash 2645 ad6906801e6a4d09a4ef7a130d22b9ea  
2016-01-12_orders:241INFO  63      Bot 21 EDV at 116.48   cash 196 b566d65685f54fa28e086e1ee23e409e  
2016-01-12pvr:453INFO 63 MaxLv 0.98 QRet 0.1 PvR 0.1 RskHi 9803  
2016-01-13allocate:171INFOAOK scipy.optimize res.success=True  
2016-01-14allocate:178INFOWRN scipy.optimize res.success=False  
2016-01-15allocate:171INFOAOK scipy.optimize res.success=True  
2016-01-15_pvr_:429INFOPvR -0.5568 %/day     2016-01-11 to 2016-01-15  10000  minute  
2016-01-15_pvr_:432INFO  Profited -273 on 9803 activated/transacted for PvR of -2.8%  
2016-01-15_pvr_:435INFO  QRet -2.73 PvR -2.78 CshLw 196 MxLv 0.98 RskHi 9803 Shrts 0  
End of logs.  

Comments or perspective? Or is this normal operation?

Thanks Richard,

I'm not sure what's going on with the optimizer. Note that there is an option to display what's going on under the hood (see http://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html and the 'disp' flag).

As I noted above, there may be a closed-form solution (at least for an equality constraint, which may be sufficient). I think this applies:

http://quant.stackexchange.com/questions/18160/beta-constrained-markowitz-minimum-variance-portfolio-closed-form-solution
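
For what it's worth, if the long-only bounds are dropped and both constraints are treated as equalities, the standard Lagrangian result does give a closed form. With $C$ the return covariance, $A$ the $2 \times N$ matrix stacking the rows $\mathbf{1}^T$ and $\tilde{r}^T$ (the normalized returns), and $b = (1, \epsilon)^T$:

$$ w^{*} = C^{-1} A^{T} \left( A\, C^{-1} A^{T} \right)^{-1} b $$

The bounds and the inequality are what keep the actual problem from reducing to this simple form.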

Another approach would be to use CVXOPT instead of scipy.optimize to see if it always converges.

Grant

Hi all,
I was wondering, given the algorithm, how it would be possible to do performance attribution on the different instruments.
Many thanks
Andrew

Andrew,

This might be relevant:

https://www.quantopian.com/posts/round-trip-trade-analysis

Grant

@ Grant. Thanks for sharing. I thought SPY, SH & TLT were already a great combo for minimum-variance optimization (mvo), until you presented this combination (RSP, EDV, TLT, XIV)!
@ All. Any insights/tips on how this combination works so well with mvo? Or any related articles on constructing a portfolio for mvo?

From a cursory look, the performance is attributable to the high return of XIV, and also the negative correlation of returns between RSP/XIV versus EDV/TLT (shown in the first half of the notebook). However, my hand-picked stocks based on those 2 criteria did not do so well with mvo, as shown in the notebook. If only we could reverse engineer why this combination does so well with mvo. :)

[Notebook attached; preview unavailable]

I am also attaching my modification to the strategy, which attempts to reduce the volatility by limiting the XIV weight during rough times.

[Backtest attached: ID 56c43cfca606b611b0642996]

My very strong advice would be to stop dikking around with 5 or 6 year back tests. Forget about looking at instruments with history so short they have been around for the blink of an eye. And forget about Quantopian if it only provides you with such data. Zipline would be a far better option. You need to obtain or manufacture bond data going back over a very long period of time. If you can't find it, make your own using interest rate data from the Fed. There are no ETFs going back beyond 1996, so forget them too and either use stock indices or mutual fund data. You won't be able to lay your hands on minute data, of course.

This is an excellent forum. There are many excellent drafters of code. But there don't seem to be many people who know what it means to live and trade through many different market cycles.

@ Anthony, I'm just a hack, while you might actually know what you are doing. I'm definitely not promoting this as a sensible investment. I agree that the backtest time frame is too short for this algo. It could be that the bull market gets amplified, and then as it flattens out toward the end, the algo just gets lucky. It is limited by the availability of data for XIV. So the limited time frame is a risk everyone should be aware of. Maybe there is a way to cook up a proxy for XIV?

@ Ted, looks like an improvement, but I'd worry about over-fitting (particularly in light of Anthony's comments). XIV is complete voodoo to me:

The investment seeks to replicate, net of expenses, the inverse of the daily performance of the S&P 500 VIX Short-Term Futures index. The index was designed to provide investors with exposure to one or more maturities of futures contracts on the VIX, which reflects implied volatility of the S&P 500 Index at various points along the volatility forward curve. The calculation of the VIX is based on prices of put and call options on the S&P 500 Index. The ETNs are linked to the daily inverse return of the index and do not represent an investment in the inverse of the VIX.

Kinda scary that the strategy hinges on something that sounds pretty far removed from anything that a mere mortal could understand.

Regarding minimum variance optimization, keep in mind that the algo has a constraint that should tilt the portfolio toward securities that have higher volatility-normalized returns over the trailing window (at least that's what I had in mind). I've attached a backtest of your version, without the constraint, to illustrate the importance of the constraint.

[Backtest attached: ID 56c59d0f3b0b080dec1d4c89]

Not at all
I enjoy your work and admire your coding. But I have traded more cock ups over the years than I care to recall!

Just trying to figure out what a "cock up" is? Never traded that market myself.

Also, Anthony is correct. To validate a system I use at least fifteen thousand bars, up to one hundred twenty-five thousand bars, in testing. I separate out the trending data and non-trending data, create synthetic random data, and patch it all together in various configurations. If your model holds up to this, then you have no curve fit.

Also, if your logic is sound you should be experiencing positive slippage on initial entries and when leveraging positions; otherwise I don't think a system will hold up to institutional trading size, at least. m

Hey Grant --

I was wondering if you could help walk me through what this portion of the code is doing?:

    bnds = []  
    limits = [0,1]  
    for stock in context.stocks:  
        bnds.append(limits)  
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  
    res= scipy.optimize.minimize(variance, context.x0, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)  
    if res.success:  
        allocation = res.x  
        allocation[allocation<0] = 0  
        denom = np.sum(allocation)  
        if denom > 0 and np.dot(allocation,ret_norm) >= 0:  
            allocation = allocation/denom  
    else:  
        allocation = np.copy(context.x0)  
    context.n += 1  
    context.s += allocation  

It looks like the main goal of the algo is to get to the scheduled function call trade:

def trade(context, data):  
    print ('Current context.n: ',context.n)  
    if context.n > 0:  
        allocation = context.s/context.n  
        print('Required Allocation: ',allocation)  
    else:  
        return  
    context.n = 0  
    context.s = np.zeros_like(context.stocks)  
    context.x0 = allocation  
    if get_open_orders():  
        return  
    for i,stock in enumerate(context.stocks):  
        order_target_percent(stock,allocation[i])  

order_target_percent(stock, allocation[i]) is the trading execution portion. I changed some of the stocks in context.stocks and reran the algo. I also printed out context.n and allocation whenever trade() was called, and got the log output below. Can you help me understand what context.n is? What is its relationship to allocation? Even though it's context.s / context.n, it doesn't seem to change even when n increases or decreases:

2015-12-01 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-08 -- PRINT('Current context.n: ', 5)
2015-12-08 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-15 -- PRINT('Current context.n: ', 5)
2015-12-15 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-22 -- PRINT('Current context.n: ', 5)
2015-12-22 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-29 -- PRINT('Current context.n: ', 4)

"Could you help walk me through what this portion of the code is doing?"

    # set up the optimization  
    bnds = []  
    limits = [0,1]  
    for stock in context.stocks:  
        bnds.append(limits)  
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  
    # run the optimizer  
    res= scipy.optimize.minimize(variance, context.x0, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds) 

    # determine the allocation  
    if res.success:  
        allocation = res.x  
        allocation[allocation<0] = 0  # clip off any negative terms  
        denom = np.sum(allocation)  
        if denom > 0 and np.dot(allocation,ret_norm) >= 0:  
            allocation = allocation/denom  
    else:  
        allocation = np.copy(context.x0) 

    # keep track of the number of times run & a running sum of the allocations (so the average can be computed)  
    context.n += 1  
    context.s += allocation  

As noted above, the idea is to accumulate allocations over some period of time, and then average them. However, you'll get the same array if the allocation is always the same, regardless of n.

Here's a first crack at a long-short version. Maybe someone has insights into improving it. Note that I switched to SPY, thinking that it would be subject to less slippage than RSP at higher capital.

[Backtest attached: ID 56dc721421e94b0deee57860]

Hi guys, is there a version of this min variance strategy that is applicable to a long-short strategy, or should the fact that the weights are all positive be considered a constraint?

Francesco,

Here's basically the version I posted above (Mar. 6, 2016), but brought up to Q2 standards. It supports both long and short positions.

Grant

[Backtest attached: ID 5721c76dfc7f3a10faa43c80]

Thanks a lot Grant for the useful answer!
As an alternative, do you think it could also make sense to calculate the weights of the long and short positions independently, by minimizing the variance of returns of the long and short sides separately?
Cheers
Francesco

Dear Grant,
just another question: is the calculation of the returns still correct for a portfolio that also includes both long and short positions, as

ret = prices.pct_change()[1:].as_matrix(context.stocks)  

Shouldn't this be OK only for long positions?
Thanks again
Francesco

I think that the minimization of the variance works using the returns as written. The constraints are what determine whether it is long only, or if long and short are allowed. The first one is normalization of the weights, allowing both positive and negative. The second one, I think, is a kind of long-short mean reversion constraint, but I gotta dwell on that one a bit.

  cons = ({'type': 'eq', 'fun': lambda x:  np.sum(np.absolute(x))-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  

What do you mean "breaks down" - times out, runs out of memory, other error, poor financial performance?

By the way, you can drop the call to handle_data if it does nothing.

If you put N=20, for example, in the attached backtest code at line 30,

then at line 223

    if res.success and denom > 0:  
        allocation = allocation/denom  
    else:  
        print 'failed min'  
        allocation = np.copy(context.x0)  

the minimization function fails for all steps

[Backtest attached: ID 5724a192ec666c10e1b54cb8]

Got it to work for N = 50. Once the backtest completes, I'll post it.

I changed to:

N_STOCKS = 50  
context.eps = 1  

Can't say that I understand it at this point, but there is an interaction between the two settings. I may have the time to take another look later today.

[Backtest attached: ID 5724b52bc8909912d4daa1e3]

OK, thanks.
For N=100, I noticed that eps=10 is required to make it work; backtest attached.

[Backtest attached: ID 5724d34e023a0a11135089ad]

I'd suggest "rolling your own" versus trying to piggy-back off of my code (it is just a hack job). One thought is to do the long-short filtering/ranking thing, showing that you can get some decent performance, and then maybe trying some sort of minimum variance optimization, to see if the Sharpe ratio could be improved. Note that CVXOPT is also available as an optimizer.

Hello Grant, et al:
An algorithm that is simple, trades weekly, and achieves Sharpe > 2 deserves some consideration.
In playing with Grant's algo I noticed some odd behavior and spent some effort investigating.

My backtest file contains additional comments.

Observations
1. The algorithm is novel in how returns are calculated from minute data over a five day span. This overcomes a problem with the use of daily data. For robust covariance you want to use a number of periods that is at least 10x the number of equities. Four equities implies 40 days of data. Such a long window is a problem when trying to characterize an erratic equity like VIX or a commodity.
2. Total return for selected equities (RSP, EDV, TLT, XIV) and eps (0.05) is superb over the period 1 Dec 2010 to 4 Feb 2016
3. Total return is very sensitive to eps
288% @ eps = 0.049
362% @ eps = 0.050
219% @ eps = 0.051
4. Total return changes drastically with substitution of similar equities
(SPY for RSP) 235% vs 362%
(TLO for TLT) 197% vs 362%
5. The total return is very sensitive to the number of days of look back (separate backtest showed more than 3-to-1 variation in Sharpe ratio for lookback periods of 3 to 20 days). Similarly the result was very sensitive to start/end dates.
6. Something is wrong for such sensitivity in the result.

Compliance with the inequality constraint
For Grant's version of the algo and the case of eps=0.05, the inequality constraint is only met on 49 of 1303 days.
This means that the algo is coasting on the prior solution most of the time and is not operating as intended.
As you might expect in such a case, the result is very sensitive to parameter or equity changes.
If the "success" logic is corrected, the actual return is 101%, which is better than SPY, but with much higher volatility and drawdown.

Why is this happening?
The ret_norm values are typically smaller than 0.05 and are often negative.
This means that on most days no set of positive weights summing to 1.0 can be found that will satisfy the constraint: dot(weights,ret_norm) > eps

But the algo checks for res.success
Yes, it does.
I don't understand the scipy SLSQP implementation well enough to say why res.success is True when res.status is not 0.
Grant appears to be calling/invoking it (what is the Python term?) per the documentation.
The following res.status errors are commonly seen with large eps values:
4 : Inequality constraints incompatible
8 : Positive directional derivative for linesearch
9 : Iteration limit exceeded (this always appears on the first 4 days)

So should I use a smaller eps value?
Generally yes, but not a fixed value.
In this particular algo the set of equities is small and there is no other mechanism to ensure that ret_norm values are above some threshold, or are even positive.
Given this you face a poor trade-off: reducing eps allows the algorithm to function as intended, but degrades performance, as eps is the return threshold.
Some dynamic method for setting eps is needed for this algorithm.

How to dynamically select eps
I welcome those of you with finance and math backgrounds to provide a more elegant solution. Here's a simple approach.
Assume that Equity 1 has the highest of the four ret_norm values (ret_norm_max). If the weights are set so that all are zero except for Equity 1, then the portfolio's ret_norm = ret_norm_max. Since the number of equities is small, it is possible that Equity 1 is the only one with ret_norm > 0. This condition frustrates optimization. An eps value smaller than ret_norm_max will improve the likelihood of the optimization succeeding. As noted above, a too-small value will rob performance.
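
A sketch of that, as a drop-in for the fixed threshold (eps_fraction is a new knob, e.g. 0.90):

    # Scale eps to the best single-asset normalized return instead of fixing it.
    eps = context.eps_fraction * ret_norm.max()
    cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x: np.dot(x, ret_norm) - eps})
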
Here are some results for eps = 85%, 90%, 95%, and 100% of ret_norm_max
For each case the SLSQP optimizer succeeds on 1299 of 1303 days. It consistently fails on the first 4 days.
return = 189% @ 100%
return = 239% @ 95%
return = 243% @ 90%
return = 232% @ 85%

Comment
While the result is not as spectacular as it first appeared, the algorithm is now functioning as I expected and is more robust to parameter and equity changes. Now more investigation can be done (eps selection, equity selection, look-back window, ...).

[Backtest attached: ID 57250219ec666c10e1b55221]

Thanks Peter,

Glad you found it interesting.

Cheers,

Grant

Here's an update using CVXPY. I think it is working basically in the same fashion as the original post above. It has been re-factored a bit, and more could be done. The main thing here is that CVXPY can be used. --Grant

Note:

    set_slippage(slippage.FixedSlippage(spread=0.00))  
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))  
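
For reference, the core of the optimization in CVXPY terms looks roughly like this (a standalone sketch using the current CVXPY interface, which differs somewhat from the version Quantopian shipped; min_variance_weights is a hypothetical name):

    import cvxpy as cp
    import numpy as np

    def min_variance_weights(ret, ret_norm, eps):
        # ret: T x N matrix of per-period returns; ret_norm: per-asset mean/std.
        n = ret.shape[1]
        sigma = np.cov(ret, rowvar=False)              # N x N sample covariance
        w = cp.Variable(n)
        objective = cp.Minimize(cp.quad_form(w, sigma))
        constraints = [cp.sum(w) == 1.0,               # fully invested
                       w >= 0,                         # long only
                       ret_norm @ w >= eps]            # normalized-return threshold
        cp.Problem(objective, constraints).solve()
        return None if w.value is None else np.asarray(w.value).ravel()
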
[Backtest attached: ID 58b1f970def15a62a0c33f90]

Very cool, Grant. What limitations, if any, do you see with this algo?

Hi Evan,

Well, first off, I just hacked the thing together, so caveat investor. This is probably the biggest "limitation"--that it doesn't have a spelled out underlying economic principle (i.e. what makes the thing tick). So, it could be "over-fit" and/or suffer from "data mining" (insert your favorite quant sin). The problem is compounded by the limited time over which one can backtest. If it could be tested going back many decades, then one could have more confidence.

The drawdown and volatility are very high, so another limitation, assuming that the long-term returns would persist, is that in the absence of a model explaining the returns, the likelihood of "abandoning ship" after losing money is high. For example, say one puts money into it and then it immediately drops 20%. It would be easy to justify pulling out, to cut losses.

It would be nice to see beta much lower (e.g. in the range -0.3 to 0.3, as Q requires for the contest), without shorting. With such a high beta, maybe there is a risk that if the market tanks, the strategy would die, too?

Leverage 1.29 intraday. Try this: before ordering, determine whether the order will be an increase or decrease in allocation. Since there is no shorting in this case, those are a buy and a sell respectively. Hold buys until selling is done. That will resolve the 57k negative cash and typically result in a higher return, from what I've seen. Here, returns show 365%; however, margin was ignored, so it made 366k on 157k for 232%. Benchmark 117%. It may do better than 365%, for real, with no negative cash, given those changes. There's an example of it above.

2017-02-24 13:00 _pvr:155 INFO PvR 0.1483 %/day   cagr 0.3   Portfolio value 466017   PnL 366017  
2017-02-24 13:00 _pvr:156 INFO   Profited 366017 on 157790 activated/transacted for PvR of 232.0%  
2017-02-24 13:00 _pvr:157 INFO   QRet 366.02 PvR 231.96 CshLw -57790 MxLv 1.29 RskHi 157790 MxShrt 0  
2017-02-24 13:00 pvr:245 INFO 2010-12-08 to 2017-02-24  $100000  2017-02-26 08:23 US/Eastern  
Runtime 0 hr 12.9 min  

A quickie example to illustrate that there may be ways to "tame" the algo to be kinda market neutral without shorting.

[Backtest attached: ID 58b2df59da44bf5e3b30e248]

Chart says it used 100k to make 152k, 152%.
In reality it used 144k to make 152k, that's 105%, under the benchmark 117%.

That's the good news. With default slippage/commissions:
Uses 196k to make 146k, just 75%. Better to invest in SPY.

Regarding commissions, my thought is that they would be $0 under Robinhood, right?

I guess you are concerned with leverage > 1 temporarily. Not sure how to handle that one. My assumption is that Robinhood can handle re-balancing, but if it means that cash has to be kept in reserve, then, yes, it will hurt the return.

As for slippage, for small amounts of capital it can be neglected, but maybe that's incorrect?

With zero commission and default slippage:
Uses 197k to make 150k, that's 76%.
Q Returns show 150%.

So slippage played the major role.

By the way I wish we had set_nonmargin() to be able to model what would happen on Robinhood or any nonmargin account.

@ Blue -

Even better than set_nonmargin() would be something like set_nonmargin(broker=`Robinhood`) with any idiosyncrasies baked in, so that backtesting and Q paper trading would be 1:1 with real-money trading. Robinhood, though, seems to have dropped off the radar screen at Quantopian headquarters. It's all about the Q fund (and futures in private alpha). Maybe a user has written an add-on like your proposed set_nonmargin()?

Hello guys,
Great work! Quick question: is this algo safe for smaller accounts (less than $25,000)? Will it trigger the "Pattern Day Trader" rule (by executing four round-trip day trades within 5 days)?

Thanks!

A weekly re-balancing should be OK; however, I've been advised that one has to watch out for going into negative cash. See https://www.quantopian.com/posts/quantopian-and-robinhood-lessons-learned-and-best-practices for some good info.

Grant,

I'm very interested in your implementation of the optimizer to find allocations. Is there a paper or article you can point me to for background? I'd like to understand the strategy better.

TIA.

Hi Stephen -

I am not aware of a paper or article describing exactly what I've done (a hack, really), but you might try Google Scholar. The basic idea is to minimize the variance in returns with constraints. I would be surprised if nobody has ever published something on the topic.
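
For what it's worth, in symbols the problem handed to the optimizer is (with $\Sigma$ the covariance of the trailing returns, $\tilde{r}$ the mean returns normalized by their standard deviations, and $\epsilon$ the threshold context.eps):

$$ \min_{w}\; w^{T}\Sigma w \quad \text{subject to} \quad \sum_i w_i = 1, \qquad w^{T}\tilde{r} \ge \epsilon, \qquad 0 \le w_i \le 1 $$

Searching for minimum-variance portfolios with a target-return or reward-to-risk constraint should turn up related literature.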

If you find anything, please share the references.

There is some general discussion on the use of optimizers in Robert Carver's book, Systematic Trading, along with using trailing volatility for weighting.

Hi,

Really interesting stuff here. Question: according to the scipy docs on the minimize function being employed here:

"Note that COBYLA only supports inequality constraints." https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.optimize.minimize.html

But in the code, inequality is being used, and the method specified is SLSQP. Does this mean that this constraint is not actually being used? Or that the COBYLA method is in fact being used?

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  
    res = scipy.optimize.minimize(variance, context.x1, args=ret, jac=jac_variance, method='SLSQP', constraints=cons, bounds=bnds)