minimum variance w/ constraint

For your consideration. I got the list of securities from one of the postings on https://www.quantopian.com/posts/for-robinhood-trading .

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = np.zeros_like(context.stocks)
    
    context.eps = 0.01
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    
    set_long_only()
    
def handle_data(context,data):
    
    record(leverage = context.account.leverage)
    
    # allocate(context,data)

def allocate(context, data):
    
    prices = history(5*390,'1m', 'price')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
           
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res = scipy.optimize.minimize(variance, context.x0, args=ret, jac=jac_variance,
                                  method='SLSQP', constraints=cons, bounds=bnds)
    
    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0 and np.dot(allocation,ret_norm) >= 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
        
    context.n += 1
    context.s += allocation
              
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)
We have migrated this algorithm to work with a new version of the Quantopian API. The code is different from the original version, but the investment rationale of the algorithm has not changed. We've put everything you need to know here on one page.

Here's a tweak: instead of context.eps = 0.01, I set context.eps = 0.05. Seems decent. Comments/criticisms/improvements welcome. --Grant

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = np.zeros_like(context.stocks)
    
    context.eps = 0.05
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    
    set_long_only()
    
def handle_data(context,data):
    
    record(leverage = context.account.leverage)
    
    # allocate(context,data)

def allocate(context, data):
    
    prices = history(5*390,'1m', 'price')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
           
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res = scipy.optimize.minimize(variance, context.x0, args=ret, jac=jac_variance,
                                  method='SLSQP', constraints=cons, bounds=bnds)
    
    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0 and np.dot(allocation,ret_norm) >= 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
        
    context.n += 1
    context.s += allocation
              
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

Here's another tweak. I changed if denom > 0 and np.dot(allocation,ret_norm) >= 0: to if denom > 0:. The former was probably preventing the optimized allocation from being applied in cases where it should have been.

[EDIT] I also changed to this:

    x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)  
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)  

The seed for the optimization is an equal weight portfolio.
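The point of the change is easy to see in isolation: the original zero-vector seed starts out violating the fully-invested constraint, while an equal-weight seed satisfies it from the first iteration. A minimal sketch (n stands in for len(context.stocks)):

```python
import numpy as np

n = 4                    # number of assets, as in context.stocks
x0 = np.zeros(n)         # original seed: sum(x0) == 0, violates the
                         #   fully-invested constraint sum(x) == 1
x1 = np.ones(n) / n      # equal-weight seed: feasible from the start

print(np.sum(x0))   # 0.0
print(np.sum(x1))   # 1.0
```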

--Grant

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = np.zeros_like(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.05
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    
    set_long_only()
    
def handle_data(context,data):
    
    record(leverage = context.account.leverage)

def allocate(context, data):
    
    prices = history(5*390,'1m', 'price')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
           
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res = scipy.optimize.minimize(variance, context.x1, args=ret, jac=jac_variance,
                                  method='SLSQP', constraints=cons, bounds=bnds)
    
    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
        
    context.n += 1
    context.s += allocation
              
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
        
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

Awesome algorithm Grant.
Could you please give a brief explanation regarding the constraints of the optimization.
Many thanks,
Andrew

Incredible returns. Now I'm tempted to try it, after that original Robinhood post.

Edit:

Grant, Can you explain what is the purpose of this line for the constraint?

{'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps}

I'm not sure I understand what is ret_norm or eps.

EDIT 2:

Never mind, I figured it out. You are requiring the portfolio's normalized (risk-adjusted) return to meet or exceed a threshold.

Andrew,

This code computes the mean return normalized by the standard deviation:

    ret_mean = prices.pct_change().mean()  
    ret_std = prices.pct_change().std()  
    ret_norm = ret_mean/ret_std  
    ret_norm = ret_norm.as_matrix(context.stocks)  

Then, the normalized mean return is included as a constraint:

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},  
    {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  

The variable x is the portfolio asset allocation/weight vector. The first constraint is a leverage constraint--the weights need to sum to one. The second constraint is saying that the sum of the normalized returns weighted by the portfolio allocations needs to be equal to or greater than a threshold, context.eps. It should tilt the portfolio toward assets with positive risk-adjusted returns.

By the way, I suspect that there may be a closed-form, analytic solution to the constrained optimization problem solved here iteratively. An attaboy to the first person to post it.
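The whole setup can be run outside Quantopian. A self-contained sketch on synthetic return data (the numbers are made up; only the constraint structure matches the algorithm above):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic minute returns for 4 assets; illustrative only.
rng = np.random.default_rng(0)
ret = rng.normal(0.0005, 0.01, size=(2000, 4))
ret[:, 0] += 0.001                 # give asset 0 a clearly higher mean

ret_norm = ret.mean(axis=0) / ret.std(axis=0)   # risk-adjusted mean return
eps = 0.05                                      # portfolio-level threshold

def variance(x, r):
    return x @ np.cov(r.T) @ x

def jac_variance(x, r):
    return 2 * np.cov(r.T) @ x

cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},     # fully invested
        {'type': 'ineq', 'fun': lambda x: x @ ret_norm - eps})  # return floor

bnds = [(0, 1)] * 4                # long-only
x1 = np.ones(4) / 4                # equal-weight seed

res = minimize(variance, x1, args=(ret,), jac=jac_variance,
               method='SLSQP', bounds=bnds, constraints=cons)
w = res.x                          # optimized weights, sum to 1
```

The resulting weights sum to one, are non-negative, and satisfy the risk-adjusted return floor whenever the optimizer reports success.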

Thanks, Grant!
Awesome returns!

PvR for the most recent algo, accounting for the negative cash, comes out to 108.6%, or 0.0833 %/day:

2016-02-04_pvr_:132INFO PvR 0.0833 %/day     2010-12-01 to 2016-02-04  10000  minute  
2016-02-04_pvr_:135INFO  Profited 35134 on 32362 activated/transacted for PvR of 108.6%  
2016-02-04_pvr_:138INFO  QRet 351.34 PvR 108.57 CshLw -22362 MxLv 1.50 RskHi 32362 Shrts 0  

This algo ranks 6th in PvR/day among 61 tested this week, which is very good.
It would be useful, though, to see a version of it without margin; that might even bring higher overall profitability.
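For reference, PvR is just profit divided by the maximum capital actually put at risk, rather than the starting capital. A sketch using the figures from the log lines above:

```python
# Figures taken from the pvr log lines above.
profit  = 35134.0   # dollars profited
risk_hi = 32362.0   # highest amount activated/transacted (capital at risk)
start   = 10000.0   # initial capital

pvr  = 100.0 * profit / risk_hi   # PvR: return on capital actually risked
qret = 100.0 * profit / start     # QRet: the standard backtest return

print(round(pvr, 2))    # 108.57
print(round(qret, 2))   # 351.34
```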

How do you suggest eliminating margin, other than holding a small positive cash balance (which will just be dead capital and cut into the return)?

Maybe whatever Tim V. did with his Robinhood algo to eliminate negative cash would help; I haven't tried to understand it yet.

Ideally, beyond that, you'd want an ordering wrapper around all of the order methods that monitors fills (including partial and unfilled orders) and adjusts weights accordingly; not easy. Orders would first be queued in each frame, then analyzed with respect to current cash, adjusted, and placed.

Thanks. I see the problem now with this code:

def handle_data(context,data):  
    context.leverage.append(context.account.leverage)  
    record(max_leverage = max(context.leverage))  

It is a mystery how the leverage could spike during the day, but settle to 1 by the close.

Here's a backtest indicating that the leverage pops up significantly above 1. However, as shown above, at the end of each trading day it is always near 1. Any idea what's going on? Maybe something fishy with the way order_target_percent is playing out? I guess if all the orders aren't processed within one minute, leverage can be temporarily out of whack?
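One plausible mechanism, sketched with made-up numbers: when the rebalance orders fill across multiple minutes, the buys can fill before the offsetting sells, briefly pushing gross exposure above portfolio value:

```python
# Leverage = gross exposure / net liquidation value (longs + cash here,
# since the algo is long-only). Numbers are purely illustrative.

# Before rebalance: fully invested.
longs, cash = 100_000.0, 0.0
lev_before = longs / (longs + cash)    # 1.0

# Mid-rebalance: a 30k buy filled this minute, but the offsetting 30k
# sell is still open, so cash is negative (margin) until it completes.
longs, cash = 130_000.0, -30_000.0
lev_mid = longs / (longs + cash)       # 1.3, a temporary spike

# After the sell fills: back to normal.
longs, cash = 100_000.0, 0.0
lev_after = longs / (longs + cash)     # 1.0
```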

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.05
    
    context.leverage = []
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    
    set_long_only()
    
def handle_data(context,data):
    
    context.leverage.append(context.account.leverage)
    
    record(max_leverage = max(context.leverage))

def allocate(context, data):
    
    prices = history(5*390,'1m', 'price')
    
    # print len(prices)
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
           
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res = scipy.optimize.minimize(variance, context.x1, args=ret, jac=jac_variance,
                                  method='SLSQP', constraints=cons, bounds=bnds,
                                  options={'disp': False, 'maxiter': 100, 'iprint': 1, 'ftol': 1e-8})
    
    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
    
    # if not res.success:
    #     print 'res.success = False'
    
    context.n += 1
    context.s += allocation
              
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
        
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

You can add your own context variables in track_orders() here to experiment toward solutions, etc.
The number at the beginning of each line is the minute of the day.
Note that you'll (rarely) see partial fills; they have a slash, like
Bot 50/63 EDV at 80.50 or Sold -250/-285 XIV at 15.32
so if you scroll and copy the output, you can search for '/' to find those.
You can also search for 'cash -' to find negative cash.
The order id logging option is turned on.
There is a line like this each time there's a new leverage high:
2010-12-14 pvr:346 INFO 99 MaxLv 1.01 QRet -0.7 PvR -0.7 RskHi 10068

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.05
    
    context.leverage = []
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    
    set_long_only()
    
def handle_data(context,data):
    track_orders(context,data)
    pvr(context, data)
    
    #context.leverage.append(context.account.leverage)
    #record(max_leverage = max(context.leverage))

def allocate(context, data):
    
    prices = history(5*390,'1m', 'price')
    
    # print len(prices)
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
           
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res = scipy.optimize.minimize(variance, context.x1, args=ret, jac=jac_variance,
                                  method='SLSQP', constraints=cons, bounds=bnds,
                                  options={'disp': False, 'maxiter': 100, 'iprint': 1, 'ftol': 1e-8})
    
    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
    
    # if not res.success:
    #     print 'res.success = False'
    
    context.n += 1
    context.s += allocation
              
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
        
    track_orders(context,data)
    '''
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
    '''
         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

def track_orders(context, data):  # Log orders created, filled, unfilled or canceled.
    '''      https://www.quantopian.com/posts/track-orders
    Status:
       0 - Unfilled
       1 - Filled (can be partial)
       2 - Canceled
    '''
    c = context
    log_cash = 1    # Show cash values in logging window or not.
    log_ids  = 1    # Include order id's in logging window or not.

    ''' Start and stop date options ...
    To not overwhelm the logging window, start/stop dates can be entered
      either below or in initialize() if you move to there for better efficiency.
    Example:
        c.dates  = {
            'active': 0,
            'start' : ['2007-05-07', '2010-04-26'],
            'stop'  : ['2008-02-13', '2010-11-15']
        }
    '''
    if 'orders' not in c:
        c.orders = {}               # Move these to initialize() for better efficiency.
        c.dates  = {
            'active': 0,
            'start' : [],           # Start dates, option
            'stop'  : []            # Stop  dates, option
        }
    from pytz import timezone       # Python only does once, makes this portable.
                                    #   Move to top of algo for better efficiency.

    # If the dates 'start' or 'stop' lists have something in them, sets them.
    if c.dates['start'] or c.dates['stop']:
        date = str(get_datetime().date())
        if   date in c.dates['start']:    # See if there's a match to start
            c.dates['active'] = 1
        elif date in c.dates['stop']:     #   ... or to stop
            c.dates['active'] = 0
    else:
        c.dates['active'] = 1  # Set to active b/c no conditions

    if c.dates['active'] == 0:
        return                 # Skip if off

    def _minute():   # To preface each line with the minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _orders(to_log):    # So all logging comes from the same line number,
        log.info(to_log)    #   for vertical alignment in the logging window.

    to_delete = []
    for id in c.orders:
        o    = get_order(id)
        sec  = o.sid ; sym = sec.symbol
        oid  = o.id if log_ids else ''
        cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
        if o.filled:        # Filled at least some
            trade  = 'Bot' if o.amount > 0 else 'Sold'
            filled = o.filled 
            if o.filled != o.amount:
                filled = '{}/{}'.format(o.filled, o.amount)
            _orders(' {}      {} {} {} at {}   {} {}'.format(_minute(),
                trade, filled, sym, '%.2f' % data[sec].price, cash, oid))
            to_delete.append(o.id)
        else:
            canceled = 'canceled' if o.status == 2 else ''
            _orders(' {}         {} {} unfilled {} {}'.format(_minute(),
                    o.sid.symbol, o.amount, canceled, oid))
            if canceled: to_delete.append(o.id)

    for oo_list in get_open_orders().values(): # Open orders list
        for o in oo_list:
            sec  = o.sid ; sym = sec.symbol
            oid  = o.id if log_ids else ''
            cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
            if o.id in to_delete:
                continue
            if o.status == 2:                  # Canceled
                _orders(' {}    Canceled {} {} order at {}   {} {}'.format(_minute(),
                        o.amount, sym, '%.2f' % data[sec].price, cash, oid))
                to_delete.append(o.id)
            elif o.id not in c.orders:         # New
                c.orders[o.id] = 1
                price = '%.2f' % data[sec].price
                trade = 'Buy' if o.amount > 0 else 'Sell'
                if o.limit:                    # Limit order
                    _orders(' {}   {} {} {} now {} limit {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, price, o.limit, cash, oid))
                elif o.stop:                   # Stop order
                    _orders(' {}   {} {} {} now {} stop {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, price, o.stop, cash, oid))
                else:                          # Market order
                    _orders(' {}   {} {} {} at {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, price, cash, oid))
    for d in to_delete:
        del c.orders[d]

def pvr(context, data):
    ''' Custom chart and/or log of profit_vs_risk returns and related information
    '''
    # # # # # # # # # #  Options  # # # # # # # # # #
    record_max_lvrg = 1         # Maximum leverage encountered
    record_leverage = 0         # Leverage (context.account.leverage)
    record_q_return = 1         # Quantopian returns (percentage)
    record_pvr      = 1         # Profit vs Risk returns (percentage)
    record_pnl      = 0         # Profit-n-Loss
    record_shorting = 0         # Total value of any shorts
    record_overshrt = 0         # Shorts beyond longs+cash
    record_risk     = 0         # Risked, max cash spent or shorts beyond longs+cash
    record_risk_hi  = 1         # Highest risk overall
    record_cash     = 0         # Cash available
    record_cash_low = 0         # Any new lowest cash level
    logging         = 1         # Also to logging window conditionally (1) or not (0)
    log_method      = 'risk_hi' # 'daily' or 'risk_hi'

    from pytz import timezone   # Python will only do once, makes this portable.
                                #   Move to top of algo for better efficiency.
    c = context  # Brevity is the soul of wit -- Shakespeare [for efficiency, readability]
    if 'pvr' not in c:
        date_strt = get_environment('start').date()
        date_end  = get_environment('end').date()
        cash_low  = c.portfolio.starting_cash
        mode      = get_environment('data_frequency')
        c.pvr = {
            'max_lvrg': 0,
            'risk_hi' : 0,
            'days'    : 0.0,
            'date_prv': '',
            'cash_low': cash_low,
            'date_end': date_end,
            'mode'    : mode,
            'run_str' : '{} to {}  {}  {}'.format(date_strt,date_end,int(cash_low),mode)
        }
        log.info(c.pvr['run_str'])
    pvr_rtrn     = 0            # Profit vs Risk returns based on maximum spent
    profit_loss  = 0            # Profit-n-loss
    shorts       = 0            # Shorts value
    longs        = 0            # Longs  value
    overshorts   = 0            # Shorts value beyond longs plus cash
    new_risk_hi  = 0
    new_cash_low = 0                           # To trigger logging in cash_low case
    lvrg         = c.account.leverage          # Standard leverage, in-house
    date         = get_datetime().date()       # To trigger logging in daily case
    cash         = c.portfolio.cash
    start        = c.portfolio.starting_cash
    cash_dip     = int(max(0, start - cash))
    q_rtrn       = 100 * (c.portfolio.portfolio_value - start) / start

    if int(cash) < c.pvr['cash_low']:                # New cash low
        new_cash_low = 1
        c.pvr['cash_low']   = int(cash)
        if record_cash_low:
            record(CashLow = int(c.pvr['cash_low'])) # Lowest cash level hit

    if record_max_lvrg:
        if c.account.leverage > c.pvr['max_lvrg']:
            c.pvr['max_lvrg'] = c.account.leverage
            record(MaxLv = c.pvr['max_lvrg'])        # Maximum leverage

    if record_pnl:
        profit_loss = c.portfolio.pnl
        record(PnL = profit_loss)                    # "Profit and Loss" in dollars

    for p in c.portfolio.positions:
        shrs = c.portfolio.positions[p].amount
        if shrs < 0:
            shorts += int(abs(shrs * data[p].price))
        if shrs > 0:
            longs  += int(shrs * data[p].price)

    if shorts > longs + cash: overshorts = shorts             # Shorts when too high
    if record_shorting: record(Shorts  = shorts)              # Shorts value as a positve
    if record_overshrt: record(OvrShrt = overshorts)          # Shorts value as a positve
    if record_cash:     record(Cash = int(c.portfolio.cash))  # Cash
    if record_leverage: record(Lvrg = c.account.leverage)     # Leverage

    risk = int(max(cash_dip, shorts))
    if record_risk: record(Risk = risk)       # Amount in play, maximum of shorts or cash used

    if risk > c.pvr['risk_hi']:
        c.pvr['risk_hi'] = risk
        new_risk_hi = 1

        if record_risk_hi:
            record(RiskHi = c.pvr['risk_hi']) # Highest risk overall

    if record_pvr:      # Profit_vs_Risk returns based on max amount actually spent (risk high)
        if c.pvr['risk_hi'] != 0:     # Avoid zero-divide
            pvr_rtrn = 100 * (c.portfolio.portfolio_value - start) / c.pvr['risk_hi']
            record(PvR = pvr_rtrn)            # Profit_vs_Risk returns

    if record_q_return:
        record(QRet = q_rtrn)                 # Quantopian returns to compare to pvr returns curve

    def _minute():   # To preface each line with minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _pvr_():
            log.info('PvR {} %/day     {}'.format(
                '%.4f' % (pvr_rtrn / c.pvr['days']), c.pvr['run_str']))
            log.info('  Profited {} on {} activated/transacted for PvR of {}%'.format(
                '%.0f' % (c.portfolio.portfolio_value - start), '%.0f' % c.pvr['risk_hi'],
                '%.1f' % pvr_rtrn))
            log.info('  QRet {} PvR {} CshLw {} MxLv {} RskHi {} Shrts {}'.format(
                '%.2f' % q_rtrn, '%.2f' % pvr_rtrn, '%.0f' % c.pvr['cash_low'],
                '%.2f' % c.pvr['max_lvrg'], '%.0f' % c.pvr['risk_hi'], '%.0f' % shorts))

    if logging:
        if log_method == 'risk_hi' and new_risk_hi \
          or log_method == 'daily' and c.pvr['date_prv'] != date \
          or new_cash_low:
            qret    = ' QRet '   + '%.1f' % q_rtrn
            lv      = ' Lv '     + '%.1f' % lvrg              if record_leverage else ''
            pvr     = ' PvR '    + '%.1f' % pvr_rtrn          if record_pvr      else ''
            pnl     = ' PnL '    + '%.0f' % profit_loss       if record_pnl      else ''
            csh     = ' Cash '   + '%.0f' % cash              if record_cash     else ''
            shrt    = ' Shrt '   + '%.0f' % shorts            if record_shorting else ''
            ovrshrt = ' Shrt '   + '%.0f' % overshorts        if record_overshrt else ''
            risk    = ' Risk '   + '%.0f' % risk              if record_risk     else ''
            mxlv    = ' MaxLv '  + '%.2f' % c.pvr['max_lvrg'] if record_max_lvrg else ''
            csh_lw  = ' CshLw '  + '%.0f' % c.pvr['cash_low'] if record_cash_low else ''
            rsk_hi  = ' RskHi '  + '%.0f' % c.pvr['risk_hi']  if record_risk_hi  else ''
            log.info('{}{}{}{}{}{}{}{}{}{}{}{}'.format(_minute(),
               lv, mxlv, qret, pvr, pnl, csh, csh_lw, shrt, ovrshrt, risk, rsk_hi))
    if c.pvr['date_prv'] != date: c.pvr['days'] += 1.0
    if c.pvr['days'] % 130 == 0 and _minute() == '100': _pvr_()
    c.pvr['date_prv'] = date
    if c.pvr['date_end'] == date:
        # Summary on last minute of last day.
        # If using schedule_function(), backtest last day/time may need to match for this to execute.
        if 'pvr_summary_done' not in c: c.pvr_summary_done = 0
        log_summary = 0
        if c.pvr['mode'] == 'daily' and get_datetime().date() == c.pvr['date_end']:
            log_summary = 1
        elif c.pvr['mode'] == 'minute' and get_datetime() == get_environment('end'):
            log_summary = 1
        if log_summary and not c.pvr_summary_done:
            _pvr_()
            c.pvr_summary_done = 1


I took a closer look: sell orders sometimes go unfilled for numerous minutes while buys go through right away, and that accounts for the leverage spikes and the deeply negative cash.
So here's one suggestion:
a) In place of order_target_percent(), queue the orders.
b) In handle_data, every minute, if there are any queued orders ( if context.queue_list: ), call a function that processes those orders only if/when there are no open orders. I wish there were a better way.
c) Process any sells first, then wait for them to be filled before buying. Or better: only place buys when there is likely enough breathing room in available cash for slippage and commissions (so as long as one or more sells are done, allow any buys that can fit).
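A rough standalone sketch of steps a) through c); OrderQueue and its method names are illustrative stand-ins, not part of the Quantopian API:

```python
class OrderQueue(object):
    """Hold (target_weight, asset) pairs; release sells before buys.
    Illustrative sketch only -- names are not Quantopian API."""
    def __init__(self):
        self.pending = []                    # list of (target_weight, asset)

    def add(self, asset, target_weight):
        self.pending.append((target_weight, asset))

    def split(self, current_weights):
        """Partition pending targets: a target below the asset's current
        portfolio weight is a sell, above it is a buy."""
        sells = [(w, a) for w, a in self.pending if w < current_weights.get(a, 0.0)]
        buys  = [(w, a) for w, a in self.pending if w > current_weights.get(a, 0.0)]
        return sells, buys
```

Each minute the algo would place the sells, return until they fill, and only then place the buys.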

To determine whether a transaction ratio (the "percentage" fed to order_target_percent()) is a sell or a buy (both are positive and simply adjustments; a number lower than the existing allocation is a sell), compare the stock's current percentage of the portfolio with the new weight, allocation[i], that is stored/queued.
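That comparison can be written as a tiny helper (hypothetical name, shown only to make the sell/buy test concrete):

```python
def is_sell(position_amount, price, portfolio_value, new_weight):
    """True when the queued target weight is below the asset's current
    share of the portfolio, i.e. the adjustment is a sell."""
    current_ratio = (position_amount * price) / portfolio_value
    return new_weight < current_ratio
```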

I wrote the bit above and then decided to work on it, so I'll probably have a backtest approaching that fairly soon.

Thanks. I figured something like you describe could be happening. By the way, I posted a question to https://www.quantopian.com/posts/zero-commission-algorithmic-trading-robinhood-and-quantopian. It seems that order_target_percent() is not going to work for rebalancing under Robinhood unless enough cash is sitting around to cover the T+3 rule. If I understand correctly, it'll need to be something like order_target_percent_sell() followed by order_target_percent_buy() three days later, when the cash from the sale is available.
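One way to sketch that T+3 bookkeeping, as a standalone illustration rather than Quantopian code (calendar days stand in for trading days here for brevity):

```python
import datetime

class SettlementLedger(object):
    """Track sale proceeds that settle T+3, per the Robinhood constraint
    described above. Sketch only; calendar days approximate trading days."""
    def __init__(self):
        self.unsettled = []   # list of (settle_date, amount)

    def record_sale(self, trade_date, amount, days=3):
        settle = trade_date + datetime.timedelta(days=days)
        self.unsettled.append((settle, amount))

    def settled_cash(self, today, cash):
        # cash minus proceeds that have not yet settled
        pending = sum(amt for d, amt in self.unsettled if d > today)
        return cash - pending
```

Buys would then be sized against settled_cash() instead of raw portfolio cash.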

Hi Grant,

Reviewing: thanks to the PvR (Profit vs Risk) code I added, it became apparent that the third algo, which appeared to return over 300%, was really 108% in profit per dollar spent because of -22k in negative cash. It did actually profit 35k; it just took 32k to make that, not 10k.

This new version of the algorithm does not go into negative cash, and the result is pretty interesting.
It spends only 10k and ends with 44k, so instead of 108% it is now a genuine 344.6% return per dollar spent, no margin.
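The PvR arithmetic behind those figures, as a minimal sketch of the metric using the numbers quoted here:

```python
def pvr_return(portfolio_value, starting_cash, risk_hi):
    """Profit vs Risk: percent return on the maximum capital actually at risk
    (risk_hi), rather than on starting cash."""
    return 100.0 * (portfolio_value - starting_cash) / risk_hi

# ~10k start, ~44k end, risk high of 9993 as logged above:
print(round(pvr_return(44437.0, 10000.0, 9993.0), 1))   # -> 344.6
```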

The attached code adds a routine to queue orders, handle sells first, wait for them to be filled, and then buy.
It also contains a suggested start toward the Robinhood T+3 handling you mentioned.
In the track_orders function output you'll see unfilled orders; its output can be toggled off. The first number on each line is the minute of the day.

The genuine 344.6% return makes this the second best algo I've seen so far in PvR per day, kudos Grant Kiehne.
(and me, if I do say so myself)

Clone Algorithm
86
'''    https://www.quantopian.com/posts/minimum-variance-w-slash-constraint

Original:       PvR only 108.6%
2016-02-04_pvr_:132INFO PvR 0.0833 %/day     2010-12-01 to 2016-02-04  10000  minute
2016-02-04_pvr_:135INFO  Profited 35134 on 32362 activated/transacted for PvR of 108.6%
2016-02-04_pvr_:138INFO  QRet 351.34 PvR 108.57 CshLw -22362 MxLv 1.50 RskHi 32362 Shrts 0

Modifications:  PvR      344.6%
2016-02-04_pvr_:367INFO PvR 0.2645 %/day     2010-12-01 to 2016-02-04  10000  minute
2016-02-04_pvr_:370INFO  Profited 34437 on 9993 activated/transacted for PvR of 344.6%
2016-02-04_pvr_:373INFO  QRet 344.37 PvR 344.61 CshLw 6 MxLv 1.00 RskHi 9993 Shrts 0

Modified by garyha
This avoids negative cash mostly by queueing orders, then doing sells, then buys.

This dramatic increase in profitability is originally thanks to:

    http://quantopian.com/posts/pvr

Use PvR and see clearly.

Also track_orders() made it possible to understand what was going on with the orders:
https://www.quantopian.com/posts/track-orders

'''

import numpy as np
import scipy
from pytz import timezone

def initialize(context):
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
    context.n   = 0
    context.s   = np.zeros_like(context.stocks)
    context.x0  = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.x1  = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.eps = 0.05
    context.leverage = []
    schedule_function(allocate, date_rules.every_day(),   time_rules.market_open(minutes=60))
    schedule_function(queues,   date_rules.week_start(1), time_rules.market_open(minutes=60))
    set_commission(commission.PerTrade(cost=0))
    set_long_only()
    context.queue_list = []
    context.track_orders = 1    # toggle on|off

def handle_data(context,data):
    if get_open_orders():
        track_orders(context, data)    # for filled orders
        return
    if context.queue_list:
        trade(context, data)

    track_orders(context, data)        # for new orders and last-frame-filled's
    pvr(context, data)

    #context.leverage.append(context.account.leverage)
    #record(max_leverage = max(context.leverage))

def trade(context, data):    # Process any queued orders
    if get_open_orders(): return    # Wait for fills

    c = context
    mult = .984        # Multiplier for order weights, leaving cash headroom
                       #  for slippage and commissions to avoid negative cash.
    log_changes = 1    # Whether to log weight|allocation changes.
    sells = 0          # Indicator, whether any sells happened.
    qlist = sorted(c.queue_list)  # sorted() already returns an independent copy, so remove() is safe.

    for o in qlist:    # Each order queued, process any sells
        stock  = o[1] ; weight = o[0]
        pf_value_now = c.portfolio.positions[stock].amount * data[stock].price
        pf_ratio_now = pf_value_now / c.portfolio.portfolio_value
        if weight < pf_ratio_now:   # sell, is decrease in allocation
            if log_changes:
                log.info('   {} {} {} ==> {}'.format(
                    minut(), stock.symbol, '%.3f' % pf_ratio_now, '%.3f' % weight))
            order_target_percent(stock, mult * weight)    # Selling
            c.queue_list.remove(o)
            sells = 1  # let these settle before/if any buys

    if sells: return   # let any sells go thru before buys

    '''
    To do if Robinhood: Make sure T+3 is satisfied before buys here.
    Untested ...
    c = context
    date = get_datetime().date()
    if c.date_prv != date:
        c.day_count += 1
        c.date_prv = date
    if c.day_count <= 3:
        return
    else:
        c.day_count = 0
    '''

    for o in qlist:    # Should be all buys at this point
        stock  = o[1] ; weight = o[0]
        pf_value_now = c.portfolio.positions[stock].amount * data[stock].price
        pf_ratio_now = pf_value_now / c.portfolio.portfolio_value
        if weight > pf_ratio_now:   # buy, is increase in allocation
            if log_changes:
                log.info('   {} {} {} ==> {}'.format(
                    minut(), stock.symbol, '%.3f' % pf_ratio_now, '%.3f' % weight))
            order_target_percent(stock, mult * weight)    # Buying
            c.queue_list.remove(o)

def queues(context, data):  # Queue orders (this was GK's ordering originally as trade())
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return

    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation

    if context.queue_list: return    # wait for orders to clear

    for i,stock in enumerate(context.stocks):
        context.queue_list.append( (allocation[i], stock) ) # list of tuples

    '''
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
    '''

def variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return 2*np.dot(Acov,x)

def allocate(context, data):
    prices   = history(5*390,'1m', 'price')
    ret      = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std  = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    bnds     = []
    limits   = [0,1]

    for stock in context.stocks:
        bnds.append(limits)

    bnds = tuple(tuple(x) for x in bnds)
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})

    res = scipy.optimize.minimize(
        variance, context.x1, args=ret, jac=jac_variance, method='SLSQP',
        constraints=cons, bounds=bnds,
        options={'disp': False, 'maxiter': 100, 'iprint': 1, 'ftol': 1e-8})

    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)

    # if not res.success:
    #     print 'res.success = False'

    context.n += 1
    context.s += allocation

def track_orders(context, data):  # Log orders created, filled, unfilled or canceled.
    if not context.track_orders: return

    '''      https://www.quantopian.com/posts/track-orders
    Status:
       0 - Unfilled
       1 - Filled (can be partial)
       2 - Canceled
    '''
    c = context
    log_cash = 1    # Show cash values in logging window or not.
    log_ids  = 1    # Include order id's in logging window or not.

    ''' Start and stop date options ...
    To not overwhelm the logging window, start/stop dates can be entered
      either below or in initialize() if you move to there for better efficiency.
    Example:
        c.dates  = {
            'active': 0,
            'start' : ['2007-05-07', '2010-04-26'],
            'stop'  : ['2008-02-13', '2010-11-15']
        }
    '''
    if 'orders' not in c:
        c.orders = {}               # Move these to initialize() for better efficiency.
        c.dates  = {
            'active': 0,
            'start' : [],           # Start dates, option
            'stop'  : []            # Stop  dates, option
        }
    #from pytz import timezone      # Python only does once, makes this portable.
                                    #   Move to top of algo for better efficiency.

    # If the dates 'start' or 'stop' lists have something in them, sets them.
    if c.dates['start'] or c.dates['stop']:
        date = str(get_datetime().date())
        if   date in c.dates['start']:    # See if there's a match to start
            c.dates['active'] = 1
        elif date in c.dates['stop']:     #   ... or to stop
            c.dates['active'] = 0
    else:
        c.dates['active'] = 1  # Set to active b/c no conditions

    if c.dates['active'] == 0:
        return                 # Skip if off

    def _minute():   # To preface each line with the minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _orders(to_log):       # So all logging comes from the same line number,
        log.info(to_log)       #   for vertical alignment in the logging window.

    ordrs = c.orders.copy()    # Independent copy to allow deletes
    for id in ordrs:
        o    = get_order(id)
        sec  = o.sid ; sym = sec.symbol
        oid  = o.id if log_ids else ''
        cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
        prc  = '%.2f' % data[sec].price if sec in data else 'unknwn'
        if o.filled:        # Filled at least some
            trade  = 'Bot' if o.amount > 0 else 'Sold'
            filled = '{}'.format(o.amount)
            if o.filled == o.amount:    # complete
                if 0 < abs(c.orders[o.id]) < abs(o.amount):  # abs() so partial sells count too
                    filled  = 'all/{}'.format(o.amount)
                del c.orders[o.id]
            else:
                done_prv       = c.orders[o.id]       # previously filled ttl
                filled_this    = o.filled - done_prv  # filled this time, can be 0
                c.orders[o.id] = o.filled             # save for increments math
                filled         = '{}/{}'.format(filled_this, o.amount)
            _orders(' {}      {} {} {} at {}   {} {}'.format(_minute(),
                trade, filled, sym, prc, cash, oid))
        else:
            canceled = 'canceled' if o.status == 2 else ''
            _orders(' {}         {} {} unfilled {} {}'.format(_minute(),
                    o.sid.symbol, o.amount, canceled, oid))
            if canceled: del c.orders[o.id]

    for oo_list in get_open_orders().values(): # Open orders list
        for o in oo_list:
            sec  = o.sid ; sym = sec.symbol
            oid  = o.id if log_ids else ''
            cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
            prc  = '%.2f' % data[sec].price if sec in data else 'unknwn'
            if o.status == 2:                  # Canceled
                trade = 'Buy' if o.amount > 0 else 'Sell'
                _orders(' {}    Canceled {} {} {} at {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, cash, oid))
                if o.id in c.orders: del c.orders[o.id]
            elif o.id not in c.orders:         # New
                c.orders[o.id] = 0
                trade = 'Buy' if o.amount > 0 else 'Sell'
                if o.limit:                    # Limit order
                    _orders(' {}   {} {} {} now {} limit {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, o.limit, cash, oid))
                elif o.stop:                   # Stop order
                    _orders(' {}   {} {} {} now {} stop {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, o.stop, cash, oid))
                else:                          # Market order
                    _orders(' {}   {} {} {} at {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, cash, oid))

def pvr(context, data):
    ''' Custom chart and/or log of profit_vs_risk returns and related information
    '''
    # # # # # # # # # #  Options  # # # # # # # # # #
    record_max_lvrg = 1         # Maximum leverage encountered
    record_leverage = 0         # Leverage (context.account.leverage)
    record_q_return = 1         # Quantopian returns (percentage)
    record_pvr      = 1         # Profit vs Risk returns (percentage)
    record_pnl      = 0         # Profit-n-Loss
    record_shorting = 0         # Total value of any shorts
    record_overshrt = 0         # Shorts beyond longs+cash
    record_risk     = 0         # Risked, max cash spent or shorts beyond longs+cash
    record_risk_hi  = 1         # Highest risk overall
    record_cash     = 0         # Cash available
    record_cash_low = 0         # Any new lowest cash level
    logging         = 1         # Also to logging window conditionally (1) or not (0)
    log_method      = 'risk_hi' # 'daily' or 'risk_hi'

    from pytz import timezone   # Python will only do once, makes this portable.
                                #   Move to top of algo for better efficiency.
    c = context  # Brevity is the soul of wit -- Shakespeare [for efficiency, readability]
    if 'pvr' not in c:
        date_strt = get_environment('start').date()
        date_end  = get_environment('end').date()
        cash_low  = c.portfolio.starting_cash
        mode      = get_environment('data_frequency')
        c.pvr = {
            'max_lvrg': 0,
            'risk_hi' : 0,
            'days'    : 0.0,
            'date_prv': '',
            'cash_low': cash_low,
            'date_end': date_end,
            'mode'    : mode,
            'run_str' : '{} to {}  {}  {}'.format(date_strt,date_end,int(cash_low),mode)
        }
        log.info(c.pvr['run_str'])
    pvr_rtrn     = 0            # Profit vs Risk returns based on maximum spent
    profit_loss  = 0            # Profit-n-loss
    shorts       = 0            # Shorts value
    longs        = 0            # Longs  value
    overshorts   = 0            # Shorts value beyond longs plus cash
    new_risk_hi  = 0
    new_cash_low = 0                           # To trigger logging in cash_low case
    lvrg         = c.account.leverage          # Standard leverage, in-house
    date         = get_datetime().date()       # To trigger logging in daily case
    cash         = c.portfolio.cash
    start        = c.portfolio.starting_cash
    cash_dip     = int(max(0, start - cash))
    q_rtrn       = 100 * (c.portfolio.portfolio_value - start) / start

    if int(cash) < c.pvr['cash_low']:                # New cash low
        new_cash_low = 1
        c.pvr['cash_low']   = int(cash)
        if record_cash_low:
            record(CashLow = int(c.pvr['cash_low'])) # Lowest cash level hit

    if record_max_lvrg:
        if c.account.leverage > c.pvr['max_lvrg']:
            c.pvr['max_lvrg'] = c.account.leverage
            record(MaxLv = c.pvr['max_lvrg'])        # Maximum leverage

    if record_pnl:
        profit_loss = c.portfolio.pnl
        record(PnL = profit_loss)                    # "Profit and Loss" in dollars

    for p in c.portfolio.positions:
        shrs = c.portfolio.positions[p].amount
        if shrs < 0:
            shorts += int(abs(shrs * data[p].price))
        if shrs > 0:
            longs  += int(shrs * data[p].price)

    if shorts > longs + cash: overshorts = shorts             # Shorts when too high
    if record_shorting: record(Shorts  = shorts)              # Shorts value as a positive
    if record_overshrt: record(OvrShrt = overshorts)          # Shorts value as a positive
    if record_cash:     record(Cash = int(c.portfolio.cash))  # Cash
    if record_leverage: record(Lvrg = c.account.leverage)     # Leverage

    risk = int(max(cash_dip, shorts))
    if record_risk: record(Risk = risk)       # Amount in play, maximum of shorts or cash used

    if risk > c.pvr['risk_hi']:
        c.pvr['risk_hi'] = risk
        new_risk_hi = 1

        if record_risk_hi:
            record(RiskHi = c.pvr['risk_hi']) # Highest risk overall

    if record_pvr:      # Profit_vs_Risk returns based on max amount actually spent (risk high)
        if c.pvr['risk_hi'] != 0:     # Avoid zero-divide
            pvr_rtrn = 100 * (c.portfolio.portfolio_value - start) / c.pvr['risk_hi']
            record(PvR = pvr_rtrn)            # Profit_vs_Risk returns

    if record_q_return:
        record(QRet = q_rtrn)                 # Quantopian returns to compare to pvr returns curve

    def _minute():   # To preface each line with minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _pvr_():
            log.info('PvR {} %/day     {}'.format(
                '%.4f' % (pvr_rtrn / c.pvr['days']), c.pvr['run_str']))
            log.info('  Profited {} on {} activated/transacted for PvR of {}%'.format(
                '%.0f' % (c.portfolio.portfolio_value - start), '%.0f' % c.pvr['risk_hi'],
                '%.1f' % pvr_rtrn))
            log.info('  QRet {} PvR {} CshLw {} MxLv {} RskHi {} Shrts {}'.format(
                '%.2f' % q_rtrn, '%.2f' % pvr_rtrn, '%.0f' % c.pvr['cash_low'],
                '%.2f' % c.pvr['max_lvrg'], '%.0f' % c.pvr['risk_hi'], '%.0f' % shorts))

    if logging:
        if log_method == 'risk_hi' and new_risk_hi \
          or log_method == 'daily' and c.pvr['date_prv'] != date \
          or new_cash_low:
            qret    = ' QRet '   + '%.1f' % q_rtrn
            lv      = ' Lv '     + '%.1f' % lvrg              if record_leverage else ''
            pvr     = ' PvR '    + '%.1f' % pvr_rtrn          if record_pvr      else ''
            pnl     = ' PnL '    + '%.0f' % profit_loss       if record_pnl      else ''
            csh     = ' Cash '   + '%.0f' % cash              if record_cash     else ''
            shrt    = ' Shrt '   + '%.0f' % shorts            if record_shorting else ''
            ovrshrt = ' Shrt '   + '%.0f' % overshorts        if record_overshrt else ''
            risk    = ' Risk '   + '%.0f' % risk              if record_risk     else ''
            mxlv    = ' MaxLv '  + '%.2f' % c.pvr['max_lvrg'] if record_max_lvrg else ''
            csh_lw  = ' CshLw '  + '%.0f' % c.pvr['cash_low'] if record_cash_low else ''
            rsk_hi  = ' RskHi '  + '%.0f' % c.pvr['risk_hi']  if record_risk_hi  else ''
            log.info('{}{}{}{}{}{}{}{}{}{}{}{}'.format(_minute(),
               lv, mxlv, qret, pvr, pnl, csh, csh_lw, shrt, ovrshrt, risk, rsk_hi))
    if c.pvr['date_prv'] != date: c.pvr['days'] += 1.0
    if c.pvr['days'] % 130 == 0 and _minute() == '100': _pvr_()
    c.pvr['date_prv'] = date
    if c.pvr['date_end'] == date:
        # Summary on last minute of last day.
        # If using schedule_function(), backtest last day/time may need to match for this to execute.
        if 'pvr_summary_done' not in c: c.pvr_summary_done = 0
        log_summary = 0
        if c.pvr['mode'] == 'daily' and get_datetime().date() == c.pvr['date_end']:
            log_summary = 1
        elif c.pvr['mode'] == 'minute' and get_datetime() == get_environment('end'):
            log_summary = 1
        if log_summary and not c.pvr_summary_done:
            _pvr_()
            c.pvr_summary_done = 1

def minut():   # To preface each line with the minute of the day.
               # Added to be used in trade()
    if get_environment('data_frequency') == 'minute':
        bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
        minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
        return str(minute).rjust(3)
    return ''    # Daily mode, just leave it out.


Fantastic algorithm, guys!
From July 2015 to January 2016 the algorithm seems to plateau and decline.
I was wondering if more diagnostics could be run for this period to see whether one instrument is causing the decline or the group is generally moving in a downward trend; essentially, I'm trying to use those diagnostics to maybe add more constraints and hence improve the algorithm.
Many thanks all,
Best,
Andrew
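One way to start on the per-instrument diagnostics Andrew asks about: record each asset's weighted return separately, so a plateau can be traced to a single instrument or to the whole group. A minimal sketch (hypothetical helper, not part of the algo above):

```python
def contributions(weights, returns):
    """Per-asset contribution to one period's portfolio return:
    weight_i * return_i; the values sum to the portfolio return."""
    return [w * r for w, r in zip(weights, returns)]
```

In the algo this could feed record() per asset alongside the allocation weights.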

Thanks garyha, glad you found it interesting. So if this is "the second best algo," what's the first?

@AC, try setting a track_orders start date for that period, like the example in the code.
@GK, Only revealed to those onboard with PvR, not the majority who look the other way.

Added an emulation of Robinhood's three-day delay in the availability of sale proceeds, following an example in the Q API help.
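The attached code gates trade() on helpers like cash_settlement_date() and check_last_sale(), which are not shown in this excerpt; a rough standalone sketch of the idea (assumed behavior, with calendar days standing in for trading days):

```python
import datetime

def check_last_sale(state, today):
    """Record the date of the most recent sale in a plain dict.
    Assumed stand-in for check_last_sale() in the attached backtest."""
    state['last_sale'] = today

def cash_settlement_pending(state, today, days=3):
    """True while still inside the T+3 window after the last sale,
    i.e. while proceeds are not yet available for buys."""
    last = state.get('last_sale')
    return last is not None and (today - last).days < days
```

trade() would return early while cash_settlement_pending() is True, then place buys.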

Clone Algorithm
17
'''    https://www.quantopian.com/posts/minimum-variance-w-slash-constraint

Original:       PvR only 108.6%
2016-02-04_pvr_:132INFO PvR 0.0833 %/day     2010-12-01 to 2016-02-04  10000  minute
2016-02-04_pvr_:135INFO  Profited 35134 on 32362 activated/transacted for PvR of 108.6%
2016-02-04_pvr_:138INFO  QRet 351.34 PvR 108.57 CshLw -22362 MxLv 1.50 RskHi 32362 Shrts 0

Modifications:  PvR      344.6%
2016-02-04_pvr_:367INFO PvR 0.2645 %/day     2010-12-01 to 2016-02-04  10000  minute
2016-02-04_pvr_:370INFO  Profited 34437 on 9993 activated/transacted for PvR of 344.6%
2016-02-04_pvr_:373INFO  QRet 344.37 PvR 344.61 CshLw 6 MxLv 1.00 RskHi 9993 Shrts 0

Modified by garyha
This avoids negative cash mostly by queueing orders, then doing sells, then buys.

This dramatic increase in profitability is originally thanks to:

    http://quantopian.com/posts/pvr

Use PvR and see clearly.

Also track_orders() made it possible to understand what was going on with the orders:
https://www.quantopian.com/posts/track-orders

2016-02-10: Added emulation of the 3-day delay in the availability of sale proceeds for Robinhood trading, following an example in the Quantopian API help. (TV)

'''

import numpy as np
import scipy
from pytz import timezone

def initialize(context):
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
    context.n   = 0
    context.s   = np.zeros_like(context.stocks)
    context.x0  = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.x1  = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.eps = 0.05
    context.leverage = []
    schedule_function(allocate, date_rules.every_day(),   time_rules.market_open(minutes=60))
    schedule_function(queues,   date_rules.week_start(1), time_rules.market_open(minutes=60))
    
    # Robinhood
    set_commission(commission.PerTrade(cost=0))
    set_long_only()
    
    context.queue_list = []
    context.track_orders = 1    # toggle on|off

    context.last_sale = None
    context.trading_days = 0

def handle_data(context,data):
    if get_open_orders():
        track_orders(context, data)    # for filled orders
        return
    if context.queue_list:
        trade(context, data)

    track_orders(context, data)        # for new orders and last-frame-filled's
    pvr(context, data)

    #context.leverage.append(context.account.leverage)
    #record(max_leverage = max(context.leverage))

def trade(context, data):    # Process any queued orders
    
    if get_open_orders(): return    # Wait for fills

    # For live trading only
    # if do_unsettled_funds_exist(context):
    #     return
    
    # Only for backtesting purposes!
    if cash_settlement_date(context):
        return
    
    c = context
    mult = .984        # Multiplier for order weights, leaving cash headroom
                       #  for slippage and commissions to avoid negative cash.
    log_changes = 1    # Whether to log weight|allocation changes.
    sells = 0          # Indicator, whether any sells happened.
    qlist = sorted(c.queue_list)  # sorted() already returns an independent copy, so remove() is safe.

    for o in qlist:    # Each order queued, process any sells
        stock  = o[1] ; weight = o[0]
        pf_value_now = c.portfolio.positions[stock].amount * data[stock].price
        pf_ratio_now = pf_value_now / c.portfolio.portfolio_value
        if weight < pf_ratio_now:   # sell, is decrease in allocation
            if log_changes:
                log.info('   {} {} {} ==> {}'.format(
                    minut(), stock.symbol, '%.3f' % pf_ratio_now, '%.3f' % weight))
            order_target_percent(stock, mult * weight)    # Selling
            c.queue_list.remove(o)
            sells = 1  # let these settle before/if any buys

    if sells: return   # let any sells go thru before buys

    for o in qlist:    # Should be all buys at this point
        stock  = o[1] ; weight = o[0]
        pf_value_now = c.portfolio.positions[stock].amount * data[stock].price
        pf_ratio_now = pf_value_now / c.portfolio.portfolio_value
        if weight > pf_ratio_now:   # buy, is increase in allocation
            if log_changes:
                log.info('   {} {} {} ==> {}'.format(
                    minut(), stock.symbol, '%.3f' % pf_ratio_now, '%.3f' % weight))
            order_target_percent(stock, mult * weight)    # Buying
            c.queue_list.remove(o)

    # Only for backtesting purposes!
    check_last_sale(context)
    
def queues(context, data):  # Queue orders (this was GK's ordering originally as trade())
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return

    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation

    if context.queue_list: return    # wait for orders to clear

    for i,stock in enumerate(context.stocks):
        context.queue_list.append( (allocation[i], stock) ) # list of tuples

    '''
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
    '''

def variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return 2*np.dot(Acov,x)

def allocate(context, data):
    prices   = history(5*390,'1m', 'price')
    ret      = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std  = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    bnds     = []
    limits   = [0,1]

    for stock in context.stocks:
        bnds.append(limits)

    bnds = tuple(tuple(x) for x in bnds)
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})

    res = scipy.optimize.minimize(
        variance, context.x1, args=ret, jac=jac_variance, method='SLSQP',
        constraints=cons, bounds=bnds,
        options={'disp': False, 'maxiter': 100, 'iprint': 1, 'ftol': 1e-8})

    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)

    # if not res.success:
    #     print 'res.success = False'

    context.n += 1
    context.s += allocation

def track_orders(context, data):  # Log orders created, filled, unfilled or canceled.
    if not context.track_orders: return

    '''      https://www.quantopian.com/posts/track-orders
    Status:
       0 - Unfilled
       1 - Filled (can be partial)
       2 - Canceled
    '''
    c = context
    log_cash = 1    # Show cash values in logging window or not.
    log_ids  = 1    # Include order id's in logging window or not.

    ''' Start and stop date options ...
    To not overwhelm the logging window, start/stop dates can be entered
      either below or in initialize() if you move to there for better efficiency.
    Example:
        c.dates  = {
            'active': 0,
            'start' : ['2007-05-07', '2010-04-26'],
            'stop'  : ['2008-02-13', '2010-11-15']
        }
    '''
    if 'orders' not in c:
        c.orders = {}               # Move these to initialize() for better efficiency.
        c.dates  = {
            'active': 0,
            'start' : [],           # Start dates, option
            'stop'  : []            # Stop  dates, option
        }
    #from pytz import timezone      # Python only does once, makes this portable.
                                    #   Move to top of algo for better efficiency.

    # If the dates 'start' or 'stop' lists have something in them, sets them.
    if c.dates['start'] or c.dates['stop']:
        date = str(get_datetime().date())
        if   date in c.dates['start']:    # See if there's a match to start
            c.dates['active'] = 1
        elif date in c.dates['stop']:     #   ... or to stop
            c.dates['active'] = 0
    else:
        c.dates['active'] = 1  # Set to active b/c no conditions

    if c.dates['active'] == 0:
        return                 # Skip if off

    def _minute():   # To preface each line with the minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _orders(to_log):       # So all logging comes from the same line number,
        log.info(to_log)       #   for vertical alignment in the logging window.

    ordrs = c.orders.copy()    # Independent copy to allow deletes
    for id in ordrs:
        o    = get_order(id)
        sec  = o.sid ; sym = sec.symbol
        oid  = o.id if log_ids else ''
        cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
        prc  = '%.2f' % data[sec].price if sec in data else 'unknwn'
        if o.filled:        # Filled at least some
            trade  = 'Bot' if o.amount > 0 else 'Sold'
            filled = '{}'.format(o.amount)
            if o.filled == o.amount:    # complete
                if 0 < c.orders[o.id] < o.amount:
                    filled  = 'all/{}'.format(o.amount)
                del c.orders[o.id]
            else:
                done_prv       = c.orders[o.id]       # previously filled ttl
                filled_this    = o.filled - done_prv  # filled this time, can be 0
                c.orders[o.id] = o.filled             # save for increments math
                filled         = '{}/{}'.format(filled_this, o.amount)
            _orders(' {}      {} {} {} at {}   {} {}'.format(_minute(),
                trade, filled, sym, prc, cash, oid))
        else:
            canceled = 'canceled' if o.status == 2 else ''
            _orders(' {}         {} {} unfilled {} {}'.format(_minute(),
                    o.sid.symbol, o.amount, canceled, oid))
            if canceled: del c.orders[o.id]

    for oo_list in get_open_orders().values(): # Open orders list
        for o in oo_list:
            sec  = o.sid ; sym = sec.symbol
            oid  = o.id if log_ids else ''
            cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
            prc  = '%.2f' % data[sec].price if sec in data else 'unknwn'
            if o.status == 2:                  # Canceled
                trade = 'Buy' if o.amount > 0 else 'Sell'  # 'trade' was unset in this branch
                _orders(' {}    Canceled {} {} {} at {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, cash, oid))
                if o.id in c.orders: del c.orders[o.id]
            elif o.id not in c.orders:         # New
                c.orders[o.id] = 0
                trade = 'Buy' if o.amount > 0 else 'Sell'
                if o.limit:                    # Limit order
                    _orders(' {}   {} {} {} now {} limit {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, o.limit, cash, oid))
                elif o.stop:                   # Stop order
                    _orders(' {}   {} {} {} now {} stop {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, o.stop, cash, oid))
                else:                          # Market order
                    _orders(' {}   {} {} {} at {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, cash, oid))

def do_unsettled_funds_exist(context):

    # Only to be used for live trading!
    
    if context.portfolio.cash != context.account.settled_cash:
        return True

def check_last_sale(context):

    #Only to be used for backtesting!
 
    open_orders = get_open_orders()
    most_recent_trade = []
 
    if open_orders:
        for sec, order in open_orders.iteritems():
            for oo in order:
                if oo.amount < 0:
                    most_recent_trade.append(oo.created)
    if len(most_recent_trade) > 0:
        context.last_sale = max(most_recent_trade)

        
def cash_settlement_date(context):

    # Only to be used for backtesting!
    
    if context.last_sale and (get_datetime() - context.last_sale).days < 3:
        return True

def pvr(context, data):
    ''' Custom chart and/or log of profit_vs_risk returns and related information
    '''
    # # # # # # # # # #  Options  # # # # # # # # # #
    record_max_lvrg = 1         # Maximum leverage encountered
    record_leverage = 0         # Leverage (context.account.leverage)
    record_q_return = 1         # Quantopian returns (percentage)
    record_pvr      = 1         # Profit vs Risk returns (percentage)
    record_pnl      = 0         # Profit-n-Loss
    record_shorting = 0         # Total value of any shorts
    record_overshrt = 0         # Shorts beyond longs+cash
    record_risk     = 0         # Risked, max cash spent or shorts beyond longs+cash
    record_risk_hi  = 1         # Highest risk overall
    record_cash     = 0         # Cash available
    record_cash_low = 0         # Any new lowest cash level
    logging         = 1         # Also to logging window conditionally (1) or not (0)
    log_method      = 'risk_hi' # 'daily' or 'risk_hi'

    from pytz import timezone   # Python will only do once, makes this portable.
                                #   Move to top of algo for better efficiency.
    c = context  # Brevity is the soul of wit -- Shakespeare [for efficiency, readability]
    if 'pvr' not in c:
        date_strt = get_environment('start').date()
        date_end  = get_environment('end').date()
        cash_low  = c.portfolio.starting_cash
        mode      = get_environment('data_frequency')
        c.pvr = {
            'max_lvrg': 0,
            'risk_hi' : 0,
            'days'    : 0.0,
            'date_prv': '',
            'cash_low': cash_low,
            'date_end': date_end,
            'mode'    : mode,
            'run_str' : '{} to {}  {}  {}'.format(date_strt,date_end,int(cash_low),mode)
        }
        log.info(c.pvr['run_str'])
    pvr_rtrn     = 0            # Profit vs Risk returns based on maximum spent
    profit_loss  = 0            # Profit-n-loss
    shorts       = 0            # Shorts value
    longs        = 0            # Longs  value
    overshorts   = 0            # Shorts value beyond longs plus cash
    new_risk_hi  = 0
    new_cash_low = 0                           # To trigger logging in cash_low case
    lvrg         = c.account.leverage          # Standard leverage, in-house
    date         = get_datetime().date()       # To trigger logging in daily case
    cash         = c.portfolio.cash
    start        = c.portfolio.starting_cash
    cash_dip     = int(max(0, start - cash))
    q_rtrn       = 100 * (c.portfolio.portfolio_value - start) / start

    if int(cash) < c.pvr['cash_low']:                # New cash low
        new_cash_low = 1
        c.pvr['cash_low']   = int(cash)
        if record_cash_low:
            record(CashLow = int(c.pvr['cash_low'])) # Lowest cash level hit

    if record_max_lvrg:
        if c.account.leverage > c.pvr['max_lvrg']:
            c.pvr['max_lvrg'] = c.account.leverage
            record(MaxLv = c.pvr['max_lvrg'])        # Maximum leverage

    if record_pnl:
        profit_loss = c.portfolio.pnl
        record(PnL = profit_loss)                    # "Profit and Loss" in dollars

    for p in c.portfolio.positions:
        shrs = c.portfolio.positions[p].amount
        if shrs < 0:
            shorts += int(abs(shrs * data[p].price))
        if shrs > 0:
            longs  += int(shrs * data[p].price)

    if shorts > longs + cash: overshorts = shorts             # Shorts when too high
    if record_shorting: record(Shorts  = shorts)              # Shorts value as a positive
    if record_overshrt: record(OvrShrt = overshorts)          # Shorts value as a positive
    if record_cash:     record(Cash = int(c.portfolio.cash))  # Cash
    if record_leverage: record(Lvrg = c.account.leverage)     # Leverage

    risk = int(max(cash_dip, shorts))
    if record_risk: record(Risk = risk)       # Amount in play, maximum of shorts or cash used

    if risk > c.pvr['risk_hi']:
        c.pvr['risk_hi'] = risk
        new_risk_hi = 1

        if record_risk_hi:
            record(RiskHi = c.pvr['risk_hi']) # Highest risk overall

    if record_pvr:      # Profit_vs_Risk returns based on max amount actually spent (risk high)
        if c.pvr['risk_hi'] != 0:     # Avoid zero-divide
            pvr_rtrn = 100 * (c.portfolio.portfolio_value - start) / c.pvr['risk_hi']
            record(PvR = pvr_rtrn)            # Profit_vs_Risk returns

    if record_q_return:
        record(QRet = q_rtrn)                 # Quantopian returns to compare to pvr returns curve

    def _minute():   # To preface each line with minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _pvr_():
            log.info('PvR {} %/day     {}'.format(
                '%.4f' % (pvr_rtrn / c.pvr['days']), c.pvr['run_str']))
            log.info('  Profited {} on {} activated/transacted for PvR of {}%'.format(
                '%.0f' % (c.portfolio.portfolio_value - start), '%.0f' % c.pvr['risk_hi'],
                '%.1f' % pvr_rtrn))
            log.info('  QRet {} PvR {} CshLw {} MxLv {} RskHi {} Shrts {}'.format(
                '%.2f' % q_rtrn, '%.2f' % pvr_rtrn, '%.0f' % c.pvr['cash_low'],
                '%.2f' % c.pvr['max_lvrg'], '%.0f' % c.pvr['risk_hi'], '%.0f' % shorts))

    if logging:
        if log_method == 'risk_hi' and new_risk_hi \
          or log_method == 'daily' and c.pvr['date_prv'] != date \
          or new_cash_low:
            qret    = ' QRet '   + '%.1f' % q_rtrn
            lv      = ' Lv '     + '%.1f' % lvrg              if record_leverage else ''
            pvr     = ' PvR '    + '%.1f' % pvr_rtrn          if record_pvr      else ''
            pnl     = ' PnL '    + '%.0f' % profit_loss       if record_pnl      else ''
            csh     = ' Cash '   + '%.0f' % cash              if record_cash     else ''
            shrt    = ' Shrt '   + '%.0f' % shorts            if record_shorting else ''
            ovrshrt = ' Shrt '   + '%.0f' % overshorts        if record_overshrt else ''
            risk    = ' Risk '   + '%.0f' % risk              if record_risk     else ''
            mxlv    = ' MaxLv '  + '%.2f' % c.pvr['max_lvrg'] if record_max_lvrg else ''
            csh_lw  = ' CshLw '  + '%.0f' % c.pvr['cash_low'] if record_cash_low else ''
            rsk_hi  = ' RskHi '  + '%.0f' % c.pvr['risk_hi']  if record_risk_hi  else ''
            log.info('{}{}{}{}{}{}{}{}{}{}{}{}'.format(_minute(),
               lv, mxlv, qret, pvr, pnl, csh, csh_lw, shrt, ovrshrt, risk, rsk_hi))
    if c.pvr['date_prv'] != date: c.pvr['days'] += 1.0
    if c.pvr['days'] % 130 == 0 and _minute() == '100': _pvr_()
    c.pvr['date_prv'] = date
    if c.pvr['date_end'] == date:
        # Summary on last minute of last day.
        # If using schedule_function(), backtest last day/time may need to match for this to execute.
        if 'pvr_summary_done' not in c: c.pvr_summary_done = 0
        log_summary = 0
        if c.pvr['mode'] == 'daily' and get_datetime().date() == c.pvr['date_end']:
            log_summary = 1
        elif c.pvr['mode'] == 'minute' and get_datetime() == get_environment('end'):
            log_summary = 1
        if log_summary and not c.pvr_summary_done:
            _pvr_()
            c.pvr_summary_done = 1

def minut():   # To preface each line with the minute of the day.
               # Added to be used in trade()
    if get_environment('data_frequency') == 'minute':
        bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
        minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
        return str(minute).rjust(3)
    return ''    # Daily mode, just leave it out.

There was a runtime error.
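The optimization at the core of allocate() can also be exercised outside Quantopian. Below is a minimal standalone sketch, with a fixed-seed synthetic return series standing in for history() (the random data, seed, and 4-asset shape are assumptions for the demo, not market data):

```python
# Standalone sketch of the min-variance-with-return-floor optimization used
# in allocate(), run on synthetic minute returns (np.random stands in for history()).
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
ret = rng.normal(0.0001, 0.001, size=(1949, 4))  # ~5 days of 1m returns, 4 assets

def variance(x, *args):
    p = np.squeeze(np.asarray(args))
    acov = np.cov(p.T)                 # sample covariance of the return series
    return np.dot(x, np.dot(acov, x))  # portfolio variance x' C x

def jac_variance(x, *args):
    p = np.squeeze(np.asarray(args))
    acov = np.cov(p.T)
    return 2 * np.dot(acov, x)         # analytic gradient, 2 C x

eps = 0.05                                     # floor on normalized expected return
ret_norm = ret.mean(axis=0) / ret.std(axis=0)  # per-asset mean/std, as in allocate()
x1 = np.ones(4) / 4.0                          # equal-weight starting guess
bnds = tuple((0, 1) for _ in range(4))         # long-only, unlevered
cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},
        {'type': 'ineq', 'fun': lambda x: np.dot(x, ret_norm) - eps})

res = minimize(variance, x1, args=(ret,), jac=jac_variance, method='SLSQP',
               bounds=bnds, constraints=cons,
               options={'maxiter': 100, 'ftol': 1e-8})
w = res.x
print(res.success, np.round(w, 3))
```

With near-identical iid assets the minimizer lands close to equal weights; on real data the covariance structure tilts it toward the low-variance mix that still clears the eps return floor.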

A tear sheet for the algo from the previous post.


Hi garyha!
Would you be able to provide some more information on which algorithm produces the better
PvR metric?
Many thanks,
Best,
Andrew
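For reference while comparing: PvR, as computed in pvr(), is just profit divided by the peak capital actually put at risk, rather than by starting capital. Using the figures from the modified backtest's logs:

```python
# PvR vs. plain returns, using the figures logged for the modified backtest:
# profit 34437 on starting cash 10000, with at most 9993 ever at risk.
start = 10000.0
final_value = 44437.0   # start + profit of 34437
risk_hi = 9993.0        # peak of max(cash spent, short exposure), per pvr()

q_rtrn   = 100 * (final_value - start) / start    # returns on starting capital
pvr_rtrn = 100 * (final_value - start) / risk_hi  # returns on capital at risk
print(round(q_rtrn, 2), round(pvr_rtrn, 2))
```

So two algorithms with the same raw return can have very different PvR if one needed far less capital in play to get there.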

Interesting that it holds up to the T+3 restriction.

One thought I had: for such a strategy, it might be better to adjust the weights on a rolling basis, every day or every minute, under the T+3 rule, rather than scheduling a function to run periodically. One would also have to consider the problem of not being able to buy fractional shares.
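On the fractional-share point: target weights ultimately have to be rounded down to whole shares, which leaves a cash remainder. A minimal sketch of that rounding (the weights and prices here are illustrative, not taken from the algorithm):

```python
# Sketch of converting target weights to whole-share quantities, since brokers
# like Robinhood (at the time) only accepted integer share orders.
# Portfolio value, weights, and prices are made up for the example.
import math

portfolio_value = 10000.0
targets = {'RSP': 0.40, 'EDV': 0.30, 'TLT': 0.20, 'XIV': 0.10}
prices  = {'RSP': 80.55, 'EDV': 120.10, 'TLT': 125.75, 'XIV': 25.30}

# Round each allocation down to whole shares; never overspend.
shares = {s: int(math.floor(portfolio_value * w / prices[s]))
          for s, w in targets.items()}
spent = sum(shares[s] * prices[s] for s in shares)
leftover = portfolio_value - spent   # uninvested cash from rounding down
print(shares, round(leftover, 2))
```

The remainder can be a few hundred dollars on a small account, which is one reason the algorithm orders slightly under target (the mult = .984 factor in trade()).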

Small tweak to make the T+3 restriction dynamic based on the trading environment (live vs. backtest).
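The dynamic check branches on get_environment("arena"): live arenas (IB, ROBINHOOD) consult settled_cash, while backtests emulate settlement from the date of the last sale. The backtest side reduces to a small predicate; a sketch with made-up datetimes:

```python
# Sketch of emulating the T+3 settlement rule in a backtest: after a sale,
# treat the proceeds as unsettled (unavailable for buys) for 3 calendar days,
# mirroring cash_settlement_date() in the algorithm. Dates are illustrative.
from datetime import datetime

def in_settlement(now, last_sale, days=3):
    # True while proceeds from last_sale are still unsettled.
    return last_sale is not None and (now - last_sale).days < days

last_sale = datetime(2016, 2, 8, 14, 30)
print(in_settlement(datetime(2016, 2, 9, 14, 30), last_sale))   # still settling
print(in_settlement(datetime(2016, 2, 11, 14, 30), last_sale))  # settled
```

Note this uses calendar days, as the original does; a stricter emulation would count trading days.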

Clone Algorithm
51
'''    https://www.quantopian.com/posts/minimum-variance-w-slash-constraint

Original:       PvR only 108.6%
2016-02-04_pvr_:132INFO PvR 0.0833 %/day     2010-12-01 to 2016-02-04  10000  minute
2016-02-04_pvr_:135INFO  Profited 35134 on 32362 activated/transacted for PvR of 108.6%
2016-02-04_pvr_:138INFO  QRet 351.34 PvR 108.57 CshLw -22362 MxLv 1.50 RskHi 32362 Shrts 0

Modifications:  PvR      344.6%
2016-02-04_pvr_:367INFO PvR 0.2645 %/day     2010-12-01 to 2016-02-04  10000  minute
2016-02-04_pvr_:370INFO  Profited 34437 on 9993 activated/transacted for PvR of 344.6%
2016-02-04_pvr_:373INFO  QRet 344.37 PvR 344.61 CshLw 6 MxLv 1.00 RskHi 9993 Shrts 0

Modified by garyha
This mostly avoids negative cash by queueing orders, then executing sells before buys.

This dramatic increase in profitability is originally thanks to:

    http://quantopian.com/posts/pvr

Use PvR and see clearly.

Also track_orders() made it possible to understand what was going on with the orders:
https://www.quantopian.com/posts/track-orders

2016-02-10: Added emulation of the 3-day delay in the availability of sale proceeds for Robinhood trading, following an example in the Quantopian API help. (TV)

'''

import numpy as np
import scipy
from pytz import timezone

def initialize(context):
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
    context.n   = 0
    context.s   = np.zeros_like(context.stocks)
    context.x0  = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.x1  = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    context.eps = 0.05
    context.leverage = []
    schedule_function(allocate, date_rules.every_day(),   time_rules.market_open(minutes=60))
    schedule_function(queues,   date_rules.week_start(1), time_rules.market_open(minutes=60))
    
    # Robinhood
    set_commission(commission.PerTrade(cost=0))
    set_long_only()
    
    context.queue_list = []
    context.track_orders = 1    # toggle on|off

    context.last_sale = None
    context.trading_days = 0

def handle_data(context,data):
    if get_open_orders():
        track_orders(context, data)    # for filled orders
        return
    if context.queue_list:
        trade(context, data)

    track_orders(context, data)        # for new orders and last-frame-filled's
    pvr(context, data)

    #context.leverage.append(context.account.leverage)
    #record(max_leverage = max(context.leverage))

def trade(context, data):    # Process any queued orders
    
    if get_open_orders(): return    # Wait for fills

    # For live trading only
    arena = get_environment("arena")
    if arena in ['IB', 'ROBINHOOD']:
        if do_unsettled_funds_exist(context):
             return
    else:   
        # Only for backtesting purposes!
        if cash_settlement_date(context):
            return

    c = context
    mult = .984        # Multiplier for weights in orders, cash vs
                       #  slippage, commissions, to avoid negative cash.
    log_changes = 1    # Whether to log weight|allocation changes.
    sells = 0          # Indicator, whether any sells happened.
    qlist = sorted(c.queue_list)[:]  # Make an independent copy to allow remove().

    for o in qlist:    # Each order queued, process any sells
        stock  = o[1] ; weight = o[0]
        pf_value_now = c.portfolio.positions[stock].amount * data[stock].price
        pf_ratio_now = pf_value_now / c.portfolio.portfolio_value
        if weight < pf_ratio_now:   # sell, is decrease in allocation
            if log_changes:
                log.info('   {} {} {} ==> {}'.format(
                    minut(), stock.symbol, '%.3f' % pf_ratio_now, '%.3f' % weight))
            order_target_percent(stock, mult * weight)    # Selling
            c.queue_list.remove(o)
            sells = 1  # let these settle before/if any buys

    if sells: return   # let any sells go thru before buys

    for o in qlist:    # Should be all buys at this point
        stock  = o[1] ; weight = o[0]
        pf_value_now = c.portfolio.positions[stock].amount * data[stock].price
        pf_ratio_now = pf_value_now / c.portfolio.portfolio_value
        if weight > pf_ratio_now:   # buy, is increase in allocation
            if log_changes:
                log.info('   {} {} {} ==> {}'.format(
                    minut(), stock.symbol, '%.3f' % pf_ratio_now, '%.3f' % weight))
            order_target_percent(stock, mult * weight)    # Buying
            c.queue_list.remove(o)

    # Only for backtesting purposes!
    check_last_sale(context)
    
def queues(context, data):  # Queue orders (this was GK's ordering originally as trade())
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return

    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation

    if context.queue_list: return    # wait for orders to clear

    for i,stock in enumerate(context.stocks):
        context.queue_list.append( (allocation[i], stock) ) # list of tuples

    '''
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
    '''

def variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    return 2*np.dot(Acov,x)

def allocate(context, data):
    prices   = history(5*390,'1m', 'price')
    ret      = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std  = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    bnds     = []
    limits   = [0,1]

    for stock in context.stocks:
        bnds.append(limits)

    bnds = tuple(tuple(x) for x in bnds)
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})

    res = scipy.optimize.minimize(
        variance, context.x1, args=ret, jac=jac_variance, method='SLSQP',
        constraints=cons, bounds=bnds,
        options={'disp': False, 'maxiter': 100, 'iprint': 1, 'ftol': 1e-8})

    if res.success:
        allocation = res.x
        allocation[allocation<0] = 0
        denom = np.sum(allocation)
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)

    # if not res.success:
    #     print 'res.success = False'

    context.n += 1
    context.s += allocation

def track_orders(context, data):  # Log orders created, filled, unfilled or canceled.
    if not context.track_orders: return

    '''      https://www.quantopian.com/posts/track-orders
    Status:
       0 - Unfilled
       1 - Filled (can be partial)
       2 - Canceled
    '''
    c = context
    log_cash = 1    # Show cash values in logging window or not.
    log_ids  = 1    # Include order id's in logging window or not.

    ''' Start and stop date options ...
    To not overwhelm the logging window, start/stop dates can be entered
      either below or in initialize() if you move to there for better efficiency.
    Example:
        c.dates  = {
            'active': 0,
            'start' : ['2007-05-07', '2010-04-26'],
            'stop'  : ['2008-02-13', '2010-11-15']
        }
    '''
    if 'orders' not in c:
        c.orders = {}               # Move these to initialize() for better efficiency.
        c.dates  = {
            'active': 0,
            'start' : [],           # Start dates, option
            'stop'  : []            # Stop  dates, option
        }
    #from pytz import timezone      # Python only does once, makes this portable.
                                    #   Move to top of algo for better efficiency.

    # If the dates 'start' or 'stop' lists have something in them, sets them.
    if c.dates['start'] or c.dates['stop']:
        date = str(get_datetime().date())
        if   date in c.dates['start']:    # See if there's a match to start
            c.dates['active'] = 1
        elif date in c.dates['stop']:     #   ... or to stop
            c.dates['active'] = 0
    else:
        c.dates['active'] = 1  # Set to active b/c no conditions

    if c.dates['active'] == 0:
        return                 # Skip if off

    def _minute():   # To preface each line with the minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _orders(to_log):       # So all logging comes from the same line number,
        log.info(to_log)       #   for vertical alignment in the logging window.

    ordrs = c.orders.copy()    # Independent copy to allow deletes
    for id in ordrs:
        o    = get_order(id)
        sec  = o.sid ; sym = sec.symbol
        oid  = o.id if log_ids else ''
        cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
        prc  = '%.2f' % data[sec].price if sec in data else 'unknwn'
        if o.filled:        # Filled at least some
            trade  = 'Bot' if o.amount > 0 else 'Sold'
            filled = '{}'.format(o.amount)
            if o.filled == o.amount:    # complete
                if 0 < c.orders[o.id] < o.amount:
                    filled  = 'all/{}'.format(o.amount)
                del c.orders[o.id]
            else:
                done_prv       = c.orders[o.id]       # previously filled ttl
                filled_this    = o.filled - done_prv  # filled this time, can be 0
                c.orders[o.id] = o.filled             # save for increments math
                filled         = '{}/{}'.format(filled_this, o.amount)
            _orders(' {}      {} {} {} at {}   {} {}'.format(_minute(),
                trade, filled, sym, prc, cash, oid))
        else:
            canceled = 'canceled' if o.status == 2 else ''
            _orders(' {}         {} {} unfilled {} {}'.format(_minute(),
                    o.sid.symbol, o.amount, canceled, oid))
            if canceled: del c.orders[o.id]

    for oo_list in get_open_orders().values(): # Open orders list
        for o in oo_list:
            sec  = o.sid ; sym = sec.symbol
            oid  = o.id if log_ids else ''
            cash = 'cash {}'.format(int(c.portfolio.cash)) if log_cash else ''
            prc  = '%.2f' % data[sec].price if sec in data else 'unknwn'
            if o.status == 2:                  # Canceled
                trade = 'Buy' if o.amount > 0 else 'Sell'  # 'trade' was unset in this branch
                _orders(' {}    Canceled {} {} {} at {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, cash, oid))
                if o.id in c.orders: del c.orders[o.id]
            elif o.id not in c.orders:         # New
                c.orders[o.id] = 0
                trade = 'Buy' if o.amount > 0 else 'Sell'
                if o.limit:                    # Limit order
                    _orders(' {}   {} {} {} now {} limit {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, o.limit, cash, oid))
                elif o.stop:                   # Stop order
                    _orders(' {}   {} {} {} now {} stop {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, o.stop, cash, oid))
                else:                          # Market order
                    _orders(' {}   {} {} {} at {}   {} {}'.format(_minute(),
                        trade, o.amount, sym, prc, cash, oid))

def do_unsettled_funds_exist(context):

    # Only to be used for live trading!
    
    if context.portfolio.cash != context.account.settled_cash:
        return True

def check_last_sale(context):

    #Only to be used for backtesting!
 
    open_orders = get_open_orders()
    most_recent_trade = []
 
    if open_orders:
        for sec, order in open_orders.iteritems():
            for oo in order:
                if oo.amount < 0:
                    most_recent_trade.append(oo.created)
    if len(most_recent_trade) > 0:
        context.last_sale = max(most_recent_trade)

        
def cash_settlement_date(context):

    # Only to be used for backtesting!
    
    if context.last_sale and (get_datetime() - context.last_sale).days < 3:
        return True

def pvr(context, data):
    ''' Custom chart and/or log of profit_vs_risk returns and related information
    '''
    # # # # # # # # # #  Options  # # # # # # # # # #
    record_max_lvrg = 1         # Maximum leverage encountered
    record_leverage = 0         # Leverage (context.account.leverage)
    record_q_return = 1         # Quantopian returns (percentage)
    record_pvr      = 1         # Profit vs Risk returns (percentage)
    record_pnl      = 0         # Profit-n-Loss
    record_shorting = 0         # Total value of any shorts
    record_overshrt = 0         # Shorts beyond longs+cash
    record_risk     = 0         # Risked, max cash spent or shorts beyond longs+cash
    record_risk_hi  = 1         # Highest risk overall
    record_cash     = 0         # Cash available
    record_cash_low = 0         # Any new lowest cash level
    logging         = 1         # Also to logging window conditionally (1) or not (0)
    log_method      = 'risk_hi' # 'daily' or 'risk_hi'

    from pytz import timezone   # Python will only do once, makes this portable.
                                #   Move to top of algo for better efficiency.
    c = context  # Brevity is the soul of wit -- Shakespeare [for efficiency, readability]
    if 'pvr' not in c:
        date_strt = get_environment('start').date()
        date_end  = get_environment('end').date()
        cash_low  = c.portfolio.starting_cash
        mode      = get_environment('data_frequency')
        c.pvr = {
            'max_lvrg': 0,
            'risk_hi' : 0,
            'days'    : 0.0,
            'date_prv': '',
            'cash_low': cash_low,
            'date_end': date_end,
            'mode'    : mode,
            'run_str' : '{} to {}  {}  {}'.format(date_strt,date_end,int(cash_low),mode)
        }
        log.info(c.pvr['run_str'])
    pvr_rtrn     = 0            # Profit vs Risk returns based on maximum spent
    profit_loss  = 0            # Profit-n-loss
    shorts       = 0            # Shorts value
    longs        = 0            # Longs  value
    overshorts   = 0            # Shorts value beyond longs plus cash
    new_risk_hi  = 0
    new_cash_low = 0                           # To trigger logging in cash_low case
    lvrg         = c.account.leverage          # Standard leverage, in-house
    date         = get_datetime().date()       # To trigger logging in daily case
    cash         = c.portfolio.cash
    start        = c.portfolio.starting_cash
    cash_dip     = int(max(0, start - cash))
    q_rtrn       = 100 * (c.portfolio.portfolio_value - start) / start

    if int(cash) < c.pvr['cash_low']:                # New cash low
        new_cash_low = 1
        c.pvr['cash_low']   = int(cash)
        if record_cash_low:
            record(CashLow = int(c.pvr['cash_low'])) # Lowest cash level hit

    if record_max_lvrg:
        if c.account.leverage > c.pvr['max_lvrg']:
            c.pvr['max_lvrg'] = c.account.leverage
            record(MaxLv = c.pvr['max_lvrg'])        # Maximum leverage

    if record_pnl:
        profit_loss = c.portfolio.pnl
        record(PnL = profit_loss)                    # "Profit and Loss" in dollars

    for p in c.portfolio.positions:
        shrs = c.portfolio.positions[p].amount
        if shrs < 0:
            shorts += int(abs(shrs * data[p].price))
        if shrs > 0:
            longs  += int(shrs * data[p].price)

    if shorts > longs + cash: overshorts = shorts             # Shorts when too high
    if record_shorting: record(Shorts  = shorts)              # Shorts value as a positive
    if record_overshrt: record(OvrShrt = overshorts)          # Shorts value as a positive
    if record_cash:     record(Cash = int(c.portfolio.cash))  # Cash
    if record_leverage: record(Lvrg = c.account.leverage)     # Leverage

    risk = int(max(cash_dip, shorts))
    if record_risk: record(Risk = risk)       # Amount in play, maximum of shorts or cash used

    if risk > c.pvr['risk_hi']:
        c.pvr['risk_hi'] = risk
        new_risk_hi = 1

        if record_risk_hi:
            record(RiskHi = c.pvr['risk_hi']) # Highest risk overall

    if record_pvr:      # Profit_vs_Risk returns based on max amount actually spent (risk high)
        if c.pvr['risk_hi'] != 0:     # Avoid zero-divide
            pvr_rtrn = 100 * (c.portfolio.portfolio_value - start) / c.pvr['risk_hi']
            record(PvR = pvr_rtrn)            # Profit_vs_Risk returns

    if record_q_return:
        record(QRet = q_rtrn)                 # Quantopian returns to compare to pvr returns curve

    def _minute():   # To preface each line with minute of the day.
        if get_environment('data_frequency') == 'minute':
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
            return str(minute).rjust(3)
        return ''    # Daily mode, just leave it out.

    def _pvr_():
            log.info('PvR {} %/day     {}'.format(
                '%.4f' % (pvr_rtrn / c.pvr['days']), c.pvr['run_str']))
            log.info('  Profited {} on {} activated/transacted for PvR of {}%'.format(
                '%.0f' % (c.portfolio.portfolio_value - start), '%.0f' % c.pvr['risk_hi'],
                '%.1f' % pvr_rtrn))
            log.info('  QRet {} PvR {} CshLw {} MxLv {} RskHi {} Shrts {}'.format(
                '%.2f' % q_rtrn, '%.2f' % pvr_rtrn, '%.0f' % c.pvr['cash_low'],
                '%.2f' % c.pvr['max_lvrg'], '%.0f' % c.pvr['risk_hi'], '%.0f' % shorts))

    if logging:
        if log_method == 'risk_hi' and new_risk_hi \
          or log_method == 'daily' and c.pvr['date_prv'] != date \
          or new_cash_low:
            qret    = ' QRet '   + '%.1f' % q_rtrn
            lv      = ' Lv '     + '%.1f' % lvrg              if record_leverage else ''
            pvr     = ' PvR '    + '%.1f' % pvr_rtrn          if record_pvr      else ''
            pnl     = ' PnL '    + '%.0f' % profit_loss       if record_pnl      else ''
            csh     = ' Cash '   + '%.0f' % cash              if record_cash     else ''
            shrt    = ' Shrt '   + '%.0f' % shorts            if record_shorting else ''
            ovrshrt = ' OvrShrt ' + '%.0f' % overshorts     if record_overshrt else ''
            risk    = ' Risk '   + '%.0f' % risk              if record_risk     else ''
            mxlv    = ' MaxLv '  + '%.2f' % c.pvr['max_lvrg'] if record_max_lvrg else ''
            csh_lw  = ' CshLw '  + '%.0f' % c.pvr['cash_low'] if record_cash_low else ''
            rsk_hi  = ' RskHi '  + '%.0f' % c.pvr['risk_hi']  if record_risk_hi  else ''
            log.info('{}{}{}{}{}{}{}{}{}{}{}{}'.format(_minute(),
               lv, mxlv, qret, pvr, pnl, csh, csh_lw, shrt, ovrshrt, risk, rsk_hi))
    if c.pvr['date_prv'] != date: c.pvr['days'] += 1.0
    if c.pvr['days'] % 130 == 0 and _minute() == '100': _pvr_()
    c.pvr['date_prv'] = date
    if c.pvr['date_end'] == date:
        # Summary on last minute of last day.
        # If using schedule_function(), backtest last day/time may need to match for this to execute.
        if 'pvr_summary_done' not in c: c.pvr_summary_done = 0
        log_summary = 0
        if c.pvr['mode'] == 'daily' and get_datetime().date() == c.pvr['date_end']:
            log_summary = 1
        elif c.pvr['mode'] == 'minute' and get_datetime() == get_environment('end'):
            log_summary = 1
        if log_summary and not c.pvr_summary_done:
            _pvr_()
            c.pvr_summary_done = 1

def minut():   # To preface each line with the minute of the day.
               # Added to be used in trade()
    if get_environment('data_frequency') == 'minute':
        bar_dt = get_datetime().astimezone(timezone('US/Eastern'))
        minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)
        return str(minute).rjust(3)
    return ''    # Daily mode, just leave it out.


A tiny correction to the last post: the call to the check_last_sale subroutine should only be made in backtesting, i.e., the corresponding piece of code in the trade subroutine should read

if arena not in ['IB', 'ROBINHOOD']:
    # Backtesting
    check_last_sale(context)

instead of just

check_last_sale(context)

Grant,

Interesting that it holds up to the T+3 restriction.

I hope I implemented the 3-day delay correctly ...

I added some log.info's to the allocate routine to see if the optimizer ever had trouble finding a solution, like this:

if res.success:  
    log.info("AOK scipy.optimize res.success=True")  
    allocation = res.x  
    allocation[allocation<0] = 0  
    denom = np.sum(allocation)  
    if denom > 0:  
        allocation = allocation/denom  
else:  
    log.info("WRN scipy.optimize res.success=False")  
    allocation = np.copy(context.x0)  

Running a backtest from 11-Jan-2016 to 15-Jan-2016, there seem to be some days where the optimizer is unable to find a solution and falls back to an equal-weight default. From the run log, it looks to me like the optimizer found a solution on 2 out of 5 days:

2016-01-11 pvr:359 INFO 2016-01-11 to 2016-01-15  10000  minute  
2016-01-11 allocate:178 INFO WRN scipy.optimize res.success=False  
2016-01-12 allocate:178 INFO WRN scipy.optimize res.success=False  
2016-01-12 trade:112 INFO    61 EDV 0.000 ==> 0.250  
2016-01-12 trade:112 INFO    61 TLT 0.000 ==> 0.250  
2016-01-12 trade:112 INFO    61 RSP 0.000 ==> 0.250  
2016-01-12 trade:112 INFO    61 XIV 0.000 ==> 0.250  
2016-01-12 _orders:241 INFO  61   Buy 34 RSP at 72.04   cash 10000 d96f007e8b4e456a85483d74d25354af  
2016-01-12 _orders:241 INFO  61   Buy 20 TLT at 122.81   cash 10000 ad6906801e6a4d09a4ef7a130d22b9ea  
2016-01-12 _orders:241 INFO  61   Buy 114 XIV at 21.53   cash 10000 d6d2460edd8a470796051c646c5a7188  
2016-01-12 _orders:241 INFO  61   Buy 21 EDV at 116.57   cash 10000 b566d65685f54fa28e086e1ee23e409e  
2016-01-12 _orders:241 INFO  62      Bot 34 RSP at 72.04   cash 2645 d96f007e8b4e456a85483d74d25354af  
2016-01-12 _orders:241 INFO  62         EDV 21 unfilled  b566d65685f54fa28e086e1ee23e409e  
2016-01-12 _orders:241 INFO  62      Bot 114 XIV at 21.48   cash 2645 d6d2460edd8a470796051c646c5a7188  
2016-01-12 _orders:241 INFO  62      Bot 20 TLT at 122.82   cash 2645 ad6906801e6a4d09a4ef7a130d22b9ea  
2016-01-12 _orders:241 INFO  63      Bot 21 EDV at 116.48   cash 196 b566d65685f54fa28e086e1ee23e409e  
2016-01-12 pvr:453 INFO 63 MaxLv 0.98 QRet 0.1 PvR 0.1 RskHi 9803  
2016-01-13 allocate:171 INFO AOK scipy.optimize res.success=True  
2016-01-14 allocate:178 INFO WRN scipy.optimize res.success=False  
2016-01-15 allocate:171 INFO AOK scipy.optimize res.success=True  
2016-01-15 _pvr_:429 INFO PvR -0.5568 %/day     2016-01-11 to 2016-01-15  10000  minute  
2016-01-15 _pvr_:432 INFO Profited -273 on 9803 activated/transacted for PvR of -2.8%  
2016-01-15 _pvr_:435 INFO QRet -2.73 PvR -2.78 CshLw 196 MxLv 0.98 RskHi 9803 Shrts 0  
End of logs.  

Comments or perspective? Or is this normal operation?

Thanks Richard,

I'm not sure what's going on with the optimizer. Note that there is an option to display what's going on under the hood (see http://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html and the 'disp' flag).
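To make that 'disp' option concrete, here's a minimal sketch (with a made-up two-asset covariance matrix, not the algo's data) of turning on SLSQP's convergence printout:

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-asset minimum-variance problem; the covariance matrix is made up.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

def variance(x):
    return x.dot(cov).dot(x)

cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},)  # fully invested
bnds = ((0.0, 1.0), (0.0, 1.0))                             # long only

res = minimize(variance, np.array([0.5, 0.5]), method='SLSQP',
               constraints=cons, bounds=bnds,
               options={'disp': True, 'maxiter': 100})  # prints exit mode/iterations
print(res.success, res.x)
```

With 'disp' on, SLSQP reports its exit mode, so a silent fallback to the previous allocation becomes visible in the logs.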

As I noted above, there may be a closed-form solution (at least for an equality constraint, which may be sufficient). I think this applies:

http://quant.stackexchange.com/questions/18160/beta-constrained-markowitz-minimum-variance-portfolio-closed-form-solution
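For the simplest case, with only the full-investment equality constraint, the minimum-variance weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1), which never fails to converge. A sketch, using a made-up covariance matrix:

```python
import numpy as np

def min_var_weights(cov):
    """Closed-form minimum-variance weights subject only to sum(w) == 1.
    Note: weights can be negative (short) unless further constrained."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Sigma^-1 * 1, without forming the inverse
    return w / w.sum()               # normalize so the weights sum to 1

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_var_weights(cov)  # puts more weight on the lower-variance asset
```

The inequality constraint on ret_norm and the long-only bounds are what force the numerical optimizer here; without them this two-liner suffices.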

Another approach would be to use CVXOPT instead of scipy.optimize to see if it always converges.

Grant

Hi all,
I was wondering, given the algorithm, how it would be possible to do performance attribution on the different instruments.
Many thanks
Andrew

Andrew,

This might be relevant:

https://www.quantopian.com/posts/round-trip-trade-analysis

Grant

@ Grant. Thanks for sharing. I thought SPY, SH & TLT were already a great combo for minimum-variance optimization (mvo), until you presented this combination (RSP, EDV, TLT, XIV)!
@All. Any insights/tips on how this combination works so well with mvo? Or any related articles on constructing a portfolio for mvo?

From a cursory look, the performance is attributed to the high return of XIV, and also to the negative correlation of returns between RSP/XIV and EDV/TLT (shown in the first half of the notebook). However, my hand-picked stocks based on those 2 criteria did not do so well with mvo, as shown in the notebook. If only we could reverse-engineer why this combination does so well with mvo. :)


I am also attaching my modification to the strategy, which attempts to reduce the volatility by limiting the XIV weight during rough times.

# David Edwards, https://www.quantopian.com/posts/long-only-strategy-allocate-between-bull-and-bear-market-portfolios
# James Christopher, https://www.quantopian.com/posts/pipeline-calculating-beta
# Thomas Wiecki, https://www.quantopian.com/posts/beta-constrained-markowitz-minimum-variance-portfolio
# Grant Kiehne, https://www.quantopian.com/posts/minimum-variance-w-slash-constraint
# 
# 2010-12-01 to 2016-02-04 
import numpy as np
import scipy


def returns_confidence(R):
    '''
    Calculates the sum of the returns over each trailing
    window in the returns series R. It gains confidence 
    for each positive return and loses some for each negative
    window.
    
    Param:
        R: array/series
            Series of portfolio or benchmark returns

    Returns: float
        -1.0 <= Confidence level <= 1.0
    
    '''
    x = 1.0 / len(R)
    signal = 0
    for i in range(1, len(R)):
        r = R.tail(i).sum()
        if r > 0:
            signal += x
        elif r < 0:
            signal -= x
    return signal 


def initialize(context):

    context.stocks =  [sid(24744),sid(22887),sid(23921),sid(40516)]
    
    context.hist_freq = [5*390,'1m','price']
    context.conf_freq = [200,'1d', 'price']
    
    context.benchmark = sid(8554)
    context.conf_arr = []
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = np.zeros_like(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.05
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    context.long_only=True
    if context.long_only:
        set_long_only()
    
def handle_data(context,data):    
    #record(leverage = context.account.leverage)
    pass

def _beta(ts, benchmark_ret, benchmark_var):
    return np.cov(ts, benchmark_ret)[0, 1] / benchmark_var

def get_beta(rets,benchmark,stocks):
    out = np.zeros_like(stocks)
    returns = rets[stocks]
    spy_returns = rets.as_matrix([benchmark]).T
    spy_returns_var = np.var(spy_returns)
    out[:] = returns.apply(_beta, args=(spy_returns,spy_returns_var,))
    return out

def get_slope(y,sz=50):
    if len(y) < sz:
        return 0.0
    else:
        y = y[-sz+1:-1]
    x = list(range(len(y)))
    slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)
    return slope

def allocate(context, data):
    
    daily_prices = history(*context.conf_freq).fillna(method='ffill')                 
    daily_ret_pd = daily_prices.pct_change()[1:].fillna(value=0)        
    confidence = returns_confidence(daily_ret_pd[[context.benchmark]].T.mean())
    context.conf_arr.append(confidence)
    # #set confidence to 0 if market is down or trending down
    #if confidence < 0.8 or get_slope(context.conf_arr) < 0.0: confidence = 0.0
    record(conf=confidence)
    
    prices = history(*context.hist_freq).fillna(method='ffill')                 
    ret_pd = prices.pct_change()[1:].fillna(value=0)
        
    betas = get_beta(ret_pd,context.benchmark,context.stocks)    
    
    ret=ret_pd.as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()[context.stocks]
    ret_std = prices.pct_change().std()[context.stocks]
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    if confidence < 0.5:
        bnds = ((0.,1.),(0.,1.),(0.,1.),(0.,0.))  # pin the XIV weight (last asset) to zero
    else:
        bnds = tuple([(0.0,1.0)]*len(context.x0))
    
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},            
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps},
           )
    #{'type': 'eq', 'fun': lambda x:  np.dot(x,betas)-confidence},
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
    
    if res.success:
        allocation = res.x
        if context.long_only: allocation[allocation<0] = 0
        denom = np.sum(np.abs(allocation))
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
    
    context.n += 1
    context.s += allocation
    
    b = np.dot(allocation,betas)
    #record(beta=b)
    
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
    for i,stock in enumerate(context.stocks):
      if i < 2:
          record(stock.symbol,allocation[i])
    
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

My very strong advice would be to stop dikking around with 5 or 6 year backtests. Forget about looking at instruments with history so short they have been around for the blink of an eye. And forget about Quantopian if it only provides you with such data. Zipline would be a far better option. You need to obtain or manufacture bond data going back over a very long period of time. If you can't find it, make your own using interest rate data from the Fed. There are no ETFs going back beyond 1996, so forget them too and either use stock indices or mutual fund data. You won't be able to lay your hands on minute data, of course.

This is an excellent forum. There are many excellent drafters of code. But there don't seem to be many people who know what it means to live and trade through many different market cycles.

@ Anthony, I'm just a hack, while you might actually know what you are doing. I'm definitely not promoting this as a sensible investment. I agree that the backtest time frame is too short for this algo. It could be that the bull market gets amplified, and then as it flattens out toward the end, the algo just gets lucky. It is limited by the availability of data for XIV. So the limited time frame is a risk everyone should be aware of. Maybe there is a way to cook up a proxy for XIV?

@ Ted, looks like an improvement, but I'd worry about over-fitting (particularly in light of Anthony's comments). XIV is complete voodoo to me:

The investment seeks to replicate, net of expenses, the inverse of the daily performance of the S&P 500 VIX Short-Term Futures index. The index was designed to provide investors with exposure to one or more maturities of futures contracts on the VIX, which reflects implied volatility of the S&P 500 Index at various points along the volatility forward curve. The calculation of the VIX is based on prices of put and call options on the S&P 500 Index. The ETNs are linked to the daily inverse return of the index and do not represent an investment in the inverse of the VIX.

Kinda scary that the strategy hinges on something that sounds pretty far removed from anything that a mere mortal could understand.

Regarding minimum variance optimization, keep in mind that the algo has a constraint that should tilt the portfolio toward securities that have higher volatility-normalized returns over the trailing window (at least that's what I had in mind). I've attached a backtest of your version, without the constraint, to illustrate the importance of the constraint.
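To illustrate that constraint in isolation (with synthetic returns, not the algo's data): ret_norm is a per-asset mean/std score over the trailing window, and the 'ineq' constraint requires the weighted score to clear context.eps:

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic minute returns for 3 assets with made-up drifts and volatilities.
ret = rng.normal(loc=[0.001, -0.0005, 0.002],
                 scale=[0.01, 0.02, 0.03],
                 size=(500, 3))

# Per-asset volatility-normalized mean return (a Sharpe-like score).
ret_norm = ret.mean(axis=0) / ret.std(axis=0)

eps = 0.01
x = np.array([0.5, 0.0, 0.5])  # candidate weights
# The 'ineq' constraint hands this value to the optimizer; feasible points
# must make it >= 0, i.e. the portfolio's weighted score must be at least eps.
slack = np.dot(x, ret_norm) - eps
```

Weights on assets with low or negative ret_norm shrink the slack, so the optimizer is pushed toward higher-scoring assets even while minimizing variance.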

# David Edwards, https://www.quantopian.com/posts/long-only-strategy-allocate-between-bull-and-bear-market-portfolios
# James Christopher, https://www.quantopian.com/posts/pipeline-calculating-beta
# Thomas Wiecki, https://www.quantopian.com/posts/beta-constrained-markowitz-minimum-variance-portfolio
# Grant Kiehne, https://www.quantopian.com/posts/minimum-variance-w-slash-constraint
# 
# 2010-12-01 to 2016-02-04 
import numpy as np
import scipy


def returns_confidence(R):
    '''
    Calculates the sum of the returns over each trailing
    window in the returns series R. It gains confidence 
    for each positive return and loses some for each negative
    window.
    
    Param:
        R: array/series
            Series of portfolio or benchmark returns

    Returns: float
        -1.0 <= Confidence level <= 1.0
    
    '''
    x = 1.0 / len(R)
    signal = 0
    for i in range(1, len(R)):
        r = R.tail(i).sum()
        if r > 0:
            signal += x
        elif r < 0:
            signal -= x
    return signal 


def initialize(context):

    context.stocks =  [sid(24744),sid(22887),sid(23921),sid(40516)]
    
    context.hist_freq = [5*390,'1m','price']
    context.conf_freq = [200,'1d', 'price']
    
    context.benchmark = sid(8554)
    context.conf_arr = []
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = np.zeros_like(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.05
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    context.long_only=True
    if context.long_only:
        set_long_only()
    
def handle_data(context,data):    
    #record(leverage = context.account.leverage)
    pass

def _beta(ts, benchmark_ret, benchmark_var):
    return np.cov(ts, benchmark_ret)[0, 1] / benchmark_var

def get_beta(rets,benchmark,stocks):
    out = np.zeros_like(stocks)
    returns = rets[stocks]
    spy_returns = rets.as_matrix([benchmark]).T
    spy_returns_var = np.var(spy_returns)
    out[:] = returns.apply(_beta, args=(spy_returns,spy_returns_var,))
    return out

def get_slope(y,sz=50):
    if len(y) < sz:
        return 0.0
    else:
        y = y[-sz+1:-1]
    x = list(range(len(y)))
    slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)
    return slope

def allocate(context, data):
    
    daily_prices = history(*context.conf_freq).fillna(method='ffill')                 
    daily_ret_pd = daily_prices.pct_change()[1:].fillna(value=0)        
    confidence = returns_confidence(daily_ret_pd[[context.benchmark]].T.mean())
    context.conf_arr.append(confidence)
    # #set confidence to 0 if market is down or trending down
    #if confidence < 0.8 or get_slope(context.conf_arr) < 0.0: confidence = 0.0
    record(conf=confidence)
    
    prices = history(*context.hist_freq).fillna(method='ffill')                 
    ret_pd = prices.pct_change()[1:].fillna(value=0)
        
    betas = get_beta(ret_pd,context.benchmark,context.stocks)    
    
    ret=ret_pd.as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()[context.stocks]
    ret_std = prices.pct_change().std()[context.stocks]
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    if confidence < 0.5:
        bnds = ((0.,1.),(0.,1.),(0.,1.),(0.,0.))  # pin the XIV weight (last asset) to zero
    else:
        bnds = tuple([(0.0,1.0)]*len(context.x0))
    
    # cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},            
    #         {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps},
    #        )
    
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0})
    
    #{'type': 'eq', 'fun': lambda x:  np.dot(x,betas)-confidence},
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
    
    if res.success:
        allocation = res.x
        if context.long_only: allocation[allocation<0] = 0
        denom = np.sum(np.abs(allocation))
        if denom > 0:
            allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
    
    context.n += 1
    context.s += allocation
    
    b = np.dot(allocation,betas)
    #record(beta=b)
    
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
    for i,stock in enumerate(context.stocks):
      if i < 2:
          record(stock.symbol,allocation[i])
    
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

Not at all.
I enjoy your work and admire your coding. But I have traded more cock ups over the years than I care to recall!

Just trying to figure out what a "cock up" is? Never traded that market myself.

Also, Anthony is correct. To validate a system, I use at least fifteen thousand bars, up to one hundred twenty-five thousand bars, in testing. I separate out the trending data and non-trending data, create synthetic random data, and patch it all together in various configurations. If your model holds up to this, then you have no curve fit.

Also, if your logic is sound, you should be experiencing positive slippage on initial and leveraging positions; otherwise I don't think a system will hold up to institutional trading size, at least. m
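A rough sketch of the kind of spliced synthetic series described above; all parameters (drift, volatility, segment lengths) are arbitrary choices for illustration:

```python
import numpy as np

def synthetic_prices(n_bars, drift, vol, start=100.0, seed=None):
    """Geometric random walk: drift > 0 trends up, drift = 0 is directionless."""
    rng = np.random.RandomState(seed)
    log_rets = rng.normal(loc=drift, scale=vol, size=n_bars)
    return start * np.exp(np.cumsum(log_rets))

# Patch trending and non-trending regimes together, as the post suggests,
# continuing each segment from where the previous one ended.
trend   = synthetic_prices(5000, drift=2e-4, vol=0.01, seed=1)
chop    = synthetic_prices(5000, drift=0.0,  vol=0.01, start=trend[-1], seed=2)
spliced = np.concatenate([trend, chop])
```

Reshuffling which regimes appear, and in what order, gives the "various configurations" against which a strategy can be stress-tested for curve fit.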

Hey Grant --

I was wondering if you could help walk me through what this portion of the code is doing?:

    bnds = []  
    limits = [0,1]  
    for stock in context.stocks:  
        bnds.append(limits)  
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  
    res= scipy.optimize.minimize(variance, context.x0, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)  
    if res.success:  
        allocation = res.x  
        allocation[allocation<0] = 0  
        denom = np.sum(allocation)  
        if denom > 0 and np.dot(allocation,ret_norm) >= 0:  
            allocation = allocation/denom  
    else:  
        allocation = np.copy(context.x0)  
    context.n += 1  
    context.s += allocation  

It looks like the main goal of the algo is to get to the scheduled function call trade:

def trade(context, data):  
    print ('Current context.n: ',context.n)  
    if context.n > 0:  
        allocation = context.s/context.n  
        print('Required Allocation: ',allocation)  
    else:  
        return  
    context.n = 0  
    context.s = np.zeros_like(context.stocks)  
    context.x0 = allocation  
    if get_open_orders():  
        return  
    for i,stock in enumerate(context.stocks):  
        order_target_percent(stock,allocation[i])  

with order_target_percent(stock, allocation[i]) as the trade-execution portion. I changed some of the stocks in context.stocks and reran the algo, and printed out context.n and allocation whenever trade() was called; I got the log output below. Can you help me understand what context.n is? What's its relationship to allocation? Even though allocation is context.s / context.n, it doesn't seem to change even when n increases or decreases:

2015-12-01 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-08 -- PRINT('Current context.n: ', 5)
2015-12-08 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-15 -- PRINT('Current context.n: ', 5)
2015-12-15 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-22 -- PRINT('Current context.n: ', 5)
2015-12-22 -- PRINT('Required Allocation: ', array([0.25, 0.25, 0.25, 0.25], dtype=object))
2015-12-29 -- PRINT('Current context.n: ', 4)

could help walk me through what this portion of the code is doing?

    # set up the optimization  
    bnds = []  
    limits = [0,1]  
    for stock in context.stocks:  
        bnds.append(limits)  
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  
    # run the optimizer  
    res= scipy.optimize.minimize(variance, context.x0, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds) 

    # determine the allocation  
    if res.success:  
        allocation = res.x  
        allocation[allocation<0] = 0  # clip off any negative terms  
        denom = np.sum(allocation)  
        if denom > 0 and np.dot(allocation,ret_norm) >= 0:  
            allocation = allocation/denom  
    else:  
        allocation = np.copy(context.x0) 

    # keep track of the number of times run & a running sum of the allocations (so the average can be computed)  
    context.n += 1  
    context.s += allocation  

As noted above, the idea is to accumulate allocations over some period of time, and then average them. However, you'll get the same array if the allocation is always the same, regardless of n.
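As a toy illustration of that running sum/count bookkeeping, with made-up allocations:

```python
import numpy as np

s = np.zeros(4)   # running sum of daily allocations (context.s)
n = 0             # number of allocations accumulated (context.n)

daily_allocations = [np.array([0.4, 0.2, 0.2, 0.2]),
                     np.array([0.4, 0.2, 0.2, 0.2]),
                     np.array([0.1, 0.3, 0.3, 0.3])]

for a in daily_allocations:  # what allocate() does each day
    n += 1
    s += a

avg = s / n  # what trade() orders against, before resetting s and n to zero
```

Here avg differs from any single day's allocation; but if every daily allocation were identical (e.g. the equal-weight fallback on every day), avg would equal it exactly, regardless of n, which is why the printed allocation can look frozen.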

Here's a first crack at a long-short version. Maybe someone has insights into improving it. Note that I switched to SPY, thinking that it would be subject to less slippage than RSP at higher capital.

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(8554),   # SPY
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = 1.0*np.zeros_like(context.stocks)/len(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.1
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
def handle_data(context,data):
    
    record(leverage = context.account.leverage)

def allocate(context, data):
    
    prices = history(5*390,'1m', 'price')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = [(-1,1)]*len(context.stocks)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(np.absolute(x))-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
    
    allocation = res.x
    denom = np.sum(np.absolute(allocation))
    
    if res.success and denom > 0:
        allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
    
    context.n += 1
    context.s += allocation
              
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
        denom = np.sum(np.absolute(allocation))
        if denom > 0:
            allocation = allocation/np.sum(np.absolute(allocation))
        else:
            return
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
    record(SPY = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

Hi guys, is there a version of this min variance strategy that is applicable to a long-short strategy or the fact that the weights are all positive should be considered as a constraint ?

Francesco,

Here's basically the version I posted above (Mar. 6, 2016), but brought up to Q2 standards. It supports both long and short positions.

Grant

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(8554),   # SPY
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = 1.0*np.zeros_like(context.stocks)/len(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.1
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
def allocate(context, data):
    
    prices = data.history(context.stocks,'price',5*390,'1m')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = [(-1,1)]*len(context.stocks)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(np.absolute(x))-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
    
    allocation = res.x
    denom = np.sum(np.absolute(allocation))
    
    if res.success and denom > 0:
        allocation = allocation/denom
    else:
        allocation = np.copy(context.x0)
    
    context.n += 1
    context.s += allocation
              
def trade(context, data):
    
    if context.n > 0:
        allocation = context.s/context.n
        denom = np.sum(np.absolute(allocation))
        if denom > 0:
            allocation = allocation/np.sum(np.absolute(allocation))
        else:
            return
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
    record(SPY = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
    
    record(leverage = context.account.leverage)
         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)
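As a quick sanity check on the `variance`/`jac_variance` pair above: the gradient of x'Σx is 2Σx, which can be verified against a finite-difference approximation with scipy's `check_grad`. This sketch uses synthetic data (the return matrix is made up) rather than the Quantopian API:

```python
import numpy as np
from scipy.optimize import check_grad

rng = np.random.default_rng(1)
p = rng.normal(size=(200, 4))  # synthetic stand-in for the returns matrix

def variance(x, *args):
    # same construction as in the algo: covariance of the columns of p
    Acov = np.cov(np.squeeze(np.asarray(args)).T)
    return x @ Acov @ x

def jac_variance(x, *args):
    Acov = np.cov(np.squeeze(np.asarray(args)).T)
    return 2 * Acov @ x

# check_grad returns the norm of the difference between the analytic
# gradient and a finite-difference estimate; it should be near zero
err = check_grad(variance, jac_variance, np.ones(4) / 4.0, p)
print(err)
```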

Thanks a lot Grant for the useful answer!
As an alternative, do you think it could also make sense to calculate the weights of the long and short positions independently, and then minimize the variance of the short and long returns separately?
Cheers
Francesco

Dear Grant,
just another question: is the calculation of the returns still correct for a portfolio that also includes both long and short positions, i.e.

ret = prices.pct_change()[1:].as_matrix(context.stocks)  

Shouldn't this be valid only for long positions?
Thanks again
Francesco

I think that the minimization of the variance works using the returns as written. The constraints are what determine whether it is long only, or if long and short are allowed. The first one is normalization of the weights, allowing both positive and negative. The second one, I think, is a kind of long-short mean reversion constraint, but I gotta dwell on that one a bit.

  cons = ({'type': 'eq', 'fun': lambda x:  np.sum(np.absolute(x))-1.0},  
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})  
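To see how these two constraints behave outside of Quantopian, here is a minimal sketch with synthetic minute returns (the data, seed, and eps choice are made up for illustration; only numpy and scipy are assumed):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
ret = rng.normal(0.0, 0.001, size=(500, 4))    # stand-in for minute returns
ret_norm = ret.mean(axis=0) / ret.std(axis=0)  # mean/std, as in the algo

# Keep eps below the best attainable dot(x, ret_norm) so the problem is feasible
eps = 0.5 * np.max(np.abs(ret_norm))

def variance(x, r):
    return x @ np.cov(r.T) @ x

cons = ({'type': 'eq',   'fun': lambda x: np.sum(np.absolute(x)) - 1.0},  # gross exposure = 1
        {'type': 'ineq', 'fun': lambda x: x @ ret_norm - eps})            # return floor

res = minimize(variance, np.ones(4) / 4.0, args=(ret,), method='SLSQP',
               bounds=[(-1, 1)] * 4, constraints=cons)

print(res.success, res.x)  # with (-1, 1) bounds, negative (short) weights are allowed
```

Note that the normalization uses the sum of absolute weights, so a solution with some short weights still has total gross exposure of 1.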

What do you mean "breaks down" - times out, runs out of memory, other error, poor financial performance?

By the way, you can drop the call to handle_data if it does nothing.

If you put N=20, for example, in the attached backtest code at line 30,

then at line 223

 if res.success and denom > 0:  
        allocation = allocation/denom  
    else:  
        print 'failed min'  
        allocation = np.copy(context.x0)  

the minimization function fails at every step.

"""
This is a sample mean-reversion algorithm on Quantopian for you to test and adapt.
This example uses a dynamic stock selector, pipeline, to select stocks to trade. 
It orders stocks from the top 1% of the previous day's dollar-volume (liquid
stocks).

Algorithm investment thesis:
Top-performing stocks from last week will do worse this week, and vice-versa.

Every Monday, we rank high dollar-volume stocks based on their previous 5 day returns.
We long the bottom 10% of stocks with the WORST returns over the past 5 days.
We short the top 10% of stocks with the BEST returns over the past 5 days.

This type of algorithm may be used in live trading and in the Quantopian Open.
"""

# Import the libraries we will use here.
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume, Returns
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage
from quantopian.pipeline.data import morningstar

import numpy as np
import pandas as pd
import math
import scipy

N_STOCKS = 10
IS_LIQUID = 1e7


class AvgDailyDollarVolumeTraded(CustomFactor):
    inputs = [USEquityPricing.close, USEquityPricing.volume]
    
    def compute(self, today, assets, out, close_price, volume):
        out[:] = np.mean(close_price * volume, axis=0)


def initialize(context):
    """
    Called once at the start of the program. Any one-time
    startup logic goes here.
    """
  
    # Rebalance on the first trading day of each week at 11AM.
    schedule_function(rebalance,
                      date_rules.every_day(),
                      time_rules.market_open(hours=1, minutes=30))

    # Record tracking variables at the end of each day.
    schedule_function(record_vars,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=1))

    
    context.n = 0
    context.s = np.zeros(N_STOCKS)
    context.x0 = 1.0*np.zeros(N_STOCKS)/float(N_STOCKS)
    context.x1 = 1.0*np.ones(N_STOCKS)/float(N_STOCKS)
    context.eps = 0.1
    #schedule_function(allocate,date_rules.every_day(),time_rules.market_open(hours=1))
    #schedule_function(trade,date_rules.every_day(),time_rules.market_open(hours=2))

    # Attach pipeline
    attach_pipeline(make_pipeline(context), 'ranking_pipeline')


def make_pipeline(context):
    """
    Create and return our pipeline.
    """
    pipe = Pipeline()
  
    # We only want to trade relatively liquid stocks.
    # Build a filter that only passes stocks that have $10,000,000 average
    # daily dollar volume over the last 20 days.
    dollar_volume = AvgDailyDollarVolumeTraded(window_length=20)
    is_liquid = (dollar_volume > IS_LIQUID)
    
    # We also don't want to trade penny stocks, which we define as any stock with an
    # average price of less than $5.00 over the last 200 days.
    sma_200 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=200)
    not_a_penny_stock = (sma_200 > 5)
        
    # Before we do any other ranking, we want to throw away these assets.
    initial_screen = (is_liquid & not_a_penny_stock)

    combined_rank = (
        dollar_volume.rank(mask=initial_screen)
    )
    pipe.add(combined_rank, 'combined_rank')

    selected = combined_rank.top(N_STOCKS)
   
    
    # The final output of our pipeline should only include 
    # the top/bottom N stocks by our criteria.
    pipe.set_screen(selected)
    
    pipe.add(selected, 'selected')
    
    return pipe


def before_trading_start(context, data):
    """
    Called every day before market open. This is where we get the securities
    that made it through the pipeline.
    """

    # Pipeline_output returns a pandas DataFrame with the results of our factors
    # and filters.
    context.output = pipeline_output('ranking_pipeline')

    context.selected = context.output[context.output['selected']]


    # A list of the securities that we want to order today.
    context.security_list = context.selected.index.tolist()

    # A set of the same securities, sets have faster lookup.
    context.security_set = set(context.security_list)
    context.stocks = context.security_list
    
def assign_weights(context, data):
    """
    Assign weights to our long and short target positions.
    """
    allocate(context, data)
    context.weights = context.x0
  
    
def rebalance(context,data):
    """
    This rebalancing function is called according to our schedule_function settings.
    """

    assign_weights(context, data)

    # For each security in our universe, order long or short positions according
    # to our context.long_secs and context.short_secs lists.
    # Use enumerate so each stock keeps its own weight index even when a
    # stock cannot trade (a manual counter incremented inside the if-block
    # would shift the weights out of alignment).
    for i, stock in enumerate(context.security_list):
        if data.can_trade(stock):
            order_target_percent(stock, context.weights[i])
            print stock
            print context.weights[i]
            
            
    # Sell all previously held positions not in our new context.security_list.
    for stock in context.portfolio.positions:
        if stock not in context.security_set and data.can_trade(stock):
            order_target_percent(stock, 0)
    
    # Min variance calculation
    if context.n > 0:
        allocation = context.s/context.n
        denom = np.sum(np.absolute(allocation))
        if denom > 0:
            allocation = allocation/np.sum(np.absolute(allocation))
        else:
            return
    else:
        return
    context.n = 0
    context.s = np.zeros(N_STOCKS)  # context.stocks holds Equity objects; zeros_like would give an object array
    context.x0 = allocation
    context.allocation = allocation


def record_vars(context, data):
    """
    This function is called at the end of each day and plots certain variables.
    """

    # Check how many long and short positions we have.
    longs = shorts = 0
    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            longs += 1
        if position.amount < 0:
            shorts += 1

    # Record and plot the leverage of our portfolio over time as well as the
    # number of long and short positions. Even in minute mode, only the end-of-day
    # leverage is plotted.
    record(num_positions=len(context.portfolio.positions),
           exposure=context.account.net_leverage, 
           leverage=context.account.leverage)


def handle_data(context,data):
    """
    The handle_data function is called every minute. There is nothing that we want
    to do every minute in this algorithm.
    """
    pass


def allocate(context, data):
    
    prices = data.history(context.stocks,'price',5*390,'1m')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = [(-1,1)]*len(context.stocks)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(np.absolute(x))-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
    
    allocation = res.x
    denom = np.sum(np.absolute(allocation))
    
    if res.success and denom > 0:
        allocation = allocation/denom
    else:
        print 'failed min'
        allocation = np.copy(context.x0)
    
    context.n += 1
    context.s += allocation

    
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)



Got it to work for N = 50. Once the backtest completes, I'll post it.

I changed to:

N_STOCKS = 50  
context.eps = 1  

Can't say that I understand it at this point, but there is an interaction between the two settings. I may have the time to take another look later today.

"""
This is a sample mean-reversion algorithm on Quantopian for you to test and adapt.
This example uses a dynamic stock selector, pipeline, to select stocks to trade. 
It orders stocks from the top 1% of the previous day's dollar-volume (liquid
stocks).

Algorithm investment thesis:
Top-performing stocks from last week will do worse this week, and vice-versa.

Every Monday, we rank high dollar-volume stocks based on their previous 5 day returns.
We long the bottom 10% of stocks with the WORST returns over the past 5 days.
We short the top 10% of stocks with the BEST returns over the past 5 days.

This type of algorithm may be used in live trading and in the Quantopian Open.
"""

# Import the libraries we will use here.
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume, Returns
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage
from quantopian.pipeline.data import morningstar

import numpy as np
import pandas as pd
import math
import scipy

N_STOCKS = 50
IS_LIQUID = 1e7


class AvgDailyDollarVolumeTraded(CustomFactor):
    inputs = [USEquityPricing.close, USEquityPricing.volume]
    
    def compute(self, today, assets, out, close_price, volume):
        out[:] = np.mean(close_price * volume, axis=0)


def initialize(context):
    """
    Called once at the start of the program. Any one-time
    startup logic goes here.
    """
  
    # Rebalance on the first trading day of each week at 11AM.
    schedule_function(rebalance,
                      date_rules.every_day(),
                      time_rules.market_open(hours=1, minutes=30))

    # Record tracking variables at the end of each day.
    schedule_function(record_vars,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=1))

    
    context.n = 0
    context.s = np.zeros(N_STOCKS)
    context.x0 = 1.0*np.zeros(N_STOCKS)/float(N_STOCKS)
    context.x1 = 1.0*np.ones(N_STOCKS)/float(N_STOCKS)
    context.eps = 1
    #schedule_function(allocate,date_rules.every_day(),time_rules.market_open(hours=1))
    #schedule_function(trade,date_rules.every_day(),time_rules.market_open(hours=2))

    # Attach pipeline
    attach_pipeline(make_pipeline(context), 'ranking_pipeline')


def make_pipeline(context):
    """
    Create and return our pipeline.
    """
    pipe = Pipeline()
  
    # We only want to trade relatively liquid stocks.
    # Build a filter that only passes stocks that have $10,000,000 average
    # daily dollar volume over the last 20 days.
    dollar_volume = AvgDailyDollarVolumeTraded(window_length=20)
    is_liquid = (dollar_volume > IS_LIQUID)
    
    # We also don't want to trade penny stocks, which we define as any stock with an
    # average price of less than $5.00 over the last 200 days.
    sma_200 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=200)
    not_a_penny_stock = (sma_200 > 5)
        
    # Before we do any other ranking, we want to throw away these assets.
    initial_screen = (is_liquid & not_a_penny_stock)

    combined_rank = (
        dollar_volume.rank(mask=initial_screen)
    )
    pipe.add(combined_rank, 'combined_rank')

    selected = combined_rank.top(N_STOCKS)
   
    
    # The final output of our pipeline should only include 
    # the top/bottom N stocks by our criteria.
    pipe.set_screen(selected)
    
    pipe.add(selected, 'selected')
    
    return pipe


def before_trading_start(context, data):
    """
    Called every day before market open. This is where we get the securities
    that made it through the pipeline.
    """

    # Pipeline_output returns a pandas DataFrame with the results of our factors
    # and filters.
    context.output = pipeline_output('ranking_pipeline')

    context.selected = context.output[context.output['selected']]


    # A list of the securities that we want to order today.
    context.security_list = context.selected.index.tolist()

    # A set of the same securities, sets have faster lookup.
    context.security_set = set(context.security_list)
    context.stocks = context.security_list
    
def assign_weights(context, data):
    """
    Assign weights to our long and short target positions.
    """
    allocate(context, data)
    context.weights = context.x0
  
    
def rebalance(context,data):
    """
    This rebalancing function is called according to our schedule_function settings.
    """

    assign_weights(context, data)

    # For each security in our universe, order long or short positions according
    # to our context.long_secs and context.short_secs lists.
    # Use enumerate so each stock keeps its own weight index even when a
    # stock cannot trade (a manual counter incremented inside the if-block
    # would shift the weights out of alignment).
    for i, stock in enumerate(context.security_list):
        if data.can_trade(stock):
            order_target_percent(stock, context.weights[i])
            print stock
            print context.weights[i]
            
            
    # Sell all previously held positions not in our new context.security_list.
    for stock in context.portfolio.positions:
        if stock not in context.security_set and data.can_trade(stock):
            order_target_percent(stock, 0)
    
    # Min variance calculation
    if context.n > 0:
        allocation = context.s/context.n
        denom = np.sum(np.absolute(allocation))
        if denom > 0:
            allocation = allocation/np.sum(np.absolute(allocation))
        else:
            return
    else:
        return
    context.n = 0
    context.s = np.zeros(N_STOCKS)  # context.stocks holds Equity objects; zeros_like would give an object array
    context.x0 = allocation
    context.allocation = allocation


def record_vars(context, data):
    """
    This function is called at the end of each day and plots certain variables.
    """

    # Check how many long and short positions we have.
    longs = shorts = 0
    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            longs += 1
        if position.amount < 0:
            shorts += 1

    # Record and plot the leverage of our portfolio over time as well as the
    # number of long and short positions. Even in minute mode, only the end-of-day
    # leverage is plotted.
    record(num_positions=len(context.portfolio.positions),
           exposure=context.account.net_leverage, 
           leverage=context.account.leverage)


def handle_data(context,data):
    """
    The handle_data function is called every minute. There is nothing that we want
    to do every minute in this algorithm.
    """
    pass


def allocate(context, data):
    
    prices = data.history(context.stocks,'price',5*390,'1m')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = [(-1,1)]*len(context.stocks)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(np.absolute(x))-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
    
    allocation = res.x
    denom = np.sum(np.absolute(allocation))
    
    if res.success and denom > 0:
        allocation = allocation/denom
    else:
        print 'failed min'
        print denom
        allocation = np.copy(context.x0)
    
    context.n += 1
    context.s += allocation

    
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)



Ok, thanks.
For N=100, I noticed that eps=10 is required to make it work; backtest attached.

"""
This is a sample mean-reversion algorithm on Quantopian for you to test and adapt.
This example uses a dynamic stock selector, pipeline, to select stocks to trade. 
It orders stocks from the top 1% of the previous day's dollar-volume (liquid
stocks).

Algorithm investment thesis:
Top-performing stocks from last week will do worse this week, and vice-versa.

Every Monday, we rank high dollar-volume stocks based on their previous 5 day returns.
We long the bottom 10% of stocks with the WORST returns over the past 5 days.
We short the top 10% of stocks with the BEST returns over the past 5 days.

This type of algorithm may be used in live trading and in the Quantopian Open.
"""

# Import the libraries we will use here.
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume, Returns
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage
from quantopian.pipeline.data import morningstar

import numpy as np
import pandas as pd
import math
import scipy

N_STOCKS = 100
IS_LIQUID = 1e7


class AvgDailyDollarVolumeTraded(CustomFactor):
    inputs = [USEquityPricing.close, USEquityPricing.volume]
    
    def compute(self, today, assets, out, close_price, volume):
        out[:] = np.mean(close_price * volume, axis=0)


def initialize(context):
    """
    Called once at the start of the program. Any one-time
    startup logic goes here.
    """
  
    # Rebalance on the first trading day of each week at 11AM.
    schedule_function(rebalance,
                      date_rules.every_day(),
                      time_rules.market_open(hours=1, minutes=30))

    # Record tracking variables at the end of each day.
    schedule_function(record_vars,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=1))

    
    context.n = 0
    context.s = np.zeros(N_STOCKS)
    context.x0 = 1.0*np.zeros(N_STOCKS)/float(N_STOCKS)
    context.x1 = 1.0*np.ones(N_STOCKS)/float(N_STOCKS)
    context.eps = 10
    #schedule_function(allocate,date_rules.every_day(),time_rules.market_open(hours=1))
    #schedule_function(trade,date_rules.every_day(),time_rules.market_open(hours=2))

    # Attach pipeline
    attach_pipeline(make_pipeline(context), 'ranking_pipeline')


def make_pipeline(context):
    """
    Create and return our pipeline.
    """
    pipe = Pipeline()
  
    # We only want to trade relatively liquid stocks.
    # Build a filter that only passes stocks that have $10,000,000 average
    # daily dollar volume over the last 20 days.
    dollar_volume = AvgDailyDollarVolumeTraded(window_length=20)
    is_liquid = (dollar_volume > IS_LIQUID)
    
    # We also don't want to trade penny stocks, which we define as any stock with an
    # average price of less than $5.00 over the last 200 days.
    sma_200 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=200)
    not_a_penny_stock = (sma_200 > 5)
        
    # Before we do any other ranking, we want to throw away these assets.
    initial_screen = (is_liquid & not_a_penny_stock)

    combined_rank = (
        dollar_volume.rank(mask=initial_screen)
    )
    pipe.add(combined_rank, 'combined_rank')

    selected = combined_rank.top(N_STOCKS)
   
    
    # The final output of our pipeline should only include 
    # the top/bottom N stocks by our criteria.
    pipe.set_screen(selected)
    
    pipe.add(selected, 'selected')
    
    return pipe


def before_trading_start(context, data):
    """
    Called every day before market open. This is where we get the securities
    that made it through the pipeline.
    """

    # Pipeline_output returns a pandas DataFrame with the results of our factors
    # and filters.
    context.output = pipeline_output('ranking_pipeline')

    context.selected = context.output[context.output['selected']]


    # A list of the securities that we want to order today.
    context.security_list = context.selected.index.tolist()

    # A set of the same securities, sets have faster lookup.
    context.security_set = set(context.security_list)
    context.stocks = context.security_list
    
def assign_weights(context, data):
    """
    Assign weights to our long and short target positions.
    """
    allocate(context, data)
    context.weights = context.x0
  
    
def rebalance(context,data):
    """
    This rebalancing function is called according to our schedule_function settings.
    """

    assign_weights(context, data)

    # For each security in our universe, order long or short positions according
    # to our context.long_secs and context.short_secs lists.
    # Use enumerate so each stock keeps its own weight index even when a
    # stock cannot trade (a manual counter incremented inside the if-block
    # would shift the weights out of alignment).
    for i, stock in enumerate(context.security_list):
        if data.can_trade(stock):
            order_target_percent(stock, context.weights[i])
            print stock
            print context.weights[i]
            
            
    # Sell all previously held positions not in our new context.security_list.
    for stock in context.portfolio.positions:
        if stock not in context.security_set and data.can_trade(stock):
            order_target_percent(stock, 0)
    
    # Min variance calculation
    if context.n > 0:
        allocation = context.s/context.n
        denom = np.sum(np.absolute(allocation))
        if denom > 0:
            allocation = allocation/np.sum(np.absolute(allocation))
        else:
            return
    else:
        return
    context.n = 0
    context.s = np.zeros(N_STOCKS)  # context.stocks holds Equity objects; zeros_like would give an object array
    context.x0 = allocation
    context.allocation = allocation


def record_vars(context, data):
    """
    This function is called at the end of each day and plots certain variables.
    """

    # Check how many long and short positions we have.
    longs = shorts = 0
    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            longs += 1
        if position.amount < 0:
            shorts += 1

    # Record and plot the leverage of our portfolio over time as well as the
    # number of long and short positions. Even in minute mode, only the end-of-day
    # leverage is plotted.
    record(num_positions=len(context.portfolio.positions),
           exposure=context.account.net_leverage, 
           leverage=context.account.leverage)


def handle_data(context,data):
    """
    The handle_data function is called every minute. There is nothing that we want
    to do every minute in this algorithm.
    """
    pass


def allocate(context, data):
    
    prices = data.history(context.stocks,'price',5*390,'1m')
    
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
    
    bnds = [(-1,1)]*len(context.stocks)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(np.absolute(x))-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
    
    allocation = res.x
    denom = np.sum(np.absolute(allocation))
    
    if res.success and denom > 0:
        allocation = allocation/denom
    else:
        print 'failed min'
        print denom
        allocation = np.copy(context.x0)
    
    context.n += 1
    context.s += allocation

    
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)



I'd suggest "rolling your own" versus trying to piggy-back off of my code (it is just a hack job). One thought is to do the long-short filtering/ranking thing, showing that you can get some decent performance, and then maybe trying some sort of minimum variance optimization, to see if the Sharpe ratio could be improved. Note that CVXOPT is also available as an optimizer.

Hello Grant, et al:
An algorithm that is simple, trades weekly, and achieves Sharpe > 2 deserves some consideration.
In playing with Grant's algo I noticed some odd behavior, and it took some effort to investigate.

My backtest file contains additional comments.

Observations
1. The algorithm is novel in how returns are calculated from minute data over a five day span. This overcomes a problem with the use of daily data. For robust covariance you want to use a number of periods that is at least 10x the number of equities. Four equities implies 40 days of data. Such a long window is a problem when trying to characterize an erratic equity like VIX or a commodity.
2. Total return for selected equities (RSP, EDV, TLT, XIV) and eps (0.05) is superb over the period 1 Dec 2010 to 4 Feb 2016
3. Total return is very sensitive to eps
288% @ eps = 0.049
362% @ eps = 0.050
219% @ eps = 0.051
4. Total return changes drastically with substitution of similar equities
(SPY for RSP) 235% vs 362%
(TLO for TLT) 197% vs 362%
5. The total return is very sensitive to the number of days of look back (separate backtest showed more than 3-to-1 variation in Sharpe ratio for lookback periods of 3 to 20 days). Similarly the result was very sensitive to start/end dates.
6. Such sensitivity in the result is a sign that something is wrong.
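A quick arithmetic check of the data-sufficiency point in observation 1 (the 10x rule of thumb is the heuristic stated above):

```python
# Five trading days of minute bars vs. the 10x-the-asset-count rule of thumb
n_assets = 4
n_minutes = 5 * 390          # a U.S. equity session is 390 minutes
n_returns = n_minutes - 1    # pct_change() drops the first row

print(n_returns, 10 * n_assets)  # 1949 observations vs. a 40-observation floor
```

Forty daily observations would require a 40-day window; the minute-bar approach gets ~50x that many observations from only five days.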

Compliance with the inequality constraint
For Grant's version of the algo, and the case of eps=0.05, the inequality constraint is met on only 49 of 1303 days.
This means that the algo is coasting on the prior solution most of the time and is not operating as intended.
As you might expect in such a case, the result is very sensitive to parameter or equity changes.
If the "success" logic is corrected, the actual return is 101%, which is better than SPY, but with much higher volatility and drawdown.

Why is this happening?
The ret_norm values are typically smaller than 0.05 and are often negative.
This means that on most days no set of positive weights summing to 1.0 can be found that will satisfy the constraint: dot(weights,ret_norm) > eps
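This infeasibility argument can be checked directly: with nonnegative weights summing to 1, dot(weights, ret_norm) is a weighted average of the ret_norm values, so its maximum is simply max(ret_norm) (all weight on the best asset). The numbers below are hypothetical, just to illustrate the check:

```python
import numpy as np

ret_norm = np.array([0.03, -0.01, 0.02, -0.04])  # hypothetical values for 4 assets
eps = 0.05

# Best achievable constraint value for a long-only, fully invested portfolio
best = np.max(ret_norm)

print(best < eps)  # True: no long-only weight vector can satisfy the constraint
```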

But the algo checks for res.success
Yes, it does.
I don't understand the scipy SLSQP implementation well enough to say why res.success is True when res.status is not 0
Grant appears to be calling it (what is the Python term?) per the documentation.
The following res.status errors are commonly seen with large eps values:
4 : Inequality constraints incompatible
8 : Positive directional derivative for linesearch
9 : Iteration limit exceeded (this always appears on the first 4 days)

So should I use a smaller eps value?
Generally yes, but not a fixed value
In this particular algo the set of equities is small, and there is no other mechanism to assure that ret_norm values are above some threshold, or are even positive.
Given this, you face a poor trade-off: reducing eps allows the algorithm to function as intended, but degrades performance, since eps is the return threshold.
Some dynamic method for setting eps is needed for this algorithm.

How to dynamically select eps
I welcome those of you with finance and math backgrounds to provide a more elegant solution. Here's a simple approach.
Assume that Equity 1 has the highest of the four ret_norm values (ret_norm_max). If the weights are set so that all are zero except for Equity 1, then the portfolio's ret_norm = ret_norm_max. Since the number of equities is small, it is possible that Equity 1 is the only one with ret_norm > 0. This condition frustrates the optimization. An eps value smaller than ret_norm_max improves the likelihood that the optimization succeeds. As noted above, too small a value robs performance.
Here are some results for eps = 85%, 90%, 95%, and 100% of ret_norm_max.
In each case the SLSQP optimizer succeeds on 1299 of 1303 days; it consistently fails on the first 4 days.
return = 189% @ 100%
return = 239% @ 95%
return = 243% @ 90%
return = 232% @ 85%
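The eps selection above reduces to a few lines; this sketch mirrors the alternate eps assignment in the listing below, again with made-up ret_norm values:

```python
import numpy as np

def dynamic_eps(ret_norm, pct=0.90):
    """Set eps to pct * max(ret_norm) when the max is positive, else to the
    max itself, so some long-only portfolio can always meet the threshold."""
    ret_norm_max = np.max(ret_norm)
    factor = pct if ret_norm_max > 0 else 1.0
    return factor * ret_norm_max

ret_norm = np.array([0.03, -0.01, 0.012, -0.005])  # made-up values
eps = dynamic_eps(ret_norm)

# Putting all weight on the best equity always satisfies the relaxed constraint
w = np.zeros_like(ret_norm)
w[np.argmax(ret_norm)] = 1.0
print(w @ ret_norm >= eps)  # True
```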

Comment
While the result is not as spectacular as it first appeared, the algorithm now functions as I expected and is more robust to parameter and equity changes. More investigation can now be done (eps selection, equity selection, look-back window, ...)

#
# minimum variance with constraint optimization applied to a small set of equities
#
#    Method summary
#    typical variance minimization problem with the following features
#    minimization method is SLSQP
#    ==> must supply function (variance) and derivative (jac_variance)
#    constraints applied
#        sum of weights = 1
#        weighted sum of normalized returns > eps
#    returns are calculated from minute data over a five day span
#        this is a novel way to simultaneously achieve
#            robust covariance estimate (sample count > 10x number of equities)
#            short window to allow use of dynamic equities (like VIX)
#    ordering is done weekly with
#        order weights = average of optimization results over past 5 days
#
#    Observations
#    1. Result for selected equities (RSP, EDV, TLT, XIV) and eps (0.05) is superb over the period 1 Dec 2010 to 4 Feb 2016
#    2. Result is very sensitive to eps
#        288% @ eps = 0.049    362% @ eps = 0.050    219% @ eps = 0.051
#    3. Result changes drastically with substitution of similar equities 
#        (SPY for RSP)   235% vs 362%
#        (TLO for TLT)   197% vs 362%
#
#    Supposition
#    Such erratic behavior hints of a code error/bug or an ill-posed problem
#    Ill-posed?
#      General problem statement
#        This problem should be well-posed given its similarity to other problems
#        The use of normalized returns vs log returns is not standard
#        This should not be problematic
#    code error/bug?
#      Application of optimizer
#        SLSQP is a valid method for this class of problem
#        The optimizer appears to be called per its documentation
#      Function calls
#        variance() and jac_variance() are consistently defined
#      Application of constraints
#        The two constraints are checked after each "successful" optimization
#        sum of weights = 1 is always met
#        weighted sum of normalized returns > eps is met only 49 times in 5 years
#        ==> here is a problem
#        For sufficiently large eps value the constraint can never be met
#        Despite this the SLSQP algorithm continues to report "success"
#        
#    Secondary application of the inequality constraint
#      A secondary check is added to verify that the inequality constraint is met
#
#    Observations relative to initial behavior
#    1. Result for selected equities (RSP, EDV, TLT, XIV) and eps (0.05) is not desirable over the period 1 Dec 2010 to 4 Feb 2016
#        Return somewhat better than SPY, but volatility and drawdown much worse
#    2. Result remains sensitive to eps, but reason is clearer
#         88% @ eps = 0.049 (constraints met 58 in 1303 days)   
#        101% @ eps = 0.050 (49 times)   
#         97% @ eps = 0.051 (42 times)
#    3. Results remain sensitive to substitution of similar equities, for the same reason
#        (SPY for RSP)    98% vs 101% (constraints met 53 times)
#        (TLO for TLT)   121% vs 101% (40 times)
#    4. for eps = 0.01 (Grant Kiehne's initial model) 
#        155% @ eps = 0.01 (constraint met 1018 times)
#    5. eps can be further reduced to increase probability of meeting constraints
#        133% @ eps = 0.0 (constraint met 1204 times)
#        such low eps values defeat the purpose of having the eps limit
#         ==> some alternate means of assigning eps is needed
#    6. Try eps = np.max(ret_norm)
#        This value is achievable 
#        189% @ eps = np.max(ret_norm)  (constraint met 1299 times)
#        No optimization is possible if only one return is positive
#        This may happen for the small portfolio considered here
#        ==> need to relax eps some to allow optimization
#    7. Try eps = PCT*np.max(ret_norm) if max>0
#        239% @ PCT=95%  (constraint met 1299 times)
#        243% @ PCT=90%  (constraint met 1299 times)
#        232% @ PCT=85%  (constraint met 1299 times)
# 

import numpy as np
import scipy

def initialize(context):
    
    context.stocks = [sid(24744),  # RSP
                      sid(22887),  # EDV
                      sid(23921),  # TLT
                      sid(40516)]  # XIV
      
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = np.zeros_like(context.stocks)
    context.x1 = 1.0*np.ones_like(context.stocks)/len(context.stocks)
    
    context.eps = 0.01
    context.tol = 1.0e-5    # assume convergence is 10 times SLSQP ftol of 1e-6
    context.valid_constraint_count = 0
    context.opt_pass_count = 0
    context.run_count = 0
    context.eps_vals = []
    
    schedule_function(allocate,date_rules.every_day(),time_rules.market_open(minutes=60))
    
    schedule_function(trade,date_rules.week_start(days_offset=1),time_rules.market_open(minutes=60))
    
    set_commission(commission.PerTrade(cost=0))
    
    set_long_only()
    
    schedule_function(record_leverage, date_rules.every_day())

def record_leverage(context, data):
    record(leverage = context.account.leverage)

def allocate(context, data):
    context.run_count += 1
    prices = data.history(context.stocks, 'price', 5*390,'1m')
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    ret_mean = prices.pct_change().mean()
    ret_std = prices.pct_change().std()
    ret_norm = ret_mean/ret_std
    ret_norm = ret_norm.as_matrix(context.stocks)
#
#    alternate eps assignment method
#
    ret_norm_max = np.max(ret_norm)
    eps_factor = 0.90 if ret_norm_max >0 else 1.0
    context.eps = eps_factor*ret_norm_max
    
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
           
    bnds = tuple(tuple(x) for x in bnds)

    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,ret_norm)-context.eps})
    
    res= scipy.optimize.minimize(variance, context.x1, args=ret,jac=jac_variance, method='SLSQP',constraints=cons,bounds=bnds)
#
#  disable original allocation assignment code
#
#    if res.success:
#        allocation = res.x
#        allocation[allocation<0] = 0
#        denom = np.sum(allocation)
#        if denom > 0:
#            allocation = allocation/denom
#    else:
#        allocation = np.copy(context.x0)     
#    context.n += 1
#    context.s += allocation
    
#---------- add some debugging code
#

    allocation = np.copy(context.x0)    
    if res.success:    # if SLSQP declares success
        context.opt_pass_count += 1
        wt_constraint = np.sum(res.x) - 1.0
        weighted_ret_norm = np.dot(res.x,ret_norm)
        w_ret_constraint = weighted_ret_norm - context.eps + context.tol
        record(wt_constraint = wt_constraint)
        record(w_ret_constraint = w_ret_constraint)
        if(w_ret_constraint > 0): # and constraint is actually met
            context.valid_constraint_count += 1
            allocation = res.x
            allocation[allocation<0] = 0
            denom = np.sum(allocation)
            if denom > 0:
                allocation = allocation/denom 
                
            msg = "{0} runs, {1} SLSQP passes, {2} constraints passed".format(
                context.run_count, context.opt_pass_count,
                context.valid_constraint_count)
            if(context.run_count>1000): log.info(msg)
        else:
            log.info("constraint fail, SLSQP status = {0}".format(res.status))
    else:
        log.info("SLSQP fail, SLSQP status = {0}".format(res.status))
    context.n += 1
    context.s += allocation
#
#---------- end of debugging code
def trade(context, data):
    if context.n > 0:
        allocation = context.s/context.n
    else:
        return
    
    context.n = 0
    context.s = np.zeros_like(context.stocks)
    context.x0 = allocation
    
    if get_open_orders():
        return
    
    for i,stock in enumerate(context.stocks):
        order_target_percent(stock,allocation[i])
        
"""        
    record(RSP = allocation[0])
    record(EDV = allocation[1])
    record(TLT = allocation[2])
    record(XIV = allocation[3])
"""         
def variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
    
    return np.dot(x,np.dot(Acov,x))

def jac_variance(x,*args):
    
    p = np.squeeze(np.asarray(args))
    Acov = np.cov(p.T)
        
    return 2*np.dot(Acov,x)

Thanks Peter,

Glad you found it interesting.

Cheers,

Grant

Here's an update using CVXPY. I think it is working basically in the same fashion as the original post above. It has been re-factored a bit, and more could be done. The main thing here is that CVXPY can be used. --Grant

Note:

    set_slippage(slippage.FixedSlippage(spread=0.00))  
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))  
import numpy as np
import cvxpy as cvx

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread=0.00))  
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))  
    
    # parameters
    # --------------------------
    context.N = 5 # trailing window size, days
    context.leverage = 1.0 # gross leverage
    # --------------------------
    
    context.stocks = [
        sid(24744),  # RSP
        sid(22887),  # EDV
        sid(23921),  # TLT
        sid(40516)   # XIV
        ]
    
    context.weight = np.zeros(len(context.stocks))
    
    schedule_function(normalize, date_rules.week_start(days_offset=1), time_rules.market_open(minutes=60))
    
    schedule_function(allocate, date_rules.week_start(days_offset=1), time_rules.market_open(minutes=60))

def before_trading_start(context,data):    

    record(leverage = context.account.leverage)
    
    num_secs = 0
    for i,stock in enumerate(context.portfolio.positions.keys()):
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)
    
    get_weights(context, data)
    
def normalize(context,data):
    
    context.weight[context.weight<0] = 0
    
    denom = np.sum(context.weight)
    if denom > 0:
        context.weight = context.weight/denom

def get_weights(context, data):
    
    m = len(context.stocks)
    
    b_current = np.zeros(m)
    
    for i, stock in enumerate(context.stocks):
        b_current[i] = context.portfolio.positions[stock].amount*data.current(stock,'price')
    
    denom = np.sum(b_current)
    
    # test for divide-by-zero case
    if denom > 0:
        b_current = b_current/denom
    else:
        b_current = np.zeros(m)
    
    prices = data.history(context.stocks,'price',context.N*390,'1m')
     
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    p = prices.as_matrix(context.stocks)
    ret_mean = np.mean(ret,axis=0)
    ret_mean_norm = ret_mean/np.std(ret,axis=0)
    
    p = np.squeeze(np.asarray(ret))
    Acov = np.cov(p.T)
     
    x = cvx.Variable(m) # portfolio weights to be found
    
    variance = cvx.quad_form(x, Acov)
    
    # minimize variance
    objective = cvx.Minimize(variance)

    constraints = [cvx.sum_entries(x) == 1, cvx.sum_entries(ret_mean_norm*x) >= 0.02, x > 0]
    
    prob = cvx.Problem(objective, constraints)
    prob.solve()
    
    if prob.status == 'optimal':
        b = np.squeeze(np.asarray(x.value))
    else:
        b = b_current
     
    context.weight += b
              
def allocate(context, data):
    
    for i, stock in enumerate(context.stocks):
            if data.can_trade(stock):
                order_target_percent(stock, context.leverage*context.weight[i])
                
    context.weight = np.zeros(len(context.stocks))

Very cool Grant. What limitations, if any, do you see with this algo?

Hi Evan,

Well, first off, I just hacked the thing together, so caveat investor. This is probably the biggest "limitation"--that it doesn't have a spelled out underlying economic principle (i.e. what makes the thing tick). So, it could be "over-fit" and/or suffer from "data mining" (insert your favorite quant sin). The problem is compounded by the limited time over which one can backtest. If it could be tested going back many decades, then one could have more confidence.

The drawdown and volatility are very high, so another limitation, assuming that the long-term returns would persist, is that in the absence of a model explaining the returns, the likelihood of "abandoning ship" after losing money is high. For example, say one puts money into it and then it immediately drops 20%. It would be easy to justify pulling out, to cut losses.

It would be nice to see beta much lower (e.g. in the range -0.3 to 0.3, as Q requires for the contest), without shorting. With such a high beta, maybe there is a risk that if the market tanks, the strategy would die, too?

Leverage hits 1.29 intraday. Try this: before ordering, determine whether each order will increase or decrease the allocation. Since there is no shorting in this case, those are buys and sells respectively. Hold the buys until the selling is done. That will resolve the 57k negative cash and, from what I've seen, typically results in a higher return. Here, returns show 365%; however, margin was discarded, so it made 366k on 157k for 232%. Benchmark: 117%. With those changes it may do better than 365% real, with no negative cash. There's an example of it above.

2017-02-24 13:00 _pvr:155 INFO PvR 0.1483 %/day   cagr 0.3   Portfolio value 466017   PnL 366017  
2017-02-24 13:00 _pvr:156 INFO   Profited 366017 on 157790 activated/transacted for PvR of 232.0%  
2017-02-24 13:00 _pvr:157 INFO   QRet 366.02 PvR 231.96 CshLw -57790 MxLv 1.29 RskHi 157790 MxShrt 0  
2017-02-24 13:00 pvr:245 INFO 2010-12-08 to 2017-02-24  $100000  2017-02-26 08:23 US/Eastern  
Runtime 0 hr 12.9 min  
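A minimal sketch of the sells-before-buys idea (the helper and asset dictionaries are hypothetical; on Quantopian each leg would be submitted with order_target_percent, with the buy leg held back, e.g. via a later scheduled function, until the sells have filled):

```python
def split_rebalance(current_pct, target_pct):
    """Partition a long-only rebalance into a sell leg and a buy leg.
    Submitting the sells first avoids transient negative cash (leverage > 1)."""
    sells, buys = [], []
    for asset, target in target_pct.items():
        delta = target - current_pct.get(asset, 0.0)
        if delta < 0:
            sells.append((asset, target))
        elif delta > 0:
            buys.append((asset, target))
    return sells, buys

current = {'RSP': 0.50, 'TLT': 0.50}
target = {'RSP': 0.20, 'TLT': 0.50, 'XIV': 0.30}
sells, buys = split_rebalance(current, target)
print(sells)  # [('RSP', 0.2)]
print(buys)   # [('XIV', 0.3)]
```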

A quickie example to illustrate that there may be ways to "tame" the algo to be kinda market neutral without shorting.

import numpy as np
import cvxpy as cvx

def initialize(context):
    
    set_slippage(slippage.FixedSlippage(spread=0.00))  
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))  
    
    # parameters
    # --------------------------
    context.N = 5 # trailing window size, days
    context.eps = 0.02
    context.leverage = 1.0 # gross leverage
    # --------------------------
    
    context.stocks = [
        sid(24744),  # RSP
        sid(22887),  # EDV
        sid(23921),  # TLT
        sid(40516)   # XIV
        ]
    
    context.weight = np.zeros(len(context.stocks))
    
    schedule_function(normalize, date_rules.week_start(days_offset=1), time_rules.market_open(minutes=60))
    
    schedule_function(allocate, date_rules.week_start(days_offset=1), time_rules.market_open(minutes=60))

def before_trading_start(context,data):    

    record(leverage = context.account.leverage)
    
    num_secs = 0
    for i,stock in enumerate(context.portfolio.positions.keys()):
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)
    
    get_weights(context, data)
    
def normalize(context,data):
    
    context.weight[context.weight<0] = 0
    
    denom = np.sum(context.weight)
    if denom > 0:
        context.weight = context.weight/denom

def get_weights(context, data):
    
    m = len(context.stocks)
    
    b_current = np.zeros(m)
    
    for i, stock in enumerate(context.stocks):
        b_current[i] = context.portfolio.positions[stock].amount*data.current(stock,'price')
    
    denom = np.sum(b_current)
    
    # test for divide-by-zero case
    if denom > 0:
        b_current = b_current/denom
    else:
        b_current = np.zeros(m)
    
    prices = data.history(context.stocks,'price',context.N*390,'1m')
     
    ret = prices.pct_change()[1:].as_matrix(context.stocks)
    p = prices.as_matrix(context.stocks)
    ret_mean = np.mean(ret,axis=0)
    w = 1.0/np.std(ret,axis=0)
    d = np.sum(w)
    ret_mean_norm = w*ret_mean
    
    p = np.squeeze(np.asarray(ret))
    Acov = np.cov(p.T)
     
    x = cvx.Variable(m) # portfolio weights to be found
    
    variance = cvx.quad_form(x, Acov)
    
    # minimize variance
    objective = cvx.Minimize(variance)

    constraints = [cvx.sum_entries(x) == 1, cvx.sum_entries(ret_mean_norm*x) >= context.eps, x > 0]
    
    prob = cvx.Problem(objective, constraints)
    prob.solve()
    
    if prob.status == 'optimal':
        b = w*np.squeeze(np.asarray(x.value))/d
    else:
        b = w*b_current/d
     
    context.weight += b
              
def allocate(context, data):
    
    for i, stock in enumerate(context.stocks):
            if data.can_trade(stock):
                order_target_percent(stock, context.leverage*context.weight[i])
                
    context.weight = np.zeros(len(context.stocks))

The chart says it used 100k to make 152k, or 152%.
In reality it used 144k to make 152k, which is 105%, under the benchmark's 117%.

That's the good news. With default slippage/commissions:
Uses 196k to make 146k, just 75%. Better to invest in SPY.

Regarding commissions, my thought is that they would be $0 under Robinhood, right?

I guess you are concerned with leverage > 1 temporarily. Not sure how to handle that one. My assumption is that Robinhood can handle re-balancing, but if it means that cash has to be kept in reserve, then, yes, it will hurt the return.

As for slippage, for small amounts of capital it can be neglected, but maybe that's incorrect?

With zero commission and default slippage:
Uses 197k to make 150k, that's 76%.
Q Returns show 150%.

So slippage played the major role.

By the way I wish we had set_nonmargin() to be able to model what would happen on Robinhood or any nonmargin account.

@ Blue -

Even better than set_nonmargin() would be something like set_nonmargin(broker=`Robinhood`) with any idiosyncrasies baked in, so that backtesting and Q paper trading would be 1:1 with real-money trading. Robinhood, though, seems to have dropped off the radar screen at Quantopian headquarters. It's all about the Q fund (and futures in private alpha). Maybe a user has written an add-on like your proposed set_nonmargin()?

Hello guys,
Great work! Quick question...is this algo safe for smaller accounts (less than $25,000). Will it trigger the "Pattern Day trader" rule (by executing four roundtrip daytrades within 5 days)?

Thanks!

A weekly re-balancing should be o.k., however I've been advised that one has to watch out for going into negative cash. See https://www.quantopian.com/posts/quantopian-and-robinhood-lessons-learned-and-best-practices for some good info.

Grant,

I'm very interested in your implementation of the optimizer to find allocations. Is there a paper or article you can point me to for background? I'd like to understand the strategy better.

TIA.

Hi Stephen -

I am not aware of a paper or article describing exactly what I've done (a hack, really), but you might try Google Scholar. The basic idea is to minimize the variance in returns with constraints. I would be surprised if nobody has ever published something on the topic.
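For what it's worth, the problem this thread has been working with can be stated compactly. With \(\Sigma\) the sample covariance of the minute returns, \(r\) the vector of normalized mean returns (ret_norm), and \(\varepsilon\) the threshold, the algo solves

```
\min_{x} \; x^{\top} \Sigma x
\quad \text{s.t.} \quad \mathbf{1}^{\top} x = 1, \quad r^{\top} x \ge \varepsilon, \quad 0 \le x \le 1
```

which is a standard long-only minimum-variance program with a minimum-return side constraint.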

If you find anything, please share the references.

There is some general discussion on the use of optimizers in Robert Carver's book, Systematic Trading, along with using trailing volatility for weighting.

Hi,

Really interesting stuff here. Question: according to scipy docs on the minimze function being employed here:

"Note that COBYLA only supports inequality constraints." https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.optimize.minimize.html

But in the code, inequality is being used, and the method specified is SLSQP. Does this mean that this constraint is not actually being used? Or that the COBYLA method is in fact being used?

cons = ({'type': 'eq', 'fun': lambda x: np.sum(x)-1.0},
        {'type': 'ineq', 'fun': lambda x: np.dot(x,ret_norm)-context.eps})
res = scipy.optimize.minimize(variance, context.x1, args=ret, jac=jac_variance, method='SLSQP', constraints=cons, bounds=bnds)