Universal portfolios

This algo is an implementation of Universal Portfolios, described in a paper by Professor Thomas M. Cover of Stanford. His book is one of the standard textbooks in information theory. For the implementation tricks that bring the theory into practice, please refer to the comments in the code.

This model makes no statistical assumptions about the distribution of asset prices (e.g. normal, log-normal). It also leaves little room for backtest-fitting, as the only parameters are the choice of (i) the assets in the portfolio and (ii) the look-back period. I chose eight very basic ETFs and a one-year period.

Interestingly, in late 2008 the model decided to own none of the 8 ETFs and just held cash through the financial crisis. Even more interestingly, the model again sold all ETFs on August 26, 2015, so we just have $3M in cash today (started with $1M 10 years ago)...

Note: it is long-only and high-beta, so not suitable for the contest. But it could potentially be a useful framework for allocating funds to algos in a hedge fund.

'''
---------------------------------------
UNIVERSAL PORTFOLIOS
---------------------------------------
Implementation of a strategy inspired by the paper by Thomas M. Cover, information theorist
Implementation authored by Naoki Nagai, 2015

Description:

This algo is a Quantopian Python implementation of Universal Portfolios, described in a paper by Professor Thomas M. Cover of Stanford.  The Universal Portfolio is mathematically proven to achieve a return close to that of the optimal constant-rebalanced portfolio chosen in hindsight.

Methodology:

Let us construct regularly rebalanced portfolios with fixed weights given to each security (e.g. 40% equity, 40% bond, 20% gold).  What would be the optimal weight for each asset?  In this methodology, we evaluate every portfolio with every possible combination of weights and calculate the return of each.  Our Universal Portfolio is then the average of all of these possible portfolios, weighted by the performance of each.  We make no statistical assumptions about the underlying distribution of prices; it is purely based on historical pricing data.

Proof:

Professor Cover's paper shows that the return S^ generated by this methodology approaches S*, the return of the regularly rebalanced portfolio with the optimal constant weights selected in hindsight.  Even though we select the universal portfolio before knowing how it turns out, it approaches the optimal portfolio that was selected after the performance is known.
(for proof, see paper http://www-isl.stanford.edu/~cover/papers/paper93.pdf)

Analogy:

The algo works roughly like this.  We have tens of thousands of portfolio managers who each decide their own allocations.  Then, looking at their performance over the past year, we allocate our investment funds in proportion to each manager's past 1-year return.  You can imagine this probably works.

Implication:

Perhaps the Q fund could allocate its funds across algos using this methodology. 
'''

import numpy as np

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.equities = symbols(
        # Equity
        'VV',     # US large cap
        'VO',     # US mid cap
        'VB',     # US small cap
    )
    context.fixedincome = symbols(
        # Fixed income
        'TLT',    # Long-term government bond
        'IEF',    # Mid-term government bond
        'LQD',    # Corporate bond
    )
    context.realasset = symbols(
        # Commodity and REIT
        'GLD',    # Gold
        'VNQ',    # US REIT
    )
    context.securities = context.equities + context.fixedincome + context.realasset
    
    context.period = 252                   # One year to evaluate past performance
    context.lever = 2.0                    # Leverage 
    context.allocation_intervals = 10      # Allocation intervals (100% / 10 = 10% increment) 
    context.weights = dict.fromkeys(context.securities, 0)
    context.shares = dict.fromkeys(context.securities, 0)

    # The analyze function determines the target weights for the week
    schedule_function(analyze, date_rules.week_start(days_offset = 2),   time_rules.market_close())

    # The rebalance function determines the target shares for the day
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(minutes=60))
    
def analyze(context, data):    
    # History of daily prices
    prices = history(bar_count=context.period+1, frequency='1d', field='price', ffill=True)
    
    # Returns are the daily changes; drop all-empty securities and the first-row NaN
    returns = prices.pct_change().dropna(how='all',axis=1)[1:]
    
    # Change the data to Numpy for faster calculation
    X = np.array(returns)
    X[np.isnan(X)] = 0
    
    # Transpose and add 1 (i.e. change -0.01 -> 0.99)
    X = X.transpose() + 1.0
    (n, m) = X.shape
    
    # In theory, we are supposed to integrate wealth over all portfolios.
    # That cannot be done in practice, so we approximate the integral discretely.
    # We vary weights in 10% increments due to memory constraints.
    B = binnings(context.allocation_intervals, n) / context.allocation_intervals
    
    # B is a matrix containing the weights of every possible portfolio.
    # We try every combination of weights for the n securities.
    # There are C(intervals + n - 1, n - 1) = C(17, 7) = 19,448 such portfolios for 8 securities.
    log.info('--- Universal Portfolio: evaluated %d possible portfolios for %d assets' % B.shape)

    # S is the wealth vector for each portfolio, calculated as follows:
    # - B contains the weight vectors of all portfolios; X contains the daily returns of each asset
    # - By matrix algebra, BX gives the daily returns of each portfolio over the past year
    # - The product of BX along axis 1 (time) is the annual return of each portfolio.
    S = np.prod(np.dot(B,X), axis=1) - 1
    
    # Finally, the weights are the average over all possible portfolios,
    # weighted by each portfolio's past 1-year return.  We can compute this as SB/|S|
    W = np.dot(S,B)/sum(abs(S))
   
    # Store the weights in a context variable.  We calculate this weekly.
    # The actual ordering of shares is performed in the rebalance function
    i = 0
    for sec in returns:
        log.info('%4s: % 2.1f (%s)' % (sec.symbol, W[i] *100, sec.security_name))
        if sec in data:
            # We keep the weights long-only.
            # After we take the weighted average over all portfolio returns,
            # the weight for a security can end up negative.
            # That does not mean we should short it;
            # it means we simply should not invest in securities with negative weight
            context.weights[sec] = max(0,W[i])
        i = i + 1
     
        
# From the target weights, calculate how many shares we should own
def rebalance(context, data):
    # Take a 3-day average to avoid over-reacting to daily price fluctuations
    prices = history(3, frequency='1d', field='price', ffill=False).mean()
    
    for sec in context.weights:
        
        # Target weight for this asset
        target_weight = context.weights[sec] * context.lever

        # How many shares should we be holding?
        target_share = context.portfolio.portfolio_value * target_weight / prices[sec]
        
        # Record target shares
        context.shares[sec] = target_share
        
def execute(context, data):
    # Average daily trading volume over the last 3 days
    tradingvolume = history(3, frequency='1d', field='volume', ffill=True).mean()
    
    for sec in context.shares:
        
        # If the security has no data, skip
        if sec not in data:
            continue
            
        # If we still have outstanding orders, skip
        if sec in get_open_orders():
            continue
        
        # How many shares should we be holding?
        target_share = context.shares[sec]
        
        # How many shares do we have now?
        current_share = context.portfolio.positions[sec].amount
        
        # The trade size is the gap between the current and target shares
        trade_share = target_share - current_share
        
        # Cap each trade at 1/5 of the average per-minute volume (390 minutes per day)
        trade_share = min(trade_share,  tradingvolume[sec]/390/5)    # for buying shares
        trade_share = max(trade_share, -tradingvolume[sec]/390/5)    # for selling shares
        
        # Don't trade less than $1000, to save on commissions
        if abs(trade_share * data[sec].price) < 1000:
            continue
        
        # Make the order 
        order_target(sec, current_share + trade_share)
    
def handle_data(context, data):
    w = context.weights
    record(equities =    sum(w[s] for s in context.equities    if w[s] > 0))
    record(fixedincome = sum(w[s] for s in context.fixedincome if w[s] > 0))
    record(realassets =  sum(w[s] for s in context.realasset   if w[s] > 0))
    record(cash = max(0,context.portfolio.cash) / context.portfolio.portfolio_value)
    execute(context, data)

   
# Thanks to the smart implementation by the user 'bar' on Stack Overflow
# http://stackoverflow.com/questions/6750298/efficient-item-binning-algorithm-itertools-numpy
def binnings(n, k, cache={}):
    if n == 0:
        return np.zeros((1, k))
    if k == 0:
        return np.empty((0, 0))
    args = (n, k)
    if args in cache:
        return cache[args]
    a = binnings(n - 1, k, cache)
    a1 = a + (np.arange(k) == 0)
    b = binnings(n, k - 1, cache)
    b1 = np.hstack((np.zeros((b.shape[0], 1)), b))
    result = np.vstack((a1, b1))
    cache[args] = result
    return result
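The core weighting step in analyze above can be reproduced outside Quantopian. Here is a minimal NumPy sketch with made-up daily returns for two assets on a 10% weight grid (toy data, not the algo's actual universe); note it follows the paper's gross-return weighting, whereas the script above subtracts 1 from S first:

```python
import numpy as np

# toy daily gross returns (1 + daily return) for 2 assets over 5 days
X = np.array([[1.01, 0.99, 1.02, 1.00, 0.98],
              [0.99, 1.01, 1.00, 1.02, 1.01]])

# every constant-weight 2-asset portfolio on a 10% grid (the role of B)
grid = np.linspace(0.0, 1.0, 11)
B = np.column_stack((grid, 1.0 - grid))   # shape (11, 2), each row sums to 1

# wealth of each constant-rebalanced portfolio over the period
S = np.prod(np.dot(B, X), axis=1)

# universal portfolio: average of all portfolios, weighted by their wealth
W = np.dot(S, B) / S.sum()
print(W)  # two nonnegative weights summing to 1
```

Because S is positive and every row of B sums to 1, W is a convex combination of the candidate portfolios, so it stays fully invested.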

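The binnings helper enumerates the stars-and-bars compositions that B is built from. A standalone sanity check of the portfolio count (using a hypothetical compositions helper, not the recursive one in the algo):

```python
from itertools import combinations
from math import comb

def compositions(n, k):
    """All ways to split n units across k bins (stars and bars)."""
    rows = []
    for bars in combinations(range(n + k - 1), k - 1):
        prev, row = -1, []
        for b in bars:
            row.append(b - prev - 1)  # gap between consecutive bars = bin size
            prev = b
        row.append(n + k - 1 - prev - 1)
        rows.append(row)
    return rows

rows = compositions(10, 8)           # 10% grid, 8 securities
assert len(rows) == comb(17, 7)      # C(17, 7) = 19,448 candidate portfolios
assert all(sum(r) == 10 for r in rows)
```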


We have migrated this algorithm to work with a new version of the Quantopian API. The code is different than the original version, but the investment rationale of the algorithm has not changed.
18 responses

Thanks for the well-written algorithm with comprehensive comments!

Hi Naoki,

Thank you very much for sharing this, it looks really nice.
I was wondering though: how is it possible that the script chooses to keep cash and not buy any ETFs?
The bins only consider putting the money into at least one of the ETFs, not none, am I right?

Also, just as a remark, there is a numpy function np.nan_to_num(arr), doing exactly what it implies.
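For reference, a quick comparison of np.nan_to_num with the in-place idiom the algo uses:

```python
import numpy as np

X = np.array([0.01, np.nan, -0.02])

# np.nan_to_num returns a copy with NaNs replaced by 0
assert np.array_equal(np.nan_to_num(X), np.array([0.01, 0.0, -0.02]))

# equivalent to the in-place idiom in the algo
Y = X.copy()
Y[np.isnan(Y)] = 0
assert np.array_equal(np.nan_to_num(X), Y)
```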

At least one ETF gets the money, and in every such portfolio all of the funds are invested. But when taking the average of portfolio holdings weighted by the return of each portfolio, the returns happened to be negative across most portfolios. The script decides not to buy any ETFs if all weights turn negative in this way.

Thanks for the tip on nan_to_num!

That makes sense..

Something else I was wondering: the bins you are generating are basically just linear combinations of the identity matrix (all money in one of the securities). Why does the outcome differ (it does) when you consider all these bins separately, rather than just using the identity matrix as the 'bins'?

Even with the 'clipping' operations turned off, such as zeroing negative weights and disregarding trades under $1000, there is still a difference between the two.

The result from portfolios with all the money in a single security differs from the result over all possible portfolios because we rebalance daily. The average return of 100% stock and 100% bond is different from the return of a daily-rebalanced 50% bond / 50% stock portfolio (with no rebalancing at all, they would be the same).

In terms of algebra, we compute S = XB, where matrix X (252 days of returns * m securities) is dotted with matrix B (m securities * all possible portfolios), producing matrix S (252 days of returns * all possible portfolios). So using B = all portfolios versus B = I (the identity matrix) produces a completely different S.
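A tiny numeric illustration of that point, with made-up returns chosen to exaggerate the effect: when two assets' returns alternate, the average of the two buy-and-hold outcomes is quite different from a daily-rebalanced 50/50 mix.

```python
import numpy as np

# gross daily returns of two assets with alternating moves (toy data)
X = np.array([[1.5, 0.7, 1.5, 0.7],
              [0.7, 1.5, 0.7, 1.5]])

buy_and_hold = np.prod(X, axis=1)               # each asset alone ends at ~1.10
avg_of_holds = buy_and_hold.mean()

rebalanced = np.prod(0.5 * X[0] + 0.5 * X[1])   # 50/50, rebalanced daily: ~1.46
print(avg_of_holds, rebalanced)                 # rebalancing wins
```

This is the classic "volatility pumping" effect that constant-rebalanced portfolios exploit.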

Thank you for sharing. It's very interesting that this script holds cash during a downturn. The high beta and return are mainly due to leverage. Can you elaborate on why you chose a leverage of 2?

@Joe Black

I dug into this strategy a little bit. There is an explanation here [1]. In general, this strategy does not perform well when the volatility of the underlying securities is low, hence we need to improve its performance by amplifying the volatility with leverage. You could try a leverage of 10; that's a lot of fun :)

[1]http://epchan.blogspot.sg/2007/01/universal-portfolios.html

I did some light reading on the algorithm and it seems to be a bit slow in reaching the optimal return with hindsight. Very cool nonetheless. Doubly nice that it holds cash in a downturn.

Hi Joe, I chose a leverage of 2 because of the more attractive return, with a still-reasonable drawdown.

I reviewed the paper again and realized I misinterpreted one thing.

In line 94, after calculating the returns of all portfolios, the script subtracts 1 (e.g. a 1% loss becomes -0.01), then takes the weighted average over all portfolios. However, what the paper meant was to take the weighted average without subtracting 1 (e.g. a 1% loss stays 0.99).

I removed the -1 and backtested, but one year is too short and the 'universal portfolio' ends up almost equal-weighted. So, as in the original paper, the version below raises the 1-year return to the 20th power to emulate a 20-year return, and then takes the weighted average.

Because there are no negative returns (just returns less than 1), you never get negative weights in the universal portfolio, so according to the theory there is no need to skip shorting ETFs in trading. I was checking the paper for how to treat negative weights and noticed the above discrepancy.

But interestingly, the original -1 version has a better return and drawdown. Would this -1 universal portfolio still approach the optimum, even though we subtract 1 before integrating?
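The effect of raising the 1-year gross return to the 20th power is to concentrate the weighting on the best-performing portfolios. A toy check with three hypothetical candidate-portfolio returns:

```python
import numpy as np

S = np.array([1.10, 1.05, 0.95])    # 1-year gross returns of 3 candidate portfolios

w_plain = S / S.sum()               # weighting by the 1-year return
w_pow20 = S**20 / (S**20).sum()     # weighting by the emulated 20-year return

# the 20th power shifts weight toward the winner and away from the loser
assert w_pow20[0] > w_plain[0]
assert w_pow20[2] < w_plain[2]
```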


import numpy as np

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.equities = symbols(
        # Equity
        'VV',     # US large cap
        'VO',     # US mid cap
        'VB',     # US small cap
    )
    context.fixedincome = symbols(
        # Fixed income
        'TLT',    # Long-term government bond
        'IEF',    # Mid-term government bond
        'LQD',    # Corporate bond
    )
    context.realasset = symbols(
        # Commodity and REIT
        'GLD',    # Gold
        'VNQ',    # US REIT
    )
    context.securities = context.equities + context.fixedincome + context.realasset
    
    context.period = 252                   # One year to evaluate past performance
    context.lever = 2.0                    # Leverage 
    context.allocation_intervals = 10      # Allocation intervals (100% / 10 = 10% increment) 
    context.weights = dict.fromkeys(context.securities, 0)
    context.shares = dict.fromkeys(context.securities, 0)

    # The analyze function determines the target weights for the week
    schedule_function(analyze, date_rules.week_start(days_offset = 2),   time_rules.market_close())

    # The rebalance function determines the target shares for the day
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(minutes=60))
    
def analyze(context, data):    
    # History of daily prices
    prices = history(bar_count=context.period+1, frequency='1d', field='price', ffill=True)
    
    # Returns are the daily changes; drop all-empty securities and the first-row NaN
    returns = prices.pct_change().dropna(how='all',axis=1)[1:]
    
    # Change the data to Numpy for faster calculation
    X = np.array(returns)
    X[np.isnan(X)] = 0
    
    # Transpose and add 1 (i.e. change -0.01 -> 0.99)
    X = X.transpose() + 1.0
    (n, m) = X.shape
    
    # In theory, we are supposed to integrate wealth over all portfolios.
    # That cannot be done in practice, so we approximate the integral discretely.
    # We vary weights in 10% increments due to memory constraints.
    B = binnings(context.allocation_intervals, n) / context.allocation_intervals
    
    # B is a matrix containing the weights of every possible portfolio.
    # We try every combination of weights for the n securities.
    # There are C(intervals + n - 1, n - 1) = C(17, 7) = 19,448 such portfolios for 8 securities.
    log.info('--- Universal Portfolio: evaluated %d possible portfolios for %d assets' % B.shape)

    # S is the wealth vector for each portfolio, calculated as follows:
    # - B contains the weight vectors of all portfolios; X contains the daily returns of each asset
    # - By matrix algebra, BX gives the daily returns of each portfolio over the past year
    # - The product of BX along axis 1 (time) is the annual return of each portfolio,
    #   raised to the 20th power to emulate a 20-year return.
    S = np.prod(np.dot(B,X), axis=1) ** 20  # no '- 1' here; Prof. Cover's paper does not subtract 1
    
    # Finally, the weights are the average over all possible portfolios,
    # weighted by each portfolio's past return.  We can compute this as SB/|S|
    W = np.dot(S,B)/sum(abs(S))
   
    # Store the weights in a context variable.  We calculate this weekly.
    # The actual ordering of shares is performed in the rebalance function
    i = 0
    for sec in returns:
        log.info('%4s: % 2.1f (%s)' % (sec.symbol, W[i] *100, sec.security_name))
        if sec in data:
            # We keep the weights long-only.
            # After we take the weighted average over all portfolio returns,
            # the weight for a security can end up negative.
            # That does not mean we should short it;
            # it means we simply should not invest in securities with negative weight
            context.weights[sec] = max(0,W[i])
        i = i + 1
     
        
# From the target weights, calculate how many shares we should own
def rebalance(context, data):
    # Take a 3-day average to avoid over-reacting to daily price fluctuations
    prices = history(3, frequency='1d', field='price', ffill=False).mean()
    
    for sec in context.weights:
        
        # Target weight for this asset
        target_weight = context.weights[sec] * context.lever

        # How many shares should we be holding?
        target_share = context.portfolio.portfolio_value * target_weight / prices[sec]
        
        # Record target shares
        context.shares[sec] = target_share
        
def execute(context, data):
    # Average daily trading volume over the last 3 days
    tradingvolume = history(3, frequency='1d', field='volume', ffill=True).mean()
    
    for sec in context.shares:
        
        # If the security has no data, skip
        if sec not in data:
            continue
            
        # If we still have outstanding orders, skip
        if sec in get_open_orders():
            continue
        
        # How many shares should we be holding?
        target_share = context.shares[sec]
        
        # How many shares do we have now?
        current_share = context.portfolio.positions[sec].amount
        
        # The trade size is the gap between the current and target shares
        trade_share = target_share - current_share
        
        # Cap each trade at 1/5 of the average per-minute volume (390 minutes per day)
        trade_share = min(trade_share,  tradingvolume[sec]/390/5)    # for buying shares
        trade_share = max(trade_share, -tradingvolume[sec]/390/5)    # for selling shares
        
        # Don't trade less than $1000, to save on commissions
        if abs(trade_share * data[sec].price) < 1000:
            continue
        
        # Make the order 
        order_target(sec, current_share + trade_share)
    
def handle_data(context, data):
    w = context.weights
    record(equities =    sum(w[s] for s in context.equities    if w[s] > 0))
    record(fixedincome = sum(w[s] for s in context.fixedincome if w[s] > 0))
    record(realassets =  sum(w[s] for s in context.realasset   if w[s] > 0))
    record(cash = max(0,context.portfolio.cash) / context.portfolio.portfolio_value)
    execute(context, data)

   
# Thanks to the smart implementation by the user 'bar' on Stack Overflow
# http://stackoverflow.com/questions/6750298/efficient-item-binning-algorithm-itertools-numpy
def binnings(n, k, cache={}):
    if n == 0:
        return np.zeros((1, k))
    if k == 0:
        return np.empty((0, 0))
    args = (n, k)
    if args in cache:
        return cache[args]
    a = binnings(n - 1, k, cache)
    a1 = a + (np.arange(k) == 0)
    b = binnings(n, k - 1, cache)
    b1 = np.hstack((np.zeros((b.shape[0], 1)), b))
    result = np.vstack((a1, b1))
    cache[args] = result
    return result





"But interestingly, the original -1 version has better return and drawdown"

As expected, both versions have about the same risk-adjusted return in "normal" situations. I ran a backtest post-crisis (starting 1 January 2010) and the new/corrected version has higher beta (hence the bigger drawdown) and higher return, but about the same Sharpe ratio.

The difference comes from the first version going to cash in the Great Recession correction. You should probably focus on that and try to understand why they differ.

Hi Naoki,
Yes, the paper doesn't subtract 1 from the return. Your first script also used sum(abs()) as the denominator, which leads to negative weights. The idea is to weight the performance of all possible constant-rebalanced portfolios; subtracting 1 or not doesn't change the idea of weighting based on performance, but that sum(abs()) did make a difference.
Also, I think the paper doesn't suggest using a particular look-back period. You could start on any day with equal weights and keep rebalancing from that day based on actual performance.

A couple of things.
I tried this algorithm from 2004 and it didn't seem to do well; any idea why?
Also, I haven't tried all of this, but can we extend this algorithm with the following ETFs?
1. An SPX short ETF for the bear market. This didn't work for me.
2. Some international ETFs, like Asia-Pacific, Europe, emerging markets, etc.

Any thoughts?

Hi, this is my first comment on Quantopian. Nice to meet you here.

Hi Saravanan, it does not work well from 2004 because some of the ETFs weren't around back then (and the algo needs one additional year of back data as well). The SPX short ETF does not work well because it's an inverse of SPY. The Universal Portfolio considers every possible combination of stocks in the universe, but putting a large-cap ETF (close to SPY) and its inverse together distorts the calculation.

By the way, this is not the kind of algo Quantopian is looking for at the moment. I would say there are three types of algo trading: portfolio optimization (like this one), momentum (going with the trend) and arbitrage (buying and selling something at the same time). Quantopian is looking for arbitrage-type algos (i.e. low beta), because they can create value regardless of whether the stock market goes up or down, and that's what institutional investors want from hedge funds, to diversify their portfolios.

Nevertheless, Universal Portfolios is a cool theory with mathematical backing, and it is an interesting problem to code. I hope the techniques and tricks used in the script will be useful in creating a great algo.

Starting with $1 million, the first algo exchanges $4.5 million to profit $2 million, for a return (taking leverage into account) of only 45%. The second is 41%. Aside from that, the flat line in '08 looks great. It sure would be good to even make money during that downturn; more difficult than expected when I restrict myself to only what I think can truly do well going forward.

Naoki,

I like this code a lot and wanted to experiment. It worked for me the other day, but now I get the following error:

Something went wrong. Sorry for the inconvenience. Try using the built-in debugger to analyze your code. If you would like help, send us an email.
ValueError: cannot convert float NaN to integer
There was a runtime error on line 171.

Line 171 points to your execute function. It appears a NaN is creeping into the data, probably on this line:

tradingvolume = history(3, frequency='1d', field='volume', ffill=True).mean()

but doesn't mean() adjust for NaNs by using a 0 value?

I then inserted the following:

 if  pd.isnull(tradingvolume[sec]):  
      continue

but this did not eliminate the error.

Before I go crazy with this, I figure somebody else has already caught and fixed this bug. BTW, it comes up in a run for the dates 01/01/2003 - 01/01/2005.
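For what it's worth, pandas' mean() already skips NaNs rather than treating them as 0, but if every bar in the window is NaN (e.g. an ETF that wasn't trading yet in 2003), the mean itself is NaN and the subsequent share calculation blows up. A small demonstration of the two cases:

```python
import numpy as np
import pandas as pd

partial = pd.Series([100.0, np.nan, 300.0])
assert partial.mean() == 200.0        # partial NaNs are skipped, not zero-filled

all_nan = pd.Series([np.nan, np.nan, np.nan])
assert np.isnan(all_nan.mean())       # all-NaN window -> NaN mean; guard before dividing
```

So a guard like the pd.isnull check above needs to run before the volume is used in any arithmetic, not after.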

I know this is really old but how can I port over the logic that determines the market is in a downturn and chooses to hold cash?

I would like to use it for other Pipeline algos I have, and especially the one attached. I know the attached uses XIV, but we could change that to prove the logic works during 2008. Thanks!

#https://www.quantopian.com/posts/for-robinhood-trading 
#Tim Vidmar  3/29/2016 original
#Here is the latest incarnation of the algo, with garyha's PvR routine, and Luca's cash management. Thanks for your brilliant code, guys!

#It no longer obeys the T+3 rule, since it has been apparently abandoned by Robinhood in the meantime, and it stays in positive cash all the time.

#It needs a warm-up period, though, to come to full leverage (1.0).

#Paul Stearns  4/6/2016
#Well if you've got the stomach for it, here is a version that adds another position to the mix, and fiddles with how much to bet on the various positions.

#Pointers on max drawdown reduction without hammering the return would be good.

class ExposureMngr(object):
    
    def __init__(self, target_leverage = 1.0, target_long_exposure_perc = 0.50, target_short_exposure_perc = 0.50):   
        self.target_leverage            = target_leverage
        self.target_long_exposure_perc  = target_long_exposure_perc              
        self.target_short_exposure_perc = target_short_exposure_perc           
        self.short_exposure             = 0.0
        self.long_exposure              = 0.0
        self.open_order_short_exposure  = 0.0
        self.open_order_long_exposure   = 0.0
      
    def get_current_leverage(self, context, consider_open_orders = True):
        curr_cash = context.portfolio.cash - (self.short_exposure * 2)
        if consider_open_orders:
            curr_cash -= self.open_order_short_exposure
            curr_cash -= self.open_order_long_exposure
        curr_leverage = (context.portfolio.portfolio_value - curr_cash) / context.portfolio.portfolio_value
        return curr_leverage

    def get_exposure(self, context, consider_open_orders = True):
        long_exposure, short_exposure = self.get_long_short_exposure(context, consider_open_orders)
        return long_exposure + short_exposure
    
    def get_long_short_exposure(self, context, consider_open_orders = True):
        long_exposure         = self.long_exposure
        short_exposure        = self.short_exposure
        if consider_open_orders:
            long_exposure  += self.open_order_long_exposure
            short_exposure += self.open_order_short_exposure     
        return (long_exposure, short_exposure)
    
    def get_long_short_exposure_pct(self, context, consider_open_orders = True, consider_unused_cash = True):
        long_exposure, short_exposure = self.get_long_short_exposure(context, consider_open_orders)        
        total_cash = long_exposure + short_exposure
        if consider_unused_cash:
            total_cash += self.get_available_cash(context, consider_open_orders)
        long_exposure_pct   = long_exposure  / total_cash if total_cash > 0 else 0
        short_exposure_pct  = short_exposure / total_cash if total_cash > 0 else 0
        return (long_exposure_pct, short_exposure_pct)
    
    def get_available_cash(self, context, consider_open_orders = True):
        curr_cash = context.portfolio.cash - (self.short_exposure * 2)
        if consider_open_orders:
            curr_cash -= self.open_order_short_exposure
            curr_cash -= self.open_order_long_exposure            
        leverage_cash = context.portfolio.portfolio_value * (self.target_leverage - 1.0)
        return curr_cash + leverage_cash
          
    def get_available_cash_long_short(self, context, consider_open_orders = True):
        total_available_cash  = self.get_available_cash(context, consider_open_orders)
        long_exposure         = self.long_exposure
        short_exposure        = self.short_exposure
        if consider_open_orders:
            long_exposure  += self.open_order_long_exposure
            short_exposure += self.open_order_short_exposure
        current_exposure       = long_exposure + short_exposure + total_available_cash
        target_long_exposure  = current_exposure * self.target_long_exposure_perc
        target_short_exposure = current_exposure * self.target_short_exposure_perc        
        long_available_cash   = target_long_exposure  - long_exposure 
        short_available_cash  = target_short_exposure - short_exposure
        return (long_available_cash, short_available_cash)
    
    def update(self, context, data):
    
        self.open_order_short_exposure  = 0.0
        self.open_order_long_exposure   = 0.0
        for stock, orders in get_open_orders().iteritems():
            # Use the API-2 data object, consistent with data.history in rebalance
            if not data.can_trade(stock):
                continue
            price = data.current(stock, 'price')
            amount = 0 if stock not in context.portfolio.positions else context.portfolio.positions[stock].amount
            for oo in orders:
                order_amount = oo.amount - oo.filled
                if order_amount < 0 and amount <= 0:
                    self.open_order_short_exposure += (price * -order_amount)
                elif order_amount > 0 and amount >= 0:
                    self.open_order_long_exposure  += (price * order_amount)
        
        self.short_exposure = 0.0
        self.long_exposure  = 0.0
        for stock in context.portfolio.positions:
            amount = context.portfolio.positions[stock].amount
            last_sale_price = context.portfolio.positions[stock].last_sale_price
            if amount < 0:
                self.short_exposure += (last_sale_price * -amount)
            elif amount > 0:
                self.long_exposure  += (last_sale_price * amount)
        

def initialize(context):
        
    context.exposure = ExposureMngr(target_leverage = .99,
                                    target_long_exposure_perc = .99,
                                    target_short_exposure_perc = 0.0)

    schedule_function(rebalance, 
                      date_rules.every_day(),
                      time_rules.market_close(hours = 2))  



    equitySymbol = sid(24744)
#    equitySymbol = symbol('RSP')
    equityFuturesSymbol = sid(40516)
#    equityFuturesSymbol = symbol('XIV')
    treasurySymbol = sid(22887)
#    treasurySymbol = symbol('EDV')
    treasuryFuturesSymbol = sid(38294)
#    treasuryFuturesSymbol = symbol('TMF')

    context.assets = [equitySymbol, equityFuturesSymbol, treasurySymbol, treasuryFuturesSymbol]
    
    context.e = equitySymbol
    context.ef = equityFuturesSymbol
    context.t = treasurySymbol
    context.tf = treasuryFuturesSymbol
    context.betTotalPercent = 0.30000
    context.betLowPercent = 0.8
    context.betHighPercent = 0.2
    context.historyDays = 100
    context.buyTrx = 0
    context.sellTrx = 0

def rebalance(context,data):
#   Get the last historyDays days of prices for the universe, dropping columns where the price is NaN.
    P = data.history(context.assets, 'price', context.historyDays, '1d').dropna(axis = 1)
#   One value per asset: the median of the most recent tenth of the window (10 days)
#   divided by the median of the entire window, minus 1.
    x = (P.tail(context.historyDays / 10).median() / P.median() - 1).dropna()

#   Check that x contains every member of our universe.
    if context.e not in x.index:
        return
    if context.t not in x.index:
        return
    if context.tf not in x.index:
        return
    if context.ef not in x.index:
        return   
    
#   Check if the asset has a known last price and is currently listed on a supported exchange. 
    if not data.can_trade(context.e):
       return
    if not data.can_trade(context.t):
       return
    if not data.can_trade(context.tf):
       return
    if not data.can_trade(context.ef):
       return      

#   Create a boolean Series: True where the 10-day median exceeds the 100-day median.
    signal = (x > 0)

#   Check that signal contains entries for every member of our universe.
    if context.e not in signal.index:
        return
    if context.t not in signal.index:
        return
    if context.tf not in signal.index:
        return
    if context.ef not in signal.index:
        return
    
    if get_open_orders():
        return

#   Go long equities when both the equity and equity-futures signals are positive
#   and tripled equity momentum exceeds treasury momentum.
    go = (signal[context.e] and signal[context.ef]) and ((x[context.e] + x[context.ef]) * 3 > (x[context.t] + x[context.tf]))
    
    bet = context.portfolio.portfolio_value * context.betTotalPercent
    
    context.exposure.update(context, data)
    
    long_cash, short_cash = \
    context.exposure.get_available_cash_long_short(context)

    if go:
        if long_cash > bet:
            if x[context.ef] > x[context.e]:
                betFuturesPercent = context.betLowPercent
                betEquityPercent = context.betHighPercent
            else:
                betFuturesPercent = context.betHighPercent
                betEquityPercent = context.betLowPercent
            order_value(context.ef, bet * betFuturesPercent)
            context.buyTrx += 1
            order_value(context.e, bet * betEquityPercent)
            context.buyTrx += 1
        order_target(context.t, 0.0)
        if context.buyTrx > context.sellTrx:
            context.sellTrx += 1
        order_target(context.tf, 0.0)
        if context.buyTrx > context.sellTrx:
            context.sellTrx += 1
    else:
        if long_cash > bet:
            if x[context.t] > x[context.tf]:
                betTreasuryPercent = context.betLowPercent
                betTreasuryFuturesPercent = context.betHighPercent
            else:
                betTreasuryPercent = context.betHighPercent
                betTreasuryFuturesPercent = context.betLowPercent
            order_value(context.t,  bet * betTreasuryPercent)
            context.buyTrx += 1
            order_value(context.tf,  bet * betTreasuryFuturesPercent)
            context.buyTrx += 1
        order_target(context.e, 0.0)                                          
        if context.buyTrx > context.sellTrx:
            context.sellTrx += 1
        order_target(context.ef, 0.0)
        if context.buyTrx > context.sellTrx:
            context.sellTrx += 1

       
    #record(go = go)
    
    pvr(context, data)


def handle_data(context,data):    

    pass


def pvr(context, data):  
    ''' Custom chart and/or log of profit_vs_risk returns and related information  
    '''  
    # # # # # # # # # #  Options  # # # # # # # # # #  
    record_max_lvrg = 1         # Maximum leverage encountered  
    record_leverage = 0         # Leverage (context.account.leverage)  
    record_q_return = 0         # Quantopian returns (percentage)  
    record_pvr      = 1         # Profit vs Risk returns (percentage)  
    record_pnl      = 0         # Profit-n-Loss  
    record_shorting = 1         # Total value of any shorts  
    record_overshrt = 0         # Shorts beyond longs+cash  
    record_risk     = 0         # Risked, max cash spent or shorts beyond longs+cash  
    record_risk_hi  = 1         # Highest risk overall  
    record_cash     = 0         # Cash available  
    record_cash_low = 1         # Any new lowest cash level  
    logging         = 1         # Also to logging window conditionally (1) or not (0)  
    log_method      = 'risk_hi' # 'daily' or 'risk_hi'

    from pytz import timezone   # Python will only do once, makes this portable.  
                                #   Move to top of algo for better efficiency.  
    c = context  # Brevity is the soul of wit -- Shakespeare [for efficiency, readability]  
    if 'pvr' not in c:  
        date_strt = get_environment('start').date()  
        date_end  = get_environment('end').date()  
        cash_low  = c.portfolio.starting_cash  
        mode      = get_environment('data_frequency')  
        c.pvr = {  
            'max_lvrg': 0,  
            'risk_hi' : 0,  
            'days'    : 0.0,  
            'date_prv': '',  
            'cash_low': cash_low,  
            'date_end': date_end,  
            'mode'    : mode,  
            'run_str' : '{} to {}  {}  {}'.format(date_strt,date_end,int(cash_low),mode)  
        }  
        log.info(c.pvr['run_str'])  
    pvr_rtrn     = 0            # Profit vs Risk returns based on maximum spent  
    profit_loss  = 0            # Profit-n-loss  
    shorts       = 0            # Shorts value  
    longs        = 0            # Longs  value  
    overshorts   = 0            # Shorts value beyond longs plus cash  
    new_risk_hi  = 0  
    new_cash_low = 0                           # To trigger logging in cash_low case  
    lvrg         = c.account.leverage          # Standard leverage, in-house  
    date         = get_datetime().date()       # To trigger logging in daily case  
    cash         = c.portfolio.cash  
    start        = c.portfolio.starting_cash  
    cash_dip     = int(max(0, start - cash))  
    q_rtrn       = 100 * (c.portfolio.portfolio_value - start) / start

    if int(cash) < c.pvr['cash_low']:                # New cash low  
        new_cash_low = 1  
        c.pvr['cash_low']   = int(cash)  
        if record_cash_low:  
            record(CashLow = int(c.pvr['cash_low'])) # Lowest cash level hit

    if record_max_lvrg:  
        if c.account.leverage > c.pvr['max_lvrg']:  
            c.pvr['max_lvrg'] = c.account.leverage  
            record(MaxLv = c.pvr['max_lvrg'])        # Maximum leverage

    if record_pnl:  
        profit_loss = c.portfolio.pnl  
        record(PnL = profit_loss)                    # "Profit and Loss" in dollars

    for p in c.portfolio.positions:  
        shrs = c.portfolio.positions[p].amount
        shrs_price = data.current(p, 'price')     
        if shrs < 0:  
            shorts += int(abs(shrs * shrs_price))  
        if shrs > 0:  
            longs  += int(shrs * shrs_price)

    if shorts > longs + cash: overshorts = shorts             # Shorts when too high
    if record_shorting: record(Shorts  = shorts)              # Shorts value as a positive
    if record_overshrt: record(OvrShrt = overshorts)          # Shorts value as a positive
    if record_cash:     record(Cash = int(c.portfolio.cash))  # Cash  
    if record_leverage: record(Lvrg = c.account.leverage)     # Leverage

    risk = int(max(cash_dip, shorts))  
    if record_risk: record(Risk = risk)       # Amount in play, maximum of shorts or cash used

    if risk > c.pvr['risk_hi']:  
        c.pvr['risk_hi'] = risk  
        new_risk_hi = 1

        if record_risk_hi:  
            record(RiskHi = c.pvr['risk_hi']) # Highest risk overall

    if record_pvr:      # Profit_vs_Risk returns based on max amount actually spent (risk high)  
        if c.pvr['risk_hi'] != 0:     # Avoid zero-divide  
            pvr_rtrn = 100 * (c.portfolio.portfolio_value - start) / c.pvr['risk_hi']  
            record(PvR = pvr_rtrn)            # Profit_vs_Risk returns

    if record_q_return:  
        record(QRet = q_rtrn)                 # Quantopian returns to compare to pvr returns curve

    def _minute():   # To preface each line with minute of the day.  
        if get_environment('data_frequency') == 'minute':  
            bar_dt = get_datetime().astimezone(timezone('US/Eastern'))  
            minute = (bar_dt.hour * 60) + bar_dt.minute - 570  # (-570 = 9:31a)  
            return str(minute).rjust(3)  
        return ''    # Daily mode, just leave it out.

    def _pvr_():  
            log.info('PvR {} %/day     {}'.format(  
                '%.4f' % (pvr_rtrn / c.pvr['days']), c.pvr['run_str']))  
            log.info('  Profited {} on {} activated/transacted for PvR of {}%'.format(  
                '%.0f' % (c.portfolio.portfolio_value - start), '%.0f' % c.pvr['risk_hi'],  
                '%.1f' % pvr_rtrn))  
            log.info('  QRet {} PvR {} CshLw {} MxLv {} RskHi {} Shrts {}'.format(  
                '%.2f' % q_rtrn, '%.2f' % pvr_rtrn, '%.0f' % c.pvr['cash_low'],  
                '%.2f' % c.pvr['max_lvrg'], '%.0f' % c.pvr['risk_hi'], '%.0f' % shorts))

    if logging:  
        if log_method == 'risk_hi' and new_risk_hi \
          or log_method == 'daily' and c.pvr['date_prv'] != date \
          or new_cash_low:  
            qret    = ' QRet '   + '%.1f' % q_rtrn  
            lv      = ' Lv '     + '%.1f' % lvrg              if record_leverage else ''  
            pvr     = ' PvR '    + '%.1f' % pvr_rtrn          if record_pvr      else ''  
            pnl     = ' PnL '    + '%.0f' % profit_loss       if record_pnl      else ''  
            csh     = ' Cash '   + '%.0f' % cash              if record_cash     else ''  
            shrt    = ' Shrt '   + '%.0f' % shorts            if record_shorting else ''  
            ovrshrt = ' Shrt '   + '%.0f' % overshorts        if record_overshrt else ''  
            risk    = ' Risk '   + '%.0f' % risk              if record_risk     else ''  
            mxlv    = ' MaxLv '  + '%.2f' % c.pvr['max_lvrg'] if record_max_lvrg else ''  
            csh_lw  = ' CshLw '  + '%.0f' % c.pvr['cash_low'] if record_cash_low else ''  
            rsk_hi  = ' RskHi '  + '%.0f' % c.pvr['risk_hi']  if record_risk_hi  else ''  
            log.info('{}{}{}{}{}{}{}{}{}{}{}{}'.format(_minute(),  
               lv, mxlv, qret, pvr, pnl, csh, csh_lw, shrt, ovrshrt, risk, rsk_hi))  
    if c.pvr['date_prv'] != date: c.pvr['days'] += 1.0  
    if c.pvr['days'] % 130 == 0 and _minute() == '100': _pvr_()  
    c.pvr['date_prv'] = date  
    if c.pvr['date_end'] == date:  
        # Summary on last minute of last day.  
        # If using schedule_function(), backtest last day/time may need to match for this to execute.  
        log.info(' Buys {} Sells {}'.format('%.0f' % context.buyTrx, '%.0f' % context.sellTrx))
        if 'pvr_summary_done' not in c: c.pvr_summary_done = 0  
        log_summary = 0  
        if c.pvr['mode'] == 'daily' and get_datetime().date() == c.pvr['date_end']:  
            log_summary = 1  
        elif c.pvr['mode'] == 'minute' and get_datetime() == get_environment('end'):  
            log_summary = 1  
        if log_summary and not c.pvr_summary_done:  
            _pvr_()  
            c.pvr_summary_done = 1

    

    
    
    
    
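The rebalance signal above compares the median of the most recent tenth of the lookback window against the median of the whole window. Here is a minimal standalone sketch of that computation, assuming pandas and NumPy are available; the asset names and synthetic prices are purely illustrative:

```python
import numpy as np
import pandas as pd

def median_momentum_signal(prices, history_days=100):
    """Ratio of the last history_days/10 median to the full-window median, minus 1.
    Positive values mean the recent median sits above the long-run median."""
    window = prices.tail(history_days)
    x = (window.tail(history_days // 10).median() / window.median() - 1).dropna()
    return x, (x > 0)

# Synthetic prices: one steadily rising asset, one flat asset
idx = pd.date_range('2015-01-01', periods=100, freq='B')
prices = pd.DataFrame({
    'trending': np.linspace(100, 120, 100),
    'flat': np.full(100, 50.0),
}, index=idx)

x, signal = median_momentum_signal(prices)
print(signal['trending'], signal['flat'])  # True False
```

On this toy data the rising series yields a positive x and a True signal, while the flat series yields exactly 0 and False, matching the `x > 0` cutoff used in `rebalance`.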

Here is another algo I would like to use that logic in. Would it be possible to implement it with the attached?

"""
This is a PEAD (post-earnings-announcement drift) strategy based on Estimize's
earnings estimates. Estimize is a service that aggregates financial estimates
from independent, buy-side, and sell-side analysts as well as students and
professors. You can run this algorithm yourself by getting the free sample
version of Estimize's consensus dataset and EventVestor's Earnings Calendar
dataset at:

- https://www.quantopian.com/data/eventvestor/earnings_calendar
- https://www.quantopian.com/data/estimize/revisions

Several of the variables are meant for you to play around with:
1. context.days_to_hold: the number of days to hold before exiting a position
2. context.min/max_surprise: the min/max % surprise required before trading on a signal
"""

import numpy as np

from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, AverageDollarVolume
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.data.accern import alphaone as alphaone

from quantopian.pipeline.data.estimize import (
    ConsensusEstimizeEPS,
    ConsensusWallstreetEPS,
    ConsensusEstimizeRevenue, 
    ConsensusWallstreetRevenue
)

# The sample and full version is found through the same namespace
# https://www.quantopian.com/data/eventvestor/earnings_calendar
# Sample date ranges: 01 Jan 2007 - 10 Feb 2014
from quantopian.pipeline.data.eventvestor import EarningsCalendar
from quantopian.pipeline.factors.eventvestor import (
    BusinessDaysUntilNextEarnings,
    BusinessDaysSincePreviousEarnings
)

# Custom factor that computes the percent EPS surprise versus the
# Estimize consensus
class PercentSurprise(CustomFactor):
    window_length = 1
    inputs = [ConsensusEstimizeEPS.previous_actual_value,
              ConsensusEstimizeEPS.previous_mean]

    # Percent surprise: (actual - estimate) / estimate
    def compute(self, today, assets, out, actual_eps, estimize_eps):
        out[:] = (actual_eps[-1] - estimize_eps[-1]) / estimize_eps[-1]
        
"""       
class DailySentimentByImpactScore(CustomFactor):  
    # Economic Hypothesis: Accern reports both an `impact score`  
    # and `article sentiment`. The `impact score` is used to measure  
    # the likelihood that a security's price changes by more than 1%  
    # in the following day. The `article sentiment` is a quantified daily  
    # measure of news & blog sentiment about a given security. This combined  
    # measure of `impact score` and `article sentiment` may hold information  
    # about price changes in the following day.  
    inputs = [alphaone.article_sentiment, alphaone.impact_score]  
    window_length = 1

    def compute(self, today, assets, out, sentiment, impact_score):  
        out[:] = sentiment * impact_score  
"""       
class WeightedSentimentByVolatility(CustomFactor):  
    # Economic Hypothesis: Sentiment volatility can be an indicator that  
    # public news is changing rapidly about a given security. So securities  
    # with a high level of sentiment volatility may indicate a change in  
    # momentum for that stock's price.  
    inputs = [alphaone.article_sentiment]  
    window_length = 2

    def compute(self, today, assets, out, sentiment):  
        out[:] = np.nanstd(sentiment, axis=0) * np.nanmean(sentiment, axis=0)  

def make_pipeline(context):
    # Create our pipeline
    pipe = Pipeline()
    
    # Instantiating our factors
    factor = PercentSurprise()
    weighted_sentiment = WeightedSentimentByVolatility()
    
    
    # Screen out penny stocks and low liquidity securities.
    dollar_volume = AverageDollarVolume(window_length=20)
    is_liquid = dollar_volume > 10**7
    
    #Filter down stocks using sentiment
    top_sentiment = weighted_sentiment.percentile_between(85, 100, mask=is_liquid)

    # Filter down to stocks whose surprise falls inside the configured band
    longs = (factor >= context.min_surprise) & (factor <= context.max_surprise)
    #shorts = (factor <= -context.min_surprise) & (factor >= -context.max_surprise)

    # Add long/shorts to the pipeline
    pipe.add(longs, "longs")
    pipe.add(top_sentiment, "top_sentiment")
    #pipe.add(shorts, "shorts")
    pipe.add(BusinessDaysSincePreviousEarnings(), 'pe')
    
    # Set our pipeline screens
    pipe.set_screen(longs & is_liquid & top_sentiment & (weighted_sentiment != 0))
 
    return pipe
        
def initialize(context):
    #: Set commissions and slippage to 0 to determine pure alpha
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))
    set_slippage(slippage.FixedSlippage(spread=0))

    #: Declare the number of days to hold; change this to what you want
    context.days_to_hold = 5
    #: Tracks the stocks we currently hold and how many days we've held them: dict[stock: days_held]
    context.stocks_held = {}

    #: Declares the minimum magnitude of percent surprise
    context.min_surprise = .00
    context.max_surprise = .04
    
    #: OPTIONAL - Initialize our Hedge
    # See order_positions for hedging logic
    #context.spy = sid(8554)
    
    # Make our pipeline
    attach_pipeline(make_pipeline(context), 'estimize')

    
    # Log our positions 30 minutes before market close
    schedule_function(func=log_positions,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))
    # Order our positions
    schedule_function(func=order_positions,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_open())

def before_trading_start(context, data):
    # Screen for securities whose earnings release was exactly
    # 1 business day ago; the negative-surprise leg below is commented out
    results = pipeline_output('estimize')
    results = results[results['pe'] == 1]
    assets_in_universe = results.index
    context.positive_surprise = assets_in_universe#[results.longs]
    #context.negative_surprise = assets_in_universe[results.shorts]
    log.info(results.iloc[:5])
    log.info("There are %s positive surprises today " % 
             (len(context.positive_surprise)))

def log_positions(context, data):
    #: Get all positions
    all_positions = "Current positions for %s : " % (str(get_datetime()))
    for pos in context.portfolio.positions:
        if context.portfolio.positions[pos].amount != 0:
            all_positions += "%s at %s shares, " % (pos.symbol, context.portfolio.positions[pos].amount)
    log.info(all_positions)
        
def order_positions(context, data):
    """
    Main ordering conditions to always order an equal percentage in each position
    so it does a rolling rebalance by looking at the stocks to order today and the stocks
    we currently hold in our portfolio.
    """
    port = context.portfolio.positions
    #record(leverage=context.account.leverage)

    # Age each held position by a day and exit any we've held for
    # days_to_hold or longer
    for security in port:
        context.stocks_held[security] = context.stocks_held.get(security, 0) + 1
        if context.stocks_held[security] >= context.days_to_hold:
            if port[security].amount == 0:
                del context.stocks_held[security]
            else:
                order_target_percent(security, 0)

    # Check our current positions
    current_positive_pos = [pos for pos in port if (port[pos].amount > 0 and pos in context.stocks_held)]
    #current_negative_pos = [pos for pos in port if (port[pos].amount < 0 and pos in context.stocks_held)]
    #negative_stocks = context.negative_surprise.tolist() + current_negative_pos
    positive_stocks = context.positive_surprise.tolist() + current_positive_pos
    
    """
    # Rebalance our negative surprise securities (existing + new)
    for security in negative_stocks:
        can_trade = context.stocks_held.get(security) <= context.days_to_hold or \
                    context.stocks_held.get(security) is None
        if data.can_trade(security) and can_trade:
            order_target_percent(security, -1.0 / len(negative_stocks))
            if context.stocks_held.get(security) is None:
                context.stocks_held[security] = 0
    """
    # Rebalance our positive surprise securities (existing + new)                
    for security in positive_stocks:
        can_trade = context.stocks_held.get(security) is None or \
                    context.stocks_held[security] <= context.days_to_hold
        if data.can_trade(security) and can_trade:
            order_target_percent(security, 1.0 / len(positive_stocks))
            if context.stocks_held.get(security) is None:
                context.stocks_held[security] = 0

    #: Get the total amount ordered for the day
    amount_ordered = 0 
    for order in get_open_orders():
        for oo in get_open_orders()[order]:
            amount_ordered += oo.amount * data.current(oo.sid, 'price')

    #: Order our hedge
    # order_target_value(context.spy, -amount_ordered)
    # context.stocks_held[context.spy] = 0
    # log.info("We currently have a net order of $%0.2f and will hedge with SPY by ordering $%0.2f" % (amount_ordered, -amount_ordered))
    
    
    
def handle_data(context, data):
    
    for security in context.portfolio.positions:
        can_trade = context.stocks_held.get(security, 0) <= 1
        if data.can_trade(security) and can_trade:
            current_position = context.portfolio.positions[security].amount
            cost_basis = context.portfolio.positions[security].cost_basis
            price = data.current(security, 'price')
            limit = cost_basis*1.04
            stop = cost_basis*0.96
            if price >= limit and current_position > 0:
                order_target_percent(security, 0)
                log.info( str(security) + ' Sold for Profit')
                #del context.stocks_held[security]
            if price <= stop and current_position > 0:
                order_target_percent(security, 0)
                log.info( str(security) + ' Sold for Loss')
                #del context.stocks_held[security]
                
    record(leverage=context.account.leverage)
    
    
    
    
    
    
    
    
    
    
    
    
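For intuition, the surprise band and holding-period bookkeeping this PEAD algo relies on can be sketched outside Quantopian in plain Python. This is a simplified illustration, not the algorithm itself: the function names are my own, the zero-estimate guard does not exist in `PercentSurprise`, and exits here drop out of tracking immediately rather than waiting for fills:

```python
def percent_surprise(actual_eps, estimate_eps):
    """(actual - estimate) / estimate, the same ratio PercentSurprise computes."""
    if estimate_eps == 0:
        return float('nan')  # guard added for this sketch only
    return (actual_eps - estimate_eps) / estimate_eps

def passes_band(surprise, min_surprise=0.00, max_surprise=0.04):
    """Long signal: surprise inside [min_surprise, max_surprise], as in make_pipeline."""
    return min_surprise <= surprise <= max_surprise

def age_and_exit(stocks_held, current_positions, days_to_hold=5):
    """Advance each held stock's day count and return the stocks due to exit,
    mirroring the loop at the top of order_positions (simplified)."""
    exits = []
    for stock in list(current_positions):
        stocks_held[stock] = stocks_held.get(stock, 0) + 1
        if stocks_held[stock] >= days_to_hold:
            exits.append(stock)
            del stocks_held[stock]
    return exits

held = {'AAPL': 4, 'MSFT': 1}
print(passes_band(percent_surprise(1.02, 1.00)))  # True: ~2% surprise is inside [0, 4%]
print(age_and_exit(held, ['AAPL', 'MSFT']))       # ['AAPL'] exits on its 5th day; MSFT stays
```

The band is deliberately narrow on the upside (`max_surprise = .04`) so that outsized beats, which are often already priced in at the open, are excluded from the drift trade.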