long/short OLMAR hack

Here's a hack based on the OLMAR algorithm (see http://arxiv.org/ftp/arxiv/papers/1206/1206.4626.pdf). It goes long/short large-cap NASDAQ stocks, with a position in QQQ to neutralize beta, if necessary.
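For reference, the core OLMAR update (algo 2 in the paper) that the code below repeats over multiple look-back windows can be sketched in plain NumPy. This is a minimal illustration, not the backtest code itself: `olmar_update` and `simplex_proj` are my own helper names, and the weights and price relatives are made up.

```python
import numpy as np

def simplex_proj(v, z=1.0):
    # Euclidean projection of v onto the simplex {w : w >= 0, sum(w) = z}
    # (Duchi et al., ICML 2008 -- same method as simplex_projection below)
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u > (css - z) / np.arange(1, len(v) + 1))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def olmar_update(b_t, x_tilde, eps=1.1):
    # One OLMAR step: shift weights toward assets whose moving-average
    # prediction exceeds the last price (price relative > 1), then project
    # back onto the simplex so weights stay non-negative and sum to 1.
    x_bar = x_tilde.mean()
    denom = np.linalg.norm(x_tilde - x_bar) ** 2
    lam = 0.0 if denom == 0 else max(0.0, (eps - np.dot(b_t, x_tilde)) / denom)
    return simplex_proj(b_t + lam * (x_tilde - x_bar))

b_t = np.array([0.25, 0.25, 0.25, 0.25])      # current portfolio weights
x_tilde = np.array([1.05, 0.98, 1.10, 1.00])  # predicted price relatives (MA / last price)
b_next = olmar_update(b_t, x_tilde)
print(b_next)  # weight concentrates in the assets with the highest predicted relatives
```

The long/short twist in the algo below is layered on top of this: the sign of each security's mean-reversion signal is tracked separately (`context.ls`), and a hedge position offsets the resulting net exposure.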

import numpy as np
import pandas as pd
import datetime

def initialize(context):
    
    context.eps = 1.1
    context.leverage = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    context.qqq = sid(19920)
    
def before_trading_start(context,data): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(50)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
def handle_data(context, data):
    
    leverage = context.account.leverage
    
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    
    record(leverage = leverage)
            
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        # if stock.security_end_date < get_datetime() + datetime.timedelta(days=5):  # de-listed ?
        #     context.stocks.remove(stock)
        if stock in security_lists.leveraged_etf_list: # leveraged ETF?
            context.stocks.remove(stock)

    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    num_secs = 0
    
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)

def trade(context,data):
    
    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    prices = history(30*390,'1m','price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)
    
    # skip bar if any orders are open
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
    
    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0
    
    context.ls = {}
    
    for stock in context.stocks:
        context.ls[stock] = 0
    
    for n in range(3,31):
        (weight,weighted_port) = get_weighted_port(data,context,prices,n*390)
        sum_weighted_port += weighted_port
        sum_weights += weight
        
    for stock in context.stocks:
        context.ls[stock] = np.sign(context.ls[stock])
        
    allocation = sum_weighted_port/sum_weights
    allocation = allocation/np.sum(allocation)
        
    rebalance_portfolio(data, context, allocation)
        
def get_weighted_port(data,context,prices,n):
    
    prices = prices.tail(n).as_matrix(context.stocks)
    prices = pd.ewma(prices, span=390)
    
    b_t = np.zeros(len(context.stocks))
    
    # update portfolio
    for i, stock in enumerate(context.stocks):
        b_t[i] = abs(context.portfolio.positions[stock].amount*data[stock].price)
    
    denom = np.sum(b_t)
    # test for divide-by-zero case
    if denom > 0:
        b_t = np.divide(b_t,denom)
    else:     
        b_t = np.ones(len(context.stocks)) / len(context.stocks)

    x_tilde = np.zeros(len(context.stocks))

    b = np.zeros(len(context.stocks))
    
    # find price relative for each security
    for i,stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        price_rel = mean_price/prices[-1,i]
        if price_rel < 1.0:
            price_rel = 1.0/price_rel
            context.ls[stock] += -1
        else:
            context.ls[stock] += 1
        x_tilde[i] = price_rel
    
    ###########################
    # Inside of OLMAR (algo 2)

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod = np.dot(b_t, x_tilde)
    num = context.eps - dot_prod
    denom = (np.linalg.norm((x_tilde-x_bar)))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0 # no portfolio update
    else:     
        lam = max(0, num/denom)
    
    b = b_t + lam*(x_tilde-x_bar)

    b_norm = simplex_projection(b)
    
    weight = np.dot(b_norm,x_tilde)
    
    return (weight,weight*b_norm)

def rebalance_portfolio(data, context, desired_port):
    
    record(sum_port = np.sum(desired_port))
    
    # check for open orders      
    for stock in context.stocks:
        if get_open_orders(stock):
            return
    
    pct_ls = 0
        
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock]*desired_port[i]
        
    scale = 1.0-0.5*abs(pct_ls)
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, scale*context.leverage*context.ls[stock]*desired_port[i])
        
    order_target_percent(context.qqq, -0.5*context.leverage*pct_ls)
    
    record(pct_ls = pct_ls)
    
    for stock in data:
        if stock not in context.stocks + [context.qqq]:
            order_target_percent(stock,0)

def simplex_projection(v, b=1):
    """Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT pmail.ntu.edu.sg
Optimization Problem: min_{w} \| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = z, w_{i} \geq 0

Input: A vector v \in R^{m}, and a scalar z > 0 (default=1)
Output: Projection vector w

:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
"""

    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p+1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho+1)])
    w = (v - theta)
    w[w<0] = 0
    return w

Here's one with QQQ replaced by SPY. Looks a bit better, but it could just be a fluke.

import numpy as np
import pandas as pd
import datetime

def initialize(context):
    
    context.eps = 1.1
    context.leverage = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    # context.qqq = sid(19920)
    context.spy = sid(8554)
    
def before_trading_start(context,data): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(50)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
def handle_data(context, data):
    
    leverage = context.account.leverage
    
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    
    record(leverage = leverage)
            
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        # if stock.security_end_date < get_datetime() + datetime.timedelta(days=5):  # de-listed ?
        #     context.stocks.remove(stock)
        if stock in security_lists.leveraged_etf_list: # leveraged ETF?
            context.stocks.remove(stock)

    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    num_secs = 0
    
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)

def trade(context,data):
    
    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    prices = history(30*390,'1m','price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)
    
    # skip bar if any orders are open
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
    
    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0
    
    context.ls = {}
    
    for stock in context.stocks:
        context.ls[stock] = 0
    
    for n in range(3,31):
        (weight,weighted_port) = get_weighted_port(data,context,prices,n*390)
        sum_weighted_port += weighted_port
        sum_weights += weight
        
    for stock in context.stocks:
        context.ls[stock] = np.sign(context.ls[stock])
        
    allocation = sum_weighted_port/sum_weights
    allocation = allocation/np.sum(allocation)
        
    rebalance_portfolio(data, context, allocation)
        
def get_weighted_port(data,context,prices,n):
    
    prices = prices.tail(n).as_matrix(context.stocks)
    prices = pd.ewma(prices, span=390)
    
    b_t = np.zeros(len(context.stocks))
    
    # update portfolio
    for i, stock in enumerate(context.stocks):
        b_t[i] = abs(context.portfolio.positions[stock].amount*data[stock].price)
    
    denom = np.sum(b_t)
    # test for divide-by-zero case
    if denom > 0:
        b_t = np.divide(b_t,denom)
    else:     
        b_t = np.ones(len(context.stocks)) / len(context.stocks)

    x_tilde = np.zeros(len(context.stocks))

    b = np.zeros(len(context.stocks))
    
    # find price relative for each security
    for i,stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        price_rel = mean_price/prices[-1,i]
        if price_rel < 1.0:
            price_rel = 1.0/price_rel
            context.ls[stock] += -1
        else:
            context.ls[stock] += 1
        x_tilde[i] = price_rel
    
    ###########################
    # Inside of OLMAR (algo 2)

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod = np.dot(b_t, x_tilde)
    num = context.eps - dot_prod
    denom = (np.linalg.norm((x_tilde-x_bar)))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0 # no portfolio update
    else:     
        lam = max(0, num/denom)
    
    b = b_t + lam*(x_tilde-x_bar)

    b_norm = simplex_projection(b)
    
    weight = np.dot(b_norm,x_tilde)
    
    return (weight,weight*b_norm)

def rebalance_portfolio(data, context, desired_port):
    
    record(sum_port = np.sum(desired_port))
    
    # check for open orders      
    for stock in context.stocks:
        if get_open_orders(stock):
            return
    
    pct_ls = 0
        
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock]*desired_port[i]
        
    scale = 1.0-0.5*abs(pct_ls)
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, scale*context.leverage*context.ls[stock]*desired_port[i])
        
    # order_target_percent(context.qqq, -0.5*context.leverage*pct_ls)
    order_target_percent(context.spy, -0.5*context.leverage*pct_ls)
    
    record(pct_ls = pct_ls)
    
    for stock in data:
        if stock not in context.stocks + [context.spy]:
            order_target_percent(stock,0)

def simplex_projection(v, b=1):
    """Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT pmail.ntu.edu.sg
Optimization Problem: min_{w} \| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = z, w_{i} \geq 0

Input: A vector v \in R^{m}, and a scalar z > 0 (default=1)
Output: Projection vector w

:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
"""

    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p+1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho+1)])
    w = (v - theta)
    w[w<0] = 0
    return w

Looks great, Grant. I still enjoy the OLMAR algorithm. I computed a tear sheet for the second one, which looks quite good indeed.


Returns show ~40%, twice the benchmark. Profit vs. Risk is higher, around 50%, since the algo didn't put the entire starting capital at risk.

This algo has low beta, low volatility, and high stability, and it may be the winner of the 6-month contest. But after running a backtest over a full market cycle, I am not as optimistic as Tomas and garyha about this algo's performance, especially in the 2009-2013 period.

# Long-Short OLMAR hack by Grant
# https://www.quantopian.com/posts/long-slash-short-olmar-hack#562cf5e52c637ba0c3000560


import numpy as np
import pandas as pd
import datetime

def initialize(context):
    
    context.eps = 1.1
    context.leverage = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    # context.qqq = sid(19920)
    context.spy = sid(8554)
    
def before_trading_start(context,data): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(50)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
def handle_data(context, data):
    
    leverage = context.account.leverage
    
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    
    record(leverage = leverage)
            
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        # if stock.security_end_date < get_datetime() + datetime.timedelta(days=5):  # de-listed ?
        #     context.stocks.remove(stock)
        if stock in security_lists.leveraged_etf_list: # leveraged ETF?
            context.stocks.remove(stock)

    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    num_secs = 0
    
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)

def trade(context,data):
    
    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    prices = history(30*390,'1m','price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)
    
    # skip bar if any orders are open
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
    
    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0
    
    context.ls = {}
    
    for stock in context.stocks:
        context.ls[stock] = 0
    
    for n in range(3,31):
        (weight,weighted_port) = get_weighted_port(data,context,prices,n*390)
        sum_weighted_port += weighted_port
        sum_weights += weight
        
    for stock in context.stocks:
        context.ls[stock] = np.sign(context.ls[stock])
        
    allocation = sum_weighted_port/sum_weights
    allocation = allocation/np.sum(allocation)
        
    rebalance_portfolio(data, context, allocation)
        
def get_weighted_port(data,context,prices,n):
    
    prices = prices.tail(n).as_matrix(context.stocks)
    prices = pd.ewma(prices, span=390)
    
    b_t = np.zeros(len(context.stocks))
    
    # update portfolio
    for i, stock in enumerate(context.stocks):
        b_t[i] = abs(context.portfolio.positions[stock].amount*data[stock].price)
    
    denom = np.sum(b_t)
    # test for divide-by-zero case
    if denom > 0:
        b_t = np.divide(b_t,denom)
    else:     
        b_t = np.ones(len(context.stocks)) / len(context.stocks)

    x_tilde = np.zeros(len(context.stocks))

    b = np.zeros(len(context.stocks))
    
    # find price relative for each security
    for i,stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        price_rel = mean_price/prices[-1,i]
        if price_rel < 1.0:
            price_rel = 1.0/price_rel
            context.ls[stock] += -1
        else:
            context.ls[stock] += 1
        x_tilde[i] = price_rel
    
    ###########################
    # Inside of OLMAR (algo 2)

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod = np.dot(b_t, x_tilde)
    num = context.eps - dot_prod
    denom = (np.linalg.norm((x_tilde-x_bar)))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0 # no portfolio update
    else:     
        lam = max(0, num/denom)
    
    b = b_t + lam*(x_tilde-x_bar)

    b_norm = simplex_projection(b)
    
    weight = np.dot(b_norm,x_tilde)
    
    return (weight,weight*b_norm)

def rebalance_portfolio(data, context, desired_port):
    
    record(sum_port = np.sum(desired_port))
    
    # check for open orders      
    for stock in context.stocks:
        if get_open_orders(stock):
            return
    
    pct_ls = 0
        
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock]*desired_port[i]
        
    scale = 1.0-0.5*abs(pct_ls)
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, scale*context.leverage*context.ls[stock]*desired_port[i])
        
    # order_target_percent(context.qqq, -0.5*context.leverage*pct_ls)
    order_target_percent(context.spy, -0.5*context.leverage*pct_ls)
    
    record(pct_ls = pct_ls)
    
    for stock in data:
        if stock not in context.stocks + [context.spy]:
            order_target_percent(stock,0)

def simplex_projection(v, b=1):
    """Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT pmail.ntu.edu.sg
Optimization Problem: min_{w} \| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = z, w_{i} \geq 0

Input: A vector v \in R^{m}, and a scalar z > 0 (default=1)
Output: Projection vector w

:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
"""

    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p+1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho+1)])
    w = (v - theta)
    w[w<0] = 0
    return w

Thanks Vladimir,

I've seen the same thing. It could be a matter of parameters being tuned to optimize recent performance.

Grant

Vladimir, I'm neither optimistic nor pessimistic about this algo; I didn't mean to imply that. Although, having thought about it, I'm pessimistic about OLMAR in general: the winning code in the first contest, for example (which was OLMAR), had a drastic -39% drawdown in 2008 when I tested it with a start of early '07 or so.

No, my point was that there's another metric one can use, and it can be advantageous to everyone. It's called (for now) PvR_Ret in the custom chart below, standing for Profit versus Risk (the maximum amount actually laid on the table in exchange for stocks). It would also help you make your point, because, as you can see below, when returns are negative, PvR_Ret is at times ~60% more negative; when returns are positive, it is more positive. Both follow from risk never being as high as the starting capital. It can help you see more clearly during development.
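The arithmetic behind PvR is simple. Here is a sketch with made-up numbers (the function names and values are illustrative, not taken from the algo): a conventional return divides profit by starting capital, while PvR divides the same profit by the peak capital actually at risk, so whenever risk stayed below starting capital, PvR magnifies both gains and losses.

```python
def q_return(portfolio_value, starting_cash):
    # Conventional backtest return: profit relative to starting capital
    return 100.0 * (portfolio_value - starting_cash) / starting_cash

def pvr_return(portfolio_value, starting_cash, risk_hi):
    # Profit vs Risk: same profit, but relative to the most capital
    # ever actually deployed (cash dip or shorts in excess of cash)
    return 100.0 * (portfolio_value - starting_cash) / risk_hi

start = 100000.0    # hypothetical starting capital
end = 110000.0      # hypothetical final portfolio value
risk_hi = 60000.0   # hypothetical peak capital at risk

print(q_return(end, start))             # 10.0
print(pvr_return(end, start, risk_hi))  # ~16.7: magnified because risk < starting capital
```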

Tim Berners-Lee talked about how frustrating it was in the early days trying to promote the World Wide Web he invented. People are automatically resistant to the new. This is like that.

# Long-Short OLMAR hack by Grant
# https://www.quantopian.com/posts/long-slash-short-olmar-hack#562cf5e52c637ba0c3000560


import numpy as np
import pandas as pd
import datetime

def initialize(context):
    
    context.eps = 1.1
    context.leverage = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    # context.qqq = sid(19920)
    context.spy = sid(8554)
    

    schedule_function(info,   date_rules.every_day(),  time_rules.market_close())

    c = context
    c.cash_low = c.portfolio.starting_cash
    c.max_lvrg = 0
    c.max_shrt = 0
    c.risk_hi  = 0
    c.date_prv = ''
    c.date_end = str(get_environment('end').date())
    print '{} to {}'.format(str(get_datetime().date()) , c.date_end)

from pytz import timezone

def info(context, data):
    ''' Custom chart and/or log of profit_vs_risk returns and related information
    '''
    # # # # # # # # # #  Options  # # # # # # # # # #
    record_max_lvrg = 1          # maximum leverage encountered
    record_leverage = 0          # Leverage (context.account.leverage)
    record_q_return = 1          # Quantopian returns (percentage)
    record_pvr      = 1          # Profit vs Risk returns (percentage)
    record_pnl      = 0          # Profit-n-Loss
    record_shorting = 1          # Total value of any shorts
    record_risk     = 0          # Risked, maximum cash spent or shorts in excess of cash at any time
    record_risk_hi  = 1          # Highest risk overall
    record_cash     = 0          # Cash available
    record_cash_low = 0          # Any new lowest cash level
    logging         = 1          # Also log to the logging window conditionally (1) or not (0)
    log_method      = 'risk_hi'  # 'daily' or 'risk_hi'

    c = context                          # For brevity
    new_cash_low = 0                     # To trigger logging in cash_low case
    date = str(get_datetime().date())    # To trigger logging in daily case
    cash = c.portfolio.cash

    if int(cash) < c.cash_low:    # New cash low
        new_cash_low = 1
        c.cash_low   = int(cash)
        if record_cash_low:
            record(CashLow = int(c.cash_low))

    pvr_rtrn      = 0        # Profit vs Risk returns based on maximum spent
    q_rtrn        = 0        # Returns by Quantopian
    profit_loss   = 0        # Profit-n-loss
    shorts        = 0        # Shorts value
    start         = c.portfolio.starting_cash
    cash_dip      = int(max(0, start - cash))

    if record_cash:
        record(cash = int(c.portfolio.cash))  # Cash

    if record_leverage:
        record(Lvrg = c.account.leverage)     # Leverage

    if record_max_lvrg:
        if c.account.leverage > c.max_lvrg:
            c.max_lvrg = c.account.leverage
            record(MaxLvrg = c.max_lvrg)      # Maximum leverage

    if record_pnl:
        profit_loss = c.portfolio.pnl
        record(PnL = profit_loss)             # "Profit and Loss" in dollars

    for p in c.portfolio.positions:
        shrs = c.portfolio.positions[p].amount
        if shrs < 0:
            shorts += int(abs(shrs * data[p].price))

    if record_shorting:
        record(Shorts = shorts)               # Shorts value as a positive

    # Shorts in excess of cash to cover them, a positive value
    shorts_excess = int(shorts - cash) if shorts > cash else 0
    c.max_shrt    = int(max(c.max_shrt, shorts_excess))

    risk = int(max(cash_dip, shorts_excess, shorts))
    if record_risk:
        record(Risk = risk)                   # Amount in play, maximum of shorts or cash used

    new_risk_hi = 0
    if risk > c.risk_hi:
        c.risk_hi = risk
        new_risk_hi = 1

        if record_risk_hi:
            record(Risk_hi = c.risk_hi)       # Highest risk overall

    if record_pvr:      # Profit_vs_Risk returns based on max amount actually spent (risk high)
        if c.risk_hi != 0:     # Avoid zero-divide
            pvr_rtrn = 100 * (c.portfolio.portfolio_value - start) / c.risk_hi
            record(PvR_Ret = pvr_rtrn)        # Profit_vs_Risk returns

    if record_q_return:
        q_rtrn = 100 * (c.portfolio.portfolio_value - start) / start
        record(QRet = q_rtrn)                 # Quantopian returns to compare to pvr returns curve

    if logging:
        if log_method == 'risk_hi' and new_risk_hi \
          or log_method == 'daily' and c.date_prv != date \
          or c.date_end == date \
          or new_cash_low:
            mxlv   = 'MaxLv '   + '%.1f' % c.max_lvrg   if record_max_lvrg else ''
            qret   = 'QRet '    + '%.1f' % q_rtrn       if record_q_return else ''
            pvr    = 'PvR_Ret ' + '%.1f' % pvr_rtrn     if record_pvr      else ''
            pnl    = 'PnL '     + '%.0f' % profit_loss  if record_pnl      else ''
            csh    = 'Cash '    + '%.0f' % cash         if record_cash     else ''
            csh_lw = 'CshLw '   + '%.0f' % c.cash_low   if record_cash_low else ''
            shrt   = 'Shrt '    + '%.0f' % shorts       if record_shorting else ''
            risk   = 'Risk '    + '%.0f' % risk         if record_risk     else ''
            rsk_hi = 'RskHi '  + '%.0f' % c.risk_hi     if record_risk_hi  else ''
            minute = get_datetime().astimezone(timezone('US/Eastern')).time().minute
            log.info('{} {} {} {} {} {} {} {} {} {}'.format(
                    minute, mxlv, qret, pvr, pnl, csh, csh_lw, shrt, risk, rsk_hi))

    if c.date_end == date:    # Log on last day, like cash 125199  portfolio 126890
        log.info('cash {}  portfolio {}'.format(
                int(cash), int(c.portfolio.portfolio_value)))

    c.date_prv = date
    
def before_trading_start(context,data): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(50)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
def handle_data(context, data):
    
    leverage = context.account.leverage
    
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    
    #record(leverage = leverage)
            
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        # if stock.security_end_date < get_datetime() + datetime.timedelta(days=5):  # de-listed ?
        #     context.stocks.remove(stock)
        if stock in security_lists.leveraged_etf_list: # leveraged ETF?
            context.stocks.remove(stock)

    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    num_secs = 0
    
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    #record(num_secs = num_secs)

def trade(context,data):
    
    # check if data exists
    for stock in context.stocks[:]:  # iterate over a copy so removal is safe
        if stock not in data:
            context.stocks.remove(stock)
    
    prices = history(30*390,'1m','price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)
    
    # skip bar if any orders are open
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
    
    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0
    
    context.ls = {}
    
    for stock in context.stocks:
        context.ls[stock] = 0
    
    for n in range(3,31):
        (weight,weighted_port) = get_weighted_port(data,context,prices,n*390)
        sum_weighted_port += weighted_port
        sum_weights += weight
        
    for stock in context.stocks:
        context.ls[stock] = np.sign(context.ls[stock])
        
    allocation = sum_weighted_port/sum_weights
    allocation = allocation/np.sum(allocation)
        
    rebalance_portfolio(data, context, allocation)
        
def get_weighted_port(data,context,prices,n):
    
    prices = prices.tail(n).as_matrix(context.stocks)
    prices = pd.ewma(prices, span=390)
    
    b_t = np.zeros(len(context.stocks))
    
    # update portfolio
    for i, stock in enumerate(context.stocks):
        b_t[i] = abs(context.portfolio.positions[stock].amount*data[stock].price)
    
    denom = np.sum(b_t)
    # test for divide-by-zero case
    if denom > 0:
        b_t = np.divide(b_t,denom)
    else:     
        b_t = np.ones(len(context.stocks)) / len(context.stocks)

    x_tilde = np.zeros(len(context.stocks))

    b = np.zeros(len(context.stocks))
    
    # find price relative for each security
    for i,stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        price_rel = mean_price/prices[-1,i]
        if price_rel < 1.0:
            price_rel = 1.0/price_rel
            context.ls[stock] += -1
        else:
            context.ls[stock] += 1
        x_tilde[i] = price_rel
    
    ###########################
    # Inside of OLMAR (algo 2)

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod = np.dot(b_t, x_tilde)
    num = context.eps - dot_prod
    denom = (np.linalg.norm((x_tilde-x_bar)))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0 # no portfolio update
    else:     
        lam = max(0, num/denom)
    
    b = b_t + lam*(x_tilde-x_bar)

    b_norm = simplex_projection(b)
    
    weight = np.dot(b_norm,x_tilde)
    
    return (weight,weight*b_norm)

def rebalance_portfolio(data, context, desired_port):
    
    #record(sum_port = np.sum(desired_port))
    
    # check for open orders      
    for stock in context.stocks:
        if get_open_orders(stock):
            return
    
    pct_ls = 0
        
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock]*desired_port[i]
        
    scale = 1.0-0.5*abs(pct_ls)
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, scale*context.leverage*context.ls[stock]*desired_port[i])
        
    # order_target_percent(context.qqq, -0.5*context.leverage*pct_ls)
    order_target_percent(context.spy, -0.5*context.leverage*pct_ls)
    
    #record(pct_ls = pct_ls)
    
    for stock in data:
        if stock not in context.stocks + [context.spy]:
            order_target_percent(stock,0)

def simplex_projection(v, b=1):
    """Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT pmail.ntu.edu.sg
Optimization Problem: min_{w} \| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = b, w_{i} \geq 0

Input: A vector v \in R^{m}, and a scalar b > 0 (default=1)
Output: Projection vector w

:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
"""

    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p+1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho+1)])
    w = (v - theta)
    w[w<0] = 0
    return w



@ Thomas,

I'm not quite sure how to pose the question, but is there some way to sort out whether the performance is "real" and not due to chance, overfitting, or the like? It seems that unless one can pull together a story that near-term performance is understood and will persist, attracting capital will be difficult. Or, even if recent performance can't be explained, might there be a way to minimize risk by detecting a deviation from the trend?

@Grant,

That's a very important and timely question, and one I'm currently working on. Our current thinking is to place most emphasis on a strategy's paper or real-money track record and to evaluate whether it matches the backtest. So if you have something that you think would be fund-worthy, definitely set it to paper-trading (we can also see when algorithm code was most recently edited, so we have a pretty good handle on which data was available at the time the algorithm was written). Unfortunately, since that data only accumulates at the speed of time, there are opposing goals: wanting to deploy capital sooner, but also wanting a sufficient out-of-sample (OOS) period so as not to be fooled by randomness.

That's where statistics comes into play, and where a lot of the work on pyfolio ties in. Currently there are three methods that can be used for this:
* The linear cone (the first plot of the returns tear-sheet): you can look at whether the OOS period leaves the 1SD or 2SD cone and deviates from expectations.
* The Bayesian cone (see http://blog.quantopian.com/bayesian-cone/ and http://quantopian.github.io/pyfolio/bayesian/).
* The BEST model (essentially a Bayesian t-test): this compares the in-sample Sharpe ratio to the OOS Sharpe ratio in a Bayesian way. See the original paper: http://www.indiana.edu/~kruschke/BEST/
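For intuition, the linear-cone check can be sketched as follows, assuming a simple random-walk cone (center grows linearly with the in-sample mean daily return; the band widens with sqrt(t) times the in-sample volatility). The function name `linear_cone_breach` is mine, and pyfolio's actual implementation differs in its details:

```python
import numpy as np

def linear_cone_breach(is_returns, oos_returns, n_std=2.0):
    # Fit drift and volatility on the in-sample daily returns.
    mu = np.mean(is_returns)
    sigma = np.std(is_returns)
    # Cone center grows linearly; the band widens with sqrt(t),
    # as for a random walk with drift.
    t = np.arange(1, len(oos_returns) + 1)
    center = mu * t
    band = n_std * sigma * np.sqrt(t)
    # Compare simple (non-compounded) cumulative OOS returns to the cone.
    cum_oos = np.cumsum(oos_returns)
    return (cum_oos < center - band) | (cum_oos > center + band)

# One simulated year of in-sample daily returns.
rng = np.random.RandomState(0)
is_r = rng.normal(0.0005, 0.01, 252)
# An OOS stretch that loses 5% a day breaches the 2SD cone immediately.
breach = linear_cone_breach(is_r, np.full(21, -0.05))
```

A breach of the 2SD band early in the OOS period is the kind of "deviation from expectations" referred to above.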

So what I would do in this instance is paper-trade this algorithm (the longer the better; at least one month) and look at the cone plot in the tear-sheet. Writing this, I realize that it's not yet easy in research to link a backtest to a paper-trading algorithm in order to create the cone. The get_live_results() function is a first step, and you could make the link manually, but we will make this more user-friendly.

See also slides from a recent talk that ties this together: https://docs.google.com/presentation/d/1rHFHla_I6teK5A-c8jglRiRR9atnGycdrxUYUqjKBq8/pub?start=false&loop=false&delayms=3000&slide=id.gcc36f9863_0_5

What are your thoughts on this?

Thanks Thomas,

No deep thoughts yet. It seems from an investor standpoint, it is a matter of "How much money should I put toward this, and how will I know when to add more or to pull out?" It's also a matter of expectations. For a money market fund and certainly for a bank CD, if you hear that you've lost $5 of capital, it is not a good sign. On the other hand, one sorta expects to lose when buying a lottery ticket.

One potential problem is, will investors care if a dramatic deviation is to their benefit? So do your stats capture this? Or are you just looking at deviations from expectations relative to the prior trend, regardless of direction?

I've attached another version. The leverage varies between 1 and 2 (it can be tweaked by adjusting context.leverage = 1.0). Basically, if the algo is neutral without SPY, the leverage is 1. If it is all long or short stocks, then the leverage is 2, since SPY will be short or long, to neutralize beta. This may make no sense, but I'd coded it this way originally and just let it fly.
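To make that arithmetic concrete, here is a small sketch (my own helper, not part of the algo) assuming, as in the attached version, that the stock weights sum to 1 and the SPY hedge is sized at minus the net long/short tilt:

```python
def gross_leverage(weights, ls_signs):
    # Net long/short tilt of the stock book (pct_ls in the algo).
    pct_ls = sum(s * w for s, w in zip(ls_signs, weights))
    # Gross exposure = stock book (weights sum to 1) + the SPY hedge, |pct_ls|.
    return sum(abs(w) for w in weights) + abs(pct_ls)

# Market-neutral book: the hedge vanishes and gross leverage is 1.
neutral = gross_leverage([0.5, 0.5], [1, -1])
# All-long book: the offsetting SPY short doubles gross exposure to 2.
all_long = gross_leverage([0.5, 0.5], [1, 1])
```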

Note set_commission(commission.PerTrade(cost=0.0)) in the attached algo.

Clone Algorithm
138
import numpy as np
import pandas as pd
import datetime

def initialize(context):
    
    context.eps = 1.1
    context.leverage = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    # context.qqq = sid(19920)
    context.spy = sid(8554)
    
    set_commission(commission.PerTrade(cost=0.0))
    
def before_trading_start(context,data): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(50)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
def handle_data(context, data):
    
    leverage = context.account.leverage
    
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    
    record(leverage = leverage)
            
    # a de-listing check could also be applied here, e.g.:
    # stock.security_end_date < get_datetime() + datetime.timedelta(days=5)  # de-listed ?
    # drop leveraged ETFs and stocks with no current data; rebuild the list
    # rather than calling remove() while iterating, which skips elements
    context.stocks = [stock for stock in context.stocks
                      if stock not in security_lists.leveraged_etf_list
                      and stock in data]
    
    num_secs = 0
    
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)

def trade(context,data):
    
    # drop stocks with no current data; rebuild the list rather than
    # calling remove() while iterating, which skips elements
    context.stocks = [stock for stock in context.stocks if stock in data]
    
    prices = history(30*390,'1m','price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)
    
    # skip bar if any orders are open
    for stock in context.stocks:
        if get_open_orders(stock):
            return
    
    sum_weighted_port = np.zeros(len(context.stocks))
    sum_weights = 0
    
    context.ls = {}
    
    for stock in context.stocks:
        context.ls[stock] = 0
    
    for n in range(3,31):
        (weight,weighted_port) = get_weighted_port(data,context,prices,n*390)
        sum_weighted_port += weighted_port
        sum_weights += weight
        
    for stock in context.stocks:
        context.ls[stock] = np.sign(context.ls[stock])
        
    allocation = sum_weighted_port/sum_weights
    allocation = allocation/np.sum(allocation)
        
    rebalance_portfolio(data, context, allocation)
        
def get_weighted_port(data,context,prices,n):
    
    prices = prices.tail(n).as_matrix(context.stocks)
    prices = pd.ewma(prices, span=390)
    
    b_t = np.zeros(len(context.stocks))
    
    # update portfolio
    for i, stock in enumerate(context.stocks):
        b_t[i] = abs(context.portfolio.positions[stock].amount*data[stock].price)
    
    denom = np.sum(b_t)
    # test for divide-by-zero case
    if denom > 0:
        b_t = np.divide(b_t,denom)
    else:     
        b_t = np.ones(len(context.stocks)) / len(context.stocks)

    x_tilde = np.zeros(len(context.stocks))

    b = np.zeros(len(context.stocks))
    
    # find price relative for each security
    for i,stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        price_rel = mean_price/prices[-1,i]
        if price_rel < 1.0:
            price_rel = 1.0/price_rel
            context.ls[stock] += -1
        else:
            context.ls[stock] += 1
        x_tilde[i] = price_rel
    
    ###########################
    # Inside of OLMAR (algo 2)

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod = np.dot(b_t, x_tilde)
    num = context.eps - dot_prod
    denom = (np.linalg.norm((x_tilde-x_bar)))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0 # no portfolio update
    else:     
        lam = max(0, num/denom)
    
    b = b_t + lam*(x_tilde-x_bar)

    b_norm = simplex_projection(b)
    
    weight = np.dot(b_norm,x_tilde)
    
    return (weight,weight*b_norm)

def rebalance_portfolio(data, context, desired_port):
    
    record(sum_port = np.sum(desired_port))
    
    # check for open orders      
    for stock in context.stocks:
        if get_open_orders(stock):
            return
    
    pct_ls = 0
        
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock]*desired_port[i]
        
    # scale = 1.0-0.5*abs(pct_ls)
    scale = 1.0
    
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, scale*context.leverage*context.ls[stock]*desired_port[i])
        
    # order_target_percent(context.qqq, -0.5*context.leverage*pct_ls)
    # order_target_percent(context.spy, -0.5*context.leverage*pct_ls)
    
    order_target_percent(context.spy, -context.leverage*pct_ls)
    
    record(pct_ls = pct_ls)
    
    for stock in data:
        if stock not in context.stocks + [context.spy]:
            order_target_percent(stock,0)

def simplex_projection(v, b=1):
    """Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT pmail.ntu.edu.sg
Optimization Problem: min_{w} \| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = b, w_{i} \geq 0

Input: A vector v \in R^{m}, and a scalar b > 0 (default=1)
Output: Projection vector w

:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
"""

    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p+1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho+1)])
    w = (v - theta)
    w[w<0] = 0
    return w

Thomas - is the bayesian cone with beta/APT/FF factors part of pyfolio? That looks pretty neat.

Hi Thomas,

Do you have access to any of the individuals looking to fund Quantopian algos (presumably VCs at this point)? Presumably, they have a pretty good idea of what they'd like to see before opening their wallets. In the end, that's all that really matters, otherwise we are just doing an academic exercise. Have you gotten any feedback from them on any of this?

Grant

Simon: I'm glad you like it. I think it's a much more useful model than the Bayesian cone, which does not take market correlations into account. It's possible; I will try to come up with an example.

Grant: Sure, we have great advisors and have a pretty good handle on what they want, but how to get there is the question and something we will have to solve :).

Hi Thomas,

I guess I'm still trying to understand how I go from what I've presented above to capital from one or more of your "advisors" (presumably people with money or access to money). You said "So what I would do in this instance is to paper-trade this algorithm (the longer the better, at least 1 Month) and look at the cone plot in the tear-sheet." So, then what? You say "we have great advisors and have a pretty good handle on what they want" so I need to know how I get to the point that I am able to give them what they want.

I have no problem continuing to iterate on this, but there needs to be a reasonable shot at getting some money. Have your advisors given you anything definite like "Show us A, B, C, & D, and we'll put up some money" or are you still trying to work that out?

On a more technical note, it seems that with the cone business, you are trying to determine if the algo is generating returns in a consistent fashion. However, isn't there a difference in upside versus downside? If paper trading or real-money performance is better than backtesting would suggest, then shouldn't it be treated differently than if the algo tanks (unless the game plan is to pull real money out of algos that do exceptionally well relative to their past performance, to lock in gains)?

Thomas,

Also, I would think a little harder about the greatness of your advisors. The whole Q fund/contest effort has been pretty rough, it seems. The fact that they let you launch with long-only algos, correlated to the market, unhedged, etc. would suggest that either they weren't tuned in, or they didn't know what they wanted. And it has been a year since you announced the fund concept, and you've yet to get any real capital deployed, which doesn't seem like a resounding success. And they've sunk a lot of capital into Q, without any revenue yet. At the risk of sounding like a pessimist, the data would suggest that your advisors might not be VC rock stars.

Of course, they are paying you, so you are supposed to say that they are great.

Grant

Grant: What we're looking for is actually pretty simple:
* A strategy with a reasonable backtest: no crazy drawdowns, beta-hedged, without crazy concentration risk, and long/short (more specifics can be found in Jess' webinar: https://www.youtube.com/watch?v=-VmZAlBWUko).
* A good OOS period over a few months.

I really recommend the webinar, and looking at your algorithms with pyfolio, as that surfaces certain issues that are easy to miss otherwise.

Does it have to be long/short? I've been looking into different ways to lower beta and volatility without going short; there are many ways to do that. It seems limiting to eliminate entire groups of strategies without giving individual consideration to each one.

Thanks Thomas,

Ha! Just starting listening. Guess that's what I get for posting! If only I could get paid for writing bad algos...

Grant

hahaha grant!
I would be a millionaire if that was the case :)!
best
Andrew

Spencer: A valid question. My take is that long/short is a quick way for us to trim down the uninteresting strategies, but in my opinion there can certainly be beta-neutral, long-only strategies that are still interesting for the fund.

Grant: :)

What we're looking for is actually pretty simple

Hi Thomas,

As the saying goes, the proof is in the pudding. Has anyone at Q managed to write an algo that passes muster? Or a Q user? Without 5 or so unique, viable, published examples, to take us out of the realm of dreams, with real money allocated, it is hard to tell what you are looking for. You can say anything, but until you actually put up the money with full disclosure, it is just words. At one point, I think Jess said that even 7% annual returns at 0.7 Sharpe would be o.k. So, I'd hope that you could whip something together that meets your minimum criteria, put $1M or so toward it, and get it rolling as an example that you mean business. Or are you hoping for something more spectacular?

Grant

Hi Grant,

I agree with everything you wrote and that's indeed what we're doing. We'll share more info as it becomes available.

Thomas

Here's another variant, in case somebody wants to play around with it. Looks sorta decent. --Grant

Clone Algorithm
16
import numpy as np
from scipy import optimize
import pandas as pd
import datetime

def initialize(context):
    
    context.eps = 1.005
    context.leverage = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    context.etf = symbol('QQQ')
    
    set_commission(commission.PerTrade(cost=0.0))
    
def before_trading_start(context,data): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(20)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
def handle_data(context, data):
    
    leverage = context.account.leverage
    
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    
    record(leverage = leverage)
            
    # a de-listing check could also be applied here, e.g.:
    # stock.security_end_date < get_datetime() + datetime.timedelta(days=5)  # de-listed ?
    # drop leveraged ETFs and stocks with no current data; rebuild the list
    # rather than calling remove() while iterating, which skips elements
    context.stocks = [stock for stock in context.stocks
                      if stock not in security_lists.leveraged_etf_list
                      and stock in data]
            
    num_secs = 0
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)

def get_allocation(context,data,prices):
      
    prices = pd.ewma(prices,span=390).as_matrix(context.stocks)
    
    b_t = []
    
    for stock in context.stocks:
        b_t.append(abs(context.portfolio.positions[stock].amount*data[stock].price))
         
    m = len(b_t)
    b_0 = 1.0*np.ones(m)/m
    denom = np.sum(b_t)
    
    num_secs = 0
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1

    if denom > 0 and num_secs > 15:
        b_t = np.divide(b_t,denom)
    else:     
        b_t = b_0
    
    x_tilde = []
    
    for i,stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        price_rel = mean_price/prices[-1,i]
        if price_rel < 0.995:
            price_rel = 1.0/price_rel
            context.ls[stock] += -1
        elif price_rel > 1.005:
            context.ls[stock] += 1
        else:
            price_rel = 1.0
            context.ls[stock] += 0
        x_tilde.append(price_rel)
        
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
        
    bnds = tuple(tuple(x) for x in bnds)
     
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,x_tilde) - context.eps})
    
    res = optimize.minimize(norm_squared, b_t, args=b_t, jac=norm_squared_deriv,
                            method='SLSQP', constraints=cons, bounds=bnds,
                            options={'disp': False, 'maxiter': 100,
                                     'iprint': 1, 'ftol': 1e-8})
        
    allocation = res.x
    allocation[allocation<0] = 0 
    allocation = allocation/np.sum(allocation)
    
    if res.success and (np.dot(allocation,x_tilde)-context.eps > 0):
        return (allocation,np.dot(allocation,x_tilde))
    else:
        return (b_t,1)

def trade(context,data):
    
    # find average weighted allocation over range of trailing window lengths
    
    prices = history(20*390,'1m','price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)
    a = np.zeros(len(context.stocks))
    w = 0
    
    context.ls = {}
    
    for stock in context.stocks:
        context.ls[stock] = 0
    
    for n in range(1,21):
        # accumulate in separate variables; unpacking into (a,w) would
        # overwrite the running sums on every iteration
        (alloc,weight) = get_allocation(context,data,prices.tail(n*390))
        a += weight*alloc
        w += weight
        
    for stock in context.stocks:
        context.ls[stock] = np.sign(context.ls[stock])
    
    allocation = a/w
    
    denom = np.sum(allocation)
    if denom > 0:
        allocation = allocation/np.sum(allocation)
    
    allocate(context,data,allocation)

def allocate(context, data, desired_port):
    
    # check for open orders      
    for stock in context.stocks + [context.etf]:
        if get_open_orders(stock):
            return
    
    pct_ls = 0     
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock]*desired_port[i]
        
    scale = 1.0-0.5*abs(pct_ls)
        
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, scale*context.leverage*context.ls[stock]*desired_port[i])
        
    order_target_percent(context.etf, -0.5*context.leverage*pct_ls)
    
    record(pct_ls = pct_ls)
    
    for stock in data:
        if stock not in context.stocks + [context.etf]:
            order_target_percent(stock,0)
    
def norm_squared(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
     
    return 0.5*np.dot(delta_b,delta_b.T)

def norm_squared_deriv(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
        
    return delta_b

Longer-term, it doesn't look like much. I haven't ever been able to find a useful signal in moving averages....

Clone Algorithm
2
import numpy as np
from scipy import optimize
import pandas as pd
import datetime

def initialize(context):
    
    context.eps = 1.005
    context.leverage = 1.0
    
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
    
    context.etf = symbol('QQQ')
    
    set_commission(commission.PerTrade(cost=0.0))
    
def before_trading_start(context,data): 
    
    fundamental_df = get_fundamentals(
        query(
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.company_reference.primary_exchange_id == 'NAS')
        .filter(fundamentals.valuation.market_cap != None)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(20)) 
    update_universe(fundamental_df.columns.values)
    context.stocks = [stock for stock in fundamental_df]
    
def handle_data(context, data):
    
    leverage = context.account.leverage
    
    if leverage >= 3.0:
        print "Leverage >= 3.0"
    
    record(leverage = leverage)
            
    # a de-listing check could also be applied here, e.g.:
    # stock.security_end_date < get_datetime() + datetime.timedelta(days=5)  # de-listed ?
    # drop leveraged ETFs and stocks with no current data; rebuild the list
    # rather than calling remove() while iterating, which skips elements
    context.stocks = [stock for stock in context.stocks
                      if stock not in security_lists.leveraged_etf_list
                      and stock in data]
            
    num_secs = 0
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1
            
    record(num_secs = num_secs)

def get_allocation(context,data,prices):
      
    prices = pd.ewma(prices,span=390).as_matrix(context.stocks)
    
    b_t = []
    
    for stock in context.stocks:
        b_t.append(abs(context.portfolio.positions[stock].amount*data[stock].price))
         
    m = len(b_t)
    b_0 = 1.0*np.ones(m)/m
    denom = np.sum(b_t)
    
    num_secs = 0
    for stock in data:
        if context.portfolio.positions[stock].amount != 0:
            num_secs += 1

    if denom > 0 and num_secs > 15:
        b_t = np.divide(b_t,denom)
    else:     
        b_t = b_0
    
    x_tilde = []
    
    for i,stock in enumerate(context.stocks):
        mean_price = np.mean(prices[:,i])
        price_rel = mean_price/prices[-1,i]
        if price_rel < 0.995:
            price_rel = 1.0/price_rel
            context.ls[stock] += -1
        elif price_rel > 1.005:
            context.ls[stock] += 1
        else:
            price_rel = 1.0
            context.ls[stock] += 0
        x_tilde.append(price_rel)
        
    bnds = []
    limits = [0,1]
    
    for stock in context.stocks:
        bnds.append(limits)
        
    bnds = tuple(tuple(x) for x in bnds)
     
    cons = ({'type': 'eq', 'fun': lambda x:  np.sum(x)-1.0},
            {'type': 'ineq', 'fun': lambda x:  np.dot(x,x_tilde) - context.eps})
    
    res = optimize.minimize(norm_squared, b_t, args=b_t, jac=norm_squared_deriv,
                            method='SLSQP', constraints=cons, bounds=bnds,
                            options={'disp': False, 'maxiter': 100,
                                     'iprint': 1, 'ftol': 1e-8})
        
    allocation = res.x
    allocation[allocation<0] = 0 
    allocation = allocation/np.sum(allocation)
    
    if res.success and (np.dot(allocation,x_tilde)-context.eps > 0):
        return (allocation,np.dot(allocation,x_tilde))
    else:
        return (b_t,1)

def trade(context,data):
    
    # find average weighted allocation over range of trailing window lengths
    
    prices = history(20*390,'1m','price')[context.stocks].dropna(axis=1)
    context.stocks = list(prices.columns.values)
    a = np.zeros(len(context.stocks))
    w = 0
    
    context.ls = {}
    
    for stock in context.stocks:
        context.ls[stock] = 0
    
    for n in range(1,21):
        # accumulate in separate variables; unpacking into (a,w) would
        # overwrite the running sums on every iteration
        (alloc,weight) = get_allocation(context,data,prices.tail(n*390))
        a += weight*alloc
        w += weight
        
    for stock in context.stocks:
        context.ls[stock] = np.sign(context.ls[stock])
    
    allocation = a/w
    
    denom = np.sum(allocation)
    if denom > 0:
        allocation = allocation/np.sum(allocation)
    
    allocate(context,data,allocation)

def allocate(context, data, desired_port):
    
    # check for open orders      
    for stock in context.stocks + [context.etf]:
        if get_open_orders(stock):
            return
    
    pct_ls = 0     
    for i, stock in enumerate(context.stocks):
        pct_ls += context.ls[stock]*desired_port[i]
        
    scale = 1.0-0.5*abs(pct_ls)
        
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, scale*context.leverage*context.ls[stock]*desired_port[i])
        
    order_target_percent(context.etf, -0.5*context.leverage*pct_ls)
    
    record(pct_ls = pct_ls)
    
    for stock in data:
        if stock not in context.stocks + [context.etf]:
            order_target_percent(stock,0)
    
def norm_squared(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
     
    return 0.5*np.dot(delta_b,delta_b.T)

def norm_squared_deriv(b,*args):
    
    b_t = np.asarray(args)
    delta_b = b - b_t
        
    return delta_b

Hello all. I'm back for a shot of nostalgia.

The tear sheet's introspection into a strategy's efficacy looks to be truly useful. There is one aspect, however, that looks to be missing. The Ulcer Index, or underwater measurement, seems not to capture what I consider a more important measurement: namely, the surrender, or give-back, timespan. This duration measures the point at which one finds that one's portfolio has fallen to some historic level, indicating that from that point to this you've earned exactly nothing on your money.

The underwater metric seems to measure from a peak, through a valley, then up through the high-water mark, and is useful, I suppose. But this other measurement examines the timespans between portfolio highs and the returns to those highs -- no matter what has occurred between those points. This timespan, to me, represents the strategy's actual profit-retention capability. Its duration, over the life of a backtest, is the true test of whether a human would put up with a strategy and keep it in the market. If you knew that your strategy's portfolio value only returned to its prior absolute high after a year or so, you'd say to yourself, "What the hell! Criminy, I'm back where I started. Forget this thing, I'm outta here." If the backtest did that over and over, would any trader have the stomach to keep their money in that strategy? Of course not.

What is the psychological maximum for this number? For investors who have a generational time horizon, that number might be years. For traders looking to make a buck every month or three, a six month surrender maximum would be too much.

[Red boxes are the obvious surrender timespans.]

Red boxes == dead money
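For what it's worth, the give-back timespan described above is straightforward to compute from an equity curve; here's a rough sketch (the function name and interface are my own):

```python
import numpy as np

def longest_giveback(equity):
    # Longest run of observations spent below the prior portfolio high:
    # the "surrender" timespan between a high and the return to that high,
    # regardless of the path in between.
    equity = np.asarray(equity, dtype=float)
    high = np.maximum.accumulate(equity)
    longest = current = 0
    for i in range(1, len(equity)):
        if equity[i] < high[i - 1]:   # still below the prior high-water mark
            current += 1
            longest = max(longest, current)
        else:                         # matched or made a new high; span ends
            current = 0
    return longest
```

Run on daily portfolio values, the result is the maximum number of days an investor would have sat at or below a previous high -- the number to compare against the psychological maximum discussed above.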

Hi Simon,

Yeah, I haven't been able to tweak up this strategy for a 2-year timeframe, and then get a decent long-term backtest. Doesn't really matter to me at this point. It would seem that if I have a decent 2-year backtest and then 6 months or more of out-of-sample paper trading that looks good, it might be enough, but who knows. I've submitted some algos to the contest and they are doing well so far, but they could turn out to be turds or maybe I'll win again and get rejected for whatever reason, or get funded and fall flat. All fun.

Grant

This is probably more of a rant than I'm used to participating in...buuut....

Is a ten-year backtest even feasible?...it doesn't pass my sniff test.
Is there any strategy that will EVER pass that filter...a single timeseries measurement taking into account all
market forces and just winning...sounds too Isaac-Asimov-ian to me!

I can see modeling events that happen in that ten year frame, and accounting for them with a strategy
that takes care of those events. To me that's learned risk management...pyfolio lists some of those events...

So a two-year backtest that just wins...along with a boatload of learned-risk management add-ons,
which the contest doesn't take into account...unless you happen to hit one of those events...
sounds like the way I'll go...guess I'm more on Grant's side on this than Quantopian's...
alan

Alan,

I think that's the challenge for Q. Let's say I give them a 2-year backtest that looks decent, and then it is followed by N months of out-of-sample consistent returns. It ought to get some capital, but how much? And when to back off and reduce the allocation? That's the whole crowd-source concept in my mind. At this point, my sense is that they are looking for a handful of institutional-grade uber-algos, but in the long run they need hundreds/thousands of algos from the crowd to have something unique.

Grant