regress excess returns on XLE components

Here is something I tried and it looks decent.

import numpy as np
import statsmodels.api as smapi


def initialize(context):
    set_symbol_lookup_date('2015-05-01')
    context.XLE = sid(19655)  # the ETF we hedge against
    context.stocks = symbols('XOM', 'CVX', 'SLB', 'KMI', 'EOG', 'COP', 'PXD',
                             'APC', 'OXY', 'HAL', 'WMB', 'PSX', 'VLO', 'BHI',
                             'TSO', 'SE')
    schedule_function(myfunc,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))


def handle_data(context, data):
    record(l=context.account.leverage)


def myfunc(context, data):
    # 90 days of daily prices; drop XLE so only the components remain
    prices = history(90, "1d", "price")
    prices = prices.drop([context.XLE], axis=1)

    # overlapping 5-day log returns
    ret = prices.pct_change(5)
    ret.fillna(0, inplace=True)
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0) is an alternative worth trying
    # equal-weighted mean across components, used as a synthetic XLE
    xle = np.mean(cumret, axis=1)

    # for each component, regress changes in its excess return (vs. the
    # basket mean) on changes in its own return; score = slope / SSR
    score = []
    for i, stock in enumerate(prices):  # 'stock' avoids shadowing sid()
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff, prepend=True)
        Y = np.diff(cumret[:, i] - xle)
        res = smapi.OLS(Y, X).fit()
        score.append(res.params[1] / res.ssr)

    netscore = np.sum(np.abs(score))

    # size positions proportionally to score, then short XLE against the
    # net (signed) exposure so the book is hedged
    wsum = 0
    for i, stock in enumerate(prices):
        try:
            val = 140000 * score[i] / netscore
            order_target_value(stock, val)
            wsum += val
        except Exception:
            log.info("exception ordering " + str(stock))
            continue

    order_target_value(context.XLE, -wsum)
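
In words: for each component i, the loop regresses the day-over-day change in the stock's excess return (its 5-day log return minus the equal-weighted mean across all components, which stands in for XLE) on the change in the stock's own return:

    diff(ret_i - basket_mean) = a + b * diff(ret_i) + error

Each score is the slope divided by the residual sum of squares, b / SSR, so noisy fits get shrunk. Positions are sized in proportion to score / sum(|score|), and the signed total is hedged with an offsetting position in XLE.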

It'd be nice to hear from the Q folks regarding the return. The return is well under SPY's, but with a low beta. Is that of interest for the Q Fund, or would the return need to be higher? I guess you could submit it to the contest to see how it ranks, but so far that hasn't given much feedback regarding Q Fund worthiness.

Yep, strategies like this one that attempt to arbitrage an ETF against some of its constituent stocks are something we are interested in. Researching these types of strategies has always been interesting to me because there can be an art to it -- sometimes you get more bang for your buck with a large basket of stocks vs. the ETF, and other times smaller baskets work better (simply because there is more variance between a small basket and the ETF, or because the ETF itself has fairly low volatility, so you need some "noise" in order to establish an arbitrage opportunity).

The fact that a strategy like this underperforms SPY is not an issue. In fact, I might even argue that because it achieves a 27% return vs. SPY's 43% return, but does so with a beta of only 0.19, it actually 'outperformed': if you normalize the returns by beta, e.g. 27% * (1/0.19) ≈ 142%, then the algo outperformed on a risk basis :) I recognize this is a pretty crude attempt at risk normalization, since a beta really close to zero results in "infinite" outperformance, but what I'm trying to allude to is that SPY isn't the perfect benchmark by which to compare the performance of an arbitrage-type strategy like this. I personally feel that algos like this deserve to be judged on their own, vis-à-vis other hedged, low-beta algos, rather than against SPY. Unfortunately, I've never actually discovered a benchmark index comprised of portfolios of hedged stocks.


Can I ask what the theory behind this is? I'm new and still learning. Thanks!

Another way of thinking about comparing the performance of a hedged algo like this one to a benchmark -- and SPY can actually work in this example -- is to choose a certain level of risk (i.e. annual volatility) you are willing to accept as an investor. For me, a value around 25% seems sensible. Remember, though, that annual volatility is just financial lingo for the 1-standard-deviation expected fluctuation of your algo over the course of a year. So this also means that if I were to suffer a 2-standard-deviation event, I should be willing to lose 50% over the course of a year. That actually seems like quite a bit now that I think about it, because if I lose 50% in one year, I then need to earn 100% on my remaining capital just to "get even." So, given that, I'll ratchet the risk I'm willing to take back down to 15% annual volatility (thus 30% at 2 standard deviations), which seems moderately more sensible...

Now that I've settled on a sensible level of risk, I divide 15% by my algo's annual volatility (this algo's annual volatility is 7%, so 15% / 7% ≈ 2.1), and then I do the same for my benchmark, SPY: 15% divided by SPY's annual volatility over the same period I was backtesting. The backtester doesn't currently report this for the specified benchmark (I don't think so, anyway), but I computed it offline as 11.2%, so scaling that risk up to 15% gives 15% / 11.2% ≈ 1.34. I then multiply the total return of each by its "risk parity ratio" to get the return per unit of my "risk budget":

algo: 26% * 2.1 = 55%
SPY benchmark: 42% * 1.34 = 56%

So on a "risk parity" basis this algo achieves the same return as the SPY. And now that I think about it, this is a much more rigorous approach to comparing the algos than the silly beta ratio comparison I did in the previous comment on this thread :)
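
Here's a minimal sketch of that arithmetic in Python (the 15% risk budget and the 7% / 11.2% volatilities are the figures quoted above; the helper name is just for illustration):

def risk_budgeted_return(total_return, ann_vol, risk_budget=0.15):
    # scale a period's total return to a common volatility budget
    return total_return * (risk_budget / ann_vol)

print(risk_budgeted_return(0.26, 0.07))    # algo: ~0.56
print(risk_budgeted_return(0.42, 0.112))   # SPY:  ~0.56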

FWIW, allocating capital on risk parity, i.e. on a risk-unit basis, is how many funds have approached the portfolio construction process for quite some time. Some funds took it to extremes, though: they allocated capital to low-volatility instruments like bonds in conjunction with stocks in this same manner, and "blew up" because of huge black swan events in the bond market. A bond's volatility might have been, say, 2% per year, but then the bonds sold off 50% in a month, and because much more capital had been allocated to the bonds precisely because they were low-volatility (say the fund's risk unit was 10% volatility), the brunt of the losses came from the "low-risk" bonds! Oops! I only mention this example because these poor, unsuccessful implementations of risk parity are surely what would sit at the top of the Google results if one searched for more information on the "risk parity" approach to building portfolios. But in the end, if approached sensibly, it can certainly be a sensible way to allocate capital.

Justin, I appreciate the specificity. How can this be implemented in a single algorithm? It seems to me more like a way to allocate funds among many different algorithms and strategies.

Thanks Justin,

You are basically saying:

(algo return) / (algo risk) = x / (risk I'm willing to take)

So, hypothetically, if I could take more risk, I might get x return (or end up bankrupt).

So, I think you are explaining that as a stand-alone strategy, the algo beginner posted above would not be attractive to the customers you have in mind for the Q Fund, correct?

But what if you stirred it into your Q Fund with a bunch of other ones with similarly low betas (which, of course, doesn't guarantee they'll be uncorrelated, since everyone could just submit tweaked variants of beginner's algo)? Would it be useful? Or is it a turd? Somehow, intuitively, I feel like 49 lines of code and a handful of securities isn't gonna cut it. But maybe I'm overthinking things.

As a general note to the Q team: if you want the backtester and this forum with posted backtests to be useful for writing algos for the Q Fund, it should be a lot easier to understand, in a general sense, whether a given result might make the cut (assuming no explicit or hidden bias). In this case, I suppose beginner could submit to the contest, wait for his backtest ranking, and then post it here, and then we could try to sort out how the ranking relates to the Q Fund worthiness of his algo. Why not just embed some sort of automated feedback into the system? Even thumbs up / sorta / thumbs down feedback would help.

The backtest score for this algo is 65.2. But I agree with you, Grant: I would never trade my own money on this strategy.

The reason I wouldn't trade my own money on it is that 2 years of backtesting is not enough. I once contacted a prop firm in Hong Kong and they asked me for 7 years of backtest results, so I guess the last 2 years (especially when markets are doing so well) is not sufficient to judge an algorithm.

In-sample it has very nice results in terms of beta, drawdown, and Sharpe, but out of sample (from May 2012) I got a beta of 2.5 and a 65% drawdown.
Have you noticed that your leverage is ~2.8x most of the time? Is it realistic to have such leverage in real IB trading? I wonder why that is and how to reduce it to 1.

Hi Pravin,
I cloned your original algo above and made a 1-line revision to how position sizes are ordered, to ensure leverage never goes above ~1.0. As well, I removed a couple of the tickers that don't have longer histories, so I could run a backtest all the way back to 2003. I've attached it here. I think the risk-return profile of this algo is pretty decent: low annual volatility, almost 1.0 Sharpe, extremely low beta. One minor issue is the ~2-year drawdown through 2008, but it's somewhat manageable at around -16%.

import numpy as np
import statsmodels.api as smapi


def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.XLE = sid(19655)  # the ETF we hedge against
    context.stocks = symbols('XLE', 'APC', 'EOG', 'PXD', 'OXY', 'TSO', 'WMB',
                             'SLB', 'XOM', 'VLO', 'HAL', 'CVX', 'COP')
    schedule_function(myfunc,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))


def handle_data(context, data):
    record(l=context.account.leverage)


def myfunc(context, data):
    # 90 days of daily prices; drop XLE so only the components remain
    prices = history(90, "1d", "price")
    prices = prices.drop([context.XLE], axis=1)

    # overlapping 5-day log returns
    ret = prices.pct_change(5)
    ret.fillna(0, inplace=True)
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0) is an alternative worth trying
    # equal-weighted mean across components, used as a synthetic XLE
    xle = np.mean(cumret, axis=1)

    # for each component, regress changes in its excess return (vs. the
    # basket mean) on changes in its own return; score = slope / SSR
    score = []
    for i, stock in enumerate(prices):  # 'stock' avoids shadowing sid()
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff, prepend=True)
        Y = np.diff(cumret[:, i] - xle)
        res = smapi.OLS(Y, X).fit()
        score.append(res.params[1] / res.ssr)

    netscore = np.sum(np.abs(score))

    # size positions proportionally to score, then short XLE against the
    # net (signed) exposure so the book is hedged
    wsum = 0
    for i, stock in enumerate(prices):
        try:
            # original: val = 140000 * score[i] / netscore
            # 1-line revision: budget half the portfolio value to the stock
            # basket (the XLE hedge takes the other half), keeping leverage ~1.0
            val = (0.5 * context.portfolio.portfolio_value) * score[i] / netscore
            order_target_value(stock, val)
            wsum += val
        except Exception:
            log.info("exception ordering " + str(stock))
            continue

    order_target_value(context.XLE, -wsum)

Pravin,
And here is the pyfolio tearsheet for the algo I shared above.


It's pretty hard to judge algos that have hard-coded lists of symbols like this. A version that uses the top 10 stocks by current market cap (SharesOutstanding * Price in Pipeline) belonging to the Energy sector would be a better test.
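
For anyone who wants to try that, here's an untested sketch of the selection using Pipeline. Assumptions to verify: the Morningstar sector code used for Energy (309 below) and the import paths; the pipeline name 'energy_top10' is arbitrary.

from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.data.builtin import USEquityPricing

def initialize(context):
    # market cap = SharesOutstanding * Price, per the suggestion above
    mcap = (morningstar.valuation.shares_outstanding.latest *
            USEquityPricing.close.latest)
    # assumption: Morningstar sector code 309 = Energy
    is_energy = morningstar.asset_classification.morningstar_sector_code.latest.eq(309)
    pipe = Pipeline(
        columns={'mcap': mcap},
        screen=mcap.top(10, mask=is_energy),  # top 10 energy names by cap
    )
    attach_pipeline(pipe, 'energy_top10')

def before_trading_start(context, data):
    # refresh the tradable universe each day from the pipeline output
    context.stocks = pipeline_output('energy_top10').index.tolist()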