Inverse volatility weighting for traditional stock/bond portfolios

Here is my implementation of so-called naive risk parity, where the portfolio weights are set to the inverse of each asset's volatility. This is considered naive because (well, among other things) it does not take correlations between assets into account.

Note that the CAGR is only about 4%, while the Sharpe ratio is very high; both follow from the approach generally favouring bonds.
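The weighting rule itself is tiny; here is a standalone sketch of it (the volatility numbers are made up for illustration, not taken from the backtest):

```python
import numpy as np

# Hypothetical annualized volatilities: a stock ETF (~16%) and a bond ETF (~5%)
vols = np.array([0.16, 0.05])

inv = 1.0 / vols
weights = inv / inv.sum()  # normalize so the weights sum to 1

# The low-volatility asset (bonds) dominates the allocation
print(weights)  # → approximately [0.24, 0.76]
```

This is why the portfolio ends up bond-heavy: the less volatile asset always gets the larger weight.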

from math import log as ln, sqrt
from numpy import array
import numpy as np

def initialize(context):
    context.secs = [sid(8554), sid(25485)]
    context.returns = []

@batch_transform(refresh_period=0, window_length=21)
def get_volatility(datapanel, sid):
    if datapanel['price'] is None:
        return None
    prices_df = datapanel['price']
    prices = prices_df[sid]
    # daily log returns over the trailing window
    logreturns = array([ln(y) - ln(x) for x, y in zip(prices[:-1], prices[1:])])
    # annualize assuming 252 trading days
    return np.std(logreturns) * sqrt(252)

def reweight(context, data, wt):
    liquidity = context.portfolio.positions_value + context.portfolio.cash
    for sec in wt.keys():
        target = liquidity * wt[sec] / data[sec].price
        current = context.portfolio.positions[sec].amount
        #log.info("%s ordering %d" % (sec, target - current))
        order(sec, target - current)

def handle_data(context, data):
    wt = {}
    totalwt = 0
    for sec in context.secs:
        vol = get_volatility(data, sec)
        if vol is None:
            return
        wt[sec] = 1 / vol
        totalwt += 1 / vol
    for sec in wt.keys():
        wt[sec] /= totalwt
        #log.info("%s = %.2f" % (sec, wt[sec] * 100))

    reweight(context, data, wt)

So, I'm wondering if this is a bug or a feature (i.e., I'm doing it wrong):

It seems that @batch_transform functions are not handled properly when the arguments change. For example, in the source for this backtest, if one changes refresh_period to 1, get_volatility is only called once per day (rather than what I'd expect: twice per day, once per security).

Also, specifying window_length seems to give a window of size window_length-1

Hi John,

The batch will be called once every refresh period. We combine all the events for all the sids into a pandas data panel (one frame for prices, another for volume), so that you can operate on all the data in one structure. I would suggest removing the second parameter to get_volatility and calculating the volatility across the whole dataframe. The reason you are only seeing one call instead of two is that batch_transform-decorated methods expect to be called just once per handle_data; calling them multiple times corrupts our book-keeping for the moving window. You are not the first person bitten by this API, so we are looking into modifications to make it clearer.

I'm looking into the window_length, you may have found a bug.

thanks for all the feedback,
fawce


Thanks Fawce. I think I understand what you're saying and have written it up; it's attached, so let me know if that's consistent. Also, the confusion may arise mainly from the minmax() batch_transform example in your API docs.

from math import sqrt
from numpy import array
import numpy as np

def initialize(context):
    context.secs = [sid(8554), sid(25485)]
    set_commission(commission.PerShare(cost=.005))

@batch_transform(refresh_period=30, window_length=21)
def get_volatility(datapanel):
    if datapanel['price'] is None:
        return None
    prices_df = datapanel['price']
    vol = {}
    for sid in prices_df.columns:
        prices = np.log(prices_df[sid])
        # daily log returns, annualized assuming 252 trading days
        logreturns = array([y - x for x, y in zip(prices[:-1], prices[1:])])
        vol[sid] = np.std(logreturns) * sqrt(252)
    return vol

def reweight(context, data, wt, min_pct_diff=0.1):
    liquidity = context.portfolio.positions_value + context.portfolio.cash
    orders = {}
    pct_diff = 0
    for sec in wt.keys():
        target = liquidity * wt[sec] / data[sec].price
        current = context.portfolio.positions[sec].amount
        #log.info("%s ordering %d" % (sec, target - current))
        orders[sec] = target - current
        # abs() so sells count toward the drift as well as buys; with signed
        # values, opposite-signed orders can cancel and the trade never fires
        pct_diff += abs(orders[sec] / target) * wt[sec]
    if pct_diff > min_pct_diff:
        for sec in orders.keys():
            order(sec, orders[sec])

def handle_data(context, data):
    wt = {}
    totalwt = 0
    voldict = get_volatility(data)
    if voldict is None:
        return
    for sec in context.secs:
        vol = voldict[sec]
        # inverse-variance weights this time, rather than inverse volatility
        wt[sec] = 1 / vol ** 2
        totalwt += 1 / vol ** 2
    for sec in wt.keys():
        wt[sec] /= totalwt
        #log.info("%s = %.2f" % (sec, wt[sec] * 100))

    reweight(context, data, wt, 0.1)
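The min_pct_diff check in the listing is meant to skip rebalances whose weighted drift is below a threshold (keeping commissions down). The same idea can be tested in isolation; this is my own sketch, not Quantopian API, and the helper name and dict layout are assumptions:

```python
def needs_rebalance(current, target, min_pct_diff=0.1):
    """Return True if the weighted drift between current and target share
    counts exceeds the threshold. current/target map sid -> shares; each
    position's weight is approximated by its share of total target shares."""
    total = sum(abs(t) for t in target.values())
    if total == 0:
        return False
    drift = 0.0
    for sec, tgt in target.items():
        if tgt == 0:
            continue
        w = abs(tgt) / total
        # abs() so both buys and sells contribute to the drift
        drift += abs(tgt - current.get(sec, 0)) / abs(tgt) * w
    return drift > min_pct_diff

# No drift: hold
print(needs_rebalance({"A": 100, "B": 300}, {"A": 100, "B": 300}))  # → False
# 50% drift on the only position: trade
print(needs_rebalance({"A": 50}, {"A": 100}))  # → True
```

Weighting by share counts rather than dollar weights is a simplification here; the backtest version weights by the target portfolio weights directly.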

Also, another couple of bugs/features:

Scrolling the log gets confused for me if I use my mouse wheel. I think this has to do with Mac OS smooth-scrolling damping, which might send a rapid succession of up/down movements to damp the motion; it doesn't happen if I use Page Up/Down.

The backtest plot disappears after a few runs without any orders, for example while printing data structures to the log.

John,

Yes, that's the intended use. I would suggest doing the volatility calculation entirely with vector operations, rather than iterating through the columns. You can use the dataframe as though it were a scalar value and it will magically align everything and create a new dataframe of the volatilities. It should be quite a bit faster than iterating over everything.
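A hedged sketch of what that vectorized calculation might look like, assuming datapanel['price'] is a DataFrame of daily closes with one column per sid (the prices below are toy values):

```python
import numpy as np
import pandas as pd

# Toy daily price frame: one column per sid (values made up for illustration)
prices = pd.DataFrame({
    "SPY": [100.0, 101.0, 100.5, 102.0, 101.5],
    "TLT": [90.0, 90.2, 90.1, 90.4, 90.3],
})

# Log returns for every column at once, then annualized std per column
logret = np.log(prices).diff().dropna()
vol = logret.std(ddof=0) * np.sqrt(252)  # ddof=0 matches np.std's default

print(vol)  # a Series of annualized volatilities indexed by sid
```

Note that pandas' std defaults to ddof=1, so ddof=0 is passed explicitly to reproduce what the np.std loop computes.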

thanks,
fawce