Optimal Portfolio Choice - Quick Intertemporal Hedging Backtest

An optimal portfolio in the sense of Merton, computed using the tools of Malliavin Calculus in an economy where the short rate follows a Vasicek model. The optimal portfolio is split into a mean-variance component and an intertemporal hedging demand.

import pandas as pd
import numpy as np
import numpy.linalg as la
import math
import scipy.stats as sp

def rename_col(df):
    df = df.rename(columns={'Value': 'price'})
    df = df.fillna(method='ffill')
    df = df[['price', 'sid']]
    return df

def initialize(context):
    # parameters
    context.riskAversion = 10
    context.nobs = 252
    context.recalibrate = 126  # re-estimate every so often (in days)
    context.leverage = 1
    context.horizon = 100000
    context.bondProxy = sid(23870)

    # set up the identifiers and data storage
    #context.tickers = ['spy', 'ief']
    #context.sids = [sid(8554), sid(23870)]
    context.tickers = ['spy']
    context.sids = [sid(8554)]
    context.data = pd.DataFrame({k: pd.Series() for k in context.tickers})
    context.daysToRecalibration = 0
    context.onevec = np.asmatrix(np.ones((len(context.tickers), 1)))
    fetch_csv('http://www.quandl.com/api/v1/datasets/FRED/DTB6.csv?&trim_start=1958-12-09&trim_end=2013-12-05&sort_order=desc',
              date_column='Date',
              symbol='treasury',
              post_func=rename_col,
              usecols=['Value'],
              date_format='%Y-%m-%d')
    context.treas = []
    context.t = 0

def handle_data(context, data):
    if context.portfolio.starting_cash == context.portfolio.cash:
        # buy into the benchmark while we build the starting data set
        order(sid(8554), math.floor(context.portfolio.starting_cash/data[sid(8554)].close_price))

    if len(context.data.index) < context.nobs:
        # still recording data
        newRow = pd.DataFrame({k: float(data[e].returns()) for k, e in zip(context.tickers, context.sids)}, index=[0])
        context.data = context.data.append(newRow, ignore_index=True)
        if data['treasury'].price <= 0:
            context.treas.append(0.00001)
        else:
            context.treas.append(data['treasury'].price/100.0)

    else:
        context.t = context.t + (1.0/252.0)
        newRow = pd.DataFrame({k: float(data[e].returns()) for k, e in zip(context.tickers, context.sids)}, index=[0])
        context.data = context.data.append(newRow, ignore_index=True)

        if data['treasury'].price <= 0:
            data['treasury'].price = 0.0001
            context.treas.append(0.00001)
        else:
            context.treas.append(data['treasury'].price/100.0)

        # keep a rolling window of the last nobs observations
        context.data = context.data[1:]
        context.treas = context.treas[1:]

        if context.portfolio.positions[sid(8554)].amount != 0:
            # data gathering time is done, get out of the benchmark
            #order(sid(8554), -1.0*context.portfolio.positions[sid(8554)].amount)
            # wait a day for the trades to clear before placing the new trades
            #return
            pass

        if context.daysToRecalibration == 0:
            context.daysToRecalibration = context.recalibrate
            # recalibrate

            # calculate the mean-variance portfolio
            precision = np.asmatrix(la.inv(context.data.cov()))
            pimv = precision*context.onevec / (context.onevec.T*precision*context.onevec)
            mu = np.asmatrix(context.data.mean()).T
            mumv = mu.T*pimv
            pimeanv = precision*(mu - context.onevec*context.treas[-1]/252.0)
            log.info(pimeanv)
            pimeanv = {e: pimeanv[i, 0] for i, e in enumerate(context.tickers)}
            delta = 1.0/252.0
            rho = 1 - (1.0/context.riskAversion)

            # calibrate the Vasicek model
            slope, intercept, r_value, p_value, std_err = sp.linregress(np.array(context.treas[1:]), np.array(context.treas[:(context.nobs - 1)]))
            kappa = -math.log(slope)/delta
            rbar = intercept/(1 - slope)
            if slope <= 1.1 and slope >= 0.9:
                # limiting case as the slope approaches 1 (see discussion below)
                sigmar = std_err/math.sqrt(delta)
                pih = rho*sigmar/2.0
            else:
                sigmar = std_err*math.sqrt((-2*math.log(slope))/(delta*(1 - math.pow(slope, 2))))
                # calculate the intertemporal hedging portfolio
                pih = rho*(sigmar/(2*kappa))*(1 - math.exp(-kappa*(context.horizon - context.t)))

            # open all positions
            startingCash = (context.portfolio.starting_cash + context.portfolio.pnl)*context.leverage
            for i, e in enumerate(context.sids):
                currentPosition = context.portfolio.positions[e].amount
                newPosition = math.floor(startingCash*(pimeanv[context.tickers[i]]/context.riskAversion + pih)/data[e].price)
                if newPosition - currentPosition < 10000:
                    order(e, newPosition - currentPosition)

            currentPosition = context.portfolio.positions[context.bondProxy].amount
            newPosition = math.floor(startingCash*(1 - pih - pimeanv['spy']/context.riskAversion)/data[context.bondProxy].price)
            if newPosition - currentPosition < 10000:
                order(context.bondProxy, newPosition - currentPosition)

            record(mean_variance=pimeanv['spy']/context.riskAversion)
            record(hedging=pih)

        else:
            context.daysToRecalibration -= 1


The idea here is to see how a portfolio performs with an intertemporal hedging demand against a Vasicek short-rate model. The decomposition is due to Merton: split the optimal portfolio into a mean-variance component and an intertemporal hedging demand. The tools of Malliavin Calculus were used to get a closed-form, time-dependent solution for the hedge.

The Vasicek Model was calibrated via regression. Feel free to ask if you need to see the algebra or have any other questions.
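As a standalone sketch of that regression calibration (outside the Quantopian API, using simulated data with known parameters), it might look like the following. One caveat: the algo feeds `linregress`'s `std_err` (which is the standard error of the slope estimate) into the sigma formula; the textbook calibration uses the residual standard deviation instead, which is what this sketch does.

```python
import numpy as np
from scipy.stats import linregress

def calibrate_vasicek(rates, delta=1.0/252.0):
    # Fit the AR(1) regression r_{t+1} = intercept + slope*r_t + eps,
    # then map back to Vasicek parameters via slope = exp(-kappa*delta).
    fit = linregress(rates[:-1], rates[1:])
    resid = rates[1:] - (fit.intercept + fit.slope*rates[:-1])
    sd = resid.std(ddof=2)                      # residual std, not fit.stderr
    kappa = -np.log(fit.slope)/delta            # mean-reversion speed
    rbar = fit.intercept/(1.0 - fit.slope)      # long-run mean
    sigmar = sd*np.sqrt(-2.0*np.log(fit.slope)/(delta*(1.0 - fit.slope**2)))
    return kappa, rbar, sigmar

# simulated path with known parameters, to sanity-check the recovery
rng = np.random.default_rng(0)
delta, kappa0, rbar0, sigma0 = 1.0/252.0, 2.0, 0.03, 0.01
r = np.empty(5001)
r[0] = 0.02
for t in range(1, r.size):
    r[t] = r[t-1] + kappa0*(rbar0 - r[t-1])*delta + sigma0*np.sqrt(delta)*rng.standard_normal()

kappa, rbar, sigmar = calibrate_vasicek(r)
print(kappa, rbar, sigmar)  # estimates near (2.0, 0.03, 0.01); kappa is the noisiest
```

The mean-reversion speed is the hardest parameter to pin down from a few years of data, which is one reason the algo's limiting case near slope = 1 matters in practice.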

Dhruv

I couldn't find a way to show the algebra, so I uploaded a PDF to SSRN which has all the details of how the results were obtained. Feel free to reply or email me if you have any questions, or if you've seen any portfolios/constructions like this; I'm quite interested in them. I'm considering extending this to a multivariate setting, with more than an interest-rate hedge (a forward density (bond) hedge, a volatility hedge, etc.)

Derivation of Optimal Portfolio

Dhruv,

This is really cool. Could you start by explaining intertemporal hedging?

Thanks,
fawce

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

John,

Sure. Essentially, Modern Portfolio Theory comes down to an optimization problem: maximizing mean return over the smallest possible variance (which means it assumes a quadratic utility function, which might be a second-order approximation at best in the real world). Inherently, Modern Portfolio Theory is also a myopic single-period problem, whereas we think of most investment problems as involving longer horizons with intermediate portfolio rebalancing. The easiest solution to this was outlined by Merton in 1969, where he used dynamic programming to set up an optimal portfolio consisting of two portions: the first was the mean-variance part (which is what I called Pi(m)), and the other was a component used to hedge out fluctuations in the economy (this component, called Pi(h), is the "intertemporal hedge").

To be a bit more technical: every economy has state variables (risk-free rate, mean returns, volatilities, etc.). What the intertemporal hedge does is pick out the assets in such a way as to maximize correlation with them (this was a pretty important result by Breeden in 1979); taking an opposite position in these combinations of assets then hedges out the risk of fluctuations in the state variables. So you have a component that theoretically maximizes returns (the mean-variance part, Pi(m)) and a hedge against changes in the economy (the intertemporal hedge).

There are many ways to obtain these portfolios; some use PDEs. I used the tools of Malliavin Calculus (which basically extracted the volatility coefficient in the model). So the entire hedge component of the portfolio is for the only state variable I've allowed (the interest rate), but a true optimal portfolio would have hedges against all of them. The hedge also depends on the risk aversion (R): the higher the risk aversion, the smaller the mean-variance component (which depends on 1/R) and the larger the hedge (which depends on 1 - 1/R).
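To make the decomposition concrete, here is a minimal numerical sketch of how the two components scale with risk aversion. All numbers are hypothetical, and `hedge_unit` stands in for whatever asset mix is maximally correlated with the state variable:

```python
import numpy as np

def merton_weights(mu, Sigma, rf, gamma, hedge_unit):
    # Myopic mean-variance demand scales with 1/gamma;
    # the intertemporal hedge scales with (1 - 1/gamma).
    pi_mv = np.linalg.solve(Sigma, mu - rf) / gamma
    pi_h = (1.0 - 1.0/gamma) * hedge_unit
    return pi_mv, pi_h

# hypothetical two-asset economy
mu = np.array([0.08, 0.05])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])
rf = 0.02
hedge_unit = np.array([0.1, 0.4])   # hypothetical unit hedge portfolio

for gamma in (1.0, 2.0, 10.0):
    pi_mv, pi_h = merton_weights(mu, Sigma, rf, gamma, hedge_unit)
    print(gamma, pi_mv.round(3), pi_h.round(3))
# as gamma grows, the mean-variance demand shrinks toward zero
# while the hedging demand approaches the full unit hedge
```

Note the gamma = 1 (log utility) case: the hedging demand vanishes entirely, which is the classic result that a log investor behaves myopically.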

There are a few good tools to understand the idea; see here and here.

Let me know if you have any other questions about how I got some of the results.

Dhruv

Hey Dhruv,

I cloned your algorithm and ran it over a longer time horizon (please see the attached backtest). The performance is strong, and your strategy is relatively shielded from the fall in equity markets in 2008, while not capturing all of the upside in bull markets. Is this an intended result of the inter-temporal hedging strategy?

How are you hedging against interest rate risk using equities? How would you hedge IR risk if fixed income instruments were available on the platform? Could you describe further how you determined which assets have the highest correlation with the risk free rate?

How would you extend the model to hedge against other state variables (i.e. volatility)? What model would you use to model volatility? Heston, GARCH?

Ryan

Hey Ryan,

That's definitely the idea: to maximize terminal-date wealth while reducing intermediate shocks. The rebalancing (context.recalibrate) is once every six months, but if you increase the rebalancing frequency the portfolio performance should improve (although transaction costs would be a lot higher).

So, essentially (if you see the derivation), the entire risk-return trade-off of the market is captured by one variable (I called it Xi; it's usually referred to as the state price density). Taking the Malliavin derivative of this provided me with the combination of assets that would be needed, since that tool basically extracts the volatility coefficient in the model (let me know if you want me to be a bit more technical about how/why this worked). It provides the fraction of the portfolio in the equity which would hedge the IR risk, since the Malliavin derivative, in a very elementary sense, gives a non-linear sensitivity.
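For reference, the closed-form hedge fraction that the backtest records (`pih` in the code above) can be pulled out as a standalone function; the parameter values below are purely illustrative:

```python
import math

def intertemporal_hedge_weight(gamma, kappa, sigmar, horizon, t):
    # pi_h = (1 - 1/gamma) * (sigma_r/(2*kappa)) * (1 - exp(-kappa*(T - t))):
    # the interest-rate hedging demand in the Vasicek economy,
    # exactly the formula used in the algo's recalibration step.
    rho = 1.0 - 1.0/gamma
    return rho*(sigmar/(2.0*kappa))*(1.0 - math.exp(-kappa*(horizon - t)))

# illustrative parameters: risk aversion 10, kappa = 2, sigma_r = 1%, 30-year horizon
w = intertemporal_hedge_weight(gamma=10, kappa=2.0, sigmar=0.01, horizon=30, t=0)
print(w)  # -> 0.00225 to within rounding: 0.9 * (0.01/4) * (1 - e^{-60})
```

As kappa tends to 0 the weight tends to rho*sigmar*(T - t)/2, which is the sort of limiting behavior behind the near-slope-1 branch of the calibration code.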

There are definitely better ways of hedging the IR risk with other instruments, and that would be a multivariate extension (which might provide a better combination of assets, since everything I've done would then be multidimensional). The problem is, you might need Monte Carlo simulation to estimate some of those parameters or hedge values.

If you were to extend this to a volatility hedge, the procedure would be the same (you would need a continuous-time process though, like Heston), and essentially you'd get a component of the portfolio (say Pi(v)) which would be the hedge against volatility.

Does that help?

Dhruv

Hey Dhruv,

Thank you for the explanation. I do not have a background (yet) in the calculus of variations, and my knowledge of stochastic calculus is only basic, but I did my best to follow the derivations. Most of my dynamic programming experience has also been in discrete rather than continuous time.

Why did you choose Vasicek as a short rate model over other one-factor models like CIR or Black-Derman-Toy? Did you consider a multi-factor model like Longstaff-Schwartz? It would be great to see you implement one of these models in the algo.

You mentioned mean returns as a state variable. How would you hedge this? Expected returns are notoriously difficult to estimate, and estimates are usually more suspect than estimates for volatility and interest rates. Would you take a Black-Litterman-style approach and look at returns that are implied by a market portfolio?

Ryan

Hey Ryan,

I chose the Vasicek primarily for simplicity of the solution. I haven't tried it with the BDT, but I think the CIR doesn't yield a closed-form solution, so Monte Carlo would be needed. The Longstaff-Schwartz is a multifactor model from what I remember, so you would need two Malliavin derivatives (as a row vector, each with respect to one of the Brownian motions). I'm not sure the result would be too simple.

When I say mean returns, that usually equates to a stochastic market price of risk, which is what is modeled (that's just, in some sense, the Sharpe ratio: (mean - risk-free)/st. dev.). So it is difficult to model, but it definitely has been done; there are some examples here. You could assume the entire MPR follows a mean-reverting (or other) process and then hedge it entirely; that might be a useful algo, since we know the Vasicek has a closed-form solution.

I'm working out one or two more that hopefully have tractable solutions, but the lack of instruments is a problem, since you would need to make it multivariate, and different hedges with equities might not be so good. Also, I've never tried a Monte Carlo simulation in Quantopian (which is what Heston would need); any references I could look at?

I have been thinking about how to apply a Black-Litterman-style approach to this, or maybe an extension of the Ross Recovery Theorem, which might help imply the market price of risk for better estimation, but that's pretty cutting-edge research, so I'm not too sure. It would definitely be a great idea though.

Dhruv

Hey Dhruv,

Quantopian currently does not have a feature available for Monte Carlo simulation. It may be added in the future, but you will most likely see a parameter optimizer first. Please see https://www.quantopian.com/posts/yet-another-feature-request.

Ryan

OK, it is very long code. Let me guess what you are doing here.

You calibrate the Vasicek model via OLS and use that model to do the projection; since you have a closed form, no simulation is needed.

Then you build the portfolio by the mean-variance method: using the return rate from the Vasicek model, you get the minimum-variance portfolio combination.

After a while, you rebalance it. Am I right?

Very interesting algorithm, I am still reading through it.

However, I believe you can certainly do Monte Carlo on Quantopian. Monte Carlo is just a method of random simulation; if you can do it in Python, you can certainly do it on Quantopian.

The following uses a truncated Euler method to simulate the CIR model, which is very crude, but enough to show the MC you want. (You can ignore the order of AAPL part.)

import numpy as np
from math import sqrt

def initialize(context):
    context.r = 0.1

def handle_data(context, data):
    order(sid(24), 0)

    # Monte Carlo simulation
    record(CIR = context.r)
    a = 0.22
    b = 0.06
    c = 0.443
    temp = []
    W = np.random.normal(0, 1, 1000)
    if context.r < 0:
        rt = 0
    else:
        rt = context.r
    for i in range(1000):
        temp.append(context.r + a*(b - context.r)/365 + c*sqrt(rt/365)*W[i])
    context.r = np.mean(temp)



Xin Li,

You have it right: I calibrated the Vasicek model and used the derived formula with its parameters for the hedge. The multiple if statements are there because with zero-rate data the calibration tends to explode, so the limits were taken when the slope was near 1 to see what the hedge looked like (they're very simple limits if you use L'Hôpital's rule) and found to be a particular value (rho*sigmar/2).

You're right about the MC; I just worry about convergence etc. The method allows any specification for the model, but the Monte Carlo may take millions of iterations if multiple state variables (such as interest rate, volatility, and market price of risk) need to be calibrated, and since those iterations constantly change, it may be very computationally intensive as well.
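One way to ease the computational worry is to vectorize across paths. Here is a rough sketch of the same truncated (full-truncation) Euler scheme from the snippet above, simulating whole CIR paths with one NumPy call per time step instead of a Python loop per simulation; the parameter values are the ones from Xin Li's example:

```python
import numpy as np

def cir_paths(r0, a, b, c, dt, n_steps, n_paths, seed=0):
    # Full-truncation Euler for the CIR short rate,
    # dr = a*(b - r)*dt + c*sqrt(r)*dW,
    # with the rate floored at zero inside each step before the sqrt.
    rng = np.random.default_rng(seed)
    r = np.full(n_paths, float(r0))
    for _ in range(n_steps):
        rp = np.maximum(r, 0.0)                       # truncate before drift/diffusion
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        r = r + a*(b - rp)*dt + c*np.sqrt(rp)*dW
    return r

# hypothetical run: one year of daily steps, 100k paths
terminal = cir_paths(r0=0.1, a=0.22, b=0.06, c=0.443,
                     dt=1/365, n_steps=365, n_paths=100_000)
print(terminal.mean(), terminal.std())  # terminal-rate mean and dispersion across paths
```

Note that Xin Li's snippet averages the 1000 draws after every single step (collapsing the randomness each day), whereas this version keeps the full path distribution, which is what you would want for pricing or for estimating hedge values.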

I could try the Heston with a single asset, but I'm pretty sure the multi-variable case will provide far better portfolio results.

Dhruv