Discussion of blog post on Bayesian Correlation Estimation

I really like the idea of using Bayesian Correlation Estimation as presented in the blog post. However, it seems that the algorithms discussed in the docs can only extract the covariance. To obtain the correlation I need to divide the covariance by the standard deviations, which are most likely also time-dependent. It seems quite clear that the last graphs on the correlations between the S&P 500 and Amazon have done something like that. Is there some example code on how to do it properly? Or am I missing something here?
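For the static case, the division Fred describes is a one-liner; a minimal NumPy sketch on toy data (not the data from the post) makes the normalization explicit:

```python
import numpy as np

# Toy returns for two assets; purely illustrative, not from the blog post.
rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 2))

cov = np.cov(returns, rowvar=False)      # 2x2 covariance matrix
std = np.sqrt(np.diag(cov))              # per-asset standard deviations
corr = cov / np.outer(std, std)          # correlation = cov_ij / (sigma_i * sigma_j)

# The diagonal of a correlation matrix is exactly 1 after this division.
print(np.round(np.diag(corr), 10))
```

For the time-dependent case the same division would have to be applied per time-point, with a time-varying estimate of each sigma, which is exactly the question.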

16 responses

Fred, great points. It's possible that the last plot did not use a time-varying estimate of the standard deviation; I'll have to check. In any case, we agree that's what should be done. I think a better model would use a multivariate normal random-walk process that models the whole covariance matrix. This could then also handle more than 2 stocks. In fact, that likelihood is now in PyMC3 3.1: https://github.com/pymc-devs/pymc3/blob/3.1/pymc3/distributions/timeseries.py#L164. Do let me know if you want to experiment with that; it's definitely an area where significant contributions can be made.


Hi Thomas,

thanks a lot for the information. I am definitely interested in experimenting with that, as it could make for a very clean way to estimate the correlations, which seem to be at the heart of everything.

I will try to put together a minimal example, and I hope to find the time, but I have a full-time job and these things aren't exactly trivial...

Hi,

here you can find a version that builds exclusively on already-existing code snippets. I think it should do roughly the right thing, even if it is not the cleanest of all models.

I will have to dive into the MvGaussianRandomWalk. However, I am not sure yet what the Mv should model. I can only imagine an implementation where each process corresponds to a different entry in the correlation matrix, which would be three processes for two stocks.
On the other hand, it would be much clearer if it could directly describe the correlation matrix.

@Thomas, if you have input it would be of great help. Otherwise, it might take some time.

Hi Fred,

Wow, this looks like a great start. You could estimate the stoch vol and the correlation in the same model, I think. But yeah, the best path would be to use the MvGaussianRandomWalk instead. It should just model the joint returns distribution. So if you have 3 strategies over 100 days, you would model this with a 3x100 MvGaussianRandomWalk (or maybe sub-sample if this is too brittle), which provides you with the covariance matrix changing over time. This covariance matrix would be passed to an MvNormal likelihood, with the observed data being the 3x100 array of the returns. Does that make sense?
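To make the shapes in Thomas's description concrete, here is a NumPy-only forward simulation of the data layout (3 assets over 100 days); it is not the PyMC3 model itself, and the covariance values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_assets = 100, 3

# Illustrative (static) covariance for the daily steps; in the actual model
# this is what would be inferred and allowed to change over time.
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])

# A multivariate Gaussian random walk: cumulative sum of correlated steps.
steps = rng.multivariate_normal(np.zeros(n_assets), cov, size=n_days)
walk = np.cumsum(steps, axis=0)

print(walk.shape)  # one row per day, one column per asset
```

The observed returns array in the model would have this same (days, assets) shape, with the MvNormal likelihood evaluated row by row.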

Ok,

so I started to look into the MvGaussianRandomWalk. I tried to install the dev branch via:

pip install git+https://github.com/pymc-devs/pymc3@3.1


As I run

import pymc3


I obtain the following weird error:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-2-94d4d415559e> in <module>()
----> 1 import pymc3 as pm

retDevEnv/lib/python3.5/site-packages/pymc3/__init__.py in <module>()
3 from .blocking import *
4 from .distributions import *
----> 5 from .external import *
6 from .math import logsumexp, logit, invlogit
7 from .model import *

ImportError: No module named 'pymc3.external'


What went wrong here? And if you would like to move this part of the discussion to GitHub, I am happy to.

Let's move it to GitHub.

Ok,

I have now started working with the MvGaussianRandomWalk and I am quite excited, but I also have some problems, as you can see here:

• You can directly test the random walk on the price; no need to go to the returns. It might be less exact, but you can understand the concept of the models more visually.
• Looking at the prices directly, you also get the feeling that the normal distribution is very unlikely to describe the trajectory that we observed.
• The MvRandomWalk for the sign throws a weird AssertionError. I am honestly not sure how to attack it and would have to play around a lot, so if you understand what it is trying to tell me, that would be quite awesome.

That's really cool!

You can directly test the random walk on the price; no need to go to the returns. It might be less exact, but you can understand the concept of the models more visually.

That's fine, but you need to also model the intercept in that case, which also follows a random walk. That is not required for returns.

Looking at the prices directly, you also get the feeling that the normal distribution is very unlikely to describe the trajectory that we observed.

Yeah, fine for viz and for building an intuition, but the final model needs to be for returns.

The MvRandomWalk for the sign throws a weird AssertionError. I am honestly not sure how to attack it and would have to play around a lot, so if you understand what it is trying to tell me, that would be quite awesome.

The reason is that you pass in a 3D covariance (a time-changing 2D covariance matrix) with a 1D mu; this would need to be 2D, one mu-vector per time-point. You probably also need to specify the shape in that case. That, however, might be problematic in PyMC3, as higher-dimensional multivariate distributions are not well supported (https://github.com/pymc-devs/pymc3/issues/535#issuecomment-261582945). It's worth a try though. An alternative would be to loop and create a single MvNormal per time-point. As that would be excessive, I would bin time to have one parameter setting (and one MvNormal) per e.g. 50 data-points, similar to what is done in the blog post.
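The binning idea can be sketched shape-wise in plain NumPy: split the return series into blocks of 50 observations and fit one covariance per block. Toy data; the window size and names are illustrative, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(size=(1000, 2))   # toy returns for two assets
window = 50

# Reshape into (n_bins, window, n_assets) blocks: 1000 / 50 = 20 bins.
bins = returns.reshape(-1, window, 2)

# One 2x2 covariance estimate per bin -> a piecewise-constant cov over time.
covs = np.array([np.cov(b, rowvar=False) for b in bins])

print(covs.shape)  # (n_bins, 2, 2)
```

In the model, each bin would get its own parameter setting and one MvNormal likelihood, rather than one per time-point.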

Hi,

I went through the code again and I think the problem actually happens well before the definition of the mu. At some point I have to run:

    s = pm.MvGaussianRandomWalk('s', cov=sigma_diag, shape=(1000, 2))
    s_diag = pm.Deterministic('s_mat', T.nlinalg.diag(pm.math.exp(-2*s)))
    nu = pm.Uniform('nu', 0, 5)
    C_triu = pm.LKJCorr('C_triu', nu, 2)
    C = pm.Deterministic('C', T.fill_diagonal(C_triu[np.zeros((2, 2), dtype=np.int64)], 1.))
    cov = pm.Deterministic('cov', T.nlinalg.matrix_dot(s_diag, C, s_diag))


The shape of s is (2, 2), but the shape of s_diag is (2,). This also means that cov then has the shape (). This is all very different from the case of the time-independent standard deviation, and I assume that Theano has problems converting a (2, 1000) matrix to the diagonal...
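The shape surprise here comes from diag() being overloaded: on a 1D input it builds a diagonal matrix, but on a 2D input it extracts the diagonal vector. So with s being two-dimensional, diag(exp(-2*s)) cannot produce one 2x2 matrix per time-point. NumPy shown below; Theano's nlinalg.diag behaves the same way on these shapes:

```python
import numpy as np

v = np.ones(2)          # 1D: a vector of standard deviations
m = np.ones((1000, 2))  # 2D: a (time, asset) array, as in the model

# diag() on a vector BUILDS a diagonal matrix...
print(np.diag(v).shape)   # (2, 2)

# ...but diag() on a matrix EXTRACTS its diagonal, of length min(1000, 2).
print(np.diag(m).shape)   # (2,)
```

This is why s_diag collapses to shape (2,) and the subsequent matrix_dot produces a scalar instead of a stack of 2x2 covariance matrices.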

The reason is that you pass in a 3D covariance (a time-changing 2D covariance matrix) with a 1D mu; this would need to be 2D, one mu-vector per time-point. You probably also need to specify the shape in that case. That, however, might be problematic in PyMC3, as higher-dimensional multivariate distributions are not well supported (https://github.com/pymc-devs/pymc3/issues/535#issuecomment-261582945). It's worth a try though. An alternative would be to loop and create a single MvNormal per time-point. As that would be excessive, I would bin time to have one parameter setting (and one MvNormal) per e.g. 50 data-points, similar to what is done in the blog post.

So it seems very likely that we have to loop through the time-points. I am, however, not sure how to keep the memory of the values at the last point in time; that seems like the very nice feature of the GaussianRandomWalk. Most likely this would have to happen through the prior of the next time-step?

I think it should work differently.

The covariance matrix follows an MvRandomWalk. For the initial, first covariance matrix, we need to set a prior. That's where we could use the LKJ. The result of that MvRandomWalk we'll pass into the observed MvNormal. Does that make sense?
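Whatever form the random walk takes, the likelihood ultimately needs a stack of per-time-point covariance matrices. One way to build such a (T, 2, 2) array, shown here in NumPy with made-up values (a time-varying volatility path combined with a static correlation matrix, so a simplification of what is discussed above, not the model itself), is:

```python
import numpy as np

rng = np.random.default_rng(3)
T_len, k = 1000, 2

# Hypothetical time-varying standard deviations (e.g. from a random walk
# on the log-volatility) and a static correlation matrix (e.g. from LKJ).
sigma = np.exp(rng.normal(scale=0.1, size=(T_len, k)))
C = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# cov_t = diag(sigma_t) @ C @ diag(sigma_t), built for every t at once:
# cov[t, i, j] = sigma[t, i] * C[i, j] * sigma[t, j]
cov = np.einsum('ti,ij,tj->tij', sigma, C, sigma)

print(cov.shape)  # (1000, 2, 2)
```

This avoids diag() entirely and makes the intended three-dimensional shape of the covariance explicit.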

Hi Thomas,

I'm hitting a similar issue here: having implemented what you mentioned, I get the error that the MvNormal doesn't accept a 3-dimensional covariance matrix (it complains that cov needs to be dimension 2). Is there a way to get around this?

@Mark: Are you up-to-date on pymc3?

For the last few days I have been trying to implement this based on the thread, but I am having trouble following the shape parameter. Is there some documentation on the shape parameter? Or did anyone get this to work?

If the time series itself has autocorrelation, do you know what model should be used to account for that?

@vivek: I actually ended up using bootstrapping in the end. The results were super similar, the code ran substantially faster, and it was more flexible and much simpler to implement, as you just have to roll the dice. It also felt a bit more intuitive, as I was not assuming that many models. (Sorry for that statement, @Thomas.) If there is interest, I would be happy to look for the notebook again and upload it...
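The bootstrap Fred mentions can be sketched in a few lines: resample return pairs with replacement and recompute the correlation each time to get an uncertainty estimate. Toy data and parameters below, not Fred's actual notebook:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Toy correlated returns; purely illustrative.
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)                 # resample with replacement
    boot.append(np.corrcoef(x[idx], y[idx])[0, 1])   # correlation of the resample

# 95% bootstrap interval for the correlation.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))
```

A rolling-window version of the same idea gives a time-varying correlation estimate without specifying a generative model, which is the trade-off Fred alludes to.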