Grid-searching for profitable cointegrating portfolios?

I have built a grid-search routine in MATLAB based on Ernie Chan's basic structure: searching for cointegrating portfolios among all combinations of a set of ETFs/stocks, testing with Johansen's method, ADF, Hurst exponent, variance ratio, and so on, then running a simple Bollinger-band backtest and checking geometric return, maximum drawdown, etc.

Anyone else doing similar work? Once the portfolio size gets >4 for a set of 550 instruments, the number of combinations gets out of hand, so I'm looking into doing a clustering analysis to narrow the search beforehand, but it's not obvious what features to cluster on at first glance. I am wondering if clustering on some sort of mutual pairwise cointegration at a lower threshold of significance might help.
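For a sense of the scale involved, the number of distinct k-asset baskets from a 550-instrument universe can be tallied directly (a quick Python sketch, nothing more):

```python
from math import comb

# Number of distinct k-asset baskets from a 550-instrument universe.
for k in range(2, 7):
    print(f"{k}-asset baskets: {comb(550, k):,}")
# 5-asset baskets alone number ~412 billion (411,826,525,110),
# which is why exhaustive enumeration beyond quadruplets is infeasible.
```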

In the meantime, I'm going to move on to the execution/OMS side of things and make some progress there.

Maybe I should start a blog lol.


Interesting, glad I am not the only one thinking along these lines. My concern, though, is that there might be good portfolios of three or four assets in which no pair on its own would cointegrate (like a crack-spread triplet across three mutually uncointegrated ETNs on crude, natural gas and heating oil, or whatnot). But we must cut down these combinations somehow...

I was also considering approaching the problem from reverse, and building DAGs of potentially connected stocks by webcrawling seekingalpha or stocktwits for tickers mentioned in the same articles.

For me though, it's enough of a proof-of-concept that I am correctly discovering portfolios with excellent daily mean reversion properties prior to taking into account transaction costs - the next step is to switch focus to execution simulation etc.

Hello Simon (& Anony),

I ain't no expert (which may be an advantage in this field), but you might consider just generating random portfolios and testing each for goodness. Say you want a portfolio of 5 securities picked out of a pool of 550. To construct each portfolio, you randomly pick 5 securities and randomly weight each one. Then, apply your set of tests and roll up the results into a single goodness metric.
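A minimal sketch of that sampling loop (hypothetical names throughout; `goodness` is a stand-in for whatever battery of tests gets rolled up into one score):

```python
import random

def goodness(tickers, weights):
    """Stand-in for the real test battery (Johansen, ADF, half-life,
    drawdown, ...) rolled up into a single score. Hypothetical."""
    return random.random()

def sample_portfolios(universe, size=5, n_trials=10_000, seed=0):
    """Randomly draw weighted portfolios and keep the best-scoring one."""
    rng = random.Random(seed)
    best = (float("-inf"), None, None)
    for _ in range(n_trials):
        tickers = rng.sample(universe, size)             # 5 distinct securities
        weights = [rng.uniform(-1, 1) for _ in tickers]  # random long/short weights
        score = goodness(tickers, weights)
        if score > best[0]:
            best = (score, tickers, weights)
    return best
```

With a cheap screening score in the loop, millions of candidates can be churned through before the expensive tests are applied to the survivors.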

With MATLAB, you could look into multi-core/cluster or parallel/GPU computing (possible with Python, as well, I've seen).

Sorta brute force, but if there are a bunch of portfolios that are essentially equivalent (i.e. there actually is no super-duper global optimum), you should get something that will work in practice. A plateau of goodness versus a mountain peak.

You might also contact Quantopian's Thomas Wiecki who has a background in such computationally intensive problems.

Grant

Well, picking the weights is the easy part, but even randomly sampling, how do you know when to stop? 550 choose 5 is 411 billion possibilities. Currently I can test about 500 portfolios per second per CPU core.

I wonder if the clustering could be machine-learned: just extract a bunch of features, as Anony mentioned, and then run some sort of ML clustering algorithm.

I think my problem with searching for pairs/baskets using this brute-force search is that it is difficult to be sure that the pair/basket you found actually makes sense. I search for pairs out of > 6000 stocks and ETFs, and there are thousands of pairs that cointegrate even when I put strict requirements on the confidence level and the mean-reversion period. For example, if you backtest on a 12-year time frame you will find that Automatic Data Processing, Inc. (ADP) and Novartis AG (NVS) are cointegrated with > 95% confidence, and if you backtest the pair using a simple Bollinger band it is actually profitable. But I won't trade this pair, because it doesn't make sense that ADP and NVS would cointegrate when their businesses are completely different. With 3 or 4 securities in a basket, it gets even harder to tell whether their cointegration is just a mathematical coincidence or actually makes sense.
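The scale of that false-positive problem is easy to put numbers on: test every pair in a 6000-security universe at a 95% confidence level and, even if nothing were truly cointegrated, roughly 5% of pairs would pass by chance. A quick back-of-envelope:

```python
from math import comb

universe = 6000                       # stocks + ETFs screened
pairs = comb(universe, 2)             # 17,997,000 candidate pairs
alpha = 0.05                          # 1 - 0.95 confidence level
expected_false_positives = pairs * alpha
print(f"{pairs:,} pairs -> ~{expected_false_positives:,.0f} spurious passes expected")
```

On these numbers, an ADP/NVS-style coincidence showing up is all but guaranteed, which argues for tighter thresholds or a-priori grouping.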

How do you guys deal with this problem when you do this kind of brute-force search? Do you check whether the pair/basket actually makes sense?

I think you need much more than 95% confidence when doing vast searches, since with so many comparisons a large number of pairs will pass by chance (the multiple-testing problem). So yes, I have additional restrictions, including eyeballing them afterwards to see if they make sense. That's why it's tempting to try and go in reverse, by finding clusters of assets which are a priori related by customer/supplier relationships and then searching for mean-reverting portfolios within those clusters, but reliable data of that sort is hard to come by cheaply.

Basically I have the same problem - you need strict enough requirements that your final sample of portfolios is small enough that you can still go through them all by hand to see if they make sense, but where do you draw the line?

It is indeed hard to draw the line. Here is how I narrow down my search:
1. narrow down the stock and ETF list: there are > 8000 ETFs and stocks, and I select them based on these criteria:
a. keep securities whose daily volume is > 1M and whose price has risen less than 10x over the past 12 years. After this criterion there are 617 securities left.
b. screen out securities that are stationary on their own (such as bond ETFs). After this criterion there are 579 securities.
2. I use the following criteria to find pairs out of the 579 securities:
a. correlation > 0.9
b. Johansen test significant at > 90% confidence
c. the time series formed from the first Johansen eigenvector is stationary with > 95% confidence
d. half-life < 60 days
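For criterion (d), a common way to estimate the half-life is to fit an AR(1)/Ornstein-Uhlenbeck model to the spread by regressing its changes on its lagged level. A sketch using NumPy on synthetic data (not the poster's actual code):

```python
import numpy as np

def half_life(spread):
    """Mean-reversion half-life of a spread, in bars: fit
    delta_s_t = a + b * s_{t-1} + noise and return -ln(2)/b."""
    s = np.asarray(spread, dtype=float)
    b, a = np.polyfit(s[:-1], np.diff(s), 1)  # slope, intercept
    return -np.log(2) / b

# Synthetic AR(1) spread with phi = 0.9 (theoretical half-life ~6.6 bars)
rng = np.random.default_rng(0)
s = np.zeros(1000)
for t in range(1, 1000):
    s[t] = 0.9 * s[t - 1] + rng.normal()
print(half_life(s))  # close to the theoretical ~6.6
```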

I still get > 100 pairs and many of them don't make sense. However, if I tighten the criteria, such as raising the Johansen confidence level, some pairs that do make sense get filtered out.

Any comment or suggestion? Mind sharing your criteria of choosing pairs/baskets?

Could you explore the combinations in a greedy way?
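One concrete greedy scheme (a sketch only; `score` stands in for whatever stationarity metric is in use): keep a beam of the best baskets at each size and extend them one asset at a time, instead of enumerating everything:

```python
def greedy_extend(seed_baskets, universe, score, max_size=5, keep=50):
    """Beam search over baskets: grow each surviving basket by one
    asset per round, keeping only the `keep` best-scoring baskets.
    `score` is a placeholder for the real stationarity metric."""
    frontier = list(seed_baskets)
    for _ in range(max_size - len(frontier[0])):
        candidates = {frozenset(b | {a})
                      for b in frontier
                      for a in universe if a not in b}
        frontier = sorted(candidates, key=score, reverse=True)[:keep]
    return frontier

# e.g. seed with the best cointegrating pairs, grow toward quintuplets:
# quintuplets = greedy_extend(best_pairs, universe, stationarity_score)
```

This trades completeness for tractability: a good triplet whose constituent pairs all score poorly (the crack-spread worry above) can still be missed.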


I will have more data once this 50-day quadruplet run finishes, but it's not obvious that it would work that way. If a pair is already stationary, then adding a third leg of any kind will tend to be redundant -- this was the original reason for my step 6, because the search was returning hundreds of trivial extensions of already-cointegrated pairs or triplets.
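That kind of redundancy filter can be expressed as a subset check against the already-accepted portfolios (a sketch of the idea, not Simon's actual step 6):

```python
from itertools import combinations

def is_trivial_extension(basket, accepted):
    """True if any proper subset (pair, triplet, ...) of `basket`
    has already been accepted as cointegrated, in which case the
    extra legs are likely redundant."""
    accepted_sets = {frozenset(a) for a in accepted}
    for r in range(2, len(basket)):
        for sub in combinations(sorted(basket), r):
            if frozenset(sub) in accepted_sets:
                return True
    return False
```

Running this check before the expensive Johansen test prunes the trivial extensions cheaply.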

Likewise, I am not sure that clustering on correlation will do what one wants, in that by definition, the goal of this enterprise is to discover non-obvious stationary portfolios which aren't already over-crowded.

Note that there are various papers outlining alternative methods of finding constrained/sparse mean-reverting portfolios using LASSO and similar techniques, e.g. http://arxiv.org/pdf/0708.3048.pdf , but I have not implemented any of them.

Simon, thanks for sharing. Could you elaborate on these two criteria?

Exclude portfolios where any leg is < 1/2k by dollar value
Exclude portfolios which don't neutralize the first principal component to some tolerance
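The second criterion can be checked by projecting the portfolio's weight vector onto the first principal component of the asset returns and requiring the exposure to be near zero. A sketch of one plausible implementation on synthetic data (the actual tolerance and details aren't shown in the thread):

```python
import numpy as np

def pc1_exposure(returns, weights):
    """Normalized exposure of a weight vector to the first principal
    component of the return covariance; near zero means the portfolio
    is roughly neutral to the dominant common factor."""
    cov = np.cov(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    pc1 = eigvecs[:, -1]                     # largest-eigenvalue vector
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, pc1) / np.linalg.norm(w))

# Three assets driven by one common factor plus small noise
rng = np.random.default_rng(1)
f = rng.normal(size=500)
rets = np.column_stack([f + 0.1 * rng.normal(size=500) for _ in range(3)])
print(abs(pc1_exposure(rets, [1, -2, 1])))  # near zero: factor cancels
print(abs(pc1_exposure(rets, [1, 1, 1])))   # near one: loaded on the factor
```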

Simon, thanks for explaining. I do have a similar check for the first condition, but not for your second one. Actually, the second one makes good sense. I will check it out and see how well it works.

Anony, you are right. A pair doesn't have to make sense, but given the choice, I'd still feel more comfortable with a pair that makes sense. That could just be a personal feeling, though. There are tons of pairs that make perfect sense yet don't cointegrate, so making sense doesn't guarantee anything.

Hello Simon,

Above, you mention "I'm going to move on to the execution/OMS side of things and make some progress there." Presumably, you are considering how to handle orders in the context of Quantopian/IB? If so, I'd appreciate your insights. The whole concept is kinda murky and tricky in my mind presently, but perhaps there are standard coding structures that make it straightforward. Even the simple idea of submitting a market order and covering the downside risk with a stop order is not as easy as it sounds, if one wants to do it right. Perhaps we could initiate another discussion thread?

Grant

Well, stop orders on mean-reversion systems are particularly tricky -- they're much more straightforward on momentum-based systems.

For the time being, I am not going to target Quantopian, but try going it alone. I'll be keeping a close eye of course though! The challenges will be the same, structuring order placement to minimize transaction costs, handling partial and delayed fills correctly to avoid unhedged exposure, aggregating and disaggregating orders from multiple concurrent systems, per-system and portfolio level risk management and user overrides, etc etc. Lots to do!

Thanks...sounds like you should hire some help! Or just have a lot of time on your hands. Best wishes, Grant

Well, perhaps, we'll see - with all my own capital going into this, there's a strong motivation to understand every nook and cranny!

I'm currently developing methods for finding good pairs/baskets and encountered a lot of the same problems mentioned here. I've been mainly using the ideas in http://www.slideshare.net/nattyvirk/pairs-trading but more good ideas here to explore.

Excellent slides, good to get some validation of the challenges and solutions!

Simon, when you do your grid search, do you use "close" or "adj_close"? I found that if I use close, there are problems caused by price gaps from stock splits/mergers. But if I use "adj_close" from Yahoo Finance, it is adjusted for dividends too, and the eigenvector I get doesn't work as well when I code it into Quantopian: when you actually backtest in Quantopian (and when you actually trade), you are supposed to use historical prices adjusted for splits/mergers only, not for dividends. Do you have a data source with historical prices adjusted for splits/mergers only, or some other way to deal with the problem? Thanks.

Yeah, I am not using Yahoo data any more, for that reason. When the time comes, I will likely buy an IQFeed subscription or something like that.

Thanks, Simon. It's good to have free data though. I saw that Google Finance's historical data are adjusted for splits but not dividends. I should probably give that a try.

@Peter, Google Finance data is available on Quandl and they have a Python API, which downloads data to a pandas object.

Just saw that Quandl also has a community-generated database of more than 3,000 symbols with unadjusted prices plus splits and dividends, so you could calculate the adjusted prices yourself.
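Given unadjusted closes plus a split history, a split-only adjusted series (the kind Quantopian-style backtesting expects) can be built by walking backwards through time and dividing by the cumulative split ratio. A minimal sketch with made-up data shapes:

```python
def split_adjust(prices, splits):
    """Split-only adjusted prices (dividends deliberately ignored).
    prices: list of (date, close) in chronological order.
    splits: dict mapping date -> ratio (2.0 for a 2-for-1 split,
    with the price on that date already post-split)."""
    adjusted = []
    factor = 1.0
    for date, close in reversed(prices):       # walk backwards in time
        adjusted.append((date, close / factor))
        if date in splits:
            factor *= splits[date]             # applies to earlier dates
    return list(reversed(adjusted))

# 2-for-1 split effective on day 3: the pre-split 100s become 50s
print(split_adjust([(1, 100.0), (2, 100.0), (3, 50.0)], {3: 2.0}))
# -> [(1, 50.0), (2, 50.0), (3, 50.0)]
```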


I would caution that clustering or principal-component approaches tend to produce overfitted solutions. Nevertheless, go for it and let us know the results! Have you implemented this on Q before? If so, how's the computation time?

Been doing this as well. Like Simon I have been using MATLAB. No particular reason except I am familiar with it. Been trying to shift to Python now.

I filter the universe via the following steps -- just heuristics, no solid rationale: market cap > 1bn, remove those with short histories, group by sector according to Bloomberg classification, and run a CADF test. That's just the filtering bit. Then there is the in-sample test using various smoothing techniques -- OLS, Bollinger, Kalman filter, etc.

This was over a year ago. All the portfolios I found didn't hold up to backtesting with actual bid-ask data. I haven't revisited this project in a few months...

Check out small mean reverting portfolio

I am thinking of using clustering techniques on correlation matrices to group "similar" stocks and then running the small mean-reverting portfolio algorithm above. If anyone wants to collaborate, please let me know.

@Parvin,
I too have been looking into using clustering in this way. Because of the issues that arise from multiple comparison bias with the grid search method, I am exploring the use of clustering on the assets, then running tests for cointegration between the assets found in the same cluster. My hope is that this may remove some of the spurious relationships by only trading spreads on similar assets.
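A dependency-light way to form such groups (a sketch; serious work would more likely use hierarchical clustering on a 1 − ρ distance matrix): union-find over asset pairs whose return correlation exceeds a threshold.

```python
import numpy as np

def correlation_clusters(returns, threshold=0.8):
    """Group asset indices whose pairwise return correlation exceeds
    `threshold`, via union-find. returns: (T, N) array of returns."""
    corr = np.corrcoef(returns, rowvar=False)
    n = corr.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Assets 0 and 1 share a common factor; asset 2 is independent
rng = np.random.default_rng(2)
f = rng.normal(size=300)
rets = np.column_stack([f + 0.1 * rng.normal(size=300),
                        f + 0.1 * rng.normal(size=300),
                        rng.normal(size=300)])
print(correlation_clusters(rets))  # groups 0 with 1, leaves 2 alone
```

Cointegration tests would then run only within each cluster, cutting the number of comparisons and, hopefully, the spurious hits.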

I have noticed that when multiple tickers show up in an article on a financial site, the stocks are usually in the same sector. Editors may turn down articles that discuss unrelated companies, and the sector is the easiest way to create a relationship. Stocktwits frequently follows this rule, but not always. A poster may include several tickers from different sectors because all of the stocks are in their portfolio (which could indicate that a similar selection process was used). Stock promoters might tag a big company to promote a smaller company; there may or may not be a business relationship between the two companies, but many investors will be looking for news about the big company so this tactic can bring attention to the smaller company. Financial site editors are aware of this and may reject articles that use this promotional tactic; if tickers for a big company and a small company are in the same article in a credible publication, a potential or existing business relationship is more likely to exist.