Article: Quant Funds vs Dumb Money

Interesting article on ZeroHedge about quant strategies' returns drying up.
http://www.zerohedge.com/news/2017-08-10/we-need-more-suckers-table-quant-funds-stumble-dumb-money-disappears

Here are a couple quotes:

Returns have been decaying for a year, suggesting the rest of the
market has figured out what the robots are doing and started taking
evasive action, Berger said.

Bloomberg notes that June was the worst month on record for Berger’s
fund, as usually robust strategies lost their footing and the firm
fell 2.4 percent. The worst pain has been among quants in the
market-neutral equity space, which take long and short positions to
isolate bets on price patterns and relationships.

Also, as quant funds become more ubiquitous and everybody else flocks to ETFs, there are fewer and fewer inefficient participants to exploit:

We have a condition amongst the traditional quantitative strategies
whereby we have robots trading against robots. Without a steady source
of 'edge providers', these 'edge demanders' are just trading money
back and forth with each other.

Is the situation as dire as the article claims? Anybody else observing the same?

5 responses

I haven't observed it, except that a hedge fund manager lost his bet against Warren Buffett on beating the SPY. It doesn't look like a market-neutral strategy will actually pay off. The fundamental premise of capitalism is that to earn more reward, you need to take more risk. An algorithm that reacts before the market does may pay off, until someone comes along with a competing algorithm that cancels out the first-mover advantage.

thanks
-kamal

worst pain has been among quants in the market-neutral equity space

Hmm, that's us.
Possible solution: In the fund, go for combined overall market neutral while letting individual algos stray from it.

Here's where I lose sleep, and the reason I don't yet have contest entries I can be confident enough in: several points that seem to matter quite a bit, intertwined and kind of tough to describe.

With a 10 million allocation in the fund, one has to use the full 10M or returns will appear to be low. If just 1 million is invested, it could be making 40% per year and still look like only 4% on the full allocation. The volume required to deploy all that capital comes with difficulties.
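The dilution point above can be made concrete with a bit of arithmetic. This is a minimal sketch with hypothetical numbers matching the ones in the post:

```python
# Illustration of how returns on partially deployed capital get diluted
# when measured against the full allocation. All numbers are hypothetical.

allocation = 10_000_000   # full fund allocation
deployed = 1_000_000      # capital the strategy actually puts to work
strategy_return = 0.40    # 40% annual return on the deployed capital

profit = deployed * strategy_return
apparent_return = profit / allocation  # return as the fund sees it

print(f"Profit: ${profit:,.0f}")
print(f"Apparent return on full allocation: {apparent_return:.0%}")  # 4%
```

The strategy itself did nothing wrong; measured against the full allocation, a 40% return on one-tenth of the capital simply reports as 4%.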

At first it might be typical to find a great source of alpha in pipeline factors while working with just 20 stocks long and 20 short, for testing efficiency. Results look good.
Then, expanding on that, running the strategy as a 1,000-stock test would take a long time.
You may think a possible timeout is the main problem, and you may be wrong.
The main problem could be that the top and bottom percentiles that worked so well are now approaching each other: they include stocks with scores very close to the mean, which don't respond so well in your strategy. That's quite a problem.
Only covered a few of the challenges so far.

Trim back to 100 stocks each long and short where the signals still perform fairly well.
To use all of the capital at 1x leverage, you need to keep 2x the short value in cash; that's how the leverage equation pans out. So that represents $50,000 in each of 100 positions per side, with the long and short leverage multipliers each set at 0.5, which is pretty common.
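The position sizing just described can be checked with a few lines. This is a sketch of the arithmetic only (gross leverage as total long plus short exposure over capital), using the numbers from the post:

```python
# Position sizing sketch: $10M at 1x gross leverage, split 0.5 long /
# 0.5 short, 100 names on each side. Numbers are from the post above.

capital = 10_000_000
long_mult = 0.5            # long leverage multiplier
short_mult = 0.5           # short leverage multiplier
n_long = n_short = 100

long_exposure = capital * long_mult      # $5M long
short_exposure = capital * short_mult    # $5M short
per_position = long_exposure / n_long    # dollar value of each position

# Gross leverage = (|long| + |short|) / capital
gross_leverage = (long_exposure + short_exposure) / capital

print(f"Per position: ${per_position:,.0f}")   # $50,000
print(f"Gross leverage: {gross_leverage}")      # 1.0
```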
At the 10M dollar level, most people do not realize how large the number of partial fills can be. I'm writing another tool, and I'm caught up in the intense allure of continuously improving it. Among the countless things it does is count partial fills, so I can see those numbers.

For example, take just the cream of the crop: the top 100 by volume in Q500US right now, ordered at market open (the best-case scenario, I think, since volumes are high) and given all day to fill, using order_target_percent with equal weights, long only, every day with default slippage. Increment a counter each time any stock has an order that is not completely filled, for every minute a partial happens ...

... and the counts over just the last 5 trading days for those highest-volume Q500US stocks are:

Buys: x partial [Edited the numbers, not sure they were right when this was written]
Sells: y partial
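The counting logic described above can be sketched in plain Python. The per-minute record format here (requested vs. filled shares) is hypothetical; on Quantopian you would inspect open orders each minute rather than receive a list like this:

```python
# Sketch of counting partial fills from per-minute fill records.
# The data format (requested, filled_so_far) pairs per minute is a
# hypothetical stand-in for inspecting open orders each minute.

def count_partial_minutes(minute_fills):
    """Count each minute in which any order is only partially filled."""
    partial_minutes = 0
    for minute in minute_fills:  # one list of (requested, filled) per minute
        if any(0 < filled < requested for requested, filled in minute):
            partial_minutes += 1
    return partial_minutes

# Hypothetical example: three minutes of fill progress on two orders
fills = [
    [(500, 200), (300, 300)],  # first order partial -> counts
    [(500, 450), (300, 300)],  # still partial -> counts
    [(500, 500), (300, 300)],  # everything filled -> doesn't count
]
print(count_partial_minutes(fills))  # 2
```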

If your strategy uses technicals in part, it means you won't get the price your code saw as desirable, at least not to the degree you would like. The rest is more of a question than a statement.
Partially filling an order by buying some shares (talkin' only long for simplicity) drives the price up to some degree, with more fills to go until the order is complete. So you may resort to limit orders. Now the leverage throughout the backtest is very low, since a lot of the orders didn't fill, and the contest wants a leverage of 1. However, being highly intelligent, or at least very determined ...

... you find a way to work around all of those things so far. You even win the contest, congratulations. And even if you didn't win, your algorithm is selected to be part of the Quantopian fund. Awesome. A milestone achieved. The next step is for it to really make lots of money in the market.

Up to now it has been a backtesting and paper trading world. Although slippage does model price impact, that world does not really see you.

Now to the article referenced above by Viridian, thanks Pravin. In the real market, what happens when you place 100 of those 50,000 dollar orders? How do other algorithms respond?

Both day traders and algorithms see the volume and price movement from your orders. Your code buys, driving the price up, and they react: seeing a higher price, they sell, for example, and unlike in backtesting or paper trading, the price comes back down a little from that, no? I could be wrong; in backtesting, the model expects the price to stay higher. What happens in real trading?

Or even further: at those volumes, if rebalancing monthly at set times, are there insiders who will be able to predict what your algo will be doing at 9:50 AM on that particular day each month and take advantage of it? Then in closing positions, there are times when almost everybody wants to close while few are opening to accept the offers, resulting in more extreme partials. That's what is usually happening when we see big drawdowns in backtesting (another of the many things I can see in the latest tool I'm writing).

It would be educational to hear from someone experiencing anything near this:

In the real market, what typically happens when you place orders with a value of 50,000+ dollars each?

And have you detected changes over the years with increased algorithmic trading?

@Blue Seahawk, that's a great point that the slippage model doesn't account for the market's reaction to our algorithms' price and volume impact. That could be a serious pitfall. If our algo places a large buy order with significant price impact, another algo might come along and try to ride the momentum, and then another might cash in on the mean reversion. And what about the HFT systems that will front-run all our orders?

If you think about it, if there were an algo that could predictively model the other participants' moves, it would have such an extreme edge. That kind of sophistication exists in other fields. It doesn't seem feasible to develop that kind of model with the Q paradigm, since we're all developing in a vacuum... But considering the amounts of capital at play, isn't it likely that we're competing against some firm that has the resources and real-time feedback data to do just that?

If it's not this way yet, it probably won't be long until our algorithms developed in a vacuum become the "dumb money" for market-feedback-aware algos to profit off of. It would seem most hedge funds are in the same boat, though.

When I thought of ways to avoid investing $10 million in a single algorithm, the portfolio strategy the main fund uses seemed like a possible approach. For example, you could create an algorithm with 10 sub-algorithms, invest $1 million in each of them, and enter that in the contest. I don't think the rules prohibit using a portfolio strategy like this, but I might be wrong. It seems like it would be much easier to develop a strategy like this with hard-coded tickers, since you can't run pipeline more than once in the same algorithm.
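One way the sub-algorithm idea could work is to have each sub-strategy emit target weights for its own slice of capital, then merge them into one set of portfolio targets. This is a hedged sketch; the tickers, weights, and the `combine` helper are all hypothetical, not a Quantopian API:

```python
# Sketch of combining sub-strategies inside one algorithm: each
# sub-strategy returns target weights for its slice of capital, and the
# combined target is the capital-fraction-weighted sum. All names and
# numbers are hypothetical.

def combine(sub_targets, sub_capital_fracs):
    """Merge per-sub-strategy target weights into one portfolio."""
    combined = {}
    for targets, frac in zip(sub_targets, sub_capital_fracs):
        for ticker, weight in targets.items():
            combined[ticker] = combined.get(ticker, 0.0) + weight * frac
    return combined

sub_a = {"AAPL": 0.5, "MSFT": 0.5}    # sub-strategy A, long only
sub_b = {"AAPL": -0.5, "XOM": 0.5}    # sub-strategy B, partly short
print(combine([sub_a, sub_b], [0.5, 0.5]))
# AAPL nets to 0.0, MSFT and XOM each get 0.25 of the portfolio
```

A side effect worth noting: overlapping positions net against each other, which can reduce turnover and transaction costs compared with running the sub-strategies in separate accounts.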

As for predictive modeling: basically, instead of scraping news articles for company-specific news, you could scrape financial sites to see which investing strategies are gaining popularity and have the algorithm adjust itself to take advantage of that. I'm thinking of something like the Hurst exponent, where the algorithm uses news sentiment to determine whether investors are currently more interested in momentum or mean reversion. If index funds are gaining popularity, the SPY ticker might show increasing sentiment, that sort of thing.
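For reference, the Hurst exponent itself is usually estimated from prices rather than sentiment. A common rough estimator fits the scaling of the standard deviation of lagged differences against the lag: a slope above 0.5 suggests trending, below 0.5 mean reversion. A minimal sketch in plain Python, tested on a synthetic random walk:

```python
# Rough Hurst exponent estimate via the variance of lagged differences.
# This is a standard estimator, not a Quantopian-specific API.

import math
import random

def hurst(series, max_lag=20):
    """Slope of log(std of lagged diffs) vs log(lag) approximates H."""
    lags = range(2, max_lag)
    tau = []
    for lag in lags:
        diffs = [series[i + lag] - series[i] for i in range(len(series) - lag)]
        mean = sum(diffs) / len(diffs)
        var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
        tau.append(math.sqrt(var))
    # Simple least-squares slope of log(tau) on log(lag)
    xs = [math.log(lag) for lag in lags]
    ys = [math.log(t) for t in tau]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic random walk: H should come out near 0.5
random.seed(1)
walk = [0.0]
for _ in range(2000):
    walk.append(walk[-1] + random.gauss(0, 1))
print(f"Hurst estimate: {hurst(walk):.2f}")  # typically near 0.5
```

Values well above or below 0.5 on real price series would be the signal to tilt toward momentum or mean-reversion logic respectively.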

@Eric -- You can probably use columns in pipeline to output signals for different strategies. I agree the portfolio strategy is a smart way to handle the capacity issue, but I believe I've read that for an allocation they'll want to evaluate the strategies independently of each other.

Combining algorithms is also what I do for my live trading. Since I can only have one algorithm connected at a time, I combine my strategies into a single algorithm in order to diversify my risk.