performance attribution at hedge fund level?

A while back, Simon Thornington posed an interesting question regarding performance attribution at the hedge fund level. The basic idea: assuming the 1337 Street Fund uses a workflow similar to the one described in A Professional Quant Equity Workflow, how can the relative performance of an individual algo (an alpha) be determined, so that its author can be compensated accordingly? Assuming the individual licensed algos are combined as a simple weighted sum of alphas, would this performance attribution technique allow one to sort out the relative performance contribution of each algo, even after the optimization step? Or is the idea instead to apply all of the optimization and risk management at the level of individual algos, and simply weight them by the capital invested in each (in which case the performance attribution is trivial)?

It seems that authors should be paid based on their alpha's contribution to the overall return of the fund, rather than on their algo's standalone return. So, if the performance attribution technique applied in the risk model works, perhaps it could also be used at the hedge fund level, to determine the relative weighting of the licensed algos in the fund?

I think this all goes out the window if ML is used to combine algos, since, as I understand it, everything gets jumbled together point-in-time, making it impossible to tell the relative contribution of each licensed algo within the combined alpha.

3 responses

@Grant, it does not matter much how the strategies are combined.

You can always know each strategy's contribution to the mix, and the bookkeeping is relatively simple. Build the 3-dimensional payoff tensor behind Σ(H∙ΔP), indexed by strategy, date, and stock. Summing over dates and stocks gives the ending result for each strategy; summing over strategies and dates gives what each stock contributed to the mix; and summing over everything gives the cumulative P&L for the whole fund. All you need to add is a strategy identifier to the transaction matrix.
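In numpy terms, this is just marginal sums over a 3-D tensor. A minimal sketch; the shapes, the (strategy, day, stock) axis ordering, and all numbers are my own illustrative assumptions, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 3 strategies, 5 trading days, 4 stocks.
# H[k, t, j] = position (shares) of stock j held by strategy k on day t;
# dP[t, j]   = price change of stock j on day t.
H = rng.integers(-10, 11, size=(3, 5, 4)).astype(float)
dP = rng.normal(0.0, 1.0, size=(5, 4))

# Elementwise payoff tensor: per-strategy, per-day, per-stock P&L.
payoff = H * dP  # dP broadcasts across the strategy axis

per_strategy = payoff.sum(axis=(1, 2))  # each strategy's contribution
per_stock = payoff.sum(axis=(0, 1))     # each stock's contribution
total = payoff.sum()                    # cumulative P&L of the whole mix

# Both marginal breakdowns reconcile with the fund total.
assert np.isclose(per_strategy.sum(), total)
assert np.isclose(per_stock.sum(), total)
```

The strategy identifier mentioned above is what turns a flat transaction log into the extra axis of this tensor.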

However, you do raise an interesting question: what would happen if your trading strategy were the major contributor to overall profits, but your take were based on the fund's average performance?

There are so many questions in all this that have been left open...

@Guy -

If each licensed algo is funded directly and the total profit is simply the sum of the individual algo profits, then there is no problem with attribution. Presumably, to first order, this is how the 1337 Street Fund will work. And if each algo is reasonably diversified, risk-managed, and scalable, then they can be run independently. Each licensed author gets a simple 10% of the profit of his algo. In theory, though, his algo and all of the others could be providing features at the individual-stock level that allow a kind of second-order fund to be created, with additional profit beyond the sum of the profits of the individual licensed algos.
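A toy sketch of that first-order arrangement; the 10% rate comes from the discussion, but the algo names, dollar figures, and the rule that a losing algo simply earns no payout are my own assumptions:

```python
# Hypothetical per-algo profits for a period, in dollars.
algo_profits = {"algo_a": 120_000.0, "algo_b": -30_000.0, "algo_c": 55_000.0}

PAYOUT_RATE = 0.10  # the simple 10%-of-profit rate from the discussion

# Assumption: authors are paid on positive profit only; a loss earns nothing.
payouts = {name: max(profit, 0.0) * PAYOUT_RATE
           for name, profit in algo_profits.items()}

# The fund total is just the sum of the parts; no attribution machinery needed.
fund_profit = sum(algo_profits.values())
```

The point is that as long as funding is side by side, the per-author arithmetic never touches the combined portfolio.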

An obvious question, then: are the licensed algos actually funded directly? Or are their profits estimated from simulation and paid out of the combined profit of all of the algos? The latter approach would seem to be the way to go.

@Grant, not necessarily. Why would you want to pay the lesser-performing strategies more than what they bring to the mix? And why shouldn't the best trading strategy of the group get its fair share?

Any trading strategy can be summarized in a single payoff matrix: Σ(H∙ΔP). As traders, we are trying to solve for H as a 2-dimensional array, even in a competition.

Q is trying to solve a 3-dimensional matrix of its own in its quest to corral the best trading strategies it can find. To date, over 4,000,000 backtests have been run on Q, and they are still looking.

Point72 is trying to solve a 4-dimensional matrix as a fund of funds of strategies.

The long-term perspective is very different at each level, yet the requirements and objectives look about the same, since every level has to answer to a single expression for the overall P&L, namely Σ(H∙ΔP). But for Point72, due to the sheer number of funds and strategies in place, the aggregate may behave more like Σ(H(P72)∙ΔP) → Σ(H(SPY)∙ΔP), i.e. converge toward the market.

Comparing strategies is easy: order them by performance, as in:

Σ(H(1)∙ΔP) > Σ(H(2)∙ΔP) > Σ(H(…)∙ΔP) > Σ(H(SPY)∙ΔP) > … > Σ(H(n)∙ΔP)
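That ordering is a one-liner once each strategy's total payoff is computed. A minimal sketch, with random holdings standing in for real strategies and an equal-weight matrix as a hypothetical stand-in for the SPY benchmark:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: random price changes, plus random holdings for
# four strategies and a passive equal-weight benchmark (all invented).
n_days, n_stocks = 252, 10
dP = rng.normal(0.0, 1.0, size=(n_days, n_stocks))

strategies = {f"H({k})": rng.normal(0.0, 5.0, size=(n_days, n_stocks))
              for k in range(1, 5)}
strategies["H(SPY)"] = np.ones((n_days, n_stocks))  # stand-in benchmark

# Total payoff per strategy, then rank best to worst.
totals = {name: float((H * dP).sum()) for name, H in strategies.items()}
ranking = sorted(totals, key=totals.get, reverse=True)
```

Whatever the array dimensions, the comparison always reduces to sorting these scalar totals.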

We, as traders, apply the same comparison to the strategies we design. It is the same equation, whether for Q, Point72, or anyone else; only the array's dimensions change.

There is no good reason to mix a high-performing strategy with a low-performing one, except if you want to affect other factors, for instance to reduce overall volatility.

But there will always be a cost associated with it; in this case, lower overall returns. Those are choices one has to make. No mix of strategies will produce more than Σ(H(1)∙ΔP), the best of the group.
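The trade-off is easy to demonstrate on two P&L streams. A minimal sketch with invented numbers, assuming a 50/50 capital split between a stronger and a weaker, decorrelated strategy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily P&L streams: one strong strategy, one weaker,
# uncorrelated one (all numbers invented for illustration).
strong = rng.normal(0.10, 1.0, size=252)
weak = rng.normal(0.02, 1.0, size=252)

# 50/50 capital split between the two.
blend = 0.5 * strong + 0.5 * weak

# The blend's total P&L lands between the two components,
# so it can never beat the best of the group...
assert min(strong.sum(), weak.sum()) <= blend.sum() <= max(strong.sum(), weak.sum())

# ...while its volatility is bounded by the average of the component
# volatilities, and is typically lower when the streams are decorrelated.
assert blend.std() <= 0.5 * (strong.std() + weak.std())
```

So mixing buys volatility reduction, and the price paid is the gap between the blend's return and Σ(H(1)∙ΔP).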

Suppose you design a trading strategy that will blow up in 3 to 5+ years. What is it worth today? That is the real question, contest or not. The answer is very simple: it is worth absolutely nothing; it should never have been used in the first place. And I have to say that a 2-year backtest cannot answer that critical question of sustainability either. So, yes, the testing period should be longer than 2 years.