Can Q share sample hedge fund performance?

We know the Q team has already developed a lot of tearsheets to compare algos and
to check whether they are a fit for the hedge fund.

I wanted to see if Q can share something like this:
a sample hedge fund built from the 10-20 top-ranking algos in the contest (of Q's choice)
as of today, to demonstrate how it would perform in backtesting and paper trading.

7 responses

The top ten scores on the leaderboard would have an average annual return of 63.0473% (see the quick check after the list below); I would say realistically 60% after shorting interest. What I find more interesting is the discrepancy between annual returns and ranking:

  1. 94.91%
  2. 37.89%
  3. 15.95%
  4. 125.00%
  5. 12.88%
  6. 3.963% <---- With any substantial amount of hedging this would be close to zero returns (but don't worry, it's consistent).
  7. 35.61%
  8. 79.35%
  9. 40.42%
  10. 184.5%
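
As a quick check, that 63.0473% figure is just the arithmetic mean of the ten returns above (plain Python, numbers copied straight from the list):

# Annual returns (%) of the current top-ten leaderboard algos, as listed above.
top_ten_returns = [94.91, 37.89, 15.95, 125.00, 12.88,
                   3.963, 35.61, 79.35, 40.42, 184.5]

average_return = sum(top_ten_returns) / len(top_ten_returns)
print("Average annual return: %.4f%%" % average_return)   # -> 63.0473%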

A 60% return would be too good to be true, but what about beta . . .

It would help if there were a tearsheet demonstrating all the parameters.

Rank  Ann. return  Correlation   Beta (SPY)
1     88.6%        0.067604918   0.034647094
2     16.9%        0.04916386    0.112116778
3     38.0%        0.082857376   0.049442053
4     120.4%       0.177039148   0.11435404
5     3.3%         0.390468171   0.07023014
6     8.3%         0.096965969   0.084948312
7     22.0%        0.075459018   0.031814751
8     17.3%        0.41094443    0.399519804
9     100.1%       0.101637159   0.179357759
10    85.7%        0.157359131   0.352099414

Average correlation = 0.1609 (this is the correlation among all contestants, not just these 10)
Average beta (SPY) = 0.1428 (here the average might be misleading)
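
For anyone wanting to reproduce numbers like these, here is a minimal sketch of how correlation and beta against SPY can be computed from daily return series, along with the beta-hedged return mentioned above. The return series below are synthetic placeholders; real inputs would come from each algo's backtest and SPY data.

import numpy as np

# Synthetic placeholder return series; substitute real daily returns
# from the algo's backtest and from SPY.
rng = np.random.default_rng(0)
spy_returns = rng.normal(0.0005, 0.01, 252)   # one year of daily SPY returns
algo_returns = 0.1 * spy_returns + rng.normal(0.001, 0.01, 252)

# Correlation between the algo and the market.
correlation = np.corrcoef(algo_returns, spy_returns)[0, 1]

# Beta = Cov(algo, SPY) / Var(SPY).
beta = np.cov(algo_returns, spy_returns)[0, 1] / np.var(spy_returns, ddof=1)

# Rough beta hedge: short beta dollars of SPY per dollar invested in the algo.
hedged_returns = algo_returns - beta * spy_returns

print("correlation = %.4f" % correlation)
print("beta (SPY)  = %.4f" % beta)
print("mean hedged daily return = %.6f" % hedged_returns.mean())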

However, a fairly good pool is available.
Q could start publishing a monthly paper-trading score for the top 10 algos to showcase the concept, e.g. by combining them into an equal-weighted sample fund as sketched below.
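
To make the original request concrete, here is a rough sketch of an equal-weighted "sample fund" built from ten algos. Everything here is hypothetical (synthetic monthly returns, equal weighting); it is not Q's actual allocation methodology, just an illustration of the concept.

import numpy as np

# Hypothetical monthly returns for 10 top-ranked algos (rows = algos,
# columns = months); real inputs would be their paper-trading records.
rng = np.random.default_rng(1)
monthly_returns = rng.normal(0.02, 0.05, size=(10, 12))

# Equal-weight the algos each month to form the sample fund.
fund_monthly = monthly_returns.mean(axis=0)

# Compound the monthly returns into an annual figure.
annual_return = (1 + fund_monthly).prod() - 1
print("Sample fund annual return: %.2f%%" % (100 * annual_return))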

In my opinion it is unfair to compare the hedge fund's potential performance against previous winners:

  1. The contest is more competitive now than it was originally (more entries).
  2. There is no reason to overfit backtests now that live trading is all that matters.
  3. Scoring is better in general (fewer ways to game the system).
  4. Q is only going to fund, with substantial money, algos that have positive live REAL-money trading performance. I would argue Simon's algo is a testament to what Q is looking for: 20 percent returns are nothing to sneeze at, so there is definitely some proof of concept there. Michael's looks promising as well.

I would also argue that six months to find one profitable algo is a bit long; to get a nice sampling of 20 would take 10 years. Obviously they are speeding up the selection of algos with the new, longer contest. Still, in my opinion there will have to be a way to pick more algos and test them with REAL-money live trading, as that is the only way to see whether they make money in the real world. The past winners are a perfect testament to this: only one of the five with decent track records was actually profitable. These are the "cream of the crop" algos as well, and some of them showed annual returns north of 20% and double-digit Sharpe ratios. Basically, I'm saying that realistically, if Q wants to reach its "Holy Grail", it is going to have to find a way to increase selection and testing (as I believe they have realized themselves, hence the added longer contest).

It is unfair because the rules have tightened significantly and the pool has increased; I believe entries have gone from roughly ~300 in the February contest to 800-plus in the July contest. The first five winners benefited from the ability to heavily overfit backtest results, since the backtest factored into scoring. I'm not blaming anyone, just stating what could easily have happened to explain the lackluster live-trading results many people have seen. Numbers don't lie: if they rolled with this basket of algos, returns would probably not be good.

Your post kind of illustrated the assumption that they would roll with the winners' algos no matter what, which I was trying to point out is unfair (sorry if that came across as hostile). I was just pointing out that Q would need to start picking algos faster, and that they will have a better time now that they are closing contest loopholes. My post wasn't meant to invalidate yours, just to share some general observations I have made so far as both a participant and a watcher of the contest; I apologize if it came across that way. To sum up: selection has become better in the last month with some of the rule changes, and using past winners to indicate future fund performance is not going to be as informative as seeing how the next few contest winners' algos perform.

How do the contest algos compare with shared algos? Are there any shared algos that are remotely competitive in the contest?

Almost all the shared algos are about beating the market,
mostly buy-good-stuff-and-wait types.

So they have higher beta and do not qualify for the contest.

# If "performance" means 1/beta, shared algos look like junk;
# if it means raw returns, they stand a chance.
if performance_metric == "1/beta":
    print("shared algos are junk")
elif performance_metric == "returns":
    print("shared algos stand a chance")

You can find some high-scoring algos ranking 110 and above for you to compare.