To me at least, it seems like this strategy is falling for the same fallacy that numerous other quant strategies fall prey to. Let's look at another example, the 'Foolish 4', a strategy pushed by the Motley Fool in the 90s. It was sold as a low-risk way to outperform the market by a significant margin. There were five steps to it:
- Take the five stocks in the DJIA with the lowest prices and highest yields
- Discard the one with the lowest price
- Put 40% of your cash in the stock with the second lowest price
- Put 20% each in the three remaining stocks
- One year later, rinse and repeat.
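The allocation steps above can be sketched in a few lines. This is a minimal illustration, assuming the five candidate stocks have already been screened from the DJIA; the tickers and prices in the usage example are placeholders, not real quotes.

```python
def foolish_four_weights(five_stocks):
    """five_stocks: list of (ticker, price) tuples for the five screened stocks.

    Returns a dict of ticker -> portfolio weight following the rule set
    described above (illustrative sketch, not investment advice).
    """
    by_price = sorted(five_stocks, key=lambda s: s[1])
    # Discard the one with the lowest price
    remaining = by_price[1:]
    # 40% in the second-lowest-priced stock of the original five
    weights = {remaining[0][0]: 0.40}
    # 20% each in the three remaining stocks
    for ticker, _ in remaining[1:]:
        weights[ticker] = 0.20
    return weights

# Hypothetical tickers and prices, purely for illustration
print(foolish_four_weights([("A", 10), ("B", 20), ("C", 30), ("D", 40), ("E", 50)]))
```

The weights sum to 1.0, with the whole portfolio rebalanced once a year per the final step.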
The fallacy here was overfitting: there was no investment logic behind the strategy, and I believe this algorithm makes the same mistake. If you look at a large enough quantity of data, a large number of patterns will emerge. The fact that a pattern existed in the past neither means it will exist in the future, nor that it was the actual cause of the price movements.
Let's look at a hypothetical. You have all the historical data, and you find that every Monday at 15:47 the market rises, so you write an algorithm to exploit this. It has a 0% drawdown because historically the market has always risen on Monday at 15:47. This looks like the 'Holy Grail' in backtests, so you take it into live trading and go bust. You then check and see that the market has gone down for the first time ever on a Monday at 15:47, and you realise that there was never any investment logic behind the strategy you built: you were guilty of overfitting.
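The data-mining trap behind that hypothetical is easy to reproduce with pure noise. The sketch below (all numbers are simulated coin flips, and the bucket counts are arbitrary assumptions for illustration) scans many time buckets of random returns: a few will look like they "always rise" in-sample, yet they remain coin flips out of sample.

```python
import random

random.seed(1)
n_buckets = 500   # e.g. (weekday, minute) combinations scanned
n_days = 8        # in-sample observations per bucket

# In-sample: coin-flip returns (+1 or -1) for every bucket
in_sample = [[random.choice([1, -1]) for _ in range(n_days)]
             for _ in range(n_buckets)]

# "Discover" buckets that rose every single day in the backtest
perfect = [i for i, rets in enumerate(in_sample) if all(r > 0 for r in rets)]
print(f"{len(perfect)} of {n_buckets} buckets were up 100% of days in-sample")

# Out of sample, those same buckets are still just coin flips
out_sample = [random.choice([1, -1]) for _ in perfect]
wins = sum(1 for r in out_sample if r > 0)
print(f"out-of-sample, those 'perfect' buckets rose {wins}/{len(perfect)} times")
```

With a 50/50 market, each bucket has a (1/2)^8 = 1/256 chance of a perfect in-sample record, so scanning 500 buckets is expected to turn up a couple of flawless "strategies" by luck alone, none of which carry any edge forward.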