Very interesting paper, but I felt the conclusion left me wanting more. On the one hand, the authors say that overfitting is unavoidable when selecting parameters from a very large universe of possibilities; on the other, they suggest that more granular data, which creates an even larger universe of possibilities, has likely already been mined by 'large quantitative funds' through their 'expertise and facilities.'
I wonder what specific expertise and facilities they are referring to.
"So when a computer program, such as ours, produces an optimal set of weights, it is selecting from an inconceivably large set of possible weighting
sets, and thus statistical overfitting of the backtest data is unavoidable."
"Any underlying actionable information that might exist in such data has long been mined by highly sophisticated computerized algorithms operated by large quantitative funds and other organizations, using much more detailed data (minute-by-minute or even millisecond-by-millisecond records of many thousands of securities), who can afford the expertise and facilities to make such analyses profitable."