Yes, the backtest was biased by adding securities that had performed well. There was no reason to think, however, that they would not continue to do well over the six months of real-money trading. In fact, during the one month of simulated trading, the algorithm did quite well. It was long-only, though, so when the market turned down, so did the strategy.
Regarding tweaking the code, I did systematically optimize one parameter (context.eps); there turned out to be a sweet spot that minimized drawdown, with a little boost in return. It was not a case of multi-parameter over-fitting. The other key parameters, such as the 20-day look-back and the 390-minute smoothing of prices, are pretty generic; I didn't spend a lot of time fiddling with those.
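For what it's worth, the kind of single-parameter sweep described above can be sketched as follows. This is a hedged illustration, not the actual backtest: `run_backtest` is a hypothetical stand-in that returns a synthetic equity curve (a real sweep would run the algorithm with each candidate `context.eps` value), and the "sweet spot" at 1.5 is invented for the example.

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

def run_backtest(eps):
    # Stand-in for a real backtest run with context.eps = eps.
    # Produces a deterministic fake equity curve whose dip depends on eps;
    # eps = 1.5 is pretended to be the sweet spot for this illustration.
    dip = abs(eps - 1.5) * 0.1
    return [100.0, 100.0 * (1.0 - dip), 105.0]

# Sweep one parameter and keep the value with the smallest drawdown.
candidates = [1.0, 1.25, 1.5, 1.75, 2.0]
results = {eps: max_drawdown(run_backtest(eps)) for eps in candidates}
best_eps = min(results, key=results.get)
```

The point is that optimizing a single parameter against one robust statistic (drawdown) is a much weaker form of curve-fitting than jointly tuning many parameters.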
The algo was stopped, per the rules, once it lost 10%. See attached for a backtest starting at the point when real money was applied.
Regarding your #4, I can't speak for Quantopian, but my sense is that they needed to start somewhere and move up the learning curve. I suspect, too, that they were initially a bit fast-and-loose with the rules, because they needed to build crowd participation in the fund concept, which had just been introduced the prior fall. Then, as now, they did not look at my code, and they did not perform any more detailed analysis of its "exhaust" to determine its suitability for funding. However, keep in mind that this was a kind of promotional contest, not an investment. My understanding is that they are following a more rigorous process for fund inclusion.
Regarding the press releases, I did not share the algo with anyone until after it was stopped by Quantopian. As I recall, the press releases came out shortly after the algo was launched with real money. Quantopian was just promoting their contest and business; I wouldn't fault them.
It is important to note that the contest has evolved dramatically since its inception (the latest change was a move from $1M to $10M in capital, per https://www.quantopian.com/posts/contest-20-rules-changes-$10m-capital-base-new-entry-required). One important difference is that winners are now determined based solely on the 6-month paper-trading performance, assuming I'm interpreting this correctly (https://www.quantopian.com/open/rules):
"We will calculate an overall rank by averaging the Participant's rank in each criterion in paper trading."
So gaming won't help, although one can get lucky with a goof-ball backtest followed by good out-of-sample performance.
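To make the quoted rule concrete, here is a minimal sketch of rank-averaging as I read it: each participant is ranked within each paper-trading criterion, and the overall score is the average of those per-criterion ranks (lower is better). The criterion names and scores below are made up for illustration; the actual criteria are whatever Quantopian specifies in the rules.

```python
def overall_ranks(scores_by_criterion):
    """scores_by_criterion: {criterion: {participant: score}}, higher score = better.
    Returns {participant: average rank across criteria} (1 = best in a criterion)."""
    totals = {}
    for scores in scores_by_criterion.values():
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, participant in enumerate(ordered, start=1):
            totals[participant] = totals.get(participant, 0.0) + rank
    n = len(scores_by_criterion)
    return {p: total / n for p, total in totals.items()}

# Invented example with two criteria and three participants:
scores = {
    "returns": {"A": 0.12, "B": 0.08, "C": 0.10},
    "sharpe":  {"A": 1.1,  "B": 1.4,  "C": 0.9},
}
# A ranks 1st in returns and 2nd in sharpe -> overall 1.5, the best average.
```

Since only out-of-sample paper trading enters the average, a flattering backtest buys you nothing at judging time, which is the point made above.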