Looking at the leader in contest 24, whose same algorithm also entered (and won) contest 23, all of the performance metrics are exactly the same. The only difference between the two contests is the rank within each contest. This suggests that the algorithm's period of performance runs from the moment it started trading, not from the start of the contest.
I had always assumed that the contests compared different algorithms' performance over that 6-month stretch. That is true, but the evaluation also seems to include time prior to the start of the contest. Doesn't this result in an apples-to-oranges contest, where algorithms' performance periods differ?
I'm just curious about the motivation for setting up the contests this way. Is it to encourage folks to keep algorithms running for a long time, which helps stabilize the performance metrics? Or is it to encourage us to constantly resubmit strategies until we hit a good 6-month stretch?