To understand the ranking step a little better, I would suggest taking a look at the Spearman Rank Correlation lecture in the Quantopian lectures section of the Learn tab. In fact, I've found it helpful to work my way through all of the lectures.
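As a quick illustration of the idea behind that lecture (using scipy and made-up numbers, not the lecture's own code): Spearman rank correlation is just the ordinary Pearson correlation computed on the ranks of the data, which makes it measure monotonic association rather than linear fit.

```python
import numpy as np
from scipy import stats

# Hypothetical factor values and subsequent returns for 6 stocks
factor = np.array([0.5, 2.0, 1.1, 3.3, 0.2, 1.8])
returns = np.array([0.01, 0.04, 0.02, 0.05, -0.01, 0.03])

# Spearman correlation of the raw series
rho, pval = stats.spearmanr(factor, returns)

# Equivalent by hand: rank both series, then take the Pearson correlation
manual_rho = np.corrcoef(stats.rankdata(factor), stats.rankdata(returns))[0, 1]

# Here returns are a perfectly monotonic function of the factor,
# so both give exactly 1.0 even though the relationship isn't linear.
print(rho, manual_rho)
```

Because the relationship only needs to be monotonic, a factor can be "good" in the Spearman sense even when a linear correlation would look weak.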
To put the answer in my own words: it is really hard to predict whether any individual stock will go up or down in the future. One of the main problems is that a stock's returns are usually heavily dependent on the movement of the whole market. Because the market is fairly efficiently priced (all known events have already been priced in), predicting a stock's direction basically requires knowing what future events will occur (i.e. a crystal ball).
However, it may be easier to ask: "well, I don't know what the returns of stock A will be, but can I predict that it has a high chance of doing better than stock B according to some factor analysis?" If the answer is yes, you can go long A and short B to make money on their relative movements. So, by ranking the returns we're really looking at which stocks will do best relative to the others and which will do worst.
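Some hypothetical numbers make the long/short point concrete: the P&L of the pair depends only on the *relative* returns, so it can be profitable even when the whole market falls.

```python
# Long A / short B: P&L per dollar is the difference of their returns.
# Hypothetical numbers: both stocks fall, but A falls less than B.
ret_a = -0.02   # stock A return
ret_b = -0.05   # stock B return

pnl = ret_a - ret_b   # +0.03: profitable despite a down market
print(pnl)
```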
Then we take it even further and say that even the specific rank of a stock is hard to predict. So we make the question even easier: "is this stock in the top half of the rankings or the bottom half?" That should be easier for an ML algorithm to predict (apparently it's still pretty hard, since our accuracy is only 53%).
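A minimal sketch of that labeling step, with hypothetical forward returns for a tiny universe: rank the returns cross-sectionally, then binarize into top half vs bottom half.

```python
import pandas as pd

# Hypothetical forward returns for a small universe of 6 stocks
fwd_returns = pd.Series(
    [0.02, -0.01, 0.05, 0.00, -0.03, 0.04],
    index=["A", "B", "C", "D", "E", "F"],
)

# Rank cross-sectionally: 1 = worst return, N = best
ranks = fwd_returns.rank()

# Binary label: 1 = top half of the rankings, 0 = bottom half
labels = (ranks > len(ranks) / 2).astype(int)
print(labels)
```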
Then, instead of predicting the category (top half or bottom half), we output the percentage chance of being in the top or bottom. Why do we do that? Because it allows us to choose 10 stocks to go long (highest chance of being in the top half) and 10 to go short (highest chance of being in the bottom half). If we only had the categories as predictions, we'd just be selecting a random basket from the top-half category and another random basket from the bottom-half category.
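Here's a minimal sketch of that selection step, assuming hypothetical model probabilities (a classifier's `predict_proba` output for the "top half" class) and a basket size of 2 rather than 10 for brevity:

```python
import pandas as pd

# Hypothetical model output: P(stock ends up in the top half)
prob_top = pd.Series(
    [0.71, 0.35, 0.55, 0.12, 0.88, 0.49, 0.60, 0.27],
    index=list("ABCDEFGH"),
)

N = 2  # basket size (10 in the text; 2 here for brevity)
longs = prob_top.nlargest(N).index.tolist()    # highest chance of top half
shorts = prob_top.nsmallest(N).index.tolist()  # highest chance of bottom half
print(longs, shorts)
```

With plain category predictions all the "top half" stocks would be interchangeable; the probabilities give an ordering to pick the most confident names from.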
As for why the input factors are ranked: well, for one, outliers. I've seen some crazy outliers in financial ratios that would really confuse an ML algorithm. It also goes back to the data being very noisy. It's actually beneficial not to tell the algorithm the exact value of the factor; it's more helpful to just tell it how strong the factor value is for that stock relative to the others. There might be periods where the PE ratios of the market are low and times where they are high, but what the algorithm needs to know is "how high is this PE ratio relative to the other stocks in the market?", because that will help it figure out how well that stock will do relative to the other stocks, which is what we're actually trying to answer here.
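A minimal sketch of that transformation with made-up PE ratios, including one absurd outlier: the cross-sectional percentile rank keeps the ordering but squashes the outlier into the same [0, 1] scale as everything else.

```python
import pandas as pd

# Hypothetical PE ratios, including one crazy outlier
pe = pd.Series(
    [8.0, 15.0, 22.0, 30.0, 4500.0],
    index=["A", "B", "C", "D", "E"],
)

# Feed the model the cross-sectional percentile rank, not the raw value:
# "how high is this PE relative to the other stocks right now?"
pe_rank = pe.rank(pct=True)
print(pe_rank)   # the 4500 outlier becomes just 1.0
```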
In the other thread you also seem focused on fitting a polynomial model to the factors. That's okay if you're dealing with a 1-D problem, but as you add more factors there's a combinatorial explosion of possible terms. In any case, you're just choosing a different ML model. I would suggest looking into the CART algorithm, random forests, gradient boosted machines, and AdaBoost. It's a bit of a rabbit hole, but worth knowing. By the way, any algorithm based on decision trees (like the ones I listed) can handle categorical factors, because a node in the tree can branch on a category as well as on a threshold of a numeric value.
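To tie the pieces together, here's a minimal sketch (synthetic data, not a real strategy) of fitting one of those tree-based models with scikit-learn on rank-style features and reading out probabilities. One caveat on the categorical point: scikit-learn's tree implementations expect numeric input, so categorical factors would need encoding first; libraries like LightGBM can split on categorical features directly.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic universe: 200 "stocks", 3 ranked factors each (values in [0, 1])
X = rng.random((200, 3))

# Synthetic label: top-half membership driven by the first factor plus noise
score = X[:, 0] + 0.3 * rng.standard_normal(200)
y = (score > np.median(score)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# predict_proba gives [P(bottom half), P(top half)] per stock,
# which is exactly what the long/short basket selection needs
proba = model.predict_proba(X[:5])
print(proba.shape)  # (5, 2)
```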
Anyway, hope some of that helps.