@ Thomas -
Transforming the long and short baskets into portfolio weights is part of the portfolio construction step. Given the baskets (just binary long/short signals, or probabilities), it would run portfolio optimization.
My point is that the workflow being discussed and engineered is a kind of system with interfaces: a step-by-step flow of inputs and outputs (for a high-level sketch, see https://blog.quantopian.com/a-professional-quant-equity-workflow/ ). As a side note, I would encourage the whole financial industry to stop using the term "alpha" in this context and use "factor" instead. For one thing, "alpha" presupposes that after the universe definition step you've magically found factors that have been expunged of any overall market influences and can print money in an uncorrelated fashion. Can "alpha," as it is conventionally defined, even be calculated accurately until the backtest step anyway? Save the buzzwords for the marketing guys.
Back to my point... as I understand it, you are working to put together a factor combination step using ML that will be general and applicable to the vast majority of users. You want a kind of module (like the Q500US & Q1500US universes) that is broad-brush and configurable, and that will provide the most general set of outputs to the next step in the workflow, which, as you describe, is another optimization. So my sense is that if your factor combination step takes in N securities, it ought to spit out N securities as a normalized portfolio weight vector (e.g. a series of calls to order_target_percent(stock, weight[stock]) would update my portfolio). There would be no long/short baskets; the output of the factor combination step could be all long or all short, for example.
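To illustrate what I mean by a normalized weight vector with no imposed long/short split, here is a minimal sketch. The function name and the gross-exposure normalization are my own illustrative choices, not anything Thomas has specified:

```python
import numpy as np

def scores_to_weights(scores):
    """Normalize raw combined-factor scores into a portfolio weight
    vector whose absolute values sum to 1. Signs are preserved, so
    the result can be net long, net short, or anything in between --
    no long/short baskets are imposed."""
    scores = np.asarray(scores, dtype=float)
    gross = np.abs(scores).sum()
    if gross == 0:
        return np.zeros_like(scores)
    return scores / gross

# Hypothetical combined-factor scores for four securities:
# three positive, one negative -> a mostly-long book.
weights = scores_to_weights([0.5, 0.3, -0.1, 0.1])
```

On Quantopian, each element of such a vector would then feed an `order_target_percent(stock, weight[stock])` call, exactly as above; the point is that the net exposure is whatever the scores imply, not forced to zero.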
The next step in the workflow, portfolio construction, would operate on the portfolio weight vector, as you say: "run portfolio optimization to e.g. (i) minimize volatility, (ii) minimize exposure to certain risk factors, (iii) take the previous portfolio into account to reduce turnover, (iv) take transaction and borrow costs into account, etc."
At this step, for example, you would apply the market-neutral constraint, which is a risk factor to be managed. If you impose the market-neutral constraint by design earlier in the workflow, then it can't be applied in the portfolio construction step, where it would seem to reside. Your factor combination step, I think, should be an unconstrained maximization of the overall portfolio return, point-in-time. (This may be a problem, since nothing constrains the new portfolio vector to bear any resemblance to the old one; the OLMAR algo handles this by minimizing the square of the Euclidean distance between the old and new portfolio vectors, subject to an overall forecast-return inequality constraint.) In the factor combination step, are you effectively fitting the point-in-time overall return response surface and then finding its peak, with no constraints in the optimization (under a stationarity assumption, I gather)? Is the idea that you can sorta patch things up in the next step, portfolio construction, by re-normalizing the portfolio vector output from the factor combination step?
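For concreteness, here is a rough sketch of the OLMAR-style update I mentioned: minimize the squared Euclidean distance to the old portfolio, subject to a forecast-return inequality constraint. The numbers, the epsilon threshold, and the long-only simplex setting (as in the original OLMAR formulation) are illustrative assumptions only:

```python
import numpy as np
from scipy.optimize import minimize

def olmar_style_update(b_old, x_pred, eps=1.01):
    """Find the portfolio b closest (in squared Euclidean distance)
    to the previous portfolio b_old, subject to the forecast-return
    inequality constraint b . x_pred >= eps. Weights are kept on the
    simplex (long-only, fully invested), as in the original OLMAR
    setting."""
    n = len(b_old)
    res = minimize(
        lambda b: np.sum((b - b_old) ** 2),   # distance to old portfolio
        x0=b_old,
        constraints=[
            {"type": "ineq", "fun": lambda b: b @ x_pred - eps},  # return floor
            {"type": "eq", "fun": lambda b: b.sum() - 1.0},       # fully invested
        ],
        bounds=[(0.0, 1.0)] * n,              # long-only
    )
    return res.x

b_old = np.array([0.25, 0.25, 0.25, 0.25])
x_pred = np.array([1.05, 0.98, 1.02, 1.00])  # hypothetical price-relative forecasts
b_new = olmar_style_update(b_old, x_pred)
```

The key point is that turnover control lives in the objective while the return forecast lives in the constraint; that coupling is exactly what gets lost if the combination step and the construction step never see each other.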
I'm wondering whether you can actually approach things as you are. Your combination and optimization steps may be better thought of as one step. You are, I think, trying to solve a constrained global optimization problem to control the overall portfolio return versus time. Shouldn't you be combining the factors to maximize the forecast return for the next portfolio update, subject to a set of equality and inequality constraints? I don't understand how you can break things up and still get the right answer.
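As a toy version of what I mean by one combined constrained problem: maximize the forecast return subject to a dollar-neutrality constraint and a unit gross-exposure constraint, in a single optimization. Splitting each weight into long and short parts makes it a linear program. The forecast numbers, the per-position cap, and the function name are all my own hypothetical sketch:

```python
import numpy as np
from scipy.optimize import linprog

def neutral_max_return(mu, cap=0.5):
    """One-shot sketch: choose weights maximizing forecast return mu
    subject to dollar neutrality (weights sum to 0) and unit gross
    exposure (|weights| sum to 1). Each weight is split into a long
    part bp and a short part bn so the problem is a linear program;
    'cap' bounds any single position."""
    n = len(mu)
    c = np.concatenate([-mu, mu])  # linprog minimizes, so negate to maximize mu.(bp - bn)
    A_eq = np.array([
        np.concatenate([np.ones(n), -np.ones(n)]),   # net exposure  = 0
        np.concatenate([np.ones(n),  np.ones(n)]),   # gross exposure = 1
    ])
    b_eq = np.array([0.0, 1.0])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, cap)] * (2 * n))
    bp, bn = res.x[:n], res.x[n:]
    return bp - bn

mu = np.array([0.03, 0.01, -0.02, -0.04])  # hypothetical forecast returns
w = neutral_max_return(mu)
```

Here the market-neutral constraint is applied in the same optimization that consumes the return forecast, which is the coupling I'm arguing can't be recovered if the two steps are separated.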
EDIT: Maybe what you should be passing to the portfolio optimization step is effectively a point-in-time response surface model for the overall return (the "mega-alpha")? Is that what you are doing? Then you could use that model in the constrained portfolio optimization problem. This would seem to make sense.