Max long exposure and max short exposure with a quadratic objective?

I want to use a quadratic objective function in addition to a linear objective function and create a long-short, dollar-neutral portfolio with maximum long and short weight constraints. Does anyone know how I can achieve that?

14 responses

Aha! Found the trick. I need to create three sets of variables: one for the longs, one for the shorts, and one for their sum. I will post the function once I have coded it up.

Hi Pravin -

Any luck getting this to work? Just curious.

Thanks,

Grant

Hi Grant,

It is possible. You need four sets of variables.

• The first set is always positive
• The second set is always negative
• The third set is the sum of the first and second sets
• The fourth set is the difference of the first and second sets

Since CVXPY does not have SOS1 constraints, you need to use mixed-integer programming: create two sets of binary variables (0 or 1) such that, for each name, only one of the pair can be 1 at any time.

The first set of variables, which is always positive, should be less than or equal to the first set of binary variables.

The negation of the second set of variables, which is always negative (so positive after negation), should be less than or equal to the second set of binary variables.

Hope that helps.

Here is the code. Please provide feedback if it does not work for you.

def getW(cov, signal):
    (m, m) = cov.shape
    zcov = np.hstack((cov * 0., cov * 0., cov * 0., cov * 0.))
    cov = np.hstack((cov * 0., cov * 0., cov, cov * 0.))
    cov = np.vstack((zcov, zcov, cov, zcov))
    signal = np.hstack((signal * 0, signal * 0, signal, signal * 0))
    x = cvx.Variable(4 * m)
    risk = cvx.quad_form(x, cov)  # portfolio variance x' cov x (on the net weights)
    objective = cvx.Maximize(signal.T * x - risk)
    y = cvx.Int(2 * m)            # integer indicators, bounded to {0, 1} below
    maxexposure = 0.1  # 10%
    constraints = [cvx.sum_entries(x[2*m:-m]) == 0,   # net weights: dollar neutral
                   y <= 1, y >= 0,
                   cvx.sum_entries(x[-m:]) == 1,      # gross weights sum to 1
                   x[:m] >= 0, x[m:2*m] <= 0]
    for i in range(0, m):
        constraints.append(x[i] + x[m+i] - x[2*m+i] == 0)  # net = long + short
        constraints.append(x[i] - x[m+i] - x[3*m+i] == 0)  # gross = long - short
        constraints.append(x[i] <= y[i])
        constraints.append(-x[m+i] <= y[m+i])
        constraints.append(y[i] + y[m+i] <= 1)             # at most one side active
        constraints.append(x[3*m+i] <= maxexposure)        # per-name gross cap
    prob = cvx.Problem(objective, constraints)
    prob.solve()
    return np.squeeze(np.asarray(x.value[2*m:-m]))


Thanks Pravin -

I'm afraid that without a derivation in mathematical symbols with some explanation, I'm lost. I understand the idea of maximizing signal.T * x but why subtract a risk term (and what is the risk term)? And what are all of the constraints?

Is there a paper that explains what you're doing? Or is this a home-brew effort?

Hi Grant,

It is a pretty standard method when you want to impose absolute-value constraints. The risk term is nothing but x.T * cov * x. That is, we are maximizing the signal while minimizing portfolio variance.

Best regards,
Pravin

Duh, after banging my head on the wall several times, I realized that CVXPY only supports MILP, not quadratic objectives with mixed-integer programming. That means we need a commercial solver.

Hi @Pravin, It is straightforward to achieve the goal of your original question above without integer variables or special ordered sets. CVXPY makes it quite simple. A sketch solution is:

import cvxpy as cvx
import numpy as np

def calc_mvo(alpha_vector, covmat, risk_aversion, max_position=0.05):

    num_stocks = len(alpha_vector)
    x = cvx.Variable(num_stocks)
    risk = cvx.quad_form(x, covmat)       # x' covmat x

    objective = cvx.Maximize(
        alpha_vector*x - risk_aversion*risk
    )

    constraints = [
        sum(x) == 0,              # dollar-neutral long/short
        sum(cvx.abs(x)) <= 1,     # leverage constraint
        x <= max_position,        # max long position
        x >= -max_position        # max short position
    ]
    prob = cvx.Problem(objective, constraints)
    prob.solve(verbose=False)
    sol = np.asarray(x.value).flatten()
    return sol


Keep in mind there are known pitfalls in implementing MVO in practice. For example, the sample covariance matrix is unstable, and you need to take care in estimation (see for example http://ledoit.net/honey.pdf and http://scikit-learn.org/stable/modules/generated/sklearn.covariance.LedoitWolf.html ). The problem becomes intractable when the number of stocks is large (say on the order of 500+), and a reduced-structure covariance matrix is needed in that case. A general critique of MVO is that the weights can be unstable (i.e., turnover becomes high) and overly sensitive to the input alpha vector. Nonetheless, I applaud your efforts to use convex optimization and risk-awareness in your portfolio construction.
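To make the shrinkage idea concrete, here is a minimal numpy-only sketch of shrinking the sample covariance toward a scaled identity, in the spirit of the Ledoit-Wolf reference above. The fixed `intensity` is an illustrative constant; the scikit-learn `LedoitWolf` estimator linked above estimates the intensity from the data instead:

```python
import numpy as np

def shrink_cov(returns, intensity=0.2):
    """Shrink the sample covariance toward a scaled identity target."""
    sample = np.cov(returns, rowvar=False)      # sample covariance (T x N input)
    mu = np.trace(sample) / sample.shape[0]     # average variance across names
    target = mu * np.eye(sample.shape[0])       # shrinkage target
    return (1 - intensity) * sample + intensity * target

# Illustrative data: 60 daily observations of 10 stocks
rng = np.random.default_rng(1)
returns = rng.standard_normal((60, 10))
cov = shrink_cov(returns)
print(np.all(np.linalg.eigvalsh(cov) > 0))
```

The shrunk matrix stays symmetric and is better conditioned than the raw sample covariance, which matters once it is fed into `cvx.quad_form`.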

@Grant, Pravin's formulation is based on Markowitz and MPT (https://en.wikipedia.org/wiki/Modern_portfolio_theory). The idea is that you make a tradeoff between return and risk. The scaling parameter risk_aversion controls how you balance return and risk.


Thanks Jonathan. You saved my day. I had been trying all day with

sum(cvx.abs(x)) == 1,     # leverage constraint


only to realize that it fails the DCP rules. I now understand that the trick is to use <= instead of ==.

Best regards,
Pravin

@ Jonathan -

Thanks. Is there a standard way of constructing the alpha_vector and cov_mat? I understand x to be the relative allocation across the universe, in the range of -1 to +1. Presumably, alpha_vector is constructed such that its dot product with x is the projected return, and the risk is a measure of the projected variability in returns, on the same time scale.

Regarding the risk, is there a reason to use the variance, rather than the square root of the variance? Is the variance required to be able to use the quadratic solver?

Note that the limit on x can be written:

cvx.abs(x) <= (1.0+delta)/num_stocks


where delta is the maximum allowed excursion of the allocation above equal weight. This way, one avoids a magic number for position sizing; the limit scales with the size of the universe.

@Grant, You are asking all the right questions; there are no "standard" answers for any of these. Along with alpha discovery, portfolio construction is a field where you can bring your own creativity and research to bear in formulating solutions. The alpha_vector could be anything -- ranks, z-scored values, actual expected returns, etc. -- anything where "higher is better". For MVO, the classical approach would indeed be to have expected returns in the alpha_vector and, yes, the cov_mat would be expressed over the same time scale. That doesn't need to be the case, though. The risk_aversion parameter can be viewed as a scale factor which could be tuned in-sample to convert between the (arbitrary) units of the alpha_vector and the risk expression.

There are three forms of classic MVO. The above formulation is the "risk aversion" form. The other two are:

• Risk minimization, subject to minimum alpha fitness

    objective = cvx.Minimize(risk)

    constraints = [
        alpha_vector*x >= min_fitness,
        sum(x) == 0,              # dollar-neutral long/short
        sum(cvx.abs(x)) <= 1,     # leverage constraint
        x <= max_position,        # max long position
        x >= -max_position        # max short position
    ]

• Alpha maximization, subject to a risk budget

    objective = cvx.Maximize(
        alpha_vector*x
    )

    constraints = [
        risk <= max_risk,         # risk budget
        sum(x) == 0,              # dollar-neutral long/short
        sum(cvx.abs(x)) <= 1,     # leverage constraint
        x <= max_position,        # max long position
        x >= -max_position        # max short position
    ]


The third form is most intuitive to me: I know how much risk I want to take and I want to get the maximum alpha possible. In this formulation it is most clear to me that you don't need to express the alpha in an interpretable unit like expected returns.

Lastly, let me just repeat that classic MVO has known pitfalls; importantly the results are very sensitive to the inputs and turnover can therefore be very high.

Thanks for this innovation:

Note that the limit on x can be written: cvx.abs(x) <= (1.0+delta)/num_stocks where delta is the measure of the maximum allowed excursion of the allocation above equal weight. This way, one doesn't have the magic number for the position sizing; it'll scale with the size of the universe.

Thanks Jonathan -

Another thought is that whatever constraint is placed on the risk, one approach would be to scale it in some fashion with the projected reward for the period--more risk would be allowed if the reward will be bigger, and vice versa. With a fixed "magic number" on the risk constraint, it is not really a risk-reward decision.

Regarding turnover/choppiness, one approach is to average the results over an expanding, trailing window, weighting by the expected return (see the discussion in Section 4.3 of http://icml.cc/2012/papers/168.pdf ). One ends up running the optimization N times per day (or whatever the re-balancing period might be), where N is the number of expanding windows. One can also sample randomly over the trailing window, if N needs to be relatively small, for computational efficiency. Presumably, there is Q interest in this optimization jazz, in the context of the Q fund, and the optimization/risk management API. I suspect that if you implement it as a single-shot deal, without the trailing window smoothing, you may have problems, as the author of the OLMAR paper did.
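The averaging step of that scheme can be sketched as follows. `smooth_weights` is a hypothetical helper; weighting each window's optimized solution by its (clipped) projected return is just one plausible reading of the Section 4.3 idea, and the weight vectors here stand in for N separate runs of an MVO call like `calc_mvo`:

```python
import numpy as np

def smooth_weights(weight_vectors, expected_returns):
    """Blend per-window MVO solutions, weighting each by its projected return."""
    w = np.asarray(weight_vectors)                      # shape (N, num_stocks)
    r = np.clip(np.asarray(expected_returns), 0, None)  # ignore negative projections
    if r.sum() == 0:
        return w.mean(axis=0)                           # fall back to equal weighting
    return (w * r[:, None]).sum(axis=0) / r.sum()

# Two hypothetical per-window solutions and their projected returns
weights = [np.array([0.5, -0.5]), np.array([0.1, -0.1])]
blended = smooth_weights(weights, expected_returns=[0.03, 0.01])
print(blended)  # a return-weighted blend of the two solutions
```

Windows with a stronger projected reward pull the blended allocation toward their solution, which damps the single-shot choppiness described above.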

The other input is that whenever technical indicators are used, it would seem that minute data should be used. Any technical indicators from pipeline will have been based on daily values, derived from single trades, as I understand. So, for relatively short-term trading (daily/weekly), the data are sparse and noisy.

Here's an example that may be of interest. Probably not realistic as-is (maybe someone can tweak it into reality), but I think it captures some of the ideas above in an algo implementation.

Note:

    set_slippage(slippage.FixedSlippage(spread=0.00))

# Backtest ID: 58dff3771b6dd21c04d7856b