Anyone good at optimization - please help

I am trying to constrain the portfolio's factor betas to zero, but scipy.optimize.minimize with method='cobyla' fails with a constraint violation. Anyone?

I get the following error when I constrain the factor betas to zero.

Did not converge to a solution satisfying the constraints. See maxcv for magnitude of violation.
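That message comes from COBYLA's result object; `res.maxcv` holds the largest constraint violation at the final point. A minimal reproduction of the failure mode, unrelated to the actual betas: a pair of opposing inequalities forcing a variable to zero, combined with a minimum-magnitude constraint.

```python
from scipy.optimize import minimize

# x0 >= 0 and -x0 >= 0 together force x0 == 0, contradicting abs(x0) >= 0.1,
# so no feasible point exists and COBYLA cannot satisfy the constraints
cons = [{'type': 'ineq', 'fun': lambda x: x[0]},
        {'type': 'ineq', 'fun': lambda x: -x[0]},
        {'type': 'ineq', 'fun': lambda x: abs(x[0]) - 0.1}]

res = minimize(lambda x: x[0] ** 2, [1.0], method='COBYLA', constraints=cons)
print(res.message)  # convergence failure message
print(res.maxcv)    # magnitude of the worst constraint violation at the final x
```

At any point the worst violation here is at least 0.05, so `maxcv` is a quick way to distinguish "optimizer gave up" from "problem is infeasible".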

[Backtest attached — performance fields not populated]
import numpy as np
import statsmodels.api as smapi
from sklearn.covariance import OAS
from scipy.optimize import minimize

def getweights(params, cov, signal):
    cons = []
    (m, n) = np.shape(params)

    # factor neutrality: each pair of opposing inequalities together forces
    # np.dot(x, params[:, i]) == 0
    for i in range(0, n):
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: np.dot(x.T, params[:, i])})
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: -np.dot(x.T, params[:, i])})

    # minimum signal exposure
    cons.append({'type': 'ineq', 'fun': lambda x: np.dot(x, signal) - 0.01})

    # per-asset weight magnitude: 0.1 <= abs(x[i]) <= 1
    for i in range(0, m):
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: 1 - abs(x[i])})
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: abs(x[i]) - 0.1})

    # maximize signal-to-variance ratio
    x0 = [1.] * m
    res = minimize(lambda x: -np.dot(x.T, signal) / np.dot(np.dot(x.T, cov), x), x0,
                   constraints=cons, method='cobyla', options={'maxiter': 2000})
    print(res.message)
    return res.x


def initialize(context):
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=30))
    context.SPY = sid(19655)
    context.done = False

def handle_data(context, data):
    pass

def trade(context, data):
    prices = data.history(context.univ, "price", 120, "1d")
    prices = prices.dropna(axis=1)
   
    returns = prices.pct_change().dropna().values
    returns = returns * 1000.
    cov = OAS().fit(returns).covariance_
    e, v = np.linalg.eig(cov)
    idx = e.argsort()
    comp = v[:, idx[-25:]]
    
    if comp[0, 0] < 0:
        comp *= -1
        
    sources = np.dot(returns, comp)
    betas = np.zeros((np.shape(returns)[1], np.shape(sources)[1]))

    for i in range(0, np.shape(returns)[1]):
        model = smapi.RLM(returns[:, i], smapi.add_constant(sources)).fit()
        betas[i, :] = model.params[1:]

    signal = np.sum(returns[-15:, :], axis=0) / np.std(returns, axis=0)        
    W = getweights(betas, cov, signal)
    den = np.sum(np.abs(W))
    if den == 0:
        den = 1
    wsum = 0
    for i, stock in enumerate(prices):  # avoid shadowing the sid() builtin
        val = W[i] / den * context.portfolio.portfolio_value * 2.
        order_target_value(stock, val)
        wsum += val
    order_target_value(context.SPY, -wsum)  # hedge net exposure with SPY
    

def before_trading_start(context, data):
    if context.done:
        return
    context.done = True
    fundamental_df = get_fundamentals(
        query(fundamentals.valuation_ratios.ev_to_ebitda)
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.asset_classification.morningstar_sector_code == 309)
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK") # no pink sheets
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001') # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI')) # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True) # remove ancillary classes
        .filter(fundamentals.share_class_reference.is_depositary_receipt == False) # !ADR/GDR
        .filter(fundamentals.valuation_ratios.ev_to_ebitda > 0)
        .order_by(fundamentals.valuation.market_cap.desc()).limit(25)).T
    context.univ = fundamental_df[0:25].index
    
There was a runtime error.
10 responses

Pravin -

I changed the optimizer to SLSQP and it seems to work (except there are a few "Positive directional derivative for linesearch" notices in the logs, whatever that means).

Grant

[Backtest attached — performance fields not populated]
(The attached algorithm is identical to the one above, except for the optimizer line:)

    res = minimize(lambda x: -np.dot(x.T, signal) / np.dot(np.dot(x.T, cov), x), x0,
                   constraints=cons, method='SLSQP', options={'maxiter': 2000})
    for i in range(0, n):  
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: np.dot(x.T, params[:, i])})  
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: -np.dot(x.T, params[:, i])})  

This code is strange. Are you sure that's what you mean? To judge from the documentation, that's going to result in constraints ∀i, np.dot(x.T, params[:, i]) >= 0 & -np.dot(x.T, params[:, i]) >= 0. I.e., np.dot(x.T, params[:, i]) == 0, i.e., x is orthogonal to every constraint plane, i.e., x == 0.

You probably want some constants in those lambdas, like you have in the subsequent set of constraints, which translate to ∀i, 0.1 <= abs(x[i]) <= 1.
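If beta neutrality is really the intent, SLSQP supports true equality constraints, so it can be expressed directly instead of as opposing inequality pairs. A sketch on toy data (the matrix, signal, and bounds are stand-ins, and the infeasible abs(x[i]) >= 0.1 side is deliberately dropped):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
m, n = 10, 3                          # 10 assets, 3 toy factors
params = rng.randn(m, n)              # stand-in for the estimated betas
signal = rng.randn(m)

# beta neutrality expressed directly as equality constraints
cons = [{'type': 'eq', 'fun': lambda x, i=i: np.dot(x, params[:, i])}
        for i in range(n)]
# minimum signal exposure, as in the original
cons.append({'type': 'ineq', 'fun': lambda x: np.dot(x, signal) - 0.01})

# box bounds replace the 1 - abs(x[i]) constraints; the abs(x[i]) >= 0.1 side
# is omitted because it is what makes the original system infeasible
res = minimize(lambda x: np.dot(x, x) - np.dot(x, signal),
               np.zeros(m), method='SLSQP',
               bounds=[(-1.0, 1.0)] * m, constraints=cons)
```

With more assets than factors, the null space of the beta matrix is nonempty, so SLSQP finds a weight vector that is exactly factor-neutral while keeping positive signal exposure.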

Thanks Grant and Alex.

@Grant, I think COBYLA is used for quadratic optimization. I am not so sure if SLSQP can be used for that.

@Alex, I am trying to set the portfolio exposure to all params to zero and hence the constraint.

Pravin

I've been working on some optimization stuff with Jonathan, and we came to the same conclusion as you did in your other post. The CVXOPT user experience is trash, especially for those not familiar with linear algebra or with converting equations into standard matrix form. What's more, scipy.minimize often fails to find the correct minimum, and it is inefficient.

Here is a long-only optimization run on the Mean Absolute Deviation technique I posted about last year. This is a non-quadratic optimization and can be expressed as a linear program. You can see in the graph below how scipy.minimize (green) and CVXOPT (blue) perform, with optimization time on the Y axis and number of assets on the X axis. Needless to say, the CVXOPT LP outperforms.
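For reference, the MAD objective becomes a linear program once you introduce one absolute-deviation variable per period. A minimal long-only sketch with scipy.optimize.linprog on synthetic returns (not James's implementation; all names are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.RandomState(1)
T, m = 60, 5                              # periods, assets
R = rng.randn(T, m) * 0.01 + 0.001        # synthetic daily returns
A = R - R.mean(axis=0)                    # deviations from mean return

# variables: m portfolio weights followed by T absolute-deviation helpers d_t
c = np.concatenate([np.zeros(m), np.full(T, 1.0 / T)])   # minimize mean |dev|
A_ub = np.block([[A, -np.eye(T)],                        #  A w - d <= 0
                 [-A, -np.eye(T)]])                      # -A w - d <= 0
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(m), np.zeros(T)])[None, :]
b_eq = np.array([1.0])                                   # fully invested
bounds = [(0, None)] * (m + T)                           # long only, d >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w = res.x[:m]                                            # optimal weights
```

The two inequality blocks pin each d_t to |(R_t - mean)·w| at the optimum, which is why the linear objective recovers the mean absolute deviation.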

If you can, use CVXOPT. I have a working knowledge of linear algebra and the only way I was able to use CVXOPT was with their modeling api. I don't know if you've already tried using it but looking at your above implementation it seems it would not be hard to translate over. The trickier part I felt was setting correct bounds and constraints.

Anyhow hopefully this provides you and anyone else who looks at this thread with some decent information.


Thanks James. I have the same feelings about cvxopt and scipy minimize, and am glad you share them. I could not use cvxopt for this because its qp solves an objective of the form xᵀPx + qᵀx, whereas I need qᵀx / xᵀPx, which is nearly impossible to formulate with cvxopt's qp.
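The ratio qᵀx / xᵀPx indeed doesn't fit the qp standard form, but the closely related maximum-Sharpe objective qᵀx / √(xᵀPx) has a classical QP reformulation: minimize variance subject to unit expected return, then rescale. A sketch on synthetic data (not Pravin's actual inputs), checked against the closed-form direction P⁻¹q:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(2)
m = 6
B = rng.randn(m, m)
P = B @ B.T + np.eye(m)               # positive-definite stand-in covariance
q = rng.rand(m) + 0.1                 # strictly positive expected-return vector

# QP: minimize y' P y  subject to  q . y == 1; the optimizer points in the
# max-Sharpe direction, and any positive rescaling preserves the ratio
cons = [{'type': 'eq', 'fun': lambda y: np.dot(q, y) - 1.0}]
y0 = np.full(m, 1.0 / q.sum())        # feasible start: q . y0 == 1
res = minimize(lambda y: y @ P @ y, y0, method='SLSQP', constraints=cons)

w = res.x / np.abs(res.x).sum()       # normalize to unit gross exposure

# closed form for comparison: the max-Sharpe direction is proportional to P^-1 q
w_ref = np.linalg.solve(P, q)
w_ref = w_ref / np.abs(w_ref).sum()
```

This minimize-variance-at-unit-return form is exactly what cvxopt's qp (or any QP solver) accepts, so if the square-root Sharpe ratio is an acceptable substitute for the variance-denominated ratio, the problem does fit the standard template.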

@ James -

CVXPY (http://www.cvxpy.org/en/latest/) looks friendly. See also http://www.jmlr.org/papers/volume17/15-408/15-408.pdf. I recall that Pravin had requested Quantopian look into it.

I came across CVXPY too; I agree it would be good to get it onto the platform.

Did not converge to a solution satisfying the constraints. See maxcv for magnitude of violation.

This is happening because your constraints are unsatisfiable. The determinant of params, a 25x25 matrix, is very close to 1. You are effectively requiring that params times x is 0, i.e. x is 0. On the other hand, your abs constraints require that x be nonzero.
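Alex's argument can be checked numerically: when params is square and invertible, paramsᵀx = 0 has only the zero solution, which contradicts abs(x[i]) >= 0.1 for every i. A random stand-in matrix below plays the role of the actual 25x25 betas:

```python
import numpy as np

rng = np.random.RandomState(3)
params = rng.randn(25, 25)            # stand-in for the 25x25 beta matrix

# the paired inequalities x . params[:, i] >= 0 and -x . params[:, i] >= 0
# force params.T @ x == 0; a random square matrix is invertible (a.s.)
assert np.linalg.matrix_rank(params) == 25
x = np.linalg.solve(params.T, np.zeros(25))   # unique solution: x == 0

# the zero vector violates abs(x[i]) >= 0.1 for every i, so no feasible point
print(np.abs(x).max())   # 0.0
```

With fewer factors than assets (a tall, non-square beta matrix) the null space would be nonempty and the neutrality constraints could coexist with nonzero weights.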

Brilliant. Thanks Alex. I will change it so that it satisfies the constraints. Many thanks again.

Glad I could help.