How relevant is Style risk if common returns are relatively small?

(I think this was asked by Grant in one of the risk megathreads a while ago, but it wasn't answered, and I'm wondering about it now.)

Let's say I have a mean-reversion style algo that has 4.5% annualized specific return and 0.5% annualized common return, with a specific Sharpe ratio of 2.

How problematic is it if it has a 'Short Term Reversal' style risk exposure of, say, 50% (above the required maximum of 40%)? The whole point of the algo is to use mean reversion, so some style risk exposure is expected, and if 90% of the returns (4.5% out of 5.0%) are 'pure' alpha, that's not too bad, right? (I can see why 50% risk on the total returns would be more problematic.)

Just wondering if any progress was made on this philosophical question in the meantime :).


You are focusing on annualized returns, but I think risk pertains more to volatility. As such, I would be curious what the vols of your specific and common returns are. Since your exposure is high, I would imagine the common vol is substantial. If that high exposure is static over time, I would still argue that the strategy takes on unnecessary risk. Or put another way, 50% (or whatever) of your risk budget is allocated to something that is already well known and that an investor will not pay you a lot of money for.
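For reference, a minimal sketch of that volatility check, assuming specific_returns and common_returns are daily return Series (the names are placeholders; e.g. from a performance-attribution tear sheet):

import numpy as np

# Annualized volatility of each return stream (252 trading days)
specific_vol = specific_returns.std() * np.sqrt(252)
common_vol = common_returns.std() * np.sqrt(252)

# Share of total variance coming from the common (style) component,
# ignoring the covariance cross-term for simplicity
common_var_share = common_vol**2 / (common_vol**2 + specific_vol**2)
print('specific vol: {:.2%}, common vol: {:.2%}, common share of variance: {:.1%}'.format(
    specific_vol, common_vol, common_var_share))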

As such, I would try to make your mean-reversion signal orthogonal to the existing one and capture only where your factor differs. You can try this (experimental) code snippet that Max wrote:

import numpy as np
from sklearn.linear_model import LinearRegression
from quantopian.pipeline import CustomFactor

class OrthogonalAlpha(CustomFactor):
    window_length = 504

    def compute(self, today, assets, out, alpha, symbol, *risk_exposures):
        # Regress the alpha on the risk loadings and keep the residual
        regressor = LinearRegression(fit_intercept=True)
        risk_exposures = np.array(risk_exposures)
        residuals = np.zeros((1, risk_exposures.shape[-1]))
        for i in range(risk_exposures.shape[-1]):
            X = risk_exposures[:, :, i].T
            X = np.nan_to_num(X)
            # First 11 columns are the sector loadings; drop all-zero sectors
            sectors = X[:, :11][:, ~(np.all(X[:, :11] == 0., axis=0))]
            if sectors.shape[-1] == 0:
                residuals[:, i] = 0
                continue
            # Remaining columns are the style loadings
            styles = X[:, 11:]
            y = alpha[:, i].T
            y = np.nan_to_num(y)
            # Do the sector regression
            regressor.fit(sectors, y)
            sector_resid = regressor.intercept_ + \
                           (regressor.predict(sectors) - y)
            # Do the style regression on the residuals of the sector regression
            regressor.fit(styles, sector_resid)
            residuals[:, i] = regressor.intercept_ + \
                              (regressor.predict(styles) - y)[-1]
        out[:] = residuals[-1]

which you would call like:
orthogonal_alpha = OrthogonalAlpha(inputs=[my_custom_mean_reversion_factor], mask=universe)

As I said, this is still experimental so it might not work, but if you try it I would be curious what the outcome is.


Would be very interesting indeed; bummer I can't get it to work (I also had to set window_safe=True on my CustomFactor):

TypeError: compute() takes at least 6 arguments (5 given)  

I've attached a backtest notebook so you can look at volatility; I couldn't really see the part that shows the difference between common and specific volatility.

Note that I managed to bring the style risk down to ~35% by aiming for a [-0.10,0.10] constraint in the optimizer.
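(For anyone wondering how to express such a bound: RiskModelExposure accepts per-factor overrides, so, if I remember the API correctly, a sketch like the following should work; the ±0.10 values mirror the constraint above:)

risk_constraint = opt.experimental.RiskModelExposure(
    risk_model_loadings=context.risk_loadings,
    version=opt.Newest,
    # Tighten only the reversal bound; other factors keep their defaults
    min_short_term_reversal=-0.10,
    max_short_term_reversal=0.10,
)
constraints.append(risk_constraint)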


Maybe a stupid question, but how is this different from setting the style exposure to close to 0 using the Risk API?

Joakim: I'm not quite sure, to be honest, so it warrants exploring and thinking about further. One scenario where the difference might become more apparent: if you have multiple factors, you could orthogonalize each one individually, combine them, and then do risk optimization on the aggregate, versus just doing risk optimization at the end. It probably requires some experimentation to see which is preferable.
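(A rough sketch of the two orderings, using the orthogonalize_alpha helper defined later in this thread; factor_a and factor_b are hypothetical per-asset alpha Series:)

# (a) Orthogonalize each factor first, then combine, then risk-constrain:
orth_a = orthogonalize_alpha(factor_a, risk_loadings)
orth_b = orthogonalize_alpha(factor_b, risk_loadings)
combined = (orth_a + orth_b).dropna()
combined /= combined.abs().sum()

# (b) Combine the raw factors and rely on RiskModelExposure at optimization time:
combined_raw = (factor_a + factor_b).dropna()
combined_raw /= combined_raw.abs().sum()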

Thanks Thomas, and for the code snippet. Will play around with it.

Ivory, Joakim: I've included an NB here that has a helper function I use to create these orthogonal factors more easily. It is still super experimental, but it won't give you that compute error.


Thanks!

An initial test using Alphalens suggests alpha goes from 0.074 to -0.012 after applying orthogonalization to a simple mean-reversion factor. I kind of expected it to leave some alpha with the same sign, since the backtest showed only a small part of the returns can be explained by the common mean-reversion factor.

Did anyone else try this?

We did notice that it often has quite a detrimental effect; we're not quite sure why yet. It makes sense, though, if you apply it to a factor that's already in the risk model. A good test would be a factor that should be largely independent, like a Quality factor.
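(A sketch of how such a before/after test might look in Alphalens; factor_raw, factor_orth, and prices are placeholders for a (date, asset)-indexed factor Series, its orthogonalized version, and a pricing DataFrame:)

import alphalens as al

for name, factor in [('raw', factor_raw), ('orthogonalized', factor_orth)]:
    factor_data = al.utils.get_clean_factor_and_forward_returns(factor, prices)
    # factor_alpha_beta reports the annualized alpha of a factor portfolio
    print(name, al.performance.factor_alpha_beta(factor_data))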

how is this different from setting the style exposure to close to 0 using the Risk API

Interesting point by Joakim above. I have to say, the whole apparatus of the risk model, its implementation in risk_loading_pipeline, and:

    risk_model_exposure = opt.experimental.RiskModelExposure(  
        context.risk_loading_pipeline,  
        version=opt.Newest,  
    )  
    constraints.append(risk_model_exposure)  

has been largely a matter of faith for me. By some magic it does seem to work, but I don't understand it well enough, given its importance in shifting money from Q to my wallet.

It seems the Optimize API should be sufficient, no? I'm wondering if one could achieve the same end result by calling the Optimize API with TargetWeights and calculate_optimal_portfolio and returning the resulting portfolio weight vector. In other words, can the Optimize API be re-purposed to better align individual alpha factors with the desired risk profile, prior to their combination? Alternatively, one could write a custom implementation using CVXPY.

For example, if we run the Optimize API on a given alpha factor and constrain only short_term_reversal, we should be left with a residual that contains everything except the 14-day RSI (to within a specified tolerance), right?
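(A sketch of that idea, assuming alpha_weight_norm is a normalized per-asset weight Series and context.risk_loadings comes from risk_loading_pipeline, as in the template below; the ±0.01 tolerance is arbitrary:)

import quantopian.optimize as opt

# Project the alpha vector onto the set of weights with (near-)zero
# short_term_reversal exposure, leaving everything else intact
str_loadings = context.risk_loadings[['short_term_reversal']].dropna()
residual_weights = opt.calculate_optimal_portfolio(
    objective=opt.TargetWeights(alpha_weight_norm),
    constraints=[
        opt.FactorExposure(
            loadings=str_loadings,
            min_exposures={'short_term_reversal': -0.01},
            max_exposures={'short_term_reversal': 0.01},
        ),
    ],
)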

Another observation: all of this relies on the past being predictive of the future. For example, I've found that the effectiveness of SimpleBeta and the Optimize API varies depending on the stock universe. Using a trailing 260-day computation of beta isn't sufficient; for example, I need to bias the beta exposure to achieve beta ~ 0:

MIN_BETA_EXPOSURE = -0.2  
MAX_BETA_EXPOSURE = -0.2  

My hypothesis is that the trailing beta doesn't persist into the future. I suspect that a similar problem is at play in the application of the risk model; it works ideally only under the assumption that the future resembles the past.
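(For context, a hedged sketch of how those bounds might plug into the Optimize API; it assumes a 'beta' column has been added to the pipeline, and sid(8554) is SPY:)

import pandas as pd
import quantopian.optimize as opt
from quantopian.pipeline.factors import SimpleBeta

# In make_pipeline(): trailing 260-day beta vs SPY
beta = SimpleBeta(target=sid(8554), regression_length=260)

# In rebalance(): constrain portfolio beta exposure to the biased range
constraints.append(opt.FactorExposure(
    loadings=pd.DataFrame({'beta': pipeline_data.beta}),
    min_exposures={'beta': MIN_BETA_EXPOSURE},
    max_exposures={'beta': MAX_BETA_EXPOSURE},
))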

One thing to try is to include the z-scored style risk factors in the alpha combination, but instead of adding them to the alpha factors, subtract them. I tried this once with the RSI style risk factor, and it seemed to work well.
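(A sketch of that subtraction using the template's winsorized factors from later in this thread; RSI(window_length=15) approximates a 14-day RSI, though the sign convention should be checked against the risk model's short_term_reversal definition:)

from quantopian.pipeline.factors import RSI

rsi_z = RSI(window_length=15).zscore()
combined_factor = (
    value_winsorized.zscore() +
    quality_winsorized.zscore() -
    rsi_z  # subtracted rather than added
)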

The above code doesn't do the right thing, but this snippet should work:

def orthogonalize_alpha(pos_pct, risk_loadings, return_coeffs=False):
    from sklearn import linear_model
    import pandas as pd

    def _run_regression(pos_dt, risk_loadings_dt):
        # Regress the weights on the risk loadings and keep the residual
        overlap_sids = sorted(set(pos_dt.index) &
                              set(risk_loadings_dt.dropna().index))
        pos_dt = pos_dt.reindex(overlap_sids)
        risk_loadings_dt = risk_loadings_dt.loc[overlap_sids]
        clf = linear_model.LinearRegression()
        clf.fit(risk_loadings_dt, pos_dt)
        resid = pos_dt - clf.predict(risk_loadings_dt)
        return resid, clf.coef_

    if isinstance(pos_pct, pd.DataFrame):
        # One regression per date
        pos_resid = {}
        coeffs = pd.DataFrame(index=pos_pct.index, columns=risk_loadings.columns)
        for dt in pos_pct.index:
            if dt not in risk_loadings.index.get_level_values(0):
                continue
            pos_resid[dt], coeffs.loc[dt] = _run_regression(
                pos_pct.loc[dt].dropna(), risk_loadings.loc[dt])
        pos_resid = pd.concat(pos_resid)
        coeffs = coeffs.T
    elif isinstance(pos_pct, pd.Series):
        pos_resid, coeffs = _run_regression(pos_pct, risk_loadings)
    else:
        raise ValueError('Pass either DataFrame or Series.')

    if return_coeffs:
        return pos_resid, coeffs
    else:
        return pos_resid

I haven't turned this into a factor, but if you do, please post it here.

I've been trying to implement this code snippet in the L/S Equity template strategy, but I get the error message below:

There was a runtime error.
AttributeError: 'Index' object has no attribute 'levels'
Line: 91 in orthogonalize_alpha
        if (dt not in pos_pct.index) or (dt not in risk_loadings.index.levels[0]):

Would anyone be able to help?

[Backtest attached; algorithm source below]
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import SimpleMovingAverage

from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.experimental import risk_loading_pipeline

from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.data import Fundamentals

# Constraint Parameters
MAX_GROSS_LEVERAGE = 1.0
TOTAL_POSITIONS = 600

MAX_SHORT_POSITION_SIZE = 2.0 / TOTAL_POSITIONS
MAX_LONG_POSITION_SIZE = 2.0 / TOTAL_POSITIONS


def initialize(context):

    algo.attach_pipeline(make_pipeline(), 'long_short_equity_template')
    algo.attach_pipeline(risk_loading_pipeline(), 'risk_factors')

    algo.schedule_function(func=rebalance,
                           date_rule=algo.date_rules.week_start(),
                           time_rule=algo.time_rules.market_open(hours=0, minutes=30),
                           half_days=True)

    algo.schedule_function(func=record_vars,
                           date_rule=algo.date_rules.every_day(),
                           time_rule=algo.time_rules.market_close(),
                           half_days=True)
    

def make_pipeline():

    value = Fundamentals.ebit.latest / Fundamentals.enterprise_value.latest
    quality = Fundamentals.roe.latest
    sentiment_score = SimpleMovingAverage(
        inputs=[stocktwits.bull_minus_bear],
        window_length=3,
    )

    universe = QTradableStocksUS()

    value_winsorized = value.winsorize(min_percentile=0.05, max_percentile=0.95)
    quality_winsorized = quality.winsorize(min_percentile=0.05, max_percentile=0.95)
    sentiment_score_winsorized = sentiment_score.winsorize(min_percentile=0.05,
                                                           max_percentile=0.95)

    combined_factor = (
        value_winsorized.zscore() + 
        quality_winsorized.zscore() + 
        sentiment_score_winsorized.zscore()
    )

    longs = combined_factor.top(TOTAL_POSITIONS//2, mask=universe)
    shorts = combined_factor.bottom(TOTAL_POSITIONS//2, mask=universe)

    long_short_screen = (longs | shorts)

    pipe = Pipeline(
        columns={
            'longs': longs,
            'shorts': shorts,
            'combined_factor': combined_factor
        },
        screen=long_short_screen
    )
    return pipe


def before_trading_start(context, data):

    context.pipeline_data = algo.pipeline_output('long_short_equity_template')
    context.risk_loadings = algo.pipeline_output('risk_factors')


def record_vars(context, data):

    algo.record(num_positions=len(context.portfolio.positions))

# Below function from Thomas Wiecki to orthogonalize the alpha factor    
def orthogonalize_alpha(pos_pct, risk_loadings):  
    from sklearn import linear_model  
    import pandas as pd  
    pos_resid = {}  
    coeffs = pd.DataFrame(index=pos_pct.index, columns=risk_loadings.columns)  
    for dt in pos_pct.index:  
        if (dt not in pos_pct.index) or (dt not in risk_loadings.index.levels[0]):  
            continue  
        pos_dt = pos_pct.loc[dt].dropna()  
        overlap_sids = sorted(list(set(pos_dt.index).intersection(set(risk_loadings.loc[dt].dropna().index.tolist()))))  
        if len(overlap_sids) == 0:  
            continue  
        pos_dt = pos_dt.reindex(overlap_sids)  
        risk_loadings_dt = risk_loadings.loc[dt].loc[overlap_sids]  
        clf = linear_model.LinearRegression()  
        clf.fit(risk_loadings_dt, pos_dt)  
        pos_resid[dt] = pos_dt - clf.predict(risk_loadings_dt)  
        coeffs.loc[dt] = clf.coef_

    pos_resid = pd.concat(pos_resid)  
    coeffs = pd.DataFrame(coeffs).T  
    return pos_resid, coeffs 

def rebalance(context, data):

    pipeline_data = context.pipeline_data
    risk_loadings = context.risk_loadings

    # Changed from MaximizeAlpha to normalized weights using TargetWeights instead:
    alpha_weight = pipeline_data['combined_factor']
    alpha_weight_norm = alpha_weight / alpha_weight.abs().sum() 
    
    # Orthogonalize the alpha using the risk loadings
    orth_alpha = orthogonalize_alpha(alpha_weight_norm, risk_loadings)
    
    objective = opt.TargetWeights(orth_alpha)
    
    constraints = []
    constraints.append(opt.MaxGrossExposure(MAX_GROSS_LEVERAGE))

    constraints.append(opt.DollarNeutral())

    neutralize_risk_factors = opt.experimental.RiskModelExposure(
        risk_model_loadings=risk_loadings,
        version=0
    )
    constraints.append(neutralize_risk_factors)

    constraints.append(
        opt.PositionConcentration.with_equal_bounds(
            min=-MAX_SHORT_POSITION_SIZE,
            max=MAX_LONG_POSITION_SIZE
        ))

    algo.order_optimal_portfolio(
        objective=objective,
        constraints=constraints
    )

Hi Joakim,

Try using risk_loadings.index.get_level_values(0) instead. However, there are other errors since, unless I am mistaken, the function seems to have been written for a 'pos_pct' matrix, not a vector.

Thanks Mathieu, I appreciate your reply. I'm afraid this one might be too tough a nut for me to crack, unfortunately. I'll put it on hold for now. I can't even pronounce 'orthogonalize,' but it seems like very useful code for anyone looking for 'pure alpha.'

I think this fixes it. I also updated the function above.

[Backtest attached; algorithm source below]
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import SimpleMovingAverage

from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.experimental import risk_loading_pipeline

from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.data import Fundamentals

# Constraint Parameters
MAX_GROSS_LEVERAGE = 1.0
TOTAL_POSITIONS = 600

MAX_SHORT_POSITION_SIZE = 2.0 / TOTAL_POSITIONS
MAX_LONG_POSITION_SIZE = 2.0 / TOTAL_POSITIONS


def initialize(context):

    algo.attach_pipeline(make_pipeline(), 'long_short_equity_template')
    algo.attach_pipeline(risk_loading_pipeline(), 'risk_factors')

    algo.schedule_function(func=rebalance,
                           date_rule=algo.date_rules.week_start(),
                           time_rule=algo.time_rules.market_open(hours=0, minutes=30),
                           half_days=True)

    algo.schedule_function(func=record_vars,
                           date_rule=algo.date_rules.every_day(),
                           time_rule=algo.time_rules.market_close(),
                           half_days=True)
    

def make_pipeline():

    value = Fundamentals.ebit.latest / Fundamentals.enterprise_value.latest
    quality = Fundamentals.roe.latest
    sentiment_score = SimpleMovingAverage(
        inputs=[stocktwits.bull_minus_bear],
        window_length=3,
    )

    universe = QTradableStocksUS()

    value_winsorized = value.winsorize(min_percentile=0.05, max_percentile=0.95)
    quality_winsorized = quality.winsorize(min_percentile=0.05, max_percentile=0.95)
    sentiment_score_winsorized = sentiment_score.winsorize(min_percentile=0.05,
                                                           max_percentile=0.95)

    combined_factor = (
        value_winsorized.zscore() + 
        quality_winsorized.zscore() + 
        sentiment_score_winsorized.zscore()
    )

    longs = combined_factor.top(TOTAL_POSITIONS//2, mask=universe)
    shorts = combined_factor.bottom(TOTAL_POSITIONS//2, mask=universe)

    long_short_screen = (longs | shorts)

    pipe = Pipeline(
        columns={
            'longs': longs,
            'shorts': shorts,
            'combined_factor': combined_factor
        },
        screen=long_short_screen
    )
    return pipe


def before_trading_start(context, data):

    context.pipeline_data = algo.pipeline_output('long_short_equity_template')
    context.risk_loadings = algo.pipeline_output('risk_factors')


def record_vars(context, data):

    algo.record(num_positions=len(context.portfolio.positions))

# Below function from Thomas Wiecki to orthogonalize the alpha factor    
def orthogonalize_alpha(pos_pct, risk_loadings, return_coeffs=False):
    from sklearn import linear_model
    import pandas as pd

    def _run_regression(pos_dt, risk_loadings_dt):
        # Regress the weights on the risk loadings and keep the residual
        overlap_sids = sorted(set(pos_dt.index) &
                              set(risk_loadings_dt.dropna().index))
        pos_dt = pos_dt.reindex(overlap_sids)
        risk_loadings_dt = risk_loadings_dt.loc[overlap_sids]
        clf = linear_model.LinearRegression()
        clf.fit(risk_loadings_dt, pos_dt)
        resid = pos_dt - clf.predict(risk_loadings_dt)
        return resid, clf.coef_

    if isinstance(pos_pct, pd.DataFrame):
        # One regression per date
        pos_resid = {}
        coeffs = pd.DataFrame(index=pos_pct.index, columns=risk_loadings.columns)
        for dt in pos_pct.index:
            if dt not in risk_loadings.index.get_level_values(0):
                continue
            pos_resid[dt], coeffs.loc[dt] = _run_regression(
                pos_pct.loc[dt].dropna(), risk_loadings.loc[dt])
        pos_resid = pd.concat(pos_resid)
        coeffs = coeffs.T
    elif isinstance(pos_pct, pd.Series):
        pos_resid, coeffs = _run_regression(pos_pct, risk_loadings)
    else:
        raise ValueError('Pass either DataFrame or Series.')

    if return_coeffs:
        return pos_resid, coeffs
    else:
        return pos_resid

def rebalance(context, data):

    pipeline_data = context.pipeline_data
    risk_loadings = context.risk_loadings

    # Changed from MaximizeAlpha to normalized weights using TargetWeights instead:
    alpha_weight = pipeline_data['combined_factor']
    alpha_weight_norm = alpha_weight / alpha_weight.abs().sum() 
    
    # Orthogonalize the alpha using the risk loadings
    orth_alpha = orthogonalize_alpha(alpha_weight_norm, risk_loadings)
    
    objective = opt.TargetWeights(orth_alpha)
    
    constraints = []
    constraints.append(opt.MaxGrossExposure(MAX_GROSS_LEVERAGE))

    constraints.append(opt.DollarNeutral())

    neutralize_risk_factors = opt.experimental.RiskModelExposure(
        risk_model_loadings=risk_loadings,
        version=0
    )
    constraints.append(neutralize_risk_factors)

    constraints.append(
        opt.PositionConcentration.with_equal_bounds(
            min=-MAX_SHORT_POSITION_SIZE,
            max=MAX_LONG_POSITION_SIZE
        ))

    algo.order_optimal_portfolio(
        objective=objective,
        constraints=constraints
    )

That's awesome, thanks heaps! Appears to be working.

Here's a version of Thomas' code above that doesn't error on NaN values (and forces them to stay NaN). Potentially useful if you want NaNs to persist in your signals for whatever reason:

import numpy as np
from sklearn.linear_model import LinearRegression

def orthogonalize_alpha(pos_pct, risk_loadings, return_coeffs=False):
    def _run_regression(pos_dt, risk_loadings_dt):
        # Keep NaNs at NaN
        isnan = pos_dt.isnull()
        pos_dt_nonans = pos_dt[~isnan].copy()
        # Get the overlapping SIDs for each case
        overlap_sids = sorted(set(pos_dt.index) & set(risk_loadings_dt.index))
        overlap_sids_nonans = sorted(set(pos_dt_nonans.index) &
                                     set(risk_loadings_dt.index))
        # Subset the data sets for each case
        pos_dt = pos_dt.loc[overlap_sids]
        risk_loadings_dt = risk_loadings_dt.loc[overlap_sids]
        pos_dt_nonans = pos_dt_nonans.loc[overlap_sids_nonans]
        risk_loadings_dt_nonans = risk_loadings_dt.loc[overlap_sids_nonans]
        # Fit the regression to non-NaN data only
        try:
            clf = LinearRegression()
            clf.fit(risk_loadings_dt_nonans, pos_dt_nonans)
        except ValueError:  # all data points are NaN
            return pos_dt, None
        # Subtract the projection onto the risk loadings
        resid = pos_dt - clf.predict(risk_loadings_dt)
        # Keep NaNs at NaN
        resid[isnan] = np.nan
        return resid, clf.coef_

    pos_resid, coeffs = _run_regression(pos_pct, risk_loadings)
    if return_coeffs:
        return pos_resid, coeffs
    else:
        return pos_resid

I've been exploring how to orthogonalize each of my factors individually and then combine the two for a final orthogonalization pass. While the code below works, I'm not sure it's the best approach; in this case I'm just adding the two residuals together. These steps can be performed in either before_trading_start(context, data) or rebalance(context, data). Any suggestions?

    alpha_weight_A = pipeline_data['alpha_a']
    alpha_weight_norm_A = alpha_weight_A / alpha_weight_A.abs().sum()
    orth_A = orthogonalize_alpha(alpha_weight_norm_A, risk_loadings)

    alpha_weight_C = pipeline_data['alpha_c']
    alpha_weight_norm_C = alpha_weight_C / alpha_weight_C.abs().sum()
    orth_C = orthogonalize_alpha(alpha_weight_norm_C, risk_loadings)

    alpha_weight_norm = orth_A + orth_C
    alpha_weight_norm_combined = alpha_weight_norm / alpha_weight_norm.abs().sum()

    # Orthogonalize the combined alpha using the risk loadings
    orth_alpha = orthogonalize_alpha(alpha_weight_norm_combined, risk_loadings)

    #objective = opt.TargetWeights(orth_alpha)
    objective = opt.MaximizeAlpha(orth_alpha)
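(One possible refinement, purely a sketch: rescale each residual to unit gross exposure before summing, so each factor contributes equally to the combination:)

    # Equal gross weight per orthogonalized factor before combining
    orth_A = orth_A / orth_A.abs().sum()
    orth_C = orth_C / orth_C.abs().sum()
    alpha_weight_norm = orth_A + orth_C
    alpha_weight_norm_combined = alpha_weight_norm / alpha_weight_norm.abs().sum()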