Long/Short Cross-Sectional Momentum

This is a fairly simple concept that seems to hold up.
It takes an N-day window of M-day returns on a basket of large-cap stocks, then subtracts the cross-sectional average out of each day. I then use the average of the result as a ranking for the universe, going long the top and short the bottom in equal amounts.
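A minimal sketch of that ranking procedure in pandas, on synthetic data (tickers, seed, and prices are all invented for illustration):

```python
import numpy as np
import pandas as pd

# Rows are days, columns are stocks; prices are simulated random walks.
rng = np.random.default_rng(0)
prices = pd.DataFrame(
    100 * np.exp(rng.normal(0, 0.01, size=(300, 4)).cumsum(axis=0)),
    columns=["AAA", "BBB", "CCC", "DDD"],
)
return_window = 50

log_prices = np.log(prices)
# Ratio of log prices 50 days apart, mirroring the algorithm's R.
R = (log_prices / log_prices.shift(return_window)).dropna()

# Subtract each day's cross-sectional average, then average over the
# window to get one relative-momentum score per stock.
ranks = R.sub(R.mean(axis=1), axis=0).mean()

# Long the top of the ranking, short the bottom.
lower, upper = ranks.quantile([0.05, 0.95])
shorts = ranks[ranks <= lower]
longs = ranks[ranks >= upper]
```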

It seems to do okay through most market conditions, but it runs into a universe bug when running through '08 and shorts the market very heavily, so that period is pretty unrealistic.

This seems like a pretty blank slate to me; a lot of layers could be added on top of it. I'm guessing that considering volatilities when weighting the portfolio could help reduce drawdowns and volatility.
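One way to act on that idea, as a rough sketch with invented tickers and simulated returns: weight each selected name by the inverse of its recent volatility, so lower-vol names get larger weights.

```python
import numpy as np
import pandas as pd

# Simulated daily returns for three stocks with different volatilities
# (all names and numbers are invented for this example).
rng = np.random.default_rng(1)
returns = pd.DataFrame(
    rng.normal(0, [0.01, 0.02, 0.04], size=(60, 3)),
    columns=["AAA", "BBB", "CCC"],
)

# Inverse-volatility weights, normalized to sum to 1.
inv_vol = 1.0 / returns.std()
weights = inv_vol / inv_vol.sum()
```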

David

import numpy as np
import pandas as pd
import datetime


def initialize(context):
    
    context.lookback = 300
    context.return_window = 50
    context.longleverage = 0.5
    context.shortleverage = -0.5
    
    # There's bad data for this security so I ignore it
    context.ignores = [sid(7143)]
    schedule_function(trade,
                      date_rule=date_rules.month_start(),
                      time_rule=time_rules.market_open(minutes=20))
    

    
def handle_data(context, data):
    leverage = context.account.leverage
    exposure = context.account.net_leverage
    record(leverage=leverage, exposure=exposure)
    
def trade(context, data):
    
    prices = np.log(history(context.lookback, '1d', 'price').dropna(axis=1))
    R = (prices / prices.shift(context.return_window)).dropna()

    # Subtracts the cross-sectional average out of each data point on each day. 
    ranks = (R.T - R.T.mean()).T.mean()
    lower, upper = ranks.quantile([.05, .95])
    shorts = ranks[ranks <= lower]
    longs = ranks[ranks >= upper]

    for stock in data:
        if stock in context.ignores:
            continue
        if stock in shorts.index:
            order_target_percent(stock, context.shortleverage / len(shorts))
        elif stock in longs.index:
            order_target_percent(stock, context.longleverage / len(longs))
        else:
            order_target(stock, 0)
   
  
def before_trading_start(context):
    num_stocks = 500
    fundamental_df = get_fundamentals(
        query(
            # To add a metric. Start by typing "fundamentals."
            fundamentals.valuation.market_cap,
        )
        .filter(fundamentals.valuation.market_cap > 1e8)
        .order_by(fundamentals.valuation.market_cap.desc())
        .limit(num_stocks)
    )
    update_universe(fundamental_df)
    
    
11 responses

Thanks for sharing. There is a lot of code in there for manipulating dataframes that I didn't know I could use.

If I understand correctly,
R = (prices / prices.shift(context.return_window)).dropna()
This line divides current price by the old price?

Could someone explain to me why this line is needed? Why do you need to subtract the average?

Subtracts the cross-sectional average out of each data point on each day.

ranks = (R.T - R.T.mean()).T.mean()

Doing so normalizes the individual stock returns against the average of all stocks on that day. Cross-sectional momentum is all about relative momentum, so if everything is down, the stocks that are down less than average still have positive cross-sectional momentum. If he'd done (R.T - R.T.mean()) / R.T.std() he'd have calculated the cross-sectional momentum z-scores.
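A small illustration of the two variants on a hypothetical cross-section (tickers, dates, and returns invented):

```python
import numpy as np
import pandas as pd

# Two days, three stocks; each row is one day's cross-section of returns.
R = pd.DataFrame(
    {"AAA": [0.02, -0.01], "BBB": [0.05, 0.01], "CCC": [-0.01, -0.03]},
    index=pd.to_datetime(["2015-01-02", "2015-01-05"]),
)

# Demeaned: each stock's return relative to that day's average. A stock
# that fell less than the market still gets a positive score.
demeaned = R.sub(R.mean(axis=1), axis=0)

# Z-scored: additionally divide by the day's cross-sectional standard
# deviation, as in the (R.T - R.T.mean()) / R.T.std() variant above.
zscores = demeaned.div(R.std(axis=1), axis=0)
```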

If you're serious about getting into pandas, I recommend http://www.amazon.com/gp/product/1449319793/ , written by pandas creator Wes McKinney himself.

The line:
R = (prices / prices.shift(context.return_window))

calculates the rolling context.return_window-day return.

e.g. in the case presented, with a 50-day return window it calculates the rolling 50-day return over the 300-day lookback period (leaving 250 rows once the shift drops the first 50 days).

ranks = (R.T - R.T.mean()).T.mean() calculates each stock's return relative to the cross-sectional mean.

I think in this case he is using transpose because the mean default is axis=0 where he wants axis=1.

ranks = (R - R.mean(axis=1)).mean(axis=1) would have achieved the same outcome.

ranks = (R - R.mean(axis=1)).mean(axis=1)

--> generates the error: TimeSeries broadcasting along DataFrame index by default is deprecated.
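For what it's worth, pandas can broadcast along the index explicitly via DataFrame.sub(..., axis=0), which matches the double-transpose result without relying on the deprecated Series broadcasting (toy frame, values invented):

```python
import numpy as np
import pandas as pd

# Rows are days, columns are stocks (values invented for illustration).
R = pd.DataFrame({"AAA": [1.0, 2.0], "BBB": [3.0, 5.0], "CCC": [5.0, 8.0]})

# The original double transpose subtracts each day's cross-sectional mean:
via_transpose = (R.T - R.T.mean()).T.mean()

# sub(..., axis=0) broadcasts the per-day mean down the index directly,
# avoiding both transposes:
via_sub = R.sub(R.mean(axis=1), axis=0).mean()
```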

ranks = ((R.T - R.T.mean()) / R.T.std()).T.mean()  

--> returns the same results as

ranks = (R.T - R.T.mean()).T.mean()  

They will have the same ordering, but standardizing is important when you combine ranks/factors from multiple sources.

Simon
How can I add a condition so that only positive z-scores go into the longs and only negative z-scores into the shorts?

Here's the same algorithm run on minute data. It's exactly the same, so no need to re-clone, but I do suggest sticking with minute data when possible. The results are pretty similar here because the rebalance frequency is monthly.

@Vladimir, assuming you were able to get a series of z-scores you can select the positive and negative ones like so.

longs = zscores[zscores > 0]  
shorts = zscores[zscores < 0]  

Simon is correct that if you had multiple rankings from different sources, then you should standardize by dividing by the stdev. It's fine not to here; since only one metric is used, the ordering remains the same either way.


Why do you need to normalize returns? If you rank returns from highest to lowest, you can still capture stocks with positive relative momentum but negative absolute momentum.

@Johnny, you're right that you will end up going long stocks with negative absolute momentum. The strategy only attempts to profit on a relative basis: the hope is that the basket held short will underperform the basket held long. They can both go down, as long as the short basket goes down more.
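Toy numbers (invented) for that point: both baskets fall, yet the long/short spread is still positive because the shorts fall more.

```python
# Both baskets are down on the period, but the strategy still profits
# because the short basket falls further than the long basket.
long_basket_return = -0.02   # long basket down 2%
short_basket_return = -0.05  # short basket down 5%

# P&L per unit of capital with equal gross exposure on each side:
spread = long_basket_return - short_basket_return  # ~ +0.03
```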

@David, now I have another idea. I want my long and short positions to be weighted not equally but, say, proportionally to z-score.
Something like this:
weight_i = zscore_i / sum(abs(zscore_i)).
I can do this easily in Excel but not in Python.

It would be easier to help if you post the lines of code giving you a problem. If you have the z-scores in a pandas Series, the calculation is vectorized and this will work for you.

weights = zscores / zscores.abs().sum()
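For example, with a hypothetical Series of z-scores (tickers and values invented), the one-liner gives signed weights whose absolute values sum to 1:

```python
import pandas as pd

# Hypothetical z-scores for five stocks.
zscores = pd.Series(
    {"AAA": 1.2, "BBB": 0.4, "CCC": -0.3, "DDD": -0.8, "EEE": 1.5}
)

# Weight each position proportionally to its z-score, normalized by the
# sum of absolute z-scores so gross exposure sums to 1.
weights = zscores / zscores.abs().sum()

# Positive weights are longs, negative weights are shorts.
longs = weights[weights > 0]
shorts = weights[weights < 0]
```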