Multi-factor long short with Twitter & StockTwits trader mood

This is a long-short multi-factor strategy based on the Goldman Sachs GSLC implementation by James Christopher. For those unfamiliar with long-short strategies, I highly recommend going over both the full lecture and the accompanying algorithm before exploring this one:

While James' original algorithm used factors based on momentum, value, volatility, and quality (a.k.a. profitability), I chose to remove value (it didn't give great results) and replaced price momentum with a factor based on trader mood sentiment. This new trader mood sentiment factor is the 30-day average bullish-minus-bearish intensity score, weighted by the number of StockTwits and Twitter messages. Each factor (trader mood sentiment, volatility, and quality) is weighted equally, and the combined ranking determines the long/short portfolios.
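The equal-weight rank combination described above can be sketched in plain pandas (the tickers and factor values below are made up for illustration; the real algorithm does this with Pipeline's `rank()`):

```python
import pandas as pd

# Hypothetical factor values for five tickers
factors = pd.DataFrame({
    "sentiment":  [0.4, -0.2, 0.1, 0.9, -0.5],
    "volatility": [1.2,  0.8, 1.0, 0.6,  1.5],  # lower is better, so negate
    "quality":    [0.3,  0.5, 0.2, 0.7,  0.1],
}, index=["A", "B", "C", "D", "E"])

# Equal weighting: rank each factor cross-sectionally, then sum the ranks
combined = (factors.assign(volatility=-factors["volatility"])
                   .rank()
                   .sum(axis=1))

longs  = combined.nlargest(2).index   # top of the ranking -> long book
shorts = combined.nsmallest(2).index  # bottom of the ranking -> short book
```

Because each factor contributes one rank on the same 1..N scale, summing the ranks weights the three factors equally without any rescaling.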

The raw trader mood sentiment data is provided by PsychSignal and made available through Pipeline. PsychSignal uses its own natural language processing (NLP) engine that analyzes messages from both StockTwits and Twitter in order to assign bullish and bearish sentiment scores for each security.

Notes:

For questions on accessing this data, please email [email protected]

This algorithm is for education - the algorithm is not intended to provide investment advice.

from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage, AverageDollarVolume
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.data import morningstar

import numpy as np
import pandas as pd

class Value(CustomFactor):
    
    inputs = [morningstar.valuation_ratios.book_value_yield,
              morningstar.valuation_ratios.sales_yield,
              morningstar.valuation_ratios.fcf_yield] 
    
    window_length = 1
    
    def compute(self, today, assets, out, book_value, sales, fcf):
        value_table = pd.DataFrame(index=assets)
        value_table["book_value"] = book_value[-1]
        value_table["sales"] = sales[-1]
        value_table["fcf"] = fcf[-1]
        out[:] = value_table.rank().mean(axis=1)

class Momentum(CustomFactor):
    
    # 11-month price momentum: the return from ~252 trading days ago to
    # ~20 days ago, skipping the most recent month.
    inputs = [USEquityPricing.close]
    window_length = 252
    
    def compute(self, today, assets, out, close):       
        out[:] = close[-20] / close[0]

class MessageVolume(CustomFactor):
    inputs = [stocktwits.total_scanned_messages]
    window_length = 21
    def compute(self, today, assets, out, msgs):
        out[:] = -np.nansum(msgs, axis=0)
        
def make_pipeline():
    """
    Create and return our pipeline.
    
    We break this piece of logic out into its own function to make it easier to
    test and modify in isolation.
    
    In particular, this function can be copy/pasted into research and run by itself.
    """
    pipe = Pipeline()
    
    initial_screen = Q500US()

    factors = {
        "Message": MessageVolume(mask=initial_screen),
        "Momentum": Momentum(mask=initial_screen),
        "Value": Value(mask=initial_screen),
    }
    
    clean_factors = None
    for name, factor in factors.items():
        if clean_factors is None:
            clean_factors = factor.isfinite()
        else:
            clean_factors = clean_factors & factor.isfinite()
            
    combined_rank = None
    for name, factor in factors.items():
        if combined_rank is None:
            combined_rank = factor.rank(mask=clean_factors)
        else:
            combined_rank += factor.rank(mask=clean_factors)
    pipe.add(combined_rank, 'factor')

    # Build Filters for the stocks in the 80th-90th (long) and 10th-20th
    # (short) percentiles of our combined ranking.
    # We'll use these as our tradeable universe each day.
    longs = combined_rank.percentile_between(80, 90)
    shorts = combined_rank.percentile_between(10, 20)
    
    pipe.set_screen(longs | shorts)
    
    pipe.add(longs, 'longs')
    pipe.add(shorts, 'shorts')
    return pipe


def initialize(context):
    context.long_leverage = 1.0
    context.short_leverage = -1.0
    context.spy = sid(8554)
    
    attach_pipeline(make_pipeline(), 'ranking_example')
    
    # Used to avoid purchasing any leveraged ETFs 
    context.dont_buys = security_lists.leveraged_etf_list
     
    # Schedule my rebalance function
    schedule_function(func=rebalance, 
                      date_rule=date_rules.month_start(days_offset=0), 
                      time_rule=time_rules.market_open(hours=0,minutes=30), 
                      half_days=True)
    
    # Schedule a function to plot leverage and position count
    schedule_function(func=record_vars, 
                      date_rule=date_rules.every_day(), 
                      time_rule=time_rules.market_close(), 
                      half_days=True)

def before_trading_start(context, data):
    # Call pipeline_output to get the output of our pipeline. This is a
    # dataframe indexed by the SIDs of all securities that pass our screen,
    # with one column per factor we added.
    output = pipeline_output('ranking_example')
    ranks = output['factor']
    
    long_ranks = ranks[output['longs']].rank()
    short_ranks = ranks[output['shorts']].rank()

    context.long_weights = (long_ranks / long_ranks.sum())
    log.info("Long Weights:")
    log.info(context.long_weights)
    
    context.short_weights = (short_ranks / short_ranks.sum())
    log.info("Short Weights:")
    log.info(context.short_weights)
    
    context.active_portfolio = context.long_weights.index.union(context.short_weights.index)


def record_vars(context, data):  
    
    # Record and plot the leverage, number of positions, and exposure of our portfolio over time. 
    record(num_positions=len(context.portfolio.positions),
           exposure=context.account.net_leverage, 
           leverage=context.account.leverage)
    

# This function is scheduled to run at the start of each month.
def rebalance(context, data):
    """
    Allocate our long/short portfolio based on the weights supplied by
    context.long_weights and context.short_weights.
    """
    # Order our longs.
    log.info("ordering longs")
    for long_stock, long_weight in context.long_weights.iteritems():
        if data.can_trade(long_stock):
            if long_stock in context.dont_buys:
                continue
            order_target_percent(long_stock, context.long_leverage * long_weight)
    
    # Order our shorts.
    log.info("ordering shorts")
    for short_stock, short_weight in context.short_weights.iteritems():
        if data.can_trade(short_stock):
            if short_stock in context.dont_buys:
                continue
            order_target_percent(short_stock, context.short_leverage * short_weight)
    
    # Sell any positions in assets that are no longer in our target portfolio.
    for security in context.portfolio.positions:
        if data.can_trade(security):  # Work around inability to sell de-listed stocks.
            if security not in context.active_portfolio:
                order_target_percent(security, 0)
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

8 responses

@Seong - cool algo! Ran into an error tho when backtesting...seems it has something to do with the pipeline? Do I not have the current StockTwits data downloaded?

'Something went wrong. Sorry for the inconvenience. Try using the built-in debugger to analyze your code. If you would like help, send us an email. ConnectionError: [Errno None] None: None
There was a runtime error on line 79.'

Hi Daniel,

We've tried testing on a few accounts on our end and don't seem to be able to replicate the error. Were there any variables you changed while attempting to test?

We were also experiencing a few blips which may have resolved by now.

The first version didn't work, so I tried again and it now works. Thanks. Question - have you considered limiting the database of stocks to those that are much more liquid? Some of the trades made on this example are very illiquid and, if using retail platforms such as Robinhood, could have serious slippage and liquidity concerns if trying to unload shares.

Hi Daniel,

Thanks for the suggestion, and you have a very good point. This algorithm was created to be a simple example for the PsychSignal dataset.

As you suggested, it would look different once adjusted for liquidity concerns.

Seong
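
For readers who want to act on Daniel's point, a minimal dollar-volume liquidity screen could be sketched in pandas (the prices and volumes below are made up; inside Pipeline the built-in `AverageDollarVolume` factor plays the same role as a mask):

```python
import pandas as pd

# Hypothetical daily close prices and share volumes for four tickers
prices  = pd.DataFrame({"A": [10.0, 10.5], "B": [2.0, 2.1],
                        "C": [50.0, 49.0], "D": [1.0, 1.1]})
volumes = pd.DataFrame({"A": [1e6, 1.2e6], "B": [5e3, 4e3],
                        "C": [8e5, 9e5],   "D": [2e3, 1e3]})

# Average dollar volume over the window; keep names above a threshold
adv = (prices * volumes).mean()
liquid = adv[adv > 1e5].index.tolist()
```

In the algorithm itself the same effect comes from masking the factors with something like `AverageDollarVolume(window_length=21) > threshold` in addition to `Q500US()`.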

Here's a tearsheet of the performance.

Some thoughts:

  • Beta is a little bit low (hoping to be between -.3 and .3)
  • Rolling Sharpe decreases as time goes on
  • Turnover is as expected, every month we're refreshing our portfolio
  • Drawdown is still under .20 at .15
  • Volatility could be better

I'll be updating this with a few liquidity stock improvements as well as running this factor through Andrew's factor tearsheet (https://www.quantopian.com/posts/factor-tear-sheet)


Great! Anxious to see the outcome of the updates!

Great share! Any idea how this performs from 2015 to the present?

Hello Seong - Thanks for sharing the algo.
I also ran into the same error as Daniel when running the backtest from 2014-01-04 to 2015-12-31 with $10,000 initial capital (minute data): Screenshot

Line 79: results = pipeline_output('factors').dropna()