Social media message volume as a proxy for stock volatility

Hey all,

This is something I've been working on and wanted your feedback on. Social media data is still pretty new in finance, and I think we're only beginning to discover its uses. If you follow my other posts, I've tried a few fun projects using PsychSignal's trader mood signals as buy/sell signals for securities. However, after much reading and discussion with folks much smarter than I am, I've been playing around with the idea of using social media as a proxy for stock price volatility, within the context of the much-documented volatility effect.

As a quick summary, the volatility effect is the finding that securities with low volatility tend to outperform securities with high volatility on a risk-adjusted basis. Most of the studies on this effect ended in the mid-2000s, so the research I'm doing now can be considered "out-of-sample".
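To make the sorting variable concrete, here's a minimal sketch, in plain pandas with randomly generated returns (not Quantopian code, and none of these numbers are real), of ranking stocks by trailing realized volatility and splitting them into low- and high-volatility buckets:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Fake daily returns for 10 tickers over one year of trading days.
returns = pd.DataFrame(rng.normal(0, 0.02, size=(252, 10)),
                       columns=[f"STK{i}" for i in range(10)])

# Annualized trailing volatility: the usual sorting variable.
vol = returns.std() * np.sqrt(252)

# Split the universe into low- and high-volatility halves by rank.
low_vol = vol.rank() <= 5
print("low-vol bucket:", vol[low_vol].index.tolist())
```

The volatility-effect literature then compares the risk-adjusted returns of the low-vol bucket against the high-vol one; the idea in this post is to swap `vol` for a social-media-based proxy.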

So here's what I've done so far:

  • I've begun testing two factors in Andrew Campbell's Factor Tearsheet: stock price volatility and StockTwits message volume. From the initial tearsheet, it looks like social media message volume has a higher information coefficient and delivers more consistent returns across the factor quantiles. Your thoughts on expanding this notebook are appreciated.
  • Taking the observations from above, I plugged the StockTwits message volume factor into James Christopher's long-short multi-factor pipeline algorithm as a quick validation of the tearsheet. I found that the factor alone did not prove useful as an alpha factor.
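For anyone who hasn't dug into the tearsheet, the information coefficient it reports is essentially the Spearman rank correlation between a factor's values and subsequent returns. A minimal sketch with made-up numbers (nothing here comes from the actual data):

```python
import pandas as pd

def information_coefficient(factor_values, forward_returns):
    """Spearman rank IC: correlate the ranks of the two series."""
    f = pd.Series(factor_values).rank()
    r = pd.Series(forward_returns).rank()
    return f.corr(r)

# Illustrative values only: five assets' factor scores and next-period returns.
factor = [0.9, 0.5, 0.1, 0.7, 0.3]
fwd_returns = [0.04, 0.01, -0.02, 0.03, 0.00]
print(information_coefficient(factor, fwd_returns))  # prints 1.0 (ranks line up exactly)
```

An IC near zero means the factor's ordering says nothing about subsequent returns; a consistently positive IC across quantiles is what the tearsheet is checking for.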

Critique, thoughts, and feedback are appreciated.



2 responses

Here's the algorithm that's discussed above

from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage, AverageDollarVolume, Latest, RSI
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar as mstar
from quantopian.pipeline.filters.morningstar import IsPrimaryShare
from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.data import morningstar

import numpy as np
import pandas as pd

class Value(CustomFactor):
    # The three compute arguments (book_value, sales, fcf) imply three
    # valuation-ratio inputs; sales_yield and fcf_yield are reconstructed here.
    inputs = [morningstar.valuation_ratios.book_value_yield,
              morningstar.valuation_ratios.sales_yield,
              morningstar.valuation_ratios.fcf_yield]
    window_length = 1
    def compute(self, today, assets, out, book_value, sales, fcf):
        value_table = pd.DataFrame(index=assets)
        value_table["book_value"] = book_value[-1]
        value_table["sales"] = sales[-1]
        value_table["fcf"] = fcf[-1]
        out[:] = value_table.rank().mean(axis=1)

class Momentum(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 252
    def compute(self, today, assets, out, close):
        # Price change over the window, skipping the most recent month
        # (20 trading days) to avoid short-term reversal.
        out[:] = close[-20] / close[0]

class MessageVolume(CustomFactor):
    inputs = [stocktwits.total_scanned_messages]
    window_length = 21
    def compute(self, today, assets, out, msgs):
        # Negate the total so that securities with LOW message volume rank highest.
        out[:] = -np.nansum(msgs, axis=0)

def make_pipeline():
    """
    Create and return our pipeline.

    We break this piece of logic out into its own function to make it easier to
    test and modify in isolation. In particular, this function can be
    copy/pasted into research and run by itself.
    """
    pipe = Pipeline()
    initial_screen = filter_universe()

    factors = {
        "Message": MessageVolume(mask=initial_screen),
        "Momentum": Momentum(mask=initial_screen),
        "Value": Value(mask=initial_screen),
    }

    # Build a filter that is True only where every factor has a finite value.
    clean_factors = None
    for name, factor in factors.items():
        if clean_factors is None:
            clean_factors = factor.isfinite()
        else:
            clean_factors = clean_factors & factor.isfinite()
    # Sum the per-factor ranks into a single combined ranking.
    combined_rank = None
    for name, factor in factors.items():
        if combined_rank is None:
            combined_rank = factor.rank(mask=clean_factors)
        else:
            combined_rank += factor.rank(mask=clean_factors)
    pipe.add(combined_rank, 'factor')

    # Build Filters representing the 80th-90th and 10th-20th percentiles of our
    # combined ranking system. We'll use these as our tradeable universe each day.
    longs = combined_rank.percentile_between(80, 90)
    shorts = combined_rank.percentile_between(10, 20)
    pipe.set_screen(longs | shorts)
    pipe.add(longs, 'longs')
    pipe.add(shorts, 'shorts')
    return pipe

def initialize(context):
    context.long_leverage = 1.0
    context.short_leverage = -1.0
    context.spy = sid(8554)
    attach_pipeline(make_pipeline(), 'ranking_example')
    # Used to avoid purchasing any leveraged ETFs 
    context.dont_buys = security_lists.leveraged_etf_list
    # Schedule my rebalance function to run at the start of each month.
    schedule_function(rebalance,
                      date_rule=date_rules.month_start(days_offset=0),
                      time_rule=time_rules.market_open())
    # Schedule a function to plot leverage and position count daily at the close.
    schedule_function(record_vars,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close())

def before_trading_start(context, data):
    # Call pipeline_output to get the output of our pipeline. This is a
    # dataframe indexed by the SIDs of all securities that pass my screen,
    # with a column for each factor added to the pipeline.
    output = pipeline_output('ranking_example')
    ranks = output['factor']
    long_ranks = ranks[output['longs']].rank()
    short_ranks = ranks[output['shorts']].rank()

    # Weight each name in proportion to its rank within its side of the book.
    context.long_weights = long_ranks / long_ranks.sum()
    log.info("Long Weights:")
    log.info(context.long_weights)
    context.short_weights = short_ranks / short_ranks.sum()
    log.info("Short Weights:")
    log.info(context.short_weights)
    context.active_portfolio = context.long_weights.index.union(context.short_weights.index)

def record_vars(context, data):
    # Record and plot the leverage and number of positions of our portfolio over time.
    record(leverage=context.account.leverage,
           num_positions=len(context.portfolio.positions))

# This function is scheduled to run at the start of each month.
def rebalance(context, data):
    """
    Allocate our long/short portfolio based on the weights supplied by
    context.long_weights and context.short_weights.
    """
    # Order our longs.
    log.info("ordering longs")
    for long_stock, long_weight in context.long_weights.iteritems():
        if data.can_trade(long_stock):
            # Skip anything on the leveraged-ETF do-not-buy list.
            if long_stock in context.dont_buys:
                continue
            order_target_percent(long_stock, context.long_leverage * long_weight)

    # Order our shorts.
    log.info("ordering shorts")
    for short_stock, short_weight in context.short_weights.iteritems():
        if data.can_trade(short_stock):
            if short_stock in context.dont_buys:
                continue
            order_target_percent(short_stock, context.short_leverage * short_weight)
    # Sell any positions in assets that are no longer in our target portfolio.
    for security in context.portfolio.positions:
        if data.can_trade(security):  # Work around inability to sell de-listed stocks.
            if security not in context.active_portfolio:
                order_target_percent(security, 0)
def filter_universe():
    """
    9 filters:
        1. common stock
        2 & 3. not limited partnership - name and database check
        4. database has fundamental data
        5. not over the counter
        6. not when issued
        7. not depository receipts
        8. primary share
        9. high dollar volume
    Check Scott's notebook for more details.
    """
    common_stock = mstar.share_class_reference.security_type.latest.eq('ST00000001')
    not_lp_name = ~mstar.company_reference.standard_name.latest.matches('.* L[\\. ]?P\.?$')
    not_lp_balance_sheet = mstar.balance_sheet.limited_partnership.latest.isnull()
    have_data = mstar.valuation.market_cap.latest.notnull()
    not_otc = ~mstar.share_class_reference.exchange_id.latest.startswith('OTC')
    not_wi = ~mstar.share_class_reference.symbol.latest.endswith('.WI')
    not_depository = ~mstar.share_class_reference.is_depositary_receipt.latest
    primary_share = IsPrimaryShare()
    # Combine the above filters.
    tradable_filter = (common_stock & not_lp_name & not_lp_balance_sheet &
                       have_data & not_otc & not_wi & not_depository & primary_share)
    # Rank by average dollar volume within the tradable filter and keep the
    # top 500 names. (The 21-day lookback is an assumption; the original
    # window length was lost.)
    high_volume_tradable = AverageDollarVolume(
        window_length=21,
        mask=tradable_filter
    ).rank(ascending=False) < 500

    mask = high_volume_tradable
    return mask

I'm wondering what these lines do.

def src_std_error(rho, n):
    return np.sqrt((1 - rho**2) / (n - 2))

err = ic.apply(lambda x: src_std_error(x, obs_count))
err = err.reset_index().groupby(['sector_code']).agg(
    lambda x: np.sqrt(np.sum(np.power(x, 2)) / len(x)))