Buy at month end and sell at month begin

Hi all,

I implemented a strategy based on the paper "Month-End Liquidity Needs and the Predictability of Stock Returns". It simply goes long SPY six trading days before the end of the month and holds the position until two days after the month begins. Despite its simplicity, the strategy achieves about a 200% return from 2002 to now :)
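For anyone who wants to sanity-check the window outside Quantopian, here is a rough pandas sketch (not the backtest itself; the random-walk series below is just a stand-in for real SPY closes) that computes the return of each holding window from six business days before month end to two business days into the next month:

```python
import numpy as np
import pandas as pd
from pandas.tseries.offsets import BDay

# Stand-in daily close series; replace with real SPY closes to test the effect.
idx = pd.bdate_range("2002-01-01", "2015-12-31")
rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, len(idx)))),
                  index=idx)

def turn_of_month_returns(close, enter_offset=6, exit_offset=2):
    """Return of buying enter_offset business days before each month end
    and selling exit_offset business days into the next month."""
    # Last trading day of each month present in the series.
    month_ends = close.groupby([close.index.year, close.index.month]).apply(
        lambda s: s.index[-1])
    rets = []
    for t in month_ends:
        entry, exit_ = t - enter_offset * BDay(), t + exit_offset * BDay()
        if entry in close.index and exit_ in close.index:
            rets.append(close[exit_] / close[entry] - 1)
    return pd.Series(rets)

rets = turn_of_month_returns(close)
print(len(rets), round(rets.mean(), 4))
```

With real data you would compare `rets.mean()` against the average return of same-length windows elsewhere in the month.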

Abstract of the paper:

This paper uncovers strong return reversals in stock returns around the last monthly settlement day, T-3, which guarantees liquidity for month-end cash distributions. We show that these return reversals are stronger in countries where the mutual fund ownership is large, and that in the US the return reversals have become stronger over time as the mutual fund ownership of stocks has increased. Finally, in the cross-section of stocks, the reversals around turn of the month are stronger for stocks more commonly held by mutual funds, for liquid stocks, and for more volatile stocks (controlling for liquidity).
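As a side note, the paper's T-3 (the settlement day three business days before the month's last trading day T) is easy to locate with pandas business-day arithmetic. A minimal sketch, ignoring exchange holidays:

```python
import pandas as pd
from pandas.tseries.offsets import BMonthEnd, BDay

def t_minus_3(any_day):
    """T-3: three business days before the last business day of the month."""
    t = pd.Timestamp(any_day) + BMonthEnd(0)  # roll forward to month end T
    return t - 3 * BDay()

# June 2015: T is Tue 2015-06-30, so T-3 is Thu 2015-06-25.
print(t_minus_3("2015-06-15").date())
```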

Please feel free to play with the strategy :)

# Turn-of-the-month strategy: go long SPY six trading days before
# month end and flatten the position two days after the next month starts.

# To run an algorithm in Quantopian, you need two functions:
# initialize and handle_data.
def initialize(context) :
  # initialize is called once at the start of the algorithm.
  # It sets up the securities and any state used later.
  context.security = symbol('SPY')
  context.entered = False
  context.enter_cash = 0


  schedule_function(func=enter,
                    date_rule=date_rules.month_end(days_offset=6),
                    time_rule=time_rules.market_open())

  schedule_function(func=exit,
                    date_rule=date_rules.month_start(days_offset=2),
                    time_rule=time_rules.market_open())
    
# The handle_data function is where the real work is done.  
# This function is run either every minute 
# (in live trading and minute backtesting mode) 
# or every day (in daily backtesting mode).
def handle_data(context, data):
  #record(leverage=context.account.leverage)
  return

def enter(context, data) :
  if not context.entered :
    order_target_percent(symbol('SPY'), 1.0)
    context.entered = True
    log.info("Entered")
    context.enter_cash = context.portfolio.cash  # flat before this entry, so cash equals portfolio value
    

def exit(context, data) :
  if context.entered :
    order_target(symbol('SPY'), 0.0)
    context.entered = False
    log.info("Exit")
    record(pnl=context.portfolio.portfolio_value / context.enter_cash - 1)

17 responses

Looks like we can further boost the performance of the strategy if we park the money in a "risk-free" asset for the rest of the time. I use TLT (the 20+ Year Treasury Bond ETF) here.

# Turn-of-the-month strategy, variant 2: long SPY around the turn of
# the month and park the money in TLT the rest of the time.

# To run an algorithm in Quantopian, you need two functions:
# initialize and handle_data.
def initialize(context) :
  # initialize is called once at the start of the algorithm.
  # It sets up the securities and any state used later.
  context.security = symbol('SPY')
  context.risk_free = symbol('TLT')
  context.entered = False
  context.enter_cash = 0


  schedule_function(func=enter,
                    date_rule=date_rules.month_end(days_offset=6),
                    time_rule=time_rules.market_open())

  schedule_function(func=exit,
                    date_rule=date_rules.month_start(days_offset=2),
                    time_rule=time_rules.market_open())
    
# The handle_data function is where the real work is done.  
# This function is run either every minute 
# (in live trading and minute backtesting mode) 
# or every day (in daily backtesting mode).
def handle_data(context, data):
  #record(leverage=context.account.leverage)
  return

def enter(context, data) :
  if not context.entered :
    order_target(context.risk_free, 0)
    order_target_percent(symbol('SPY'), 1.0)
    context.entered = True
    log.info("Entered")
    context.enter_cash = context.portfolio.portfolio_value
    

def exit(context, data) :
  if context.entered :
    order_target(symbol('SPY'), 0.0)
    if context.risk_free in data :
      order_target_percent(context.risk_free, 1.0)
    context.entered = False
    log.info("Exit")
    record(pnl=context.portfolio.portfolio_value / context.enter_cash - 1)


Fooling around with the parameters some more.

# Turn-of-the-month strategy, variant 3: scale into and out of SPY over
# multiple days around the turn of the month, parking in TLT otherwise.

# To run an algorithm in Quantopian, you need two functions:
# initialize and handle_data.
def initialize(context) :
  # initialize is called once at the start of the algorithm.
  # It sets up the securities and any state used later.
  context.security = symbol('SPY')
  context.risk_free = symbol('TLT')
  context.entered = False
  context.enter_cash = 0

  enter_span = 2
  for i in range(1, 1 + enter_span) :
    schedule_function(func=(lambda j : lambda context, data : enter(context, data, j, enter_span))(i),
                      date_rule=date_rules.month_end(days_offset= 6 + 1 - i),
                      time_rule=time_rules.market_open(hours=1))


  exit_span = 2
  for i in range(1, 1 + exit_span) :
    offset = i
    date = date_rules.month_start(days_offset=offset) if offset >= 0 else date_rules.month_end(-offset - 1)
    schedule_function(func=(lambda j : lambda context, data : exit(context, data, j, exit_span))(i),
                      date_rule=date,
                      time_rule=time_rules.market_open(hours=1))
    
# The handle_data function is where the real work is done.  
# This function is run either every minute 
# (in live trading and minute backtesting mode) 
# or every day (in daily backtesting mode).
def handle_data(context, data):
  #record(leverage=context.account.leverage)
  return

def enter(context, data, i, span) :
  price = history(2, '1d', 'close_price').iloc[0]
  context.security_enter_price = price[context.security]
    
  step = 1.0 / span

  #Scale down the risk-free asset if it is in the portfolio
  if context.risk_free in context.portfolio.positions :
    order_target_percent(context.risk_free, 1 - step * i)

  order_target_percent(context.security, step * i)
  log.info("enter step %d of %d" % (i, span))

  # Record the baseline on the first entry step (the order has not
  # filled yet, so there is no position at this point).
  if not (context.security in context.portfolio.positions):
    context.entered = True
    context.enter_cash = context.portfolio.portfolio_value


def exit(context, data, i, span) :
  step = 1.0 / span

  if context.security in context.portfolio.positions:
    order_target_percent(context.security, 1 - step * i)

    if context.risk_free in data :
      order_target_percent(context.risk_free, step * i)
    context.entered = False
    log.info("exit step %d of %d" % (i, span))
    record(pnl=context.portfolio.portfolio_value / context.enter_cash - 1)

Accidentally sent the post before finishing it ...

Fooled around with the parameters some more and slightly adjusted the dates; the strategy now enters and exits the position over multiple days. The total return went further up, to 3.45 (i.e. 345%).
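The double lambda in the scheduling loop, `(lambda j : lambda context, data : enter(context, data, j, enter_span))(i)`, is there to work around Python's late-binding closures: a plain `lambda context, data: enter(context, data, i, enter_span)` would see the final value of `i` in every scheduled call. A standalone illustration of the difference:

```python
# Late binding: every closure reads the loop variable at call time,
# so all three see its final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])   # [2, 2, 2]

# Immediately applying an outer lambda (as in the algorithm) binds the
# current value of i into j for each iteration.
bound = [(lambda j: lambda: j)(i) for i in range(3)]
print([f() for f in bound])  # [0, 1, 2]
```

An equivalent fix is a default argument: `lambda context, data, j=i: enter(context, data, j, enter_span)`.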

I guess we can add some risk-control logic to further boost the performance. Maybe something like historical volatility, GARCH, or a Bayesian approach.
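One cheap version of that risk control is to scale the target exposure by trailing realized volatility; the 20-day lookback and 15% volatility target below are arbitrary assumptions on my part, just to sketch the idea:

```python
import numpy as np
import pandas as pd

def vol_scaled_weight(close, lookback=20, target_vol=0.15, cap=1.0):
    """Target weight = target_vol / annualized trailing volatility, capped."""
    logret = np.log(close).diff().dropna()
    realized = logret.tail(lookback).std() * np.sqrt(252)  # annualized vol
    if not np.isfinite(realized) or realized == 0:
        return cap
    return min(cap, target_vol / realized)

# A calm, slowly drifting series has tiny realized vol, so the weight
# hits the cap; a noisy series gets scaled down.
idx = pd.bdate_range("2015-01-01", periods=60)
calm = pd.Series(np.linspace(100.0, 101.0, 60), index=idx)
rng = np.random.default_rng(1)
noisy = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.03, 60))), index=idx)
print(vol_scaled_weight(calm), vol_scaled_weight(noisy))
```

Inside the algorithm the returned weight would replace the hard-coded `1.0` in `order_target_percent`.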

This version trades large-cap stocks instead of SPY.

import numpy as np
from pandas.tseries.offsets import BMonthBegin, BDay
from scipy.stats import mstats
from sqlalchemy import or_

def zscore(a) :
  return (a - a.mean()) / a.std()

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
  #Choose the liquid stocks
  #set_universe(universe.DollarVolumeUniverse(floor_percentile=98.0, ceiling_percentile=100.0))

  # States: start, construct, enter, exit, idle.
  context.state = 'start'

  # Invest in bonds while we are idle
  context.risk_free = symbol('TLT')
  context.market = symbol('SPY')
    
  # Construct the portfolio at T-8, where T is the last trading day of the month
  context.candidates = [ ]
  context.num_candidates = 40

  schedule_function(func=transist_to_construction,
                    date_rule=date_rules.month_end(days_offset=7),
                    time_rule=time_rules.market_close())

  # enter position at T - 5
  schedule_function(func=enter_position,
                    date_rule=date_rules.month_end(days_offset=5),
                    time_rule=time_rules.market_open())

  # exit position at T + 1
  schedule_function(func=exit_position,
                    date_rule=date_rules.month_start(days_offset=1),
                    time_rule=time_rules.market_open())
  

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
  record(leverage=context.account.leverage)
  if context.state == 'start' :
    if context.risk_free in data: order_target_percent(context.risk_free, 1.0)
    context.state = 'idle'
    return

def before_trading_start(context, data) :
  if context.state == 'idle' :
    update_universe([context.risk_free, context.market])
    return

  fd = get_fundamentals(query(fundamentals.valuation.market_cap)
                        .filter(fundamentals.valuation.market_cap != None)
                        .filter(fundamentals.valuation.shares_outstanding != None)
                        .filter(fundamentals.share_class_reference.is_primary_share)                      
                        .filter(fundamentals.company_reference.country_id == "USA")
                        .filter(or_(fundamentals.company_reference.primary_exchange_id == "NAS",
                                    fundamentals.company_reference.primary_exchange_id == "NYS"))
                        .order_by(fundamentals.valuation.market_cap.desc())
                        .limit(context.num_candidates))
  context.market_caps = fd.iloc[0]
  update_universe(fd.columns.values)

  if context.state == 'construct' :
    context.state = 'enter'
    log.info('construct -> enter')
  elif context.state == 'enter' :
    construct_profolio(context, data)

def transist_to_construction(context, data) :
  context.state = 'construct'

def construct_profolio(context, data) :
  context.candidates = context.market_caps.index.values

def enter_position(context, data) :
  if context.state != 'enter' : return

  num_candidates = len(context.candidates)
  for sid in context.candidates :
    if (sid in data) :
      assert sid != context.risk_free
      order_target_percent(sid, 1.0 / num_candidates)

  context.state = 'exit'
  log.info('enter -> exit')
  order_target(context.risk_free, 0)  


def exit_position(context, data) :
  if context.state != 'exit' : return

  for sid in context.candidates :
    if sid in context.portfolio.positions :
      assert sid != context.risk_free
      order_target(sid, 0)

  context.candidates = []
  log.info('exit -> idle')
  context.state = 'idle'

  if context.risk_free in data:
    order_target_percent(context.risk_free, 1.0)

An enhanced version that tries to identify the mean-reversion process in the stocks.

import numpy as np
import pandas as pd
from pandas.tseries.offsets import BMonthBegin, BDay
from scipy.stats import mstats
from sqlalchemy import or_

def zscore(a) :
  return (a - a.mean()) / a.std()

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
  #Choose the liquid stocks
  #set_universe(universe.DollarVolumeUniverse(floor_percentile=98.0, ceiling_percentile=100.0))

  # States: start, construct, enter, exit, idle.
  context.state = 'start'

  # Invest in bonds while we are idle
  context.risk_free = symbol('TLT')
  context.market = symbol('SPY')
    
  # Construct the portfolio at T-8, where T is the last trading day of the month
  context.candidates = [ ]
  context.num_candidates = 50

  schedule_function(func=transist_to_construction,
                    date_rule=date_rules.month_end(days_offset=6),
                    time_rule=time_rules.market_close())

  # enter position at T - 5
  schedule_function(func=enter_position,
                    date_rule=date_rules.month_end(days_offset=5),
                    time_rule=time_rules.market_open())

  # exit position at T + 1
  schedule_function(func=exit_position,
                    date_rule=date_rules.month_start(days_offset=1),
                    time_rule=time_rules.market_open())
  

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
  record(leverage=context.account.leverage)
  if context.state == 'start' :
    if context.risk_free in data: order_target_percent(context.risk_free, 1.0)
    context.state = 'idle'
    return

def before_trading_start(context, data) :
  if context.state == 'idle' :
    update_universe([context.risk_free, context.market])
    return

  fd = get_fundamentals(query(fundamentals.valuation.market_cap,
                              fundamentals.valuation.shares_outstanding)
                        .filter(fundamentals.valuation.market_cap != None)
                        .filter(fundamentals.valuation.shares_outstanding != None)
                        .filter(fundamentals.share_class_reference.is_primary_share)                      
                        .filter(fundamentals.company_reference.country_id == "USA")
                        .filter(or_(fundamentals.company_reference.primary_exchange_id == "NAS",
                                    fundamentals.company_reference.primary_exchange_id == "NYS"))
                        .order_by(fundamentals.valuation.market_cap.desc())
                        .limit(400))
  # Market cap and shares outstanding
  context.market_caps = fd.loc['market_cap']
  context.shares_outstandings = fd.loc['shares_outstanding'] 
  update_universe(fd.columns.values)

  if context.state == 'construct' :
    context.state = 'enter'
    log.info('construct -> enter')
  elif context.state == 'enter' :
    pass

def transist_to_construction(context, data) :
  context.state = 'construct'

def construct_profolio(context, data) :
  today = get_datetime().tz_convert('US/Eastern')
  month_start = today + BMonthBegin(n=-1) #+ BDay(n=5)

  prices = history(pd.bdate_range(month_start, today).size, '1d', 'price')
  prices = prices.loc[:, context.market_caps.index.values].dropna(axis=1)
  prices = prices.head(-1) # last day data is invalid

  # Calculate the liquidation impact
  liquidate_impact = 1 - prices.iloc[-1] / prices.iloc[-3]
  # Require a drop in the stock price
  liquidate_impact = liquidate_impact[liquidate_impact > 0].dropna()

  volumes = history(4, '1d', 'volume').loc[:,liquidate_impact.index.values].dropna(axis=1).head(-1)
  turnovers = (volumes.sum() / context.shares_outstandings).dropna()

  liquidate_score = zscore((liquidate_impact / turnovers).dropna())

  prices=prices.loc[:,liquidate_impact.index.values].dropna(axis=1)
  #Volatility GARCH?
  #logrets = prices.apply(np.log).diff().tail(-1)
  #volatility_scores = zscore(logrets.std()).rank()

  #Calculate the growth
  growths = (prices.iloc[-3] / prices.iloc[0]) - 1 
  growth_score = zscore(growths)
    
  scores = zscore((growth_score + liquidate_score).dropna())
  scores.sort(ascending=False)

  context.candidates = scores.head(context.num_candidates).index.values

def enter_position(context, data) :
  if context.state != 'enter' : return

  # Construct the portfolio
  construct_profolio(context, data)

  num_candidates = len(context.candidates)
  for sid in context.candidates :
    if (sid in data) :
      assert sid != context.risk_free
      order_target_percent(sid, 1.0 / num_candidates)

  context.state = 'exit'
  log.info('enter -> exit')
  order_target(context.risk_free, 0)  


def exit_position(context, data) :
  if context.state != 'exit' : return

  for sid in context.candidates :
    if sid in context.portfolio.positions :
      assert sid != context.risk_free
      order_target(sid, 0)

  context.candidates = []
  log.info('exit -> idle')
  context.state = 'idle'

  if context.risk_free in data:
    order_target_percent(context.risk_free, 1.0)

Added a naive risk control that assumes the stock prices will revert to their T-8 values. The next step is to add some Bayesian inference. This strategy looks like a very good example for me to learn all these things on.

import numpy as np
import pandas as pd
from pandas.tseries.offsets import BMonthBegin, BDay
from scipy.stats import mstats
from sqlalchemy import or_

def zscore(a) :
  return (a - a.mean()) / a.std()

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
  #Choose the liquid stocks
  #set_universe(universe.DollarVolumeUniverse(floor_percentile=98.0, ceiling_percentile=100.0))
  context.universe_size = 20
  context.num_candidates = 5

  # States: start, construct, enter, long, idle.
  context.state = 'start'

  # Invest in bonds while we are idle
  context.risk_free = symbol('TLT')
  context.market = symbol('SPY')
    
  # Construct the portfolio at T-8, where T is the last trading day of the month
  context.candidates = [ ]

  schedule_function(func=transist_to_construction,
                    date_rule=date_rules.month_end(days_offset=6),
                    time_rule=time_rules.market_close())

  # enter position at T - 5
  schedule_function(func=enter_position,
                    date_rule=date_rules.month_end(days_offset=5),
                    time_rule=time_rules.market_open())

  # risk management from T-4 to T
  context.risk_span = 5
  for i in range(1, 1 + context.risk_span) :
    offset = 4 + 1 - i
    date = date_rules.month_end(offset) if offset > 0 else date_rules.month_start(-offset)
    schedule_function(func=(lambda j : lambda context, data : risk_control(context, data, j))(i),
                      date_rule=date,
                      time_rule=time_rules.market_open())


  # exit position at T + 1
  schedule_function(func=exit_position,
                    date_rule=date_rules.month_start(days_offset=1),
                    time_rule=time_rules.market_open())
  

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
  record(leverage=context.account.leverage)
  if context.state == 'start' :
    if context.risk_free in data: order_target_percent(context.risk_free, 1.0)
    context.state = 'idle'
    return

def before_trading_start(context, data) :
  if context.state == 'idle' :
    update_universe([context.risk_free, context.market])
    return

  fd = get_fundamentals(query(fundamentals.valuation.market_cap,
                              fundamentals.valuation.shares_outstanding)
                        .filter(fundamentals.valuation.market_cap != None)
                        .filter(fundamentals.valuation.shares_outstanding != None)
                        .filter(fundamentals.share_class_reference.is_primary_share)                      
                        .filter(fundamentals.company_reference.country_id == "USA")
                        .filter(or_(fundamentals.company_reference.primary_exchange_id == "NAS",
                                    fundamentals.company_reference.primary_exchange_id == "NYS"))
                        .order_by(fundamentals.valuation.market_cap.desc())
                        .limit(context.universe_size))
  # Market cap and shares outstanding
  context.market_caps = fd.loc['market_cap']
  context.shares_outstandings = fd.loc['shares_outstanding'] 
  update_universe(fd.columns.values)

  if context.state == 'construct' :
    context.state = 'enter'
    log.info('construct -> enter')
  elif context.state == 'enter' :
    pass

def transist_to_construction(context, data) :
  context.state = 'construct'

def construct_profolio(context, data) :
  today = get_datetime().tz_convert('US/Eastern')
  month_start = today + BMonthBegin(n=-1) #+ BDay(n=5)

  prices = history(pd.bdate_range(month_start, today).size, '1d', 'price')
  prices = prices.loc[:, context.market_caps.index.values].dropna(axis=1)
  # Remember the entry prices
  context.enter_prices = prices.iloc[-1]
  prices = prices.head(-1) # the last row is today's open price

  # Calculate the liquidation impact
  liquidate_impact = 1 - prices.iloc[-1] / prices.iloc[-3]
  # Remember the target prices
  context.target_prices = prices.iloc[-3]
  context.recover_step = (context.target_prices - context.enter_prices) / context.risk_span

  # Require a drop in the stock price
  liquidate_impact = liquidate_impact[liquidate_impact > 0].dropna()
  #impact_score = zscore(liquidate_impact)

  volumes = history(4, '1d', 'volume').loc[:,liquidate_impact.index.values].dropna(axis=1).head(-1)
  turnovers = (volumes.sum() / context.shares_outstandings).dropna()
  # We want to identify a large price move on little turnover; that's a sign of low liquidity
  #turnover_score = -zscore(turnovers)

  liquidate_score = zscore((liquidate_impact / turnovers).dropna())

  prices=prices.loc[:,liquidate_impact.index.values].dropna(axis=1)
  #Volatility GARCH?
  #logrets = prices.apply(np.log).diff().tail(-1)
  #volatility_scores = zscore(logrets.std()).rank()

  #Calculate the growth
  growths = (prices.iloc[-3] / prices.iloc[0]) - 1 
  growth_score = zscore(growths)
    
  scores = zscore((growth_score + liquidate_score).dropna())
  scores.sort(ascending=False)

  context.candidates = scores.head(context.num_candidates).index.values
  context.recover_step = context.recover_step.loc[context.candidates].dropna()
  context.enter_prices = context.enter_prices.loc[context.candidates].dropna()

def enter_position(context, data) :
  if context.state != 'enter' : return

  # Construct the portfolio
  construct_profolio(context, data)

  num_candidates = len(context.candidates)
  for sid in context.candidates :
    if (sid in data) :
      assert sid != context.risk_free
      order_target_percent(sid, 1.0 / num_candidates)

  context.state = 'long'
  log.info('enter -> long')
  order_target(context.risk_free, 0)  

def risk_control(context, data, i) :
  if context.state != 'long' : return
  log.info('risk control %d' % i)

  prices = history(i + 1, '1d', 'price')
  # Only interested in the portfolio's prices
  prices = prices.loc[:,context.candidates].dropna(axis=1)
  # Exclude today's open price
  closes_prices = prices.head(-1)
  last_closes = closes_prices.iloc[-1]

  # Calculate the risk
  delta = last_closes - (context.enter_prices + i * context.recover_step)
  delta = delta / context.enter_prices
  for sid, d in delta.iteritems() :
    log.info("Delta %s - %f" % (sid, d))
    if d < -0.05 :
      order_target(sid, 0)

def exit_position(context, data) :
  if context.state != 'long' : return

  for sid in context.candidates :
    if sid in context.portfolio.positions :
      assert sid != context.risk_free
      order_target(sid, 0)

  context.candidates = []
  log.info('long -> idle')
  context.state = 'idle'

  if context.risk_free in data:
    order_target_percent(context.risk_free, 1.0)

It's important to remember when backtesting to use minute mode to nail down your algorithms; this is an artifact of how the backtester operates. With algorithms like this, ones that are event-driven or where timing is extremely crucial, you must be extremely diligent in making sure that orders are executed and filled when they need to be. I'm a big fan of these kinds of strategies, as you are only exposed to risk for the short amount of time around the event! Keep working on the implementation and see what you can get. Attached is a minute-mode backtest from one of the above implementations...

import numpy as np
import pandas as pd
from pandas.tseries.offsets import BMonthBegin, BDay
from scipy.stats import mstats
from sqlalchemy import or_

def zscore(a) :
  return (a - a.mean()) / a.std()

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
  #Choose the liquid stocks
  #set_universe(universe.DollarVolumeUniverse(floor_percentile=98.0, ceiling_percentile=100.0))

  # States: start, construct, enter, exit, idle.
  context.state = 'start'

  # Invest in bonds while we are idle
  context.risk_free = symbol('TLT')
  context.market = symbol('SPY')
    
  # Construct the portfolio at T-8, where T is the last trading day of the month
  context.candidates = [ ]
  context.num_candidates = 50

  schedule_function(func=transist_to_construction,
                    date_rule=date_rules.month_end(days_offset=6),
                    time_rule=time_rules.market_close())

  # enter position at T - 5
  schedule_function(func=enter_position,
                    date_rule=date_rules.month_end(days_offset=5),
                    time_rule=time_rules.market_open())

  # exit position at T + 1
  schedule_function(func=exit_position,
                    date_rule=date_rules.month_start(days_offset=1),
                    time_rule=time_rules.market_open())
  

# Will be called on every trade event for the securities you specify. 
def handle_data(context, data):
  record(leverage=context.account.leverage)
  if context.state == 'start' :
    if context.risk_free in data: order_target_percent(context.risk_free, 1.0)
    context.state = 'idle'
    return

def before_trading_start(context, data) :
  if context.state == 'idle' :
    update_universe([context.risk_free, context.market])
    return

  fd = get_fundamentals(query(fundamentals.valuation.market_cap,
                              fundamentals.valuation.shares_outstanding)
                        .filter(fundamentals.valuation.market_cap != None)
                        .filter(fundamentals.valuation.shares_outstanding != None)
                        .filter(fundamentals.share_class_reference.is_primary_share)                      
                        .filter(fundamentals.company_reference.country_id == "USA")
                        .filter(or_(fundamentals.company_reference.primary_exchange_id == "NAS",
                                    fundamentals.company_reference.primary_exchange_id == "NYS"))
                        .order_by(fundamentals.valuation.market_cap.desc())
                        .limit(400))
  # Market cap and shares outstanding
  context.market_caps = fd.loc['market_cap']
  context.shares_outstandings = fd.loc['shares_outstanding'] 
  update_universe(fd.columns.values)

  if context.state == 'construct' :
    context.state = 'enter'
    log.info('construct -> enter')
  elif context.state == 'enter' :
    pass

def transist_to_construction(context, data) :
  context.state = 'construct'

def construct_profolio(context, data) :
  today = get_datetime().tz_convert('US/Eastern')
  month_start = today + BMonthBegin(n=-1) #+ BDay(n=5)

  prices = history(pd.bdate_range(month_start, today).size, '1d', 'price')
  prices = prices.loc[:, context.market_caps.index.values].dropna(axis=1)
  prices = prices.head(-1) # last day data is invalid

  # Calculate the liquidation impact
  liquidate_impact = 1 - prices.iloc[-1] / prices.iloc[-3]
  # Require a drop in the stock price
  liquidate_impact = liquidate_impact[liquidate_impact > 0].dropna()

  volumes = history(4, '1d', 'volume').loc[:,liquidate_impact.index.values].dropna(axis=1).head(-1)
  turnovers = (volumes.sum() / context.shares_outstandings).dropna()

  liquidate_score = zscore((liquidate_impact / turnovers).dropna())

  prices=prices.loc[:,liquidate_impact.index.values].dropna(axis=1)
  #Volatility GARCH?
  #logrets = prices.apply(np.log).diff().tail(-1)
  #volatility_scores = zscore(logrets.std()).rank()

  #Calculate the growth
  growths = (prices.iloc[-3] / prices.iloc[0]) - 1 
  growth_score = zscore(growths)
    
  scores = zscore((growth_score + liquidate_score).dropna())
  scores.sort(ascending=False)

  context.candidates = scores.head(context.num_candidates).index.values

def enter_position(context, data) :
  if context.state != 'enter' : return

  # Construct portfolio
  construct_profolio(context, data)

  num_candidates = len(context.candidates)
  for sid in context.candidates :
    if (sid in data) :
      assert sid != context.risk_free
      order_target_percent(sid, 1.0 / num_candidates)

  context.state = 'exit'
  log.info('enter -> exit')
  order_target(context.risk_free, 0)  


def exit_position(context, data) :
  if context.state != 'exit' : return

  for sid in context.candidates :
    if sid in context.portfolio.positions :
      assert sid != context.risk_free
      order_target(sid, 0)

  context.candidates = []
  log.info('exit -> idle')
  context.state = 'idle'

  if context.risk_free in data:
    order_target_percent(context.risk_free, 1.0)
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

@James

Right, I expect the return will drop when I switch to minute mode, so I am trying to get a higher overall return first. But the drop here is still much larger than I expected ...

Here is a proof-of-concept version that works on minute data with SPY (without the risk-free asset). The approach is inspired by the paper "Market Making and Mean Reversion", but we are doing "directional" market making here.

# Proof-of-concept: scale into SPY with limit orders over the last few
# days of the month, then scale out again after the turn of the month.

# To run an algorithm in Quantopian, you need two functions: 
# initialize and handle_data.
def initialize(context) :
  # The initialize function sets any data or variables that 
  # you'll use in your algorithm. 
  # For instance, you'll want to define the security 
  # (or securities) you want to backtest.  
  # You'll also want to define any parameters or values 
  # you're going to use later. 
  # It's only called once at the beginning of your algorithm.
    
  # In our example, we trade SPY and use TLT as the risk-free asset.
  context.security = symbol('SPY')
  context.risk_free = symbol('TLT')
  context.entered = False
  context.enter_cash = context.portfolio.starting_cash

  context.enter_prices = None
  context.last_enter_prices = None
  context.last_exit_prices = None

  #set_benchmark(context.risk_free)

  enter_span = 4
  for i in range(1, 1 + enter_span) :
    schedule_function(func=(lambda j : lambda context, data : enter(context, data, j, enter_span))(i),
                      date_rule=date_rules.month_end(days_offset = 6 + 1 - i),
                      time_rule=time_rules.market_open())
    
    schedule_function(func=cancel_orders,
                      date_rule=date_rules.month_end(days_offset = 6 + 1 - i),
                      time_rule=time_rules.market_close())

  exit_span = 4
  for i in range(1, 1 + exit_span) :
    offset = 2 + 1 - i
    date = date_rules.month_end(days_offset=offset) if offset > 0 else date_rules.month_start(days_offset=-offset)
    schedule_function(func=(lambda j : lambda context, data : exit(context, data, j, exit_span))(i),
                      date_rule=date,
                      time_rule=time_rules.market_open())
    schedule_function(func=cancel_orders,
                      date_rule=date,
                      time_rule=time_rules.market_close())
  
  schedule_function(func=force_exit,
                    date_rule=date_rules.month_start(days_offset = 3),
                    time_rule=time_rules.market_open(hours=1))
    
# The handle_data function is where the real work is done.  
# This function is run either every minute 
# (in live trading and minute backtesting mode) 
# or every day (in daily backtesting mode).
def handle_data(context, data):
  #record(leverage=context.account.leverage)
  #record(pnl=0)
  return

def enter(context, data, i, span) :
  if get_environment('data_frequency') != 'minute' : cancel_orders(context, data)

  step = 1.0 / span

  #Liquidate the risk-free asset if it is in the portfolio
  #if context.risk_free in context.portfolio.positions :
  #  order_target_percent(context.risk_free, 1 - step * i)
  if context.enter_prices is None :
    context.enter_prices = data[context.security].price
    context.last_exit_prices = context.enter_prices
    context.last_enter_prices = context.enter_prices
  else :
    context.last_enter_prices  = min(context.last_enter_prices, data[context.security].price)

  order_target_percent(context.security, step * i, style=LimitOrder(context.last_enter_prices))
  log.info("enter %.2f %d" % (step, i))

  if not (context.security in context.portfolio.positions):
    context.entered = True
    context.enter_cash = context.portfolio.portfolio_value

def exit(context, data, i, span) :
  if get_environment('data_frequency') != 'minute' : cancel_orders(context, data)

  step = 1.0 / span

  context.last_exit_prices = max(context.last_exit_prices, data[context.security].price)

  if context.security in context.portfolio.positions:
    order_target_percent(context.security, 1.0 - step * i, style=LimitOrder(context.last_exit_prices))

    #if context.risk_free in data :
    #  order_target_percent(context.risk_free, step * i)
    context.entered = False
    log.info("exit %.2f %d" % (step, i))

def force_exit(context, data) :
  context.enter_prices = None
  record(pnl=context.portfolio.portfolio_value / context.enter_cash - 1)
  order_target(context.security, 0)
  #if context.risk_free in data :
  #  order_target_percent(context.risk_free, 1.0)
  
    
def cancel_orders(context, data) :
  if get_environment('data_frequency') != 'minute' : return

  for sid, ords in get_open_orders().iteritems() :
    for ord in ords :
      cancel_order(ord)

I think I figured out what went wrong in minute mode. In daily mode an order is filled at the next day's close price, so in minute mode we need to schedule the enter/exit functions one day later to replicate the result. And we need to schedule the functions close to the market close.
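This scheduling adjustment can be sketched as a tiny helper (a hypothetical function; the algorithm below inlines the same logic as `T5Offset`):

```python
def entry_offset(target_offset, data_frequency):
    """Return the days_offset to pass to date_rules.month_end().

    In daily backtests an order placed on day T fills at day T+1's close,
    so to hold a position from T-4 the order must go out at T-5 (offset 5).
    In minute mode orders fill the same day, so the target offset is used
    directly.  (Hypothetical helper, not part of the algorithm itself.)
    """
    return target_offset + 1 if data_frequency == 'daily' else target_offset

print(entry_offset(4, 'daily'))   # 5
print(entry_offset(4, 'minute'))  # 4
```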

And I figured out that the turnover of the stocks is a very good indicator for the entry timing of this strategy, and perhaps for the exit timing as well.
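As a rough illustration of the turnover signal, here is a self-contained pandas sketch with made-up tickers and volumes (in the algorithm below, turnover is volume divided by shares outstanding, and a "spike" is the last day's turnover minus the average over the preceding days):

```python
import pandas as pd

# Daily share volume for two hypothetical tickers; AAA spikes on the last day
volumes = pd.DataFrame(
    {'AAA': [1.0e6, 1.1e6, 0.9e6, 3.0e6],
     'BBB': [2.0e6, 2.0e6, 2.1e6, 2.0e6]},
    index=pd.bdate_range('2015-06-25', periods=4))

shares_out = pd.Series({'AAA': 50e6, 'BBB': 200e6})

# Turnover = volume / shares outstanding (aligned on columns)
turnover = volumes / shares_out

# Spike = last day's turnover minus the average of the preceding days
spike = turnover.iloc[-1] - turnover.iloc[:-1].mean()
spikes = spike[spike > 0]  # keep only names with a positive spike
print(spikes)              # only AAA survives
```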

Here is the working version in minute mode. After fixing the timing, we still need to address the slippage issue - we should avoid hitting the market with a large order in a short time, which may generate a significant price impact and erode returns.
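One simple way to reduce that price impact is to slice the parent order into smaller child orders spread across several minute bars. A minimal TWAP-style sketch (`slice_order` is a hypothetical helper, not part of the algorithm):

```python
def slice_order(total_shares, n_slices):
    """Split a parent order into n nearly equal child orders so we never
    hit the market with the full size in a single minute bar."""
    base, rem = divmod(total_shares, n_slices)
    return [base + 1 if i < rem else base for i in range(n_slices)]

# e.g. submit one child order per scheduled minute instead of one big order
print(slice_order(10003, 4))  # [2501, 2501, 2501, 2500]
```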

from sqlalchemy import or_
import pandas as pd
from pandas.tseries.offsets import BMonthBegin, BDay
import numpy as np

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context) :
  context.universe_size = 80

  #context.market = symbol('SPY')
  context.candidates = []

  # Orders fill on the next data bar: to enter the position at T-4 we need
  # to order at T-5 in daily mode, but at T-4 in minute mode
  T5Offset = 5 if get_environment('data_frequency') == 'daily' else 4
  schedule_function(func=enter_T5,
                    date_rule=date_rules.month_end(days_offset=T5Offset),
                    time_rule=time_rules.market_close(minutes=30))
  # Cancel order at market close
  if get_environment('data_frequency') != 'daily' :
    schedule_function(func=cancel_orders,
                      date_rule=date_rules.month_end(days_offset=T5Offset),
                      time_rule=time_rules.market_close())

  liquidate_offset = 1 if get_environment('data_frequency') == 'daily' else 2
  schedule_function(func=liquidate_all,
                    date_rule=date_rules.month_start(days_offset=liquidate_offset),
                    time_rule=time_rules.market_close(hours=1))

  # Daily rebalance to equal weight - sell high buy low
  #for i in range(-T5Offset, liquidate_offset) :
  #  date = date_rules.month_end(days_offset=-(i+1)) if i < 0 else \
  #         date_rules.month_start(days_offset=i)
  #  schedule_function(func=rebalance, date_rule=date,
  #                    time_rule=time_rules.market_close(hours=1))


# Will be called on every trade event for the securities you specify. 
def handle_data(context, data) :
  record(leverage=context.account.leverage)
    
def before_trading_start(context, data) :
  fd = get_fundamentals(query(fundamentals.valuation.market_cap,
                              fundamentals.valuation.shares_outstanding)
                        .filter(fundamentals.valuation.market_cap != None)
                        .filter(fundamentals.valuation.shares_outstanding != None)
                        .filter(fundamentals.share_class_reference.is_primary_share)                      
                        .filter(fundamentals.company_reference.country_id == "USA")
                        .filter(or_(fundamentals.company_reference.primary_exchange_id == "NAS",
                                    fundamentals.company_reference.primary_exchange_id == "NYS"))
                        .order_by(fundamentals.valuation.market_cap.desc())
                        .limit(context.universe_size))
  # Market cap and shares outstanding
  context.market_caps = fd.loc['market_cap']
  context.shares_outstandings = fd.loc['shares_outstanding'] 
  update_universe(fd.columns.values)
    
def zscore(a) :
  return (a - a.mean()) / a.std()
    
def enter_T5(context, data) :
  today = get_datetime().date() #.tz_convert('US/Eastern')
  month_start = today + BMonthBegin(n=-1) + BDay(n=2)
  days_after_month_start = pd.bdate_range(month_start, today).size
  #lastT4 = today + BMonthBegin(n=-1) + BDay(n=-4)
  #days_after_lastT4 = pd.bdate_range(lastT4, today).size

  closes = history(days_after_month_start, '1d', 'close_price').dropna(axis=1)
  closes = closes.loc[:,context.market_caps.index.values]
  T6Offset = 0 if get_environment('data_frequency') == 'daily' else -1
  #closesT8 = closes.iloc[-3 + T6Offset]
  T8date = closes.index[-3 + T6Offset]
  #closesT6 = closes.iloc[-1 + T6Offset]

  # Identify the price drop as net-buy - doesn't work, at least at T6
  # - not statistically significant, but a good heuristic
  # opens = history(1 - T6Offset, '1d', 'open_price').dropna(axis=1)
  # opensT6 = opens.iloc[-1 + T6Offset]
  # price_drops = closesT6 - opensT6

  # T8 must higher than T6 - not statistically significant
  #candidates = closesT6[closesT8 < closesT6].dropna()

  # Last month turn return - not statistically significant,
  # see autocorrelation_plot
  #month_turns = closes.loc[lastT4:month_start,:]
  #month_turns = month_turns.iloc[-1]/month_turns.iloc[0]

  # Identify the over bought ones
  # closes = closes.loc[month_start:T8date, :]
  # monthly_rets  - not statistically significant
  # monthly_rets = (closesT8 / closes.iloc[0]).apply(np.log)
  # We assume people tend to liquidate risky assets at the end of month
  # Historical Volatility - not statistically significant
  rets = closes.apply(np.log).diff().dropna()
  volas = rets.std()

  #risky_rets = volas.dropna()

  # Identify the increase in the turnover
  # - risks are transferring with a volume increase
  volumes = history(days_after_month_start, '1d', 'volume').dropna(axis=1)
  volumes = volumes.loc[:,context.market_caps.index.values]
  turnovers = (volumes / context.shares_outstandings).dropna(axis=1)
  monthly_turnovers = turnovers.loc[month_start:T8date]
  turnover_avgs = monthly_turnovers.mean()
  T6turnovers = turnovers.iloc[-1 + T6Offset]
  T6Spikes = T6turnovers - turnover_avgs 
  T6Spikes = T6Spikes[T6Spikes > 0].dropna()
  # Identify a net sell with a price drop - not work at T6
  # price_drops = price_drops.loc[T6Spikes.index.values]
  # price_drops = price_drops[price_drops > 0].dropna()
  #T6Spikes.sort(ascending=False)
  # Looks like we can scale the universe with this heuristic
  T6Spikes = T6Spikes.head(60)

  volas = volas.loc[T6Spikes.index.values]
  volas.sort(ascending=False)

  candidates = volas.head(40)

  #candidates.sort(ascending=False)
  #candidates = candidates.head(20)

  #candidates = 
  #candidates = candidates / candidates.sum()
    
  context.candidates = candidates
  rebalance(context, data)
  log.info("T5 enter")

def cancel_orders(context, data) :
  log.info("Cancel orders")
  for sid, ords in get_open_orders().iteritems() :
    for ord in ords :
      cancel_order(ord)
    
def rebalance(context, data) :
  cancel_orders(context, data)

  candidates = context.candidates

  num_candidates = len(candidates)
  record(longs=num_candidates)
  if num_candidates == 0 :
    return

  log.info("rebalance")

  for sid, weight in  candidates.iteritems() :
    target_price = data[sid].price
    order_target_percent(sid, 1.0 / num_candidates) #, style=LimitOrder(target_price))

def liquidate_all(context, data) :
  cancel_orders(context, data)

  log.info("liquidate all")
  for sid, pos in context.portfolio.positions.iteritems() :
    order_target(sid, 0)

This is what we can get if we set slippage to zero (slippage.FixedSlippage(spread=0.00)). This suggests that we can optimize the ordering strategy to get better results.

from sqlalchemy import or_
import pandas as pd
from pandas.tseries.offsets import BMonthBegin, BDay
import numpy as np

# Put any initialization logic here.  The context object will be passed to
# the other methods in your algorithm.
def initialize(context) :
  context.universe_size = 80
  set_slippage(slippage.FixedSlippage(spread=0.00))

  #context.market = symbol('SPY')
  context.candidates = []

  # Orders fill on the next data bar: to enter the position at T-4 we need
  # to order at T-5 in daily mode, but at T-4 in minute mode
  T5Offset = 5 if get_environment('data_frequency') == 'daily' else 4
  schedule_function(func=enter_T5,
                    date_rule=date_rules.month_end(days_offset=T5Offset),
                    time_rule=time_rules.market_close(minutes=10))
  # Cancel order at market close
  if get_environment('data_frequency') != 'daily' :
    schedule_function(func=cancel_orders,
                      date_rule=date_rules.month_end(days_offset=T5Offset),
                      time_rule=time_rules.market_close())

  liquidate_offset = 1 if get_environment('data_frequency') == 'daily' else 2
  schedule_function(func=liquidate_all,
                    date_rule=date_rules.month_start(days_offset=liquidate_offset),
                    time_rule=time_rules.market_close(minutes=10))

  # Daily rebalance to equal weight - sell high buy low
  #for i in range(-T5Offset, liquidate_offset) :
  #  date = date_rules.month_end(days_offset=-(i+1)) if i < 0 else \
  #         date_rules.month_start(days_offset=i)
  #  schedule_function(func=rebalance, date_rule=date,
  #                    time_rule=time_rules.market_close(hours=1))


# Will be called on every trade event for the securities you specify. 
def handle_data(context, data) :
  record(leverage=context.account.leverage)
    
def before_trading_start(context, data) :
  fd = get_fundamentals(query(fundamentals.valuation.market_cap,
                              fundamentals.valuation.shares_outstanding)
                        .filter(fundamentals.valuation.market_cap != None)
                        .filter(fundamentals.valuation.shares_outstanding != None)
                        .filter(fundamentals.share_class_reference.is_primary_share)                      
                        .filter(fundamentals.company_reference.country_id == "USA")
                        .filter(or_(fundamentals.company_reference.primary_exchange_id == "NAS",
                                    fundamentals.company_reference.primary_exchange_id == "NYS"))
                        .order_by(fundamentals.valuation.market_cap.desc())
                        .limit(context.universe_size))
  # Market cap and shares outstanding
  context.market_caps = fd.loc['market_cap']
  context.shares_outstandings = fd.loc['shares_outstanding'] 
  update_universe(fd.columns.values)
    
def zscore(a) :
  return (a - a.mean()) / a.std()
    
def enter_T5(context, data) :
  today = get_datetime().date() #.tz_convert('US/Eastern')
  month_start = today + BMonthBegin(n=-1) + BDay(n=2)
  days_after_month_start = pd.bdate_range(month_start, today).size
  #lastT4 = today + BMonthBegin(n=-1) + BDay(n=-4)
  #days_after_lastT4 = pd.bdate_range(lastT4, today).size

  closes = history(days_after_month_start, '1d', 'close_price').dropna(axis=1)
  closes = closes.loc[:,context.market_caps.index.values]
  T6Offset = 0 if get_environment('data_frequency') == 'daily' else -1
  #closesT8 = closes.iloc[-3 + T6Offset]
  T8date = closes.index[-3 + T6Offset]
  #closesT6 = closes.iloc[-1 + T6Offset]

  # Identify the price drop as net-buy - doesn't work, at least at T6
  # - not statistically significant, but a good heuristic
  # opens = history(1 - T6Offset, '1d', 'open_price').dropna(axis=1)
  # opensT6 = opens.iloc[-1 + T6Offset]
  # price_drops = closesT6 - opensT6

  # T8 must higher than T6 - not statistically significant
  #candidates = closesT6[closesT8 < closesT6].dropna()

  # Last month turn return - not statistically significant,
  # see autocorrelation_plot
  #month_turns = closes.loc[lastT4:month_start,:]
  #month_turns = month_turns.iloc[-1]/month_turns.iloc[0]

  # Identify the over bought ones
  # closes = closes.loc[month_start:T8date, :]
  # monthly_rets  - not statistically significant
  # monthly_rets = (closesT8 / closes.iloc[0]).apply(np.log)
  # We assume people tend to liquidate risky assets at the end of month
  # Historical Volatility - not statistically significant
  rets = closes.apply(np.log).diff().dropna()
  volas = rets.std()

  #risky_rets = volas.dropna()

  # Identify the increase in the turnover
  # - risks are transferring with a volume increase
  volumes = history(days_after_month_start, '1d', 'volume').dropna(axis=1)
  volumes = volumes.loc[:,context.market_caps.index.values]
  turnovers = (volumes / context.shares_outstandings).dropna(axis=1)
  monthly_turnovers = turnovers.loc[month_start:T8date]
  turnover_avgs = monthly_turnovers.mean()
  T6turnovers = turnovers.iloc[-1 + T6Offset]
  T6Spikes = T6turnovers - turnover_avgs 
  T6Spikes = T6Spikes[T6Spikes > 0].dropna()
  # Identify a net sell with a price drop - not work at T6
  # price_drops = price_drops.loc[T6Spikes.index.values]
  # price_drops = price_drops[price_drops > 0].dropna()
  #T6Spikes.sort(ascending=False)
  # Looks like we can scale the universe with this heuristic
  T6Spikes = T6Spikes.head(60)

  volas = volas.loc[T6Spikes.index.values]
  volas.sort(ascending=False)

  candidates = volas.head(40)

  #candidates.sort(ascending=False)
  #candidates = candidates.head(20)

  #candidates = 
  #candidates = candidates / candidates.sum()
    
  context.candidates = candidates
  rebalance(context, data)
  log.info("T5 enter")

def cancel_orders(context, data) :
  log.info("Cancel orders")
  for sid, ords in get_open_orders().iteritems() :
    for ord in ords :
      cancel_order(ord)
    
def rebalance(context, data) :
  cancel_orders(context, data)

  candidates = context.candidates

  num_candidates = len(candidates)
  record(longs=num_candidates)
  if num_candidates == 0 :
    return

  log.info("rebalance")

  for sid, weight in  candidates.iteritems() :
    target_price = data[sid].price
    order_target_percent(sid, 1.0 / num_candidates) #, style=LimitOrder(target_price))

def liquidate_all(context, data) :
  cancel_orders(context, data)

  log.info("liquidate all")
  for sid, pos in context.portfolio.positions.iteritems() :
    order_target(sid, 0)

It is worth noting that all of the implementations above are missing a key point from the paper - that this behavior is exhibited most clearly among stocks that are most commonly held by mutual funds and that show the highest volatility prior to the month-end drawdown period.

However, I assume that automating these components within Quantopian would be very difficult?

Thanks to Richard for pointing this out.

Do you have any idea where to get the list of popular stocks held by mutual funds? Currently I only make the naive assumption that the larger the market cap, the more shares are held by mutual funds.
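Under that naive assumption, the paper's two-stage screen (fund ownership, then volatility) can be sketched with plain pandas and made-up numbers (market cap stands in as the ownership proxy; both columns are hypothetical):

```python
import pandas as pd

# Hypothetical universe: market cap (proxy for mutual-fund ownership)
# and trailing daily return volatility
df = pd.DataFrame(
    {'market_cap': [500e9, 300e9, 50e9, 10e9],
     'volatility': [0.010, 0.020, 0.030, 0.040]},
    index=['AAA', 'BBB', 'CCC', 'DDD'])

# Stage 1: keep the largest caps (assumed widely held by funds)
top_caps = df.nlargest(3, 'market_cap')
# Stage 2: among those, prefer the most volatile names
candidates = top_caps.nlargest(2, 'volatility').index.tolist()
print(candidates)  # ['CCC', 'BBB']
```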

Perhaps we can also use some popular fundamental metrics to reverse-engineer the list of popular stocks?

Nice work iterating on this algorithm. I took your latest version and ran it through the tearsheet to analyze the meta-data about the algo. The tearsheet shows data about your position concentration, risk analysis, drawdown details and more. Here are some of the things that caught my eye:

  • Consider using record() to track the number of candidates your algo is trading. At one point it looks like all the holdings were in IEP, with 99.8% of the portfolio allocated. Maybe this is intended? Either way, it's good to keep track!
  • The beta is slightly too high for the hedge fund. We are looking for strategies with a rolling beta-to-spy between 0.3 and -0.3. If you're still working on this strategy, keep this in the back of your mind.
  • Along the same lines, it has a high correlation to the small cap and high growth stocks in the Fama French factors. Generally, when you reduce the beta, you'll reduce your exposure to these factors.
  • Most of the time your algorithm had mild drawdowns - great! Note that in 2008 it had ~30% drawdown. Though most strategies suffered during this market.
  • It looks like your algo trades frequently and fluctuates between sitting in cash and holding positions. (And this is confirmed by your source code above). You can see the behavior in the Gross Leverage and Long/Short/Cash plots. It's good to be aware of this algo behavior. Perhaps you want to consider adding a hedge and shorting another basket of securities to lower the beta?

Take these details into account as you're working on the next iteration. If you want to learn more about how to analyze your algorithm's behavior and how to use the tearsheet, join this webinar on Thursday at 12PM EST: https://attendee.gotowebinar.com/register/3903306230308545025

Jess Stauth is our VP of Quant Strategy and she'll explain how to evaluate strategies.


Thanks @Alisa for the comments and the hints.

For the IEP case, it is unexpected and may be a bug in the code, or the stock was delisted before I could sell it at the month's beginning.

The beta is high because this is a long-only strategy. One way to work around this (not fix it) is to hold TLT during the month, which reduces the beta significantly. But that doesn't change the fact that it is still a long-only strategy. I tried to build a long-short strategy but failed because my factor model doesn't make sense currently :)

The correlation to small caps is unexpected because the month-turn effect is supposed to be more significant for large-cap stocks. This may suggest that I am picking the wrong stocks! The correlation to high-growth stocks is expected because I think people tend to buy high-growth stocks during the mean-reversion process? I need to think about this.

For the 30% drawdown, I actually found a heuristic based on the average turnover before T-8 versus after T-8: only go long a stock at the month end when its pre-T-8 turnover is lower than its post-T-8 turnover - this seems to eliminate the drawdown. I need to think further about the reason behind this as well.
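That heuristic can be sketched as a small filter (a hypothetical helper; `turnover` is a DataFrame of daily turnover per ticker and `t8_pos` the positional index of the T-8 session):

```python
import pandas as pd

def t8_turnover_filter(turnover, t8_pos):
    """Keep only names whose average turnover before T-8 is LOWER than
    their average turnover from T-8 onwards (sketch of the heuristic
    described above, not the exact algorithm code)."""
    pre = turnover.iloc[:t8_pos].mean()
    post = turnover.iloc[t8_pos:].mean()
    return list(post[pre < post].index)

# AAA's turnover rises after T-8, BBB's falls, so only AAA passes
turnover = pd.DataFrame({'AAA': [0.01, 0.01, 0.02, 0.02],
                         'BBB': [0.02, 0.02, 0.01, 0.01]})
print(t8_turnover_filter(turnover, 2))  # ['AAA']
```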

For the last point, such behavior is intended. In fact, we can combine another intraday strategy with this one to generate higher returns and hopefully a lower beta!

Essentially, it is the lack of liquidity that causes such a mean-reversion process around the month end. Perhaps the paper "Liquidity and Market Crashes"[1] may provide useful insight for this strategy.

[1]http://web.mit.edu/wangj/www/pap/HW_070228.pdf