Help me out with alpha discovery for the new Tearsheet Challenge

Hi!

I'm starting to look for alpha in the Insider Transactions Dataset.

Could you give me any advice? In the attached notebook I get the data for these 3 metrics (unique_filers_form3_90d, unique_buyers_form4and5_90d and unique_sellers_form4and5_90d), but in different columns. Do you know a way to get something more compact, Excel-style?

Something like:

Timestamp     Equity               alpha_factor   unique_filers_form3_90d   unique_buyers_form4and5_90d   unique_sellers_form4and5_90d
2016-05-05    Equity(2 [ARNC])     0.250000       5.0                       0.0                           3.0
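One way to get that compact layout in the research environment (a minimal sketch assuming the standard run_pipeline API; the dates are placeholders, and the example factor is just the buy/sell imbalance, which appears to reproduce the 0.25 in the row above): put everything into a single Pipeline, and run_pipeline() returns one DataFrame indexed by (date, asset) with one column per metric.

from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.domain import US_EQUITIES
from quantopian.pipeline.data.factset.ownership import (
    Form3AggregatedTrades,
    Form4and5AggregatedTrades,
)

form3_90d = Form3AggregatedTrades.slice(False, 90)
form4and5_90d = Form4and5AggregatedTrades.slice(False, 90)

unique_filers_form3_90d = form3_90d.num_unique_filers.latest
unique_buyers_form4and5_90d = form4and5_90d.num_unique_buyers.latest
unique_sellers_form4and5_90d = form4and5_90d.num_unique_sellers.latest

# Example factor: buy/sell imbalance, e.g. (5 + 0 - 3) / (5 + 0 + 3) = 0.25
buyers = unique_filers_form3_90d + unique_buyers_form4and5_90d
sellers = unique_sellers_form4and5_90d
alpha_factor = (buyers - sellers) / (buyers + sellers)

pipe = Pipeline(
    columns={
        'alpha_factor': alpha_factor,
        'unique_filers_form3_90d': unique_filers_form3_90d,
        'unique_buyers_form4and5_90d': unique_buyers_form4and5_90d,
        'unique_sellers_form4and5_90d': unique_sellers_form4and5_90d,
    },
    domain=US_EQUITIES,
)

# One row per (timestamp, equity), one column per metric -- the "Excel-style" view.
df = run_pipeline(pipe, '2016-05-01', '2016-06-01')
df.head()

The qgrid snippet in the reply below can then render df interactively.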

@Emiliano,

Add this to a cell:

import qgrid  
# Set the default max number of rows to 12 so the DataFrame we render  
# with qgrid isn't too tall.  
qgrid.set_grid_option('maxVisibleRows', 12)

# Render DataFrame with QGrid  
qgrid.show_grid(df)

# Uncomment and run this line to allow columns to overflow the cell window  
# and show a toolbar to add/remove rows and view in fullscreen  
# qgrid.show_grid(df, grid_options={'forceFitColumns': False}, show_toolbar=True)  

This and more can be found at:
https://www.quantopian.com/docs/recipes/research-recipes#qgrid-example

On the content side, I'm trying to parse these papers/indexes for insights... perhaps build some factors based on their results.

https://sites.hks.harvard.edu/fs/rzeckhau/InsiderTrading.pdf
https://poloclub.github.io/polochau/papers/13-snam-insider.pdf
http://www.zacksdata.com/data/insider-transactions/zacks-insider-rank/

alan

Thank you very much @Alan

As for possible sources of ideas/alpha for the tearsheet challenge, I'll add this one, too:
The 7 Lessons Of Insider Transactions

Here's an updated version of my notebook:

Next thing to do:
Trying to make the final alpha factor be influenced more by stocks that have a higher number of unique filers:

If stock A has 1 unique buyer and 0 unique sellers during the last 90 days, it shouldn't get an alpha factor score as high as another stock that had 100 unique buyers and only 0, 1, 2, ... unique sellers.
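One way that idea could be expressed as a Pipeline factor (a rough sketch, not what the attached notebook does; scaling the imbalance by the log of the insider count is just an illustrative choice):

from quantopian.pipeline.data.factset.ownership import (
    Form3AggregatedTrades,
    Form4and5AggregatedTrades,
)

form3_90d = Form3AggregatedTrades.slice(False, 90)
form4and5_90d = Form4and5AggregatedTrades.slice(False, 90)

unique_buyers_90d = (form3_90d.num_unique_filers.latest
                     + form4and5_90d.num_unique_buyers.latest)
unique_sellers_90d = form4and5_90d.num_unique_sellers.latest
total_insiders_90d = unique_buyers_90d + unique_sellers_90d

# Buy/sell imbalance in [-1, 1]; NaN where there are no insiders at all
# (those names get dropped later by the isnull()/isfinite() screen).
imbalance_90d = (unique_buyers_90d - unique_sellers_90d) / total_insiders_90d

# Scale the imbalance by (the log of) how many insiders were involved, so:
#     1 buyer,  0 sellers -> 1.00 * log(2)   ~= 0.7
#   100 buyers, 1 seller  -> 0.98 * log(102) ~= 4.5
alpha_factor = imbalance_90d * (total_insiders_90d + 1.0).log()

Whatever the scaling, a downstream rank().zscore() will compress it again, so it's worth comparing the raw and ranked versions in research.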


In this notebook I try to address this problem:

> If stock A has 1 unique buyer and 0 unique sellers during the last 90 days, it shouldn't get an alpha factor score as high as another stock that had 100 unique buyers and only 0, 1, 2, ... unique sellers.


Testing the new alpha factor

# Template algorithm for the insiders challenge. Based on an algorithm provided by Leo M
# The algo uses the documented example from: https://www.quantopian.com/docs/data-reference/ownership_aggregated_insider_transactions

from quantopian.algorithm import attach_pipeline, pipeline_output

import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.domain import US_EQUITIES

# Form 3 transactions
from quantopian.pipeline.data.factset.ownership import Form3AggregatedTrades
# Form 4 and Form 5 transactions
from quantopian.pipeline.data.factset.ownership import Form4and5AggregatedTrades

import pandas as pd
import numpy as np

def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    # Normally a contest algo uses the default commission and slippage
    # This is unique and only required for this 'mini-contest'
    set_commission(commission.PerShare(cost=0.000, min_trade_cost=0))   
    set_slippage(slippage.FixedSlippage(spread=0))
    
    # Rebalance weekly, 2 hours after market open.
    schedule_function(
        rebalance,
        date_rules.week_start(),
        time_rules.market_open(hours=2),
    )
    # Create our dynamic stock selector.
    attach_pipeline(make_pipeline(context), 'pipeline') 
    
    # Record custom position data weekly, at market close.
    schedule_function(record_positions, 
                      date_rules.week_start(),
                      time_rules.market_close())
    
    
def create_factor():
    # Base universe set to the QTradableStocksUS
    qtu = QTradableStocksUS()
    
    # Slice the DataSetFamilies into DataSets covering insider transactions over
    # the trailing 90 calendar days (non-derivative holdings only).
    insider_txns_form3_90d = Form3AggregatedTrades.slice(False, 90)
    insider_txns_form4and5_90d = Form4and5AggregatedTrades.slice(False, 90)
    # From each DataSet, extract the number of unique buyers and unique sellers.
    # We do not need to include unique sellers using Form 3, because Form 3 is
    # an initial ownership filing, and so there are no sellers using Form 3.
    unique_filers_form3_90d = insider_txns_form3_90d.num_unique_filers.latest
    unique_buyers_form4and5_90d = insider_txns_form4and5_90d.num_unique_buyers.latest
    unique_sellers_form4and5_90d = insider_txns_form4and5_90d.num_unique_sellers.latest
    # Sum the unique buyers from each form together.
    unique_buyers_90d = unique_filers_form3_90d + unique_buyers_form4and5_90d
    unique_sellers_90d = unique_sellers_form4and5_90d
    # Compute the fractions of insiders buying and selling.
    frac_insiders_buying_90d = unique_buyers_90d / (unique_buyers_90d + unique_sellers_90d)
    frac_insiders_selling_90d = unique_sellers_90d / (unique_buyers_90d + unique_sellers_90d)
         
    # Compute the raw factor as unique buyers minus unique sellers over the
    # 90-day window; it is ranked and z-scored later in make_pipeline().
    alpha_factor = unique_buyers_90d - unique_sellers_90d

    # Diagnostic Pipeline with the intermediate columns. Note: this Pipeline is
    # never attached; the pipeline the algo actually uses is built in make_pipeline().
    pipe = Pipeline(
    columns={
        'alpha_factor': alpha_factor,
        'frac_insiders_buying_90d' : frac_insiders_buying_90d,
        'frac_insiders_selling_90d' : frac_insiders_selling_90d,
        #'unique_filers_form3_90d' : unique_filers_form3_90d,
        #'unique_buyers_form4and5_90d' : unique_buyers_form4and5_90d,
        #'unique_sellers_form4and5_90d' : unique_sellers_form4and5_90d
        
    },
    domain=US_EQUITIES,
    )
    
    screen = qtu & ~alpha_factor.isnull() & alpha_factor.isfinite()
    
    return alpha_factor, screen

def make_pipeline(context):  
    alpha_factor, screen = create_factor()
    
    # Winsorize to remove extreme outliers
    alpha_winsorized = alpha_factor.winsorize(min_percentile=0.01,
                                              max_percentile=0.99,
                                              mask=screen)
    
    # Zscore and rank to get long and short (positive and negative) alphas to use as weights
    alpha_rank = alpha_winsorized.rank().zscore()
    
    return Pipeline(columns={'alpha_factor': alpha_rank}, 
                    screen=screen, domain=US_EQUITIES)
    

def rebalance(context, data): 
    # Get the alpha factor data from the pipeline output
    output = pipeline_output('pipeline')
    alpha_factor = output.alpha_factor
    log.info(alpha_factor)
    # Weight securities by their alpha factor
    # Divide by the abs of total weight to create a leverage of 1
    weights = alpha_factor / alpha_factor.abs().sum() 
    
    # Must use TargetWeights as an objective
    order_optimal_portfolio(
        objective=opt.TargetWeights(weights),
        constraints=[],
    )

    
def record_positions(context, data):
    pos = pd.Series()
    for position in context.portfolio.positions.values():
        pos.loc[position.sid] = position.amount
        
    pos /= pos.abs().sum()
    
    # Show quantiles of the daily holdings distribution
    # to show if weights are being squashed to equal weight
    # or whether they have a nice range of sensitivity.
    quantiles = pos.quantile([.05, .25, .5, .75, .95]) * 100
    record(q05=quantiles[.05])
    record(q25=quantiles[.25])
    record(q50=quantiles[.5])
    record(q75=quantiles[.75])
    record(q95=quantiles[.95])

The original one was better :(

# Template algorithm for the insiders challenge. Based on an algorithm provided by Leo M
# The algo uses the documented example from: https://www.quantopian.com/docs/data-reference/ownership_aggregated_insider_transactions

from quantopian.algorithm import attach_pipeline, pipeline_output

import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.domain import US_EQUITIES

# Form 3 transactions
from quantopian.pipeline.data.factset.ownership import Form3AggregatedTrades
# Form 4 and Form 5 transactions
from quantopian.pipeline.data.factset.ownership import Form4and5AggregatedTrades

import pandas as pd
import numpy as np

def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    # Normally a contest algo uses the default commission and slippage
    # This is unique and only required for this 'mini-contest'
    set_commission(commission.PerShare(cost=0.000, min_trade_cost=0))   
    set_slippage(slippage.FixedSlippage(spread=0))
    
    # Rebalance every day, 2 hours after market open.
    schedule_function(
        rebalance,
        date_rules.every_day(),
        time_rules.market_open(hours=2),
    )
    # Create our dynamic stock selector.
    attach_pipeline(make_pipeline(context), 'pipeline') 
    
    # Record any custom data at the end of each day    
    schedule_function(record_positions, 
                      date_rules.every_day(),
                      time_rules.market_close())
    
    
def create_factor():
    # Base universe set to the QTradableStocksUS
    qtu = QTradableStocksUS()
    # Slice the Form3AggregatedTrades DataSetFamily and Form4and5AggregatedTrades
    # DataSetFamily into DataSets. Here, insider_txns_form3_90d is a DataSet
    # containing insider transaction data for Form 3 over the past 90 calendar
    # days, and insider_txns_form4and5_90d is a DataSet containing insider
    # transaction data for Forms 4 and 5 over the past 90 calendar days. We only
    # include non-derivative ownership (derivative_holdings is False).
    insider_txns_form3_90d = Form3AggregatedTrades.slice(False, 90)
    insider_txns_form4and5_90d = Form4and5AggregatedTrades.slice(False, 90)
    # From each DataSet, extract the number of unique buyers and unique sellers.
    # We do not need to include unique sellers using Form 3, because Form 3 is
    # an initial ownership filing, and so there are no sellers using Form 3.
    unique_filers_form3_90d = insider_txns_form3_90d.num_unique_filers.latest
    unique_buyers_form4and5_90d = insider_txns_form4and5_90d.num_unique_buyers.latest
    unique_sellers_form4and5_90d = insider_txns_form4and5_90d.num_unique_sellers.latest
    # Sum the unique buyers from each form together.
    unique_buyers_90d = unique_filers_form3_90d + unique_buyers_form4and5_90d
    unique_sellers_90d = unique_sellers_form4and5_90d
    # Compute the fractions of insiders buying and selling.
    frac_insiders_buying_90d = unique_buyers_90d / (unique_buyers_90d + unique_sellers_90d)
    frac_insiders_selling_90d = unique_sellers_90d / (unique_buyers_90d + unique_sellers_90d)
    
    # Compute the raw factor as the fraction of insiders buying minus the
    # fraction selling; it is ranked and z-scored later in make_pipeline().
    alpha_factor = frac_insiders_buying_90d - frac_insiders_selling_90d
    
    screen = qtu & ~alpha_factor.isnull() & alpha_factor.isfinite()
    
    return alpha_factor, screen

def make_pipeline(context):  
    alpha_factor, screen = create_factor()
    
    # Winsorize to remove extreme outliers
    alpha_winsorized = alpha_factor.winsorize(min_percentile=0.02,
                                              max_percentile=0.98,
                                              mask=screen)
    
    # Zscore and rank to get long and short (positive and negative) alphas to use as weights
    alpha_rank = alpha_winsorized.rank().zscore()
    
    return Pipeline(columns={'alpha_factor': alpha_rank}, 
                    screen=screen, domain=US_EQUITIES)
    

def rebalance(context, data): 
    # Get the alpha factor data from the pipeline output
    output = pipeline_output('pipeline')
    alpha_factor = output.alpha_factor
    log.info(alpha_factor)
    # Weight securities by their alpha factor
    # Divide by the abs of total weight to create a leverage of 1
    weights = alpha_factor / alpha_factor.abs().sum() 
    
    # Must use TargetWeights as an objective
    order_optimal_portfolio(
        objective=opt.TargetWeights(weights),
        constraints=[],
    )

    
def record_positions(context, data):
    pos = pd.Series()
    for position in context.portfolio.positions.values():
        pos.loc[position.sid] = position.amount
        
    pos /= pos.abs().sum()
    
    # Show quantiles of the daily holdings distribution
    # to show if weights are being squashed to equal weight
    # or whether they have a nice range of sensitivity.
    quantiles = pos.quantile([.05, .25, .5, .75, .95]) * 100
    record(q05=quantiles[.05])
    record(q25=quantiles[.25])
    record(q50=quantiles[.5])
    record(q75=quantiles[.75])
    record(q95=quantiles[.95])

Trying slices of 30 days

# Template algorithm for the insiders challenge. Based on an algorithm provided by Leo M
# The algo uses the documented example from: https://www.quantopian.com/docs/data-reference/ownership_aggregated_insider_transactions

from quantopian.algorithm import attach_pipeline, pipeline_output

import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.domain import US_EQUITIES

# Form 3 transactions
from quantopian.pipeline.data.factset.ownership import Form3AggregatedTrades
# Form 4 and Form 5 transactions
from quantopian.pipeline.data.factset.ownership import Form4and5AggregatedTrades

import pandas as pd
import numpy as np

def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    # Normally a contest algo uses the default commission and slippage
    # This is unique and only required for this 'mini-contest'
    set_commission(commission.PerShare(cost=0.000, min_trade_cost=0))   
    set_slippage(slippage.FixedSlippage(spread=0))
    
    # Rebalance weekly, 2 hours after market open.
    schedule_function(
        rebalance,
        date_rules.week_start(),
        time_rules.market_open(hours=2),
    )
    # Create our dynamic stock selector.
    attach_pipeline(make_pipeline(context), 'pipeline') 
    
    # Record custom position data monthly, at market close.
    schedule_function(record_positions, 
                      date_rules.month_start(),
                      time_rules.market_close())
    
    
def create_factor():
    # Base universe set to the QTradableStocksUS
    qtu = QTradableStocksUS()
    
    # NOTE: these slices cover the trailing 30 calendar days, even though the
    # variable names below keep the _90d suffix from the original template.
    insider_txns_form3_90d = Form3AggregatedTrades.slice(False, 30)
    insider_txns_form4and5_90d = Form4and5AggregatedTrades.slice(False, 30)
    # From each DataSet, extract the number of unique buyers and unique sellers.
    # We do not need to include unique sellers using Form 3, because Form 3 is
    # an initial ownership filing, and so there are no sellers using Form 3.
    unique_filers_form3_90d = insider_txns_form3_90d.num_unique_filers.latest
    unique_buyers_form4and5_90d = insider_txns_form4and5_90d.num_unique_buyers.latest
    unique_sellers_form4and5_90d = insider_txns_form4and5_90d.num_unique_sellers.latest
    # Sum the unique buyers from each form together.
    unique_buyers_90d = unique_filers_form3_90d + unique_buyers_form4and5_90d
    unique_sellers_90d = unique_sellers_form4and5_90d
    # Compute the fractions of insiders buying and selling.
    frac_insiders_buying_90d = unique_buyers_90d / (unique_buyers_90d + unique_sellers_90d)
    frac_insiders_selling_90d = unique_sellers_90d / (unique_buyers_90d + unique_sellers_90d)
         

    # Compute the raw factor as unique buyers minus unique sellers over the
    # 30-day window; it is ranked and z-scored later in make_pipeline().
    alpha_factor = unique_buyers_90d - unique_sellers_90d

    # Diagnostic Pipeline with the intermediate columns. Note: this Pipeline is
    # never attached; the pipeline the algo actually uses is built in make_pipeline().
    pipe = Pipeline(
    columns={
        'alpha_factor': alpha_factor,
        'frac_insiders_buying_90d' : frac_insiders_buying_90d,
        'frac_insiders_selling_90d' : frac_insiders_selling_90d,
        #'unique_filers_form3_90d' : unique_filers_form3_90d,
        #'unique_buyers_form4and5_90d' : unique_buyers_form4and5_90d,
        #'unique_sellers_form4and5_90d' : unique_sellers_form4and5_90d
        
    },
    domain=US_EQUITIES,
    )
    
    screen = qtu & ~alpha_factor.isnull() & alpha_factor.isfinite()
    
    return alpha_factor, screen

def make_pipeline(context):  
    alpha_factor, screen = create_factor()
    
    # Winsorize to remove extreme outliers
    alpha_winsorized = alpha_factor.winsorize(min_percentile=0.01,
                                              max_percentile=0.99,
                                              mask=screen)
    
    # Zscore and rank to get long and short (positive and negative) alphas to use as weights
    alpha_rank = alpha_winsorized.rank().zscore()
    
    return Pipeline(columns={'alpha_factor': alpha_rank}, 
                    screen=screen, domain=US_EQUITIES)
    

def rebalance(context, data): 
    # Get the alpha factor data from the pipeline output
    output = pipeline_output('pipeline')
    alpha_factor = output.alpha_factor
    log.info(alpha_factor)
    # Weight securities by their alpha factor
    # Divide by the abs of total weight to create a leverage of 1
    weights = alpha_factor / alpha_factor.abs().sum() 
    
    # Must use TargetWeights as an objective
    order_optimal_portfolio(
        objective=opt.TargetWeights(weights),
        constraints=[],
    )

    
def record_positions(context, data):
    pos = pd.Series()
    for position in context.portfolio.positions.values():
        pos.loc[position.sid] = position.amount
        
    pos /= pos.abs().sum()
    
    # Show quantiles of the daily holdings distribution
    # to show if weights are being squashed to equal weight
    # or whether they have a nice range of sensitivity.
    quantiles = pos.quantile([.05, .25, .5, .75, .95]) * 100
    record(q05=quantiles[.05])
    record(q25=quantiles[.25])
    record(q50=quantiles[.5])
    record(q75=quantiles[.75])
    record(q95=quantiles[.95])

Next idea to test:

Trying to create an alpha factor based on the trend over the last 90, 30, and 7 calendar days:

Here I try to create an alpha factor based on the trend over the last 90, 30, and 7 calendar days:

# Template algorithm for the insiders challenge. Based on an algorithm provided by Leo M
# The algo uses the documented example from: https://www.quantopian.com/docs/data-reference/ownership_aggregated_insider_transactions

from quantopian.algorithm import attach_pipeline, pipeline_output

import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.domain import US_EQUITIES

# Form 3 transactions
from quantopian.pipeline.data.factset.ownership import Form3AggregatedTrades
# Form 4 and Form 5 transactions
from quantopian.pipeline.data.factset.ownership import Form4and5AggregatedTrades

import pandas as pd
import numpy as np

def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    # Normally a contest algo uses the default commission and slippage
    # This is unique and only required for this 'mini-contest'
    set_commission(commission.PerShare(cost=0.000, min_trade_cost=0))   
    set_slippage(slippage.FixedSlippage(spread=0))
    
    # Rebalance weekly, at market close.
    schedule_function(
        rebalance,
        date_rules.week_start(),
        time_rules.market_close(),
    )
    # Create our dynamic stock selector.
    attach_pipeline(make_pipeline(context), 'pipeline') 
    
    # Record custom position data weekly, at market close.
    schedule_function(record_positions, 
                      date_rules.week_start(),
                      time_rules.market_close(),
                     )
    
    
def create_factor():
    # Base universe set to the QTradableStocksUS
    qtu = QTradableStocksUS()
    
    insider_txns_form3_90d = Form3AggregatedTrades.slice(False, 90)
    insider_txns_form4and5_90d = Form4and5AggregatedTrades.slice(False, 90)
    
    insider_txns_form3_30d = Form3AggregatedTrades.slice(False, 30)
    insider_txns_form4and5_30d = Form4and5AggregatedTrades.slice(False, 30)
    
    insider_txns_form3_7d = Form3AggregatedTrades.slice(False, 7)
    insider_txns_form4and5_7d = Form4and5AggregatedTrades.slice(False, 7)
    
    insider_txns_form3_1d = Form3AggregatedTrades.slice(False, 1)
    insider_txns_form4and5_1d = Form4and5AggregatedTrades.slice(False, 1)
    
    # From each DataSet, extract the number of unique buyers and unique sellers.
    # We do not need to include unique sellers using Form 3, because Form 3 is
    # an initial ownership filing, and so there are no sellers using Form 3.
    unique_filers_form3_90d = insider_txns_form3_90d.num_unique_filers.latest
    unique_buyers_form4and5_90d = insider_txns_form4and5_90d.num_unique_buyers.latest
    unique_sellers_form4and5_90d = insider_txns_form4and5_90d.num_unique_sellers.latest
        
    unique_filers_form3_30d = insider_txns_form3_30d.num_unique_filers.latest
    unique_buyers_form4and5_30d = insider_txns_form4and5_30d.num_unique_buyers.latest
    unique_sellers_form4and5_30d = insider_txns_form4and5_30d.num_unique_sellers.latest
    
    unique_filers_form3_7d = insider_txns_form3_7d.num_unique_filers.latest
    unique_buyers_form4and5_7d = insider_txns_form4and5_7d.num_unique_buyers.latest
    unique_sellers_form4and5_7d = insider_txns_form4and5_7d.num_unique_sellers.latest
    
    unique_filers_form3_1d = insider_txns_form3_1d.num_unique_filers.latest
    unique_buyers_form4and5_1d = insider_txns_form4and5_1d.num_unique_buyers.latest
    unique_sellers_form4and5_1d = insider_txns_form4and5_1d.num_unique_sellers.latest
    
    # Sum the unique buyers from each form together.
    unique_buyers_90d = unique_filers_form3_90d + unique_buyers_form4and5_90d
    unique_sellers_90d = unique_sellers_form4and5_90d
            
    unique_buyers_30d = unique_filers_form3_30d + unique_buyers_form4and5_30d
    unique_sellers_30d = unique_sellers_form4and5_30d
    
    unique_buyers_7d = unique_filers_form3_7d + unique_buyers_form4and5_7d
    unique_sellers_7d = unique_sellers_form4and5_7d
    
    unique_buyers_1d = unique_filers_form3_1d + unique_buyers_form4and5_1d
    unique_sellers_1d = unique_sellers_form4and5_1d
    
    # Compute the fractions of insiders buying and selling.
    #frac_insiders_buying_90d = unique_buyers_90d / (unique_buyers_90d + unique_sellers_90d)
    #frac_insiders_selling_90d = unique_sellers_90d / (unique_buyers_90d + unique_sellers_90d)
    
    # Compute the net number of unique insiders buying minus selling over each window.
    # NOTE: the 1-day terms are computed but not used in alpha_factor below.
    unique_buyers_minus_sellers_90d = unique_buyers_90d - unique_sellers_90d
    unique_buyers_minus_sellers_30d = unique_buyers_30d - unique_sellers_30d
    unique_buyers_minus_sellers_7d = unique_buyers_7d - unique_sellers_7d
    unique_buyers_minus_sellers_1d = unique_buyers_1d - unique_sellers_1d

    # Compute the factor as a weighted combination of the horizons: recent (7-day)
    # net buying is weighted positively, 30- and 90-day net buying negatively.
    alpha_factor = (unique_buyers_minus_sellers_7d * 2
                    - unique_buyers_minus_sellers_30d * 1.1
                    - unique_buyers_minus_sellers_90d * 0.9)

    # Diagnostic Pipeline with the intermediate columns. Note: this Pipeline is
    # never attached; the pipeline the algo actually uses is built in make_pipeline().
    pipe = Pipeline(
    columns={
        'alpha_factor': alpha_factor,
        #'frac_insiders_buying_90d' : frac_insiders_buying_90d,
        #'frac_insiders_selling_90d' : frac_insiders_selling_90d,
        #'unique_filers_form3_90d' : unique_filers_form3_90d,
        #'unique_buyers_form4and5_90d' : unique_buyers_form4and5_90d,
        #'unique_sellers_form4and5_90d' : unique_sellers_form4and5_90d
        
    },
    domain=US_EQUITIES,
    )
    
    screen = qtu & ~alpha_factor.isnull() & alpha_factor.isfinite()
    
    return alpha_factor, screen

def make_pipeline(context):  
    alpha_factor, screen = create_factor()
    
    # Winsorize to remove extreme outliers
    #alpha_winsorized = alpha_factor.winsorize(min_percentile=0.00,
    #                                          max_percentile=1.00,
    #                                          mask=screen)
    
    # Zscore and rank to get long and short (positive and negative) alphas to use as weights
    #alpha_rank = alpha_winsorized.rank().zscore()
    
    return Pipeline(columns={'alpha_factor': alpha_factor}, 
                    screen=screen, domain=US_EQUITIES)
    

def rebalance(context, data): 
    # Get the alpha factor data from the pipeline output
    output = pipeline_output('pipeline')
    alpha_factor = output.alpha_factor
    log.info(alpha_factor)
    # Weight securities by their alpha factor
    # Divide by the abs of total weight to create a leverage of 1
    weights = alpha_factor / alpha_factor.abs().sum() 
    
    # Must use TargetWeights as an objective
    order_optimal_portfolio(
        objective=opt.TargetWeights(weights),
        constraints=[],
    )

    
def record_positions(context, data):
    pos = pd.Series()
    for position in context.portfolio.positions.values():
        pos.loc[position.sid] = position.amount
        
    pos /= pos.abs().sum()
    
    # Show quantiles of the daily holdings distribution
    # to show if weights are being squashed to equal weight
    # or whether they have a nice range of sensitivity.
    quantiles = pos.quantile([.05, .25, .5, .75, .95]) * 100
    record(q05=quantiles[.05])
    record(q25=quantiles[.25])
    record(q50=quantiles[.5])
    record(q75=quantiles[.75])
    record(q95=quantiles[.95])

Trying to use the research environment to test my alpha factors
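A minimal sketch of testing a factor with Alphalens in research (assuming the research built-ins run_pipeline and get_pricing; the one-column pipeline, dates, and horizons below are placeholders rather than the attached notebook's exact setup):

import alphalens as al

from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.domain import US_EQUITIES
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data.factset.ownership import Form4and5AggregatedTrades

form4and5_90d = Form4and5AggregatedTrades.slice(False, 90)
alpha = (form4and5_90d.num_unique_buyers.latest
         - form4and5_90d.num_unique_sellers.latest)

screen = QTradableStocksUS() & alpha.notnull()
# rank() gives unique values per day, which keeps Alphalens' quantile binning happy.
pipe = Pipeline(columns={'alpha_factor': alpha.rank(mask=screen)},
                screen=screen, domain=US_EQUITIES)

start, end = '2016-01-04', '2017-01-04'
factor = run_pipeline(pipe, start, end)['alpha_factor']

# get_pricing is a built-in of the research notebook environment.
prices = get_pricing(factor.index.levels[1], start, end, fields='open_price')

# Forward returns at several horizons; the information tear sheet includes the
# IC-decay view.
factor_data = al.utils.get_clean_factor_and_forward_returns(
    factor, prices, periods=(1, 5, 10, 21))

al.tears.create_information_tear_sheet(factor_data)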


Added winsorize() and zscore()

What do you think I'm missing? Any suggestions? Should I trade more or less frequently, based on this IC decay graph?
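One quick way to frame that decision, reusing the factor_data and the `al` import from the Alphalens sketch above (al.performance.mean_information_coefficient is the standard Alphalens call):

# Mean IC per forward-return horizon: if the 5-day IC is comparable to the
# 1-day IC, weekly rebalancing gives up little signal while cutting turnover;
# if IC decays quickly toward the 21-day horizon, rebalance more often.
mean_ic = al.performance.mean_information_coefficient(factor_data)
print(mean_ic)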
