get lagged output of Pipeline Custom Factor?

I'm trying to figure out how to control the point in time for which a Pipeline Custom Factor returns its output. The typical use case is for a factor to return today's value. Is there a way to lag the output by calling/instantiating the factor with a parameter? For example, what if I want yesterday's value? Or N days ago?

As a specific example, say I wanted the value of AdvancedMomentum N trading days ago--how would I do it?

    class AdvancedMomentum(CustomFactor):
        inputs = (USEquityPricing.close, Returns(window_length=126))
        window_length = 252

        def compute(self, today, assets, out, prices, returns):
            am = np.divide(
                (prices[-21] - prices[-252]) / prices[-252] -
                (prices[-1] - prices[-21]) / prices[-21],
                np.nanstd(returns, axis=0)
            )
            out[:] = preprocess(am)


Hi Grant,

Interesting question. For this factor, I wonder if this would do the trick for N = 5 days ago?

    class AdvancedMomentum(CustomFactor):
        N = 5 + 1
        inputs = (USEquityPricing.close, Returns(window_length=126 + (N/2)))
        window_length = 252 + N

        def compute(self, today, assets, out, prices, returns):
            N = self.N  # class attribute isn't in scope inside compute
            am = np.divide(
                (prices[-21 - N] - prices[-252 - N]) / prices[-252 - N] -
                (prices[-1 - N] - prices[-21 - N]) / prices[-21 - N],
                np.nanstd(returns, axis=0)
            )
            out[:] = preprocess(am)


@Grant. To get the value of a factor n days ago, use a simple CustomFactor like this:

class Factor_N_Days_Ago(CustomFactor):
    def compute(self, today, assets, out, input_factor):
        out[:] = input_factor[0]



Then create an instance of this and pass the factor you want n days ago as the input. Set the window length to n+1.

    advanced_momentum_1_day_ago = Factor_N_Days_Ago([advanced_momentum], window_length=days_ago+1)



The one thing you need to ensure is that the original factor is 'window safe'. If it is, then put the following line into its class definition.

    window_safe = True



See the attached notebook.
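To see why the window length is n+1: the window pipeline hands to compute() has today as its last row, so its first row is the value from n days back. A minimal numpy sketch of that lookup (array values made up; this only illustrates the indexing, not the pipeline machinery):

```python
import numpy as np

# Simulate the (window_length, n_assets) array pipeline hands to compute()
# when window_length = days_ago + 1: the last row is today, the first row
# is days_ago trading days back.
days_ago = 2
factor_history = np.array([
    [1.0, 10.0],   # days_ago days back  <- what Factor_N_Days_Ago returns
    [2.0, 20.0],   # 1 day back
    [3.0, 30.0],   # today
])

out = np.empty(2)
out[:] = factor_history[0]   # the compute() body: grab the oldest row
print(out)                   # the values from days_ago back: 1.0 and 10.0
```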


Thanks Dan -

The one thing you need to ensure is that the original factor is 'window safe'.

I'm not sure how to check/test for window safeness. I searched the Q help page for the topic and didn't find anything; I gathered what I could from Jamie's post (https://www.quantopian.com/posts/how-to-make-factors-be-the-input-of-customfactor-calculation#58f7921d92b39e5b66f9b473). I guess the idea is that the factor needs to have the same normalization versus time on a per-security basis, which I think one would want anyway for an alpha factor, right? I thought all of the Pipeline inputs were corrected for such issues (e.g. splits) anyway?

Also, I don't understand what this does:

window_safe = True


Is it just to ensure that the author did a head-scratch to determine window safeness? Or is it more than that?

'window_safe' is a zipline/pipeline term. I've never come across it in other quant circles. If a factor is 'window_safe' it means its value will be the same when calculated over various 'windows' or timeframes. Really this means its value won't be impacted by stock splits so it's 'safe' to use whether a split is applied or not.

As an example, a '10_day_moving_average_price' factor is not window_safe. If a 2:1 split occurs all the values will be cut in half. However, a '10_day_return' factor is. It's just a ratio which will remain the same even if the prices are halved.

It's up to the author of a factor to determine if a factor is 'window_safe'. The window_safe flag is just used in pipeline to throw an error if it finds it's using a factor in an 'un-safe' way, and therefore may be giving incorrect results. Setting this to True simply instructs pipeline not to throw an error. It should be noted that sometimes using a factor with unadjusted prices is ok. It just depends upon the situation.

So, yes. It's just to ensure the author did a 'head-scratch'.
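One way to convince yourself which computations are window_safe is to scale a toy price history by a split ratio and see what survives (numbers here are made up):

```python
import numpy as np

prices = np.array([100.0, 102.0, 104.0, 106.0])
split_adjusted = prices / 2.0            # the same history after a 2:1 split

# A moving-average (price-level) factor is NOT window_safe:
sma, sma_adj = prices.mean(), split_adjusted.mean()

# A returns (ratio) factor IS window_safe:
ret = prices[-1] / prices[0] - 1.0
ret_adj = split_adjusted[-1] / split_adjusted[0] - 1.0

print(sma, sma_adj)   # the price-level factor halves with the split
print(ret, ret_adj)   # the ratio is unchanged
```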

Very helpful, thanks Dan! I've wondered about 'window_safe' for some time too.

Thanks Dan -

I'm still a bit confused about the need for window safeness. Outside of Pipeline, in an algo run in the IDE (the "backtester"), when OHLCV minute bar data are retrieved, they are corrected for splits as of the current simulation time, right? Is this not also true for Pipeline data when running an algo in the IDE?

Or perhaps the code you shared above effectively dials back the current simulation time, so that one has to worry about splits?

Grant, I sometimes find the whole split, window_safe, thing a bit of a head scratcher.

Concrete examples can therefore be a help. Below is a pipeline output for AAPL at the time of their 7:1 split on 6-9-2014. The three columns are price (standard close), price 2 days ago (a typical standard custom factor), and then the price factor used in the n-days-ago factor.

              price    price_2_day_ago   price_factor_2_day_ago
2014-06-03  628.50000   633.000000        633.00000
2014-06-04  637.54000   628.500000        628.50000
2014-06-05  644.82000   637.540000        637.54000
2014-06-06  647.35000   644.820000        644.82000
2014-06-09  92.22613    92.480421         647.35000
2014-06-10  93.70000    92.226130         92.22613
2014-06-11  94.25000    93.700000         93.70000
2014-06-12  93.87000    94.250000         94.25000
2014-06-13  92.26000    93.870000         93.87000



Notice the 'price_2_day_ago' and 'price_factor_2_day_ago' are the same as long as there aren't any splits between n-days-ago and the current simulation day. The 'price_2_day_ago' factor uses adjusted prices, however, 'price_factor_2_day_ago' is the actual (unadjusted) factor value 2 days ago. It's the 'real' value of the factor 2 days ago. This may or may not be what you want. It's not right or wrong. Just understand how to use it. 'window_safe' is just a flag of caution.

Not to beat this to death, but here's an example. If one wanted to know whether yesterday's price was greater than the price 2 days ago, then definitely use the adjusted values (ie column 2 above). However, if one wanted to check whether a stock is priced over $100, then adjusted prices aren't correct. Column 3 above would be more correct. Take a look at the attached notebook. The code shared above for 'Factor_N_Days_Ago' works exactly as it implies. It returns the value of a factor as it would have been seen on that day. It doesn't 'adjust' it for any subsequent splits; it's exactly 'as-it-was'. Whether one needs to worry about splits depends upon how the factor is used (eg the stock-over-$100 example above), or more generally, how the factor is calculated and whether it's impacted by splits. If a factor isn't impacted by splits (eg ratios or counts such as returns or up/down days), then the factor can be labeled as 'window_safe'.
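To make the middle and right columns of the table above concrete, here is a small pandas sketch of "adjust, then lag" versus "lag the as-seen value" around a 7:1 split (a toy series using a few of the closes from the table; the shift/division here is only an illustration, not what pipeline does internally):

```python
import pandas as pd

# Unadjusted closes around a 7:1 split, as they would have printed each day
raw = pd.Series(
    [644.82, 647.35, 92.23, 93.70],
    index=pd.to_datetime(['2014-06-05', '2014-06-06', '2014-06-09', '2014-06-10']),
)
split_ratio = 7.0

# Adjusted as of the last date: pre-split prices are divided by the ratio
adjusted = raw.copy()
adjusted[adjusted.index < '2014-06-09'] /= split_ratio

price_2_day_ago = adjusted.shift(2)      # what a standard lagging CustomFactor sees
price_factor_2_day_ago = raw.shift(2)    # what Factor_N_Days_Ago returns

print(price_2_day_ago['2014-06-10'])        # 647.35 / 7, comparable to today's price
print(price_factor_2_day_ago['2014-06-10']) # 647.35, the value as seen that day
```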

oops, forgot to attach the notebook...


Hi Dan -

You are using a notebook, but do things work the same way in the backtester? On the help page, it clearly states:

When your algorithm calls for historical equity price or volume data, it is adjusted for splits, mergers, and dividends as of the current simulation date. In other words, if your algorithm asks for a historical window of prices, and there is a split in the middle of that window, the first part of that window will be adjusted for the split. This adjustment is done so that your algorithm can do meaningful calculations using the values in the window.

So, does this only apply to non-Pipelined data? Or maybe it applies to Pipelined data, but only when running a backtest?

Or perhaps Pipeline, when queried for lagged data/factors, as you illustrated, is effectively shifting the simulation date?

Sorry, still confused what's going on...

Grant,

One of the redeeming features of the Pipeline API is that it uses the same execution engine and has the same user-facing API in both research and backtesting. The only differences between the two environments are the way that you actually run/execute the pipeline, and the shape of the output. Regardless of environment, when a pipeline is computed for day N, it is computed over a set of tabular input data (described as a 2-dimensional M*N matrix in the CustomFactor lesson of the Pipeline Tutorial). Per-share data fields in the input data will always be adjusted as of day N.

In response to the meaning of the simulation date: in backtesting, the 'simulation date' is the current date of the zipline engine. In research, the 'simulation date' is the date in run_pipeline. The simulation date in research is the first level of the index in the output dataframe of run_pipeline.

In general, I find it helpful to play around with an example, like the AAPL split in 2014. You should play around with printing out the inputs to a CustomFactor, the output of run_pipeline, etc. to help visualize how pipeline works.


So, I guess I'm concluding that Dan's example above effectively shifts the simulation date when computing a lagged Pipeline custom factor, so that one needs to worry about the factor being "window safe" (or it'll suffer from erroneous jumps, if one is expecting adjusted data). If I understand correctly, as of the effective simulation date, the data are adjusted for splits and dividends back to the time of Adam and Eve (or the Big Bang, if you prefer).

It's a bit confusing, since the "simulation date" of a backtest is the current date:

get_datetime(timezone)


Returns the current algorithm time. By default this is set to UTC, and you can pass an optional parameter to change the timezone.

But if one lags a Pipeline factor in a backtest, then the data used by the factor are a trailing window of adjusted data, as of the effective simulation date, and not the get_datetime date (if I'm following correctly...).

Backing up one step. The ONLY time 'window_safe' (and likewise the majority of the above discussion) is an issue is when using pipeline AND when factors are used as inputs to other factors.

When using any of the built in datasets as inputs, the input data is dutifully adjusted each day to be the data one would have seen on that day. It's really sort of a three step process. First the data is fetched, then it's adjusted, then it is fed to the compute method of any factors.

Adjusting the data is not a trivial process. As an example, prices are divided by the split ratios (ie a 2:1 split will divide the price by 2), however volumes are multiplied by the split ratio (ie a 2:1 split will multiply the volume by 2). Zipline takes care of adjusting the data BEFORE feeding the data into any pipeline factors. The data is always adjusted as of the simulation date.
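The divide-prices, multiply-volumes rule can be sanity-checked in a couple of lines; the invariant is that dollar volume is unchanged by the adjustment (toy numbers):

```python
split_ratio = 2.0                    # a 2:1 split
price, volume = 50.0, 1_000_000.0

adj_price = price / split_ratio      # prices are divided by the split ratio
adj_volume = volume * split_ratio    # volumes are multiplied by it

# Dollar volume is the invariant that makes the two adjustments consistent
print(adj_price, adj_volume)                      # 25.0 2000000.0
print(adj_price * adj_volume == price * volume)   # True
```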

So far so good...

Now we get to using factors as inputs to other factors. While the built in datasets are dutifully adjusted for splits and dividends, zipline/pipeline doesn't have a clue how to adjust an arbitrary input (such as a custom factor). Does it multiply? Does it divide? Does it do some other funky math? So, it doesn't do anything. (Well I suppose technically it does something. It throws an error if window_safe isn't set to True). In any case, it's really now a two step process. Fetch all the data and send the data to the compute method of the factors. No intermediate adjusting. It doesn't try to 'adjust' it because it doesn't know how to. It just thinks of it as raw data. That's what's happening in the notebook example above.

@Grant, I wouldn't think in terms of 'lagging' a pipeline factor or 'effective simulation' dates. That's complicating it too much. First, one concept which may not be apparent is that a factor ALWAYS has the same output(s) when run on a specific day. One could run the factor standalone in a notebook, or in the IDE, or as an input to another factor, and it will ALWAYS output the same value(s) for a specific day. It's like a static dictionary: on this date, this is the factor output. Period.

Now, when using factors as inputs there are just two steps. First, the factor output is computed. (Remember the output is a fixed series of dates and associated outputs -- one date, one output.) Second, this output is passed to the compute method of the other factor. No adjusting is done. 'Factor_N_Days_Ago' is really only taking the fixed output from the input factor (which is a series of dates and the associated output) and 'looking up' the output from 2 days ago. It's not any more complicated than that.

Hope that helps?

Thanks Dan & Jamie -

I think the key concept I was missing is that, as Dan says, his Factor_N_Days_Ago is looking up the factor output from N days back. I was incorrectly thinking of it as applying the factor to lagged data adjusted up to the current get_datetime() date of the simulation.

Regarding the use of window_safe = True, I'm still not completely clear whether Pipeline performs a test for window safeness if I don't set it (in which case setting it is unnecessary and is a kind of override), or whether it is always required when passing the output of one factor to another, and an error will result otherwise?

I searched on the help page for a description of the window_safe flag but found nothing. Generally is there documentation on this business of passing the output of one factor to another? Seems like a basic tool.

Based on some Google searches, Dan's comment above, and some testing on a factor that outputs Returns (which should be window safe), I conclude that setting window_safe = True flags Pipeline to ignore the NonWindowSafeInput "error" and to keep calm and carry on.

Another question is, when Factor_N_Days_Ago is instantiated with a mask, does it return values using the current value of the mask, or the mask N days ago? For example, since the composition of QTradableStocksUS changes with time, if Factor_N_Days_Ago is looking up prior values of the factor, then the list of securities would change with N, due to changes in QTradableStocksUS. This is what I'd expect, since one is effectively storing and then looking up prior values of the factor.

Hi Dan & Jamie -

Is there any way to write a function similar to the example above Factor_N_Days_Ago but that returns the factor values for a trailing window of N days? It seems as though Pipeline custom factors are designed to just return a vector, so maybe I would need to iterate over the days_ago parameter to access a trailing window (versus a single lagged value). Or somehow use out.<output_name> for each lagged value, and then cobble everything back together?
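Pipeline factors do return one vector per output, but zipline's CustomFactor supports multiple named outputs (the out.<output_name> route mentioned above), so a trailing window of lagged values can be exposed as one output per lag. The slicing such a compute() would do, sketched in plain numpy (the window shape and the lag_k names are made up for illustration):

```python
import numpy as np

# The (window_length, n_assets) array a CustomFactor's compute() receives
window = np.arange(12, dtype=float).reshape(4, 3)   # window_length=4, 3 assets
n_lags = 3

# With outputs=['lag_0', 'lag_1', 'lag_2'], compute() would assign
# out.lag_0[:] = window[-1], out.lag_1[:] = window[-2], and so on.
lags = {'lag_%d' % k: window[-1 - k] for k in range(n_lags)}

print(lags['lag_0'])   # today's row: values 9, 10, 11
print(lags['lag_2'])   # two days back: values 3, 4, 5
```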

Dan,

I've tried using this version (copied from ipynb) in my strategy, and the simpler version on another post (for getting price 2 days ago), and I cannot get either to build -- I'm getting CustomFactor unknown -- so while it may be the same backtest, etc -- I simply cannot get this code to work in any way I try.

Would be nice if I could simply do:
USEquityPricing.close.iloc[-255]
or .shift(255)

Unfortunately I'm stuck with this -- just need to get it to run : )

Zach

Hi Zach,

Thanks for the feedback. You're right that a built-in function to get values from N days ago would be helpful here. I've added a +1 to an internal ticket tracking this feature request.

In the meantime, would you be able to share your implementation and the corresponding error message? Maybe we can help get it running.

Jamie,

Thanks for the quick reply ! I actually figured out the momentum issue -- unfortunately I was missing an import haha. Seems in the notebook format I missed it, was looking in the wrong cells. I am still stuck, however. I'm trying to translate a deprecated strategy into current build -- the primary issue was the fundamentals (used Morningstar), but I replaced those with the new native filters in universe.
Unfortunately I'm still stuck on the method of building the pipeline -- it was done kind of as a pure DF in the deprecated example, and I've had trouble reconciling with the rest of the logic.

Here's the 'new' version. I'll comment again with the original.

(I also tried simply replacing the deprecated fundamental logic with current style, still not running)

"""
This is a template algorithm on Quantopian for you to adapt and fill in.
"""
import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
import pandas as pd
import math
import datetime

def initialize(context):
"""
Called once at the start of the algorithm.
"""

#### Variables to change for your own liking #################
#the constant for portfolio turnover rate
context.holding_months = 1
#number of stocks to pass through the fundamental screener
context.num_screener = 500
#number of stocks in portfolio at any time
context.num_stock = 50
#number of days to "look back" if employing momentum. ie formation
context.formation_days = 200
#set False if you want the highest momentum, True if you want low
context.lowmom = False
#################################################################
#month counter for holding period logic.
context.month_count = context.holding_months

context.spy = sid(8554)

# Rebalance every day, 1 hour after market open.
algo.schedule_function(
rebalance,
algo.date_rules.month_start(),
algo.time_rules.market_open(),
)

# Record tracking variables at the end of each day.
algo.schedule_function(
record_vars,
algo.date_rules.every_day(),
algo.time_rules.market_close(),
)

# Create our dynamic stock selector.
algo.attach_pipeline(make_pipeline(), 'pipeline')

def make_pipeline():
"""
A function to create our dynamic stock selector (pipeline). Documentation
on pipeline can be found here:
https://www.quantopian.com/help#pipeline-title
"""

# Base universe set to the QTradableStocksUS

# Factor of yesterday's close price.
yesterday_close = USEquityPricing.close.latest

#Begin massive fundamental blend
#this code prevents query every day -- THIS INITIALLY WAS IN DIF FUNC
#if context.month_count != context.holding_months:
#    return

#Begin fund list
mktCap = Fundamentals.market_cap.latest > 1e6
so = Fundamentals.shares_outstanding.latest
ctry = Fundamentals.country_id.latest
ebitda = Fundamentals.ebitda.latest
pe = Fundamentals.pe_ratio.latest
fcf = Fundamentals.fcf_ratio.latest
ev_eb = Fundamentals.ev_to_ebitda.latest
ps = Fundamentals.ps_ratio.latest
roe = Fundamentals.roe.latest
tde = Fundamentals.total_debt_equity_ratio.latest
cr = Fundamentals.current_ratio.latest

#Booleans
so_fn = (so < 2e8)
ctry_fn = ctry != 'CHN'
ebitda_fn = ebitda > 0
pe_fn = (pe > 1) & (pe < 30)
fcf_fn = fcf < 30
eveb_fn = ev_eb < 30
ps_fn = ps < 5
roe_fn = roe > .1
tde_fn = tde < 1
cr_fn = cr > 1

#Combined filters
fund_filt = (so_fn & ctry_fn & ebitda_fn & pe_fn & fcf_fn & eveb_fn & ps_fn & roe_fn & tde_fn & cr_fn)

#.order_by(fundamentals.valuation_ratios.pe_ratio.asc())
#.limit(context.num_screener)

pipe = Pipeline(
columns={
'close': yesterday_close,
'pe' : pe,
},
screen=fund_filt
)
return pipe

"""
Called every day before market open.
"""
context.output = algo.pipeline_output('pipeline')

# These are the securities that we are interested in trading each day.
context.security_list = context.output.index

def rebalance(context, data):
"""
Execute orders according to our schedule_function() timing.
"""
chosen_df = calc_return(context)

def calc_return(context):
price_history = history(bar_count=context.formation_days, frequency="1d", field='price')

temp = context.fund_filt.copy()  #ORR context.output.copy()

for s in temp:
now = price_history[s].ix[-20]
old = price_history[s].ix
pct_change = (now - old) / old
if np.isnan(pct_change):
temp = temp.drop(s,1)
else:
temp.loc['return', s] = pct_change#calculate percent change

return temp

def record_vars(context, data):
"""
Plot variables at the end of each day.
"""
pass

def handle_data(context, data):
"""
Called every minute.
"""
pass
There was a runtime error.

I don't know what happened -- I'm getting some weird indent / formatting error here. I don't understand why; my indents are aligned -- I just checked them.

Maybe some issue with the commented out old code -- I tried sharing my code, but I can't because I can't get a backtest to run. Tried sharing with the collaborate feature?

If that was unsuccessful, here is the link to the thread from the old strategy I'm trying to translate -- and get_fundamentals was the primary issue.
https://www.quantopian.com/posts/value-momentum-strategy

Any help would be greatly appreciated : )

Hi Zach,

I took a look and it seems the error is coming from this line: temp = context.fund_filt.copy(). Specifically, the issue is that fund_filt is not an attribute stored on context, so the algorithm is raising an exception.

That said, I think your best bet is to re-write the strategy in pipeline. The version that you are working from is using a very old version of the Quantopian API, and can be done much more simply in pipeline. I've attached a version of something that I think is pretty close to the original post you were working from, but with most of the logic moved into pipeline. Before iterating on the strategy any further, I'd highly recommend going through the Getting Started Tutorial and Pipeline Tutorial to get a better understanding of how the Pipeline API works, as it is Quantopian's core API.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline, CustomFactor
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import EquityPricing, Fundamentals
import pandas as pd
import math
import datetime
import quantopian.optimize as opt


def initialize(context):
    """
    Called once at the start of the algorithm.
    """

    #### Variables to change for your own liking #################
    # the constant for portfolio turnover rate
    context.holding_months = 1
    # number of stocks to pass through the fundamental screener
    context.num_screener = 500
    # number of stocks in portfolio at any time
    context.num_stock = 50
    # number of days to "look back" if employing momentum. ie formation
    context.formation_days = 200
    # set False if you want the highest momentum, True if you want low
    context.lowmom = False
    #################################################################
    # month counter for holding period logic.
    context.month_count = context.holding_months

    context.spy = sid(8554)

    # Rebalance every month, 1 hour after market open.
    algo.schedule_function(
        rebalance,
        algo.date_rules.month_start(),
        algo.time_rules.market_open(),
    )

    # Create our dynamic stock selector.
    algo.attach_pipeline(make_pipeline(context), 'pipeline')


class CustomReturn(CustomFactor):
    inputs = [EquityPricing.close]

    def compute(self, today, asset_ids, out, values):
        # Percent change from the start of the window to 20 days ago
        out[:] = (values[-20] - values[0]) / values[0]


def make_pipeline(context):
    """
    A function to create our dynamic stock selector (pipeline). Documentation
    on pipeline can be found here:
    https://www.quantopian.com/help#pipeline-title
    """

    # Base universe set to the QTradableStocksUS
    base_universe = QTradableStocksUS()

    # Factor of yesterday's close price.
    yesterday_close = EquityPricing.close.latest

    # Begin massive fundamental blend
    # this code prevents query every day -- THIS INITIALLY WAS IN DIF FUNC
    # if context.month_count != context.holding_months:
    #     return

    # Begin fund list
    mktCap = Fundamentals.market_cap.latest > 1e6
    so = Fundamentals.shares_outstanding.latest
    ctry = Fundamentals.country_id.latest
    ebitda = Fundamentals.ebitda.latest
    pe = Fundamentals.pe_ratio.latest
    fcf = Fundamentals.fcf_ratio.latest
    ev_eb = Fundamentals.ev_to_ebitda.latest
    ps = Fundamentals.ps_ratio.latest
    roe = Fundamentals.roe.latest
    tde = Fundamentals.total_debt_equity_ratio.latest
    cr = Fundamentals.current_ratio.latest

    # Booleans
    so_fn = (so < 2e8)
    ctry_fn = ctry != 'CHN'
    ebitda_fn = ebitda > 0
    pe_fn = (pe > 1) & (pe < 30)
    fcf_fn = fcf < 30
    eveb_fn = ev_eb < 30
    ps_fn = ps < 5
    roe_fn = roe > .1
    tde_fn = tde < 1
    cr_fn = cr > 1

    # Combined filters
    fund_filt = base_universe & (so_fn & ctry_fn & ebitda_fn & pe_fn & fcf_fn & eveb_fn & ps_fn & roe_fn & tde_fn & cr_fn)

    # .order_by(fundamentals.valuation_ratios.pe_ratio.asc())
    # .limit(context.num_screener)

    custom_return = CustomReturn(window_length=context.formation_days)
    # Screen on the top-momentum names within the fundamental filter
    top_returners = custom_return.top(context.num_stock, mask=fund_filt)

    pipe = Pipeline(
        columns={
            'close': yesterday_close,
            'pe': pe,
        },
        screen=top_returners
    )
    return pipe


def rebalance(context, data):
    """
    Execute orders according to our schedule_function() timing.
    """
    chosen_assets = algo.pipeline_output('pipeline').index
    num_chosen_assets = len(chosen_assets)

    target_weights = {}

    for asset in chosen_assets:
        target_weights[asset] = 0.95 / num_chosen_assets

    algo.order_optimal_portfolio(
        objective=opt.TargetWeights(target_weights),
        constraints=[]
    )


There was a runtime error.

I realized the same thing last night (after we spoke) -- did you see the version since I added make_pipeline ? Regardless I'm stuck on the same thing.

I guess the issue I'm having is I'm still not getting how to use the calc_return function -- I realize the context.output.copy() is the issue, but that's me trying to replicate the original where they're just copying the dataframe. It seems they're simply calculating momentum, which I can do simpler inline like so:
close_1yr_ago = yesterday_close / (Returns(window_length=252) + 1.0)
mom_calc = (yesterday_close - close_1yr_ago) / close_1yr_ago
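A quick numeric sanity check on that inline calculation (toy closes standing in for 252 days): recovering close_1yr_ago from Returns and then re-computing the percent change just round-trips back to the return itself, so mom_calc should equal Returns(window_length=252) directly.

```python
import numpy as np

closes = np.array([100.0, 103.0, 112.0])     # stand-in for 252 days of closes
yesterday_close = closes[-1]
ret = closes[-1] / closes[0] - 1.0           # what Returns computes over this window

close_1yr_ago = yesterday_close / (ret + 1.0)
print(close_1yr_ago)                         # recovers the window-start close

mom_calc = (yesterday_close - close_1yr_ago) / close_1yr_ago
print(mom_calc)                              # the round trip is just the return itself
```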

I understand pipeline, I just didn't understand how to interact with the pipeline with a custom function like this (calc_return) -- but I don't think I need to necessarily. I can simply filter the pipeline with masks or screens, and sort it accordingly -- kind of how I did in the Ver2 I shared first?
It seems Ver2 didn't share -- I'll try again.

I guess my question is why it's performing so differently from the strategy I was replicating? Am I missing something here?

Zach

"""
This is a template algorithm on Quantopian for you to adapt and fill in.
"""
import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import CustomFactor
from quantopian.pipeline.factors import Returns
import pandas as pd
import math
import datetime

class Momentum(CustomFactor):
""" Conventional Momentum factor """
inputs = [USEquityPricing.close]
window_length = 252

def compute(self, today, assets, out, prices):
out[:] = (prices[-21] - prices[-252])/prices[-252]

def initialize(context):
"""
Called once at the start of the algorithm.
"""

#### Variables to change for your own liking #################
#the constant for portfolio turnover rate
context.holding_months = 1
#number of stocks to pass through the fundamental screener
context.num_screener = 500
#number of stocks in portfolio at any time
context.num_stock = 50
#number of days to "look back" if employing momentum. ie formation
context.formation_days = 200
#set False if you want the highest momentum, True if you want low
context.lowmom = False
#################################################################
#month counter for holding period logic.
context.month_count = context.holding_months

context.formation_days = 250

context.spy = sid(8554)

# Rebalance every month, 1 hour after market open.
algo.schedule_function(
rebalance,
algo.date_rules.month_start(),
algo.time_rules.market_open(),
)

# Record tracking variables at the end of each day.
algo.schedule_function(
record_vars,
algo.date_rules.every_day(),
algo.time_rules.market_close(),
)

# Create our dynamic stock selector.
algo.attach_pipeline(make_pipeline(), 'pipeline')

def make_pipeline():
"""
A function to create our dynamic stock selector (pipeline). Documentation
on pipeline can be found here:
https://www.quantopian.com/help#pipeline-title
"""

# Base universe set to the QTradableStocksUS

# Factor of yesterday's close price.
yesterday_close = USEquityPricing.close.latest
close_1yr_ago = yesterday_close / (Returns(window_length=252) + 1.0)

mom_calc = (yesterday_close - close_1yr_ago) / close_1yr_ago

#Begin massive fundamental blend
#this code prevents query every day -- THIS INITIALLY WAS IN DIF FUNC
#if context.month_count != context.holding_months:
#    return

#Begin fund list
mktCap = Fundamentals.market_cap.latest > 1e6
so = Fundamentals.shares_outstanding.latest
ctry = Fundamentals.country_id.latest
ebitda = Fundamentals.ebitda.latest
pe = Fundamentals.pe_ratio.latest
fcf = Fundamentals.fcf_ratio.latest
ev_eb = Fundamentals.ev_to_ebitda.latest
ps = Fundamentals.ps_ratio.latest
roe = Fundamentals.roe.latest
tde = Fundamentals.total_debt_equity_ratio.latest
cr = Fundamentals.current_ratio.latest
pb = Fundamentals.pb_ratio.latest

#Booleans
so_fn = (so < 2e8)
ctry_fn = ctry != 'CHN'
ebitda_fn = ebitda > 0
pe_fn = (pe > 1) & (pe < 30)
fcf_fn = fcf < 30
eveb_fn = ev_eb < 30
ps_fn = ps < 5
roe_fn = roe > .1
tde_fn = tde < 1
cr_fn = cr > 1

#SLIGHT DEVIATION FROM ORIGINAL HERE --

#Combined filters
fund_filt = (so_fn & ctry_fn & ebitda_fn & pe_fn & fcf_fn & eveb_fn & ps_fn & roe_fn & tde_fn & cr_fn)

fund_filt = fund_filt & (ev_factor | mom_factor)

#Momentum

#.order_by(fundamentals.valuation_ratios.pe_ratio.asc())
#.limit(context.num_screener)

pipe = Pipeline(
columns={
'close': yesterday_close,
'pe' : pe,
'pb': pb,
'ev_eb':ev_eb,
'mom': mom_calc,
},
screen=fund_filt
)
return pipe

"""
Called every day before market open.
"""
context.output = algo.pipeline_output('pipeline')

context.output = context.output.sort_values(by=['mom','pb'],ascending=[False,True])
log.info("context.output: {}".format(context.output)) #Works to here.
#Not needed...
context.security_list = context.output.index

#chosen_df = calc_return(context.security_list) #doesnt work -- might work inline?
#MOMENTUM
'''Momentum doesnt work inline either ! :/
price_history = history(bar_count=context.formation_days, frequency="1d", field='price')
#temp = context.output #.copy()
temp = pd.DataFrame({'Secs':fund_filt})
temp = context.output.copy()

for s in temp:
now = price_history[s].ix[-20]
old = price_history[s].ix
pct_change = (now - old) / old
if np.isnan(pct_change):
temp = temp.drop(s,1)
else:
temp.loc['return', s] = pct_change#calculate percent change

'''
# These are the securities that we are interested in trading each day.

def rebalance(context, data):
"""
Execute orders according to our schedule_function() timing.
"""
#Another srt
chosen_df = context.output.sort_values(by=['pe']).iloc[:,:(context.num_stock)]

#chosen_df = context.output

# Create weights for each stock
weight = (0.95/len(chosen_df.columns))/2
# Exit all positions before starting new ones
for stock in context.portfolio.positions:
if stock not in chosen_df:
order_target(stock, 0)

# Rebalance all stocks to target weights
for stock in chosen_df:
if weight != 0 and stock in data:
order_target_percent(stock, weight)

order_target_percent(context.spy, .5) #SHOULD be -.5 , not working well now though
#(needs mom!!!!)

#Not using this currently
def sort_return(df, lowmom):
'''a cheap and quick way to sort columns according to index value. Sorts by descending order. Ie higher returns are first'''
df = df.T
df = df.sort(columns='return', ascending = lowmom)
df = df.T

return df

# Cannot get this to work : /

def calc_return(context):
price_history = history(bar_count=252, frequency="1d", field='price')

#temp = context.fund_filt.copy()  #ORR context.output.copy()
#temp = context.output.index.copy()  #Very close to working...

for s in temp:
now = price_history[s].ix[-20]
old = price_history[s].ix
pct_change = (now - old) / old
if np.isnan(pct_change):
temp = temp.drop(s,1)
else:
temp.loc['return', s] = pct_change#calculate percent change

return temp

def record_vars(context, data):
"""
Plot variables at the end of each day.
"""
pass

def handle_data(context, data):
"""
Called every minute.
"""
pass
There was a runtime error.

Is there a reason my backtests aren't working (loading...) when I import in here ? Sorry -- I'll try sharing/collaborating with you.

I had another question -- I was trying to calculate a smoothness factor used in momentum, like the number of up days vs number of down days in say 1yr.
I suppose I could use :

    secdf = context.output
    secdf['pct_chg'] = secdf['Yesterday Close'].pct_change()
    secdf['id'] = np.where((secdf['pct_chg'] < 0), 1, -1)


Is there any way to do this IN the pipeline, rather than 'to' the pipe after it's returned?
make_pipeline():

    close_1d = yesterday_close / (Returns(window_length=2) + 1.0)
    ret_1d = (yesterday_close - close_1d) / close_1d
    idm = np.where(ret_1d < 0, 1, -1).cumsum()


...
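For doing the up/down count IN the pipeline, a CustomFactor seems like the natural route: numpy calls such as np.where operate on arrays inside compute(), not on lazy pipeline terms like Returns. Here is the counting itself, sketched on a toy (window_length, n_assets) price array like the one compute() would receive (values made up); since counts of up vs down days are unchanged by splits, such a factor could reasonably be marked window_safe:

```python
import numpy as np

# Toy price window as compute() would receive it: (window_length, n_assets)
prices = np.array([
    [10.0, 50.0],
    [11.0, 49.0],
    [10.5, 48.0],
    [11.5, 47.0],
])

changes = np.diff(prices, axis=0)       # day-over-day price changes
up_days = (changes > 0).sum(axis=0)     # up-day count per asset
down_days = (changes < 0).sum(axis=0)   # down-day count per asset
smoothness = up_days - down_days        # net up-vs-down days, one value per asset

print(smoothness)   # asset 0: 2 up, 1 down -> 1; asset 1: 0 up, 3 down -> -3
```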