Can't figure out why both fetch_csv and self-serve data keep forward-filling my data

I have generated a CSV file with daily positions that I would like to run through the backtester:
http://viridianhawk.dreamhosters.com/static_assets/bf4positions.csv

However, I've tried both fetch_csv() and the self-serve data feature, and both forward-fill my data such that the positions just accumulate. The CSV file only has roughly 20 tickers per date, but both len(data.fetcher_assets) for the fetcher and len(context.output.index) for self-serve keep growing each day until they reach into the thousands.

This is not the behavior I expected from reading the docs, nor have I observed it with other datasets I've uploaded, and I hope it is not occurring there too, because then I'll have to start over on those as well.

If I intend for a value to be NULL do I need to explicitly send it in as NULL?

Attached backtest:
def initialize(context):
    fetch_csv('http://viridianhawk.dreamhosters.com/static_assets/bf4positions.csv', 
               date_column = 'date',
               date_format = '%y-%m-%d')

def before_trading_start(context, data):
    print(data.fetcher_assets)
    record(fetcher_len = len(data.fetcher_assets))
5 responses

Both fetch_csv() and self-serve data forward fill. Think of the data as 'the last known value' as of a particular date. This is similar to the price field, which is forward-filled with the last known close price.
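
For intuition, here's roughly what that means in plain pandas (an illustration of the concept only, not the fetcher internals):

import pandas as pd

# Forward-filling repeats the last known value on days with no new data,
# just like the close price example above.
s = pd.Series([11.7, None, 12.1, None],
              index=pd.to_datetime(['2012-02-13', '2012-02-14',
                                    '2012-02-15', '2012-02-16']))
print(s.ffill())
# 2012-02-13    11.7
# 2012-02-14    11.7   <- carried forward
# 2012-02-15    12.1
# 2012-02-16    12.1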

So, what if one doesn't want to forward fill and just wants point-in-time data from a data file? Well, there isn't a switch to flip, but one can write a small pre-processing function for the fetch_csv method. fetch_csv provides two 'hooks' into the processing: one just after the raw text file has been imported (pre_func), and another once the file has been sorted and re-indexed (post_func). One can use the former to add rows to the fetched data. We'll add a row one day after each original row, but with a weight value of 0. This effectively turns off the weight so it's only in effect for one day. Something like this:


def add_zeros(fetched_df):
    """
    Add a data row one day after each datapoint to
    zero out the weight.
    """
    # Get a copy of our fetched data
    zero_data = fetched_df.copy()

    # Create a column (ie a series) which is the string date + 1 day
    plus_1_day = (pd.to_datetime(fetched_df.date) + pd.DateOffset(1)).dt.strftime('%Y-%m-%d')

    # Set our date to plus_1_day in our zero_data dataframe
    zero_data.date = plus_1_day
    # Set the weight to zero
    zero_data.weight = 0

    # Concatenate the fetched and the new zero dataframes
    # Put the fetched data last so it will be last in the dataframe
    # This allows the fetcher data to override the zero data in case of duplicate days
    fetched_df = pd.concat([zero_data, fetched_df])

    return fetched_df

One could also do something similar in the post_func instead. I chose the pre_func because it seemed simpler.
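
For reference, a post_func variant might look something like this (a rough, untested sketch; it assumes the frame arrives indexed by the parsed dates and still carries the weight column):

def add_zeros_post(fetched_df):
    # Same idea, but after the data has been sorted and re-indexed:
    # the dates are now the index, so shift the index rather than a
    # string date column.
    zero_data = fetched_df.copy()
    zero_data.index = zero_data.index + pd.DateOffset(1)
    zero_data['weight'] = 0

    # Stable sort keeps the original rows after the zero rows on
    # duplicate days, so the real data wins.
    return pd.concat([zero_data, fetched_df]).sort_index(kind='mergesort')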

@Viridian Hawk, try this out and see if your algo behaves as you expected.

Attached backtest:
import pandas as pd

def initialize(context):
    fetch_csv('http://viridianhawk.dreamhosters.com/static_assets/bf4positions.csv', 
              date_column='date',
              date_format='%y-%m-%d',
              pre_func=add_zeros,
              post_func=display)

def before_trading_start(context, data):
    # Look at data for a single stock
    print(data.current(symbol('GRA'), fields='weight'))
    
def add_zeros(fetched_df):
    """
    add a data row one day after each datapoint to
    zero out the weight
    """
    # Get a copy of our fetched data
    zero_data = fetched_df.copy()

    # Create a column (ie a series) which is the string date + 1 day
    plus_1_day = (pd.to_datetime(fetched_df.date) + pd.DateOffset(1)).dt.strftime('%Y-%m-%d')
    
    # Set our date to plus_1_day in our zero_data dataframe
    zero_data.date = plus_1_day
    # Set the weight to zero
    zero_data.weight = 0
    
    # Concatenate the fetched and the new zero dataframes
    # Put the fetched data last so it will be last in the dataframe
    # This allows the fetcher data to override the zero data in case of duplicate days
    fetched_df = pd.concat([zero_data, fetched_df])
                  
    return fetched_df

def display(fetched_df):
    # Look at processed fetched data for a single stock
    stock = symbol('GRA')
    print(fetched_df.query('symbol == @stock.symbol'))
    return fetched_df

Thanks for the help!

So for self-serve data, for datasets where forward-filling is incorrect, I'll need to make sure that the script generating my CSV checks the previous day's values and inserts explicit NULLs for the current day?
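
Something like this, I imagine (plain pandas, with hypothetical column names; the idea is to emit an explicit empty row the day after each position, letting real rows win where they exist):

import pandas as pd

def append_null_rows(positions):
    # positions: long-format frame with columns ['date', 'symbol', 'weight']
    positions = positions.assign(date=pd.to_datetime(positions['date']))

    nulls = positions.copy()
    nulls['date'] = nulls['date'] + pd.DateOffset(1)
    nulls['weight'] = None  # written out as an empty/NULL field

    combined = pd.concat([positions, nulls])
    # If a symbol really does have data the next day, keep the real row
    combined = combined.drop_duplicates(subset=['date', 'symbol'], keep='first')
    return combined.sort_values(['date', 'symbol'])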

From the docs:

symbol, start date, stock_score  
AA,     2/13/12,     11.7  
WFM,    2/13/12,     15.8  
FDX,    2/14/12,     12.1  
M,      2/16/12,     14.3  
You can backtest the code below during the dates 2/13/2012 - 2/18/2012. When you use this sample file and algorithm, data.fetcher_assets will return, for each day:

2/13/2012: AA, WFM  
2/14/2012: FDX  
2/15/2012: FDX (forward filled because no new data became available)  
2/16/2012: M  

The docs state that Fetcher is only forward-filled on days when there are no entries, and this example makes it explicitly clear that on other dates (such as 2/16/2012) there is no forward-filling: only the tickers submitted for a given date are included in fetcher_assets for that date.

So I guess the docs are wrong. The behavior actually experienced is as follows:

2/13/2012: AA, WFM  
2/14/2012: AA, WFM, FDX  
2/15/2012: AA, WFM, FDX  
2/16/2012: AA, WFM, FDX, M  

I got a solution from Chris Myles, which I'll share here in case anybody else needs to have their self-serve pipeline fields not forward-filled:

from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import BusinessDaysSincePreviousEvent

days_since_last_update = BusinessDaysSincePreviousEvent(
    inputs=[dataset.asof_date.latest]
)
is_stale = days_since_last_update > days_since_last_update.min()  # + extra_days if needed

pipe = Pipeline(
    columns={
        'my_dataset': dataset.pct_change.latest,
        'asof_date': dataset.asof_date.latest,
        'days_since_last_update': days_since_last_update,
    },
    screen=~is_stale &
           dataset.pct_change.latest.notnull()
)
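
In case it helps anyone else, wiring that up looks roughly like this (my own sketch, assuming the snippet above is wrapped in a make_pipeline() helper that returns pipe):

from quantopian.algorithm import attach_pipeline, pipeline_output

def initialize(context):
    attach_pipeline(make_pipeline(), 'positions')

def before_trading_start(context, data):
    # Only assets whose data actually arrived recently survive the screen,
    # so context.output no longer accumulates stale forward-filled rows.
    context.output = pipeline_output('positions')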