Using historical data in a pipeline filter

I am working on a filter that assesses whether a particular pattern has occurred. I already have it working (sort of), but I want to turn it into a pipeline filter, because the computational part takes in about half a year's worth of low, high, and volume data. However, I am struggling with the input part.

I am currently building off of what I found in [this post][1]. Will adding the line `window_length = 100` let me fetch 100 days of data to work with? Like so?

```python
def Test(**kwargs):
    kwargs['window_length'] = 100

    class TestFilter(CustomFilter):
        inputs = [USEquityPricing.low, USEquityPricing.high, USEquityPricing.volume]
        window_length = 100

        def compute(self, today, assets, out, low, high, volume):
            # magic happens here, resulting in `result` being either True or False
            out[:] = result

    return TestFilter(**kwargs)
```


And if so, what will the data I need to work with look like? In the notebook, I have it working for one asset, where high, low, and volume are retrieved in one dataframe using `hist = get_pricing(my_symbol, fields=['high', 'low', 'volume'], start_date=start_date, end_date=end_date, frequency='daily').dropna()`. But with more assets in my universe going through the pipeline, what will that input data look like?
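For context, my current understanding (an assumption based on how pipeline terms generally receive data, not something I have verified against the docs) is that inside `compute` each input arrives as a separate 2D numpy array of shape `(window_length, number_of_assets)`, with one row per day and one column per asset, rather than as a single dataframe like `get_pricing` returns. A minimal numpy sketch of that assumed layout (shapes and names are hypothetical):

```python
import numpy as np

# Hypothetical dimensions: a 100-day window and 5 assets in the universe.
window_length, n_assets = 100, 5

# Each pipeline input (low, high, volume) would be its own 2D array:
# rows are days (oldest first is my assumption), columns are assets.
low = np.random.rand(window_length, n_assets)
high = np.random.rand(window_length, n_assets)
volume = np.random.rand(window_length, n_assets)

# Under that assumption, the most recent day's lows are the last row:
latest_lows = low[-1]  # one value per asset, shape (n_assets,)

print(low.shape)          # (100, 5)
print(latest_lows.shape)  # (5,)
```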

The reason I am asking is that I not only need, say, 100 days of data per asset for high, low, and volume, but I also need to slice that data into different date ranges afterwards (i.e. first look at the full 100 days, then at a subset of 60 days, etc.).
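If the inputs do come in as days-by-assets arrays, then taking a sub-window would just be row slicing with numpy. A sketch of what I have in mind (pure numpy, all names hypothetical, and assuming rows are ordered oldest to newest):

```python
import numpy as np

# Hypothetical 100-day window across 4 assets.
window_length, n_assets = 100, 4
high = np.arange(window_length * n_assets, dtype=float).reshape(window_length, n_assets)

# Full 100-day window:
full_window = high          # shape (100, 4)

# Most recent 60 days, assuming the last row is the newest day:
last_60 = high[-60:]        # shape (60, 4)

# A per-asset statistic computed over each sub-window,
# e.g. the highest high in the period, one value per column/asset:
max_100 = full_window.max(axis=0)  # shape (4,)
max_60 = last_60.max(axis=0)       # shape (4,)
```

The appeal of this approach is that one `window_length = 100` fetch would cover all the shorter lookbacks, so the 60-day subset would not need a second data request.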