Need help with custom factor

Trying to recreate some work that I did before on an old account. Can anyone help me get this working inside of a custom factor?

[Notebook attached]
6 responses

Has anyone else gotten this error before: "TypeError: Term.__getitem__() expected a value of type Asset for argument 'key', but got int instead."

The first issue is that the order of the data arguments to the compute method must match the order in which they are specified in inputs (after the fixed self, today, assets, and out arguments).

    # Probably not what was intended
    class ML(CustomFactor):
        inputs = [USEquityPricing.close, USEquityPricing.open, USEquityPricing.volume, USEquityPricing.high, USEquityPricing.low]
        window_length = 246
        def compute(self, today, assets, open, high, close, Volume, low, out):

    # This is correct
    class ML(CustomFactor):
        inputs = [USEquityPricing.close, USEquityPricing.open, USEquityPricing.volume, USEquityPricing.high, USEquityPricing.low]
        window_length = 246
        def compute(self, today, assets, out, close, open, volume, high, low):

However, that's not the problem (yet). A question: what are you expecting the DataFrame df inside the compute method to look like? What should the index and the columns be? Should it be a multi-indexed DataFrame or a single-indexed DataFrame? Remember that each of the 'inputs' (close, open, volume, high, low) is a 2-dimensional NumPy ndarray. The number of rows equals the window_length: there is one row for each trading day. There is one column for each asset, and the column labels can be found in the assets input, so the number of columns equals the length of assets.
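To make those shapes concrete, here is a small NumPy sketch (with made-up dimensions, prices, and sids, not your actual data) of what pipeline hands to compute:

```python
import numpy as np

# Hypothetical dimensions: a 5-day window over 3 assets.
window_length, n_assets = 5, 3

# Each pricing input arrives as a 2-D ndarray:
# one row per trading day (oldest first), one column per asset.
close = np.linspace(10.0, 24.0, window_length * n_assets).reshape(window_length, n_assets)

# The column labels live in the separate 'assets' argument (made-up sids here).
assets = np.array([24, 5061, 8554])

# Rows = window_length, columns = number of assets.
print(close.shape)
```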

I think the error stems from

        df = pd.DataFrame.from_items(self.inputs, columns=['Adj. Close', 'Adj. Open', 'Adj. Volume', 'Adj. High', 'Adj. Low'])

That is probably not generating the DataFrame you expect: self.inputs is a list of pipeline terms (BoundColumn objects), not the data arrays themselves.


I hope this helps answer your question.

[Notebook attached]

The typical format to store 2-D arrays of values (e.g. close, open, high, low, volume) is a multi-indexed DataFrame. This is the pipeline output format in a notebook. The level 0 index is the dates, the level 1 index is the securities, and the columns are the values (i.e. close, open, high, low, volume). If you want to use a DataFrame, that is the way to go.
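As a sketch (toy dates, sids, and prices, not the notebook's actual data), one way to assemble the 2-D input arrays into that multi-indexed shape is:

```python
import numpy as np
import pandas as pd

# Toy stand-ins for what pipeline passes compute(): 2 days x 3 assets.
dates = pd.to_datetime(["2018-01-02", "2018-01-03"])
assets = [24, 5061, 8554]
close = np.array([[10.0, 20.0, 30.0],
                  [11.0, 21.0, 31.0]])
open_ = close - 0.5

# ravel() walks each array row by row (day by day), which lines up with a
# (date, asset) MultiIndex built by from_product.
index = pd.MultiIndex.from_product([dates, assets], names=["date", "asset"])
df = pd.DataFrame({"close": close.ravel(), "open": open_.ravel()}, index=index)
```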

To then do some calculations on this DataFrame, for each security, use the groupby(level=1).apply(my_function) method. That allows you to write a custom function and pass it to the apply method. The apply method will then take the data for each security (a DataFrame holding just that security's rows) and pass it to the function. One can return a scalar value, a list of scalar values, or a pandas Series. In this case a scalar value is probably appropriate.
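A minimal sketch of that pattern, using toy data and a made-up per-security function:

```python
import pandas as pd

# Toy multi-indexed frame: level 0 = date, level 1 = asset.
index = pd.MultiIndex.from_product(
    [pd.to_datetime(["2018-01-02", "2018-01-03"]), ["A", "B"]],
    names=["date", "asset"],
)
df = pd.DataFrame({"close": [10.0, 20.0, 11.0, 22.0]}, index=index)

def my_function(sid_df):
    # sid_df holds every row for one asset; return one scalar per asset.
    return sid_df["close"].iloc[-1]

# One value per security, merged into a Series indexed by asset.
result = df.groupby(level=1).apply(my_function)
```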

Attached is a notebook which puts each of the inputs into a multi-indexed DataFrame. I also put some print statements into the code so you can see the data at each step. There may be more efficient ways to do this, but this approach seemed the most straightforward. I just made up a 'dummy' function, but that is where you would place any sklearn or other methods.

[Notebook attached]

I see where you use my_result = sid_df.iloc[-1].close and then return(my_result)

    # Rather than looping over each security it's much faster to group by security and apply a function
    computed_accuracy = df.groupby(level=1).apply(my_df_function)
    out[:] = computed_accuracy

Does this mean that you can only pass back one value for each security via the function's return value?

The pandas apply method calls a user-specified function. That function can return either a single value or a Series. If the function returns a single value, then the apply method merges the values together into a Series (one value per security). If the function returns a Series, then it merges these together into a DataFrame (one row per security, one column per returned value). There is a bit more information in the pandas documentation: https://pandas.pydata.org/pandas-docs/version/0.18/generated/pandas.DataFrame.apply.html.
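A toy illustration of both cases (made-up data, hypothetical column names):

```python
import pandas as pd

index = pd.MultiIndex.from_product(
    [pd.to_datetime(["2018-01-02", "2018-01-03"]), ["A", "B"]],
    names=["date", "asset"],
)
df = pd.DataFrame({"close": [10.0, 20.0, 11.0, 22.0]}, index=index)

# Function returns a scalar -> apply merges the results into a Series.
means = df.groupby(level=1).apply(lambda g: g["close"].mean())

# Function returns a Series -> apply merges the results into a DataFrame,
# one row per security and one column per returned value.
stats = df.groupby(level=1).apply(
    lambda g: pd.Series({"first": g["close"].iloc[0], "last": g["close"].iloc[-1]})
)
```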

What you then do with this Series or DataFrame inside the custom factor's compute function is up to you. However, the custom factor can only return a single scalar value per security per output. (A factor can have multiple outputs; see https://www.quantopian.com/posts/new-feature-multiple-output-pipeline-custom-factors).

Does that answer the question?