Vectorized Implementation to get TTM data

Hello everyone,

I've implemented the following class to get TTM data for a given fundamental factor:

import numpy as np

from quantopian.pipeline import CustomFactor


class TTM_unique(CustomFactor):
    # Get the sum of the last 4 reported values
    window_length = 260

    def compute(self, today, assets, out, asof_date, values):
        for asset in range(len(assets)):
            # unique asof dates indicate availability of new figures
            _, filing_dates = np.unique(asof_date[:, asset], return_index=True)
            quarterly_values = values[filing_dates[-4:], asset]
            # ignore annual windows with fewer than 4 valid quarterly data points
            if np.count_nonzero(~np.isnan(quarterly_values)) != 4:
                out[asset] = np.nan
            else:
                out[asset] = np.sum(quarterly_values)
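
For reference, I instantiate it roughly like this (a sketch; MsFunds is my alias for the fundamentals dataset, and total_revenue is just the field I happen to use):

# inputs order must match the compute() signature: [asof_date column, value column]
ttm_revenue = TTM_unique(
    inputs=[MsFunds.total_revenue_asof_date, MsFunds.total_revenue],
)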

It works correctly, but the problem is that it is quite slow and sometimes I get a timeout when using it in the backtest environment.
Of course, it's slow because of the for-loop, but I don't know how to vectorize it.
Any help?

I've also attached a research notebook that shows a naive TTM implementation without np.unique, which is 7x faster.

Thanks in advance,
Costantino


Hi Costantino,

Coincidentally, I was recently trying to make this custom factor run more quickly myself. I don't have a vectorized solution, but I have a few suggestions for improving the above implementation (which I initially wrote with help from one of our engineers). The improvements are listed below and implemented in the attached notebook.

  1. Since the TTM_unique custom factor is performing a non-trivial computation (np.unique), you should provide a mask to your factor. Pipeline will still load data for all US equities over the whole lookback window, but the actual compute function will only be executed on assets which pass your mask. This should shave a good chunk of time off the execution. It's worth noting that supplying a mask to the naive TTM custom factor (the one without np.unique) doesn't really help, because its computation is trivial in terms of performance.
  2. Given that you are looking at quarterly data, I think you can help out the unique function by first taking a subset of the 260-day lookback. In the attached notebook, I sampled every 21 rows in the lookback window of each asset before looking for unique values. This is a safe choice given that we expect data points to have asof_dates ~63 trading days apart, and it also helps cut down on the execution time. A rough sketch of both tweaks follows below.
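
Something along these lines (a sketch rather than the exact notebook code; QTradableStocksUS is just an example screen, and the inputs in the commented usage are placeholders for whichever fundamentals columns you use):

import numpy as np

from quantopian.pipeline import CustomFactor
from quantopian.pipeline.filters import QTradableStocksUS


class TTM_unique_fast(CustomFactor):
    window_length = 260

    def compute(self, today, assets, out, asof_date, values):
        # sample every 21 rows counting back from the most recent row, then
        # restore chronological order; quarterly asof_dates are ~63 trading
        # days apart, so no filing is skipped and np.unique scans ~13 rows
        # instead of 260
        asof_date = asof_date[::-21][::-1]
        values = values[::-21][::-1]
        for asset in range(len(assets)):
            _, filing_dates = np.unique(asof_date[:, asset], return_index=True)
            quarterly_values = values[filing_dates[-4:], asset]
            if np.count_nonzero(~np.isnan(quarterly_values)) != 4:
                out[asset] = np.nan
            else:
                out[asset] = np.sum(quarterly_values)


# with a mask, compute() only runs for assets that pass the screen, e.g.:
# ttm = TTM_unique_fast(
#     inputs=[<value_field>_asof_date, <value_field>],
#     mask=QTradableStocksUS(),
# )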

Alternatively, the FactSet Fundamentals dataset has built-in LTM (last twelve months) fields. Depending on your use case, this might be your best option.
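
For example, something like this (the exact field name is an assumption on my part; check the FactSet Fundamentals data reference for the right field code):

from quantopian.pipeline.data import factset

# sales_ltm: trailing-twelve-month revenue (field name assumed; see the
# FactSet Fundamentals data reference for the exact code)
revenue_ttm = factset.Fundamentals.sales_ltm.latest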

Let me know if this helps.


Hello Jamie,

thanks a lot for your suggestions, but unfortunately my algo still gets a timeout even with your improvements.

I've started working on a vectorized implementation, but the first problem I ran into is that the numpy version currently used by Quantopian doesn't support the axis argument to np.unique (new in version 1.13.0):
https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html

Are there any plans to upgrade numpy?

I've just noticed that even np.unique with axis wouldn't be that helpful, because it returns unique rows or columns.
What we need is the unique values within each column... quite complex to vectorize... :-(
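
To illustrate the mismatch (this needs numpy >= 1.13 for the axis argument):

import numpy as np

a = np.array([[1, 5],
              [1, 5],
              [2, 5],
              [2, 5]])

# np.unique with axis=0 deduplicates whole *rows*:
np.unique(a, axis=0)                               # array([[1, 5], [2, 5]])

# what the factor needs is the unique values of each column separately,
# which can have a different count per column, so the result is ragged:
[np.unique(a[:, j]) for j in range(a.shape[1])]    # [array([1, 2]), array([5])]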

I solved my timeout problem! It was so slow because I was performing a time-trend linear regression for each asset inside the loop.
Now I build a single 2D array in the loop and run the regression only once.

This is my code:

import numpy as np
import pandas as pd

from quantopian.pipeline import CustomFactor


def time_trend(Y, allowed_missing=0):
    """
    Vectorized per-column least-squares time trend: regress each column of Y
    on a 0..n-1 time index and return (beta, std_err) for every column.
    If 'allowed_missing' is zero, forward-fill the NaNs (interpolation is too
    slow for the Algo Platform) and replace any remaining NaNs with zero.
    """
    if allowed_missing == 0:
        # interpolate is too slow for the Algo Platform
        # Y = pd.DataFrame(Y).interpolate().fillna(method='bfill').fillna(0)
        Y = pd.DataFrame(Y).fillna(method='ffill', axis=0).fillna(0)
    n = Y.shape[0]
    m = Y.shape[1]
    # time index repeated for every column; shape: (N, M)
    X = np.full((m, n), (np.arange(n, dtype=float))).T
    # mask the time index wherever Y is missing
    X = np.where(np.isnan(Y), np.nan, X)
    X_mean = np.nanmean(X, axis=0)
    Y_mean = np.nanmean(Y, axis=0)
    # shape: (M,)
    XY_cov = np.nanmean((X - X_mean) * (Y - Y_mean), axis=0)
    X_var = np.nanvar(X, axis=0)
    # slope and intercept of the fitted line; shape: (M,)
    beta = np.divide(XY_cov, X_var)
    alpha = Y_mean - beta * X_mean
    Y_est = alpha + np.multiply(beta, X)
    residual = Y - Y_est
    s2 = np.nansum(residual ** 2, axis=0) / (n - 2.0)
    std_err2 = s2 / (n * X_var)
    std_err = np.sqrt(std_err2)
    # Write NaNs back to locations where we have more
    # than the allowed number of missing entries.
    nanlocs = np.isnan(X).sum(axis=0) > allowed_missing
    beta[nanlocs] = np.nan
    # alpha[nanlocs] = np.nan
    std_err[nanlocs] = np.nan
    # return (alpha, beta)
    return (beta, std_err)
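
# Quick sanity check of time_trend on a toy array (just an illustration,
# not part of the pipeline code): two noiseless columns with slopes +1 and
# -1 should give betas of roughly [1, -1] and standard errors of ~0.
# >>> Y = np.array([[1., 5.], [2., 4.], [3., 3.], [4., 2.]])
# >>> time_trend(Y)
# (array([ 1., -1.]), array([ 0.,  0.]))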


def _(item):
    # helper: pair a value column with the total_revenue asof_date column
    # to form the [asof_date, values] inputs expected by the custom factors
    return [MsFunds.total_revenue_asof_date, item]


class TimeTrendQ(CustomFactor):
    outputs = ['trend', 'std_err']
    window_length = 260

    def compute(self, today, assets, out, asof_date, values):
        # take roughly monthly samples (every 21 rows, counting back from the
        # most recent row) before looking for unique values
        values, asof_date = values[-1:0:-21], asof_date[-1:0:-21]
        y = np.empty((4, 0))
        for asset in range(len(assets)):
            # np.unique returns the dates sorted, so the selected quarterly
            # values come out in chronological order despite the reversed slice
            _, filing_dates = np.unique(asof_date[:, asset], return_index=True)
            quarterly_values = values[filing_dates[-4:], asset]
            if np.count_nonzero(~np.isnan(quarterly_values)) != 4:
                quarterly_values = np.full((4, 1), np.nan)
            y = np.hstack((y, quarterly_values.reshape((4, 1))))
        # one vectorized regression across all assets
        (out.trend, out.std_err) = time_trend(y)
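
And this is roughly how I wire it into the pipeline (a sketch; MsFunds is my fundamentals dataset alias defined elsewhere in the notebook, and total_revenue is just the field I use):

from quantopian.pipeline import Pipeline
from quantopian.pipeline.filters import QTradableStocksUS

universe = QTradableStocksUS()
revenue_trend = TimeTrendQ(inputs=_(MsFunds.total_revenue), mask=universe)

pipe = Pipeline(
    columns={'trend': revenue_trend.trend, 'std_err': revenue_trend.std_err},
    screen=universe,
)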

That will do it. I always feel uncomfortable sharing code in the forum that uses a for loop for 2 reasons:
1. I can't shake the feeling that there has to be a vectorized solution (still hoping someone comes up with one for this example!).
2. I'm always worried that someone will accidentally or unknowingly include an expensive computation in it (happens to everyone, especially me).

I'm glad you figured it out!