Timestamps in log seem wrong
API

See the logs of the attached backtest:

2011-01-04 14:45 compute:20 INFO Today: 2011-01-04 00:00:00+00:00
2011-01-04 14:45 compute:20 INFO Today: 2011-01-05 00:00:00+00:00
2011-01-04 14:45 compute:20 INFO Today: 2011-01-06 00:00:00+00:00
2011-01-04 14:45 compute:20 INFO Today: 2011-01-07 00:00:00+00:00
2011-01-04 14:45 compute:20 INFO Today: 2011-01-10 00:00:00+00:00
2011-01-04 14:45 compute:20 INFO Today: 2011-01-11 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-12 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-13 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-14 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-18 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-19 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-20 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-21 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-24 00:00:00+00:00
2011-01-12 14:45 compute:20 INFO Today: 2011-01-25 00:00:00+00:00

This one had me chasing ghosts; I thought my custom factor was executing multiple times a day.

# Backtest ID: 59d530ff10b64b5061cda0c5
2 responses

When you attach a pipeline to an algorithm and run a backtest, the pipeline is pre-fetched and pre-computed in chunks. By default, the first chunk is 1 week long (so that you can quickly see that your algo is running), and subsequent chunks are 6 months long. In a little more detail: each time you call pipeline_output, your algorithm checks whether it has already computed the pipeline for the current simulation day. If it hasn't, it computes the next chunk and caches the output. If it has already performed the computation for the current simulation day (usually the case), it simply reads from the cached output.
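The mechanism can be sketched in plain Python. This is an illustrative model, not Quantopian's actual implementation, and the chunk lengths here (3 days, then 5 days) are toy stand-ins for the real "1 week, then 6 months" schedule:

```python
from datetime import date, timedelta

class ChunkedPipeline:
    """Toy model of pipeline_output's lazy, chunked computation."""

    def __init__(self, compute_fn, first_chunk=3, later_chunks=5):
        self.compute_fn = compute_fn      # computes one day's output
        self.first_chunk = first_chunk
        self.later_chunks = later_chunks
        self.cache = {}                   # day -> cached output
        self.chunks_computed = 0

    def pipeline_output(self, day):
        if day not in self.cache:         # cache miss: compute the next chunk
            length = (self.first_chunk if self.chunks_computed == 0
                      else self.later_chunks)
            for offset in range(length):  # whole chunk computed in one go
                d = day + timedelta(days=offset)
                self.cache[d] = self.compute_fn(d)
            self.chunks_computed += 1
        return self.cache[day]            # cache hit: just read it back

pipe = ChunkedPipeline(compute_fn=lambda d: f"factor({d.isoformat()})")
start = date(2011, 1, 4)
for i in range(8):                        # simulate 8 consecutive days
    pipe.pipeline_output(start + timedelta(days=i))
print(pipe.chunks_computed)  # → 2: one 3-day chunk, then one 5-day chunk
```

Only the first request of each chunk does any work; the other days are pure cache reads, which is why whole runs of log lines share a single wall-clock timestamp.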

Since you are logging inside the compute function of a custom factor, the log line is emitted whenever the pipeline is computed. As you can see in your logs, today is incrementing, but several days in a row share the same log timestamp. That is because all of the days in a chunk are computed during a single simulation day.

Does this help?


Thanks, I understand now.

It seems I dug a hole for myself: in before_trading_start I obtain historical SPY data up to the current day using data.history and attach it to the context so my custom factor can use it in its calculations. The problem is that the custom factor only runs at the start of a pipeline chunk, so for every day in that chunk the factor can only see SPY data up to the chunk's first day.
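A toy timeline of this interaction (all names here are hypothetical; this is not zipline code): before_trading_start refreshes the context every day, but compute runs only on the first day of each chunk, so later days in the chunk see a stale snapshot:

```python
# Simulate six trading days with a toy chunk length of 3.
CHUNK = 3

context = {"spy_history_through": None}
factor_input_seen = {}          # day -> the "as of" date the factor saw

for day in range(1, 7):
    # before_trading_start: refresh SPY history up to today
    context["spy_history_through"] = day
    # pipeline_output: compute a whole chunk on its first day only
    if day not in factor_input_seen:
        snapshot = context["spy_history_through"]   # frozen at chunk start
        for d in range(day, day + CHUNK):
            factor_input_seen[d] = snapshot

print(factor_input_seen)
# → {1: 1, 2: 1, 3: 1, 4: 4, 5: 4, 6: 4}
```

Days 2–3 and 5–6 end up computed from data that was current only on the first day of their chunk, which matches the staleness described above.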

I guess what I'm doing isn't how custom factors are meant to be used, but I'd say it's not the most far-fetched scenario either; consider, for example, obtaining futures data, or doing more sophisticated calculations that need to be shared among multiple custom factors.

Now a workaround for me, I guess, would be to include SPY in the pipeline alongside the Q1500US, but then I have to make sure it's excluded from the 'real' pipeline calculations.
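The exclusion step could look something like the following pandas sketch (the factor name and tickers are made up for illustration): SPY is in the pipeline output only so compute could see its data each chunk day, and its row is dropped before the output is used for anything real:

```python
import pandas as pd

# Hypothetical one-day pipeline output: factor values for the Q1500US
# names plus SPY, which was added to the universe only as a data source.
pipeline_out = pd.DataFrame(
    {"my_factor": [0.12, -0.05, 0.33, 0.08]},
    index=["AAPL", "MSFT", "XOM", "SPY"],
)

# Exclude SPY before ranking, ordering, or any other "real" use.
tradable = pipeline_out.drop(index="SPY")
print(list(tradable.index))  # → ['AAPL', 'MSFT', 'XOM']
```

On the platform side, one way to get SPY into the universe may be to OR a static-assets filter into the pipeline's screen, though the exact API for that depends on the pipeline version in use.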

In essence, it seems to me that the implementation details are leaking through the abstraction of the simulation. Do you have any advice on how to deal with this other than limiting my custom factors to pipeline data?