We have a number of platform updates to share:
One of our goals this year is to add a lot more data to Pipeline so that you have more ways to come up with successful strategies that are eligible for the contest (and an allocation). Today, we added 3 new datasets from FactSet’s catalog, and all 3 are usable in the contest. Check them out and see if they spark any new ideas!
To learn more about each dataset and how to use it, see the Data Reference:
We plan on making a follow-up post that covers each dataset in more depth, but in the meantime, the data reference should give you a lot of information on each one!
In Research, running a pipeline with run_pipeline now displays a progress bar to give you a sense of how much of your pipeline has been computed and how much is left. At a high level, it determines the total number of terms the pipeline needs to compute and reports the percentage that have already been computed. This should make it easier to work with pipelines in research, especially those that take longer to run.
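The estimate described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Quantopian's actual implementation; the function name `progress_fraction` and its arguments are made up for this example.

```python
def progress_fraction(computed_terms, total_terms):
    """Estimate pipeline progress as the fraction of terms already computed.

    This mirrors the high-level description: total terms in the pipeline's
    dependency graph, divided into the count computed so far.
    """
    if total_terms == 0:
        return 1.0  # an empty pipeline is trivially complete
    return computed_terms / total_terms


# Example: 30 of 120 terms computed -> 25% progress
print(f"{progress_fraction(30, 120):.0%}")
```

A progress bar would simply re-render as this fraction grows, which is why longer pipelines with many terms benefit most from the feedback.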
We upgraded the hardware, or ‘instance class’, on which backtests are executed, so you should start to see backtests running a little faster than before. Generally speaking, the largest speed improvements come from software updates, but this hardware upgrade should yield a modest gain. It’s also worth noting that the new instance class has a bit more memory, so if you had a backtest that ran out of memory before, try running it again.
Previously, the debugger crashed for algorithms that used certain quantopian module attributes. Most commonly, this occurred in algorithms that used the optimize.experimental module. We pushed a fix, so it should no longer crash.
Thanks to Gary for originally reporting the issue.
For those who celebrate it, happy 4th of July!