I wonder if parameter exploration of a strategy has been considered here (running the same strategy with different parameter values).
pypet may be an interesting library for this.
"pypet: A Python Toolkit for Data Management of Parameter Explorations" http://journal.frontiersin.org/article/10.3389/fninf.2016.00038/full
The user creates an algorithm which uses "parameters" (a user interface should allow defining the parameters of an algorithm... for example simply via a YAML file with a default value for each parameter). We can have an ID for this algorithm (algorithm_id).
The user opens a notebook.
He imports pypet and defines an environment (with algorithm_id).
He manually adds parameters and their default values (or parameters are automatically added thanks to this strategy ID and a YAML parameter file).
He defines how parameters should be explored (a simple cartesian product, for example).
He runs the environment (with start date, stop date...), which puts several backtests into this user's task queue.
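Conceptually, the cartesian-product exploration step looks like this (a stdlib sketch of the idea rather than pypet's actual API; the parameter names and default values are hypothetical):

```python
from itertools import product

# Hypothetical strategy parameters with defaults, as loaded from a YAML file
defaults = {"fast_window": 10, "slow_window": 50, "stop_loss": 0.02}

# Values to explore for a subset of the parameters
grid = {"fast_window": [5, 10, 20], "slow_window": [50, 100]}

def explore(defaults, grid):
    """Yield one full parameter set per point of the cartesian product."""
    names = list(grid)
    for values in product(*(grid[n] for n in names)):
        params = dict(defaults)            # start from the defaults
        params.update(zip(names, values))  # override the explored ones
        yield params

runs = list(explore(defaults, grid))
# 3 x 2 = 6 backtest requests would be queued
```

Each element of `runs` would become one backtest request in the user's task queue.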
Some limits should probably be imposed to avoid excessive hardware requirements.
Let's say the queue could only contain 1000 backtest requests and only 5 backtests could run in parallel (that's just an example).
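Such limits could be enforced at submission time; a minimal sketch (the limit values match the example above, and the function names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

MAX_QUEUED = 1000   # cap on pending backtest requests per user
MAX_PARALLEL = 5    # cap on concurrently running backtests

def submit_backtests(param_sets, run_backtest):
    """Reject oversized explorations, then run with bounded parallelism."""
    if len(param_sets) > MAX_QUEUED:
        raise ValueError(
            f"exploration of {len(param_sets)} runs exceeds "
            f"the {MAX_QUEUED}-request limit"
        )
    # At most MAX_PARALLEL backtests execute concurrently; results
    # come back in submission order.
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
        return list(pool.map(run_backtest, param_sets))

# Toy usage: "backtest" is just a squaring function here
results = submit_backtests([{"x": i} for i in range(10)],
                           lambda p: p["x"] ** 2)
```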
The user could go to a task dashboard to see the current status of tasks.
The task dashboard would contain columns such as:
- Task created at ...
- Task run at ...
- Task stopped at ...
- Status (scheduled, running, stopped, error, completed...)
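One row of that dashboard could be modeled like this (a sketch; the field and class names are just suggestions):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(Enum):
    SCHEDULED = "scheduled"
    RUNNING = "running"
    STOPPED = "stopped"
    ERROR = "error"
    COMPLETED = "completed"

@dataclass
class Task:
    """One row of the task dashboard."""
    task_id: int
    created_at: datetime
    run_at: Optional[datetime] = None      # set when the backtest starts
    stopped_at: Optional[datetime] = None  # set when it stops or completes
    status: Status = Status.SCHEDULED

# A task is created as "scheduled", then updated as it progresses
task = Task(task_id=1, created_at=datetime.now(timezone.utc))
task.status = Status.RUNNING
task.run_at = datetime.now(timezone.utc)
```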
Such a workflow would allow classical optimization approaches such as walk-forward optimization (WFO)
https://en.wikipedia.org/wiki/Walk_forward_optimization also known as walk-forward analysis (WFA).
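The core of WFO is splitting the date range into rolling (optimize, validate) windows; each in-sample window would trigger one parameter exploration like the one above, and the best parameters would then be backtested on the following out-of-sample window. A sketch of the window splitting (function name and the period counts are illustrative):

```python
def walk_forward_windows(start, stop, in_sample, out_of_sample):
    """Split [start, stop) into rolling (optimize, validate) windows.

    Each window pair optimizes parameters over `in_sample` periods,
    then validates the chosen parameters on the next `out_of_sample`
    periods; the window then slides forward by the validation length.
    """
    windows = []
    t = start
    while t + in_sample + out_of_sample <= stop:
        windows.append(((t, t + in_sample),
                        (t + in_sample, t + in_sample + out_of_sample)))
        t += out_of_sample  # slide forward by the validation length
    return windows

# e.g. 100 periods, optimize on 60, validate on the next 20
wins = walk_forward_windows(0, 100, 60, 20)
```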
Another possible resource to consider:
- https://www.quantopian.com/posts/to-run-a-backtest-in-research (with broken links to examples)