Feature requests! What changes would you like to see?

We launched this website with some bare-bones features. We wanted to get enough out there so you could try it, and then tell us where we should take it. What would you like to see? What features do you need? What data sources would you like? What data transforms do you use? What forum features will make this community stronger?

Thank you for all your feedback.


The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances. All investments involve risk, including loss of principal. You should consult with an investment professional before making any investment decisions.

184 responses

Hi, I've just received the invite and it looks great!

I'm thinking it might help to make some Python modules available for the algorithms. For example, is numpy available?



Hi Pithawat,

Yes, many popular scientific python libraries are available - including numpy, scipy, and pandas. You can use them as you would in any python code, just import and enjoy:

import pandas  

Please let us know if there are libraries you would like us to add!
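Outside the IDE, the same imports behave like any other Python environment. A minimal sketch using plain numpy and pandas (nothing here is Quantopian-specific):

```python
import numpy as np
import pandas as pd

# A 3-day rolling mean with pandas and simple returns with numpy --
# the same calls work inside an algorithm once the libraries are imported.
prices = pd.Series([10.0, 10.5, 10.2, 10.8, 11.0])
rolling_mean = prices.rolling(3).mean()
returns = np.diff(prices.values) / prices.values[:-1]

print(rolling_mean.iloc[-1])  # mean of the last three prices
print(returns[0])             # first period's return: 0.05
```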




When I click on the various community discussion items, the performance graph can take a while to load. It would be nice to be able to review the source code of the algo while the performance graph is being built in the background.

Great idea Rob, we'll give it some thought!

The order(sid, amount) function requires amount to be the number of shares. If I'm trading a portfolio, each share has a different price. In that case I would like to order not a fixed number of shares, but however many shares I can buy for X dollars. That way I could split my portfolio more evenly.



@Thomas, I took a quick stab at a helper function to let you place orders in cash instead of in shares. Add the following code to whichever project you'd like to use this cash-based approach in, and call the new "order_in_cash" function instead of your normal order function.

def order_in_cash(data, security, cash):  
    shares = cash // data[security].price  
    if shares < 0:  
        shares += 1  # to fix python's flooring of negative integer division  
    order(security, shares)  

One key difference to note: you have to pass in "data" from your handle_data function so that the helper can access stock prices. Otherwise, it works as you mentioned.

Andrew: Looks great, thanks a bunch! It's good you caught that weird flooring problem of negative integer division. I wasn't aware of it, and it's quite odd.

I wonder if it makes a difference though, have you tried running a backtest with fixed cash instead of fixed shares?
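The flooring quirk Andrew worked around is standard Python behavior: floor division rounds toward negative infinity rather than toward zero, so a negative cash amount would otherwise order one share too many. A quick illustration:

```python
# Python's // floors toward negative infinity, not toward zero.
print(7 // 2)    # 3
print(-7 // 2)   # -4, not -3

# The helper's adjustment recovers truncation toward zero for shorts:
cash, price = -700.0, 200.0
shares = cash // price   # -4.0 (floored)
if shares < 0:
    shares += 1          # -3.0 (truncated toward zero)
print(shares)            # -3.0
```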

Please do consider a re-run feature for backtests. That would make it easy to debug an algorithm without setting the backtest time period again and again.

+1 on Xingzhong feature request :)

Maybe a simple visualization of execution against short-term (random or real) market data on a chart, to preview my algorithm's actions? This could be used before a full-scale, long-term backtest against market data.

1) Question & feature request - what is the benchmark that shows up in my backtests? (Just the average returns of the stocks you initialize?) And the feature request: make it so I can explicitly choose my benchmark.

2) Allow for user groups / selective sharing of algos. So allow me to share an algo with colleague X, but not post it to the forum. (Yes, I can just email the code, but it would be nicer to have more of a user-group feel.)

3) Add an option to "winsorize" returns for outlier handling - a notorious issue with backtests is hidden outliers in returns data. Sometimes they are obvious: you trade a stock and it makes 10,000% in one day (oops - pricing error, currency issue, etc.), but sometimes these errors are hidden. Winsorizing your returns data allows you to set sanity bounds on the returns you think a stock can achieve. So you might say: clip my returns data at -99% and at +2 standard deviations from the mean return for that time period. Better explained here:

4) Allow me to plot returns for multiple portfolios in one backtest, e.g. show a long book, a short book, and a market-neutral long/short all in one backtest. (I think you guys said you have the whole "data cube" behind the scenes, so it is just a matter of plotting more of the data already generated.)
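The winsorization in item 3 can be sketched in plain numpy. The bounds here (a hard -99% floor and a cap of two standard deviations above the mean) follow the example in the request; they are illustrative choices, not a standard definition:

```python
import numpy as np

def winsorize_returns(returns, floor=-0.99, n_std=2.0):
    # Clip a return series to [floor, mean + n_std * std].
    returns = np.asarray(returns, dtype=float)
    upper = returns.mean() + n_std * returns.std()
    return np.clip(returns, floor, upper)

# 50 ordinary daily returns plus two bad data points: a 10,000% spike
# and a -100% wipeout.
raw = np.array([0.01] * 50 + [100.0, -1.0])
clean = winsorize_returns(raw)

print(clean.min())              # -0.99: the wipeout is floored
print(clean.max() < raw.max())  # True: the spike is pulled in
```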



Hey Jess!

1) The current benchmark is an ETF that looks like the S&P 500. Yes, we want the benchmark to be configurable. I think we're going to squeeze that into the schedule sooner than later.

2) Yes, there's definitely a feature around partial-sharing that we need to build. People want to work collaboratively. We need to solve a way to do that. We started with all-or-nothing, and we will add "some."

3) Wow, that's new to me, and it makes a lot of sense. I'll see if we can get someone to put that into Zipline, and then make it available in Quantopian.

4) This one, I think, we already can do! (Unless I'm missing something). In your algo you need to compute all three of those numbers. Then, add this line to your algo:

record(short=your_short_var, long=your_long_var, neutral=your_neutral_var)  

And the graph will magically appear in your backtest. Full doc here.

Great feedback, thanks.

Hi all,

  • Being able to change the date from, date to and initial capital directly in the backtest screen would be nice.
  • Being able to change the date format would be nice too (currently it's MM-DD-YYYY in the IDE screen and YYYY-MM-DD in the backtest screen; I'm more familiar with DD-MM-YYYY or some "universal" format like 01-JAN-2013).
  • Being able to run algorithms from the beginning of day D to end of day D (currently you can only run it from beginning of day D to beginning of day D + 1)
  • Changing the chart's period from one day to one minute if we run our algorithms on one day only and with minute-based data.
  • Being able to use simple transform functions on minute-based window.

That would be great to see :).

Thanks a lot,

Hi All,
Great work. The features I would like to see are:
1. Version control for my algorithms. Many times I wish I could get back an old change; cloning the algo for every version creates too many algos to manage.
2. More items mapped in the record function (the current limit is 5).
3. The ability to get a price on an arbitrary date. (For example, if I want to quickly compare AAPL's yearly performance against SPY for the last 10 years, I have to run a backtest over the whole 10 years. There could be other scenarios where we might want to tap into the database for a single data point.)
4. Use some stock other than AAPL for the examples :). The user gets thrown off whenever he sees the example and the huge gain his algo gets.

Thanks and Regards

+1 for version control!

I've been cloning my own algorithms to do this and it results in tons of "cloned from" algorithms that are difficult to manage

All of these are great, thank you. Some of them are closer than others, in terms of how quickly we can deliver them.



I would like to receive the forum discussions by mail as well... then the plots and metrics need to be included as a picture.

Best Regards

Some machine learning models require a repetitive training and validation scheme. A partial answer was already given in the ML topic, but a quick look doesn't teach me how to train, for example, a neural network model with many parameters to fit (e.g. neuron weights) and many characteristics to optimize (e.g. number of neurons, hidden layers, ...). Ideally, one would be able to perform a cross-validation scheme online.

Secondly, how do you deal with algorithms with high computational demands? With the increasing popularity of ML techniques in this domain, models will become more advanced and more computationally demanding. What is the limit? Will users have to buy computational power, or will they have to train their models offline on their own computers?

My wish list:

  1. The ability to run an algorithm in batch mode in the background. Thomas Wiecki gives a nice example where he uses cloud computing to optimize a parameter in an algorithm. One risk of the current Quantopian backtesting paradigm is that it does not facilitate rapid exploration of the full algorithm parameter space (i.e. the response surface).
  2. Full plotting capability, via matplotlib.
  3. A secure virtual desktop (running in "the cloud") that would allow free-ranging access to the entire Quantopian data set. Basically, this would provide the ability to run zipline (and the browser-based backtester), but with full access to the data. The desktop should also have access to parallel/cluster/GPU/cloud computing.
  4. The backtest output should include an explicit reporting of commissions paid/expenses. I've poked around, and nowhere can I find an accounting of the expenses. Perhaps something like the expense ratios reported for mutual funds could be adopted?


A forum

+1 for full plotting capability :)

@Fabian - that one we do already. At the bottom of the Quantopian homepage, lower-left corner, turn "auto listen" on.

Machine learning and other parameter optimization methods are definitely on the road map.

As for high computational demand, I think we're going to cross that bridge when we get to it. If you have an algo that you want to run that is having performance problems, please share it with us and we'll see what we can do. We're regularly working on our performance.

The ability to plot and otherwise explore the data is also on the list. Data exploration is very exciting, and it's also a very big project that we want to do well. That will come after live trading.

We don't currently show commissions anywhere. I'll look at exposing those.

I would like see a feature that lets you compare two algorithms. Something along the line of comparing backtests from two different algorithms.
I'm not sure if it's already possible.

You can compare the code between two backtests for the same algorithm, to see the diff of the code. Go to the backtest list for your algorithm.

Hello Fawce (& Ben),

I think that Ben is talking about a convenient way of comparing the backtest results of two or more algorithms, not the code (Ben, correct me if I'm wrong).

This is basically the batch-mode functionality that I suggest above:

  1. Define a series of runs over N parameters (e.g. in a table or an array within the algorithm).
  2. For each run, store the results, along with the corresponding parameter settings.
  3. When the batch is complete, notify the user (e.g. an e-mail).
  4. Provide the results in an online table and/or a downloadable file for offline analyses.
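For what it's worth, the bookkeeping in steps 1-4 is straightforward to sketch in plain Python. `run_backtest` below is a hypothetical stand-in for whatever the platform would actually invoke; the notification step is omitted:

```python
import csv
import io
import itertools

def run_backtest(params):
    # Hypothetical stand-in: a real run would execute the algorithm
    # and return its performance metrics.
    return {"total_return": 0.01 * params["lookback"] - 0.001 * params["threshold"]}

# 1. Define a series of runs over the parameters.
grid = [{"lookback": lb, "threshold": th}
        for lb, th in itertools.product([10, 20, 30], [1, 2])]

# 2. For each run, store the results with the parameter settings.
results = [dict(p, **run_backtest(p)) for p in grid]

# 4. Provide the results as a downloadable table (CSV here).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["lookback", "threshold", "total_return"])
writer.writeheader()
writer.writerows(results)
print(len(results))  # 6 runs
```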


@Dan: great, thanks!

Have not read the other posts here, so sorry if I'm repeating something. There needs to be some information about portfolio turnover and sensitivity to slippage. Ideally there would also be defaults for trading-cost analysis. (Coming from FI, we look at per-basis-point effects on IR given the simulated turnover.)

+1 for version control!
+1 for matplotlib plotting

Some additional nice-to-haves:

  1. Collapsible code blocks, to hide/reveal code.
  2. A directory structure and the ability to specify a search path, for storing custom functions outside of the main algorithm script (similar to MATLAB).
  3. The ability to block comment/uncomment sections of code.

@Ray, can you tell me more about what you're looking for? We do include a slippage model and a trading cost model already. But it sounds like you are looking for something more? Additional reporting?

@Grant we have that 3rd one done, but it's not well documented. Highlight your code section, then press ctrl-/.

extended-hours trading. (after-hours trading). i have a separate post about this, but this would make a big difference.
if not that, then being able to trade against user-supplied market data.
if not that, then the ability to execute trades against arbitrary data (i.e. trade at arbitrary time/price)

@Dan, yes. I'm simply asking for the realized turnover and a chart displaying how sensitive the strategy is to slippage in execution (a function of turnover). These are among the first things I look at when someone shows me a backtest. Not having those displayed with the backtests makes evaluating the quality of a model on quantopian difficult.

@Dan, sorry me again... getting the mails works, though a feature request would be to get the title of the thread as the subject of the mail.
Right now it says just:
xxx has submitted a new post in Quantopian

This makes it quite hard to follow.

Yeah, that drives me crazy too. Will do.

That's interesting Ray. I'm going to have to learn more about that.

I don't know what kind of data Quantopian has, but an important factor when determining market impact in high-turnover portfolios is the depth of the order book. E.g., there may be an asset that almost never trades but whose order book is enormous. Take a look at the Euribor contract. I don't mind being 100% of the volume for five minutes because I'm not going to move the market (I may, however, pay the spread). In other assets, there may be huge volume but a shallow order book, and if my strategy is taking liquidity from the market I could be moving the price significantly (significance of course depends on the importance of slippage in the deterioration of my alpha), even if I'm under 10% of the volume (well below the 25% limit Quantopian imposes).

Ah, thank you, that helps clarify quite a bit.

Some form of GitHub integration would be nice, for algorithm/function development collaboration (I realize that there would be privacy/security considerations). A while back, Thomas W. outlined some ideas along these lines - let me know if it'd be helpful to pull them up. --Grant

It would be neat to have buy/sell support for Bitcoin.

Hi Daniel,

I'm not so familiar with Bitcoin, but it seems like Quantopian would have to take orders in Bitcoin and then convert them to dollars before sending them to Interactive Brokers, right? Unless IB also accepted Bitcoin...

Is there any precedent for securities trading in Bitcoin?

What about avoiding currency altogether, and just trading shares peer-to-peer? Bitshares... Somewhere I think I saw a quote by Fawce that there may be an untapped worldwide market of 10 million individual, retail quantitative traders...if so, maybe they could just trade amongst themselves and cut out the middlemen? Should be doable (brokerages hold share records electronically, right?), but I doubt that it is part of the Quantopian business plan.

If you look at Quantopian and their funding, I think this kind of thinking will go nowhere, since they are looking to plug into an existing retail business model already established by IB and other online brokers by providing an improved API and backtesting solution...nothing revolutionary, at least for the near-term (but definitely innovative and entrepreneurial).


Would it be possible to allow us to make folders in the "My Algorithm" section?
They are starting to get unwieldy and difficult to find.

Michael, I agree. We have some organization work to do on the My Algorithms page for sure. I haven't planned that out yet.

Hi Dan,

I suggest a test instance of Quantopian, so that members can assist in testing and de-bugging, prior to release. Part of the test instance could include an interactive tool for developing test cases and plans, and displaying test results.

You might also consider publishing a build/update/release schedule, with an accompanying change and test report to be published prior to release.

Generally, you need to address the concern voiced by one member that he'll wake up one day and find that his live trading algorithm is broken (and, in a worst-case scenario, losing money due to a change in the underlying code that he wasn't aware of).


What I would like to see changed:
1) We need more documentation!
2) Direct portability between Zipline and Quantopian.
3) Support for alternative asset classes.

These three gaps have significantly reduced my reliance on Quantopian/Zipline. I am extremely optimistic about Zipline and think it could be an extraordinary tool in my arsenal; however, I can't really get off to the races simply because it's extremely difficult to explore all of its features. For instance, the batch transform is one of the staple features, but the limited documentation and examples are on the Quantopian side, so when I try to port a strategy over to Zipline things quickly fall apart. I seldom get on Quantopian simply because I am trading futures, not equities, and rely on local data. I am currently trying to put together a Frankenstein version of Zipline that will allow me to backtest futures, but handling the data and figuring out how to use the module are major hurdles. I would love any thoughts or input on this.

If you guys decide to implement futures down the line, I don't think you should use continuous time series, because no matter how you adjust them, they simply don't reflect reality. Rolling into a further-out contract typically has a negative price impact. You should have data for all the active contracts at any given time and let the user determine which contract to roll into via their algo, typically the contract with the greatest open interest. You also need to consider differences in tick size/value as well as changes in margin. Many of the platforms I have seen in the past assume a .01 minimum tick and static margin across time.

If I may, I would like to suggest the following metrics to help evaluate models:
1. Per-trade MaxProfit/MaxLoss.
2. Probability of Profit/Loss, along with Expected Profit or Expected Loss.
3. Combine the tables of Return and Benchmark Return that tabulate trailing 1-, 3-, 6-, and 12-month returns, so that it's easier to compare.


I'd love to see two things:
1. A convergence between zipline and quantopian. E.g. quantopian includes a graph of the benchmark portfolio value. I've found that difficult to reproduce in zipline, which is probably me doing something stupid, but it'd be nice if I didn't have to recreate this since it's already present in quantopian.
2. The benchmarking data to extend back further (e.g. by using 10-year bond data instead of or in addition to the treasury yield curve data) and the ability to easily turn off benchmarking when the primary data set extends back further than the benchmarking data set.

Better search feature for the community posts.

I have found doing a site search through Google returns more results than using the search box.

Also, I would like an easier way to browse the "forum" and check out posted algorithms.

I would like having the capability to compare my portfolio to another portfolio and not only to the benchmark.

Is there a way to indent a block of code in the editor? I found that un-indenting works by selecting code and pressing [shift]+[tab], but attempting to indent by selecting code and pressing [tab] just replaces your selection with a tab.
Also, a find-and-refactor feature (or even just find/replace) would be pretty slick.

@Vesper, indenting follows the ipython shortcuts. Cmd + ] to indent, Cmd + [ to unindent.
at least in os x, I imagine windows would be ctrl instead.

@Brandon: awesome, thanks. I'll look around for some more useful nuggets.

1) Please include an easier way to use time in Quantopian. Something like recording each day's bar would be great!
I think most traders generally work with bars rather than line charts.

2) Groups to share my algos only with selected people

3) more technical indicators

Please make the transaction record downloadable. I find this especially important because I typically import some arbitrary data to test the vulnerability of a trading system, e.g. on flash-crash days and other doomed days. If the transaction records could be downloaded, I could verify them further. Thanks!

On the Algorithms page, for the list of Live Trading Algorithms, it would be helpful to show a short data summary alongside each live trading algorithm.
That would help me see how all my live trading algorithms are doing without having to click through each one. If the developer could decide which of his calculated metrics to display in this summary, with a default list provided by you, that would be best.


Please also make it more convenient to add sids. For example: context.stocks = [ingredients(SP500)]

@Vesper and Brandon - thanks, we need to document those better, and make them discoverable.

@Daniel When you say use time, I'm not sure what you mean. I do understand recording a bar on the chart.

Collaboration is a great feature request, I definitely want to build it.

Are there technical indicators outside of ta-lib, or do you think finishing ta-lib is the right step?

@Jiaming Can you tell me more about what you'd do with the downloaded data? I'm not yet understanding what you want to do with a downloaded list.

I agree that it would be awesome to have the SP500 ingredients as a command, but we'd have to license that data from the S&P, and their fees are astronomical.

@Saravanan Thanks for those suggestions. I agree that page needs to be easier.

I had asked earlier in one of the threads why the returns and benchmark returns were all shown as values under 1.0 rather than as percentages.
I was told these were not percentage values and that there was no specific reason for leaving them as values under 1.0 instead of percentage returns.

I just noticed the max drawdowns are shown as percentages.

Some consistency would be nice, preferably with all of them shown as percentage returns and, as I mentioned, with returns and benchmark returns shown side by side.


I haven't read this whole thread, but I keep running into an issue where my log statements get cut off. Sometimes the log statements themselves reach a threshold, and other times I get truncated statements. For example, if I want to print a dataframe with 10 rows and 63 columns I get the following:

DatetimeIndex: 63 entries, 2013-04-08 00:00:00+00:00 to 2013-07-16 00:00:00+00:00  
Data columns (total 10 columns):  
19654 63 non-null values  
19656 63 non-null values  
8329 63 non-null values  
8554 63 non-null values  
19659 63 non-null values  
19660 63 non-null values  
19661 63 non-null values  
19662 63 non-null values  
19657 63 non-null values  
19658 63 non-null values  

I'm not sure if this sort of truncation is a python thing or a Quantopian thing but I am a chronic triple-checker when I'm writing code and I like to print things out and look at what is going on during each frame before moving on. Just a suggestion - maybe add some sort of "extended" log file (possibly one that saves to the user's desktop after the simulation runs, if that is faster than printing each log message in the log panel)
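If the truncation is coming from pandas' summary repr (which kicks in for wide frames) rather than from the log pipeline itself, one workaround is to force a full rendering with DataFrame.to_string() before logging. A sketch:

```python
import pandas as pd

# pandas' default repr may summarize wide frames, but to_string()
# renders every row and column.
df = pd.DataFrame({col: range(5) for col in ["a", "b", "c", "d", "e"]})
full_text = df.to_string()

# Inside an algorithm this would be log.info(full_text); here we print it.
print(full_text)
```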

I suggest adding more detail to your help/API docs on data sources. Basically, I think that you need to describe, technically, exactly how the data are collected and reduced/adjusted/cleaned, for backtesting, paper, and live trading. Perhaps you could simply provide links to your vendors' specifications? Or post their specs. on your website?

It'd be nice if a copy & paste of log output into the community forum would appear as it does in the log output window of a backtest. As it is, spaces are removed and multi-line output ends up on one line. Or is there a trick to getting it to look pretty?

Alternatively, when backtests are shared, you could display the log output in a separate tab.

Thanks, all. I've noted each one. Logs are a continuing challenge for us.

I'll look at expanding our data description, too.

For the record function:

  • Minute-level resolution.
  • Optionally change the plot type from lines to points, with no filling/interpolation (the help page states "Each series will retain its last recorded value until a new value is used").

Just signed up so I may have missed it but I would like to be able to see the values of my variables at each data point in a backtest.

Provide the list of securities, their corresponding symbols, sid numbers, trade date info, etc. - basically everything except the OHLCV data. You could post it every day for download by registered users, so that users could search and filter it, rather than using the simple one-by-one search in the code editor.

Support for Futures contracts

Provide a means to load all of the sids in the database at the start of the backtest. For example:

context.stocks = [list of all the sids]  

I just signed up and have been playing around with writing algos. Four requests -

1) Could the backtest represent the percentage gain of the actual security, as opposed to what appears to be SPY? At the moment I have to bounce from window to window comparing charts to my results.

2) I may have missed this feature but it looks like you can either buy or sell, was hoping it would be possible to short a security?

3) After-hours ordering.

4) Access to the news wire. For things such as earnings reports it would be nice to be able to parse that document, evaluate it and make split second earnings decisions.

Hi Mike - Welcome to Quantopian! Thank you for the feature requests.

Here are some quick answers inline:

1) Could the backtest represent the percentage gain of the actual security, as opposed to what appears to be SPY? At the moment I have to bounce from window to window comparing charts to my results.
> I think you are asking for the ability to define a custom benchmark that will show up in the performance chart. If yes, that is on our to do list. Can't guarantee when you'll see it - but you will see it.

2) I may have missed this feature but it looks like you can either buy or sell, was hoping it would be possible to short a security?
> We do support shorting currently. The way to sell short is simply to sell shares of a security that you don't currently own. So, for example, if I do:

order(sid(24), -100)  

and I don't own any shares of AAPL (sid=24), then the simulation will sell short 100 shares.

3) After-hours ordering.
> Noted as a request - we don't currently support simulating or trading outside of market hours.

4) Access to the news wire. For things such as earnings reports it would be nice to be able to parse that document, evaluate it and make split second earnings decisions.
> We had a great research talk on this at our last NYC meetup - it is a very cool idea! We don't have this built in to Quantopian currently, but we do have the capability to parse external CSV data using Fetcher (see here: ). So if you wanted to build a text scraper/parser and feed in stock-specific data, that would be possible.

It doesn't seem possible right now, but it would be great to be able to implement the following logic (i.e., the choice to trade on the open or on the close):

# Buy at the open and sell at the close of the next bar  
if signal == True:  
    order(symbol, 100, open_price)  
    order(symbol, -100, close_price)  

Alexis, that type of trade is pretty explicitly not permitted by our backtester - trading on an open price implies having foreknowledge. Check out the mechanics of the backtester in the FAQ.

If you want more info, please open a new thread and we can go into more detail. I bet we can find a way to do what you want with the existing code.

1) Feature request: 1-to-1 correspondence between zipline and QP site coding environments.

2) The ability for zipline to work with local CSV files directly (or at least better documentation for it, if it is already possible df=read_csv(".."); handle_data(df)... )


Thanks for the quick answer. In my understanding, the backtester always executes the market order at the close of the next bar. In intraday mode, I agree with you that it's the way to go. However, in the daily mode, I'd like to be able to trade the overnight session (from close to next day's open) and the day session (from open to close) via market-on-open (i.e., standard market order) and market-on-close orders. I guess it's a pretty standard feature for many swing traders.

Koala - for the zipline request, I suggest you put it in the Zipline group - they are very responsive. The one-to-one is definitely for both, of course, thank you.

Alexis - you can do that today in minute mode by coding up things to happen at the beginning and end of the day. We certainly can make that easier for you, and we will.
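The time test itself can be sketched with plain datetime helpers, assuming a regular 9:30-16:00 US equity session; the Quantopian-specific ordering calls are shown only in comments:

```python
from datetime import datetime, time

def is_session_open_bar(dt):
    # True on the first minute bar of a regular session (the bar
    # stamped 9:31, covering 9:30-9:31).
    return dt.time() == time(9, 31)

def is_session_close_bar(dt):
    # True on the last minute bar of a regular session.
    return dt.time() == time(16, 0)

# Inside handle_data, one might do something like:
#   if is_session_open_bar(current_bar_time):
#       order(context.stock, 100)    # approximate market-on-open
#   if is_session_close_bar(current_bar_time):
#       order(context.stock, -100)   # approximate market-on-close

print(is_session_open_bar(datetime(2013, 7, 16, 9, 31)))   # True
print(is_session_close_bar(datetime(2013, 7, 16, 16, 0)))  # True
```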

scikit-learn, for its covariance estimators.

sklearn is already available

Sorry, didn't see that. Is it possible to store order objects?
I tried

in initialize:

context.order_list = []  

and placing an order somewhere in the code:

order_temp = order(sid, amount)  

But that doesn't work, since order_temp is a str object. I want to be able to access the order so I can see how much has been filled, etc.

Bernd, the order() command returns an order id, which you then can use to monitor the order. More info on the various order commands here.

order_temp = order(sid,amount)  

I really, really want to see better documentation and more examples. My request is also for you to write the documentation in layman's terms and not techie terms. I am hoping you want to help traders who know a little bit of programming to use your site. If you just want techies who write trading programs, then the documentation you provide is probably fine. However, people like myself, who trade and want to develop computerized systems, need some help. I will give you an example.

Here is a paper that you all published. Please read the line under What's Changing... especially "some impedance mismatches around using minute data and specifying the parameters in day units." What does impedance mean here? When I did a Google search for "what impedance means", it returned: "the effective resistance of an electric circuit or component to alternating current, arising from the combined effects of ohmic resistance and reactance." Please use terms that we can easily understand without having degrees in computer science.

Also, when you publish the examples, please comment them in such a way that someone beginning to program will understand. We need your help and I believe proper and user friendly documentation will go a long way.

Thank you for your help.

Thanks, Dan. Is it possible to set the timeout yourself? When I try to run a pair-trade algo on a lot of stocks, I keep getting timed out.
Selecting a universe based on market cap would be nice too.

matplotlib, please.


Please, please, please make the backtesting environment faster for both minute and daily data.
When I try to backtest algorithms, it takes a couple of minutes to reach just 5% progress.

Thank you

Add a means to programmatically add and subtract capital (cash "sink" and "source" functionality). This would allow simulation of cash inflows and outflows (costs such as fees/commissions and taxes).

Another plug for matplotlib...or perhaps just some way to execute parameter scans to do a simple heatmap. For example, it'd be great to be able to do a 10x10 grid over two parameters. I realize that this would require 100 backtests run in parallel, but it's commodity cloud computing, right? Should be dirt cheap. You could even send out an automated e-mail/text to the user when the scan is complete.
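The scan idea boils down to enumerating a parameter grid and running one backtest per cell. A minimal sketch, with made-up parameter names:

```python
import itertools

# Hypothetical two-parameter scan: each pair would drive one backtest,
# and the resulting metric (e.g. Sharpe ratio) would fill one heatmap cell.
lookbacks = range(10, 110, 10)                # 10 values: 10, 20, ..., 100
thresholds = [i / 10 for i in range(1, 11)]   # 10 values: 0.1, 0.2, ..., 1.0

grid = list(itertools.product(lookbacks, thresholds))
print(len(grid))  # 100 parameter combinations -> 100 backtests
```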


Is this the best place to post ideas for new features? Or is there a more formal ticket style system?

I wanted to request the ability for set_universe to filter by more than just dollar-volume data. Ideally, I would love to see the ability to select a security based on its dividend rate, dividend yield, and industry sector. I'm sure there are quite a few variables that would make for helpful selection of a useful SID set. What are some of the things other people would like to select securities by? Is this even remotely possible given the structure of the way Quantopian stores data?

Hi Pumplerod,

Thanks for the suggestion. We totally agree that universe selection should have much more data available, and users should be able to define custom criteria for stock universe selection. Quantopian can manage universes using any criteria, but we need the data to drive the filters. We are starting down this road by updating fetcher to allow universe selection to be driven by fetcher data. You can find the proposed spec and ongoing discussion about the updates to fetcher in this thread:


As a feature request for the site I think it would be really nice for there to be a special log of feature requests. It would be great to have an easy way to view what other people are interested in and perhaps know what is being worked on. Might even help the devs in allowing people to vote to indicate what they find to be more useful. Just a thought. At any rate, I love the community you have going here, and how responsive, and helpful everyone has been. I've learned a lot already and I know I've barely scratched the surface.

A default monitor function that imposes realistic margin requirements.

Recording/plotting down to the minute level and control over plot formatting.

When a runtime error occurs, it would be convenient to be able to click a "Report Error" button that would capture all of the relevant details of the error event and additionally provide a means to send an accompanying message. Or does this already happen in the background if I send feedback via the "send us feedback" link above the error details?

The backtester needs to be more efficient. Minutely backtests should run ~100X-1000X faster, to support development of minute-level algorithms for paper/live trading.

I would like to see more detailed error reporting. While it is good that, for example, the backtester tells me there is an error on line 36, it would be better if it told me where on that line and what type of error it is. Continuing with my previous example, perhaps it could tell me that the variable profits can't be resolved to a value. That way I can more easily troubleshoot my code.

Minute-level resolution on charts.

I think a random data generator would be nice: one where we can designate certain market characteristics on a virtual benchmark or stock (or both) and test our algo against that. With something like that, I would be able to design latent calibration devices that adjust my trading parameters according to developments in major market characteristics such as changes in volatility, interest rates, policy, etc.

In the position values tab, a column that sums the various holdings, so you could see the total value of all of your assets.

A random universe of Sids that exist from start-date to present would be cool. I like to use random portfolios for testing but don't want DollarVolumeUniverse to update quarterly because I rely on stats from previous observations.

Maybe a repo of importable sid sets. Pickle is the first thing that comes to mind; it has security issues, but sids can be stored as integers, making them easy to parse. This would allow users to generate sid sets and submit the ones they would like to be able to import.
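A sid set is just a list of integers, so serializing one is trivial; a small sketch, also showing JSON as a safer alternative given pickle's security issues:

```python
import json
import pickle

# A sid set is just integers, so it serializes trivially.
sid_set = [24, 5061, 8554]  # hypothetical sids

blob = pickle.dumps(sid_set)          # compact, but unpickling untrusted data is unsafe
assert pickle.loads(blob) == sid_set

# For shared/user-submitted sets, JSON avoids pickle's security issues entirely:
text = json.dumps(sid_set)
assert json.loads(text) == sid_set
```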

A "Save As" button would be nice. Presently, to save a copy of an algorithm under a new name, a copy-and-paste operation is required.

Also, a directory structure for algorithms would be helpful, or some other means of organizing them.

Plus 1 on the directory structure. A feature to collapse functions in the IDE would be nice, things can get cluttered without the directory structure.

Possibility to place orders to be filled at opening and closing auctions.

Trailing stop.

It's been a bit since I weighed in here. Just wanted to let you all know that we read and consider all of this and put it into the feature prioritization.

Also - the error reporting got a bit more verbose last week, which should help some of the requests about errors.

Please add support for backtesting imported CSV data. For example, for Bitcoin, you can import but you can't actually model buying and selling. It would be very useful to be able to do this, even if we can't actually use it for live trading.

A change history for an algo, much like a wiki keeps when a page is changed. It would help for those times I realize I've gone down a dead end and need to backtrack.

The ability to bookmark/permalink/favorite a thread or post so I can find it later.

Devon, this is standard in software development, as version control is used on any project and I agree it is something Quantopian should have eventually. The ability to check in changes, as in software development would be very useful so you can look at history and roll-back to a version, if needed.

In addition to the version control, there needs to be some way to recycle and import custom code. In MATLAB, it is a matter of setting up a directory path or putting all of the custom subroutines in a working directory. Specifically, for example, I might want to write a custom portfolio rebalancing routine and then use it across multiple algorithms.

I am new here so maybe I missed this feature: A possibility to save an algorithm as a file and to send an algorithm to a printer.

Please add the ability to sort all the posts by most replied-to and most viewed, and make the top algorithm list a little longer than the top 5.

I agree Pamo.

  • imported data that you can "trade anything" in the backtester is on my personal favorite list
  • version control, change management, algo and backtest organization are all great suggestions and on the list for future work
  • improved organization of content in forums is on the list too

Thanks, all, keep them coming.

It would be great to have $VIX and $VXV available in the history. Downloading them in a CSV file only provides daily values, but they can move a lot during the day.

This is related to a directory structure. I'd like to see a root directory where the running algos live. The idea is to use pandas to_csv and the fetcher to send signals between running algos.

How about letting users post their paper trading strategies live for everyone to view? That would be interesting and encouraging for competitive people who want to rank high on that performance list, and it would provide a transparent feedback mechanism between users and Quantopian. This could also be useful for generating revenue, as eventually people could purchase systems from selected users who perform well, and Quantopian could keep a portion of the fee. It is time for more transparency regarding what people are coming up with, and regarding who is really confident enough to share their work rather than just troll around.

A ranking system would be great to rank all advertised strategies. Also, a collective average ranking of all users who post for live viewing could be tracked, which could demonstrate whether the Quantopian community is improving at generating revenue in live markets as time goes on.

"Find/search & replace with" functionality in the code editor would be handy.

Adding the boto module to the list of available python libraries would be nice, allowing us to store csv files using Amazon S3 or Google cloud storage.

I've been traveling, with only an iPad which is not the most friendly tool for working with quantopian. It got me wondering what it would take to have a dedicated Quantopian App and if people would be interested in that?

Ability to have a lib directory of some sort so that we can save custom python modules and classes that we tend to frequently use would be really nice. I can see how that might make sharing code more complicated and I don't really have a solution for that, but I would love a way to include code rather than cutting and pasting every time I start a new algo.

I'd like to add a couple of features; sorry if they have already been mentioned, but it's a huge thread:

  • Other languages, like R (I know you don't want to introduce other languages, but consider this a vote)
  • International stocks
  • Other securities

Since many Quantopians will keep their best algorithms private I would like a feature where you can keep your algo private but publish statistics (or allow backtest) on it along with a link to allow others to live trade with it without seeing the source code. Other users could then pay the developer of that algo a fee to live trade with that algo without gaining access to the source code. This would create a market where individual developers can monetize their work while sharing the benefits with other users. Quantopian would also create an additional revenue stream by taking a cut of this fee. This market would create a strong incentive to make high quality algorithms that perform well under real trading.

There is some risk that this would reduce sharing, but there are many ways to incentivize sharing, such as an n-year privacy limit for monetized algos, similar to patent expiration, which balances the individual incentive to make money with the community's interest in innovation.

Alternatively, a user could also offer to make their private algo semipublic for a fee (possibly a handsome fee). Other Quantopians would be allowed to form a group to pool contributions to meet this fee and have exclusive access for some limited period of time. This would accelerate diffusion of knowledge without immediately degrading the value of an algo.

I love the idea of allowing for sharing results without exposing the code.

I'm not sure what the tax or liability ramifications are for charging to allow others to trade off of your algo but there are a lot of great possibilities there.

The time remaining before an algorithm time-out or some other indicator should be available, so that code can be written to avoid an algorithm crash due to a time-out (for live trading, ~ 50 seconds per call to handle_data, I understand, and for backtesting?). Alternatively, the error could be made accessible for some sort of error-catching structure (e.g try-except). --Grant

I would like to see multiple record charts (maybe 2 would be OK) and the ability to annotate the record chart datapoints with text (so when you mouse over, you can see log data).

  • Community Rooms separated or tagged by experience, strategy-types, or other logical groups
  • More documentation about referenced libraries and modules (for non-python or non-programmers)
  • More functionality on indicator writing or function writing, with more chart functionality.

Futures! I've visited the site a few times and am purely a quantitative trader (mostly use AmiBroker and Ninja). I love the idea of the platform, but it isn't useful without at least eMini data and support.

I'd like to support that. At this point, the platform is very functional, though still needs improvement. The biggest thing that could help is more data available. The more data, the more clever things we can do with the platform. I understand that data would be expensive. I for one wouldn't mind having to pay a premium to get certain types of data.

In addition to the OHLC values, the actual Nanex timestamps could be made available, associated with the values (i.e. O at time 1, H at time 2, L at time 3, and C at time 4). This would be just a bit more information with minimal overhead (versus dumping the entire Nanex dataset onto the algorithm every minute to sort out). --Grant

Version control on algorithms would be really helpful as well - maybe periodic state saving based simply on time, amount of code edited, or manually controlled.

As others have mentioned...futures support

Documentation of risk metrics. And, as mentioned here, the current overall metrics are broken:

Let me pay for more CPU/bandwidth/resources for backtesting! I'd suggest you try DigitalOcean; I've been running a 30-VM cloud service on it for about six months with no problems.

Once again, futures data is really what would be helpful. Understanding that there are particular issues with rolling contracts on futures trading, access to this data is really what this site needs to grow. Algorithmic trading makes up a massive portion of trading in the futures markets. Even further, you are working with Interactive Brokers which allows futures trading. It would seem that as such, it should be easy to get this data.

It seems there is substantial interest in providing support for futures products, so I might as well put in a plug for a project I have been working on that does just that. While the library is meant to fit my specific needs, the intent is to eventually push all futures-related code back into the main branch of zipline. It is extremely alpha, but if it can amass some additional developers, the process of pushing the code back could be greatly accelerated. I am strongly against continuous contracts, so instead you either have to explicitly state which contract you are ordering or define the logic by which your algorithm rolls over, using the roll decorator. Currently the roadblock is in the performance tracker and is largely attributable to futures margins. It seems no one has a historical database of initial and maintenance margins, and no literature seems to exist on developing a margining model for backtesting purposes. Once I can crack this, I believe it won't be far off from having futures accessible to everyone on Quantopian.

@Jason - - very cool!

Just a +1 on one of the earliest requests (Xingzhong - Aug 23, 2012)
"please do consider a re-running feature for one backtesting. That will easy for user to debug his algorithm without setting the backtesting time periods again and again."

Ideally, backtest the algo a number of times, say 10 runs, over different dates and different time periods, and then give the mean returns over those 10 runs. To enable reruns on the same dates, we can base the run on some number (which appears random to the user) but actually is a key to the start and end dates for the backtests.
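One way the "number as a key" idea could work: seed a PRNG with the user-visible number and derive the backtest windows from it, so the same key always reproduces the same dates. All the specifics below (date ranges, window lengths) are invented for illustration:

```python
import random
import datetime

def dates_for_key(key, n_runs=10):
    """Derive n reproducible (start, end) backtest windows from an integer key.

    The key looks random to the user but always maps to the same dates,
    so a run can be repeated exactly. All choices here are illustrative.
    """
    rng = random.Random(key)
    earliest = datetime.date(2005, 1, 2)
    windows = []
    for _ in range(n_runs):
        start = earliest + datetime.timedelta(days=rng.randrange(0, 2000))
        length = rng.randrange(90, 720)   # 3 months to ~2 years
        windows.append((start, start + datetime.timedelta(days=length)))
    return windows

# Same key -> same windows, so results are repeatable.
assert dates_for_key(42) == dates_for_key(42)
```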

A code collapse feature in the IDE would be awesome!!! I'm getting cluttered over here.

  • simulation using minute data needs to be faster
  • something that allows the user to scroll through logs more efficiently
  • multiple charts for recorded data
  • privacy on source codes as preferred by each user
  • more securities in the database: futures and options market data, and a standard Black-Scholes model to go with it as a built-in function

For aggregated daily and minutely bars, access to the tic-level datetime stamps for the OHLC values (i.e. O @ t_O, H @ t_H, L @ t_L, & C @ t_C, where X = O, H, L, C are tic-level prices and t_X is the associated tic-level datetime stamp).

Testing algos (other than very simple ones) on Quantopian is a time-consuming and often painful experience due to the lack of a debugger. Zipline is much easier, but doesn't use the same data, and it's non-trivial (though not VERY complicated) to transition to Q. As for testing live algos - well, I'm finding that somewhat frustrating due to the very painful testing with minute data and the S L O W loading of algo data (I've been waiting 15 minutes and my 10-line test is still 'loading data'!). In short, I'd like to see some attention given to the test/debug/live-run cycle to make it less frustrating.

A couple of other ideas:
1) Folder structure for algos and/or seamless GIT-based version control
2) I second Grant's ideas re a cloud-based 'virtual-desktop'
3) I'm sure you guys have thought about providing for users to make their algos available for a monthly fee (like C2, Portfolio123 etc). Any chance this might happen?
4) Any chance of extending your deadline of 31st March? :)

Having said this, I think you guys are doing a great job...

Dave, I made a cross-platform shim, see this thread:

The version I pasted in the thread can run on both zipline and Quantopian, though the stuff in the GitHub repo is Quantopian-only at the moment. I'll probably add the additional zipline compatibility if I end up needing it, but not yet.

Many thanks. Very helpful.

Just wondering if there is the possibility of having a "batch run" capability, or better yet, a simplex optimization feature that allows the user to optimize parameters?

The algorithm build process could be made more transparent, to avoid and to de-bug "Build Errors" reported prior to running the algorithm. For example, simply enabling logging during build would help (e.g. see

The ability to set_universe to major indices ie S&P 500, Dow Jones, etc.

Please make this a feature; it would really make life easier. Getting data from a .csv file is a pain.

Some things that would be helpful:

1) Minute-level plotting for backtest results, even if only for very short time-period backtests (1 day?) would be extremely helpful in developing and optimizing algorithms.

2) Email/SMS notifications on trade execution, or some kind of daily trade summary for both live-trading algorithms and paper money algorithms. Notifications could be a freemium feature, i.e. subscriptions that give you a certain quota of notifications per day/month with a free tier. Possibly Slack integration too for teams? =)

3) Some kind of source control integration, so we can work on algorithms locally using our own familiar dev environment and push to Quantopian to backtest and deploy. Also facilitates collaboration.

4) A way to run backtests faster. I'm sure serious traders would be willing to pay to use higher end VMs to execute backtests faster. Although there's a fine line to walk between making paid versions faster vs crippling the free tier. I'm a bit ambivalent about this one but it's hard for me to imagine that it'd be sustainable for Quantopian to offer completely free compute for backtests forever.

5) Sharing backtest results publicly outside of Quantopian. And allow us to decide whether or not to share the code as well. This will facilitate your most enthusiastic users trying to get you more users through network effects. Right now the only way to do this is to take a screenshot of the backtest page, which is a rather tedious hoop to jump through.

6) More data! Historical data for stocks that have existed for longer than 13 years would be great so we can establish more confidence in our backtest results.

Organization tools for the algorithms page and the backtests would help. Something as simple as allowing folders and naming backtests (which someone else mentioned already).

  • Custom filters
  • Historical index lists adjusted for survivorship bias
  • Quarterly/Yearly schedule functions
  • Log/print function enabled inside of IDE

+1 to backtest naming and/or a notes field

Charlie, you really brought this 2012 thread back from the dead!

I agree 100% on your request, and it would be very simple to implement. On some of my algos I have over a hundred backtests, and I want a field (name or notes) that will let me know what I was testing with each change.

A getter for every setter: get_commission, get_slippage (as estimated under the current slippage model), get_universe, ...

@Charlie -

Note that for each backtest, a copy of the code is stored and is accessible from the results page (upper right, drop down menu, "View Code"). So, you can add notes within the backtest code, if you want (e.g. a header). However, the notes would not be available within the research platform, if you are needing them there, since the backtest code is not available there; only backtest "exhaust" can be pulled into the research platform.

Yes, I love the code diff and is how I do it today... but 100 backtests where I'm changing environments and variables could really use a quick notes field. I'm really glad you mentioned the diff though, I didn't find that for a couple months.

+1 to adding Next/Prev button to the code diff dialogue

One work-around is to consider encoding information in one or more of the record() variables, which are stored and available in the research platform. It's a hack, but if you work it out, I think that a tremendous amount of data can be piped to the research platform. If the backtest is N days, then you have N times 5 times the bit-size of a record variable. You just have to write an encoder for the algo, and a decoder for the research environment. I haven't worked out the details, but my initial read is that it should be feasible, until Q offers something better.
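As a sketch of how such an encoder/decoder pair might work: a float holds integers exactly up to 2**53, so each record() value can carry about six ASCII characters. The helper names are made up; nothing here is a Quantopian API.

```python
def encode_chunk(text):
    """Pack up to 6 ASCII characters into one float (exact below 2**53)."""
    n = 0
    for ch in text[:6]:
        n = n * 256 + ord(ch)
    return float(n)

def decode_chunk(value):
    """Recover the characters packed by encode_chunk."""
    n = int(value)
    chars = []
    while n:
        chars.append(chr(n % 256))
        n //= 256
    return "".join(reversed(chars))

# In an algo you might call record(msg=encode_chunk("SELL A")) each bar,
# then run decode_chunk over the recorded series in the research platform.
assert decode_chunk(encode_chunk("SELL A")) == "SELL A"
```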

There should be a way to override the time-out on calls to handle_data & before_trading_start when backtesting.

The platform is great. What I would suggest is that the API could use better, clearer documentation.

The ability to trade a multi currency portfolio. The ability to trade multi systems out of the same pot.

I'm reiterating the email notification. If the logging module could be used to send logs, one could track critical events with email alerts.

Also seconding the secret URL request: I'd like to send friends a URL showing them live trade or backtest results without sharing the source code.


I think that it would be great to visualize the major intra-day performance movements/changes in the backtest chart and not only the end-of-day day-by-day/close price changes in performance. Something similar to the paper-trading performance chart.

It would very much improve the live trading contest algos, as the developers would learn how the algos are really behaving in the real world and try to improve them even more.

Please tell me what you think.
Thank you!

Ability to see a summary of all live trading algorithms without having to open each one.

There may be a work-around or something I haven't found yet, but it would be nice if there were some way to actually save information that your code generates. I understand there are security reasons why you cannot allow us to save directly to a .csv, but when dealing with machine learning and genetic algorithms, there needs to be some type of data persistence to be able to serialize objects.


@ Austin,

You can save to context, e.g. context.saved_data.

Yet another mystery of the Q: there is no way to get data from the Q research platform into the backtester/live trading. If there were, then code could be executed in the research platform, with results dumped into a file that could be pulled in for backtesting/live trading. Or one could run backtests in parallel, pull the results into the research platform, perform analysis, and then use the results for live trading, for example (I suspect that this is the sort of thing "machine learning and genetic algorithms" would need to do).

Some requests:

  1. Allow data to flow from the research platform to the backtest/live trading platform.
  2. Provide automation tools so that backtests can be run in parallel on the backtest platform, with the ability to vary parameters within the backtest systematically (there is a hack, but it is not so pretty). I can envision a script that would do everything from the research platform, running the backtests in the background (no GUI). Effectively, there would be a backtest engine API available in the research platform.
  3. In the research platform, expand get_backtest() so that additional data are accessible (e.g. the contents of context and log output).
  4. In the research platform, provide an API to pull in paper/real-money trading results (with support for both a stopped algo and one that is still running). This could include data from IB that are not available on Q, but resulted from paper/real-money trading.
  5. In both the research platform and the backtester, provide something like the Windows "Task Manager" to see available computing resources and to monitor and control them.
  6. Consider providing a general API for backtest/live trading that would allow users to plug into the platform with their own front end (the GUI is so Web 1.0 dudes...get with the times!). I realize this has security implications, so maybe it is too much of a stretch.

Things that I see that I would like:
- Git integration (github would be best, but at least a heroku-style git control of the repos)
- More languages
- More functionality in the editor and backtester
- More markets / data / resolution / etc

By the way, some of this seems related to Quantopian being a highly coupled set of products. I have a code editor, a simple VCS, a market API against which I can trade, a backtester... from the point of view of the beginner this integration is great. But for advanced users, if you could expose the interactions among your components, perhaps I could have algorithms in Clojure that I develop in Emacs, push to AWS, configure heavily, and only then send to the contest. I think that thinking hard about the architecture and what's best about Quantopian might end up freeing you from investing effort in areas that are not core value propositions of your product.

Sorry for the unpolished text; I don't have a lot of time right now and English is not my first language. I hope it's somewhat clear and useful.


An Rpy2 module with a working R-project framework. There are a lot of ready-made strategies in the R community that could easily be used on Quantopian with this approach. On the same note, it would be nice to have a community "lib" of functions or "methods" that can be reused without rewriting everything for each new algorithm or idea. Something similar to what TA-Lib tried to do.

I'd like to see a user-facing bug/issue/improvement tracker, with notifications (e.g. a user could choose to "listen" to specific tracked items). Perhaps it would simply be a matter of leveraging your presence on GitHub. As a start, when bugs are identified and acknowledged in the forum, if they are captured somewhere, you could provide users with a link. I realize that some of your code isn't public-facing, but perhaps in GitHub there would be a way to expose high-level bug descriptions, linked to your proprietary code, without revealing the code itself.

1) Use ipython notebook for everything

2) minute by minute price visualization

3) multi strategy backtest

4) An option to use a faster backtest engine not event-driven

Automatic, integrated application of the T+3 rule, etc. for non-margin backtests (e.g. for Robinhood), including minute-by-minute, order-by-order, leverage constraints. Basically, fully emulate Interactive Brokers and Robinhood for non-margin accounts.
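A toy sketch of the settlement-queue part of such an emulation (using calendar days rather than business days for simplicity; a real T+3 model would also need good-faith-violation checks and the other broker rules mentioned above):

```python
import collections
import datetime

class SettledCash:
    """Toy T+3 settlement tracker: sale proceeds become spendable 3 days later.

    This only sketches the settlement queue itself; it is not a full
    emulation of IB or Robinhood cash-account rules.
    """
    def __init__(self, cash):
        self.settled = cash
        self.pending = collections.deque()  # (settle_date, amount), in order

    def sell(self, date, proceeds):
        self.pending.append((date + datetime.timedelta(days=3), proceeds))

    def available(self, date):
        # Move any proceeds whose settlement date has arrived into settled cash.
        while self.pending and self.pending[0][0] <= date:
            self.settled += self.pending.popleft()[1]
        return self.settled

acct = SettledCash(1000.0)
acct.sell(datetime.date(2016, 3, 1), 500.0)
print(acct.available(datetime.date(2016, 3, 2)))  # 1000.0 (not yet settled)
print(acct.available(datetime.date(2016, 3, 4)))  # 1500.0 (T+3 reached)
```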

1) Tick/ second data

2) More than 500 securities in the trading universe.

3) Faster backtesting

It'd be nice if it were not necessary to check if data exists (e.g. if stock in data). It seems that once a filter is applied to only allow listed securities, then nothing else should be required. If the stock is tradeable, transactions should go through, regardless of the existence of data.

  1. Better handling of bankruptcy/M&A, etc.
  2. Interactive examination of a backtest, for instance, clicking on a graph at any given point in time brings up all the trades, metrics, log output, etc. associated with that point in the backtest.
  3. Faster backtesting.
  4. Code reuse. It is likely that users will produce little libraries related to their preferred style of portfolio management, etc. Would be nice if these didn't have to be copy/pasted across algorithms.
  5. Multiple pipelines per algorithm.
  6. More flexibility in scheduling algorithm execution. For those interested in more buy-and-hold style strategies, daily execution is too frequent, and simultaneously unnecessarily constraining in terms of the compute time available for any single execution.
  7. More descriptive information about metrics and how they related to each other if at all.
  8. Tax simulation.

A built-in function to cancel all open orders after the daily close, to simulate real-money trading at IB and Robinhood. The user should be able to toggle the function on/off, for a given backtest. Or perhaps this can be done already by adding code in before_trading_start()?
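The cancellation pass itself is a small loop over open orders. Quantopian does expose get_open_orders() and cancel_order(), but since they can't run outside the platform, toy stand-ins are defined here so the sketch is self-contained:

```python
# Toy stand-ins for Quantopian's get_open_orders() / cancel_order(), so the
# cancellation loop itself can be shown end to end.
open_orders = {24: [{"id": "a", "open": True}],
               8554: [{"id": "b", "open": True}]}

def get_open_orders():
    """Return only still-open orders, keyed by sid (mimicking the platform)."""
    return {sid: [o for o in lst if o["open"]] for sid, lst in open_orders.items()}

def cancel_order(order):
    order["open"] = False

def cancel_all_open_orders():
    """What an end-of-day scheduled function might do before the close."""
    for sid, orders in get_open_orders().items():
        for o in orders:
            cancel_order(o)

cancel_all_open_orders()
print(sum(len(v) for v in get_open_orders().values()))  # 0
```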

Not sure if it was mentioned already, but the get_backtest function in research should include the ability to pull in the contents of context. And the same for get_live_results.

I would like to see the ability to output to a csv file.
Elsewhere the Q staff has said that agreements with their data providers preclude that, but why is that the case?

On Tradestation and many other platforms you can download data. I don't think anyone here is interested in going in to the business of re-selling Apple stock quotes, so what is the concern?
It would be very helpful to see OHLC prices when looking at data in Excel.

Important stuff:
- Filter by exchange in pipelines.
- Ability to import my own files, either stored on your server, or in github/bitbucket
- Folders for algorithms/research
- Performance improvements for fundamental data
- Speed, speed, speed

Smaller, nice to have:
- LaTeX support in research (I want to be able to put math formulas in the markdown cells)
- In research, the browser tab should say the name of the notebook, not just "Research - Quantopian". With multiple tabs open, it gets confusing.
- Ability to enter a description for a backtest, and show the description in the list of backtests. I put a description as a comment in the code, but then I have to dig through the code to find out which backtest is which.

I would love it if, when I clone an algorithm or research notebook, Q put a comment into the code that linked back to the forum thread the algorithm came from, or at least told me who the original author is. I clone this stuff, then come back to it a few weeks later and can't find the original discussion about the algorithm.

To Dan and the Q team:

I think one area that could use a major upgrade is the performance report shown after a backtest is run. A significant amount of performance information is not provided. This could include:

  • Profit factor calculations
  • Percent of profitable trades
  • Win/loss ratio of each trade
  • K-ratio
  • Return amounts in percent and dollars, weekly/monthly/yearly
  • Return data vs. fixed initial capital, not just the compounded equity. This is important for any leveraged strategy, to see where the maximum dollar drawdown happens in the range.
  • Return retracement ratios
  • Total commissions paid
  • Summary information in the backtest list, so one can quickly see percent drawdowns and Sharpe ratios per backtest without having to click on every single one
  • Parameterized backtests that can be run for a range of inputs
  • and many more ...
  • All of the above for short-side vs. long trades. This is critical: for a long/short market-neutral strategy it shows where the heavy lifting is being done, i.e. whether the short side is just losing money and acting as a hedge, or has a positive profit factor, etc.

I use Tradestation alongside Quantopian, and I use its portfolio analyzer to run some technicals-based strategies. I will email a sample spreadsheet output from this tool. It provides a significant amount of information for a backtest, which I find critical for closely analyzing an algo's performance. Please review it; hopefully you can incorporate this style of performance report into the platform. In particular, the individual trades list is very useful.
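For what it's worth, the first three metrics in the list are straightforward to compute once a per-trade P&L list is available; a minimal sketch in plain Python, with made-up trade profits:

```python
# Hypothetical per-trade profits in dollars; on the platform these would
# come from the backtest's closed-trade records.
trade_pnl = [250.0, -120.0, 430.0, -80.0, 150.0, -200.0]

wins = [p for p in trade_pnl if p > 0]
losses = [p for p in trade_pnl if p < 0]

# Profit factor: gross profit divided by gross loss
profit_factor = sum(wins) / abs(sum(losses))

# Percent profitable: share of trades that made money
percent_profitable = 100.0 * len(wins) / len(trade_pnl)

# Win/loss ratio: average winner vs. average loser
win_loss_ratio = (sum(wins) / len(wins)) / abs(sum(losses) / len(losses))

print(profit_factor, percent_profitable, win_loss_ratio)
```

The same numbers split by long vs. short side would answer the "where is the heavy lifting done" question directly.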


+1 for all of them, plus slippage paid.

Is this thread still relevant and actually read by staff to pick ideas to implement?

If so:

  1. Allow multiple pipelines. This seemed an obvious one, since attach_pipeline asks for a name and I wanted separate pipeline outputs for long and short positions; it turns out only one instance can be attached. Also, attach_pipeline is not documented in the API reference.
  2. Offer a built-in filter matching securities with open positions. They need computed metrics, but they don't necessarily match the filters created in initialize(). The result is that one has to implement metrics in Pipeline and at the same time manually re-implement the same metrics in a data handling function, partially defeating the point of having the pipeline engine in the first place.
  3. Provide a means (HTTP API, Dropbox or Google Drive shared folder, FUSE, whatever) of synchronizing the algorithm files. The IDE is not so great for editing and offers no versioning at all; it'd be nice to be able to use one's preferred editor and VCS and then see the changes automatically reflected in the algorithms.

I'd like to emphasize the trades list as well. It should contain the trade direction as well as a signal name. We could add order("signal name", ...) to the syntax so this signal is used for reporting. The trade direction should include:
-enter long
-exit long
-enter short
-cover short

Right now we just see buys and sells with amounts, without being able to tell if a trade is covering a short or buying long.
Here is a sample single-trade info from TS (sorry, the text wraps a bit, but it gives you an idea).


Date Time Symbol Type Price Quantity Commissions Profit ($) Profit (%) Strategy Signal Name Total Efficiency Entry Efficiency Exit Efficiency Drawdown ($) Drawdown (%) Runup ($) Runup (%) High Price Low Price
1 01/05/2015 1:00 PM XLP Short Entry 48.03 3121 $15.61 Parabolic SE ParSE
1 01/07/2015 1:00 PM XLP Short Exit 48.74 3121 $15.61 ($2,247.12) -1.50% Parabolic SE ParLE -76.34% 23.66% 0.00% ($2,215.91) 1.48% $686.62 -0.46% 48.74 47.81

2 01/05/2015 1:00 PM XLE Short Entry 78.36 1913 $9.57 Parabolic SE ParSE
2 01/13/2015 1:00 PM XLE Short Exit 74.48 1913 $9.57 $7,403.31 4.94% Parabolic SE TimeBarsSX 84.35% 100.00% 84.35% $0.00 0.00% $8,799.80 -5.87% 78.36 73.76
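Until the trades list reports direction natively, the four labels above can be inferred from the position held before each fill and the signed fill amount; a small self-contained sketch (the fills are hypothetical, and partial position flips are ignored for simplicity):

```python
def classify_fill(position_before, amount):
    """Label a fill given the share position held before it and the
    signed fill amount (positive = buy, negative = sell)."""
    if amount > 0:
        return "cover short" if position_before < 0 else "enter long"
    else:
        return "exit long" if position_before > 0 else "enter short"

# Hypothetical fills: (position before the fill, signed fill amount)
fills = [(0, 100), (100, -100), (0, -50), (-50, 50)]
labels = [classify_fill(pos, amt) for pos, amt in fills]
print(labels)  # → ['enter long', 'exit long', 'enter short', 'cover short']
```

Having the platform attach these labels (plus the signal name) to each transaction record would make post-hoc trade analysis much easier.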

@Tar - Thanks, these requests are getting noted and I'll add your vote.

@Kamran - Have you seen Pyfolio, our open-sourced risk analysis library? The first three requests are available in the round-trip transaction analysis. It's not yet integrated into the Quantopian platform, but it will be later. Here is a sneak peek. The others are great suggestions and we'll take note. You're welcome to submit pull requests to the public repo!

@Andrea - If you want to work separately with your long and short positions, you can put the criteria in separate columns in your pipeline, and then keep everything within the single pipeline. I think this should work; let me know if I misunderstood your setup :). And thanks for the other requests; a mechanism to share code within your own algos has been commonly requested, I'll add your +1.
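The workaround described here amounts to filtering one pipeline's output by boolean columns. Mimicking a pipeline output with a plain pandas DataFrame (the tickers, scores, and column names are made up for illustration):

```python
import pandas as pd

# Hypothetical pipeline output: one row per security, with the long and
# short criteria computed as boolean columns rather than as two pipelines.
output = pd.DataFrame(
    {"score": [1.2, -0.8, 0.4, -1.5],
     "longs": [True, False, True, False],
     "shorts": [False, True, False, True]},
    index=["AAPL", "XOM", "MSFT", "XLE"],
)

# Split the single pipeline's output into disjoint long/short universes
long_candidates = output[output["longs"]]
short_candidates = output[output["shorts"]]

print(list(long_candidates.index))   # → ['AAPL', 'MSFT']
print(list(short_candidates.index))  # → ['XOM', 'XLE']
```

As long as the two boolean columns are disjoint, this gives the same effect as two separate pipelines sharing one computation pass.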



I know IB lets you trade options; it would be cool to use y'all's backtest IDE to play around with some strategies.
Thank you for what there is already!

I'm already doing that (it's actually the only way to proceed with Pipeline): I added the filters to my pipeline so it has two disjoint boolean columns (never both True) that I can use to filter the resulting dataframe.

My point is that it's odd to require users to pass a name but then limit the simulation to just one pipeline.
The sole fact that you need to use a name to fetch pipeline results suggests you can have multiple. I figure multiple pipelines are a longer-term feature currently limited for practical reasons, but at least document that a little more clearly.

I'm not sure what you're referring to with your "sharing code within one's own algos" point. I was referring to synchronizing the algorithm space between a user's computer and your platform, not sharing code between different algos. To be clear, I'm talking about writing in my own text editor.

My most desired feature, though, is a filter matching securities for which there's an open position, i.e. the second feature I mentioned. That would streamline algos, since you could put all the computation logic in the pipeline once, rather than doing the same kind of computation twice: once with Factor logic and once by hand.
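Until such a filter exists, one workaround is to build the pipeline without a screen (so held names are never dropped from the output), keep the screen criterion as a boolean column, and partition the output in before_trading_start. A sketch using the documented `pipeline_output` and `context.portfolio.positions`; the pipeline name `'my_pipe'` and the `'passes'` column are hypothetical names, and this is platform code, not runnable standalone:

```python
def before_trading_start(context, data):
    # Pipeline attached in initialize() with no screen, so currently held
    # securities still appear in the output even if they fail the criteria
    output = pipeline_output('my_pipe')
    held = [s for s in context.portfolio.positions if s in output.index]
    context.held_metrics = output.loc[held]        # metrics for open positions
    context.candidates = output[output['passes']]  # 'passes' is a boolean column
```

The cost is computing factors over the broader universe every day, which is exactly why a built-in open-positions filter would be nicer.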

In the backtest comparison view, a tickbox or some option to show only differences... helpful for long-running algos with 50+ backtests.

Regarding Git integration, you might have a look at how GitBook, Authorea, Overleaf, Leanpub, and Softcover provide such a feature. When changes are pushed to the repository, a compilation is run and a PDF, PS, or EPUB file is created. A similar approach should be possible for Quantopian, except that we would be running a backtest instead of a compilation, and generating position history, metrics, and so on instead of a PDF. The user should have access to their queue of processing backtests (like with a continuous-integration server).

User should have access to his queue of processing backtest

For a given algo, if you navigate to the "All Backtests" page, you can see the backtests underway, and their progress. However, I am unaware of a consolidated dashboard that shows backtests across algos; one has to navigate to their respective "All Backtests" pages.

One way to crack this nut would be to build the functionality into the research platform (e.g. get_running_backtests() which would generate a report on-demand of the current status of all running backtests).

More memory in the research platform, please. I am unable to load a 10-year backtest.