
We launched this website with some bare-bones features. We wanted to get enough out there so you could try it, and then tell us where we should take it. What would you like to see? What features do you need? What data sources would you like? What data transforms do you use? What forum features will make this community stronger?

Thank you for all your feedback.

Hi, I've just received the invite and it looks great!

I'm thinking maybe some Python modules could be made available for the algorithms. For example, is numpy available?



Hi Pithawat,

Yes, many popular scientific Python libraries are available, including numpy, scipy, and pandas. You can use them as you would in any Python code; just import and enjoy:

import pandas  

Please let us know if there are libraries you would like us to add!
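As a tiny sketch of the kind of thing those libraries enable once imported (note: `Series.rolling` is the modern pandas spelling; the pandas version available at the time exposed `pd.rolling_mean` instead):

```python
import pandas as pd

# three-bar simple moving average over a toy price series
prices = pd.Series([10.0, 11.0, 12.0, 11.0, 13.0])
sma3 = prices.rolling(3).mean()

print(sma3.iloc[-1])   # mean of the last three bars: 12.0
```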


When I click on the various community discussion items, the performance graph can take a while to load. It would be nice to be able to review the source code of the algo while the performance graph is being built in the background.

Great idea Rob, we'll give it some thought!

The order(sid, amount) function requires amount to be a number of shares. If I'm trading a portfolio, each share has a different price. In this case I would like to order not a fixed number of shares, but however many shares I can buy for $X. That way I could split my portfolio more evenly.

@Thomas, I took a quick stab at a helper function to let you place orders in cash instead of in shares. Add the following code to whatever project you'd like to take this cash-based order approach and call this new "order_in_cash" function instead of your normal order function.

def order_in_cash(data, security, cash):  
    shares = cash // data[security].price  
    if shares < 0:  
        shares += 1  # to fix python's flooring of negative integer division  
    order(security, shares)  

One key difference to note: you have to pass in "data" from your handle_data function so that the helper can access stock prices. Otherwise, it works as you mentioned.
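To see why the `shares += 1` correction in the helper above matters, here's a standalone sketch (plain Python, no Quantopian API): floor division rounds negative quotients away from zero, so a naive `cash // price` would sell one share too many.

```python
def cash_to_shares(cash, price):
    # mirror the helper above: floor-divide, then correct the
    # flooring so negative amounts truncate toward zero
    shares = cash // price
    if shares < 0:
        shares += 1
    return shares

print(cash_to_shares(250, 100))    # 2 shares
print(cash_to_shares(-250, 100))   # -2 shares (a naive floor would give -3)
```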

Andrew: Looks great, thanks a bunch! It's good you caught that weird flooring behavior of negative integer division. I hadn't been aware of it, and it's quite odd.

I wonder if it makes a difference though, have you tried running a backtest with fixed cash instead of fixed shares?

Please do consider a re-run feature for backtests. That would make it easy for users to debug their algorithms without setting the backtest time period again and again.

+1 on Xingzhong's feature request :)

Maybe a simple visualization of execution against short-term (random or real) market data on a chart, to preview my algorithm's actions? This could be used before a full-scale, long-term backtest on market data.

1) Question & FR: what is the benchmark that shows up in my backtests? (Just the average returns of the stocks you initialize?) And then the FR is: make it so I can explicitly choose my benchmark.

2) Allow for user groups / selective sharing of algos. So allow me to share an algo with colleague X, but not post it to the forum (yes, I can just email the code, but it would be nicer to have more of a user-group feel).

3) Add an option to 'winsorise' returns for outlier handling. A notorious issue with backtests is hidden outliers in returns data. Sometimes they are obvious: you trade a stock and it makes 10,000% in 1 day (oops: pricing error, currency issue, etc.). But sometimes these errors can be hidden. Winsorising your returns data allows you to set sanity bounds on what returns you think a stock can achieve, so you might say: clip my returns data at -99% and +2 standard deviations from the mean return for that time period. Better explained here:

4) Allow me to plot returns to multiple portfolios in one backtest. So, e.g., show a long book, a short book, and a market-neutral long/short all in one backtest. (I think you guys said you have the whole 'data cube' behind the scenes, so it is just a matter of plotting more of the data already generated.)
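The winsorising idea in (3) could be sketched roughly like this (numpy, with illustrative bounds; the exact clipping rule would of course be up to the user):

```python
import numpy as np

def winsorize_returns(returns, floor=-0.99, upper_sigmas=2.0):
    # clip returns to a hard floor (e.g. -99%) and a cap of
    # mean + N standard deviations; bounds here are illustrative only
    r = np.asarray(returns, dtype=float)
    cap = r.mean() + upper_sigmas * r.std()
    return np.clip(r, floor, cap)

r = winsorize_returns([0.01, -5.0, 0.02])   # a -500% "return" is a data error
print(r.min())                              # clipped to -0.99
```

One caveat: an extreme outlier inflates the sample standard deviation itself, so in practice robust estimates (median/MAD) may set the cap more sensibly.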

Hey Jess!

1) The current benchmark is an ETF that looks like the S&P 500. Yes, we want the benchmark to be configurable. I think we're going to squeeze that into the schedule sooner than later.

2) Yes, there's definitely a feature around partial-sharing that we need to build. People want to work collaboratively. We need to solve a way to do that. We started with all-or-nothing, and we will add "some."

3) Wow, that's new to me, and it makes a lot of sense. I'll see if we can get someone to put that into Zipline, and then make it available in Quantopian.

4) This one, I think, we already can do! (Unless I'm missing something). In your algo you need to compute all three of those numbers. Then, add this line to your algo:

record(short=your_short_var, long=your_long_var, neutral=your_neutral_var)  

And the graph will magically appear in your backtest. Full doc here.

Great feedback, thanks.

Hi all,

  • Being able to change the date from, date to and initial capital directly in the backtest screen would be nice.
  • Being able to change the date format would be nice too (currently it's MM-DD-YYYY in the IDE screen and YYYY-MM-DD in the backtest screen; I'm more familiar with DD-MM-YYYY or some "universal" format like 01-JAN-2013).
  • Being able to run algorithms from the beginning of day D to end of day D (currently you can only run it from beginning of day D to beginning of day D + 1)
  • Changing the chart's period from one day to one minute if we run our algorithms on one day only and with minute-based data.
  • Being able to use simple transform functions on minute-based window.

That would be great to see :).

Thanks a lot,

Hi All,
Great work. The features I would like to see are
1. Version control of my algorithm. Many times I wish I could get back to an old version. Cloning the algo for every version creates too many algos to manage.
2. More items mapped in the record function (current limit is 5).
3. Ability to get the price at an arbitrary date. (For example, if I want to quickly compare AAPL's yearly performance to SPY over the last 10 years, I need to run a backtest for the whole 10 years. There could be other scenarios where we might want to tap into the database for a single data point.)
4. Use some stock other than AAPL for the examples :) . The user gets thrown off when he sees the example and the huge gain that his algo gets.

Thanks and Regards

+1 for version control!

I've been cloning my own algorithms to do this and it results in tons of "cloned from" algorithms that are difficult to manage

All of these are great, thank you. Some of them are closer than others, in terms of how quickly we can deliver them.



I would like to receive the forum discussion by mail as well. Then the plots and metrics would need to be included as pictures.

Best Regards

Some machine learning models require a repetitive training and validation scheme. A partial answer was already given in the ML topic, but a quick look doesn't teach me how to train, for example, a neural network model with many parameters to fit (e.g. neuron weights) and many characteristics to optimise (e.g. number of neurons, hidden layers, ...). Ideally, one would be able to perform a cross-validation scheme online.

Secondly, how do you deal with algorithms with high computational demands? With the increasing popularity of ML techniques in this domain, models will become more advanced and more computationally demanding. What is the limit? Will users have to buy computational power, or will they have to train their models offline on their own computers?

My wish list:

  1. The ability to run an algorithm in batch mode in the background. Thomas Wiecki gives a nice example where he uses cloud computing to optimize a parameter in an algorithm. One risk of the current Quantopian backtesting paradigm is that it does not facilitate rapid exploration of the full algorithm parameter space (i.e. the response surface).
  2. Full plotting capability, via matplotlib.
  3. A secure virtual desktop (running in "the cloud") that would allow free-ranging access to the entire Quantopian data set. Basically, this would provide the ability to run zipline (and the browser-based backtester), but with full access to the data. The desktop should also have access to parallel/cluster/GPU/cloud computing.
  4. The backtest output should include an explicit reporting of commissions paid/expenses. I've poked around, and nowhere can I find an accounting of the expenses. Perhaps something like the expense ratios reported for mutual funds could be adopted?


A forum

+1 for full plotting capability :)

@Fabian - that one we do already. At the bottom of the Quantopian homepage, lower-left corner, turn "auto listen" on.

Machine learning and other parameter optimization methods are definitely on the road map.

As for high computational demand, I think we're going to cross that bridge when we get to it. If you have an algo that you want to run that is having performance problems, please share it with us and we'll see what we can do. We're regularly working on our performance.

The ability to plot and otherwise explore the data is also on the list. Data exploration is very exciting, and it's also a very big project that we want to do well. That will come after live trading.

We don't currently show commissions anywhere. I'll look at exposing those.

I would like see a feature that lets you compare two algorithms. Something along the line of comparing backtests from two different algorithms.
I'm not sure if it's already possible.

You can compare the code between two backtests for the same algorithm, to see the diff of the code. Go to the backtest list for your algorithm.

Hello Fawce (& Ben),

I think that Ben is talking about a convenient way of comparing the backtest results of two or more algorithms, not the code (Ben, correct me if I'm wrong).

This is basically the batch-mode functionality that I suggest above:

  1. Define a series of runs over N parameters (e.g. in a table or an array within the algorithm).
  2. For each run, store the results, along with the corresponding parameter settings.
  3. When the batch is complete, notify the user (e.g. an e-mail).
  4. Provide the results in an online table and/or a downloadable file for offline analyses.


@Dan: great, thanks!

Have not read the other posts here, sorry if I'm repeating something. There needs to be some information about portfolio turnover and sensitivity to slippage. Ideally there would also be defaults for trading cost analysis. (coming from FI we look at per basis point effects on IR given the simulated turnover)

+1 for version control!
+1 for matplotlib plotting

Some additional nice-to-haves:

  1. Collapsible code blocks, to hide/reveal code.
  2. A directory structure and the ability to specify a search path, for storing custom functions outside of the main algorithm script (similar to MATLAB).
  3. The ability to block comment/uncomment sections of code.

@Ray, can you tell me more about what you're looking for? We do include a slippage model and a trading cost model already. But it sounds like you are looking for something more? Additional reporting?

@Grant we have that 3rd one done, but it's not well documented. Highlight your code section, then press ctrl-/

Extended-hours trading (after-hours trading). I have a separate post about this, but it would make a big difference.
If not that, then being able to trade against user-supplied market data.
If not that, then the ability to execute trades against arbitrary data (i.e. trade at an arbitrary time/price).

@Dan, yes. I'm simply asking for the realized turnover and a chart displaying how sensitive the strategy is to slippage in execution (a function of turnover). These are among the first things I look at when someone shows me a backtest. Not having those displayed with the backtests makes evaluating the quality of a model on quantopian difficult.

@Dan, sorry me again... getting the mails works, though a feature request would be to get the title of the thread as the subject of the mail.
Right now it says just:
xxx has submitted a new post in Quantopian

This makes it quite hard to follow.

Yeah, that drives me crazy too. Will do.

That's interesting Ray. I'm going to have to learn more about that.

I don't know what kind of data quantopian has, but an important factor when determining market impact in high turnover portfolios is the depth of the order book. E.g., there may be an asset that almost never trades but the order book is enormous. Take a look at the euribor contract. I don't mind being 100% of the volume for five minutes because I'm not going to move the market (I may, however, pay the spread). In other assets, there may be huge volume but the order book is shallow and if my strategy is taking liquidity from the market I could be moving the price significantly (significance of course depends on the importance of slippage in the deterioration of my alpha), even if I'm under 10% of the volume (well below the 25% limit quantopian imposes).

Ah, thank you, that helps clarify quite a bit.

Some form of github integration would be nice, for algorithm/function development collaboration (I realize that there would be privacy/security considerations). Awhile back, Thomas W. had outlined some ideas along these lines--let me know if it'd be helpful to pull them up. --Grant

It would be neat to have buy/sell support for Bitcoin.

Hi Daniel,

I'm not so familiar with Bitcoin, but it seems like Quantopian would have to take orders in Bitcoin and then convert them to dollars before sending them to Interactive Brokers, right? Unless IB also accepted Bitcoin...

Is there any precedent for securities trading in Bitcoin?

What about avoiding currency altogether, and just trading shares peer-to-peer? Bitshares... Somewhere I think I saw a quote by Fawce that there may be an untapped worldwide market of 10 million individual, retail quantitative traders...if so, maybe they could just trade amongst themselves and cut out the middlemen? Should be doable (brokerages hold share records electronically, right?), but I doubt that it is part of the Quantopian business plan.

If you look at Quantopian and their funding, I think this kind of thinking will go nowhere, since they are looking to plug into an existing retail business model already established by IB and other online brokers by providing an improved API and backtesting solution...nothing revolutionary, at least for the near-term (but definitely innovative and entrepreneurial).


Would it be possible to allow us to make folders in the "My Algorithm" section?
They are starting to get unwieldy and difficult to find.

Michael, I agree. We have some organization work to do on the My Algorithms page for sure. I haven't planned that out yet.

Hi Dan,

I suggest a test instance of Quantopian, so that members can assist in testing and debugging prior to release. Part of the test instance could include an interactive tool for developing test cases and plans, and displaying test results.

You might also consider publishing a build/update/release schedule, with an accompanying change and test report to be published prior to release.

Generally, you need to address the concern voiced by one member that he'll wake up one day and find that his live trading algorithm is broken (and, in a worst-case scenario, losing money due to a change to the underlying code made unbeknownst to him).


What I would like to see changed.
1) We need more documentation!
2) Direct portability between Zipline and Quantopian
3) support for alternative asset classes.

These are the three features that have significantly reduced my reliance on Quantopian/Zipline. I am extremely optimistic about Zipline and think it could be an extraordinary tool in my arsenal; however, I can't really get off to the races simply because it's extremely difficult to explore all of its features. For instance, the batch transform is one of the staple features, but the limited documentation and examples are on the Quantopian side, so when I try to port a strategy over to Zipline, things quickly fall apart. I seldom get on Quantopian simply because I am trading futures, not equities, and rely on local data. I am currently trying to put together a Frankenstein version of Zipline that will allow me to backtest futures, but handling the data and trying to figure out how to use the module are major hurdles. Would love to get any thoughts or input on this.

If you guys decide to implement futures down the line, I don't think you should use continuous time series, because no matter how you adjust them, they simply do not reflect reality. Rolling into a further-out contract is typically going to have a negative price impact. You should have data for all the active contracts at any given time and let the user determine which contract to roll into via their algo (typically the contract with the greatest OI). Also, you need to consider differences in tick size/value as well as changes in margin. Many of the platforms I have seen in the past assume a $.01 minimum tick and static margin across time.
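The roll decision described above could be expressed as simply as picking the active contract with the greatest open interest (contract symbols here are made up for illustration):

```python
def pick_roll_contract(open_interest):
    # open_interest: mapping of active contract symbol -> open interest;
    # roll into the contract with the greatest OI
    return max(open_interest, key=open_interest.get)

oi = {"CLZ13": 180000, "CLF14": 240000, "CLG14": 60000}
print(pick_roll_contract(oi))   # CLF14
```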

If I may, I would like to suggest the following metrics to help evaluate models
1. Per trade MaxProfit/MaxLoss
2. Probability of Profit/Loss along with Expected Profit or Expected Loss
3. Combine the tables of Return and Benchmark Return that tabulate trailing 1-, 3-, 6-, and 12-month returns, so that it's easier to compare.


I'd love to see two things:
1. A convergence between zipline and quantopian. E.g. quantopian includes a graph of the benchmark portfolio value. I've found that difficult to reproduce in zipline, which is probably me doing something stupid, but it'd be nice if I didn't have to recreate this since it's already present in quantopian.
2. The benchmarking data to extend back further (e.g. by using 10-year bond data instead of or in addition to the treasury yield curve data) and the ability to easily turn off benchmarking when the primary data set extends back further than the benchmarking data set.

Better search feature for the community posts.

I have found doing a site search through Google returns more results than using the search box.

Also, I would like an easier way to browse the "forum" and check out posted algorithms.

I would like having the capability to compare my portfolio to another portfolio and not only to the benchmark.

Is there a way to indent a block of code in the editor? I found that un-indenting works by selecting code and pressing [shift]+[tab], but attempting to indent by selecting code and pressing [tab] just replaces your text with a tab.
Also, a find-and-refactor feature (or even just find/replace) would be pretty slick.

@Vesper, indenting follows the ipython shortcuts. Cmd + ] to indent, Cmd + [ to unindent.
at least in os x, I imagine windows would be ctrl instead.

@Brandon: awesome, thanks. I'll look around for some more useful nuggets.

1) Please include an easier way to use time in quantopian. Something like recording each day's bar would be great!
I think generally most traders work through bars rather than line charts.

2) Groups to share my algos only with selected people

3) more technical indicators

Please make the transaction record downloadable. This is especially important to me because I will typically import some arbitrary data to test the vulnerability of a trading system, e.g. on flash-crash days or other doomed days. If the transaction records could be downloaded, I could further verify them. Thanks!

In the Algorithms page, for the list of Live Trading Algorithms, it would be helpful to show some very short summary data along with each live trading algorithm.
It would help to see how all my live trading algorithms are doing without having to click through each one. If the developer could decide which of his calculated metrics to display in this summary, with a default list provided by you, that would be best.


Please also make it more convenient to add sids. For example: context.stocks = [ingredients(SP500)]

@Vesper and Brandon - thanks, we need to document those better, and make them discoverable.

@Daniel When you say use time, I'm not sure what you mean. I do understand recording a bar on the chart.

Collaboration is a great feature request, I definitely want to build it.

Are there technical indicators outside of ta-lib, or do you think finishing ta-lib is the right step?

@Jiaming Can you tell me more about what you'd do with the downloaded data? I'm not yet understanding what you want to do with a downloaded list.

I agree that it would be awesome to have the SP500 ingredients as a command, but we'd have to license that data from the S&P, and their fees are astronomical.

@Saravanan Thanks for those suggestions. I agree that page needs to be easier.

I had asked a question earlier in one of the threads as to why the returns and benchmark returns were all shown as values under 1.0 rather than as percentages.
I was told these were not percent values, and that there was no specific reason they were left below 1.0 instead of being shown as percent returns.

I just noticed the max drawdowns are shown as %values.

Some consistency would be nice: preferably show all of them as percent returns, and, as I mentioned, preferably show returns and benchmark returns side by side.


Haven't read this whole thread but I keep running into an issue where my statements get cut off. Sometimes the log statements themselves reach a threshold and other times I get truncated statements. For example, if I want to print a dataframe with 10 rows and 63 columns I get the following:

DatetimeIndex: 63 entries, 2013-04-08 00:00:00+00:00 to 2013-07-16 00:00:00+00:00  
Data columns (total 10 columns):  
19654 63 non-null values  
19656 63 non-null values  
8329 63 non-null values  
8554 63 non-null values  
19659 63 non-null values  
19660 63 non-null values  
19661 63 non-null values  
19662 63 non-null values  
19657 63 non-null values  
19658 63 non-null values  

I'm not sure if this sort of truncation is a python thing or a Quantopian thing but I am a chronic triple-checker when I'm writing code and I like to print things out and look at what is going on during each frame before moving on. Just a suggestion - maybe add some sort of "extended" log file (possibly one that saves to the user's desktop after the simulation runs, if that is faster than printing each log message in the log panel)
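Part of this truncation is pandas' own display elision rather than the log panel: `DataFrame.to_string()` renders every row and column, and the resulting string can be logged in one message (this won't help if Quantopian's log pipeline itself caps message length, which I can't speak to). Column names below are made up to resemble the output above:

```python
import pandas as pd

df = pd.DataFrame({"sid_19654": range(100), "sid_19656": range(100)})

# repr(df) elides middle rows once past display.max_rows;
# to_string() renders the full frame
full = df.to_string()
print("99" in full)   # True: the last row survives
```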

I suggest adding more detail to your help/API docs on data sources. Basically, I think you need to describe, technically, exactly how the data are collected and reduced/adjusted/cleaned for backtesting, paper, and live trading. Perhaps you could simply provide links to your vendors' specifications? Or post their specs on your website?

It'd be nice if a copy & paste of log output into the community forum would appear as it does in the log output window of a backtest. As it is, spaces are removed and multi-line output ends up on one line. Or is there a trick to getting it to look pretty?

Alternatively, when backtests are shared, you could display the log output in a separate tab.

Thanks, all. I've noted each one. Logs are a continuing challenge for us.

I'll look at expanding our data description, too.

For the record function:

  • Minute-level resolution.
  • Optionally change the plot type from lines to points, with no filling/interpolation (the help page states "Each series will retain its last recorded value until a new value is used").

Just signed up so I may have missed it but I would like to be able to see the values of my variables at each data point in a backtest.

Provide the list of securities, their corresponding symbols, sid numbers, trade date info, etc.: basically everything except the OHLCV data. You could post it every day for download by registered users, so that users could search and filter it, rather than using the simple one-by-one search in the code editor.

Support for Futures contracts

Provide a means to load all of the sids in the database at the start of the backtest. For example:

context.stocks = [list of all the sids]  

I just signed up and have been playing around with writing algos. Four requests -

1) Could the backtest represent the percentage gain of the actual security, as opposed to what appears to be SPY? At the moment I have to bounce from window to window comparing charts to my results.

2) I may have missed this feature but it looks like you can either buy or sell, was hoping it would be possible to short a security?

3) After-hours ordering.

4) Access to the news wire. For things such as earnings reports it would be nice to be able to parse that document, evaluate it and make split second earnings decisions.

Hi Mike - Welcome to Quantopian! Thank you for the feature requests.

Here are some quick answers inline:

1) Could the backtest represent the percentage gain of the actual security, as opposed to what appears to be SPY? At the moment I have to bounce from window to window comparing charts to my results.
> I think you are asking for the ability to define a custom benchmark that will show up in the performance chart. If yes, that is on our to do list. Can't guarantee when you'll see it - but you will see it.

2) I may have missed this feature but it looks like you can either buy or sell, was hoping it would be possible to short a security?
> We do support shorting currently. The way to sell short is simply to sell shares of a security that you don't currently own. So for example, if I do

order(sid(24), -100)  

> and I don't own any shares of AAPL (sid=24), then the simulation will sell short 100 shares.

3) After-hours ordering.
> Noted as a request - we don't currently support simulating or trading outside of market hours.

4) Access to the news wire. For things such as earnings reports it would be nice to be able to parse that document, evaluate it and make split second earnings decisions.
> We had a great research talk on this at our last NYC meetup, it is a very cool idea! We don't have this built-in to Quantopian currently - but we do have the capability to parse external CSV data using Fetcher (see here: ). So if you wanted to built a text scraper/parser and feed stock-specific data in that would be possible.

It doesn't seem possible right now, but it would be great to be able to implement the following logic (i.e., the choice to trade at the open or at the close):

# Buy at the open and sell at the close of the next bar  
if signal == True:  
    order(symbol, 100, open_price)  
    order(symbol, -100, close_price)  

Alexis, that type of trade is pretty explicitly not permitted by our backtester - trading on an open price implies having foreknowledge. Check out the mechanics of the backtester in the FAQ.

If you want more info, please open a new thread and we can go into more detail. I bet we can find a way to do what you want with the existing code.


1) Feature request: 1-to-1 correspondence between zipline and QP site coding environments.

2) The ability for zipline to work with local CSV files directly (or at least better documentation for it, if it is already possible: df = read_csv(".."); handle_data(df), ...)


Thanks for the quick answer. In my understanding, the backtester always executes the market order at the close of the next bar. In intraday mode, I agree with you that it's the way to go. However, in the daily mode, I'd like to be able to trade the overnight session (from close to next day's open) and the day session (from open to close) via market-on-open (i.e., standard market order) and market-on-close orders. I guess it's a pretty standard feature for many swing traders.

Koala - for the zipline request, I suggest you put it in the Zipline group - they are very responsive. The one-to-one is definitely for both, of course, thank you.

Alexis - you can do that today in minute mode by coding up things to happen at the beginning and end of the day. We certainly can make that easier for you, and we will.
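One way to structure that in minute mode is to tag each bar by its session role; 9:31 and 16:00 Eastern are assumed here as the first and last NYSE minute bars, which may not match Quantopian's exact bar timestamps:

```python
from datetime import datetime, time

def bar_role(ts, first=time(9, 31), last=time(16, 0)):
    # classify a minute bar so "market-on-open"-style orders can fire on
    # the first bar of the session and "market-on-close"-style on the last
    if ts.time() == first:
        return "on_open"
    if ts.time() == last:
        return "on_close"
    return "intraday"

print(bar_role(datetime(2013, 7, 16, 9, 31)))   # on_open
```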

scikit-learn, for its covariance estimators.

sklearn is already available

Sorry, didn't see that. Is it possible to store order objects?
I tried the following. In initialize:

context.order_list = []  

Then I place an order somewhere in the code:

order_temp = order(sid, amount)  

But that doesn't work, since order_temp is a str object. I want to be able to access the order so I can see how much has been filled, etc.

Bernd, the order() command returns an order id, which you then can use to monitor the order. More info on the various order commands here.

order_temp = order(sid,amount)  

I really, really want to see better documentation and more examples. My request is also for you all to write the documentation in layman's terms and not techie terms. I am hoping you want to help traders who know a little bit of programming to use your site. If you just want techies who want to write trading programs, then the documentation you provide is probably fine. However, people like myself, who trade and want to develop computerized systems, need some help. I will give you an example.

Here is a paper that you all published. Please read the line on What's Changing... especially "some impedance mismatches around using minute data and specifying the parameters in day units." What does impedance mean here? When I did a Google search for "what impedance means", it returned: "the effective resistance of an electric circuit or component to alternating current, arising from the combined effects of ohmic resistance and reactance." Please use terms that we can easily understand without having degrees in computer science.

Also, when you publish the examples, please comment them in such a way that someone beginning to program will understand. We need your help and I believe proper and user friendly documentation will go a long way.

Thank you for your help.


Thanks Dan. Is it possible to set the timeout yourself? When I try to run a pair-trade algo on a lot of stocks, I keep getting timed out.
Selecting a universe based on market cap would be nice too.

matplotlib, please.


Please, please, please make the backtesting environment faster for both minute and daily data.
When I try to backtest algorithms, it takes a couple of minutes to reach 5% progress in the backtest.

Thank you

Add a means to programmatically add and subtract capital (cash "sink" and "source" functionality). This would allow simulation of cash inflows and outflows (costs such as fees/commissions and taxes).

Another plug for matplotlib...or perhaps just some way to execute parameter scans to do a simple heatmap. For example, it'd be great to be able to do a 10x10 grid over two parameters. I realize that this would require 100 backtests run in parallel, but it's commodity cloud computing, right? Should be dirt cheap. You could even send out an automated e-mail/text to the user when the scan is complete.
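Until something like that exists server-side, the grid itself is easy to express locally; `run_backtest` below is a stand-in for whatever evaluates one parameter pair (a toy scoring surface here, not a real backtest):

```python
from itertools import product

def run_backtest(fast, slow):
    # placeholder objective standing in for a real backtest metric
    return -(fast - 10) ** 2 - (slow - 30) ** 2

# a 10x10 grid over two hypothetical parameters
results = [(f, s, run_backtest(f, s))
           for f, s in product(range(5, 15), range(25, 35))]
best = max(results, key=lambda row: row[2])
print(best)   # (10, 30, 0): the pair maximizing the toy score
```

Each cell is independent, which is what makes the "100 backtests in parallel" idea cheap on commodity cloud machines.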


Is this the best place to post ideas for new features? Or is there a more formal ticket style system?

I wanted to request that set_universe be able to filter by more than just dollar-volume data. Ideally, I would love to see the ability to select a security based on its dividend rate, dividend yield, and industry sector. I'm sure there are quite a few variables that would make for helpful selection of a useful sid set. What are some of the things other people would like to select securities by? Is this even remotely possible given the way Quantopian stores data?

Hi Pumplerod,

Thanks for the suggestion. We totally agree that universe selection should have much more data available, and users should be able to define custom criteria for stock universe selection. Quantopian can manage universes using any criteria, but we need the data to drive the filters. We are starting down this road by updating fetcher to allow universe selection to be driven by fetcher data. You can find the proposed spec and ongoing discussion about the updates to fetcher in this thread:


As a feature request for the site I think it would be really nice for there to be a special log of feature requests. It would be great to have an easy way to view what other people are interested in and perhaps know what is being worked on. Might even help the devs in allowing people to vote to indicate what they find to be more useful. Just a thought. At any rate, I love the community you have going here, and how responsive, and helpful everyone has been. I've learned a lot already and I know I've barely scratched the surface.

A default monitor function that imposes realistic margin requirements.

Recording/plotting down to the minute level and control over plot formatting.

When a runtime error occurs, it would be convenient to be able to click a "Report Error" button that would capture all of the relevant details of the error event and additionally provide a means to send an accompanying message. Or does this already happen in the background if I send feedback via the "send us feedback" link above the error details?

The backtester needs to be more efficient. Minutely backtests should run ~100X-1000X faster, to support development of minute-level algorithms for paper/live trading.

I would like to see more detailed error reporting. While it is good that, for example, the backtester tells me there is an error in line 36, it would be better if it told me where on that line and what type of error occurred. Continuing with my previous example, perhaps it could tell me that the variable profits can't be resolved to a value. That way I can more easily troubleshoot my code.

Minute-level resolution on charts.

I think a random data generator would be nice, one where we can designate certain market characteristics on a virtual benchmark or stock (or both) and test our algo with that. With something like that, I would be able to design latent calibration devices that adjust my trading parameters according to developments in major market characteristics such as changes in volatility, interest rates, policy, etc.

In the position values tab, a column that sums the various holdings, so you could see the total value of all of your assets.

A random universe of Sids that exist from start-date to present would be cool. I like to use random portfolios for testing but don't want DollarVolumeUniverse to update quarterly because I rely on stats from previous observations.

Maybe a repo of importable sid sets. Pickle is the first thing that comes to mind; it has security issues, but since sids can be stored as plain integers they are easy to parse. This would allow users to generate sid sets and submit the ones they would like others to be able to import.

A "Save As" button would be nice. Presently, to save a copy of an algorithm under a new name, a copy-and-paste operation is required.

Also, a directory structure for algorithms would be helpful, or some other means of organizing them.

Plus 1 on the directory structure. A feature to collapse functions in the IDE would be nice, things can get cluttered without the directory structure.

Possibility to place orders to be filled at opening and closing auctions.

Trailing stop.

It's been a bit since I weighed in here. Just wanted to let you all know that we read and consider all of this and put it into the feature prioritization.

Also - the error reporting got a bit more verbose last week, which should help some of the requests about errors.

Please add support for backtesting imported CSV data. For example, for Bitcoin, you can import but you can't actually model buying and selling. It would be very useful to be able to do this, even if we can't actually use it for live trading.

A change history for an algo, much like a wiki keeps when a page is changed. It would help for those times I realize I've gone down a dead end and need to backtrack.

The ability to bookmark/permalink/favorite a thread or post so I can find it later.

Devon, this is standard in software development, as version control is used on any project and I agree it is something Quantopian should have eventually. The ability to check in changes, as in software development would be very useful so you can look at history and roll-back to a version, if needed.

In addition to the version control, there needs to be some way to recycle and import custom code. In MATLAB, it is a matter of setting up a directory path or putting all of the custom subroutines in a working directory. Specifically, for example, I might want to write a custom portfolio rebalancing routine and then use it across multiple algorithms.

I am new here so maybe I missed this feature: A possibility to save an algorithm as a file and to send an algorithm to a printer.

Please add the ability to sort all the posts by most replied-to and most viewed, and make the top algorithm list a little longer than top 5.

I agree Pamo.

  • imported data that you can "trade anything" in the backtester is on my personal favorite list
  • version control, change management, algo and backtest organization are all great suggestions and on the list for future work
  • improved organization of content in forums is on the list too

Thanks, all, keep them coming.

It would be great to have vix and $vix and $vxv available in the history. Downloading them in a csv file only provides daily values, but they can move a lot during the day.

This is related to a directory structure. I'd like to see a root directory where the running algos live. The idea is to use pandas to_csv and the fetcher to send signals between running algos.

How about letting users post their paper trading strategies live for everyone to view? That would be interesting and encouraging for the competitive people who want to rank high on that performance list, and it would provide a transparent feedback mechanism between users and Quantopian. This could also generate revenue: eventually people could purchase systems from selected users who perform well, and Quantopian could keep a portion of the fee. It is time for more transparency about what people are coming up with, and about who really is confident enough to share their work, not just troll around.

A ranking system would be great to rank all advertised strategies, also a collective average ranking of all users who post for live viewing could be tracked which could demonstrate if the development of the quantopian community is improving at generating revenue in live markets as time goes on.

"Find/search & replace with" functionality in the code editor would be handy.

Adding the boto module to the list of available python libraries would be nice, allowing us to store csv files using Amazon S3 or Google cloud storage.

I've been traveling, with only an iPad which is not the most friendly tool for working with quantopian. It got me wondering what it would take to have a dedicated Quantopian App and if people would be interested in that?

Ability to have a lib directory of some sort so that we can save custom python modules and classes that we tend to frequently use would be really nice. I can see how that might make sharing code more complicated and I don't really have a solution for that, but I would love a way to include code rather than cutting and pasting every time I start a new algo.

I'd like to add a couple of features; sorry if these have already been mentioned, but it's a huge thread:

  • Other languages, like R (I know you don't want to introduce other languages, but consider this a vote)
  • International stocks
  • Other securities

Since many Quantopians will keep their best algorithms private I would like a feature where you can keep your algo private but publish statistics (or allow backtest) on it along with a link to allow others to live trade with it without seeing the source code. Other users could then pay the developer of that algo a fee to live trade with that algo without gaining access to the source code. This would create a market where individual developers can monetize their work while sharing the benefits with other users. Quantopian would also create an additional revenue stream by taking a cut of this fee. This market would create a strong incentive to make high quality algorithms that perform well under real trading.

There is some risk that this would reduce sharing, but there are many ways to incentivize sharing, such as an n-year privacy limit for monetized algos, similar to patent expiration, which balances the individual incentive to make money with the community's interest in innovation.

Alternatively, a user could also offer to make their private algo semipublic for a fee (possibly a handsome fee). Other Quantopians would be allowed to form a group to pool contributions to meet this fee and have exclusive access for some limited period of time. This would accelerate diffusion of knowledge without immediately degrading the value of an algo.

I love the idea of allowing for sharing results without exposing the code.

I'm not sure what the tax or liability ramifications are for charging to allow others to trade off of your algo but there are a lot of great possibilities there.

The time remaining before an algorithm time-out, or some other indicator, should be available so that code can be written to avoid an algorithm crash due to a time-out (for live trading, ~50 seconds per call to handle_data, I understand; and for backtesting?). Alternatively, the error could be made accessible for some sort of error-catching structure (e.g. try/except). --Grant
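In the absence of a platform-provided timer, one workaround is to track elapsed wall-clock time yourself and bail out before the limit. A hedged sketch: the 50-second budget is simply the figure mentioned above, and `do_work` and the wrapper are hypothetical:

```python
import time

TIME_BUDGET_SECONDS = 50  # assumed per-call limit, per the discussion above
SAFETY_MARGIN = 5         # stop early rather than risk the platform timeout

def process_with_guard(do_work, items):
    """Process items until the time budget is nearly exhausted, then stop."""
    start = time.time()
    done = []
    for item in items:
        if time.time() - start > TIME_BUDGET_SECONDS - SAFETY_MARGIN:
            break  # bail out before the time-out kills the algorithm
        done.append(do_work(item))
    return done
```

Inside handle_data, the loop body would be whatever per-security work risks running long; anything not processed this bar would carry over to the next call.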

I would like to see multiple Record charts (maybe 2 would be ok) and the ability to annotate the record chart datapoints with text (so when you mouse over, you can see log data).

  • Community Rooms separated or tagged by experience, strategy-types, or other logical groups
  • More documentation about referenced libraries and modules (for non-python or non-programmers)
  • More functionality on indicator writing or function writing, with more chart functionality.

Futures! I've visited the site a few times and am purely a quantitative trader (mostly use AmiBroker and Ninja). I love the idea of the platform, but it isn't useful without at least eMini data and support.

I'd like to support that. At this point, the platform is very functional, though still needs improvement. The biggest thing that could help is more data available. The more data, the more clever things we can do with the platform. I understand that data would be expensive. I for one wouldn't mind having to pay a premium to get certain types of data.

In addition to the OHLC values, the actual Nanex timestamps could be made available, associated with the values (i.e. O at time 1, H at time 2, L at time 3, and C at time 4). This would be just a bit more information with minimal overhead (versus dumping the entire Nanex dataset onto the algorithm every minute to sort out). --Grant

Version control on algorithms would be really helpful as well - maybe periodic state saving based simply on time, amount of code edited, or manually controlled.

As others have mentioned...futures support

Documentation of risk metrics. And, as mentioned here, the current overall metrics are broken:

Let me pay for more cpu/bandwidth/resources for backtesting! I'd suggest you try DigitalOcean; I've been running a 30-VM cloud service on it for about 6 months with no problems.

Once again, futures data is really what would be helpful. Understanding that there are particular issues with rolling contracts on futures trading, access to this data is really what this site needs to grow. Algorithmic trading makes up a massive portion of trading in the futures markets. Even further, you are working with Interactive Brokers which allows futures trading. It would seem that as such, it should be easy to get this data.

It seems there is substantial interest in futures support, so I might as well put in a plug for a project I have been working on that does just that. While the library is meant to fit my specific needs, the intent is to eventually push all futures-related code back into the main branch of zipline. It is extremely alpha, but if it can attract some additional developers, the process of pushing the code back could be greatly accelerated. I am strongly against continuous contracts, so instead you either have to explicitly state which contract you are ordering or define the logic by which your algorithm rolls over using the roll decorator. Currently the roadblock is in the performance tracker and is largely attributable to futures margins. It seems no one has a historical database of initial and maintenance margins, and no literature seems to exist on developing a margining model for backtesting purposes. Once I can crack this, I believe it won't be far off from having futures accessible to everyone on Quantopian.

@Jason - - very cool!

Just a +1 on one of the earliest requests (Xingzhong - Aug 23, 2012)
"please do consider a re-running feature for one backtesting. That will easy for user to debug his algorithm without setting the backtesting time periods again and again."

Ideally, backtest the algo a number of times, say 10 runs, over different dates and different time periods, and then give the mean returns over those runs. To enable reruns on the same dates, we could base each run on a number that appears random to the user but is actually a key to the start and end dates for the backtest.
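One way to make such "random" runs repeatable is to derive the dates deterministically from a seed, so the same key always produces the same backtest window. A sketch of that idea; the date range and window lengths are purely illustrative:

```python
import datetime
import random

def dates_from_seed(seed,
                    earliest=datetime.date(2004, 1, 1),
                    latest=datetime.date(2013, 1, 1),
                    min_days=90, max_days=365):
    """Map a seed deterministically to a (start, end) backtest window."""
    rng = random.Random(seed)        # same seed -> same window, every time
    span = (latest - earliest).days
    length = rng.randint(min_days, max_days)
    offset = rng.randint(0, span - length)
    start = earliest + datetime.timedelta(days=offset)
    return start, start + datetime.timedelta(days=length)
```

Running seeds 1 through 10 and averaging the returns would give the 10-run mean described above, while any individual seed remains reproducible for debugging.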

A code collapse feature in the IDE would be awesome!!! I'm getting cluttered over here.

  • simulation using minute data needs to be faster
  • something that allows the user to scroll through logs more efficiently
  • multiple charts for recorded data
  • privacy on source codes as preferred by each user
  • more securities in the database: futures and options market data, and a standard Black-Scholes (BS) model to go with it as a built-in function
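On the last point: the "standard BS model" mentioned above can be written in a few lines of plain Python, which is roughly what a built-in would wrap. A sketch for a European call with no dividends:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call option (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)
```

For example, an at-the-money call with spot and strike 100, a 5% rate, 20% vol, and one year to expiry prices at roughly 10.45; the put follows from put-call parity.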

For aggregated daily and minutely bars, access to the tic-level datetime stamps for the OHLC values (i.e. O @ t_O, H @ t_H, L @ t_L, & C @ t_C, where X = O, H, L, C are tic-level prices and t_X is the associated tic-level datetime stamp).

Testing algos (other than very simple ones) on Quantopian is a time-consuming and often painful experience due to the lack of a debugger. Zipline is much easier, but doesn't use the same data, and it's non-trivial (though not VERY complicated) to transition to Q. As for testing live algos - well, I'm finding that somewhat frustrating due to the very painful testing with minute data and the S L O W loading of algo data (I've been waiting 15 minutes and my 10-line test is still 'loading data'!). In short, I'd like to see some attention given to the test/debug/live-run cycle to make it less frustrating.

A couple of other ideas:
1) Folder structure for algos and/or seamless GIT-based version control
2) I second Grant's ideas re a cloud-based 'virtual-desktop'
3) I'm sure you guys have thought about providing for users to make their algos available for a monthly fee (like C2, Portfolio123 etc). Any chance this might happen?
4) Any chance of extending your deadline of 31st March? :)

Having said this, I think you guys are doing a great job...

Dave, I made a cross-platform shim, see this thread:

The version I pasted in the thread can run on both zipline and Quantopian, though the stuff in the GitHub repo is Quantopian-only at the moment. I'll probably add the additional zipline compatibility if I end up needing it, but not yet.

Many thanks. Very helpful.

Just wondering if there is the possibility of having a "batch run" capability, or better yet, a simplex optimization feature that allows the user to optimize parameters?

The algorithm build process could be made more transparent, to avoid and to debug "Build Errors" reported prior to running the algorithm. For example, simply enabling logging during build would help (e.g. see
