In light of Quantopian shutting down live trading, what would be the alternative option?

Just asking. Any programming language for minute-resolution algorithmic trading?

Has anyone tried using IB TWS directly for algorithmic trading?

86 responses

Options I found:

1) Quantconnect
2a) Run Zipline yourself (I think you can grab minutely data from IB)
2b) I saw this on another thread, looks interesting: http://www.zipline-live.io/
3) cloud9trader.com (javascript, platform is in beta, which looks more like an alpha)
4) Tradestation (a broker with a proprietary script called "EasyLanguage")

Edit: Kory Hoang suggested:

With that said, if anyone has interest in migrating your Python codes to TradeStation or MultiCharts, please email me at: [email protected]

I highly suggest checking out EasyLanguage for TradeStation and MultiCharts. You can do a lot of the things you do here on Quantopian with EasyLanguage and it is very easy to learn (obviously).

I also suggest checking out QuantConnect and Quantiacs.

Edit: Peter Bakker said:

What now? @Lecoque shared his libraries, there is IBridgePy.com, and there is Zipline-Live; those probably need the least work for algos with a known set of assets, since such algos don't need Pipeline. FYI: IB limits you to real-time quotes for 100 assets (or you pay for more), so algos with fewer than 100 assets are fine.

I would only trust Quantconnect as an alternative service, as it has been around as long as Quantopian, so it is mature.
http://www.zipline-live.io/ seems really interesting, but not ready to fully replace Quantopian live trading yet (no pipeline and what about the data?)

@Burrito

Thanks for answering! This really helps.

  1. Heard there is RAM issue...
  2. I guess it is still in alpha/beta...
  3. alpha/beta...
  4. just thought the name, "EasyLanguage", funny... plus proprietary...

Any comments about using IB TWS API?

Haha. EasyLanguage is so 1998 :)

@Luca if you only use Pipeline with daily data + Quandl daily data, can't you just fire up your own Jupyter research environment? I'm not an expert, having relied on Quantopian, but I was surprised how easy it was to install Python with all the trimmings. I installed Anaconda as part of my Kaggle adventure recently.
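For the local-Jupyter route, here is a minimal sketch of the kind of daily-data factor check that works fine outside Quantopian. The tickers and prices below are synthetic stand-ins of my own; in practice you'd load daily bars from Quandl CSV downloads or another source.

```python
# Daily-data factor research in a plain Anaconda/Jupyter setup, no platform needed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2017-01-02", periods=252)

# Synthetic daily close prices for a handful of tickers (illustrative only).
prices = pd.DataFrame(
    100 * np.cumprod(1 + rng.normal(0.0005, 0.01, size=(252, 4)), axis=0),
    index=dates,
    columns=["AAPL", "MSFT", "XOM", "JNJ"],
)

# A simple 20-day momentum factor, cross-sectionally ranked each day.
momentum = prices.pct_change(20)
ranks = momentum.rank(axis=1, pct=True)

print(ranks.dropna().iloc[-1].sort_values(ascending=False))
```

Swapping the synthetic frame for real Quandl daily bars is the only change needed to make this a usable research notebook.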

@Dan, I need all the data Quantopian provides

@Luca What is stopping you building your strategies on Quantopian, with all the data, in Research? Is it that once proven in Research, you worry about porting the code over to a live trading environment?

@Dan, exactly, that's my concern. I cannot use http://www.zipline-live.io/ for live trading given its current limitations (and also because I don't have the data), and if I want to use QuantConnect I cannot share anything between Research and the algorithm. I know I can still do it, but it's going to be a pain.

Hmmm. That is a tough one. The problem is, the only reason Quantopian can share this data with you for free (or very low cost) is their platform doesn't allow you to steal it wholesale. So, you're very unlikely to get any of the premium data to your own hosted zipline. So, then you're reliant on Quantconnect's Research environment and live trading environment. It doesn't look as mature as Quantopian's, but it does support Python, and what you want is their stated business model. Do they have the data?

Seems like zipline-live is a client-side alternative to Quantopian. All you need is data integration. If that's the case, I'd much prefer it to being hosted by Quantopian.

QuantConnect has a lot of asset/pricing data in other markets too but not the same premium/factor datasets. For those you'd need to import them manually with custom data. Their business model is infrastructure but they make it free when the brokerages sponsor the live trading.

They posted in response to this move today: https://www.quantconnect.com/blog/democratizing-finance-empowering-individuals

They do have Morningstar fundamentals.

If Quantopian is going to shut down live trading, who knows when QuantConnect will do the same? :-/

Besides, QuantConnect mainly uses C#. They have Python support, but it doesn't seem as strong as Quantopian's.

@Thomas the difference is that Quantopian's business model is a crowd-sourced hedge fund. QuantConnect "provides a free algorithm backtesting tool and financial data so engineers can design algorithmic trading strategies"... so perhaps more likely to continue the live trading offering.

It seems like QuantConnect is the natural choice. I hope the data ecosystem develops so that providers offer free external access to their data sets to QC hosts via HTTP or something, and people can then pay for access individually. That's a lot of contract negotiations for everyone, though.

I personally plan to investigate running a QC algo in F#. That would be nifty.

  1. As far as I know, TradeStation has no connection to IB.

  2. EasyLanguage is not really easy. The user manual runs to 1000 pages.

And EasyLanguage is not a full programming language like C or Python; it's more like cobbling LEGO bricks together.

Has anyone heard about NinjaTrader?

I spent some time, and I was able to port part of an algo of mine to QuantConnect, and I have been reading about IBridge.py.

The people at QuantConnect are very helpful and fairly quick to respond to inquiries, however, with what I have in mind, it does not offer a complete replacement of Quantopian -- hence the problem.
The reason I say that is that Quantopian's research/backtest environment is more polished, with the Alphalens and pyfolio returns-analytics notebooks; I would really miss them in the QuantConnect research environment. QC's backtest environment is OK speed-wise, but there is a learning curve, and the fact that Python sits on top of the Lean C# engine makes it harder to work with: the data structures are often wrappers around C# objects rather than the pandas and numpy types you may be used to. It's surmountable, but a nuisance, and it increases the cost and difficulty of porting from Q.

So if the standard workflow is a) research factor analysis, b) backtest, c) production, and Q still has the irreplaceable research environment and that environment continues to be available as it is today, then in my view it is probably the preferable research environment. The existence of pyfolio also tips the balance toward Q for backtesting (given that only QuantConnect has a comparable data environment), so I am leaning towards using Q for backtesting as well.

So if Q is my environment for research & backtesting, what is the best environment for production? Not quite clear at the moment...
Using either zipline-live or IBridge.py would theoretically be easier for porting, since they are either pretty close to or simulate the Q API, whereas QC would be a larger job, requiring a bigger port and significantly more testing of the new code. From a production-stability perspective, QC (even though I didn't test production) should have the upper hand, being a mature, supported, server-based environment, as opposed to me setting up PCs to run the other environments. However, if I had a big-money situation where interruption of service would be a problem, and given the current experience with Q, I would probably take the time to learn how to manage a cloud-based instance of IBridge.py, because I would be master of my own fate.
Recurring cost-wise, if you don't count my time wasted administering my own PCs or cloud instances :-(, QC has a subscription of $20/month + $10 for each additional algorithm you run.

I am still flip-flopping on the production environment... Can anyone with experience setting up IBridge.py with IB, especially in a headless cloud environment, share it?

Long term, who knows what will be around, so I am not even trying to guess that... all actors are very small in this space.

Quantopian's business model is a crowd-sourced hedge fund

I've come to view Quantopian, over the long term, to be a tool for recruiting quants. If one wants an entry point to eventually get a job as an institutional quant, Quantopian is probably as good as it gets. It will be interesting to see how things go. My prediction is that the present, apparent purity of the crowd-sourced model will diminish as time goes on. They will want (and possibly need, due to legal/compliance issues) to hire on individuals identified on the site and through face-to-face interactions at various Quantopian events. It is heading in this direction already. I learned that one Quantopian "manager" is required to share any relevant trading ideas with Quantopian first for the next three years, as part of his licensing agreement. So, it is not just about licensing algos. This is o.k., and a great opportunity for some, but it highlights that the value is really in acquiring, training, and retaining talent, and not simply licensing a bunch of algos from the crowd and cobbling them together into a fund.

http://www.ibridgepy.com/2017/08/27/tutorial-migrating-from-quantopian-to-ibridgepy/
This tutorial is about migrating from Quantopian to IBridgePy.
IBridgePy was officially introduced by Interactive Brokers on Nov 10th 2016 in their webinar.
The webinar was published by IB on YouTube: https://www.youtube.com/watch?v=hogXB07OJ_I
A lot of Quantopian algorithms can run on IBridgePy even without any changes.
Disclaimer:
This is Dr. Hui Liu, the developer of IBridgePy.
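For reference, the porting surface being discussed is essentially the two Quantopian entry points that IBridgePy mimics: initialize() and handle_data(). Below is a runnable sketch of that shape; the Context class, the symbol() and order_target_percent() stubs, and the three-bar driver are mine, standing in for what the platform normally supplies.

```python
class Context:
    # Stub: the platform normally provides a persistent context object.
    pass

def symbol(ticker):
    # Stub: the real platforms resolve this to an asset object.
    return ticker

orders = []

def order_target_percent(asset, pct):
    # Stub: the real call routes an order to the broker or backtester.
    orders.append((asset, pct))

# --- The two entry points an algorithm actually defines ---

def initialize(context):
    context.security = symbol('SPY')
    context.weight = 0.5

def handle_data(context, data):
    # Rebalance to a fixed target weight on every bar.
    order_target_percent(context.security, context.weight)

# Minimal driver simulating three bars.
context = Context()
initialize(context)
for bar in range(3):
    handle_data(context, data=None)

print(orders)
```

An algorithm written against only these two entry points (plus basic ordering calls) is the kind that, per the tutorial above, may run on IBridgePy with few or no changes.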

Dr. Hui, thanks for the very nice tutorials. Could you also tell us what kind of historical data coverage IB provides with a paper trading account? Length of history/breadth for daily and minute data?
If people were to switch to IBridge.py, most would want to reproduce, or at least get close to, the results they obtained on Q.

Dr. Liu, can you please clarify whether IBridgePy is owned & supported by Interactive Brokers? What exactly is the relationship between IBridgePy and IB?

Thanks,

One possibility not yet mentioned, I believe:

http://www.zorro-trader.com/

@Tim

Are you using it now? Any experience?

With QuantConnect there is another problem: they do not currently support trading on, or connecting to, linked IB accounts. Maybe it will be supported next month.

@Thomas

No, I must admit that I have never actually used it, just had a good look around on their site some time ago. Sorry. :-)

If you want to diversify and run, say, 8 different algorithms simultaneously on QuantConnect, it will cost you $100/month ($20 subscription + $10 per algorithm, as someone mentioned).

For about the same monthly amount you may consider the combination of the RightEdge trading software and an appropriate data source, as described by Andreas Clenow on his Following the Trend site, especially if you are familiar with C#:

http://www.followingthetrend.com/2015/10/making-a-proper-equity-simulation-on-a-budget-data/

http://www.followingthetrend.com/2015/11/how-to-make-proper-equity-simulations-on-a-budget-part-2-software/

http://www.followingthetrend.com/2016/06/norgate-data-for-rightedge-review/

I haven't tried it myself, I'm afraid, just trying to be helpful. :-)

That's not entirely a fair comparison, since the QuantConnect service is hosted. If I remember correctly, RightEdge is a desktop application. If you wanted to run QuantConnect locally with your own data, it's free. I'm not sure what good options you'd have for UIs, though; apparently people have written some open-source front ends, but I haven't gotten that far yet...

You are of course absolutely right, Simon. Sorry for this oversight.

@Karen Chaltikian
You may refer to this page to learn about historical data availability at IB: http://interactivebrokers.github.io/tws-api/historical_limitations.html#gsc.tab=0
The data provided by IB are not exactly the same as Quantopian's, but the backtest results are close enough. I have tested this myself and will publish my study on the issue.

@Takis Mercouris
IBridgePy is neither owned nor supported by Interactive Brokers. There is no relationship of any kind between IBridgePy and IB. Last year, on Nov 10th 2016, IB held a webinar and introduced IBridgePy to their customers, because IB did not have their own Python API at that time.

Would anyone recommend building your own infrastructure?

@Carl Bosson, you could do it, but don't underestimate the amount of time (a lot) and money (mostly for the data) needed to maintain your own infrastructure. I'd be happy to hear from people who do it, though.

My preference is to run the code on a platform that does all the infrastructure work and provides the financial data (and to pay for their work too), so that I can focus on my algorithms. BUT I like to know there is the option to run the algorithms myself in case the service provider shuts down: this requires the backtester and broker-integration code to be open source, and the data to be purchasable in some way.

For example, we could use zipline-live to run our algorithms now, provided we spend tons of time completing the missing features. It's good to know we can do it, but I'd rather move to another service provider and keep spending my time on research. My preference goes to QuantConnect, and in case they shut down live trading, here is how you can host the platform yourself (this also gives you an idea of what you'd have to do if you opt to run your own infrastructure):

https://www.quantconnect.com/forum/discussion/2400/avoiding-vendor-lock-in-running-lean-on-your-server

@Chris
This is more than enough. :-)

Has anyone successfully ported their algos to another platform (QuantConnect or zipline-live)?
I tried both, but neither is a perfect substitute for Quantopian.
For QuantConnect, I think I need more time to get familiar with its APIs.
For zipline-live, I ran into many problems trying to get my algos running live.

Not yet.

As I wrote in another thread, I've looked at some other platforms and alternatives, but none of them can compete with Quantopian, especially the community. I miss Quantopian very much, but I have to keep looking for an alternative. :-)

zipline-live is not mature enough. QuantConnect is not for Python users, in my eyes; they simply translate the C# into Python.

I tend towards IBridgePy, though they still have a lot to do.

@Thomas, I am sure that even if we went with IBPy or zipline-live we could build our own community via Discord or something. I have been able to get notebooks working on my local machine, and got some of the backtesting working.

@Jonathan
Congratulations! :-)

What platform are you using?

Running zipline on my local machine. It's still not great for live trading just yet, but it's promising. Just glad I've been able to migrate my research off here.

My strategy, a very basic one, has nearly finished moving to my own VM, based on IBridgePy.

Although not every Quantopian function is supported by IBridgePy, you can give it a try if your strategy is not complicated and, most importantly, you want an alternative solution in a short time.

Best,
David

@David,

I am also trying IBridgePy. One of my problems is that most of my algos use CBOE data, which I pull with Q's fetch_csv(). Maybe you have tips or ideas on how to do this with IBridgePy?

Thanks

Thomas

Thomas,

If you want to use fetch_csv(), you have to develop it yourself, since it's not supported by IBridgePy yet. Or you can try using superSymbol(), which IBridgePy supports, to get index and futures data directly from Interactive Brokers. Frankly, I am not a fan of VIX, so I have no such experience.

Best,
David

@David,

Thanks.

I know fetch_csv() is not supported by IBridgePy yet. But I'll have a try at importing zipline, since fetch_csv() comes from zipline.

Thanks again.

Probably one of the main reasons for the talk of discontinuing live trading is that zipline is far from being a reliable platform for live trading: it is missing many safety components one would put in place for live trading, and those are not trivial to develop. So it makes sense that they would focus on backtesting and use mature third-party software for execution.
The closest bet is QuantConnect; if they had a bigger community, they might add the Quandl data.

Another platform to consider, alongside TradeStation and IB, is NinjaTrader.

@Thomas: looking at the code in zipline, it should be fairly easy to rip the function out of zipline, since in the end it uses read_csv from pandas ( https://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.read_csv.html )

from abc import ABCMeta, abstractmethod  
from collections import namedtuple  
import hashlib  
from textwrap import dedent  
import warnings

from logbook import Logger  
import numpy  
import pandas as pd  
from pandas import read_csv  
import pytz  
import requests  
from six import StringIO, iteritems, with_metaclass

from zipline.errors import (  
    MultipleSymbolsFound,  
    SymbolNotFound,  
    ZiplineError  
)
from zipline.protocol import (  
    DATASOURCE_TYPE,  
    Event  
)
from zipline.assets import Equity

logger = Logger('Requests Source Logger')


def roll_dts_to_midnight(dts, trading_day):  
    if len(dts) == 0:  
        return dts

    return pd.DatetimeIndex(  
        (dts.tz_convert('US/Eastern') - pd.Timedelta(hours=16)).date,  
        tz='UTC',  
    ) + trading_day


class FetcherEvent(Event):  
    pass


class FetcherCSVRedirectError(ZiplineError):  
    msg = dedent(  
        """\  
        Attempt to fetch_csv from a redirected url. {url}  
        must be changed to {new_url}  
        """  
    )

    def __init__(self, *args, **kwargs):  
        self.url = kwargs["url"]  
        self.new_url = kwargs["new_url"]  
        self.extra = kwargs["extra"]

        super(FetcherCSVRedirectError, self).__init__(*args, **kwargs)


# The following optional arguments are supported for  
# requests backed data sources.  
# see http://docs.python-requests.org/en/latest/api/#main-interface  
# for a full list.  
ALLOWED_REQUESTS_KWARGS = {  
    'params',  
    'headers',  
    'auth',  
    'cert'  
}


# The following optional arguments are supported for pandas' read_csv  
# function, and may be passed as kwargs to the datasource below.  
# see http://pandas.pydata.org/  
# pandas-docs/stable/generated/pandas.io.parsers.read_csv.html  
ALLOWED_READ_CSV_KWARGS = {  
    'sep',  
    'dialect',  
    'doublequote',  
    'escapechar',  
    'quotechar',  
    'quoting',  
    'skipinitialspace',  
    'lineterminator',  
    'header',  
    'index_col',  
    'names',  
    'prefix',  
    'skiprows',  
    'skipfooter',  
    'skip_footer',  
    'na_values',  
    'true_values',  
    'false_values',  
    'delimiter',  
    'converters',  
    'dtype',  
    'delim_whitespace',  
    'as_recarray',  
    'na_filter',  
    'compact_ints',  
    'use_unsigned',  
    'buffer_lines',  
    'warn_bad_lines',  
    'error_bad_lines',  
    'keep_default_na',  
    'thousands',  
    'comment',  
    'decimal',  
    'keep_date_col',  
    'nrows',  
    'chunksize',  
    'encoding',  
    'usecols'  
}

SHARED_REQUESTS_KWARGS = {  
    'stream': True,  
    'allow_redirects': False,  
}


def mask_requests_args(url, validating=False, params_checker=None, **kwargs):  
    requests_kwargs = {key: val for (key, val) in iteritems(kwargs)  
                       if key in ALLOWED_REQUESTS_KWARGS}  
    if params_checker is not None:  
        url, s_params = params_checker(url)  
        if s_params:  
            if 'params' in requests_kwargs:  
                requests_kwargs['params'].update(s_params)  
            else:  
                requests_kwargs['params'] = s_params

    # Giving the connection 30 seconds. This timeout does not  
    # apply to the download of the response body.  
    # (Note that Quandl links can take >10 seconds to return their  
    # first byte on occasion)  
    requests_kwargs['timeout'] = 1.0 if validating else 30.0  
    requests_kwargs.update(SHARED_REQUESTS_KWARGS)

    request_pair = namedtuple("RequestPair", ("requests_kwargs", "url"))  
    return request_pair(requests_kwargs, url)


class PandasCSV(with_metaclass(ABCMeta, object)):

    def __init__(self,  
                 pre_func,  
                 post_func,  
                 asset_finder,  
                 trading_day,  
                 start_date,  
                 end_date,  
                 date_column,  
                 date_format,  
                 timezone,  
                 symbol,  
                 mask,  
                 symbol_column,  
                 data_frequency,  
                 **kwargs):

        self.start_date = start_date  
        self.end_date = end_date  
        self.date_column = date_column  
        self.date_format = date_format  
        self.timezone = timezone  
        self.mask = mask  
        self.symbol_column = symbol_column or "symbol"  
        self.data_frequency = data_frequency

        invalid_kwargs = set(kwargs) - ALLOWED_READ_CSV_KWARGS  
        if invalid_kwargs:  
            raise TypeError(  
                "Unexpected keyword arguments: %s" % invalid_kwargs,  
            )

        self.pandas_kwargs = self.mask_pandas_args(kwargs)

        self.symbol = symbol

        self.finder = asset_finder  
        self.trading_day = trading_day

        self.pre_func = pre_func  
        self.post_func = post_func

    @property  
    def fields(self):  
        return self.df.columns.tolist()

    def get_hash(self):  
        return self.namestring

    @abstractmethod  
    def fetch_data(self):  
        return

    @staticmethod  
    def parse_date_str_series(format_str, tz, date_str_series, data_frequency,  
                              trading_day):  
        """  
        Efficient parsing for a 1d Pandas/numpy object containing string  
        representations of dates.  
        Note: pd.to_datetime is significantly faster when no format string is  
        passed, and in pandas 0.12.0 the %p strptime directive is not correctly  
        handled if a format string is explicitly passed, but AM/PM is handled  
        properly if format=None.  
        Moreover, we were previously ignoring this parameter unintentionally  
        because we were incorrectly passing it as a positional.  For all these  
        reasons, we ignore the format_str parameter when parsing datetimes.  
        """

        # Explicitly ignoring this parameter.  See note above.  
        if format_str is not None:  
            logger.warn(  
                "The 'format_str' parameter to fetch_csv is deprecated. "  
                "Ignoring and defaulting to pandas default date parsing."  
            )  
            format_str = None

        tz_str = str(tz)  
        if tz_str == pytz.utc.zone:  
            parsed = pd.to_datetime(  
                date_str_series.values,  
                format=format_str,  
                utc=True,  
                errors='coerce',  
            )  
        else:  
            parsed = pd.to_datetime(  
                date_str_series.values,  
                format=format_str,  
                errors='coerce',  
            ).tz_localize(tz_str).tz_convert('UTC')

        if data_frequency == 'daily':  
            parsed = roll_dts_to_midnight(parsed, trading_day)  
        return parsed

    def mask_pandas_args(self, kwargs):  
        pandas_kwargs = {key: val for (key, val) in iteritems(kwargs)  
                         if key in ALLOWED_READ_CSV_KWARGS}  
        if 'usecols' in pandas_kwargs:  
            usecols = pandas_kwargs['usecols']  
            if usecols and self.date_column not in usecols:  
                # make a new list so we don't modify user's,  
                # and to ensure it is mutable  
                with_date = list(usecols)  
                with_date.append(self.date_column)  
                pandas_kwargs['usecols'] = with_date

        # No strings in the 'symbol' column should be interpreted as NaNs  
        pandas_kwargs.setdefault('keep_default_na', False)  
        pandas_kwargs.setdefault('na_values', {'symbol': []})

        return pandas_kwargs

    def _lookup_unconflicted_symbol(self, symbol):  
        """  
        Attempt to find a unique asset whose symbol is the given string.  
        If multiple assets have held the given symbol, return a 0.  
        If no asset has held the given symbol, return a  NaN.  
        """  
        try:  
            uppered = symbol.upper()  
        except AttributeError:  
            # The mapping fails because symbol was a non-string  
            return numpy.nan

        try:  
            return self.finder.lookup_symbol(uppered, as_of_date=None)  
        except MultipleSymbolsFound:  
            # Fill conflicted entries with zeros to mark that they need to be  
            # resolved by date.  
            return 0  
        except SymbolNotFound:  
            # Fill not found entries with nans.  
            return numpy.nan

    def load_df(self):  
        df = self.fetch_data()

        if self.pre_func:  
            df = self.pre_func(df)

        # Batch-convert the user-specifed date column into timestamps.  
        df['dt'] = self.parse_date_str_series(  
            self.date_format,  
            self.timezone,  
            df[self.date_column],  
            self.data_frequency,  
            self.trading_day,  
        ).values

        # ignore rows whose dates we couldn't parse  
        df = df[df['dt'].notnull()]

        if self.symbol is not None:  
            df['sid'] = self.symbol  
        elif self.finder:

            df.sort_values(by=self.symbol_column, inplace=True)

            # Pop the 'sid' column off of the DataFrame, just in case the user  
            # has assigned it, and throw a warning  
            try:  
                df.pop('sid')  
                warnings.warn(  
                    "Assignment of the 'sid' column of a DataFrame is "  
                    "not supported by Fetcher. The 'sid' column has been "  
                    "overwritten.",  
                    category=UserWarning,  
                    stacklevel=2,  
                )  
            except KeyError:  
                # There was no 'sid' column, so no warning is necessary  
                pass

            # Fill entries for any symbols that don't require a date to  
            # uniquely identify.  Entries for which multiple securities exist  
            # are replaced with zeroes, while entries for which no asset  
            # exists are replaced with NaNs.  
            unique_symbols = df[self.symbol_column].unique()  
            sid_series = pd.Series(  
                data=map(self._lookup_unconflicted_symbol, unique_symbols),  
                index=unique_symbols,  
                name='sid',  
            )  
            df = df.join(sid_series, on=self.symbol_column)

            # Fill any zero entries left in our sid column by doing a lookup  
            # using both symbol and the row date.  
            conflict_rows = df[df['sid'] == 0]  
            for row_idx, row in conflict_rows.iterrows():  
                try:  
                    asset = self.finder.lookup_symbol(  
                        row[self.symbol_column],  
                        # Replacing tzinfo here is necessary because of the  
                        # timezone metadata bug described below.  
                        row['dt'].replace(tzinfo=pytz.utc),

                        # It's possible that no asset comes back here if our  
                        # lookup date is from before any asset held the  
                        # requested symbol.  Mark such cases as NaN so that  
                        # they get dropped in the next step.  
                    ) or numpy.nan  
                except SymbolNotFound:  
                    asset = numpy.nan

                # Assign the resolved asset to the cell  
                df.ix[row_idx, 'sid'] = asset

            # Filter out rows containing symbols that we failed to find.  
            length_before_drop = len(df)  
            df = df[df['sid'].notnull()]  
            no_sid_count = length_before_drop - len(df)  
            if no_sid_count:  
                logger.warn(  
                    "Dropped {} rows from fetched csv.".format(no_sid_count),  
                    no_sid_count,  
                    extra={'syslog': True},  
                )  
        else:  
            df['sid'] = df['symbol']

        # Dates are localized to UTC when they come out of  
        # parse_date_str_series, but we need to re-localize them here because  
        # of a bug that wasn't fixed until  
        # https://github.com/pydata/pandas/pull/7092.  
        # We should be able to remove the call to tz_localize once we're on  
        # pandas 0.14.0

        # We don't set 'dt' as the index until here because the Symbol parsing  
        # operations above depend on having a unique index for the dataframe,  
        # and the 'dt' column can contain multiple dates for the same entry.  
        # drop_duplicates returns a new frame; without assignment the dedup is a no-op  
        df = df.drop_duplicates(["sid", "dt"])  
        df.set_index(['dt'], inplace=True)  
        df = df.tz_localize('UTC')  
        df.sort_index(inplace=True)

        cols_to_drop = [self.date_column]  
        if self.symbol is None:  
            cols_to_drop.append(self.symbol_column)  
        df = df[df.columns.drop(cols_to_drop)]

        if self.post_func:  
            df = self.post_func(df)

        return df

    def __iter__(self):  
        asset_cache = {}  
        for dt, series in self.df.iterrows():  
            if dt < self.start_date:  
                continue

            if dt > self.end_date:  
                return

            event = FetcherEvent()  
            # when dt column is converted to be the dataframe's index  
            # the dt column is dropped. So, we need to manually copy  
            # dt into the event.  
            event.dt = dt  
            for k, v in series.iteritems():  
                # convert numpy integer types to  
                # int. This assumes we are on a 64bit  
                # platform that will not lose information  
                # by casting.  
                # TODO: this is only necessary on the  
                # amazon qexec instances. would be good  
                # to figure out how to use the numpy dtypes  
                # without this check and casting.  
                if isinstance(v, numpy.integer):  
                    v = int(v)

                setattr(event, k, v)

            # If it has start_date, then it's already an Asset  
            # object from asset_for_symbol, and we don't have to  
            # transform it any further. Checking for start_date is  
            # faster than isinstance.  
            if event.sid in asset_cache:  
                event.sid = asset_cache[event.sid]  
            elif hasattr(event.sid, 'start_date'):  
                # Clone for user algo code, if we haven't already.  
                asset_cache[event.sid] = event.sid  
            elif self.finder and isinstance(event.sid, int):  
                asset = self.finder.retrieve_asset(event.sid,  
                                                   default_none=True)  
                if asset:  
                    # Clone for user algo code.  
                    event.sid = asset_cache[asset] = asset  
                elif self.mask:  
                    # When masking drop all non-mappable values.  
                    continue  
                elif self.symbol is None:  
                    # If the event's sid property is an int we coerce  
                    # it into an Equity.  
                    event.sid = asset_cache[event.sid] = Equity(event.sid)

            event.type = DATASOURCE_TYPE.CUSTOM  
            event.source_id = self.namestring  
            yield event


class PandasRequestsCSV(PandasCSV):  
    # maximum 100 megs to prevent DDoS  
    MAX_DOCUMENT_SIZE = (1024 * 1024) * 100

    # maximum number of bytes to read in at a time  
    CONTENT_CHUNK_SIZE = 4096

    def __init__(self,  
                 url,  
                 pre_func,  
                 post_func,  
                 asset_finder,  
                 trading_day,  
                 start_date,  
                 end_date,  
                 date_column,  
                 date_format,  
                 timezone,  
                 symbol,  
                 mask,  
                 symbol_column,  
                 data_frequency,  
                 special_params_checker=None,  
                 **kwargs):

        # Peel off extra requests kwargs, forwarding the remaining kwargs to  
        # the superclass.  
        # Also returns possible https updated url if sent to http quandl ds  
        # If url hasn't changed, will just return the original.  
        self._requests_kwargs, self.url =\  
            mask_requests_args(url,  
                               params_checker=special_params_checker,  
                               **kwargs)

        remaining_kwargs = {  
            k: v for k, v in iteritems(kwargs)  
            if k not in self.requests_kwargs  
        }

        self.namestring = type(self).__name__

        super(PandasRequestsCSV, self).__init__(  
            pre_func,  
            post_func,  
            asset_finder,  
            trading_day,  
            start_date,  
            end_date,  
            date_column,  
            date_format,  
            timezone,  
            symbol,  
            mask,  
            symbol_column,  
            data_frequency,  
            **remaining_kwargs  
        )

        self.fetch_size = None  
        self.fetch_hash = None

        self.df = self.load_df()

        self.special_params_checker = special_params_checker

    @property  
    def requests_kwargs(self):  
        return self._requests_kwargs

    def fetch_url(self, url):  
        info = "checking {url} with {params}"  
        logger.info(info.format(url=url, params=self.requests_kwargs))  
        # setting decode_unicode=True sometimes results in a  
        # UnicodeEncodeError exception, so instead we'll use  
        # pandas logic for decoding content  
        try:  
            response = requests.get(url, **self.requests_kwargs)  
        except requests.exceptions.ConnectionError:  
            raise Exception('Could not connect to %s' % url)

        if not response.ok:  
            raise Exception('Problem reaching %s' % url)  
        elif response.is_redirect:  
            # On the offchance we don't catch a redirect URL  
            # in validation, this will catch it.  
            new_url = response.headers['location']  
            raise FetcherCSVRedirectError(  
                url=url,  
                new_url=new_url,  
                extra={  
                    'old_url': url,  
                    'new_url': new_url  
                }  
            )

        content_length = 0  
        logger.info('{} connection established in {:.1f} seconds'.format(  
            url, response.elapsed.total_seconds()))

        # use the decode_unicode flag to ensure that the output of this is  
        # a string, and not bytes.  
        for chunk in response.iter_content(self.CONTENT_CHUNK_SIZE,  
                                           decode_unicode=True):  
            if content_length > self.MAX_DOCUMENT_SIZE:  
                raise Exception('Document size too big.')  
            if chunk:  
                content_length += len(chunk)  
                yield chunk

        return

    def fetch_data(self):  
        # create a data frame directly from the full text of  
        # the response from the returned file-descriptor.  
        data = self.fetch_url(self.url)  
        fd = StringIO()

        if isinstance(data, str):  
            fd.write(data)  
        else:  
            for chunk in data:  
                fd.write(chunk)

        self.fetch_size = fd.tell()

        fd.seek(0)

        try:  
            # see if pandas can parse csv data  
            frames = read_csv(fd, **self.pandas_kwargs)

            frames_hash = hashlib.md5(str(fd.getvalue()).encode('utf-8'))  
            self.fetch_hash = frames_hash.hexdigest()  
        except pd.parser.CParserError:  
            # could not parse the data, raise exception  
            raise Exception('Error parsing remote CSV data.')  
        finally:  
            fd.close()

        return frames

Or paste the code below into a file and run it, and you will get:

$ python read.py
          VIX Open  VIX High  VIX Low  VIX Close  
Date  
1/2/2004     17.96     18.68    17.54      18.22  
1/5/2004     18.45     18.49    17.44      17.49  
1/6/2004     17.66     17.67    16.19      16.73  
1/7/2004     16.72     16.75    15.50      15.50  
1/8/2004     15.42     15.68    15.32      15.61

code:

#!/usr/bin/env python
# encoding: utf-8

import pandas as pd


def main():
    vix_url = 'http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv'
    # skip the disclaimer row at the top of the CBOE file
    df = pd.read_csv(vix_url, skiprows=1, index_col='Date')
    print(df.head(5))


if __name__ == '__main__':
    main()

@Peter, have you ever tried IBridgePy? Any comments on it?

I did not go further with IBridgePy for several reasons:
1. IBridgePy includes a binary file named IBCpp.pyd that ships without source code. I don't want to choose a small project that contains mysterious components.
2. IBridgePy only adds a thin layer that mimics zipline, but it lacks many of zipline's features; it might be suitable for simple zipline projects but not for complicated ones.
IMHO, zipline-live is the better choice in the above two respects.
3. As for the data source: I trade with IB, but I don't subscribe to IB's data feed since I use Quantopian's live trading now. I don't know whether IB's data feed is good enough, or how much it costs if I only want a limited set of stocks and ETFs, including VIX/XIV.

I'm considering free data sources such as IEX or Alpha Vantage; my plan is to extend zipline-live to ingest live data from them. But I'm not sure about the data quality, and the coding and debugging will take time.
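As a minimal sketch of what pulling a quote from Alpha Vantage could look like: the `GLOBAL_QUOTE` endpoint and the `'05. price'` field name follow Alpha Vantage's documented response shape, but verify them against the current docs before relying on this.

```python
import json
from urllib.parse import urlencode

BASE = 'https://www.alphavantage.co/query'

def quote_url(symbol, api_key):
    # GLOBAL_QUOTE returns the latest price for a single symbol
    params = {'function': 'GLOBAL_QUOTE', 'symbol': symbol, 'apikey': api_key}
    return BASE + '?' + urlencode(params)

def parse_quote(payload):
    # extract the last price from the JSON body; field names follow
    # the documented "Global Quote" response shape
    quote = json.loads(payload)['Global Quote']
    return float(quote['05. price'])

# sample response trimmed to the fields we need
sample = '{"Global Quote": {"01. symbol": "SPY", "05. price": "447.3100"}}'
print(quote_url('SPY', 'demo'))
print(parse_quote(sample))  # 447.31
```

The free tier is rate-limited, so a real feed would cache responses rather than hit the endpoint on every bar.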

Quantconnect might be the choice when taking the data feed into account, but as someone mentioned, its Python support is just a wrapper for C# APIs, and I'm tired of switching to a totally new platform and redoing my tests.

Anyway, I will try my free solution first and report back here if I have results.

@Peter:

Many thanks! I will have a look.

@Lucas,
For data pricing, log into your Account Management on IB's site, or better, open a ticket with IB.

@Lucas, thanks, will do.

Unless you need server-side execution, just use the IB API. For simplicity, use Excel's VBA with IB's ActiveX control. If you cannot live without Python, look into xlWings and PyXLL. For reasonable compensation, I can probably transfer a simple algorithm...

I played with IBridgePy and it works for me, but I'm waiting for zipline-live, as that will be closest to Quantopian. However, I don't know how the zipline-live guys are tracking, as the Slack channel is not very busy. I'll dedicate next week to porting my non-pipeline algos and suspend any pipeline algos until zipline-live has pipelines. In the end it's a blessing in disguise: I can finally use more than one fetcher, I can use fetcher whenever I want... and I can use TensorFlow...

@Peter
You use IBridgePy. Have you subscribed to market data?

Most data I need is free when you trade actively. I have real-time data for SMART, which covers most exchanges, and I can query the app for current prices and such.

The most difficult part is creating a proper headless installation of TWS or the gateway that talks to everything. I now have a setup where I can use IBridgePy or zipline-live on the same Ubuntu VPS.

Headless installation? You mean installation without a GUI, especially on a Linux system such as Ubuntu?

I have worked with the IB API and its connections; it is not easy to build a robust interface that will reconnect after a lost connection and keep asynchronous updates flowing for your portfolio, all data feeds, and notifications.

Compared with the effort of building such a system, other platforms cost $20 or $100; it would be insane to build your own integration, especially for personal use only. Most libraries provide only the minimum.

You are better off paying someone to migrate your algo to QuantConnect, NinjaTrader, or TradeStation than trying to connect directly to IB.
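The reconnect logic mentioned above can at least be sketched in a framework-agnostic way. In this hedged example, `connect` and `is_alive` are hypothetical callables standing in for whatever connect and heartbeat calls your broker API provides:

```python
import time

def backoff_delays(base=1.0, cap=60.0, attempts=6):
    # exponential backoff: 1, 2, 4, ... seconds, capped at `cap`
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

def connect_with_retry(connect, is_alive, delays=None):
    # `connect` returns a connection object or None on failure;
    # `is_alive` checks the connection (both are placeholders for
    # the real broker API calls)
    for delay in (delays if delays is not None else backoff_delays()):
        conn = connect()
        if conn is not None and is_alive(conn):
            return conn
        time.sleep(delay)
    raise ConnectionError('gave up reconnecting')
```

A production version would also resubscribe market data and re-sync open orders after each successful reconnect, which is where most of the real effort goes.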

This is another alternative:
https://www.portfolio123.com/doc/trade/

Portfolio123's TRADE feature allows you to send orders to your
Interactive Brokers (IB) account. TRADE greatly simplifies the process
of rebalancing your portfolios by eliminating the emotional, error
prone, and time-consuming process of entering orders.

Not a programming language, but you can define a lot of custom rules.
Portfolio123 has been online for 10 years and has a very good, almost bias-free backtest dataset, having fixed a lot of data traps that others are probably not yet aware of.

The "weak" point of Portfolio123, as I see it: if you choose 'Investor', which costs $30 monthly, you can only use 2 years of historical data for backtesting. If you want more, you have to pay about $100 monthly.

Attractive?

Someone above mentioned Quantiacs, but Quantiacs is just a contest platform, not for live trading.

Peter Bakker,

"The most difficult part is to create proper headless installations of tws or gateway that talks to everything. I have now a state where I can use IBridgePy or zipline-live on the same Ubuntu vps."

Any good links or notes on that? Also, at this point, if you had to deploy an algorithm that trades on IB and that doesn't use the pipeline, would you feel comfortable to deploy on zipline-live?

Did you use the vagrant file that comes with zipline-live?

Thanks,
Takis Mercouris

Takis Mercouris :
Non-pipeline algos work with zipline-live now.
There is no Vagrant file at the moment, but Peter managed to get it installed on Amazon, and I have Docker files for an on-prem install.
Join us on Slack if you need help on setup.

@Takis
It seems you have been using zipline-live for a while and have collected some experience, right? I got the "Time to say goodbye" email from Q today. It seems a lot of work has been done in the last few weeks. Could you confirm this?

I just visited zipline-live.io. To be honest, I find the documentation, such as the tutorial and feature list, still too 'thin'. I am not sure whether it is really so easy to use. How do I do multiple-account trading if I have several linked accounts at IB?

@Thomas,
No, I don't have much experience with zipline-live yet; I was just doing my due diligence to figure out which environment to direct my time investment toward. As it stands, I am leaning towards zipline-live, and in the next 1-2 weeks I will work on porting an algorithm to it with the intent to trade live.

The documentation is thin, but there seems to be an active community on https://zipline-live.slack.com/...

@Takis
It seems the Slack group is not open to everyone; must one be invited?

@Thomas: you can find the invite url in the contacts page: http://www.zipline-live.io/contact

Does anyone here have experience with backtrader?
Although Quantopian/zipline was my first choice after several weeks of surveying, I think backtrader is possibly a better candidate than the others at this moment. My concerns for a trading platform are:

  1. Data feeding: both online and offline data should be well supported without too much trouble. With zipline, I suffered for almost a week just loading a simple Yahoo-downloaded history CSV file, and I think it may be a design issue. I am not sure whether the similar project, zipline-live, fares differently. Backtrader also supports real-time data feeds, including the popular IB; there is a list on its site.
  2. pandas support: pandas is a very important Python package for data analysis, and a trading platform that doesn't support it causes trouble. Another popular open-source project (seemingly used mostly in China), PyAlgoTrade, doesn't support pandas, and in my experience this forces extra coding for data transfer.
  3. Analysis: Quantopian has an awesome analysis report, pyfolio, that I haven't found in any other Python trading project. But it seems backtrader can integrate it well.
  4. Live trading: right now backtrader only supports IB, and I'm not sure about its reliability.

Over the coming days or weeks, I will give it a try and report back here if there is any good progress.

PS: The backtrader page has a list of almost all open-source trading platform projects.

@Jones
Waiting for you to share your experience.

I have been looking for an alternative for weeks, but so far I am not satisfied. I planned to switch to IBridgePy.com, but there are problems. One has to use the IB Gateway API, and I can't access it from work because of the company firewall; and especially after I was told that I would have to pay $1,000 yearly ($600 for the first year) for multiple-account trading, I decided to go elsewhere.

@Thomas
Thanks for sharing.

I have never really tried IBridgePy, but I took a brief technical look at the package. It's likely an adapter bridging a Python application to IB. In the quant world this is not wrong, except that you would have to write the whole backtesting mechanism and the test's analysis reports yourself. I just don't want to spend time building a new platform, although it's not difficult for me. Furthermore, research seems to be an important step before testing in algo trading. I prefer a tool that connects data research to backtesting, to strategy development, and then to production seamlessly, or at least without too much effort.

Hi, today we finally say goodbye to Q live trading.
Fortunately, I have almost made my algos work without Q, so I'd like to share some of my experience running Q algos locally.
1. I trade with IB.
2. I use zipline-live to replace Q live trading.
3. I use free data sources: Yahoo for EOD and a free intraday feed.

The reason I use zipline-live is that it provides all the functions of zipline plus live trading; I only had to make minor changes to my Q code to get it running properly (I didn't use pipelines or other fancy features). Another benefit is that I installed pyfolio and can use it to examine the output, the same as I did in a Q notebook.

Some problems I have met:
1. zipline-live does not support real-time data ingestion yet, so I replaced data.history(...) and data.current(...) with my own functions to get real-time price quotes.
2. zipline-live is not fully automatic; I have to follow these steps to make it run properly:
a. Set up IB TWS or the gateway with zipline-live on a cloud server.
b. Ingest EOD data before or after market hours.
c. Set up real-time quote connections.
d. Run zipline-live during market hours with the saved state file, which I believe contains the context info.
e. Go to b.
3. The trading system is quite primitive, and we have to take care of all aspects ourselves.
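The data.current(...) replacement in step 1 could be sketched as a small wrapper that prefers a live quote and falls back to the last known price. Here `live_fetch` is a hypothetical callable standing in for whatever free feed you use, and `eod_close` is a dict of the most recent end-of-day closes:

```python
def make_current_price(live_fetch, eod_close):
    # live_fetch(symbol) -> latest price, or None when the feed is down
    # eod_close: {symbol: last end-of-day close}, used as a final fallback
    cache = {}

    def current_price(symbol):
        price = live_fetch(symbol)
        if price is not None:
            cache[symbol] = price
            return price
        # feed unavailable: fall back to the most recent live quote,
        # then to the EOD close
        return cache.get(symbol, eod_close.get(symbol))

    return current_price
```

The caching layer matters in practice: free feeds drop out often, and a stop-loss check should degrade to a stale-but-recent price rather than crash.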

I still haven't finished all of this, but I believe it's doable and will eventually work for me. As Peter mentioned, we will have more choices once everything is settled: we can get mail notifications, and we can install whatever modules we want.

@Lucas,

Thanks for your experience sharing.

  1. zipline-live does not support real-time data ingestion now, so...
    Have you subscribed to market data from IB? If so, maybe you could use that data directly, especially the minute data. Such minute data can't be obtained from Yahoo Finance, right?

  2. How do you do the backtesting? On Q here?

Besides, I wonder whether you do multiple-account trading?

I would like to use the IB Gateway, but as I wrote above, I can't connect to it from work because of the company firewall.

Good luck!

@Thomas: zipline is the backtest engine that drives Q. zipline-live extends it with brokerage support, hence backtesting is exactly the same as on Q.
I recommend you read through zipline's documentation: https://zipline.io

Regarding real-time data ingestion: I just posted a PR which solves that problem.

@Thomas
1. I did not subscribe to IB real-time data; my algo basically uses EOD data, and I just need real-time data for stop losses, so I use a free data feed for that.
2. As Tibor mentioned, zipline-live is based on zipline, so if you can do backtesting with zipline, you can do the same with zipline-live.
I do single-account trading now, and I think you could find a cloud server for your trading if you want to make it fully automatic.

@Lucas

Care to share with us the exact procedure you used to integrate your free EOD data (wherever you're finding it) with zipline? Specifically, I understand the quantopian-quandl bundle, but what about things like SPY, or futures?

@Morgan
Right now I use another free data source which I don't want to disclose here.
But I think you could try Yahoo EOD data, which still works, or the APIs from https://www.alphavantage.co; the output looks neat and promising too.

@Lucas Thanks, but that's not really what I wanted to know. Suppose I download a bunch of data: how do I tell zipline where the data is and how it's formatted? Where do I put the files, etc.? There seems to be a file for the SPY benchmark, but for futures data, for example, I can't find where it's supposed to go.

@Morgan, maybe the following link is useful for you:
http://www.prokopyshen.com/create-custom-zipline-data-bundle
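The guide above boils down to reshaping your downloaded files into one OHLCV CSV per symbol and registering an ingest function with zipline. The reshaping step might look like the sketch below; the Yahoo-style column names (Date, Open, High, Low, Close, Adj Close, Volume) are an assumption about your source file, so adjust them to match your data.

```python
import csv
import io

def to_ohlcv(yahoo_csv):
    # keep only the columns a minimal daily bundle needs, renamed to
    # the lowercase headers zipline-style ingest code usually expects
    rows = csv.DictReader(io.StringIO(yahoo_csv))
    out = io.StringIO()
    writer = csv.writer(out, lineterminator='\n')
    writer.writerow(['date', 'open', 'high', 'low', 'close', 'volume'])
    for r in rows:
        writer.writerow([r['Date'], r['Open'], r['High'],
                         r['Low'], r['Close'], r['Volume']])
    return out.getvalue()

sample = (
    'Date,Open,High,Low,Close,Adj Close,Volume\n'
    '2017-01-03,225.04,225.83,223.88,225.24,210.96,91366500\n'
)
print(to_ohlcv(sample))
```

Whether to use Close or Adj Close depends on how you handle splits and dividends in your backtest; the sketch keeps the raw close.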

Here I just want to give a short summary about backtrader, which I mentioned in last week's post.

Yes, it works, and everything went smoothly in a way I haven't experienced with other open-source trading platforms. In particular, at the data-feeding step it can simply load both popular types of history data, CSV and pandas DataFrame, for backtesting.

Setting up the environment didn't consume too much troubleshooting time: just use the Anaconda package installation and follow the installation guides on the backtrader homepage. The only thing to be aware of is the pyfolio installation; at the moment backtrader supports only pyfolio version 0.5.1, but it can be installed easily from the Quantopian repository. Also, I suggest learning how to create a virtual environment for the installation; that will ease your life.

As for algorithm programming, it is of course an event-driven platform. I wrote a very simple SMA crossover strategy for a one-year backtest. I think the algorithm programming will be time-consuming, with a steep learning curve for beginners, because backtrader's concepts differ from zipline's. I agree that zipline is easier for beginners at first touch.
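The strategy logic itself is framework-agnostic. A plain-Python sketch of the crossover signal (this is not backtrader's API, which wraps the same idea in a Strategy class with indicator objects) looks like:

```python
def sma(values, window):
    # simple moving average; None until enough data has accumulated
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out

def crossover_signals(closes, fast=3, slow=5):
    # +1 when the fast SMA crosses above the slow SMA (buy),
    # -1 on the opposite cross (sell), 0 otherwise
    f, s = sma(closes, fast), sma(closes, slow)
    signals = [0] * len(closes)
    for i in range(1, len(closes)):
        if None in (f[i], s[i], f[i - 1], s[i - 1]):
            continue
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals[i] = 1
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals[i] = -1
    return signals
```

In an event-driven framework the same comparison happens inside the per-bar callback instead of over full lists, but the crossover condition is identical.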

Then the pyfolio integration. The pip installation didn't seem to work in my environment at first, so I had to install it from the Quantopian repository with the conda command. I'm not sure whether this repository will continue in the future, and that is a risk; I still need to spend time solving the environment issue for the pip installation.
Furthermore, be careful when creating the pyfolio plots for the backtest result: I ran into a benchmark problem similar to the one I met in zipline. Fortunately, pyfolio provides an extra parameter for external benchmark data that zipline doesn't. I spent some time identifying the benchmark data format so that I could feed it properly. pyfolio produces rich performance data, so I can get a feel for my algorithm.

The last thing left is live data feeding and trading. This doesn't seem likely to be finished in a short time, but at least I have a full backtesting environment now.

I hope this message is helpful for those who are struggling to build their own backtesting/trading platforms.

@Jones
You are very helpful!

But I have a question:
I want to know how to do multiple-account trading. I have a primary account and several linked accounts at IB. Formerly, on Quantopian, I opened several live accounts separately.
With QuantConnect, the first account costs $20/month and each additional one costs $10/month.
With IBridgePy.com, since it uses the IB Gateway, trading separate accounts means starting several instances of IB Gateway. That is not possible on one machine without many problems, so I would have to buy several VPSs. Another way is to log into my primary account but trade on a linked account. This is possible, but if I use IBridgePy.com, they will ask me for $1,000 yearly ($600 for the first year).

So I wonder whether one can do this with backtrader, or whether I would have to pay for that too.

@Lucas Yup, that is it! Seeing as those instructions are a pretty crucial piece of info, I wonder why they aren't more readily accessible. Thank you!

I wonder whether anyone here has experience with multiple-account trading, or does everybody just trade on a single account?

@Thomas
It sounds like these two platforms may not be an option for you because of the cost. For your special requirements, I think an investment in IT technology is required, either through self-study or outsourcing. There is no free lunch.

Based on my experience, I would connect to the IB Gateway directly from my trading platform. It looks possible, judging by a Google search:

https://ninjatrader.com/support/forum/showthread.php?t=82020

If running multiple IB Gateways on the same machine is a real problem, just try multiple virtual machines and run the strategy application on each VM separately. However, this will cost more in hardware investment.

@Jones
You are right; it's simply a question of cost. Either I pay for it, or I do it myself. Thanks for sharing the information.

Hi, I developed an automated trading suite in Python, which includes:
1. A MySQL database with 1-minute historical stock data for backtesting and model training, with a daily process that fills the database from public sources.
2. An automated trading tool using the Interactive Brokers API, including market-data capture and circuit breakers.
3. A backtesting tool using a real-time visualization library (also in Python).
Items 1 and 2 can run on any local or AWS Linux machine.
Contact me on my website if you need help building your own tool: www.kitquant.com

Any updates on the best platform to choose in 2018?

Interactive Brokers has added a new course to their Traders Academy website: https://gdcdyn.interactivebrokers.com/en/index.php?f=25243
When you scroll down the page, you will find a course called "Automated Trading with IBridgePy". It is a free course, hosted by QuantInsti.
IBridgePy can run most Quantopian code without any changes, and it provides many more user-friendly functions.

Disclaimer:
This is Dr. Hui Liu, the developer of IBridgePy.
