Why ever use 0 commission in backtesting

I constantly see people setting commission to 0 to see the "real alpha generation" potential of an algorithm. I don't get it, since some algorithms perform completely differently as soon as you introduce any realistic commission.

When you set the commission to 0 and find some alpha, what does that mean? That it generates X alpha in a hypothetical environment?
Isn't the whole purpose of backtesting to simulate real conditions?


Hey Lucas,

Good question. In general when working with a new idea in statistics/modeling disciplines, you want to start with the gentlest possible test conditions and then ramp up the intensity as you polish your idea and make it more robust. For example, if you have an idea for an alpha factor and it fails on real data, you can't be sure if it failed because your idea was bad, or because it wasn't developed enough and needed a bit of work. I like to use the following general workflow when exploring new ideas.

  1. Hold at least one set of data out of sample. Usually more.
  2. Simulate some data to test if my idea might make any sense under ideal, laboratory conditions.
  3. Increase the noise parameters of my simulated data and see how my idea's signal falls off.
  4. Make improvements.
  5. Move to testing on real data.
  6. Assuming it still passes, make more improvements based on its new failings.
  7. Increase battery of tests.
  8. Once I'm sure it works, test it out of sample.
  9. If it fails out of sample, then I have N chances to try to fix it, where N is the number of sets of data I held out.

You don't have to follow that exactly, that's just what I find works as a very rough guideline. In practice, if you immediately jump to full-blown commission and slippage backtesting with an alpha, you may miss a lot of good ideas that just needed some development. You may also miss some of the finer points of where the alpha is strong and weak. Lastly, even if an alpha is not tradable on its own because it only works on illiquid stocks or has too high a turnover, clever improvements can sometimes be made to fix that, especially if you combine it with other alphas that temper some of its orders in the portfolio weighting stage, or use it as an input to other alpha formulas.
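To make steps 2 and 3 concrete, here is a minimal, purely illustrative sketch (not from the lecture series) of simulating a toy alpha and watching its signal fall off as the noise is dialed up; every number in it is made up:

import numpy as np

np.random.seed(0)
n_obs = 5000

# A toy "alpha": a factor we believe predicts next-period returns.
signal = np.random.normal(size=n_obs)

for noise_scale in (0.0, 0.5, 1.0, 2.0, 5.0):
    # Returns are a small multiple of the signal plus noise; a larger
    # noise_scale means harsher, more market-like conditions.
    returns = 0.1 * signal + noise_scale * np.random.normal(size=n_obs)
    ic = np.corrcoef(signal, returns)[0, 1]   # information coefficient
    print("noise %.1f -> IC %.3f" % (noise_scale, ic))

With zero noise the information coefficient is 1 by construction; the interesting part is how quickly it decays toward market-like levels as the noise grows.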

Does this help?


I am not sure I agree with the idea of starting as gently as possible. Besides setting commission to 0, I could also pick a bull market or compare to a terrible benchmark. In the end my algorithm will look good, but as soon as I introduce the other factors, won't I have wasted a lot of time and probably overfit it?

Good question. In general when working with a new idea in statistics/modeling disciplines, you want to start with the gentlest possible test conditions and then ramp up the intensity as you polish your idea and make it more robust.

I am also not sure how testing with commission would mask some ideas. The algorithm can underperform the benchmark but still have good metrics.

Hi Delaney,

Great response. I think it would be even more helpful with a concrete, even if basic, example. Maybe there's already a relevant notebook?

Lucas,

It's less about the comparison you make and more about the amount of noise in the data you test on. I agree that cherry-picking market conditions or benchmarks is a bad idea. Say you want to determine whether stocks with poor fundamentals experience a fundamentals boost after a sentiment uptick, or whether the fundamentals stay just as bad and the value converges back down. Going straight to market data on this might be tough, so what I might do is first set up some fake simulated data for the two cases while programming my model, just to ensure it matches expected behavior on data for which I know the answer and can control the conditions. Then, as soon as my model seems ready to go, I'd dial up the noise or move to the market.

The simulated data can also help you develop hypothesis checks, so you know what kind of behavior you're looking for in real data and whether the performance of the model is lucky or reflects the same conditions as your simulated data. It's less vulnerable to overfitting because the data you're simulating is different from the market data, but yes, you are more at risk of wasting time here. It's up to you how much time you spend tweaking on different types of data.
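As a rough sketch of what "data for which I know the answer" could look like for that example, one might simulate both cases and check that a simple model recovers the known answer before touching market data. The series and the "model" below are placeholders, not anything from Quantopian's research environment:

import numpy as np

np.random.seed(1)
n_days = 250

# Case A: fundamentals really do improve after the sentiment uptick (slow drift up).
case_a = 0.5 + np.cumsum(np.random.normal(0.01, 0.05, n_days))
# Case B: fundamentals stay bad and the price converges back down after the spike.
case_b = 2.0 * np.exp(-np.arange(n_days) / 60.0) + np.random.normal(0.0, 0.05, n_days)

def post_event_drift(prices, window=60):
    # Toy "model": average daily move over the post-event window.
    return np.diff(prices[:window]).mean()

print("case A drift: %.4f (expect > 0)" % post_event_drift(case_a))
print("case B drift: %.4f (expect < 0)" % post_event_drift(case_b))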

As far as commission goes, it's just another masking factor that you have to pry apart when looking at the results. Alphalens does show you turnover, so you can see pretty quickly whether something has too high a turnover to make money, without commissions being added into the return stream. I would also argue you're at overfitting risk if you just try a ton of ideas on the real data with commissions included; you'll likely go through a bunch of ideas before you land on one that happens to work.
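For instance, a rough sketch of that turnover check, assuming an Alphalens-style workflow; 'my_factor' and 'prices' are placeholder inputs, and the calls here are from memory rather than from this thread:

import alphalens as al

# Align the factor with forward returns and bucket it into quantiles.
# 'my_factor' is a Series indexed by (date, asset); 'prices' is a wide DataFrame.
factor_data = al.utils.get_clean_factor_and_forward_returns(
    my_factor, prices, quantiles=5, periods=(1, 5, 10)
)

# Fraction of names in the top quantile that were not there the previous day --
# a quick proxy for whether commissions will eventually eat the returns.
top_q = factor_data["factor_quantile"].max()
turnover = al.performance.quantile_turnover(
    factor_data["factor_quantile"], quantile=top_q, period=1
)
print(turnover.mean())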

Michalis,

Thank you. I don't have a notebook right now; in general, I'd like to have more material on model development and validation in the lecture series. We're working on that, but obviously it takes time.

Hi Delaney,

Maybe I am mixing up two phases here. When we are in the data analysis phase, say checking whether factor X correlates with factor Y, I completely agree that noise is not helpful, but at that stage you don't really need to be placing orders to test the hypothesis.

I still cannot imagine a scenario, when you are building a trading algorithm that actually places buy/sell orders, in which commission would make it harder to check whether things are working or not. Maybe an example would help.

Lucas -

Here's an example to play around with. With commissions turned off, you can see that the algo "works" as expected. With commissions, it looks like I have a bug or something. Complete gibberish.

My sense is that frequency of trading, commissions, and such are treated as follow-on perturbations by developers rather than as factors, since they already have a feel for what is going to work and expect commissions to be a negligible, minor optimization. The factors are continuous in time over some timescale, so once I identify the good ones, I can sort out how frequently to apply them and with how much capital. The commissions are a drag on returns, sucking out capital on a continuous basis; for a large portfolio, I'm guessing it just ends up being a percentage per unit time per transaction.

You are right, though, that if one thought of the problem as a whole, to be globally optimized, then the commission drag would need to be in there from the get-go rather than managed at a later stage. I could ignore that commissions exist, find a factor that only works by turning over my whole portfolio every minute, put in days/weeks/months of work, and realize it was a total waste. As the attached example illustrates, one could end up doing this if one keeps set_commission(commission.PerTrade(cost=0)) while the strategy is being explored and developed.

import random

# Fixed seed so the "strategy" is reproducible run to run.
random.seed(31415927)

def initialize(context):

    # Uncomment to turn commissions off; with a commission model in effect,
    # the random trading below gets shredded by costs.
    # set_commission(commission.PerTrade(cost=0))
    set_slippage(slippage.FixedSlippage(spread=0.00))

    # Running sum of leverage and a bar counter, used to record average leverage.
    context.leverage = 0
    context.n = 0

def handle_data(context, data):

    context.leverage += context.account.leverage
    context.n += 1

    record(avg_leverage=context.leverage / context.n)

    # Every minute, target a random weight in SPY (sid 8554) -- an intentionally
    # high-turnover strategy with no real alpha.
    weight = random.uniform(0, 1)
    order_target_percent(sid(8554), weight)
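To put a rough number on the drag Grant describes, here is a back-of-the-envelope sketch; every figure in it is an assumption for illustration, not a Quantopian default:

# Back-of-the-envelope version of the "percent per unit time" commission drag.
portfolio_value = 1000000.0   # dollars
avg_share_price = 50.0        # dollars per share traded
cost_per_share = 0.005        # hypothetical per-share commission
daily_turnover = 0.20         # fraction of the portfolio traded each day
trading_days = 252

shares_per_day = portfolio_value * daily_turnover / avg_share_price
daily_cost = shares_per_day * cost_per_share
annual_drag = daily_cost * trading_days / portfolio_value
print("annual commission drag: %.2f%%" % (100 * annual_drag))   # ~0.5% at these assumptions

# Turning the whole portfolio over every minute, as in the random-weight example
# above, multiplies daily_turnover by a few hundred, and the drag with it.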

Grant makes some really good points. One similar point I was going to make is that, in general, you want to combine multiple weaker models into one stronger model. So whereas each model may fail transaction costs on its own, after combining them the resulting model will be better than the sum of its parts and may do fine after transaction costs and such.
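A minimal sketch of that combination step, with hypothetical factor Series ('alpha_a', 'alpha_b', 'alpha_c') standing in for the individual weak models:

import pandas as pd

# Z-score each factor cross-sectionally and average into one combined alpha.
alphas = pd.DataFrame({"a": alpha_a, "b": alpha_b, "c": alpha_c})
zscored = (alphas - alphas.mean()) / alphas.std()
combined = zscored.mean(axis=1)

# Convert the combined score into dollar-neutral weights. Where the individual
# alphas disagree, their orders partially cancel, which tends to reduce turnover
# relative to trading each alpha separately.
weights = combined - combined.mean()
weights = weights / weights.abs().sum()

Z-scoring before averaging keeps any one factor's scale from dominating the blend.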

In response to backtesting: generally, backtesting should be the final test done in a series of many rigorous statistical tests. Some consider it a way to develop algorithms, but because of the concerns Lucas raises (mainly overfitting), one generally wants to spend most of one's time in research and backtest only once convinced that one's model (often a combined alpha factor) works. The purposes of the backtest are mainly as follows:

  • See if your model survives real world liquidity/turnover constraints.
  • See if your model survives real world market impact (slippage) constraints.
  • See how much these constraints eat into your returns.
  • See how much transaction costs eat into your returns.
  • Estimate capacity range.

Basically, I'm not sure I can think of a specific case where I would run a backtest with 0 commissions, but I would probably do a parameter sweep across a range of assumptions for commissions, slippage, liquidity, and capital, to get a sense of how robust the model is to different possible market and execution conditions, and to see which parameters cause the biggest drop-off in returns. For instance, you might find your high-turnover model is generally okay under different liquidity assumptions because it trades very liquid stocks, but that transaction costs quickly change the estimated returns. Or you might find that a model which is okay under one set of assumed parameters quickly breaks under a small tweak to one of the conditions, indicating fragility.
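A simplified, standalone version of such a sweep (outside the backtester) might look like the following, where 'gross_returns' and 'turnover' are hypothetical daily series taken from a zero-cost run:

import numpy as np
import pandas as pd

# Apply a grid of one-way cost assumptions (in basis points) to a gross daily
# return stream, charging costs in proportion to daily turnover.
rows = []
for cost_bps in (0, 1, 2, 5, 10, 20):
    net = gross_returns - turnover * cost_bps / 10000.0
    ann_return = (1.0 + net).prod() ** (252.0 / len(net)) - 1.0
    sharpe = np.sqrt(252) * net.mean() / net.std()
    rows.append({"cost_bps": cost_bps, "annual_return": ann_return, "sharpe": sharpe})

print(pd.DataFrame(rows).set_index("cost_bps"))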

Lastly you want to check that your model will work under the execution conditions of the brokerage in which you plan to execute. Every brokerage has different parameters as discussed above, so it's a consideration when developing and deploying models.

in general you want to combine multiple weaker models into one stronger model

Looking forward to that one. Guess you'll get a beta workflow working and we'll see where we land. Using pipeline and daily bars exclusively, my prediction is that turnover and commissions will be the least of our concerns, but I could be wrong. It'll be more like asset-allocation style investing than trading, but maybe my picture is wrong.