Losing Trades are NOT "just part of the game" - Don't over-trade!!

I hope that my friends, the "usual suspects" (James, Joakim, Grant, Karl, Blue, etc.), might pick up on this one, but just FYI for anyone who doesn't know me, I have been trading my own very modest little account for more than 30 years and am still going.

As newbie traders, perhaps some of us remember having been attracted to advertising nonsense like "97% win rate with this amazing system" and hoped to do even better because we thought we were pretty clever. After a few (more likely a few hundred) losses, we maybe then succumbed to the trader-psychology concept of "accepting our losses as just part of the cost of trading". And maybe we stayed with that. After all, if we understand the "trader's equation" regarding expectancy, and ours is positive, then basically we are doing OK, right? However, with time we started to wonder if we could do better and, more specifically, just how much better. So then we tried NN & ML & DL & AI & whatever. Did they help us? As much as we had hoped?

I have just been having some discussion with James here on Q about the topic of avoiding bad trades and how, from a practical trader's perspective, even if not from an ML theory perspective, missing out on good trades is definitely NOT symmetrical with avoiding bad trades. I would now like to present the following intentionally provocative comment: you are probably OVER-TRADING and, moreover, unless you have considered this very carefully, probably about 25-30% of your trades are DOOMED LOSERS ... AND a significant proportion of those could be AVOIDABLE ... and THIS issue probably merits a LOT more attention than most of the system-tweaking that you / we are doing!!

Before you conclude that I must be looking for a fight, please forgive my deliberately provocative wording. How can I know for sure what you are doing? Of course I don't, but I would like to attempt to convince you, or rather to allow you to convince yourself, of the probable truth of my statements above, so that you might then engage with me in more discussion about the value of, and methodologies for, avoiding bad trades, even at the expense of missing a few of the good ones.

So, here is an experiment for you to try. You will (as far as I know) not be able to do this on Quantopian, so you will have to use some other platform of your choice, but hopefully after doing this experiment you will be able to bring your learning back here to Q and apply it. It is an experiment, NOT a trading system, and you must be able to use look-ahead. Here is an outline:
1) For each bar, starting from its closing price, see how much you could have gained or lost (if stopped out) by the end of the next bar (1-bar look-ahead) if you had been Long, and similarly if you had been Short. Do the same for look-ahead periods of 2, 3, ... up to 10 bars.
2) Using John Sweeney's concepts of Maximum Favorable & Maximum Adverse Excursion, track your MFE & MAE for both Long & Short for all periods between 1 & 10 bars.
3) Within this range, choose whatever you think is typically your longest trade duration (or just use 10 bars ahead) and calculate the maximum values of the MFEs & MAEs.
4) Normalize everything to percentage moves or (even better) to units of ATR.
5) Decide the minimum trade result (in ATR units) that you would accept for your trades and set this as a threshold for "viable trade" MFEs, with anything less than that being "too small". Do the same for the maximum MAE you would be comfortable allowing without being stopped out, treating anything larger than that as "too large".
6) Whatever remains within these thresholds can be considered "viable", by whatever threshold criteria you chose. Do this for both Long & Short trades and plot the results along with the price series bars.
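The experiment above can be sketched in a few lines of Python. This is a minimal, self-contained version using synthetic random-walk bars; the 1-ATR thresholds, the 10-bar horizon and the simple rolling-mean ATR are assumptions you would replace with your own data and settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic OHLC bars (random walk), just to make the sketch runnable ---
n = 500
close = 100 + np.cumsum(rng.normal(0, 1, n))
high = close + np.abs(rng.normal(0, 0.5, n))
low = close - np.abs(rng.normal(0, 0.5, n))

H = 10          # look-ahead window, in bars
MIN_MFE = 1.0   # minimum acceptable favorable excursion, in ATR units
MAX_MAE = 1.0   # largest adverse excursion tolerated, in ATR units

# crude ATR: rolling mean of the true range
prev_close = np.r_[close[0], close[:-1]]
tr = np.maximum(high - low,
                np.maximum(np.abs(high - prev_close),
                           np.abs(low - prev_close)))
atr = np.convolve(tr, np.ones(14) / 14, mode='valid')
atr = np.r_[np.full(13, np.nan), atr]   # align with the bar index

viable_long = np.zeros(n, dtype=bool)
viable_short = np.zeros(n, dtype=bool)
for t in range(14, n - H):
    fut_hi = high[t + 1:t + H + 1].max()
    fut_lo = low[t + 1:t + H + 1].min()
    mfe_long, mae_long = fut_hi - close[t], close[t] - fut_lo
    mfe_short, mae_short = mae_long, mfe_long   # short side mirrors long
    viable_long[t] = (mfe_long >= MIN_MFE * atr[t]
                      and mae_long <= MAX_MAE * atr[t])
    viable_short[t] = (mfe_short >= MIN_MFE * atr[t]
                       and mae_short <= MAX_MAE * atr[t])

# bars on which even perfect foresight offers no viable trade either way
tradable = viable_long | viable_short
span = slice(14, n - H)
print(f"bars not viable either way: {100 * (~tradable[span]).mean():.1f}%")
```

On real price series the not-viable fraction is what the discussion below is about; on this synthetic walk the number itself means nothing, only the mechanics.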
What you will find is that, even IF you had the benefit of perfect foresight (look-ahead), which of course we don't, there are many times when, looking ahead N bars, we could not actually make a "viable" trade EITHER Long OR Short, and any trades placed at such times will inevitably be losers or "go-nowheres". This result is based only on price data and is independent of any particular trading system, so any actual system will inherently be even less successful unless it specifically takes these "non-viable trading bars" into consideration.

Now, the shock to me was finding out just how large a proportion of bars (for a wide range of stocks, indices, futures, FX or whatever else you might want to look at) inherently represent non-viable trading conditions. Of course it depends on the ATR threshold chosen, but with 1 ATR as the threshold, I was surprised to find that somewhere between 15% and 40% (typically about 20-30%) of all bars were NOT useful as candidates for "viable trades" at all, whether Short or Long! Logically, these are bars that we SHOULD be diligently seeking to exclude, even BEFORE presenting data to our trading system. So, my conclusions from this little experiment were:
1) Even with the benefit of look-ahead, a surprisingly large proportion of bars are not suitable as entry candidates, either Short or Long.
2) For any real systems (with no look-ahead) the proportion of bars that we should logically exclude will be even greater.
3) If we are not excluding these "non-viable" bars and are entering trades on them, then we are inherently over-trading (and losing).
4) Trades entered on such bars will either be losers or will produce results that are so small as to be of minimal benefit.
5) The best way I can think of to improve the performance of any trading system (irrespective of the system itself) is to avoid these "useless trade bars".

If you actually try this for yourself, then I look forward to reading and sharing your comments, which will of course be most warmly welcome.
With best regards, Tony.

11 responses

Hi Tony,

I'm glad you brought our discussions on avoiding bad trades (periods) to this thread and backed them up with your own study. I concur with your findings that most algorithmic trading systems have a tendency to over-trade, mainly because they have not taken avoidance of non-viable trading periods into consideration. I arrived at the same conclusions in a previous study of the S&P 500 index using the Amibroker platform, which has Maximum Favorable & Maximum Adverse Excursion features. A substantial portion of short-term time horizons offer only non-viable trading opportunities, with spreads so thin they are eaten up by friction costs. Entering such periods is, more often than not, going against the predictive grain.

This has many practical implications for prediction model design. One approach: after identifying the viable and non-viable trading opportunities through a mechanism such as Tony just described, you could use this information to detect when it is OK to enter/exit a trade and when to avoid one. This can have a positive impact when designing systems whose target/output variable mimics actual trading outcomes with trade efficiency.

Hi James, thank you. This all ties in very nicely with our favorite topic of "regimes". Here are a few more comments from me, based on work that I am doing right now, as we speak.

1) I also use AmiBroker as my main experimental trading platform, as I now have many years of experience with its AFL language, which IMHO is more powerful than that of any similar alternative platform such as Metastock, TradeStation/EasyLanguage, MultiCharts etc. and, at least for me, is a lot easier and more intuitive than Python for the specific context of trading financial price data series. Yes, AmiBroker does have built-in MFE & MAE but, irrespective of whatever platform one uses, MFE & MAE are actually very easy to program and use for anyone who has read John Sweeney's book and understands just how useful and powerful those concepts really are as part of general trading system development.

2) Identifying non-viable periods for trading is not really very difficult, and there are many different indicators one can use for this. One good place to start, for anyone who isn't doing this already, is with Al Brooks' concept of "Barb Wire" as something to be avoided as much as possible in trading. Avoid it, or else you will almost certainly get whipsawed & cut!! With a little creativity, his concept (as per any of his 4 books) can easily be extended and improved further.
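Brooks describes "barb wire" discursively rather than as a single formula, so the following is only a loose sketch of the idea as I read it (several consecutive, heavily overlapping, small-bodied bars inside a tight range); the lookback and threshold parameters here are my own assumptions, not his rules:

```python
import numpy as np

def barbwire_mask(open_, high, low, close, atr, lookback=3,
                  body_frac=0.5, range_frac=1.5):
    """Flag bars where the last `lookback` bars overlap heavily, have small
    bodies relative to their ranges, and sit inside a tight total range
    (<= range_frac * ATR). All thresholds are illustrative assumptions."""
    n = len(close)
    mask = np.zeros(n, dtype=bool)
    for t in range(lookback - 1, n):
        hi = high[t - lookback + 1:t + 1]
        lo = low[t - lookback + 1:t + 1]
        bodies = np.abs(close[t - lookback + 1:t + 1]
                        - open_[t - lookback + 1:t + 1])
        ranges = np.maximum(hi - lo, 1e-12)
        small_bodies = np.all(bodies <= body_frac * ranges)  # doji-ish bars
        overlap = lo.max() < hi.min()   # all bars share a common price zone
        tight = (hi.max() - lo.min()) <= range_frac * atr[t]
        mask[t] = small_bodies and overlap and tight
    return mask

# toy demo on synthetic bars
rng = np.random.default_rng(1)
n = 200
close = 50 + np.cumsum(rng.normal(0, 0.3, n))
open_ = np.r_[close[0], close[:-1]]
high = np.maximum(open_, close) + np.abs(rng.normal(0, 0.2, n))
low = np.minimum(open_, close) - np.abs(rng.normal(0, 0.2, n))
atr = np.full(n, np.median(high - low))   # flat stand-in for a real ATR
mask = barbwire_mask(open_, high, low, close, atr)
```

A flagged bar would then be excluded from the entry candidates before any system logic runs.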

3) What I found pleasantly surprising is the extent to which non-viable / BarbWire periods, as identified by appropriate minimum-lag indicators that you can easily devise & apply for "real time" work, ALSO provide quite good short-term predictions of future behavior, as can be confirmed with look-ahead analysis. What this demonstrates very nicely is that, in the context of market regimes, if we define "non-tradeable regime" carefully and well, then it really does have quite a good degree of persistence. Much better, in fact, than that of uptrends or downtrends, for example.

4) What you say, James, about thin markets & spreads may be true, but I think it is not really the key issue. The sort of "non-tradable", or at best only marginally tradable, times that I am talking about occur even in very liquid stocks with small spreads. It is simply a part of real-life market behavior that occurs a surprisingly large amount of the time, but which most people, for some reason, just don't bother to think carefully about!

5) I suggest that anyone who is NOT using some sort of "regime pre-processing" like this BEFORE going into ML (or whatever else they use) is probably wasting a lot of the potential power of their predictive tools, by asking them to generate predictions on the part of the input data that is identifiable in advance as "non-tradable, so just avoid it" and is best simply left out altogether. The trader's favorite ML tools can then work more effectively, focused on just what is potentially tradable, rather than confused by other periods of market data (regimes) that are not.

6) Notwithstanding 5) above, I expect that most people will simply ignore this and keep on doing the same sort of stuff they have always been doing ... and getting the same sort of results they have always been getting. Oh well, good luck to them!

So yes, in conclusion, i agree absolutely with your statement that: "This can have a positive impact in designing systems that have target/output variable [that wants] to mimic actual trading outcome with trade efficiency" (or perhaps that wording should be "efficiently"?).

Safe trading!
All the best, T :-))

Hi Tony,

The sort of "non-tradable" or at best only marginally tradable times that i am talking about occur even in very liquid stocks with small spreads.

That is exactly what I intended to imply. Over a short-term time horizon, price movements are very noisy, more so if you factor in transaction costs. One has to extract, identify and then segregate: periods with enough volatility, causing spreads to widen, to call them mean-reverting trading opportunities; low-volatility scenarios where successive breaches of recent highs or lows form an upward or downward short-term trend, i.e. trend-following trading opportunities; and what you call the "non-tradeable regime", wherein there is low or normal volatility but such small spreads that the market is rendered, or viewed as, merely flat or consolidating, and the chance of a profitable trade, whichever side you are on, is very small if not none.

What I meant by "trade efficiency" is the market-timing metric used by some platforms to measure how efficient a trade entry/exit is vis-à-vis the short-term future horizon (i.e. if I entered long today and exited 10 days from now at its peak, my trade efficiency score is 100!).
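The score described above can be written down in a few lines. This is a sketch of the general idea (the move actually captured, divided by the full range the window offered), since exact platform formulas vary:

```python
def trade_efficiency(entry_price, exit_price, window_high, window_low,
                     is_long=True):
    """Timing efficiency of a trade over a fixed future horizon, scaled
    0-100. A score of 100 means the trade captured the entire range the
    window offered (entered at one extreme, exited at the other)."""
    total_range = window_high - window_low
    if total_range <= 0:
        return 0.0
    captured = (exit_price - entry_price) if is_long \
        else (entry_price - exit_price)
    return 100.0 * captured / total_range

# long entry at the 10-day window low (10), exit at the window high (14)
print(trade_efficiency(10, 14, window_high=14, window_low=10))  # 100.0
```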

Cheers!

Tony, James, I agree with you both. In my own research, way back, when looking at all the trades taken, the top 5% accounted for about 80% of the profits. The rest could have been dropped as not that useful. But, when you dropped them, you would also lose much of your 5% simply because they originated from the same trade triggering mechanism.

I even had one strategy where I reintroduced the losing trades in order to increase the overall profits. To keep your 5%, you might have to take the other 95% as part of the cost of achieving the 80%, and since that 95% still produced the other 20% of profits, the problem is somewhat mitigated: you did more work, for sure, but you still got that 5% making 80% of the profits.

Nonetheless, I do think that 10 days is really too short a predictive interval. However, you could use the dull periods as some price consensus where no one big enough is ready to commit or initiate a move in one direction or other. And when you see volume expanding with price, it might be your cue to piggyback along for a short duration momentum ride.

Thanks Guy, I hear you, understand, and largely agree. I think the trick is to get better at understanding the differences between continuous periods of non-tradable regime and the interspersed isolated bars that don't give useful trades either. They are not necessarily similar.

This approach of focusing attention on what doesn't work seems somewhat counter-intuitive, but it leads to formulating the problem in a different way that I think has the potential to be solved more efficiently with ML.

As an analogy, and despite the fact that many (most?) people don't spend enough time thinking about pre-processing (and I don't just mean de-noising) of data: I spent years working in the oil exploration & production industry, watching geologists & petrophysicists naively dump data into NNs while saying "the machine can correctly pre-process the data itself", and then wondering why their results were not so great. It makes no sense to give any sort of ML the task of struggling to figure something out for itself, especially in the context of noisy data, when an operation (for example differencing or taking ratios) can be done far better as part of intelligent human pre-processing first, leaving the machine to spend its efforts looking for more subtle features. So I see this idea of working with the not-so-good trading results first as part of good pre-processing, in which I am definitely a strong believer. I think we are more or less on the same track so far.
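To make the pre-processing point concrete, here is a trivial sketch of the kind of transform meant: differencing to remove non-stationarity, and ratios to normalize scale, applied by hand before any ML model sees the data. The particular features are illustrative assumptions only, not a fixed recipe:

```python
import numpy as np

def preprocess_features(close, volume):
    """Human-designed pre-processing: hand the model stationary, scale-free
    inputs instead of raw prices and volumes, so it can spend its capacity
    on subtler structure rather than rediscovering these transforms from
    noisy data."""
    log_ret = np.diff(np.log(close))                      # differencing
    vol_ratio = volume[1:] / np.maximum(volume[:-1], 1)   # ratio feature
    return np.column_stack([log_ret, vol_ratio])

# tiny demo: 3 bars -> 2 feature rows of [log return, volume ratio]
close = np.array([100.0, 101.0, 102.01])
volume = np.array([1000.0, 2000.0, 1000.0])
X = preprocess_features(close, volume)
```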

With regard to your comment that "10 days is really too short a predictive interval", this is an interesting topic in the context of ML, and of course has significant practical implications. I understand where you are coming from, and let me start by saying that I am also a strong believer in the importance of fundamentally driven trends due to many factors such as macro-economics, government interest rate policies, time delays in bringing new mines & factories on-stream, effects of inventory buildup & drawdown delays, etc.

HOWEVER, while it is definitely true that many trends unfold over time-frames much longer than 10 days (and sometimes much longer than 10 months), the question remains as to the best way to trade them. At one extreme we have a sort of "buy and hold the trend" approach, and if one uses that, then I would agree with your comment, but in many cases it is preferable to take a series of short-term "trading chunk bites", all in the same direction, out of longer-term trends, rather than to sit and ride the pullbacks. What is best in this regard tends to be not only market-specific but also varies over time (typical non-stationarity effects) even within single markets. There is a general tendency to believe that markets have become more mean-reverting in character over time, and if this is indeed true, then the idea of shorter horizons (e.g. 10-day chunks within longer trends) does make sense. I didn't make that statement without some careful consideration & research first. Nevertheless, sure, the timeframe mentioned is largely a matter of personal preference. A related aspect is that the reliability of any form of ML results tends to degrade the further out in time one looks, so this interval is to some extent an effort to find a useful compromise.
I know people who run futures trading funds and believe that anything longer than a 5-day "prediction horizon" with any form of ML is completely unrealistic nowadays.

I'm with Guy -- often we enter these pointless positions because we cannot discern between them and our winners. I've often thought the solution would be to combine disparate signals, and where they agree you'd have a higher confidence the stock will go somewhere. I'd never really considered the inverse approach -- that there'd be some way to identify the efficient stocks and filter them out. If you could predict stocks that won't move, you could make a fortune selling straddles. For this reason, I doubt it is so easy to separate the wheat from the chaff.

Hi Viridian,
-- Yes, combining disparate (in the sense of orthogonal) signals always makes sense.
-- Predicting Moving vs Not moving becomes more difficult the longer the time horizon.
-- "... make a fortune selling straddles" ? No, i don't think so. The market makers control pricing to make sure that you generally can't!
-- Not-so-easy doesn't mean not worth trying. If you do some experimental analysis, you may find (as I have) that if you use a Markov chain approach, then it is easier (i.e. the transition probabilities are higher) to predict "not moving" than to predict trends or directional movement, which is the way most people approach it. But honestly, I'm not intending to try to convince anyone here to whom the idea does not appeal.

It's definitely a very appealing idea. Can you share a before and after example of the results you've achieved by cutting the fat ala Markov chain?

Just had a critical situation in my family & am flying overseas now. I will get back as soon as I can. Tony

Tony, my prayers are with you in your time of crisis. Take care my friend. James.

@Viridian Hawk,

Luc Prieur posted awhile back his attempt at identifying market regimes through Hidden Markov Chains and OneClass SVM in this post: here