The Gaming of Stock Trading Strategies

Usually, we plan on having one portfolio playing a specific automated stock trading strategy. We find the best strategy we can and go from there. However, when looking at multiple trading strategies to be applied at the same time, the nature of the problem changes. You now have to take into account how these strategies will behave together. This is where you might have to game the strategies you have within your overall portfolio objectives and limitations.

For the single-strategy scenario, the solution is simple, and it has a “simple” mathematical representation:

F(t) = F(0)∙(1 + g_bar)^t = F(0) + Σ(H∙ΔP) - Σ(Exp) = F(0) + n∙x_bar

where g_bar is the portfolio's average growth rate, and Σ(H∙ΔP) is the strategy's payoff matrix (the holding matrix H times the price-difference matrix ΔP, summed over all trades). It becomes about rates of return, time, and evidently, the initial stake, whatever it is.

In playing multiple strategies, the equation is the same, except that the payoff matrix is now 3-dimensional: k strategies, by d periods or days, by j stocks, for all the n trades taken over the portfolio's lifespan. Allocation to each strategy could be as simple as F(0)∙1/k, giving each strategy its share of the initial capital. It might help to partially diversify risk, not just over stocks, but over strategies as well. The result is reduced volatility, reduced drawdowns, and reduced betas, but also, most probably, reduced alphas, at the cost of a reduced overall CAGR. As a side effect, you would tend to get a smoother equity curve. And there is a price for that.
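A minimal NumPy sketch of that 3-D bookkeeping; the array shape, sizes, and random payoffs below are purely illustrative and not taken from the original post:

```python
import numpy as np

k, d, j = 10, 252, 50          # strategies, trading days, stocks (illustrative sizes)
F0 = 10_000_000.0              # initial capital, split F(0)/k per strategy

# Hypothetical 3-D payoff matrix: element [s, t, i] = H[s, t, i] * dP[t, i],
# i.e. the profit or loss of strategy s on stock i over period t.
# The equal F(0)/k allocation is assumed to be baked into each strategy's H.
payoff = np.random.normal(loc=50.0, scale=500.0, size=(k, d, j))

per_strategy = payoff.sum(axis=(1, 2))        # Σ(H_k∙ΔP) for each strategy
portfolio_value = F0 + per_strategy.sum()     # F(t) = F(0) + Σ(H∙ΔP), expenses ignored

print(per_strategy.round(0))
print(round(portfolio_value, 0))
```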

All Strategies Are Not Equal

You have a mix of trading strategies. Certainly, they are not all equal. We can order them by outcome and get something like this:

Σ(H_a∙ΔP) > Σ(H_b∙ΔP) > … > Σ(H_spy∙ΔP) > Σ(H_z∙ΔP) > Σ(H_k∙ΔP)

Strategy H_a generates more profit than strategy H_b, and so on down the line. As such, you know that strategy H_a is the most profitable of the group, followed by all the others.

The thing to do is to drop all the other strategies and concentrate on your best performer.

H_a performs better than the average indexer too since it produces more than its benchmark H_spy, a surrogate for the market in general. Strategy H_a might not be the most desirable, but it is still the most profitable.

Why play anything other than H_a, since adding anything else will only reduce the portfolio's potential profit generation? That is the thing. However, there are other dimensions to this.

You might do it if there were other benefits to be had, like more diversification, a smoother equity curve, or spreading risk not only between stocks but across strategies as well. A better risk management kind of thing. Also, a way to scale up and allocate to larger portfolios.

However, there would be an opportunity or convenience cost for this. It is the difference in profit generation: Σ(H_a∙ΔP) – Σ(H_b∙ΔP), and this cost will grow the more you add strategies and the more you add time. The larger the spread between the first two strategies and the others, the higher the cost for this “pseudo-volatility” protection.

It gets worse if Σ(H_a∙ΔP) >> Σ(H_b∙ΔP), meaning that the top trading strategy produces a lot more than the next one in line. It is one thing to mix 10 trading strategies close to the same outcome, it is another to depreciate or downgrade your best performer to what might be a much lower combined CAGR level.

Gaming Strategy CAGR

The top panel in the above chart is a returns table. It gives the future value of $1.00 invested over some time intervals. The higher the rate of return and the more years provided, the higher the overall performance. Common stuff, used for centuries.

If you want to know how much a $10 million investment would generate, choose the cell you want and simply multiply the number by $10 million. A strategy at a 10% CAGR will see its $10M portfolio grow to $174.5M over 30 years. If you take 10 of those, your initial capital will need to be $100M, which will grow to $1.745B.
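A sketch of how such a returns table can be reproduced; the post's chart itself is not shown here, and the CAGR levels and horizons below are illustrative:

```python
# Future value of $1.00 compounded at various CAGRs over various horizons,
# the same arithmetic as the returns table described above.
cagrs = [0.05, 0.10, 0.20, 0.30]        # illustrative CAGR levels
years = [10, 20, 30]                    # illustrative horizons

for g in cagrs:
    row = [round((1 + g) ** t, 1) for t in years]
    print(f"{g:.0%}: {row}")

# The $10M example quoted above: 10% CAGR over 30 years.
print(round(10e6 * 1.10 ** 30))         # ~174.5 million
```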

But, and that is the point, you do not have the $100M. What you have is the $10M stake, which will be divided among the 10 trading strategies. And the outcome will be $174.5M, the same as if you had run only 1 with the total stake. Take 10 strategies similar to H_b having an average 10% CAGR, divide the initial stake, and you get: 10∙[(1/10)∙Σ(H_b∙ΔP)] ≤ Σ(H_b∙ΔP). It most likely should be less since the 10 strategies, even though similar, were ordered by their declining payout. Nonetheless, it is good enough as an approximation.

Strategy Performance

The second panel of the above chart shows how many trading strategies at a given CAGR level would be required to achieve the same performance as the single strategy at the 30% CAGR. In the beginning, it is not that many, but as you put in more time, the impact becomes considerable.

The stock market game is still a compounding game. Playing for a couple of years is not enough. That is not where the money is. It is at the other end of the time spectrum, at the 30 years+ level.
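A sketch of that second-panel calculation, assuming the equivalence is measured as the ratio of compounded final values (my reading of the chart); the CAGR levels are illustrative:

```python
# How many strategies at a lower CAGR are needed to match one at 30%,
# measured as the ratio of their compounded final values.
def equivalents(low_cagr, high_cagr=0.30, years=30):
    return (1 + high_cagr) ** years / (1 + low_cagr) ** years

for t in (5, 10, 20, 30):
    print(t, round(equivalents(0.10, years=t), 1))
# At 30 years, one 30% CAGR strategy is worth about 150 strategies at 10%.
```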

We could mix, say, 29 strategies with an average 10% CAGR with a single one at the 30% level, and that single strategy would still be able to pull in 83.8% of all the generated profits on its own. But even this is distorted: those 29 strategies each needed their own $10M stake. You could put numbers to this for the case where the 29 strategies share a single initial $10M stake:

F(t) = $20M + 29∙[(1/29)∙Σ(H_10∙ΔP)] + 1∙Σ(H_30∙ΔP) – Σ(Exp)

F(t) = $20M + 16.45∙$10M + 2,619∙$10M – Σ(Exp)

which accounts for the initial stake only once.

The 30% CAGR strategy Σ(H_30∙ΔP) would have brought in 99.34% of all the profits of this 30-strategy combination. Should I stress the importance of having at least one such strategy in your portfolio?
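A quick numerical check on those magnitudes (a sketch; the small difference from the quoted 99.34% comes from rounding):

```python
# 29 strategies sharing a $10M stake at 10% CAGR, plus one strategy
# on its own $10M stake at 30% CAGR, both run for 30 years.
stake = 10e6
profit_10 = stake * (1.10 ** 30 - 1)       # Σ(H_10∙ΔP) ≈ 16.45 ∙ $10M
profit_30 = stake * (1.30 ** 30 - 1)       # Σ(H_30∙ΔP) ≈ 2,619 ∙ $10M

final_value = 2 * stake + profit_10 + profit_30    # F(t), expenses ignored
share_30 = profit_30 / (profit_10 + profit_30)

print(round(profit_10 / 1e6, 1), round(profit_30 / 1e6))   # 164.5 and 26190 ($M)
print(round(final_value / 1e6, 1))                          # F(t) ≈ $26,374M
print(f"{share_30:.2%}")                                    # ≈ 99.4%
```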

But here is the real problem: such strategies are hard to come by. Or are they? Evidently, 30% CAGR strategies are not common, but they are not that rare either. Just over the last week, I took a posted trading strategy on Quantopian and transformed it into such a thing. You can see its conclusion here: https://www.quantopian.com/posts/the-wazoo-thing

The strategy's code is in the public domain; just follow the above link where it is referenced. It could provide you with a starting point to design your own or transform it into what you want to see. All I did was change some numbers, some strategy assumptions. No trading logic or trading procedures were changed. Note that I have not yet added protective measures to this strategy, which would tend to reduce volatility and drawdowns and further increase performance. You should do that on your own, if you want, and at your own risk.

Within the Quantopian context, especially its contests, this strategy would not qualify since it is long only and has higher volatility and a higher drawdown than desired based on their stated requirements. However, they should realize that combining low-volatility, low-beta, low-alpha strategies with a strategy like Σ(H_30∙ΔP) would greatly dilute its volatility and beta while raising the entire portfolio's alpha generation.

But, like in many things: to each their own. Nonetheless, an elementary calculation of volatility for the 30 strategies would give: (29∙0.10 + 1∙0.25)/30 = 0.105 for the case where all 30 strategies got a $10M allocation. The average portfolio volatility would go from 0.10 to 0.105, and some would count this as too much added risk... Incidentally, the same thing would apply when considering drawdowns: (29∙0.10 + 1∙0.55)/30 = 0.115. The overall portfolio drawdown would be expected to go from -0.10 to -0.115; again, not a lot of added risk. I think Quantopian might be overlooking the obvious, unless it was their intent or they were already doing this. However, I have not seen anything to that effect on their site, even though that is what they should do: game their allocations, their strategies that is.

15 responses

Gaming trading strategies is not an indulgence, but it does become inevitable as you grow larger. Evidently, you do need to have more than one strategy to start gaming what you have.

I would point out that people can get Mr. Buffett to manage their money simply by buying Berkshire Hathaway stock. How hard can that be? And as a side note, if you look at the second chart presented below, Mr. Buffett's portfolio has the same structure: his 6 largest performers, out of the 90 or so stocks in his portfolio, account for over 65% of all of Berkshire Hathaway's generated profits.

Notwithstanding, you need a vision of where you want to go and what it will look like for when you look back at what you did and how you got there.

Once the journey is done, at the end of the rainbow, if you did not make it, it is done. There is no coming back with: let's do it again, let's just run the program again. All that will be available is to scrap all that was done and start over from scratch with new capital. It is here that you will find so much wasted time and resources just to end up sub-optimal, not even doing better than the market averages which were available to the majority, and almost for free.

The following chart shows a 12-strategy portfolio, each strategy having its own CAGR, ranging from 2% to 24%. They all started with equal weights (1/12). However, with time (20 years), the higher CAGRs take over most of the space, the top 3 alone representing 65% of the total portfolio.

CAGR Distribution. 12 Strategies. 20 Years

If you do not have any trading strategies at the higher CAGR levels, the graph might still look about the same, but the dollar values it represents might be considerably less.

The top performers' impact can be accentuated if you include higher CAGR levels as well as more time as illustrated in the next chart.

CAGR Distribution. 15 Strategies. 30 Years

Here, the top 3 CAGR strategies account for 76.7% of total return. Even worse, the bottom 10 strategies account for less than 8.5% of total portfolio return. Therefore, that is not where one should concentrate their efforts.
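A sketch reproducing those proportions, assuming equal initial weights and evenly spaced CAGRs (2% to 24% for the 12-strategy case, 2% to 30% for the 15-strategy case; the even spacing is my assumption, not stated in the post):

```python
import numpy as np

def share_of_top(cagrs, years, top_n):
    """Share of final portfolio value held by the top_n strategies,
    starting from equal initial weights."""
    finals = np.sort((1 + np.asarray(cagrs)) ** years)[::-1]
    return finals[:top_n].sum() / finals.sum()

twelve = np.linspace(0.02, 0.24, 12)    # 12 strategies, 2% to 24%
fifteen = np.linspace(0.02, 0.30, 15)   # 15 strategies, 2% to 30%

print(f"{share_of_top(twelve, 20, 3):.1%}")          # ~65% after 20 years
print(f"{share_of_top(fifteen, 30, 3):.1%}")         # ~76.7% after 30 years
print(f"{1 - share_of_top(fifteen, 30, 5):.1%}")     # bottom 10: ~8.5%
```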

From all the choices of possible trading strategies that can be part of a multi-strategy portfolio, the lower performers over the long term simply get clobbered, squished by their more powerful siblings.

The only difference you will find in all those trading strategies is some code, where some trading procedures were better adapted to do what was requested of them. And that is: generate the better trading strategies.

Why would you buy trading strategies that aim at and do operate below market averages? The real competition is not there; it is at the higher end of CAGRs, since one strategy at the 30% CAGR is worth 150 strategies at the 10% CAGR level, as illustrated in the previous post. And such a strategy can still be tempered to the level you want.

Hi @Guy,
I appreciate that you are taking the time & effort to make a point, but honestly I'm not quite sure if I am actually getting it. What I do understand from your posts (both above and also previously over the past few years) is the following .... and please set me on the right path if I have any of these wrong.

  • As any fund (e.g. Quantopian) grows in size, it needs to adopt a "portfolio of strategies" approach. OK, that's clear enough and it is what Q is doing.

  • The overall benefits to a fund of diversification into multiple strategies that behave differently are analogous to the benefits to a small investor of diversification into multiple different stocks. OK, that's clear enough.

  • Strategies can be ranked in order of their (historic) outcome and, over a sufficiently long timeframe, the overall performance of the fund will be determined and in fact dominated by the performance of the best strategies (as per Pareto-type chart concept). OK, clear enough.

  • Your statement: "The thing to do is to drop all the other strategies and concentrate on your best performer." is rather contentious. The "best performer" is not known in advance but only with the benefit of hindsight. So I assume you are speaking somewhat rhetorically here. Whatever was best in the past may not even come close to the best going forward, and that is basically an unknowable. Nevertheless, I am reasonably happy to agree that, using historical precedent (or a historical payoff matrix), it makes sense to at least "throw out the worst" anyway.

  • I think I understand that you are making the points that:
    1) Reasonably good strategies are not necessarily sooooo hard to come by, and
    2) It makes sense to try to focus on those, and
    3) Not waste time on strategies that are known to be sub-standard.

OK, so far so good, and sorry if I'm a bit "slow," but somehow I feel like maybe I'm still not quite getting exactly what you are driving at, explicitly with regard to what Q is doing? If you were Advisor to Q (or any other fund), what would you advise them to do or to do differently?

Cheers, best regards, TonyM.

Hi Tony, nice to see you back.

First, you do get it all.

When we do backtests on multiple strategies, we can order the results, determine which were the best performers, and then add other criteria to sort them based on our preferences. Which is OK.

When dealing with a multi-strategy scenario, you know beforehand that whatever you do going forward, the overall distribution of the benefits generated by your trading strategies will resemble the second chart in my last post. It does not predict which strategies they might be, but that the chart will be similar to the one presented.

Over past data, you could not help it. Your programs ran their respective courses from start to finish, and then you tabulated the results and figured out which you liked best and for what reasons. We would all do that, no matter the nature of the trading strategies.

Going forward, the problem is different, as you mentioned, since we do not know which strategy will perform best, and we do not know in which order they will come in.

That is where you need to game those strategies. At any time, you can tally the state of each strategy, order them according to your liking, then use reinforcement to favor your best performers and emphasize their lead for the next round. Every step of the way, you can change the weights of the strategies by allocating more to your best performers and less to the also-rans. You will end up with a chart like the second one, showing your best performers on top and your other performers at the bottom with a lesser CAGR.

It's like having a horse race where you can change your bets every second of the race. At race end, most if not all of the money should end up on the first few crossing the finish line. You will not have known in advance which would be the best, but they will still have ended up in your stable. The more you have to allocate, the more horses (strategies) you will allow to run and the more you will let cross the finish line. What the second chart says is that whatever your trading strategies do, your top performers will carry the day.

Looks like I missed the “s” there “... concentrate on your best performers”. ;)

Here is a different view of the second chart presented in a prior post.

This time instead of having scaled CAGRs from 2 to 30%, the chart shows 14 strategies operating at 10% and 1 at a 30% CAGR.

The rationale for 14 strategies at 10% is simple. Having 14 strategies at 10% is the same as having 14 strategies that average a 10% CAGR. So, there is no loss of generality.

CAGR Distribution. 15 Strategies. 30 Years

The chart does emphasize the contribution of the 30% CAGR strategy to the overall portfolio.

That strategy alone generated 91.5% of total profits for this 15-strategy portfolio. The search for that single strategy is on. It should be relatively easy to find. If not, well, keep on searching; find a couple or more. They are just programs after all, some code that translates what you think a trader should do to win the game.
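A quick check on that figure (a sketch; the share comes out near 91-92% depending on whether it is measured against final values or against profits):

```python
# 14 strategies at 10% CAGR and 1 at 30% CAGR, equal stakes, 30 years.
v10, v30 = 1.10 ** 30, 1.30 ** 30

share_of_value = v30 / (14 * v10 + v30)
share_of_profit = (v30 - 1) / (14 * (v10 - 1) + (v30 - 1))

print(f"{share_of_value:.1%}")    # ~91.5%
print(f"{share_of_profit:.1%}")   # ~91.9%
```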

Hi @Guy, even though you have not stated this explicitly (and perhaps are quite deliberately choosing not to do so explicitly), my interpretation of what you are saying could perhaps (?) be re-phrased as a recommended strategy for running an investment portfolio along the lines of:

a) Start with a diverse basket of strategies.
b) Based on past performance, throw out the known bad ones and keep a "racing stable" of the best.
c) Keep running with the "best-of-the-best" (by whatever metrics you decide to use to determine "best") and, as time goes by, continue to dynamically re-weight the running strategies so that, at any given time, the highest weights are always being given to the currently best performing.

Is what I have written above in fact a reasonable re-statement / paraphrase of your concept, or have I wandered off the rails a bit? If so, how would you suggest "adjusting / correcting" what I have written?

Alternatively, if I have correctly re-stated what you are saying, then presumably this process should be fairly easy to automate as a sort of continuous adaptive optimization (or at least "portfolio re-tuning" process if you don't like the word optimization). What do you see as being the practical problems in doing that?

Cheers, TonyM.

@Tony, you have it right the first time. And the task is not that hard.

The answer has already been provided. You sort the strategies by performance and whatever other criteria you want. This will give you an ordered set of strategies to your liking, where you assign more weight to the best-behaving strategies or the ones you like best. The CAGR formula for strategy k is simple: CAGR_k = [(F(0) + Σ(H_k∙ΔP))/F(0)]^(1/t) − 1.

In Quantopian, the payoff matrix is a 3-dimensional array (k-strategies, d-periods, j stocks) and you sort on the k-strategy axis.

A very basic thing to do is to keep strategies that behave better than a benchmark like SPY. Strategies are enabled in your portfolio of strategies only if they outperform SPY. The result, evidently, should be that the whole, the average, outperforms SPY, thereby generating your alpha with a higher CAGR than SPY. A strategy can outperform SPY for years at a time. You do not have to guess that much.
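A minimal sketch of that gating rule, assuming you track each strategy's trailing CAGR alongside SPY's; the function, names, and numbers below are illustrative, not part of the Quantopian API:

```python
def enabled_strategies(strategy_cagrs, spy_cagr):
    """Keep only strategies whose trailing CAGR beats the SPY benchmark.

    strategy_cagrs: dict of strategy name -> trailing CAGR
    spy_cagr: trailing CAGR of SPY over the same window
    """
    return {name: c for name, c in strategy_cagrs.items() if c > spy_cagr}

# Illustrative numbers only.
trailing = {"strat_a": 0.14, "strat_b": 0.09, "strat_c": 0.23}
print(enabled_strategies(trailing, spy_cagr=0.10))   # strat_a and strat_c survive
```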

As for controlling the thing, it can be done from inside or outside the program. I have done that before. See the program series DEVX ending with DEVX8 for instance where those principles are applied. It's a 21-year simulation where all trades are the result of random-like functions (entries and exits) and where the strategy is also controllable. See the following as an example:
https://alphapowertrading.com/index.php/12-research-papers/6-trading-a-buy-hold-strategy

You know what your strategies will look like at the finish line (the previously posted charts). The task is to guide your strategies to get there. It is not as if every one of them is equal; they are not. You let the few stars shine and enhance their brightness using reinforcement, by allocating more to the best performers.

Also, you can re-weight according to your preferences within the front-runners. What I found is that I always have behavioral preferences, as in I might like the way one strategy trades compared to another even if it has lesser expectations. And that can also be expressed by favoring that type of strategy.

Strategies can be designed using fundamental data, factors, indicators, and whatever combinations thereof. They can also use directed functions as in: Σ(θ(t)∙H_k∙ΔP) where θ(t) is something you control just as you can control your trading unit function on individual stocks.

Hi Guy,

I don't mean any disrespect, but I don't think you'll find anyone here arguing against the powerful effect of compound interest. I'm curious however if you've given any thought to the difference between ranking strategies (or individual assets for that matter) based on past returns vs ranking strategies based on future estimated returns. Just because a strategy has performed consistently well in the past doesn't necessarily mean that it will continue to perform well in the future. With the benefit of hindsight, I can quite easily create a strategy with a CAGR in the hundreds, even thousands. That doesn't mean that it will continue to perform at that CAGR into the future however. This, to me anyway, is the hard part, and what we're all trying to achieve (just PM the formula directly to me if you have it readily available).

A high estimated future CAGR is also pretty useless unless you also have the financial means (and stomach) to handle its associated volatility and drawdown. Most people wouldn't be able to stomach the 55% max drawdown of your strategy, and most investors would pull their money at that time (if not sooner), even if the 'right' thing to do might be to 'double down' on the strategy. However, how does one distinguish a strategy that's 'just' pushing its max drawdown from a strategy that's stopped working?

Also, even if we assume that your strategy will continue to perform at 30% CAGR (I'm not saying it won't; I quite like the original strategy as well), since it also has an associated annual volatility of about 25%, it could quite easily be beaten by a strategy with estimated future 10% annual return, 5% annual volatility, and 5% max drawdown. 'All' you need to do is apply leverage. Applying leverage of just 2x to your strategy would be pretty disastrous however, and you'd be receiving a margin call from your broker way before the expected 110% max drawdown.

@Guy, same question as Joakim. Any recommendations on how to keep volatility (and MDD) within Quantopian requirements?

@Leo, yes. First, add the protection as suggested. This will lower volatility and drawdowns, mostly drawdowns. Then increase the number of stocks to be handled. This will have the effect of reducing drawdowns and volatility since any price shock from a few stocks will be weighted down. And finally, mix it with a small group of other Quantopian low-volatility, low-drawdown strategies in order to reduce volatility and drawdowns to what you will find to be an acceptable level. There is a caveat in that process too.

I have not looked at the logic behind the strategy's trading decisions, and I have not changed it either, evidently. My concern was not there. What I wanted to see was the trail left by those decisions (the trades). I look at the big picture, the strategy's payoff matrix as a big block of data that I can coerce to over/under-weight its behavior. In the last state I left it, meaning before giving it some protection, what the strategy did was operate on “some excuse” to participate in the market. I did not argue or try to find out if the trading logic was good or bad, predictive or not, but I did analyze its tearsheets in detail to see the strategy's behavior, its averaged way of doing things in light of its portfolio payoff matrix equation.

The series of modifications I brought to that strategy were not of the kind: let me try this and see what it does. It was: if I increase or decrease that particular number, it will increase the number of trades or the average net profit per trade and, as a direct consequence, increase overall profits. The tearsheets presented were simply to show that it did do what was intended.

The strategy buys stocks it “defined” as rising in a generally rising market. It is not good at all in declining markets, which is the reason why it needs protective measures. However, whatever its definition of rising prices, it does not know what it is going to buy, when, or how much. Stocks are selected to be in the portfolio on the basis that they made the list or not. And that decision is relegated to the 12th decimal! Could I say that the strategy might still be operating on market noise? But that might not matter much. Look at the trail the strategy leaves behind, look at its payoff matrix. There it wins. Not just for a few months but for 15 years.

Nonetheless, the strategy is slowing down. Part of it is due to alpha decay, but another part, and a big part, is due to the trading strategy's decision process where fewer and fewer candidates make the list. That too needs to be addressed, even prior to giving it protection.

Regardless, the whole point is not there. The point is: whatever the setup of your multi-strategy portfolio, the sum of their payoff matrices will produce something similar to the charts presented, whether we like it or not. You will always be able to sort the outcomes of a group of trading strategies. Period. And the total outcome will look like those charts. Therefore, maybe we should plan for and help our strongest strategies occupy most of that space. They are easy to identify; they are the front-runners of that group of strategies in this race to the finish line.

Hopefully, this also answered @Joakim's questions.

Yeah, it's easy, just pick the stocks that are going to perform best this year. Buy low, sell high, it's what I've been saying all along.

(In all seriousness: you're basically describing a momentum strategy here, which works, sometimes, and not at other times)

General Stuff to Consider

My suggestion is: pick something, whatever it is, and deal with it. The future will not bring you back the past. However, your trading procedures, if not your trading policies, could make it so that you win nonetheless. And that is where one should concentrate. Whatever trading strategy you design, you will not be able to be wrong all the time. Try it, you will see. And if you do manage that feat with a large number of trades, I'm a potential buyer of that strategy.

Your program will not adapt to what is coming if it is not programmed to do so. That sounded relatively simple.

Out of the 2,000 something stocks in the QTU, you are asked to pick some 500+ stocks based on some fundamental data, factors, indicators, or whatever data series you can find that has merit in your eyes or somebody else's.

So, every day, or whichever day you want, but at least once in a while, you need to pick one combination of stocks from n!/(k!∙(n-k)!) possible choices. Which means you pick one such group of stocks out of some 5.6×10^486+ possible combinations. That is not a small number!
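For scale, the count itself is straightforward to reproduce (a sketch; the exact exponent depends on the n and k you pick):

```python
import math

n, k = 2000, 500                      # universe size and picks, as in the example above
combos = math.comb(n, k)              # n! / (k! * (n - k)!)
print(len(str(combos)))               # number of digits: 487, i.e. on the order of 10^486
```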

I say all the computers on this planet could not even make a dent in such a search problem for the best combination, even if you gave those machines 10 billion years to do so. Let alone do a Monte Carlo on the thing. And this applies whether you look at past or future data.

You can imagine that the problem is even harder to solve, if not currently impossible when dealing with future data. At least, over past data, you had only one recorded timeline available, only one price matrix P. Going forward, you enter a probabilistic quasi-infinite multiverse quagmire where more than chaos dominates.

It is as if some do not realize how singular and unique their trading strategies really are. It might have been interesting over some past market data, but that was just an appetizer to give you a taste of what your trading strategy could do and how it could behave. The real test, the live test is still out there to be had. That one will take time to show its merits. And, I think it will take more than just a few months.

Whatever your program selects, it is its own tiny, really tiny speck in this humongous universe of possible variations on its huge universe of possible themes. A way of saying that your program might be so unique that there is nothing identical out there.

Therefore, one should not worry about someone reverse engineering their code. If you make your code public in some way, like putting it on someone else's machine for all to see, clearly your protection against others' quests to reverse engineer it just disappeared.

Can you over-fit if the trading strategy you designed is so unique that it will do what it did for the test you ran, but will have to face a totally new price matrix P going forward?

You built this program, it is set in stone with hard-coded trading procedures. You know that future price series will be different, but seem to not accept that your program will stay the same. And, then you want it to give you the same results as your backtest on new data going forward.

Doing a backtest is only to give you an indication that what you have in mind might have worked in the past and could, therefore, be somehow applicable going forward. If your trading procedures are “technically” sound, they should apply going forward whatever will be thrown at them. That is where your trading strategy needs to specialize in order to produce the best portfolio payoff matrix. Will it be the best? Most probably not. Not even close. It is not the object of the game. Will it be good enough to outperform your peers, or what is out there? This will depend entirely on you and what you put in your code. I will add: definitely you can outperform.

You want to know if your strategy has value? Then do the homework, make the long-term tests. Give it adverse conditions, put in all the frictional costs, make it endure. See and analyze what it does and how it behaves in all types of markets, especially over extended periods of time. Enhance the strengths, filter the trading decisions, even override them, and eliminate or at least attenuate the weaknesses.

The price matrix P you are seeing is the same for everyone. It is not something you control. The only way you can be different will be the strategy you implement in your code. The outcome will be your portfolio's payoff matrix: Σ(H∙ΔP). You need a long-term vision of what your trading strategy is going to do. Going from one trade to the next, period to period, is too short-sighted. Getting to the finish line just to say: “... oops”, most certainly is not the way to go. Plan for that finish line, it is way farther than 6 months. Backtest not just over a few years like I often see on Quantopian. Make sure that your trading strategy can handle it, meaning by that, that it can survive what will be thrown at it for years and years.

Hi @Guy, up to 3-4 days ago, I was carefully following your logic, checked back several times to see if I was indeed on track with you, and found that I was. LeoM & Joakim came in at that point with their comments, basically saying yes, we are also on-board with these ideas and no-one here is disagreeing. OK, so far so good, and I thought we were moving forward.

My hope was to move the discussion on from here and attempt to tease out of this some firm practical suggestions as to how to implement these ideas which, in concept, we all agree upon. Unfortunately we seem to have slipped back into the philosophical which is not specific enough to be helpful. I will try to steer us, if I can, back to more practical aspects.

For my part, I have NOT personally conducted any multi-portfolio studies of my own. However I do have some experience that I think is relevant here and which may perhaps (I hope) serve as a simple analogue that will help elicit practical comments. Some years back I was looking at some characteristics of very simple moving average systems and trying to address the problem of what is the "best" value of moving average to use in a practical way. Doing some sort of optimization is NOT the answer. It shows what worked best in the past, but is of little value going forward, except that it may perhaps suggest values that are NOT useful and are best thrown away.

This, I think, may serve as a very simple analogue for my comment of 3 days ago that was applied to your (@Guy's) original thread regarding multi-portfolios, in which I paraphrased your/his concepts as:

a) Start with a diverse basket of strategies.
b) Based on past performance, throw out the known bad ones and keep a "racing stable" of the best.
c) Keep running with the "best-of-the-best" ... and ... continue to dynamically re-weight the running strategies so that, at any given time, the highest weights are always being given to the currently best performing.

And then my question to you (and I'm still hoping for an answer) "....... how would you suggest adjusting / correcting" ?

In an attempt to get at some answers, I will continue with my simple case of a single MA system as a plausible analogue. What I did was to look at various possible suggestions, OTHER than anything based on optimization using past history. Some suggestions that came up, which I tried, were:

  • From Perry Kaufman: Pick several different MA length (parameter in this case rather than system) values and run these together, either with equal or varying amounts of actual money assigned to each, or else with all but one being run as "shadow" systems (with no actual money) and switching into the "best" one as this changes over time. In fact this also corresponds to one particular case of dynamically weighted systems.

  • Single system with adaptive parameter value for MA. In fact there are quite a few well-known systems that try to do this. (For example see Kaufman "New Trading Systems & Methods", Wiley, Ch. 17 entitled "Adaptive Techniques"), or alternatively Kalman Filters. A sketch of one such adaptive MA follows below.
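As an illustration of the adaptive-parameter idea, here is a minimal sketch of Kaufman's Adaptive Moving Average (KAMA), one of the techniques referenced above; the window lengths, seeding, and example series are illustrative choices:

```python
import numpy as np

def kama(prices, er_window=10, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (sketch).

    The smoothing constant adapts between a fast and a slow EMA
    depending on the efficiency ratio of recent price movement.
    """
    prices = np.asarray(prices, dtype=float)
    fast_sc = 2.0 / (fast + 1.0)
    slow_sc = 2.0 / (slow + 1.0)
    out = np.full_like(prices, np.nan)
    out[er_window] = prices[er_window]          # seed with the first usable price
    for t in range(er_window + 1, len(prices)):
        change = abs(prices[t] - prices[t - er_window])
        volatility = np.sum(np.abs(np.diff(prices[t - er_window:t + 1])))
        er = change / volatility if volatility > 0 else 0.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out[t] = out[t - 1] + sc * (prices[t] - out[t - 1])
    return out

# Example: smooth a noisy random-walk price series.
prices = 100 + np.cumsum(np.random.normal(0, 1, 300))
smoothed = kama(prices)
```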

The problem in ALL of these that I attempted, whether (sub-)system switching or continuous parameter adaptation, is that the decision process is either based on past performance or on some forecast (which itself is also calculated based on past performance). Either way, if we make the response and switching or re-calculation too slow, then it is like "driving by looking in the rear-view mirror" with too much LAG; alternatively, if we make the response too fast, then it is like "driving by looking in the rear-view mirror" with too much OVERSHOOT.

In conclusion, I found it difficult to come up with good ROBUST solutions even for one single (composite) system, which is what I am suggesting as a simplified analogue for the kind of decision process that is required in the multi-portfolio approach that Guy is discussing.

If you do not believe that my example might be a reasonable simple analogue for what is required with a portfolio approach, then I invite you to suggest a better analogue.

However, if you think my comments above are at least partly applicable, then I invite you to suggest HOW IN PRACTICE we might extrapolate to get something practical out of this with regard to @Guy's conceptual development regarding portfolios. Please can we keep this on a practical, experience-based level, rather than on generalized philosophical aspects that no-one is arguing about.

Cheers, best regards, TonyM.

@Tony, yes, I agree with everything you said. And, all of it should end up in practical systems. Otherwise, what's the use?

What I have stated many times in these Q forum threads is that there are a huge number of trading strategies that can be worthwhile. It is not about looking for the ultimate strategy but about finding something that is acceptable, or that we can transform into something acceptable, and that can outperform the averages. Not under someone else's conditions but according to our own preferences and tolerance levels. Then, we need to game what we have found since these strategies will not all be of equal value.

For example, a few days ago, the strategy “Trend Follow Algo” made the top of the newest threads for whatever reason; it does not matter. It was described as a “might be useful” kind of thing by the initial poster. Nonetheless, I had it pass my preliminary acid tests to see if it could grab my interest. You had some critical analysis on it, including totally destroying it. But such things, for me, did not matter. I was looking for something that had “some excuse” to execute a trade and that I could transform to be more productive. There it was at the top of the list. I have not looked at the logic of the trade-triggering mechanics yet, but I do suspect that it will be related to a moving average of some kind, and therefore fits the moving average theme you referenced. And presently, I am gaming the strategy to see where it can go. Notice that it is not the only trading strategy I've enhanced on Quantopian using the same methods.

Yes, if you optimize your past moving average crossover system, you might find that the 37.328914756102 SMA is the best. And yes, for obvious reasons, it will not work that well going forward. But no matter, going forward you still have a moving average crossover system. It most probably will not be optimal, but nonetheless, you could game the strategy to be more productive. And simulations should help you do that. Evidently, you will find that you should be less sensitive than responding to a 12-decimal decision thing, and you will probably change the structure of the program itself to be more resilient to price change.

Now, to answer the real question. How do you go about monitoring your horse race?

By keeping a long memory. (philosophical? Maybe, but see what follows.)

What I see on Quantopian are strategies that never look back more than a year or so in their past and often shorter. Nothing beyond those rolling windows is remembered and therefore none of it is there to influence what is coming next. Those strategies having no memory might have no vision of their future either. All they have is this vague short-term memory of things past (LAG).

The solution is in the payoff matrix equation itself:

$$ F(t) = F_0 \cdot (1 + \bar g)^t = F_0 + \sum (H \cdot \Delta P) - \sum (Exp) = F_0 + n \cdot \bar x $$

You sum the portfolio's 3-dimensional payoff matrix by strategy (axis=0), divide by the total sum of the payoff matrix, and sort.

You get the sorted strategy weights, to which you apply a decaying reinforcement function \( \Delta w_k \) that will favor the more productive and gradually reduce the weights of the lesser ones to minimal levels or oblivion. This will set a merit classification for your trading strategies. Note that the payoff matrix is a long-term memory structure; not only that, it remembers every single trade executed in every strategy in the portfolio.

The nature of your trading strategies is secondary. The procedure will sort them out by performance, or other metrics if you want, and provide reinforcement feedback to the front-runners while neglecting the also-rans, making your portfolio's strategy weights appear, with time, in order of portfolio impact as depicted in the presented charts.

Now to simplify the work, the payoff matrix equation itself again provides the answer.

The 3-dimensional payoff matrix could be of considerable size: 30 strategies \(\times\) 2,000 stocks \(\times\) 5,040 trading days (20 years). This is over 300 million data entries. Not just for the payoff matrix, but for the holding matrix, the price and price-difference matrices, and any other that will be needed by the strategy itself for decision making. You would spend all your time appending rows and columns to this set of ever-growing mathematical contraptions. As if saying that this is more trouble than you would like.

Note that the portfolio payoff matrix equation does bear equal signs. The outcome of \(\sum (H \cdot \Delta P) \) is equivalent to \( n \cdot \bar x \). So, sort the strategies on their respective \( n \cdot \bar x \) numbers and divide by their total to get your weights.

This takes the 300-million-entry payoff matrix and reduces it all to a 30-element vector which you can now carry around with ease. This is more than just data compression. It is why we need to keep track of these numbers within the program and let them guide the ensemble of strategies to their destination based on their merits. Let them prove that they merit your top accolades, let them perform or else.
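A minimal sketch of that reduced bookkeeping, assuming each strategy is summarized by its number of trades n and average net profit per trade x̄; the decay factor and all names and numbers are illustrative, not the DEVX code:

```python
import numpy as np

def reinforced_weights(n_trades, avg_net_profit, decay=0.85):
    """Turn each strategy's n∙x̄ summary into portfolio weights,
    then tilt toward the front-runners with a decaying reinforcement."""
    n_trades = np.asarray(n_trades, dtype=float)
    avg_net_profit = np.asarray(avg_net_profit, dtype=float)

    payoff = n_trades * avg_net_profit            # n∙x̄ per strategy, stands in for Σ(H_k∙ΔP)
    base = payoff / payoff.sum()                  # raw performance weights

    order = np.argsort(base)[::-1]                # rank strategies, best first
    boost = decay ** np.arange(len(base))         # decaying reinforcement Δw_k by rank
    tilted = base.copy()
    tilted[order] *= boost                        # best performers keep most of their weight
    return tilted / tilted.sum()                  # renormalize to sum to 1

# Illustrative numbers for a 5-strategy portfolio.
print(reinforced_weights([900, 1200, 700, 1500, 400],
                         [55.0, 40.0, 25.0, 80.0, 10.0]).round(3))
```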

It is why I like the round_trips=True option in the tearsheets since it is the place to get the last two numbers that synthesize all the trading activity of the payoff matrix.

It's still a simple momentum approach to portfolio management though. You're assuming the best-performing strategies (highest momentum) will continue to perform best in the (near-)future.

We all know it isn't that simple, right? Else we could just generate 1000 random strategies, select the best performing ones, and sit back and relax.

@Ivory, yes it is a momentum approach to portfolio management. You always want the best in your portfolio and have a long-term vision of the end results.

A small example for the Quantopian world. You have KO and PEP. You can go long KO and short PEP in equal amounts in a mean-reversion strategy to make it market-neutral (pairs trading or whatever). You make it a short-term trading thing. Sometimes it works and sometimes it does not, mostly due to the time slicing.

However, a momentum trader observes from the long-term data that PEP's CAGR is higher than KO's. It has been so since late 1974 (some 44 years). And the reversion-to-the-mean guy is still waiting for the spread to go down. Meanwhile, the player with a long-term vision has been riding PEP all along while ignoring KO. Unless KO, in his ensemble of stocks, still ranks higher than the lowest-performing stock in his 200-stock or larger portfolio. And even in such a case, KO would not be shorted; it would also be a long position.

You could go long PEP and short KO for a positive momentum spread. All the KO short position would do is eat up PEP's CAGR, force paying interest on its margin, and pay out KO's dividends. You would see a lot of PEP's CAGR simply disappear. On the other hand, what you got out of it is lower volatility and lower drawdowns, but most importantly, much less money. Sorry, that is a bad word; make it a return-deprived portfolio.

As silly as it may seem, taking a thousand different trading strategies at random and sorting them out by their respective payoff matrices might effectively be more productive than you might think. All you need is that those strategies have basic characteristics which are more related to their environment than anything else. They should all trade over extended periods of time (with 10, 15, 20-year backtests), and they all should make a lot of trades during that period so that their payoff matrices demonstrate something statistically significant. And you could take the best few, something like the best 10 or more, depending on the initial capital you want to allocate to each strategy in your portfolio.

Surprisingly, that is what we all do using the Quantopian platform.