Common Factor Risk Snapshot

I put together a notebook that you can use to generate a quick snapshot of the performance of common risk factors over the past year. If you're interested in getting a feel for how different factors have behaved in recent memory, have a look. This is not directly helpful for creating an algorithm, but it is always good to have a sense of how the market is behaving when you are developing and testing different hypotheses.

Notebook previews are currently unavailable.


Very cool, thanks Max!

I'm curious whether this notebook could somehow be combined with the notebook in the Risk On blog post? In essence, to identify which 'market regimes', in terms of the Q Risk Factors, are 'in play' during certain market periods, and how a strategy may prefer a high or a low [insert Q Risk Factor] market regime?

That, and also perhaps, as @James Villa pointed out, testing whether a strategy prefers a bull or a bear market, using the S&P 500's 200-day moving average (MA200) to determine which we're in (or a 'sideways market', if there's a good way of estimating one)?

Let me know if I'm 'overthinking' this and on the wrong track.
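For what it's worth, the MA200 bull/bear classification is straightforward to sketch. Here is a minimal version with hypothetical simulated prices; in a real notebook the closes for an S&P 500 proxy would come from the data API instead:

```python
import numpy as np
import pandas as pd

# Hypothetical daily closes (random walk as a stand-in for SPY).
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-02", periods=600, freq="B")
prices = pd.Series(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, len(idx)))), index=idx
)

ma200 = prices.rolling(200).mean()
# Bull when price is above its 200-day MA, bear when below;
# undefined until we have 200 days of history.
regime = pd.Series(np.where(prices > ma200, "bull", "bear"), index=idx)
regime[ma200.isna()] = None
```

A 'sideways' label could be added on top of this, e.g. when the price stays within a small band around the MA200, but that threshold would be a judgment call.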

Joakim, I think that's definitely on the right track. I love the idea of a regime-aware algo, especially using that knowledge to toggle different factors. I imagine that you could follow a similar procedure with risk factor returns that was followed with VIX.

I don't see why you couldn't try to test sensitivity to bull vs. bear markets and either have an algo that switches what it does or have two separate algos for each. For a "sideways" market, maybe you could use the Short-term Reversal factor? I'm not sure, but it's an interesting hypothesis.

Hi Maxwell -

It would be interesting to see some analysis showing why Quantopian imposes exposure constraints on the risk factors. I asked the question a while back, and there were various responses provided. I kinda got the sense that Quantopian hadn't developed the risk factors in any rigorous way, but was adopting some kind of industry-standard practice and/or applying them based on a customer requirement.

Regarding the industry sector factors, one could imagine better ways of approaching the problem of diversification. For example, I recall someone suggesting something like clustering to define groups of stocks, versus the standard industry sectors.

The general idea behind style factors makes sense so long as only factors that pass certain tests are included. If style factors that are just noise are included, then you are imposing an additional source of unnecessary turnover on algos. And if the style factors are actually good ones, then you are throwing away returns (although I guess the argument is that those returns are not predicted to be profitable, because you can't charge enough for the return streams).

Any thoughts?

We can only define alpha relative to the common factor risk that you are trying to avoid. Defining the risk model allows us to say "alpha is what is left over after we remove these influences". Every style of trading is a risk factor to someone. Some people only trade momentum, some people only trade value stocks. Including these as factors in our risk model just indicates that we don't want those, and that we do want returns that are unexplained by any of our factors. That is not to say that other styles of trading are wrong, or invalid. It is totally reasonable to make a confident bet on market performance or momentum. It is to say that what Quantopian is looking for does not include those kinds of risk exposure.

Hi Grant, Maxwell,

The way I view the style common risk factors specifically is that they are as Q defines and measures them, whether Q is adopting some kind of industry-standard practice and/or applying them based on a customer requirement. It is a risk-mitigation constraint defined uniquely by Q. For example, for short-term reversal, Q uses the more or less industry-standard -RSI(15). There are many short-term reversal definitions and measures out there, for example the well-studied Connors RSI(2). My point being, it is all relative to how one defines and measures things; which one is better, who knows? We are at the mercy of how Q designed their model; however, we are free to make suggestions for improvement or even argue the model's veracity.
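To make the definitional point concrete, here is one common RSI variant (Cutler's simple-moving-average form; Wilder's original uses exponential smoothing, and Q's exact -RSI(15) implementation may differ) and a reversal score built from it:

```python
import numpy as np
import pandas as pd

def rsi(prices: pd.Series, n: int = 15) -> pd.Series:
    """Cutler-style RSI: simple moving averages of gains and losses.
    Wilder's original smooths exponentially, so values can differ."""
    delta = prices.diff()
    avg_gain = delta.clip(lower=0).rolling(n).mean()
    avg_loss = (-delta.clip(upper=0)).rolling(n).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# A short-term reversal score in the spirit of -RSI(15): high when a
# stock is oversold (expected to bounce), low when overbought.
prices = pd.Series([100, 101, 102, 103, 104, 105, 106, 107, 108, 109,
                    110, 111, 112, 113, 114, 115, 116, 117])
reversal_score = -rsi(prices, n=15)
```

Swapping `n=15` for `n=2` gives a Connors-style RSI(2), which illustrates how much of the "factor" is really a modelling choice.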

The chart below (Cumulative Sector Factor Returns) is based on @Maxwell's notebook above (nice job btw, and thanks).

The chart is a look at the bigger picture, a change in perspective. If we want to design trading strategies that can last, maybe we should look at longer trading intervals than just one year.

The chart uses the same code as @Maxwell's 1-year chart. I only asked it to consider the last 10 years:

last_year = dt.datetime.today() - dt.timedelta(weeks=52 * 10)

From the above chart, we can see a definite underlying upward trend in these sector return factors. They stayed in about the same relative order throughout. In fact, the top 5 remained the top 5 from 2012 onward (over 6 years).

You can have factor variations on a 1-year rolling window (as illustrated in @Maxwell's chart), but they do not offer much in predictability since from one year to the next that 1-year window will look quite different. The same would apply if you shorten the rolling window to say 9, 6, or 3 months.
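The instability of 1-year windows is easy to reproduce: recompute a trailing 252-day cumulative return each day and compare the sector ranking a year apart. The sketch below uses simulated sector returns with illustrative drifts (not Q data), standing in for the notebook's factor returns:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("2009-01-02", periods=2520, freq="B")  # ~10 years
# Hypothetical daily returns for 11 sectors, each with its own small drift.
drifts = np.linspace(0.0005, -0.0001, 11)
rets = pd.DataFrame(rng.normal(drifts, 0.01, (len(idx), 11)), index=idx,
                    columns=[f"sector_{i}" for i in range(11)])

# Trailing 1-year cumulative return, recomputed every day.
window = 252
rolling_cum = (1 + rets).rolling(window).apply(np.prod, raw=True) - 1

# Ranking by the latest 1-year window vs the window one year earlier
# (these reshuffle), and by the full-sample cumulative return (far
# more stable, since the drifts dominate over long horizons).
rank_now = rolling_cum.iloc[-1].rank()
rank_prior = rolling_cum.iloc[-1 - window].rank()
full_sample_rank = ((1 + rets).prod() - 1).rank()
```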

It is more like as if looking at the trees from too close when you should zoom out to see more of the forest.

However, if you continuously extended that first 1-year window, leaders would take hold and ride their upward trends, maintaining their rankings for long time intervals. Therefore, if you had to make any predictions, they should have been in the same direction as those lines, and not, every other day, a prediction of a reversal to the mean which did not happen.

If you compare your trading strategy's mean-reversal style factor to something that did not mean-reverse, does it not make your style factor almost irrelevant? Do you want to show that over your investment period mean-reversal worked, even if it didn't, or are you in it for the money? That too is a choice someone has to make.

The best sector performance levels, evidently, were on the top few lines. The others should be considered also-rans, right down to the bottom line. If someone had to make a choice, they should have followed the leaders. That is where the best returns were.

This could have been done by periodically monitoring the sector leaders for the highest performances. And within the few leading sectors, select the best-performing stocks. Easy to detect: they were outperforming the averages. And if your selected stocks outperform the averages, collectively they will also outperform the averages. And bang, "alphatos gratos".

Questions and Observations

Where is the sector-rotation thing in this? It would appear as if since 2012 the market forgot a whole group of sector-rotationists. From 2012, there seems to be little to be gained in the sector-rotation business.

All sectors are not equal. Right, and they do not oscillate and mean-reverse around the zero return line all the time either? In fact, they tended to do as if most had aspirations for a better world (read trend upward).

What! These things did not mean-reverse during that period. You must be kidding. These things mean-reverse all the time. There is not much evidence of that in the above chart. But, you should make your own conclusions. I'm just providing the chart.

So, I am saying: one should have followed the best-performing sectors, since that is where the money was flowing. Yes, and as in many things, follow the money. It's right until it's wrong. And in this case, it has been right for at least the past 6 years. You are still in this game for the money.

Note the above chart reflects what we see in the market. If you wanted to use market indexes or sector ETFs instead, the chart would have about the same look and design.

Can we deduce all that from a chart? Yes.

For anyone doing the 10-year test on this notebook. Notice that the only style factor that might be of value is “value” which should not come as a surprise. The best companies having the best performance should somehow show that they should be valued higher than the others and that there would be this correlation. Long-term value, long-term higher price.

However, we cannot rely on this since there appears to be a major flaw in the data. See the 10-year “Cumulative Style Factor Returns” chart which displays a huge jump for the “value” factor in 2013. And if it is a comparative chart, then it distorts everything in the cumulative style factors.

Hi Maxwell -

I was hoping for a more fleshed-out discussion of why Q landed on your set of common risk factors and what benefits they provide.

@Guy,

No offence, but in hindsight it's easy to say that Value should do well; I'm not sure you 'knew' this 10 years ago.

'Value' stocks may have had a decent 'overperformance' run in recent years, but they don't always outperform. They very much underperformed in the mid-to-late '90s, when high-flying Dotcom 'growth' stocks were all the rage, oftentimes without even much revenue.

In other words, just because they have done well in recent years, even for a long stretch, doesn’t mean Value will do well going forward. One could even argue that it’s time for a different market regime, so under-exposure to Value might do better (I’m not saying it will, just that it’s possible).

"The best companies having the best performance should somehow show that they should be valued higher than the others and that there would be this correlation. Long-term value, long-term higher price."

You're kind of contradicting yourself with this statement, at least in terms of Q's definition of Value. Companies with the best performance prospects/outlook are priced higher compared to their book value, i.e. LOW Value.

@Joakim, every day for the last 10 years you could have run @Maxwell's notebook, or done the equivalent by other means, like tracking comparative sector ETFs on Yahoo. What you would have seen is these lines gradually evolving, morphing into the presented chart, each day's chart looking much like the day before's.

Every day for the past six years, that is over 1,500 trading days, those ETF sectors would have stayed in the same relative order, each with the same relative uptrend. They would have shown that they were not all equal, that they did not reverse to the mean all the time. Day by day, every day, 1,500 days.

What does it take so that at one point you might say: for the moment, for as long as it can last, the order of those ETFs by performance level is such and such? How many days do you need before declaring that there might be a trend in there? Or, that the best performers are such and such? Isn't that what the chart is supposed to give you?

Betting that, probably, tomorrow will be close to the same as today does not seem like a big hindsight thing to me. More like an expectation, and of the obvious kind at that.

Hi @Max,
Thanks for this good thread. I agree with @Joakim's & @James' comments. I certainly like the concept of "regime aware" algos and I have spent lots of time trying to PROMPTLY answer the question "what regime is the market in NOW?" I definitely think this deserves more attention, and for some years I have regarded this as one of the most important questions to address in trading system design.

@Guy, yes, sure, it is good to look at all these issues on a range of different timescales, commensurate with the time-frame of your trading system(s). Interesting comment on sector-rotation effects, or what you perceive to be the relative lack thereof. Thanks for the long-term plot. However, this is a cumulative returns plot, and actually there is quite a bit of crossing back and forth of the lines. Or maybe one could argue that we have not really seen many different trading regimes in the last 10 years.

Picking up on @James' point that there are multiple different ways to define even fairly similar risk factors (e.g. RSI(15) or Connors RSI(2) and lots of others) and that "which one is better, who knows? We are at the mercy of how Q designed their model", I would like to request a clear specification from Q regarding exactly how each factor is currently calculated (it's probably there somewhere, I just can't find it right now). I would also like to request multiple (i.e. at least 2) different versions / commonly accepted calculation methods for each factor.

Hi Tony,

All risk factors' specs can be found in Q's whitepaper on risks here
Hope this helps.

@Tony, how do you want to define a market regime?

The market in general has been going up for the past 9+ years. Does that count as saying the regime in recent history has been up, for now? Or are you looking to explain short-term price fluctuations? If so, what length of time do you want to consider in defining the boundaries of these regime changes? How do you classify them in code? More importantly, how do you anticipate them in order to play them?

Without looking at the games we play over the long term, how could we plan for what is coming our way? For sure, what is not coded in our trading scripts is not executed.

Do these series cross the zero-return line? Yes, daily fluctuations do it all the time. The chart below is also from @Maxwell's notebook; I only added horizontal lines, just cosmetic stuff. Over the short term, those return variations appear hardly predictable. From such a chart we would be hard-pressed to declare what the underlying long-term trend is. Yet, the general expectation should still be up.

@Guy,

Ok fair enough, I agree with about 50% of that (slightly different from your earlier explanation though as to why there might be value in Value).

Tomorrow’s performance may indeed be similar to today’s performance (momentum). Unless perhaps today’s performance was unusually good (or bad), significantly better (or worse) than the historical mean trend, and without any noticeable increasing trend in variance. Then a ‘mean-reversal’ strategy may be a better bet.

Anyway, that’s just the way I see things, and everyone is entitled to my opinion. That said, I have many blind-spots and a big ego that blocks me from seeing them.

:)

@Joakim, I have not changed my use of the word value.

In the last chart presented, you can see the noise up close. In the previous chart, you can see it from a distance. It shows what it was the previous 9 years and how it got there. The most productive sectors are displayed in order of highest returns. I think it is much more preferable to follow the few top lines rather than the bottom ones no matter how you want to trade.

Fortunately, it's quite easy to see the relationship between today's move and yesterday's move (I think I've done this right, but if not, please correct me).

Notebook previews are currently unavailable.

What follows is the 10-year version of the previous chart.

Risk Factor Returns 10 Years

Some think that one could make some sense out of it. All I see is mostly the signature of quasi-random noise.

If you see something different, then you can make predictions on this thing and have a continuous hit rate in excess of 66%. Just 1 sigma above the mean. It might not be enough statistically, but you sure would make a lot of profits.

Nonetheless, you can still win with a lesser hit rate, but the closer you get to the 50% mark, the less your “predictive” powers will prevail. You would be in the territory where luck has a hard time overcoming randomness, but it still can. Even if it is a program that is running the show.

@Quant Trader's notebook says the same thing. It has an $$R^2 = 0.008$$, indicating very little relationship from one day to the next.
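The lag-1 relationship is easy to reproduce: regress today's return on yesterday's. With simulated i.i.d. returns (a stand-in for the notebook's actual data, where the quoted figure was 0.008) the R² comes out near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical i.i.d. daily returns, roughly 10 years of trading days.
returns = rng.normal(0.0, 0.01, 2520)

x, y = returns[:-1], returns[1:]   # yesterday's move vs today's
r = np.corrcoef(x, y)[0, 1]        # lag-1 autocorrelation
r_squared = r ** 2                 # near zero for i.i.d. returns
```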

@Guy,

I see your Python skills are getting better. Perhaps good enough to put your skin in the game and join the contest. Joakim is currently on top of the leaderboard. Just saying, it might up your credibility and showcase your crystal ball :-)

@James, I have no crystal ball. Unless you consider a long-term trend that is anticipated to last until it ends to be some kind of forecast.

Edited:

I would add a quote from Mr. Buffett.

"I don't know when to buy stocks, but I know whether to buy stocks," Buffett, celebrating his 88th birthday, said on CNBC television.

"Business is good across the board," he added. "It was good two years ago, it keeps getting better."

Guy,

Are you talking about a long-only portfolio or market-neutral? I have found it difficult to extract long-term alpha from price in a market-neutral context with fixed-basis slippage; have you tried it?

@Guy - Although my notebook says that there is very little relationship between today and tomorrow, we're not taking into account relationships between certain types of moves. For example, a move of +10% could lead to a higher chance of a negative move, etc. (Not saying that it will, just saying we need to take some nuance into account when it comes to these strategies.)
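That conditioning idea can be tested directly: compare the mean next-day return after unusually large moves against the unconditional mean. A sketch with simulated i.i.d. returns, where no gap should appear; on real data, any gap is exactly the effect being described:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 5000)  # hypothetical i.i.d. daily returns

big_up = r[:-1] > 2 * r.std()    # unusually large up moves (> 2 sigma)
after_big_up = r[1:][big_up]     # next-day returns that follow them
unconditional = r[1:].mean()

# For i.i.d. returns the conditional and unconditional means should be
# statistically indistinguishable; a significant gap on real data would
# suggest conditional mean-reversion (or momentum).
gap = after_big_up.mean() - unconditional
```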

@James Villa - To be fair to @Guy, he did describe the markets as quasi-random noise; if that's his view of the markets as a whole (I'm extrapolating here), he'd most likely benefit from options being made available on the platform. Linear trading strategies are going to be impossible in a random market; options, however, can still be arbitraged in that scenario. I know I've had a lot of success when it comes to pricing options in unorthodox manners, but I can't transfer that over to Quantopian.

@James, thanks for the link. Nicely stated regarding skin in the game; otherwise it's really all just "so much piss & wind", as they used to say back in the old days. Guy's Python skills are probably already considerably better than mine, and more than adequate for the showcasing of any crystal balls.

@Guy,
Your initial question: ".... how do you want to define a market regime?" is absolutely THE key question to be asking. The answer to it depends entirely on what YOU then want to do with the definition chosen.

I will not answer in generalities. My own personal choice is to build practical systems and then actually trade them. I use "market regime" trading methods sometimes but not always (and not right now in the Q context because there are so many other things to explore too). To @James , @Joachim, @Grant & others who I know are interested in practical results-oriented trading, here are my own answers to what I have done personally in those systems where I employ(ed) a "regime" concept.

• Timeframe: Of course "market regimes" can be defined for any timeframe at all, but using anything significantly larger or significantly smaller than your actual real-life trading timeframe is, IMHO, largely a waste of effort for most practical purposes, unless you intend to apply upscaling methods (similar to those used in geo-modelling). So the first point is to be very clear on what timeframe I am actually trading, and to look at market behavior over windows of somewhat similar length. Then, if I change my trading timeframe, I change my market-regime timeframe accordingly. Personally I am mainly interested in either multi-day trades up to a week or more (with EOD bars) or sometimes intra-day trades of up to about an hour or so (mostly with 5-minute bars), but no HFT.

• Changes of regime: I identify regime boundaries as the points in time where at least one clearly defined characteristic of the market undergoes a significant change. Typical "characteristics" of market behavior are the ones that we all know well, such as, for example: trend direction, steepness of slope, volatility on both intra-day and multi-day scales, presence or lack of gaps, and so on. Basically anything that would cause me to say "yes, now the general character of the market LOOKS different", as well as various other less visually obvious metrics. As part of that, I include a statistical analysis of "market capability" for trading; i.e. how much potential does the market ITSELF intrinsically offer for generating trading profits during this period, IRRESPECTIVE of any trading system used.

From the above, I can identify those periods of time when I simply do not want to be in the market at all, because both Long AND Short trades are most likely to be losers' games. This is probably the single most important thing that I can learn from regime analysis, namely when to just stay OUT!

• Number of market regimes: How many market regimes are there? Well, you could choose as many as you like and then make it as simple or as complicated as you want. I have tried all sorts of different approaches, but (probably because I'm just a dumb ol' injuneer who likes to shave with Mr. Occam's Razor) I say that for PRACTICAL purposes, for me, there are basically only 3 regimes, and they correspond to: 1) the times when I would really want to be Long the market, 2) the times when I would really want to be Short, and 3) the times when I would really prefer to avoid unnecessary risk (in relation to potential reward) and just be Out. You can define and select these for yourself by using anything from eyeballs to ML, your own favourite classifier, fuzzy or crisp logic, or whatever.

• Regime transitions: Having reached this point, I then play a bit more with statistics and have some fun introducing stochastic methods (nothing to do with the eponymous oscillator), as in looking at the transition probabilities between different regimes. That gives me a little bit of a heads-up regarding what is probably NOT very likely to happen next. A serving of Markov, anyone....?

• Does it work: Yes, and I have designed & built several workable systems based ONLY on the "3 regimes = Long / Short / Out" concept as described above, though it generally works better as a background filter.
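The 3-regime idea plus the Markov transition count can be sketched in a few lines. Everything below is illustrative: the thresholds are untuned, the data are simulated, and the volatility cutoff even peeks at the full-sample median (a real system would use only trailing data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2012-01-03", periods=1500, freq="B")
prices = pd.Series(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, len(idx)))), index=idx
)

ret = prices.pct_change()
trend = prices.rolling(100).mean().diff(20)  # medium-term slope proxy
vol = ret.rolling(20).std()

# Three regimes: Long when trending up with tame volatility, Short when
# trending down, Out otherwise (including when history is insufficient).
regime = pd.Series("Out", index=idx)
regime[(trend > 0) & (vol < vol.median())] = "Long"
regime[trend < 0] = "Short"

# Empirical transition probabilities between regimes, a first-order
# Markov view: each row gives P(next regime | current regime).
transitions = pd.crosstab(regime.shift(), regime, normalize="index")
```

Replacing the classifier with ML, fuzzy logic, or eyeballed rules leaves the transition-matrix machinery unchanged, which is part of the appeal of keeping the regime count small.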

@Max, my apologies to you for already drifting a very long way from your good original theme of "Common Factor Risk Snapshot".
If anyone wants to follow up further with any of my ideas on "Practical Market Regime Analysis", perhaps we should break it out as a separate new thread?

Hi Tony,

Yeah, we're both old school. I really like your methodology, perspective, and approach to regime-change analysis. I have a similar approach, though much simpler and less elaborate right now; thanks to Occam, it dwindled down to between 2 and 6 measurements after years of playing with it, LOL! But thanks, you've given me new angles to look at.

An article of interest to consider.

@ Maxwell -

How did Q land on its particular set of common risk factors? I understand the concept that you decided that you don't want them to contribute to the 1337 Street Fund returns, but how did you arrive at this particular set? Another way of putting it is, suppose one were considering starting a market neutral, long-short U.S. equity fund. How would one formulate a hypothesis around common risk factors and test it? Presumably, you are in the process of testing some common risk factor hypothesis presently, using out-of-sample data. How's it going? Did you pick the right set of common risk factors? Or might there have been a better choice?

This statement doesn't really make sense to me:

We can only define alpha relative to the common factor risk that you are trying to avoid. Defining the risk model allows us to say "alpha is what is left over after we remove these influences".

This comes across as a kind of industry received knowledge; it might not make any sense at all, and could be adding risk (perhaps in the form of a steady drag on returns).

The Q common risk factors just appeared out of nowhere, and then there was a white paper published, which focused on the mechanics of measuring them (and controlling them, via the Optimize API). Recently, we received guidance that consistent exposure to any one of them, even within limits, is a bad thing. So, I got to thinking "Does anyone at Q actually understand what they are doing? I don't recall a rigorous explanation of this common risk factor jazz. Why should I care?"

@Grant,

Agreed. My thoughts are that they are simply factors that are so easy to obtain without any algorithmic process that there is actual value added in avoiding them (they can always increase/decrease exposures at another time without much effort). To that point though, would this not tempt users to run an algorithm, check which risk factors they are overexposed to, and adjust accordingly? For example, using a vol factor to 'adjust' vol overexposure from the backtest. Although I suppose you could do this with an Optimize constraint; I'm not completely familiar with that...

Don't totally understand it, but it's interesting to think about.

The article cited in my last post makes some interesting points, some of which are not even exposed. So, I will add a few of my own to consider.

Whatever we design as a trading strategy, it will have to compete against "those" people. But the results presented did not look that great; most failed to outperform the index. So, the question really should be: why invest in these guys?

That is where you come in, because you know you can do better, much better. It is why you practice here on Quantopian: to formulate a method of play, program it, and test whether, at least over past data, it would have worked. If your strategy somehow did not perform well over past data, you can easily surmise that its inherent "predictive" expectations of glory appear extremely foggy at best.

Nonetheless, it remains your task to do better than “them”. And you can be assured, they are bright people too, just as you are. So, what will differentiate you from “them”?

Based on a portfolio's payoff matrix equation, there is only one area where you could make a difference, and that is in the trading methodology used. It might not matter much how you arrived at the design of your particular brand of trading procedures; what will matter, however, is the outcome of your own payoff matrix. And it can be compared to "theirs".

You can all have a different trading strategy and win. It is like in any other type of game. You need to strategize your every move. Know why you have your strategy do what you are telling it to do. If there is no economic reason to explain your procedure, you might have one in the gaming itself. Should it matter which set of underlying principles you used if your strategy wins?

Also, consider the value of your trading strategy. What price tag would you put on it? Should you discount its future value for instance? If you did such exercises, you might be surprised at the results. Because, there, you will have to think long term. Which is where “they” failed. But, you do not have to follow in their footsteps. You can design something better. You are part of the regulating mechanics of markets. You have a service to render and you can profit from it too.

One contrarian angle on this would be to formulate an un-algo. Basically, run with the concept that alpha can only be obtained by defining what it is not, and rather than formulating factors, come up with un-factors, and then remove them from the QTU until alpha appears. The right set of un-factors should be profitable, if I'm following.

Hey, maybe an un-ETF could be launched? In the prospectus, just start with the S&P 500, state what has been removed, and the investor is left with whatever is left over. Could be a whole new angle on smart beta. "Leftover ETFs" might be a winning appellation.

The sector-neutral proposition made in @Maxwell's notebook can be written as:

$$F(t) = F_0 + \Sigma^s [w_s \cdot \Sigma (\mathbf{H}_{sect_s} \cdot \Delta \mathbf{P})] - \Sigma (\mathbf{E}xp)$$

where a weight $$w_s$$ is assigned to each sector $$sect_s$$. The proposition sounds reasonable at first since indeed: $$\Sigma^s [w_s \cdot \Sigma (\mathbf{H}_{sect_s} \cdot \Delta \mathbf{P})] = \Sigma^s (\mathbf{H} \cdot \Delta \mathbf{P})$$, except for one thing.

All sectors are not created equal. Even if you assign them equal weights to start with, it won't make them equal. Nor will rebalancing the portfolio all the time to equal sector weights for some market-neutrality concept. It is one of the places where one might wish to strategize his/her trading strategy for some extra return.

Based on the presented equation, here is what will happen. Return from the best performers will be taken away and given to the worst performers, averaging the whole thing down to less than the average of the few best sectors alone. A way of rendering the whole strategy mediocre by design, when it could easily have been avoided.

Whatever the scenario, it will still end up with most of the portfolio's performance attributed to the top few performing sectors anyway (count 70%+) while the bottom performers will get the rest.

Should you take only the top 5 sectors, you would already improve the overall average portfolio return, simply by having dumped the also-rans.

Why put money in the bottom performers at all when you have the top ones available and easily identifiable? It is a CAGR game after all. Ignoring less productive sectors is a way to increase performance, not reduce it. The same goes for stocks. At the very least, it should raise expectancies.

Everything you lose, or not make, due to the lower performing sectors has to be compensated for by the above average performers. That could be quite a drag on a trading system and have long-term negative consequences in the CAGR department.

Going for sector-neutral with equal weights throughout the life of a portfolio is not a great idea. It's much like throwing good money after bad.

My understanding of the sector-neutral requirement for the Q market-neutral long-short equity algos is that it is a way of imposing diversification at the algo level. One can imagine that in the time-frame of the dot-com bubble, certain sectors may have been particularly amenable to systematic, automated trading. And perhaps, around the Great Recession event, other industry sectors were "hot." If Q doesn't impose some breadth across the market, then they'll just get a bunch of algos focused on the market anomaly du jour. I suspect that traditional hedge funds address this by having individual quants/groups/managers focus on specific slices of the market, and then everything is combined into one diversified fund. My read is that this would not be a good fit for Q at this point (although they've talked about eventually having a potpourri of algo styles, perhaps with a separate contest for each).

The question I have is whether the industry sector diversification is the right way to slice and dice in the first place. There are potentially other ways of identifying related stocks (e.g. http://scikit-learn.org/stable/auto_examples/applications/plot_stock_market.html). I suppose one could use some other diversification scheme, and hope that it aligns so-so with the one Q imposes.

There were a few statements in my last post that needed more substance.

The following chart depicts a portfolio equally invested in 11 sectors, each with its respective compounding rate of return.

Fig. 1 Sector Contribution to Overall Portfolio Performance

The chart is simple. The sums follow the first equation in my previous post. Initially, $10M is distributed equally across 11 sectors, each displayed in decreasing order of its equivalent long-term average compounding rate of return. We easily observe that the highest contributions to the total portfolio's performance came from the few sectors with the highest compounding rates. One would have to say: evidently. And that is the point. Evidently.

If, instead, you distributed the initial capital to the top 5 sectors only, you would get something like the following chart:

Fig. 2 Top Sector Contribution to Portfolio Performance

The sectors' CAGRs were not changed. Nonetheless, each sector contributed more to the portfolio. The reason is simple: each was allocated more capital (120% more). Not new capital, since all of it came from the abandoned sectors. Compounding did its thing. Moving the capital to the top 5 sectors improved overall performance for the presented model by some 71.9%.

This should not stop someone from including the best stock performers from the abandoned sectors. If any of them tended to outperform even the lowest of the top performers, they could also be considered as available trading candidates for your portfolio. Neither should it stop you from designing whatever trading strategy you want. It is always your game, and you play it the way you want to play it. Common sense should still prevail.

Since we can identify the top sectors for extended periods of time (this was demonstrated in a prior post), one could allocate more to the best performers of the group, as depicted in the following chart:

Fig. 3 Top Sector Enhanced Contribution to Performance

Making such a move improved performance by 100%. Note that in each selected sector, only the top-performing stocks should be chosen, which in turn will improve overall performance even further. What was presented is the long side of the problem.
To enhance performance, select the best-performing stocks in the best-performing sectors. Automatically, you will be performing better than market averages. Doing the same thing for a $50 million portfolio will show a dramatic improvement.
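The arithmetic behind the charts can be sketched in a few lines. This is a hedged illustration only: the sector CAGRs below are hypothetical placeholders, not the figures behind the actual charts, and the 10-year horizon is an assumption.

```python
# Hedged sketch: compare an equal-weight allocation across 11 sectors
# with concentrating the same capital in the top 5. The CAGRs are
# illustrative placeholders, not the figures behind the charts above.

def terminal_value(capital, cagrs, years=10):
    """Split capital equally across sectors, compound each, and sum."""
    slice_ = capital / len(cagrs)
    return sum(slice_ * (1.0 + r) ** years for r in cagrs)

cagrs = [0.16, 0.14, 0.12, 0.11, 0.10, 0.08, 0.07, 0.06, 0.05, 0.04, 0.02]

all_11 = terminal_value(10e6, cagrs)                           # 11 sectors
top_5 = terminal_value(10e6, sorted(cagrs, reverse=True)[:5])  # top 5 only

improvement = top_5 / all_11 - 1.0
print(f"All 11 sectors: ${all_11:,.0f}")
print(f"Top 5 sectors:  ${top_5:,.0f}")
print(f"Improvement:    {improvement:.1%}")
```

The point carries regardless of the exact rates chosen: the same capital, compounded at the higher rates only, ends higher, since each kept sector holds 11/5 ≈ 2.2× the capital it had before.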

Fig. 4 Top Sectors, Enhanced Contribution, $50 Million Initial Capital

Building along these lines can only improve your trading strategy's outcome, whatever it is. What is proposed did not change the architecture, structure, or underlying philosophy of a trading program. Nonetheless, it did change the strategy's behavior and how it should weight its alternatives. It becomes an opportunity cost not to do something when you know what could be done. Take the outcome of the first chart and compare it to the one above. Amazing what compounding can do.

What I think strategy developers should do is find ways to increase overall portfolio performance, even if only by 2 alpha points. The value of those 2 alpha points above market averages is considerable, since the alpha is also part of the compounding. To gain an idea of the value of those 2 percentage points, simply compare the chart below to the preceding one, or to the first one, whichever you prefer.

Fig. 5 Top Sectors, Enhanced Plus 2 Alpha Points, $50 Million

Doing ordinary, meaning doing no more than the averages, is of little interest. That can be had by so many other means. Buying an index fund would do the job.

If you have the talent, then differentiate yourself. Show that you can get those 2 extra alpha points above averages. It would be more than sufficiently rewarding for all involved. If you can do even better, then go for it.

Adding a zero to the initial capital would generate ten times the above scenario (Fig. 5). All of it could come from the single trading script you designed. "Think big" was part of the 70's culture. Maybe it's time to bring it back.
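What 2 extra alpha points are worth over a long horizon is easy to quantify. A minimal sketch, assuming an illustrative 10% base return and a 20-year horizon (both are assumptions, not figures from the charts):

```python
# Hedged sketch of what 2 extra alpha points compound to over time.
# The base return and horizon are illustrative assumptions only.

base, alpha, years = 0.10, 0.02, 20
capital = 50e6

no_alpha = capital * (1.0 + base) ** years
with_alpha = capital * (1.0 + base + alpha) ** years

print(f"{base:.0%} for {years} years:  ${no_alpha:,.0f}")
print(f"{base + alpha:.0%} for {years} years:  ${with_alpha:,.0f}")
print(f"Extra from 2 alpha points: {with_alpha / no_alpha - 1.0:.1%}")
```

The ratio (1.12/1.10)^20 grows with the horizon, which is the whole argument: the alpha only pays this much if it was there all along.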

What was presented is somewhat generic. It applies to sectors, and is pervasive throughout all aspects of portfolio management. Whether it be sectors, industries, stocks, strategies, portfolios, portfolios of strategies, or funds of portfolios of strategies, they can all be ordered by outcome. Doing so will generate tables like those shown above. Enhancing the best performers in all those groups will tend to increase overall performance.

Hi Guy -

Any insights on how to improve the odds of making money in the Q contest as constructed (and the odds of getting an allocation)?

There was a suggestion above that you give the contest a go. Seems like you may have an edge. Have you entered any algos? Code name?

As I remember from a year or two ago, Guy did make various comments to the effect that he had no need to prove himself to anyone in any contest. He also wrote at very considerable length about why Q should NOT use the "color / creature" handle naming convention (that many of us actually rather like), including citing his concern that it would result in potential loss of credibility in industry if he happened to be given a name like "blue dikdik". (....??) But ya 'know really though, as they say in Indonesia: "Jangan malu-malu" = No need to be shy! ;-))

@Grant, what is proposed goes against the grain for current contest rules.

The presented argument exhorts one not to go for total sector neutrality, seeing it as counterproductive, or at least less productive than it could be, in the sense that you could do much better. I do think the generic charts provided in my prior post illustrated that.

First, it throws away 4, 5, or 6 of the sectors, depending on taste, but mostly due to lack of performance. Then it more than suggests being sector-biased by putting more weight on the best-performing sectors. It recommends doing the same at the strategy level, since that would also improve overall portfolio performance.

Most of all, it emphasizes the need to push for that extra 2+ alpha points above the average guy. Over the short run, it might not make that much of a difference. But over the long run, it is another ball game, since the alpha compounds. Due to its compounding nature, that alpha can only be had if you had it all along. It is not something that will be given to a portfolio in its last year just to catch up.

That post might be a suggestion for future contest rules to allow more leeway in the sector selection constraints, among other things.

I have not entered any contest yet. I am not ready. Mostly, I specialize in other areas outside the contest constraints.

@Tony, I still hate the use of pseudonyms. I understand that some like it. I have enough of a hard time remembering names that I have little interest in remembering two. You will find that post here: https://www.quantopian.com/posts/contest-32-rules-changes-commission-and-leverage#597529eacd56e1000de6c9d0 so I will not have to repeat myself.

@Guy, sure, certainly no need to repeat yourself. But I hope that you will not let it stop you from participating.

@Max et al., thanks for the interesting discussion!

Got tired of trying to match the legend to the traces, so I looked up how to improve the visualization by sorting on the last value (row) of the traces and reordering the legend by that sorting. Also smoothed the traces a bit to better see the relationships. See cells 13, 15, and 16.
alan

3
Notebook previews are currently unavailable.

Hi @Alan, Thanks, looks good, nice to see you again. Cheers, TonyM.

@Alan, thanks, great work.

Can anyone compare @Alan's chart with the corresponding sector ETFs?

@Grant, to answer a part of a previous question on how to improve your game in the contest, I would suggest the following.

Since all sectors are not created equal, you could take advantage of this.

$$F(t) = F(0) + \sum_s \left( w_s \cdot \sum (H_s \cdot \Delta P) \right) - \sum (Exp) = F(0) + \sum (H \cdot \Delta P) - \sum (Exp)$$

which states that the weighted sum of sector strategies still accounts for the whole, as if $s$ separate strategies were applied to your stock selection, since each stock traded belongs to only one sector. So, there is no loss of generality here.
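The "no loss of generality" claim can be checked numerically: because each stock belongs to exactly one sector, summing the payoff H·ΔP sector by sector recovers the total portfolio payoff. A minimal sketch with made-up tickers, holdings, and price changes:

```python
# Hedged numeric check of the decomposition above: summing per-sector
# payoffs equals the total payoff because each stock is in exactly one
# sector. All tickers and numbers are made up for illustration.

holdings = {"AAA": 100, "BBB": 200, "CCC": 150, "DDD": 50}   # shares held
delta_p = {"AAA": 1.5, "BBB": -0.4, "CCC": 0.9, "DDD": 2.1}  # price changes
sector = {"AAA": "tech", "BBB": "tech", "CCC": "energy", "DDD": "health"}

# total payoff: sum over all positions of H * dP
total = sum(holdings[s] * delta_p[s] for s in holdings)

# same payoff, accumulated sector by sector
by_sector = {}
for s in holdings:
    by_sector.setdefault(sector[s], 0.0)
    by_sector[sector[s]] += holdings[s] * delta_p[s]

assert abs(total - sum(by_sector.values())) < 1e-9  # partition is exact
```

The partition into sectors is a pure regrouping of the same terms, so any per-sector weighting scheme can be layered on top without changing the accounting.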

The exposé argued that sectors can be ordered by performance levels (long-term returns), and therefore $$s_1 > s_2 > \cdots > s_{11}$$, meaning each sector's payoff matrix is expected to produce less and less as the sector index increases. It was also noted that all sectors had been going up over the 10-year period analyzed. Evidently, this could change going forward, but not overnight. And since your trading strategy can monitor this on a daily basis, you will always have time to reconsider the sectors' ordering and adapt the sector weights. Look at Alan's factor-returns chart for a smoother perspective.
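The daily monitoring step can be sketched as a simple rank-and-weight pass. This is a hedged sketch, not Quantopian API code: sector names and trailing returns are placeholders, and the keep-top-5, weight-by-trailing-return rule is one arbitrary choice among many.

```python
# Hedged sketch: rank sectors by trailing long-term return, keep the
# leaders, and tilt weights toward them while summing to 1. The sector
# names and trailing returns below are illustrative placeholders.

trailing = {"tech": 0.15, "health": 0.12, "cons": 0.10, "fin": 0.08,
            "ind": 0.07, "energy": 0.03, "util": 0.02}

ranked = sorted(trailing, key=trailing.get, reverse=True)
top = ranked[:5]                 # drop the worst performers

# weight each kept sector in proportion to its trailing return
total_r = sum(trailing[s] for s in top)
weights = {s: trailing[s] / total_r for s in top}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights still sum to 1
print(weights)
```

Re-running this ranking daily (or weekly) is what gives the strategy time to adapt when the sector ordering eventually shifts.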

The formula for the market-neutral strategy is

$$F(t) = F(0) + 0.50 \cdot \sum (H_{s=L} \cdot \Delta P) + 0.50 \cdot \sum (-H_{s=S} \cdot \Delta P) - \sum (Exp)$$

You would not be sector-neutral, but you could still stay market-neutral and play all the sectors. The sector weights could still sum to 1: $$\Sigma w_s = 1$$, even if not distributed evenly.

As you know, such a move is sub-optimal return-wise. Nonetheless, what it will give you is reduced volatility, reduced beta, and reduced drawdowns. You will be playing the sector spreads, and these are built into the data itself. You would be using the fundamental structure of the game to your advantage, especially knowing that some sectors can stay on top while others stay at the bottom for long periods of time (years).
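One way to picture such a book: longs fund the shorts 50/50 (dollar-neutral), with the long side tilted toward the stronger sectors and the short side toward the weaker ones. A minimal sketch under assumed sector names and tilts:

```python
# Hedged sketch: a dollar-neutral book whose long side is tilted toward
# the stronger sectors and whose short side toward the weaker ones.
# Sector names and tilt fractions are assumptions for illustration.

long_sectors = {"tech": 0.5, "health": 0.3, "cons": 0.2}   # sums to 1
short_sectors = {"util": 0.6, "energy": 0.4}               # sums to 1

gross = 1.0                      # total gross exposure of the book
longs = {s: 0.5 * gross * w for s, w in long_sectors.items()}
shorts = {s: -0.5 * gross * w for s, w in short_sectors.items()}

net = sum(longs.values()) + sum(shorts.values())
assert abs(net) < 1e-9           # market-neutral: net exposure is zero
print(longs, shorts)
```

The book is market-neutral in dollars while being deliberately sector-biased, which is exactly the spread-capture trade described above.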

This would be like “pairs-trading” without finding a corresponding pair in the same sector. Instead, you would be playing sector against sector, even if not related. The spread is there and expanding.

Hope it helps.

Thanks Guy -

Q would like to see (per the Get Funded page):

Low exposure to Quantopian risk model factors

Strategies should be less than 20% exposed to each of the 11 sectors defined in the Quantopian risk model. Strategies should also be less than 40% exposed to each of the 5 style factors in the risk model.
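Those two limits are easy to screen for mechanically. A hedged sketch, with made-up exposure numbers standing in for what a real backtest's risk report would produce:

```python
# Hedged sketch of checking the stated Q limits: each sector exposure
# under 20%, each style exposure under 40%. The exposure numbers are
# illustrative, not output from a real backtest.

sector_exposures = {"tech": 0.18, "energy": -0.05, "health": 0.12}
style_exposures = {"momentum": 0.35, "value": -0.10, "size": 0.05}

def within_limits(exposures, limit):
    """True when every exposure's magnitude is under the limit."""
    return all(abs(x) < limit for x in exposures.values())

ok = (within_limits(sector_exposures, 0.20)
      and within_limits(style_exposures, 0.40))
print("Passes Q risk limits:", ok)
```

Note the guidance quoted below goes further: staying just inside the bounds while carrying a consistent tilt may still be frowned upon.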

Recent guidance from Jess Stauth suggested that beyond these requirements, qualitatively, they'd like not to see any consistent exposure to any of the risk factors, even within the bounds (it would be interesting to see if this holds true for algos they've funded...). Further, I've gathered "the more stocks the merrier" even if all of the constraints are met. In my mind, this all translates into a salable "interesting" alpha that can be summed up in a neat strategic intent statement (also one of the "constraints"), that has no risk vis-a-vis the defined risk factors, and can be scaled to tens of millions in capital.

So, my read is that any "tilt" with respect to the Q risk factors will be a non-starter. They've gotten religion on the risk factors, and probably have a little risk factor shrine with incense, candles, and equations at Q headquarters. Whether this makes sense or not is not something to be discussed in the open.