New Strategy — “In & Out”

Intuitively, it might be possible to generate excess returns via cleverly timed entries and exits in and out of the equity market. This algo may be a first step toward developing a strategy that derives optimal moves in and out of the market on the basis of early indicators of equity market downturns. At the very least, the algo could start an interesting discussion regarding the sense and nonsense of trying to time market entry and exit in this way. Either way, your contribution is highly appreciated.

Backdrop for the initial code:
- Resources and industrial products are early in the value chain and
market value drops in corresponding firms are early indicators of
growth worries that ultimately affect other sectors and the broader
market
- Equity market value growth benefits substantially from cheap debt,
such that increases in bond yields (i.e., drops in bond prices)
should be an early indicator of a slowdown in growth
- Yet, no matter what these signals say, if the market drops by 30%
(‘once in a decade opportunity’), we want to be in

Measures used in the initial code:
- Resources: The DBB ETF (Invesco DB Base Metals Fund) provides the
signal. DBB tracks the prices of three key industrial metals:
aluminum, copper and zinc. A 7% drop over an approx. 3-month trading
period is considered a substantial drop.
- Industrials: The XLI ETF (Industrial Select Sector SPDR Fund)
provides the signal. XLI tracks the broad US industrial sector. A 7%
drop over an approx. 3-month trading period is considered a
substantial drop.
- Cost of debt: The SHY ETF (iShares 1-3 Year Treasury Bond) provides
the signal. SHY tracks short-term US Treasury debt (1-3 years) and
changes in this debt’s ‘risk-free’ interest yield should be
indicative of changes in firms’ cost of debt which is based on the
risk-free rate (risk-free rate + risk premium). A 60 basis point
increase (i.e., a drop in the bond price) over an approx. 3-month
trading period is considered a substantial increase.

Rules of the algo:
- In terms of equity, only the market (SPY) is traded
- If out of the market, the money is invested in bonds (IEF and TLT)
- If any of the indicators drops substantially, we go out of the
market and into bonds. We wait for 3 trading weeks for the dust to
settle, unless the market drops by 30% during the waiting period in
which case we enter immediately
- Notes: this algo’s focus is only on the market ‘entry vs exit’
decision, i.e. not on the ‘what equities do I select’ decision. The
assumption is that you will be able to plug your equity selection
logic into this algo and get an additional boost in terms of your
strategy’s returns. However, finding an optimal equity selection is
not the focus here (if you are interested in equity selection, see
other community forum contributions such as “Quality Companies in an
Uptrend” or “Uncovering Momentum”, among others).
- Scheduling functions: Whether we ‘go out’ is checked daily, since
equity prices usually drop quickly when things deteriorate and hence
speed seems paramount. In contrast, whether we ‘go back in’ is
checked weekly; this is a personal preference so that complex equity
purchases only have to be executed once a week, at the end of the
week. This way, a ‘lazy’ trader who does not have the time to execute
complex reshuffles of the portfolio on a daily basis (other than
doing a ‘sell all’ and going into bonds) can combine the algo with a
more sophisticated equity selection strategy.
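As a rough illustration only (this is not the original Quantopian code; the helper name, the DataFrame layout, and the simplified SHY handling are my assumptions), the out-signal from the rules above can be sketched as:

```python
import pandas as pd

MOM_WINDOW = 58  # ~3 trading months
DROP = -0.07     # 'substantial' drop threshold for DBB and XLI

def signal_out(prices: pd.DataFrame) -> bool:
    """True if any early-indicator ETF dropped substantially over the
    momentum window (a SHY price drop, standing in for the ~60bp yield
    rise, could be added as a third column in the same way)."""
    mom = prices.iloc[-1] / prices.iloc[-MOM_WINDOW] - 1
    return bool((mom[['DBB', 'XLI']] < DROP).any())

# Toy data: DBB slides 10% over the window while XLI stays flat
prices = pd.DataFrame({'DBB': [100.0] * 59 + [90.0],
                       'XLI': [100.0] * 60})
print(signal_out(prices))  # True -> sell SPY, move into IEF/TLT
```

The 15-trading-day wait and the -30% SPY crash re-entry from the rules would then be layered on top in the scheduled daily/weekly checks.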

Outcomes for the initial algo:
- from 1 Jan 2008 to 2 Oct 2020, the total return is approx. 860% vs
190% for the SPY (= being always in)
- the backtest indicates a beta of 0.34 and the tear sheet shows an
alpha of 20%
- Note: regarding the backtest period, consider the launch dates of
the different ETFs. All of them should be available from 1 Jan 2008
onwards but some may be unavailable in earlier periods, creating a
limit regarding testing the algo in earlier time periods

Brainstorming regarding improvement opportunities:
- different ETFs/ways to measure prices of resources (e.g.,
additional key resources such as oil), industrial goods (e.g., a
stronger focus on industrial capital goods), and bonds (e.g.,
corporate bonds instead of government bonds)
- additional aspects—other than resources, industrial goods, and bond
yields—that could provide early indicators of equity downturns
- improvement of settings (e.g., waiting period, %-points indicating
‘substantial’ drops)
- code improvements (errors/unintended outcomes, efficiency)

context.out = context.in_out==0 & context.spy_in==0


This line does not do what you expect: context.out is true although context.spy_in is 1 during 2009. See the attached backtest, which records context.spy_in, context.out, and the current weight for each asset. Either put parentheses around each condition or use 'and' instead of '&'. Fixing it lowers the performance quite a bit, however.
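The parse can be reproduced in isolation. In Python, '&' binds tighter than '==', so a==0 & b==0 is evaluated as the chained comparison a == (0 & b) == 0; since 0 & b is always 0, the whole line reduces to a == 0 and the second condition is silently ignored:

```python
a, b = 0, 1  # in_out says 'out' (0), but spy_in is 1

# Buggy form: parsed as a == (0 & b) == 0, so b never matters
buggy = a == 0 & b == 0

# Intended forms: parenthesize each comparison, or use 'and'
fixed_parens = (a == 0) & (b == 0)
fixed_and = a == 0 and b == 0

print(buggy, fixed_parens, fixed_and)  # True False False
```

In other words, the buggy line was already equivalent to context.out = (context.in_out == 0), with spy_in playing no role at all.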

May I ask why you count the days when spy_in is 1? Btw., the count gets reset every day the condition is true; is that on purpose?

Here's my contribution. Although the code might look totally different, it basically does the same as the original.

My only addition is UUP, an ETF on the USD Index. Assuming that a stronger US dollar precedes a market decline, I added the condition that if it's up 7% (over the same time span as the other ETFs) we also drop SPY.

Tentor, well spotted there regarding the code issue in the original and nice contribution, bringing in UUP and creating a more efficient/correct code!
The logic USD up -> equity under pressure makes a lot of sense = a flight into a safe haven currency. Nice!

Since my coding skills are still "under development", I hope you don't mind me asking: I understand that the code
context.out = context.in_out==0 & context.spy_in==0
is incorrect. Is there any valid interpretation of what this code snippet does? I am just trying to get my head around why there might be excess returns linked to writing the condition in this way. However, maybe there is no valid interpretation.

If I am reading your code update correctly, you found greater returns by dropping the spy_in condition altogether (i.e. the 'once in a decade opportunity' idea). So we can conclude that this is not a beneficial condition, or that it would need to be reintroduced in a different way.

Great stuff!

(Regarding your question concerning the spy_in condition and updating the daycount whenever the condition is true: For both conditions, concerning the in_out indicator and the spy_in indicator, I wanted to wait the full waitdays after the conditions have been true for the last time to change the course of action. That is, even when the out conditions are no longer met, the code stays out for three trading weeks. On the other hand, when a 'once in a decade opportunity' comes along (SPY -30% drop), I wanted to stay in for the full waitdays even though the in_out indicator might say 'out' (since the equity market often rebounds quickly in the first few weeks, which we don't want to miss). However, I reckon your tests have indicated that the returns are better without this 'once in a decade' market entry logic so that it's better left out of the algo.)

To be honest, I also don't know exactly what the logic of that line does; I think the '&' only looks at the value directly after it. Usually I only use it for boolean arrays / data frames and try to always remember the parentheses. The reason why I dropped the spy_in was that the test with only

context.out = context.in_out==0


had pretty much the same result, just slightly better. Oh, and with parentheses (buying into the pullback) it was way worse: drawdown over 30% and returns of 700-something %.

I would say this is a very good algo. Low vola, low drawdown, high Sharpe ratio, and the return increases steadily. Well done!

Very nice, thanks for sharing!

I replaced the order_optimal_portfolio() with the "old timer" order_target_percent(). The result is almost the same, but there are far fewer trades. The original algo has 1448 trades (buys and sells, 2005 - 2020); by using order_target_percent() there are only 173 trades. Is it better or not? :-)

I would say perfect for the 'lazy trader' mentioned in the intro text :) That's about one trade per month. I reckon there are substantial savings in terms of commissions, which leaves more money in terms of total equity return. Nice!
People integrating into the algo their own stock selection strategy that may require more frequent portfolio reshuffling can easily alter the trading frequency to suit their needs, so that's all good.

Not sure whether you had a chance to check: were all the ETF prices available from 2005 onwards -- I reckon the system would have complained otherwise (?).

As an additional possible indicator, I have tested the oil price using the ETF USO (United States Oil Fund).
It tends to reduce the total return significantly if used at the drop size set for materials and industrial goods (-7%). This may be due to the typically higher volatility of the oil price. I found that only when the drop is set at a very high percentage level (-25% over the three-month window) can additional returns be realized; see the attached backtest. It is likely that these returns are due to a singular event period for which it was better to be out of instead of in the market. The returns are also small relative to the length of the assessment period.
Overall, the oil price seems to provide little additional information for a superior entry/exit decision.

I have worked with my earlier copy of Tentor's code above. For possible additional savings (and less hassle) from fewer trades, see Thomas's code version.

@Peter:
1.

...Not sure whether you had a chance to check: were all the ETF prices available from 2005 onwards...

You are right, I should begin from 2008. But this doesn't change the fact that there are far fewer trades using order_target_percent() than order_optimal_portfolio(). This puzzles me a little bit, since I thought order_optimal_portfolio() should be better, given that it is optimized.

2. I did some backtesting on your algo and found that if I change the values of some parameters a little bit, such as the waitdays (use 14 or 16) and the "58" (use 57 or 59), the result is reduced quite a lot. I wonder how much time you spent to figure out the values of 15 and 58? :-)

If you use this for live trading, I am not sure you can get what you want. :-)

@Thomas, hivemind, I've done the same to check for overfitting.
I just used steps of 5 days to get a whole number of weeks (e.g. 53 days is a quarter, which seems less engineered than 58; the same applies to 20/22 vs 15).
Noticed a drastic drop in returns.

Will rewrite in Alphalens-digestible form to check for the consistency of returns.

@Thomas Chang

I totally agree with your first post in this thread, not completely with the last one.
Nobody now asks why J. Welles Wilder chose the default period of 14 for RSI or
why Gerald Appel chose the default parameters 12-26-9 for MACD.
We have been using these magic numbers for many many years.
Just take MOM = 57 and WAIT_DAYS = 15 as magic numbers for this strategy.
Backtest it on dozens of different US equity ETFs at different times in the market cycle.
I am sure you will get similar or better results.
Past performance is the best predictor of success.
There are millions of ways to improve the strategy without changing the magic numbers.

I was not wrong.

mom = hist.iloc[-1] / hist.iloc[-58] - 1


is actually 57-day momentum.

Correct me if not.
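This off-by-one is easy to verify with a toy series (assuming hist is a pandas Series of daily closes, as in the algo):

```python
import pandas as pd

# With a constant 1% daily gain, iloc[-1]/iloc[-58] - 1 measures the
# return over 57 elapsed trading days: iloc[-1] is day 59 of 60,
# iloc[-58] is day 2, and 59 - 2 = 57.
hist = pd.Series([100 * 1.01 ** i for i in range(60)])
mom = hist.iloc[-1] / hist.iloc[-58] - 1

print(abs(mom - (1.01 ** 57 - 1)) < 1e-9)  # True: 57-day momentum
```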

Nobody now asks why J. Welles Wilder chose the default period of 14 for RSI or
why Gerald Appel chose the default parameters 12-26-9 for MACD.

The reason why nobody asks is that RSI 14 covers half a month. The MACD numbers certainly have their meanings, too; this can be googled. I don't think they are magic numbers.

Besides, the RSI and MACD are indicators, not backtesting optimizations. This is the big difference.

If you use RSI 14 in your algo, I don't think the return will change drastically if you choose 13 or 12. Similarly for MACD.

Surely this is a very interesting algo.

@Peter Guenther Kudos! Very Interesting algo. I like the way it's founded in underlying economics. I've reworked it a bit to use entirely pipeline with a single factor/signal for being in or out of the market. The original thought was, that way, it could then be analyzed using Alphalens. Not sure that's feasible but that was the intent. I also added recording for the four signals. It's interesting how often the signals come into play but rarely is there more than one or two.


Here is the same algo as above but, instead of going 'out' of the market and into bonds, it goes short SPY using SH. Returns are nowhere near the same; moreover, volatility and drawdown are much higher. If the signals were finding when to get 'out' of the market, one would expect better performance? Just musing, but it almost seems the signals are really just finding when to get into bonds.

I feel that I am falling behind a bit. These are all great comments :)

@Dan: very nice work on the algo code, and much appreciated regarding uncovering the surprising finding about taking a short position in the SPY. A great new puzzle! Regarding the point you raise: intuitively, I was expecting that there is hardly any return in bonds, i.e. a bit like holding cash (= zero return plus a bit). So this will definitely be interesting to tease out, i.e. what the contribution of bonds is in all of this. I have to do more thinking/testing on that.
One speculation though: what we might have done by swapping in the short SPY is to increase the ‘cost of being wrong’. My assumption is that the algo is moving out of the market often, and often it is wrong (the few times it’s right really pay off and that’s why we are happy to be nervous and often go out unnecessarily). When holding bonds, the cost of being wrong may be negligible (if they hardly move); when holding the short SPY these costs may be much more material and reduce the total return substantially.

@Thomas, Dmitry and Vladimir: Great comments regarding sensitivity, and I agree, this will also be a big issue to investigate. Intuitively, I reckon this strategy will turn out to be quite sensitive, since it tries to ‘time’ market entry/exit and, particularly, it tries to get out when things go pear shaped and then usually deteriorate quickly. So, I reckon individual days will matter a lot.
I have created a table regarding what Thomas was referring to, changing the wait days to 16 or 14 (from 15) and the shift days to 57 or 59 (from 58):

parameter Δ         | % tot | "missing" factor | Δ% p.a.
default (incl. oil) | 942%  |                  |
wait = 16 days      | 814%  | 1.16             | 1.2%
wait = 14 days      | 858%  | 1.10             | 0.7%
shift = 57 days     | 676%  | 1.39             | 2.6%
shift = 59 days     | 712%  | 1.32             | 2.2%
SPY                 | 194%  |                  |

The sensitivities may tell us something about which ‘timing’ (in or out) matters most
I was trying to quantify the changes somehow and was thinking along the lines of a ‘missing factor’, i.e. what I would need to multiply the returns by to get to the total return of the default strategy. One aspect to also consider is the very long timeframe (almost 13 years) we are looking at, meaning that small yearly percentage differences can build up to very substantial differences in the end. Looking at these numbers, these may be some conclusions:
1. the wait days sensitivity may be OK, judging on the basis of the 0.7%-1.2% p.a. drop (although a wider range could be checked)
2. the shift days sensitivity is comparably much more substantial
3. hence, a well-timed ‘going out’ (related to shift days) seems to matter much more than a well-timed ‘going in’ (related to wait days)

Are we looking at the right benchmark?
The 942% look nice and I think it can be worthwhile trying to push that further up—based on indicators that economically make sense, as Dan put it nicely in his post. The assumption would be that even if the future does not play out exactly like the past, with a strategy that has proven more successful in the past, we may have a better chance to get through it all in the best possible shape. Of course, nobody knows for sure.
However, I was recalling for myself that this algo is trying to beat the market by ... trading the market. No long hours to research the best possible stocks, no sophisticated algo to select stocks, no intricate company fundamentals etc. 942% may be a shameless return against this background :)
Against this backdrop, my thoughts were: maybe the correct benchmark is the market, i.e. SPY's 194%. That is, using the sledgehammer on sensitivities, how often does the return drop below 194%? In other words, what is the risk that we have all this strategy going, the trades, the commissions, the time, and then end up worse than if we had just stayed in the market and not tried to time it?

Thoughts on testing parameter sensitivities
I reckon that we somehow have to consider that the parameters are not independent. E.g., a 7% drop over 58 days (the shift days) is not equivalent to a 7% drop over 40 days (the latter would be a much more dramatic drop and thus may occur less often). An idea might be to use a proportional % drop. Yet, if we reduce the shift days, we go out earlier, and then the wait days may not be sufficient to sit it out when things really go pear-shaped.
Also, seeing that the shift days are so sensitive in the table above, maybe the parameter operationalization could be improved to make it less sensitive? At the moment, it compares the current observation with the one that occurred exactly 58 (or 57) days earlier. This might not be ideal. Not sure whether this would change anything, but might a max function with a rolling window help here?
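The rolling-window idea can be sketched as follows (a hypothetical drawdown_from_high helper, not the algo's actual code). Instead of comparing today's price with the single observation exactly 58 days ago, it measures the drop from the highest close within the window, which should be less sensitive to the exact shift-day setting:

```python
import pandas as pd

WINDOW = 58  # the shift-day window from the algo

def drawdown_from_high(prices: pd.Series) -> float:
    """Current price relative to the highest close within the window,
    rather than the single price exactly WINDOW days ago."""
    high = prices.iloc[-WINDOW:].max()
    return prices.iloc[-1] / high - 1

# Toy series: peak mid-window, then a slide down to 98
s = pd.Series([100.0, 105.0, 110.0, 104.0, 100.0, 98.0])
print(round(drawdown_from_high(s), 4))  # -0.1091, i.e. ~11% off the high
```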

@Peter Guenther

I have backtested this algorithm several times using Dan Whitnable's code and found that performance
is highly dependent on the 'DBB' momentum.
Performance drops significantly if you replace 'DBB' with 'XME', a 20-times-more-active base metals
ETF ($40M per day). I think it inappropriate to depend heavily on this very thinly traded 'DBB'
($2M/day with only 3 positions) and its market maker, so I downgraded my rating on the current
version of “In & Out” from Gold to Silver.

@Vladimir: Nice one there regarding sensitivity and good point about liquidity of the ETF with regard to providing a timely and accurate price signal!

Open for discussion: What I like about the DBB is that it trades key industrial metals (aluminum, copper and zinc) directly via futures contracts. So the ETF's price is very much a function of these commodity prices and can potentially give a very direct signal of anticipated changes in demand.
If I understand it correctly, XME trades companies in metals and mining. It could be diluted since it also includes, for instance, coal mining companies (= energy instead of metal). Since the underlyings are companies, the signal could be slightly more lagged (I am thinking: translation of commodity prices into firm performance). The typical fluctuation might also be different (e.g. a 7% drop may be too small/large for an ETF that trades companies vs commodities directly).
ETF.com notes: "XME is the only fund that tracks the US metals and mining segment. That's also its major issue: it winds up being a poor representation of the metals and mining industry. First, it includes coal companies, which we believe belong in energy ..."

Maybe we could try it with a different ETF that is more liquid than DBB but also trades metals directly?

Since we don't want to trade DBB, why should we care about the volume?
It's an ETF which tracks an index that consists of copper, zinc and aluminum in equal parts. I don't think you can get more direct signals for these metals any other way. JJM would be an alternative but was only incepted in 2018.
Again, why replace it at all?
We don't want exposure to the commodities, just to know the price differences.

Product Details

The Invesco DB Base Metals (Fund) seeks to track changes, whether positive or negative, in the level of the DBIQ Optimum Yield Industrial Metals Index Excess Return™ (DBIQ Opt Yield Industrial Metals Index ER or Index) plus the interest income from the Fund's holdings of primarily US Treasury securities and money market income less the Fund's expenses. The Fund is designed for investors who want a cost-effective and convenient way to invest in commodity futures. The Index is a rules-based index composed of futures contracts on some of the most liquid and widely used base metals — aluminum, zinc and copper (grade A). You cannot invest directly in the Index. The Fund and the Index are rebalanced and reconstituted annually in November.
This Fund is not suitable for all investors due to the speculative nature of an investment based upon the Fund's trading which takes place in very volatile markets. Because an investment in futures contracts is volatile, such frequency in the movement in market prices of the underlying futures contracts could cause large losses. Please see "Risk and Other Information" and the Prospectus for additional risk disclosures.

@Tentor: good point!; @Vladimir: thanks for the details, that's great!

Against the background of the discussion, I am wondering whether we might be able to offer the following preliminary conclusions:

DBB provides a timely and precise signal of important industrial metal prices
The ETF tracks the metals prices directly (via futures contracts), which is an advantage vs tracking companies. We are currently not aware of an equally precise alternative that has a long enough price history for proper backtesting. The JJM ETF could eventually become an alternative but currently only has a short price history, since it was only launched in 2018.

For our ETFs providing the price signals, precision beats liquidity/volume
Of course, the ETFs should provide a signal that can be justified on economic grounds, e.g. see Tentor's UUP signal ('flight to safe haven') and other signals discussed earlier. Moreover, the signal should be as precise/clean as possible. The minimum requirement is that an ETF provides a price update on a daily basis, i.e. liquidity/trading volume has to allow for this minimum requirement to be met but, beyond this, is of no critical concern since the algo does not aim to trade the price signal ETF.
This means that the algo can use price signals from a very broad universe of available ETFs, providing rich opportunities to identify and add additional signals that improve and/or stabilize the algo's performance.

@Peter Guenther

DBB is not a pure Industrial Metals Index ETF.
It holds aluminum, zinc and copper futures contracts 50% - 60%,
money market funds and T-Bill ETFs 30% - 40%,
US Treasury securities 10% - 20%.

It collateralizes its futures positions primarily with US Treasuries,
money market funds and T-Bill ETFs.

The Invesco DB Base Metals (Fund) seeks to track changes, whether positive or negative, in the level of
the DBIQ Optimum Yield Industrial Metals Index Excess Return™ (DBIQ Opt Yield Industrial Metals Index
ER or Index) plus the interest income from the Fund's holdings of primarily US Treasury securities
and money market income less the Fund's expenses.

I disagree with both of your preliminary conclusions until you prove, in a notebook, at least a 98%
daily return correlation between DBB and the S&P GSCI Industrial Metals Index for the last 10 years.

@Vladimir: Oh I see now, sorry I was misreading your point. For the money market instruments and Treasury bills, I always impute 'zero return plus a bit' = 'not much going on', but this may not be correct and, indeed, they do seem to constitute a substantial fraction of the DBB. Agreed, correlations would provide a clearer picture here.

That's impossible because the S&P index also has nickel and lead in it ;)
Also, I'm too lazy to search for historical data for the index, prepare it and upload it to Q.

But here's a notebook with correlations for several ETFs and futures (for the timespan prices are available for every asset). The only industrial metal future available in Q seems to be copper, so I included palladium and platinum since their main usage is in the industrial sector. I compared the correlations for DBB and XME with the other symbols.
It seems to confirm my claim that DBB is more highly correlated with industrial metals than XME is.
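For reference, the mechanics of such a return-correlation comparison can be sketched like this (toy prices, deliberately constructed so the 'DBB' column co-moves with 'copper'; not real market data):

```python
import pandas as pd

# Toy daily closes; DBB is built to co-move with copper, XME is not
# (placeholder numbers, not real market data)
prices = pd.DataFrame({
    'DBB':    [20.00, 20.20, 20.10, 20.50, 20.40],
    'XME':    [30.00, 30.10, 30.40, 30.20, 30.60],
    'copper': [4.00, 4.05, 4.02, 4.10, 4.08],
})

returns = prices.pct_change().dropna()  # daily returns
corr = returns.corr()                   # pairwise Pearson correlations

# By construction, DBB's returns track copper's far more closely than XME's
print(corr.loc['DBB', 'copper'] > corr.loc['XME', 'copper'])  # True
```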

Another theme that I wanted to discuss; it may have some fit with the question listed earlier "Are we looking at the right benchmark?":
"In & out"-type of strategies are not unknown to us, indeed they are often an element (sometimes a bit 'hidden') in our algos. One of the key points of this thread and the discussions here might be that there can be value in disentangling an algo's "in & out" component from the "stock selection" component to understand what each of these components are contributing. It can be worthwhile to assess and optimize each component individually.

Benchmark: The death cross
It won't be news for many here: the fantastic “Quality Companies in an Uptrend” strategy has an integrated "in & out"-type component in it, which is the death cross, i.e. a drop of the S&P 500 (SPY) 's short-term moving average below the long-term moving average.
I hope that my coding skills did not let me down and that I am working with the latest version of the algo. However, if everything worked out, the attached backtest should show the returns related to only the "in & out" component (acknowledging that this component is actually more interwoven in the algo since momentum stocks are held beyond 'out' conditions).

Implications for our discussion here
In addition to critically assessing sensitivity and the precision of price signals (see earlier posts), it may be worthwhile to benchmark performance against other popular "in & out" tactics to see whether we add any value at all and, if so, how much.

By impossible I meant the 98% correlation. I hadn't seen Peter's post yet, so now it seems out of context. I can also imagine that 2 ETFs tracking one and the same index can have a lower correlation because of fees and whatnot.

To be honest, I don't really understand that part of the DBB fact sheet about collateral. However, the correlations I just posted seem to indicate that it's not the case that half of the performance stems from T-bills and the like. If it were, how could there be an 89% correlation to copper while the correlations to bond ETFs and futures range from -30% to -14%?

@Peter:
MA crosses are always and necessarily lagging; that means by the time the signal to get out arrives, you have already suffered a drawdown and might as well have placed a stop loss instead. In my opinion that's the value of your approach: most of the time we get out before the drawdown.

A note on the concerns about the sensitivity (I had them too):
I'm working on a strategy that trades only futures, with only futures and a few stocks as signals (what's available on the Quantiacs platform).
I had to change A LOT of the original algo, but the ideas behind the indicators are exactly the same. Of course I did a little tweaking; for instance, I got rid of the magic 58: the lookback period for the momentum is determined entirely by SPY's 126-day volatility (like many changes, that one didn't work so well on the stock version). My point is that although there appears to be some sensitivity, the main idea behind the signals looks stable. My best result so far (tested 2008-01-01 to now, slippage + commission = 5%):

Total Returns: 1083.75 %
Sharpe Ratio: 1.76
Volatility: 0.12
Max Drawdown: 9 %

You can imagine that I'm not that concerned about the sensitivity any more :)
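For what it's worth, a volatility-scaled lookback along these lines might be sketched as follows. The mapping from volatility to window length is purely illustrative (the 15% reference vol, the 20-day floor, and the function name are my assumptions, not Tentor's actual formula):

```python
import numpy as np
import pandas as pd

BASE_WINDOW = 58   # the original 'magic' lookback
VOL_WINDOW = 126   # SPY volatility estimation window, as mentioned above

def adaptive_window(spy: pd.Series) -> int:
    """Shrink the momentum lookback when SPY's realized volatility is
    high, so the signals react faster in turbulent markets."""
    daily_vol = spy.pct_change().iloc[-VOL_WINDOW:].std()
    ann_vol = daily_vol * np.sqrt(252)
    # Illustrative mapping: 15% annualized vol (or less) -> full window
    scale = min(1.0, 0.15 / max(ann_vol, 1e-9))
    return max(20, int(round(BASE_WINDOW * scale)))

print(adaptive_window(pd.Series([100.0] * 200)))  # 58 (calm market)
```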

@Tentor Testivis,

You can download the S&P GSCI Industrial Metals Index here; just choose 10 years, then export.

Deutsche Bank DBIQ Optimum Yield Industrial Metals Index Total Return symbol DBCMYTIM

The correlation is .93 - if you add a 1 to it ;)
But XME's correlation is even less existent...

Fascinating, this index is correlated to nothing, what's supposed to be in it again?
Look at those p-values...
Wait, what's the null hypothesis again? "There is no correlation" or "This test for correlation has no meaning"?
If it's the latter, is there another one? The results for Spearman correlation look very similar...

Ok, scipy says:

The p-value roughly indicates the probability of an uncorrelated system
producing datasets that have a Pearson correlation at least as extreme
as the one computed from these datasets.

So basically, the S&P GSCI Industrial Metals Index is not correlated to copper, palladium, platinum or any of the 2 metal ETFs.

@Tentor Testivis,

Isn't -0.067893 weird?
There may be a problem with the date synchronization.
Try S&P GSCI Industrial Metals Index from www.spglobal.com.
Found another metals ETN 'RJZ'.
Yahoo symbol for S&P GSCI Industrial Metals Index:^SPGSINTR

I think I'm done with correlating for now. But I also think I got how the thing with the collateral works. It's the same as when you want to short a stock: you ask your broker if you can borrow it. He says "sure, but I want some collateral in case your position goes bust and you can't buy it back." Then you give him money, get the stock and sell it. When you want to close the position, you buy the stock back, return it to your broker and get your collateral money back. The only difference with DBB is that they post T-Bills as collateral, and when they get them back they have earned some interest. That's also why, when you look at the weightings on their homepage, the weights add up to 200%. So bonds don't have anything to do with the performance of DBB except for a tiny amount of interest added.

@Tentor: Excellent work there on multiple fronts!
Also, great hints and comments regarding your revised strategy "In & out -- the back to the futures edition" :)
Amazing returns and nice takeaways, regarding:
- cleaner price signals are available via futures (e.g., for copper)
- the cut-off values can be determined empirically (e.g., your point concerning SPY volatility)

I particularly like the clean copper signal. I think that's how I ended up with the DBB in the first place, since it includes a substantial fraction in copper futures. I think that copper is in high demand, particularly in China, so the futures can be an early signal regarding industrial production there, which usually has global consequences for other countries' economic growth. So, it might be worth a try to increase the reliance on (weight of) the copper signal and see whether it creates additional returns.
In the DBB, I was always missing an iron or iron ore price signal. Not sure whether this would add anything, though.

Determining critical price drops empirically
Regarding the cut-off values and determining them empirically: If I remember correctly, I was determining the 7% drop in the DBB (and XLI) empirically (and then tweaked it a bit) via analyzing percentiles of historic 58-day returns in Excel. I think that the 10% bottom percentile gave me something like an 8.X% drop and this was what I started off with. So this approach might be a way to also empirically determine these cut-off values related to our price signals. I will definitely give it a shot and see what comes out.
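A minimal sketch of this percentile approach (my own illustration with synthetic prices and made-up names, not the code from the algo): take the historic 58-day returns and use the bottom decile as the 'substantial drop' cut-off.

```python
import numpy as np
import pandas as pd

def empirical_cutoff(prices, lookback=58, pctl=10):
    """Bottom-percentile historic lookback-day return, used as a 'substantial drop' cut-off."""
    rets = prices / prices.shift(lookback) - 1   # rolling 58-day returns
    return float(np.nanpercentile(rets, pctl))

# hypothetical usage with a synthetic random-walk price series
rng = np.random.default_rng(1)
prices = pd.Series(100 * np.cumprod(1 + rng.normal(0, 0.01, 1000)))
cutoff = empirical_cutoff(prices)   # a negative return threshold
```

The same function can then be pointed at DBB or XLI price history instead of the synthetic series, and the `pctl` argument tweaked to taste.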

Improvement opportunity: 'In' signals
This is a whole different construction site and we do not have to get into this right now, but I was always a bit unhappy that we have these great 'out' signals while the 'in' action is based on wait days. So one additional opportunity for improvement may be the quest for precise signals that tell us when it's safe to get back into the market. I guess the 'death cross' provides this signal via the short-term moving average breaking above the long-term moving average, but Tentor (see an earlier comment) is absolutely right that the death cross's 'out' signal is not particularly useful since it comes too late, and the 'in' signal is probably similarly lagged.

@Peter:
Great idea about copper! In my futures version I used the mean of the returns for the Bloomberg Commodity index and for copper. Just now I ran it with only copper and got a Sharpe of 1.81 with 1125 % returns, same DD and vola. So, yes I think copper gives a cleaner signal. But my disclaimer is again: many things that worked on the futures version had an opposite effect on the stocks version.

There's definitely something weird going on. I just plotted the relative prices for DBCMYTIM and copper, they look and move very similar. The prices have a correlation of 98.3 %. So it looks like there is no problem with the date synchronization. I have no idea why the correlation of the returns is so low, I also tried log-returns but it looks similar.
So I downloaded the data for DBB, XME, DBCMYTIM and copper from Yahoo and uploaded it to Q. Here's the correlation table for the returns, looks more realistic ;)

@Dan:
Perhaps you have an idea what the problem with the custom data and correlations to Q prices might be?

About the changes I made in my futures algo that didn't work on equities. My main focus at that time was to improve the futures algo, so I didn't spend any time on tweaking the same ideas in the equity algo but simply copy-pasted the code there. In case someone wants to try and tweak them, here's what I did:

Using SPY's volatility to replace the magic numbers:


# assuming you've added SPY to the history request
vola = hist[SPY].iloc[-126:].pct_change().std() * np.sqrt(252)

# lose the magic 15 for the waitdays. Basically the higher the vola, the longer we wait to get in.
# The idea: high market vola --> dangerous to get in, wait longer!
waitdays = int(vola * 100 / 3)

# replacing the magic 58 as the lookback period. The higher SPY's volatility, the shorter our lookback period.
# The idea: high market vola --> prices move faster, so there might be bigger moves in shorter periods.
per = max(int((1 - vola) * 50), 1)  # guard: keep the lookback positive even if vola >= 1
mom = hist.iloc[-1] / hist.iloc[-per] - 1



The multiplication by 100/3 and by 50 are just the result of running some tests with different values and seeing what worked best. However, they are less sensitive than the 58. Changing them slightly doesn't result in drastic changes of the returns.

The idea about oil was that falling oil prices can indicate a market decline - less production, less demand for oil. I thought that this can also work the other way round: sometimes oil prices rise for political or other reasons, like burning oil fields, disputes, wars or whatever. If the demand for oil in the industry remains the same, this means higher production costs and lower earnings.
Here's what it looks like in the futures algo:


# ... <-- here would be all the other conditions
or mom[OIL] < -.28
or mom[OIL] > .35



Again, the numbers are the result of what worked best.

One thing I haven't tried on equities yet is to add VIX to the alternatives. When I did the first run with equal weights, my returns tripled - but also DD and volatility went through the roof. So I ended up with a tiny fraction as weight for VIX and it gave the result a little boost both in Sharpe and returns without increasing vola or DD.

What could also be interesting to know is what I used to replace the ETFs:

DBB - since the last post just the future on copper (before that, the mean returns for copper and the Bloomberg Commodities Index)
XLI - in the quantiacs toolbox you have prices for 500 something stocks, I use the mean returns for those that are in the sector 'Industrials'
BIL - future on the 2-year Treasury Note
UUP - future on the USD index
OIL - future on WTI crude oil

@Tentor Testivis I looked at your notebook for correlations between the DJCI Industrial Metals index and other ETFs (most notably DBB). The problem is the alignment of the self-serve pipeline dates to the get_pricing dates. Remember the pipeline index dates are shifted from the actual 'asof dates' and are the date one could have acted on the data. We want to compare the index values with the associated ETF prices on corresponding 'asof dates'. So, simply set the pipeline dataframe index to the 'asof_date'.
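In pandas terms, the fix described here can be sketched roughly like this (a toy frame; the real pipeline output carries an 'asof_date' column per Quantopian's self-serve data):

```python
import pandas as pd

# hypothetical pipeline output: the index holds the dates one could have
# acted on the data, while 'asof_date' holds when the value was actually true
pipe = pd.DataFrame(
    {
        "asof_date": pd.to_datetime(["2020-01-02", "2020-01-03"]),
        "index_value": [100.0, 101.5],
    },
    index=pd.to_datetime(["2020-01-03", "2020-01-06"]),
)

# re-index to the 'asof' dates so index values line up with
# same-day ETF prices from get_pricing before correlating
aligned = pipe.set_index("asof_date")
```

With both series on 'asof' dates, the returns of `aligned['index_value']` and of the ETF prices refer to the same days, so the correlation compares like with like.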

By aligning dates, the correlation between the DJCI Industrial Metals index and DBB is quite high, and closer to what was expected, at 0.87.

See the attached notebook.

BTW, keep up the good work! Great input from everyone here. A lot of very actionable stuff.

@Dan:
Thanks, good to know. So I basically did a shifted correlation then. I think I'll explore that further because such a definite non-correlation looks unexpected to me even for shifted returns. But who knows, perhaps I also messed up something with the data set. Anyway, if I find something interesting in terms of auto- and shifted correlations, I'll post it here.

Digression: In & Out with Silver Lining
It's a bit of a digression here, but I thought I'd share it anyway; the plan was actually to experiment with silver and gold as additional signals. They don't really work, or at least I didn't get them to. However, I discovered a pattern that brings the 'once in a decade opportunity' (30% drop in SPY) back in, although in a different way.

The mechanics: After the SPY drops by 30%, then once our signals say 'in' we invest in silver (via the SLV ETF) instead of the SPY and hold silver until the SPY recovers to its pre-drop level. It's a digression since the In & Out strategy is actually a clean 'market vs bonds' play.
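A rough sketch of that state logic (all names are my own, not the backtest's):

```python
def silver_lining_state(spy_price, state):
    """Track a 30% SPY drop and stay in 'silver mode' until SPY fully recovers.

    state: dict with 'high' (running pre-drop high) and 'in_silver' flag.
    Returns the asset to buy when the regular signals say 'in'.
    """
    if not state["in_silver"]:
        state["high"] = max(state["high"], spy_price)
        if spy_price <= 0.70 * state["high"]:   # the 'once in a decade' 30% drop
            state["in_silver"] = True
    elif spy_price >= state["high"]:            # SPY back at its pre-drop level
        state["in_silver"] = False
        state["high"] = spy_price
    return "SLV" if state["in_silver"] else "SPY"
```

The function would be called once per rebalancing day with the latest SPY price, and its return value decides which ETF the 'in' trade targets.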

Possible rationale: After a significant market drop, investors may expect that the recovery will take some time and that equities won't generate substantial or reliable returns over some time. However, at this point, investors are likely to be sitting on cash (pulled out of the market earlier) that they want to invest somehow. They then may turn to precious metals as an alternative investment.
By the way, when large amounts of cash that was spread out over a wide universe of equities flows into a comparably small stock of a precious metal, such as silver, it may be like funnelling huge amounts of water from a very broad pipe into a relatively narrow pipe, with the resulting immense 'water pressure' driving up the price/returns :)

Earlier theme: Contribution of bonds
This is related to Dan's interesting question above (9 Oct 2020), noting: “Just musing, but it almost seems the signals are really just finding when to get into bonds?”

It looks like bonds are definitely part of the secret sauce that generates the total returns, see the attached backtest. The contribution is positive and at about 123% or 6.50% annually according to the tear sheet. This is much more than I would have intuitively guessed. Interesting!
So, I think the total return of the full strategy was at about 915% (i.e., factor = 1+9.15 = 10.15), the bond factor is 2.23 (i.e., 1+1.23), meaning the factor measuring the contribution of holding the SPY is 4.55 (10.15 / 2.23) or 355%. So, if I am calculating this correctly, it is an approx. 1:3 ratio in terms of bond contribution versus market contribution, i.e. definitely an unexpectedly substantial contribution from bonds here.
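The back-of-the-envelope decomposition above, as a quick arithmetic check:

```python
total_factor = 1 + 9.15            # 915% total return -> 10.15x
bond_factor = 1 + 1.23             # 123% bond contribution -> 2.23x
market_factor = total_factor / bond_factor   # ~4.55x, i.e. ~355% from holding SPY
bond_to_market = (bond_factor - 1) / (market_factor - 1)   # roughly 1:3
```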

Earlier theme: The cost of being wrong
Related to the previous post, Dan (9 Oct 2020) then made an interesting test, swapping bonds with a short on the SPY and the total return came out much smaller. I speculated that this might be due to the 'cost of being wrong', i.e. that we go out of the market but were wrong in doing so and the market moves against us in all of its might.

The attached backtest shows that this speculation is incorrect. The 'cost of being wrong' is relatively negligible (-13%) considering the length of the backtest period (12+ years). However, there is a bit of a negative trend, meaning that some costs of being wrong accumulate as we run this in & out strategy over time.
An additional insight is that the 'out' signal is not precise enough that we should specifically bet on it via a short on the market. The maximum return that we generate from this bet over the backtest period is about 60%, and any gains are lost relatively quickly.

Conclusion: 'out' = signal for return opportunities in alternative assets and possibly investors' crystal ball for market downturns, although these may not actually materialize
Open for discussion: Overall, taking this and the previous post together, I think that Dan's point is very true; the in & out strategy appears to provide great timing for when to take the money out of the market and invest it in alternative assets (bonds). It is not necessarily the case that immense downturns would have awaited us in our market investment, since then the short on the SPY would have produced substantial positive total returns. It seems like we are banking on capturing a psychology change ('jitter') early that drives up the returns in the alternative assets. The algo appears to be riding this psychological jitter wave :) Greater returns seem to be possible via the jitter wave benefitting alternative assets than via an equity market investment, which would only produce small returns during 'out' periods.
(In this regard, also see above the 'In & Out with Silver Lining' which is riding the jitter wave by investing in silver and for a more extensive period of time than the original In & Out strategy would do.)

This is not a modification of Peter Guenther “In & Out” but another algo using Peter Guenther's
wait days approach.
I am trying not to use seven hard-coded variables (DBB = symbol('DBB'); context.WAIT_DAYS = 15;
context.RET = 57; thresholds(-0.07, -0.07, -.006, .07)),
because I was taught that "each undisclosed coefficient is the coefficient of our ignorance".

The goal is to get similar results using as few variables as possible.
I modified Tentor Testivis recommendation to use volatility adaptive calculation of WAIT_DAYS and RET.
I find that the union of the four factors used in the original algo is less reliable, as the output depends on
any single factor and there may be many false exit signals.
Therefore, I used price relative ratios for each pair of factors (bull and bear) and take the signal as the intersection
of them, so that the results depend on all four factors (signal confirmation).
The factors and variables are not optimized as a strategy in itself, so you can get better results.
You can also add other pairs of bull and bear factors.

Here is the "Price relative ratios (intersection) with wait days".

Fantastic! This sounds much more reasonable and robust. This one can be used for live trading.

I will even replace the QQQ with QLD for live. :-)

Great work. Efficient and tidy code. Any reason why the algo works better during 08-09 than 18-19?

Very cool stuff, this is a brand-new "in & out"-type strategy! Thanks for sharing, Vladimir!

I have just swapped in the SPY as the market to cleanly isolate the in & out returns vs returns related to the stock selection (i.e., QQQ = Nasdaq = tech selection). The algo does a good job in terms of generating excess returns, i.e. we are better placed going in and out of the market following the algo than being always in (465% vs 194%). So this looks very good.

Now, I understand from Thomas's post that the 'vital signs' look good too (i.e., similar to a good heart rate, lung volume etc.). So that is great, thanks for sharing Thomas!

If you are happy to—and others may chip in too—could we also develop a bit of a narrative regarding the 'soul' of the algo? What is the conceptual mechanism at work here? Why do these particular price signals/ratios work? For example, why does the ratio of silver returns versus gold returns tell us that we should go out of the market? I know that some traders look at the gold versus silver prices to understand which metal is relatively undervalued and then they go either long or short in gold or silver. Yet, what is the mechanism that translates this dynamic into what will be going on in the equity market?

Just to clarify, this is important: these are not at all meant to be probing questions, instead it's about discovery. We clearly see that the signals are working in the backtest, so there is no question about that (empirical robustness). Now, we also need a solid narrative regarding why the signals are working (conceptual robustness). That is, why are they early signals of equity downturns or underperformance? Any help developing this conceptual reasoning is greatly appreciated! I do think that this rationale exists, so it's possibly just a question of spelling it out explicitly.

Here is a mini change; I just want to reduce a lot of "unnecessary" trades. :-)

@Peter Guenther,

Yet, what is the mechanism that translates this dynamic into what will be going on in the
equity market?

The algorithm does not predict what will happen on the stock market,
rather, it determines what stage of the market cycle we are at.

It is well known that gold is considered a safe-haven asset and many investors turn to it when
the economy starts to struggle.
Gold value usually increases when the market goes down.
Whether it is futures, bullions, coins etc., gold is the go-to asset in times of economic stress.

Gold's daily moves have a slightly negative correlation with equity market daily moves, especially in a bear market.
Silver's daily moves have more than 80% correlation with gold's, as they are in the same asset class.
But silver's daily moves have a positive correlation with equity market daily moves.

When gold goes up and silver goes down, their price relative ratio changes faster and may signal a
regime change earlier than the momentum of either of them...
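To illustrate with toy numbers of my own: when gold and silver move in opposite directions, the ratio's one-day change is roughly the sum of the two moves, so it reacts faster than either leg alone.

```python
gold_ret, silver_ret = 0.03, -0.02   # gold up 3%, silver down 2% on the day

# return of the gold/silver price ratio over the same day
ratio_change = (1 + gold_ret) / (1 + silver_ret) - 1

# the ratio moves by ~5.1%, more than either asset's own momentum
amplified = abs(ratio_change) > max(abs(gold_ret), abs(silver_ret))
```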

@Thomas Chang,

As I mentioned above: the factors and variables are not optimized as a strategy in itself, so you can
get better results.

Thank you for sharing.

Many a little makes a mickle.

Everyone, again great work and truly appreciate the collaboration and sharing of ideas. Peter, you deserve a big thank you and 'well done' getting it all started.

Here is another iteration which incorporates Tentor Testivis's market volatility to determine when to get back into the market. It also builds on Vladimir's and Thomas Chang's ideas of ratios to eliminate some of the constants. However, I made a simplification and, rather than ratios, I use a basic comparison. In the end it reduces to three simple rules:

    bear_signal = (
        (gold_returns > silver_returns) and
        (utility_sector_returns > industrial_sector_returns) and
        (industrial_metals_returns < dollar_returns)
    )



I think this is close to Peter's original intent, just more distilled. It also aligns closely with conventional wisdom and includes both 'sentiment' and 'economic' indicators. Trader sentiment favors gold and utilities if the market isn't looking good. Industrial metal prices tend to go down with less demand, which points to lower markets.

See attached.

All,
great tweaking to this algorithm! It's a killer! I'm still not "sold" on the DBB ETF, though... it has almost no effect on returns between 2015 and 2020, while during 2010-2015 one could double returns...

If one loved leveraged ETFs, one could have made a fortune with this :-)

Here is "Intersection of ROC comparison using OUT_DAY approach", based on:
- Peter Guenther's OUT_DAY approach.
- Vladimir's "Price relative ratios (intersection) with wait days".
- Thomas Chang changed the BASE_RET parameter and reduced the number of "unnecessary" trades.
- Dan Whitnable added another pair and replaced the price relative ratios (intersection) with an
intersection of ROC comparisons, which removed one more constant.

@Dan, Thank you for sharing.

Many a little makes a mickle.

I absolutely agree with what Dan said: great thinking, innovation, and collaboration in this thread. Very inspiring and dedicated, hard work toward the let’s get rich together objective, very nice! :)
There are interesting schools of thought emerging and, I reckon, a lot of potential for others to join in and help develop these further.

Intriguing idea to work with asset pairs that usually are highly correlated but have divergent correlation when things begin to go pear shaped in the equity market. See Vladimir’s gold/silver argument above and the industrials/utilities pairing. Very nice work, also in terms of developing the algo further!

Dan’s comparative returns across price signals
Great point that the ratios come down to being returns comparisons and also intriguing idea regarding playing the signals directly by comparing their returns, and to distil signals. The total return of the algo is smashing and the max drawdown is small, very nice!

In & Out with sampled percentiles
Very much inspired by earlier contributions and comments from Tentor, Vladimir, Dan, Thomas, and others, I am also experimenting with reducing the number of fixed parameters in the original code. In the attached backtest, the focus basically is on getting rid of the price signal cut-off values (-7% or 60 basis points) and replacing them with empirically determined cut-off values via statistical percentiles.
Mechanics: I hope that my coding skills did not let me down, but here is the intention: for the signals, the algo creates a sample of empirically observed returns over the past year (see returns_sample). The returns are determined via a price comparison of the current price and a shifted average price. The shift is about three months (5 trading days/week x 4 weeks/month x 3 months = 60 days) and the average is calculated using the prices in the +/- 5 day-window surrounding the shift (see hist_shift). The algo then checks whether a signal’s current return is an extreme return, which would provide us with an ‘out’ signal. Here we draw on statistics and science practice: imagine the observed returns come from a distribution (e.g. a bell curve); then a typical cut-off value for a highly significant extreme observation is the 1% left-most tail of that distribution (often referred to as alpha = 1%). Therefore, the algo checks whether a signal’s current return falls into the 1% bottom percentile (see pctl_b and extreme_b). Note, dollar returns are reverse coded (line 62) so that extremely large returns (= flight to safe haven) fall into the bottom and not the top percentile.
Wait days: I was also experimenting with updating the wait days empirically using a percentile logic. Extremely large and small returns may indicate ‘wild weather ahead’, so the algo works with the SPY returns’ absolute difference from the historic median returns (note that the median is equivalent to a percentile of 50; see spy_diff_from_median and the abs(percentile – 50) logic). This is only in the early stages, but the idea is to create a factor that then reduces (in the case of ‘calm weather’) or increases (‘wild weather’) the 15 base wait days (for a draft factor, see line 70), similar to Tentor’s volatility approach (also see Dan’s and Vladimir’s algos). I think that the current operationalization still stays very close to the initial 15 wait days. In fact, the returns currently may be better or not much worse if you hardwire the wait days to 15 days.
Lazy trader: Since the In & Out algo is intended to be a skeleton in which you can integrate your stock selection strategy (e.g., search for “Amazing returns = superior stock selection strategy + superior in & out strategy” in the “Quality Companies in an Uptrend” thread), and because I am a lazy trader (once a week for the stock selection), I have kept the ‘in’ schedule function at ‘end of the week/Fridays’ (line 52). Setting it to daily may slightly improve the returns, which is then the fair reward for more switched on trading.
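A condensed sketch of the mechanics described above (my own re-implementation with made-up names, not the backtest's exact code): a shifted windowed average price, a one-year sample of the resulting returns, and a bottom-percentile test.

```python
import numpy as np
import pandas as pd

SHIFT, WINDOW, SAMPLE = 60, 5, 252   # ~3-month shift, +/-5 day window, ~1-year sample

def out_signal(prices, alpha=1):
    """Per column: True if today's shifted return falls into the bottom alpha% percentile."""
    # average price in the +/-5 day window surrounding the 60-day shift
    hist_shift = prices.rolling(2 * WINDOW + 1, center=True).mean().shift(SHIFT)
    returns_sample = prices / hist_shift - 1
    # empirical cut-off: bottom alpha% percentile of the past year's sampled returns
    pctl_b = np.nanpercentile(returns_sample.iloc[-SAMPLE:], alpha, axis=0)
    # extreme (highly significant) negative return -> 'out'
    return returns_sample.iloc[-1] <= pctl_b
```

`prices` would be a DataFrame of daily prices for the signal assets; the boolean result per column says whether that signal currently flags 'out'.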

@Tentor: I have expanded the signals by prices for natural resources (ETF IGE) and this has helped the total returns a bit. When using fixed cut-offs, I think that it worked fine with the -7%. It may be worth a try to add the signal to your futures strategy.

I implemented one of the momentum strategies from Quality Cos in uptrend and it performed worse than just holding SPY. Any ideas what's happening?

6 indicators: XLI, DBB, BIL, UUP, 15 Days waiting, SPY Drop 30%.

Why do you use 21 for the wait days? It varies.

@Radu: Thanks for sharing this result! I reckon the stock selection component and in & out component can have synergies or there can be dissonance (what we observe here). Using the original In & Out code (beginning of the thread) pushes the "Quality Co in an Uptrend" strategy from 450% to 1060% (see the backtest attached to the post "Amazing returns = superior stock selection strategy + superior in & out strategy"). Your code still pushes from 450% to 600%, but indeed some returns got lost. It may be worth a try without the new natural resources signal (context.NRES).
What you also could try is to implement the "Quality Co in an Uptrend" strategy more fully: the original strategy holds stocks that continue to be in an uptrend even though the in & out indicator says 'out'. This might be an important secret sauce which a hard move 'out' of the market interferes with.

@Radu: One thing that's also true is that a tech stock selection would have generated superior returns in the past decade, see Vladimir's codes based on trading the QQQ. I ran three tests, all resulted in +1000% returns for the different in & out-type strategies that we currently have on offer (backtest timeframe 1 Jan 2008 to 16 Oct 2020):
In & Out with sampled percentiles: +1333%
Dan's smashing in & out algo: + 1751%

One idea: could you focus the universe for the "Quality Cos in an Uptrend" population on the tech sector, e.g. by using the appropriate Morningstar sectors? Would be interesting to see the returns of this "Quality Tech Cos in an Uptrend" strategy.

Great work by Peter and other contributors! As I'm new to Quantopian, take what I say with a grain of salt. With that said, I'm curious where the determination of the lower percentile comes from, @Peter Guenther. Would it be possible to plot the returns in a histogram without this filter and thereby determine appropriate percentile settings, or how did you get to this number of 1%? I'm thinking in regards to drawdowns, whether this number could be optimized to further decrease the drawdowns, hopefully dynamically, to adjust to the market, as you mentioned.

@Anton: thanks for joining in! The 1% comes from statistics and research practice regarding what is considered a highly significant observation or, in our case, an extremely negative return. Another popular threshold is 5% for a significant observation. Here is a website that discusses alpha (in terms of the statistical significance level) and distributions in the context of hypotheses testing. The algo is similar to the logic of a one-tailed significance test, asking: is the current return so negative that it falls into the 1% extreme bottom of the sampled returns distribution (= is it an extreme observation)?

Since we only want to go ‘out’ when things are really going pear shaped, the algo looks for highly significant negative return observations. When we use a larger statistical alpha, we will be going ‘out’ more often and the returns may suffer (although this would need to be tested). Considering one-tailed and two-tailed tests, I reckon one could possibly justify the following increments for the statistical significance level, alpha: 0.5 (=1%, two-tailed test), 1 (=1%, one-tailed test), 2.5 (=5%, two-tailed test), and 5 (=5%, one-tailed test). I suppose it would be difficult to justify alpha values in between these increments.

Absolutely, one could experiment with switching the alpha level dynamically (ideally using the fixed increment levels above), being stricter during certain times and being more lenient during other times. Actually, that would be quite interesting to see some results based on such a switching logic!

@Peter, removing NRES lowered performance from 604% to 503% so it looks like a good signal.

I combined @Vladimir's algo with cos in uptrend and got 1500% returns with -18% max drawdown which is incredibly impressive.

Lastly, I created this Frankenstein of your approach, Vladimir's approach, and cos in uptrend and managed to get a -15% drawdown with 972% returns.

In your opinion, how important is 4% less drawdown, 0.33 lower beta, and a bit less volatility? Is it worth the tradeoff from 1500% to 972%?

@Radu: Great, thanks for sharing these results! It sounds like Vladimir's algo perfectly combines with the Quality Co in Uptrend stock selection, producing amazing returns. I reckon for this difference in returns (1500% vs 972%) I would not worry too much about the drawdown difference. A lower beta is great, of course, since the algo is more resistant to market swings, but one seems to 'pay' for this via a (substantially) lower return. So, it depends on your risk/return preference: a one third greater vulnerability to market swings for an additional 3.7% return reward per year ... may be worth the risk.

Thanks Peter! Really appreciate your guidance.

I tried two more things, based on Dan's suggestions above. Removing bonds to see how much that affects returns yielded 615%. Then shorting SPY yielded 327%.

The increase from bonds seems to be a big part of the algo. Is it safe to assume treasuries are going to act in a similar way in the future, or are we living in a new world where bonds don't work anymore?

Hi! I'm new to Quantopian, and still trying to understand how things work. Just read through this whole thread.

@Radu: Your last results seem quite interesting. Great work. Would you mind sharing your combination of "@Vladimir's algo with cos in uptrend" (1500% return)? I'm not yet confident to put them together by myself...

@Leandro: welcome on board!

@Radu: very valid question; I reckon for backtests that is the question: are we sufficiently confident that the future will look like the past? There can only be opinions about this since no one can predict the future, so here is my opinion: the past decade was characterized by cheap debt capital, which fuelled company growth and margin-based trading (trading with borrowed money). Due to the Covid crisis, my assumption would be that at least the next half decade will look very similar. So, it could be valid to say that the (near) future is likely to look like the past. Now, the bonds basket generated excess returns in the past despite the environment that it was (e.g., low interest). So, I would assume that this basket can also generate excess returns in the (near) future.
In terms of testing to increase our confidence, you could look only at the recent past (e.g. 1-3 years) and see how the bonds basket is performing. This might give a more relevant indication regarding the likely near-term performance. Also, in the "Quality Companies in an Uptrend" thread Frank Sch indicates (his post on 17 Oct) that adding 10% gold to the basket improves performance, so there is definitely optimization potential regarding what alternative assets we hold when being 'out' of the equity market.

Hi all,

I have trouble understanding the following line of code:

vola = data.history(MKT, 'price', VOLA + 1, '1d').pct_change().std() * np.sqrt(252)


where np.sqrt(256) equals 16.

Why not multiply directly by the value 16.

What does this value of 16 mean?

thx

252 != 256 :-/

@Thomas Chang:
sqrt(252) = 15.8745079

So, what does it mean to use the value 15.8745079?
What does this value mean?

I think the 15.8745079 on its own has little meaning, but the sqrt(252) does have meaning.

Maybe you could google something like "how to calculate the annual volatility"? The number of trading days in a year is normally 252. You could use 250 or 260, but this makes little difference.
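For context, the sqrt(252) is the standard square-root-of-time scaling from daily to annual volatility: variance grows linearly with the number of independent trading days, so volatility grows with its square root. A minimal sketch:

```python
import numpy as np

# simulate one year of daily returns with ~1% daily volatility
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, 252)

daily_vol = daily_returns.std()
# 252 trading days -> annual vol = daily vol * sqrt(252) (a factor of ~15.87)
annual_vol = daily_vol * np.sqrt(252)
```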

Attached is a backtest for the latest version of the In & Out algo. You will notice that it's a real 'fusion kitchen', combining the percentiles, return comparisons, and relative ratios in one algo.

In & Out with sampled percentiles, return comparisons, and relative ratios
The new algo components are the return comparisons between the pairs gold vs silver and utilities vs industrials (lines 70-73), see Vladimir’s and Dan’s algos. These provide additional ‘out’ signals if the return difference of the pair is highly significant (1% percentile; the return differentials are reverse coded so that large excess returns end up in the bottom percentile).
The flexibly defined wait days are now also determined by the pairs, specifically by the pairs’ return ratios which are multiplied with the initial 15 wait days (line 80). The maximum of 1, the gold/silver return ratio, and the utilities/industrials return ratio determine the multiplier. So that there is some ‘memory’ in the multiplier (i.e., to allow for some persistence when ratios have been elevated), 50% of the prior period’s multiplier is fed back in (see 0.50*context.adjwaitdays). This allows for a somewhat slower decay of elevated levels, since we may want to be cautious during such times and stay out of the market for that little bit longer.

For a tech stock selection strategy (QQQ), the algo yields 1383%.
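One plausible reading of the wait-days update described above, as a sketch (function and variable names are my assumptions, not the algo's exact code):

```python
BASE_WAIT = 15   # the initial wait days

def update_wait_days(prev_adj, gold_silver_ratio, util_indu_ratio):
    """Adaptive wait days: pair ratios scale the base, with 50% 'memory' of the prior value."""
    # multiplier: the maximum of 1 and the two pair return ratios
    mult = max(1.0, gold_silver_ratio, util_indu_ratio)
    # feed 50% of the prior period's value back in, allowing a slower decay of elevated levels
    return int(BASE_WAIT * mult + 0.50 * prev_adj)
```

Note that this kind of feedback is unbounded, so in practice one may want to cap the result to keep the wait days in a sensible range.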

Hi @Vladimir, your latest algorithm holds TLT and IEF from Feb to July in 2020.

One could easily push on this strategy. Its protective measure is built in its switching to safety procedure. By allowing some leverage (gross leverage: 1.48x), the strategy could end with 600x its original investment. Here is what I got starting with 1 million:

Settings: From 2008-01-01 to 2020-10-16 with $1,000,000 initial capital Total Returns 61617.68% Benchmark Returns 205.64% Alpha 0.68 Beta 0.32 Sharpe 1.74 Sortino 2.80 Volatility 0.32 Max Drawdown -26.89% The added performance is more than enough to pay for the leveraging fees. The average trade duration is about 7.2 months. The higher return would come with the acceptance of a higher drawdown. However, higher drawdowns are relatively short-lived. Average drawdown is more in the vicinity of -10%. @ Guy, Welcome to the club. Which strategy (there are at least 5 versions) and which tradeable symbols did you use? I can see that your Sharpe ratio is 1.74, and it doesn't depend on leverage. What else did you do? @Guy: Fantastic, thanks for sharing these results. Also, great questions there from Vladimir! I agree that leverage can be an efficient way to boost returns. I suppose it depends on people's return/risk preference whether this is a suitable way forward for them. It also depends on how confident we are that the future will look like the past. Any breaks mean that leveraging could painfully backfire. Of course, then again, 1.5 leverage does not sound like going completely overboard. Updated: In & Out with sampled percentiles, return comparisons, and relative ratios Today, I was trying a new pairing: safe-haven currency vs risk-on currency. The Swiss Franc (ETF FXF) / Australian Dollar (ETF FXA) pair produced some useful results, so I have added the pair to the returns comparison (line 75) as a separate possible 'out' signal and as a ratio determining the wait days (line 83). The total returns have officially passed the 1,000% mark, see the attached backtest :) Using a tech stock selection strategy (QQQ) yields 1,541%. Note: with the strategy we are 'out' of the market for quite a bit. It seems to be OK when our stock selection is solely to hold the market via the SPY. 
If you have a more sophisticated stock selection strategy, it now really comes down to testing how the 'in' and 'out' moves interfere with, or complement, your selection strategy. In comparison, Dan's algo and Vladimir's algo(s) are 'in' the market more extensively, and this may in fact increase the returns of your stock selection strategy. For instance, see above: Radu reported returns of 1,500% from combining Vladimir's ROC, as the in & out component, with the "Quality Companies in an Uptrend" stock selection strategy.

@Peter: Tried your latest version with quality cos in uptrend, starting from 2012, and it seems to underperform. @Vladimir's algo is running right now starting from 2012 and it's outperforming, although @Vladimir's version would be out of the market until July 2020. Just some observations. What is interesting is that @Vladimir's version, which is supposed to be more aggressive, has less drawdown starting from 2012.

@Peter Guenther, In & Out with sampled percentiles, return comparisons, and relative ratios: very interesting way of calculating adaptive momentum, and very good performance metrics. In my opinion, the range of context.adjwaitdays is too wide (15-80000); is this by design? Look at Custom Data in the attached backtest.

@Vladimir, I used your version (starting with the best). I added stocks, part of QQQ, that I have traded prior to the simulation's start date, making them in some way acceptable choices. As you know, QQQ averages stuff out. It is technology weighted (close to 70%) and includes highly liquid stocks (needed to get in and out on a dime), especially when the equity grows larger with time. I added 4 stocks and DIA to corroborate QQQ; you could add more. The expected impact was to add volatility to the overall portfolio. I also added code to evaluate leveraging fees, which were set at 4% (IB charges 0.78%). The strategy did not need a stop loss since one is technically built in.
It switches to bonds at the first hint of market turmoil, which is the reason why I used unbalanced leveraging. Here is the outcome of my last version, also with 1 million initial capital:
Total Returns 73,212.77% | Benchmark Returns 205.64% | Alpha 0.70 | Beta 0.35 | Sharpe 1.76 | Sortino 2.80 | Volatility 0.32 | Max Drawdown -29.61%
This new version has a gross leverage of 1.43x with an average trade duration of 7.4 months. Notice the relatively low beta for such a high CAGR scenario. Total profits came in at $713,603,533; therefore, the strategy ends with over 700x its original investment.

The estimated cost of leveraging was $17,409,116, which is about 2.4% of the generated profits. As was said, there were more than sufficient profits to pay for the added leveraging costs. One could do even better with little effort.

I finished the last post saying: "One could do even better with little effort." To corroborate this, here is the next test performed:
Total Returns 82,966.05% | Benchmark Returns 205.64% | Alpha 0.71 | Beta 0.38 | Sharpe 1.75 | Sortino 2.75 | Volatility 0.33 | Max Drawdown -31.89%
This version also had a gross leverage of 1.43x, with a slight increase in its average trade duration to 7.6 months. Total profit came in at $812,383,496, a $98,779,963 increase compared to the previous version. The estimated cost of leveraging was $17,539,744, an increase in fees of $130,628 to gain $98,779,963. More than an acceptable compromise.

@Guy: How did you pick those four stocks? The reason I added quality cos in uptrend was to avoid an assumption that QQQ will have similar returns for the next decade. Meaning, is there a belief that those four will continue to go up, or something else?

Vanilla QQQ and quality cos in uptrend have super similar returns, and perhaps holding QQQ has partial long-term tax benefits.

On a separate note, does anyone have any suggestions on how this strategy could be used in a long-term portfolio? I was thinking: since financial advisors take 1% or similar, would it be possible to use that 1% to buy options on TLT/IEF to avoid large drawdowns? Any suggestions in that direction?

Can somebody explain all these single -2/+2 share trades? Is it trying to rebalance? Seems kind of useless and annoying... Also, is anybody thinking of porting this to QC and running it live?

Hi @Guy,
The extraordinary performance makes me wonder whether the stock selection is falling into the forward-looking bias pit.

Something seems mind-bogglingly weird: when you change the rebalancing of the bond funds when 'out' of the market to weekly instead of daily, the returns YTD drop by 48%... something seems off. How can daily rebalancing of a few shares between TLT and IEF contribute almost 99% of the returns?

@All: great tests and info here, thanks for sharing!

@Elsid: I am speculating, but I reckon that you have set the 'out' scheduling function to weekly instead of daily, and this does not only affect the rebalancing between the bonds: it also determines when we go 'out' of the market in the first place. With a weekly scheduling function for 'out', in some cases we might wait for up to a week after a signal has said 'out' before we actually sell our equity holdings and go into bonds. Since, for the 'out' part, time is of the essence because things often deteriorate quickly, I would not recommend changing this to a slower reaction time than daily. So, with this strategy, even the 'lazy trader' (like myself) would need to check the signals daily and, when the signals say 'out', at least hit the 'sell all' button and buy the two bond ETFs. In contrast, executing a sophisticated stock selection strategy (= more time consuming) can be done on a weekly basis.
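The cost of a weekly check can be made concrete with a toy calculation (entirely hypothetical data and helper, just to illustrate the reaction-lag point): the exposure after an 'out' signal is the gap until the next scheduled check.

```python
# Toy illustration: how many days a weekly check stays in equities after
# an 'out' signal fires, compared to a daily check.

def days_exposed_after_signal(signal_day, check_days):
    """Days between the signal and the next scheduled check."""
    return min(d for d in check_days if d >= signal_day) - signal_day

daily = list(range(20))           # a check every trading day 0..19
weekly = list(range(0, 20, 5))    # a check every 5th trading day

signal_day = 6                    # the 'out' signal fires on day 6
print(days_exposed_after_signal(signal_day, daily))   # 0 -- sell the same day
print(days_exposed_after_signal(signal_day, weekly))  # 4 -- exposed until day 10
```

In a fast sell-off, those extra days of exposure can easily account for a large chunk of the return difference Elsid observed.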

@Vladimir: great observation. One could indeed work with an upper bound. However, what I came to realize is that these continuously updated wait days anyway have no lasting meaning since they are updated daily and then even very large realizations come back 'down to earth' extremely quickly. So, in a way, the adjusted wait days could be seen as capturing an initial shock ('safe havens suddenly begin to outperform similar other assets') in form of a spike, but this shock quickly becomes a new accepted reality and then also the adjusted wait days begin to relax again :) So, the variable does not really capture the days that we are truly planning to stay out of the market (unless we code it so that we commit to the wait days that were valid when we made the 'out' decision; however, I have tested this 'commitment' idea with other algo versions and found that it rather tends to underperform).

@Aliaj
Yes, those -2/+2 trades are trying to keep the leverage always = 1. But one can filter them out. Look at my version:

"Here a mini change just want to reduce a lot of "unnecessary" trades. ..."

But in this case the total return could be smaller.
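One common way to filter such trades is a drift tolerance: only send an order when the position's weight has moved away from its target by more than some threshold. A minimal sketch (the helper name and the 0.5% tolerance are my illustrative choices, not from the thread's code):

```python
def needs_rebalance(current_weight, target_weight, tol=0.005):
    """Only trade when the portfolio weight drifts more than `tol` (here 0.5%)."""
    return abs(current_weight - target_weight) > tol

# A 0.2% drift (a couple of shares) is ignored; a 5% drift triggers a trade.
print(needs_rebalance(0.998, 1.00))  # False
print(needs_rebalance(0.95, 1.00))   # True
```

The trade-off is exactly the one noted above: skipping the tiny corrections lets the actual leverage drift slightly from 1, which can shave a little off the total return.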

@Peter Guenther,

these continuously updated wait days anyway have no lasting meaning since they are updated daily
and then even very large realizations come back 'down to earth' extremely quickly.

Not extremely quickly, because you are using smoothing:

context.adjwaitdays = int(max(0.50*context.adjwaitdays, ...


You may avoid this if, instead of

returns_sample[context.UTIL].iloc[-1]...

you use

ratio_sample[context.UTIL] = returns_sample[context.UTIL].iloc[-1] + 1.


@Vladimir: Thanks for sharing the improved code, much appreciated!

In hindsight, I should probably have given an example regarding the smoothing and how it 'moves back down to earth'. If the wait day indicator jumps to 2000 one day, then if no new spike occurs, the following values are fed back in:
day1: 2000 -- day2: 1000 -- day3: 500 -- day4: 250 -- day5: 125 -- day6: 63 -- day7: 31 -- day8: 16 -- day9: 8
It's basically 2000 x 0.5^(day# - 1) and the relatively quick move 'down to earth' within 7-8 days is driven by the exponential-type of decay. So extreme values do not tend to last particularly long. Even a 20000 spike is reduced to 20 wait days at day 11.
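The decay example can be reproduced in a few lines. Note this is a sketch, not the algo's code, and it uses Python's truncating int() as in the smoothing line, so the exact figures differ by ±1 from the rounded ones above:

```python
def decay(start, days, factor=0.5):
    """Halve the wait-day adjustment each day, truncating like the algo's int()."""
    values = [start]
    for _ in range(days - 1):
        values.append(int(factor * values[-1]))
    return values

print(decay(2000, 9))        # [2000, 1000, 500, 250, 125, 62, 31, 15, 7]
print(decay(20000, 11)[-1])  # 19 -- even a 20000 spike is near 20 by day 11
```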

Just a small update (line 83-84) regarding adding an upper bound to the wait days (60 max, i.e. about 3 months) in the In & Out. I know, it's just cosmetics but it may make for a better view on the wait days and what we are actually dealing with. One insight may be that the extreme spikes in the adjustment variable have no bearing on our 'in' and 'out' trades (due to the exponential decay), so the test may be useful to improve that confidence.

@Peter Guenther,

I have manually calculated context.adjvar for the GOLD and SILVER ratio of returns, to show the inconsistency of results given slight movements in the SILVER price:

context.INI_WAIT_DAYS * max(1, returns_sample[context.GOLD].iloc[-1] / returns_sample[context.SLVA].iloc[-1])

returns_sample[context.GOLD].iloc[-1] = 10%

returns_sample[context.SLVA].iloc[-1] =  0.1% --> 15 * max(1,  100) = 1500
returns_sample[context.SLVA].iloc[-1] =  0.0% --> 15 * max(1,  inf) = inf
returns_sample[context.SLVA].iloc[-1] = -0.1% --> 15 * max(1, -100) = 15
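Vladimir's instability can be checked directly. A minimal sketch (the function names are mine, for illustration) contrasting the raw return ratio with the price-relative (return + 1) version he proposes:

```python
INI_WAIT_DAYS = 15

def wait_days_raw(ret_gold, ret_silver):
    # As in the post: ratio of raw returns. This blows up as ret_silver
    # approaches 0 (and ret_silver == 0.0 raises ZeroDivisionError in Python).
    return int(INI_WAIT_DAYS * max(1, ret_gold / ret_silver))

def wait_days_relative(ret_gold, ret_silver):
    # Vladimir's fix: ratio of price relatives (return + 1) stays well-behaved.
    return int(INI_WAIT_DAYS * max(1, (ret_gold + 1) / (ret_silver + 1)))

# A 0.2% wiggle in silver's return flips the raw result from 1500 to 15:
print(wait_days_raw(0.10, 0.001))        # 1500
print(wait_days_raw(0.10, -0.001))       # 15
# The price-relative version barely moves:
print(wait_days_relative(0.10, 0.001))   # 16
print(wait_days_relative(0.10, -0.001))  # 16
```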

@thomas @Peter Thanks

Also, I have a programmer whom I have worked with in the past who can move this over to QuantConnect if anyone is interested; the cost would be around $250-$300. We can all share it, whoever is interested, and we can run it automated or manually have it spit out data.

Let me know if anyone is interested.

Also, guys, I noticed Peter's last version performs almost double Vlad's and Thomas's, and Vlad's code seems written completely differently. For YTD 2020, Peter's is at 50% vs Vlad's 25%.

@Radu, the stock selection can be as simple as picking stocks within QQQ that continuously outperform QQQ. Meaning those above market average since QQQ is a market average proxy. This is visible every day of the week over the simulation's duration, and it will also be visible daily going forward.

In the results shown, leveraging fees of 4% were applied and they only had a minimal impact on overall results. Nonetheless, it is a compounding game. Those fees will not look that big in the beginning, but as the portfolio grows they will still be a burden on final results. They ended up being 17 times larger than the initial capital.

I design long-term trading strategies with a retirement fund-like objective where you could periodically withdraw funds after having reached a future date milestone (like retirement date or some other reason).

If your portfolio is growing, on average, at 10% and you extract 5% per year, the portfolio is still improving at a 5% rate: $$F(t) = F_0 \cdot (1 + 0.10 - 0.05)^t$$. It also means that your withdrawal is indexed to your portfolio performance: $$W(t + \tau) = F(t + \tau) \cdot 0.05$$.

The portfolio equation is: $$F(t) = F_0 \cdot (1 + r_m + \alpha_t - exp_t)^t$$ where $$r_m$$ is QQQ's long-term return. Your alpha is above QQQ's returns and it can have multiple sources. You could say: $$F(t) = F_0 \cdot (1 + r_m + \alpha_1 + \cdots + \alpha_n + op - fees_t - exp_t)^t$$ where $$op$$ is the return from your options program. As you can see, this could produce much more than what I have presented, on the condition that $$op$$ be greater than the added fees ($$op > |fees|$$) and that the sum of alphas be greater than zero.
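Plugging round numbers into the simpler withdrawal equation above makes the point concrete (the 10% growth, 5% withdrawal rate, and $1M start are my illustrative assumptions, not Guy's figures):

```python
# Illustrating F(t) = F0 * (1 + growth - wd_rate)^t and W(t) = F(t) * wd_rate.
F0, growth, wd_rate = 1_000_000, 0.10, 0.05

def fund_value(t):
    return F0 * (1 + growth - wd_rate) ** t

def withdrawal(t):
    return fund_value(t) * wd_rate

print(round(fund_value(20)))  # 2653298 -- the fund still compounds at 5%/yr
print(round(withdrawal(20)))  # 132665  -- withdrawals scale with the fund
```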

What I find interesting in this strategy is its BULL definition (making it a switcher to safety). It is not always right or at the right time, but it is where it counts. So, my next step is to look closer at that definition since I had no particular need to do so during my acid tests phase.

For those interested, the strategy does scale down. Here are the results using the same version as the last test with $100,000 as initial capital:
Total Returns 159,495.63% | Benchmark Returns 205.64% | Alpha 0.83 | Beta 0.33 | Sharpe 1.78 | Sortino 2.79 | Volatility 0.36 | Max Drawdown -31.71%
Leveraging fees were $4,791,144.

@Tentor:

To run the algorithm I would upload the script to PythonAnywhere and schedule it to be executed once a day, their free plan is sufficient for this.

Quite interesting. So PythonAnywhere is a cloud service? You can run your algo on PythonAnywhere and connect to a broker such as IB or Alpaca?

Have you ever tried the IBridgePy.com?

@Tentor
Hi! Sorry for the basic question. You mentioned trading manually by simply adding the print statement. Wouldn't that mean that we'll always be one day behind the algo in terms of execution? How can I extract what the algo will do today as soon as it gets the data from yesterday?

@Guy Great explanation. I didn't understand all of it, but I really appreciate you writing it down. Would it be safe to say that I can check the Relative Strength of all QQQ stocks vs QQQ and pick the top four, for instance?

On a separate note, should I assume that you don't care about taxes in your algos? You leverage up and that's what you keep track of.

@Elsid Vlad's algo stays out most of 2020, but it outperforms over a longer period of time. Attached backtest of Peter's algo from 2012 illustrates that. Really exciting reading the back and forth between @Peter and @Vladimir.

@Tentor

It seems way more annoying to set it up yourself and run your own server, basically, if you want it to run 24/7 and automated, than paying QC $10-$20/mo with the data included; I guess the argument would be avoiding IB's commission fees.

Also, QC has added some commission-free brokers as well. It seems like a pretty reasonable cost to host and run the algo with data, as opposed to calling Yahoo's API or visiting Yahoo every day for data, etc.

But I'm open to implementing it live however, would be cool to learn the different python libraries had a buddy who ran something live on his own.

Looked at IBridgePy; I can't sign up for an account but will try it out with IB, TD, or Robinhood if I can. It says you can host on Amazon EC2 too, but there are enough cheap Windows VPSs out there also.

@Elsid Aliaj, all

if you are interested in how and where to live trade Quantopian algorithms, please open
a new thread and Tentor Testivis, Thomas Chang and others will be happy to answer your questions.

Here's the thread for live trading, sorry for the off topic posts

Hello everybody, I created a Slack for discussing this strategy:

I don't post much but have really enjoyed following so many great algos over the years and reading the smart discussions among the contributors. When the news broke today about Q giving up on the community my initial thought was where will the community go next. I know nothing is owed to me because I haven't contributed much but if folks like:

@Tentor Testivis, @Dan Whitnable (Quantopian), @Vladimir, and @Thomas Chang

are willing to share their thoughts on where they are headed next, and whether their online handles are the same or different, it would be extremely awesome. I absolutely love following y'all's discussions and will follow y'all to the next platform. Contributors like yourselves are the Q community leaders, and so many of us who don't post much will follow y'all to the next place.

@chris that link for the slack channel isn't working for me (asks for the url name of the channel). Can you share again?


Please, everyone, migrate to QuantConnect. This is the best option; many have been live trading on QC for years! QC has full L1 equity data and their backtesting is probably the most realistic. Plus, their community is highly vibrant too, with lots of strategies.

This IN&OUT strategy will have great fun there.

Can anyone share the source code of this strategy? Since the shutdown announcement, the code is no longer on the pages. Could you please share? Thanks.

Can anyone please explain this piece of the code that @Vladimir kindly shared with us?

if exit: BULL = 0; OUT_DAY = COUNT;
elif (COUNT >= OUT_DAY + WAIT_DAYS): BULL = 1
COUNT += 1


I am a bad coder, trying to re-run his strategy in Excel, but it's clearly not working as intended without this piece.
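For what it's worth, here is a self-contained rendering of that re-entry timer with hypothetical inputs (the step() helper and the simulated signal are mine, for illustration): COUNT is a running day counter, OUT_DAY records the day we last went 'out', and BULL flips back to 1 only once WAIT_DAYS have elapsed without a new exit signal.

```python
def step(bull, count, out_day, exit_signal, wait_days):
    if exit_signal:
        bull, out_day = 0, count      # go 'out' and restart the wait timer
    elif count >= out_day + wait_days:
        bull = 1                      # enough quiet days have passed: back 'in'
    return bull, count + 1, out_day

# Simulate: exit fires on day 3, wait 5 days, re-enter on day 8.
bull, count, out_day = 1, 0, 0
history = []
for day in range(12):
    bull, count, out_day = step(bull, count, out_day,
                                exit_signal=(day == 3), wait_days=5)
    history.append(bull)
print(history)  # [1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1]
```

This is the piece that keeps the strategy from whipsawing straight back into equities the day after a sell signal; to replicate it in Excel you'd carry COUNT, OUT_DAY, and BULL as three running columns.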

This is the entire code if anyone is interested:


# Price relative ratios (intersection) with wait days
import numpy as np
# -----------------------------------------------------------------------------------------------
STOCKS = symbols('QQQ'); BONDS = symbols('TLT','IEF'); LEV = 1.00; wt = {};
A = symbol('SLV'); B = symbol('GLD'); C = symbol('XLI'); D = symbol('XLU');
MKT = symbol('QQQ'); VOLA = 126; LB = 1.00; BULL = 1; COUNT = 0; OUT_DAY = 0; RET_INITIAL = 80;
# -----------------------------------------------------------------------------------------------
def initialize(context):
    schedule_function(daily_check, date_rules.every_day(), time_rules.market_open(minutes = 140))
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())

def daily_check(context, data):
    global BULL, COUNT, OUT_DAY
    vola = data.history(MKT, 'price', VOLA + 1, '1d').pct_change().std() * np.sqrt(252)
    WAIT_DAYS = int(vola * RET_INITIAL)
    RET = int((1.0 - vola) * RET_INITIAL)
    P = data.history([A, B, C, D], 'price', RET + 2, '1d').iloc[:-1].dropna()
    ratio_ab = (P[A].iloc[-1] / P[A].iloc[0]) / (P[B].iloc[-1] / P[B].iloc[0])
    ratio_cd = (P[C].iloc[-1] / P[C].iloc[0]) / (P[D].iloc[-1] / P[D].iloc[0])
    exit = ratio_ab < LB and ratio_cd < LB
    if exit: BULL = 0; OUT_DAY = COUNT
    elif COUNT >= OUT_DAY + WAIT_DAYS: BULL = 1
    COUNT += 1
    wt_stk = LEV if BULL else 0
    wt_bnd = 0 if BULL else LEV
    for sec in STOCKS: wt[sec] = wt_stk / len(STOCKS)
    for sec in BONDS: wt[sec] = wt_bnd / len(BONDS)

    for sec, weight in wt.items():
        order_target_percent(sec, weight)
    record(wt_bnd = wt_bnd, wt_stk = wt_stk)

def record_vars(context, data):
    record(leverage = context.account.leverage)




@all: what a shocker! I was just about to write a note about this thread reaching its 100th reply (!) and congratulating @Tentor on his new thread (Live/Paper Trade the In_Out Strategy), which I am sure would have gone through the roof in terms of interest. And now the place is closing :(

I would have loved for this thread to continue on Quantopian and really appreciated the opportunity for this discussion and the input from Dan!! Hope that the Quantopian team will be alright, but I reckon you guys have amazing, sought-after skills, so that there are many options.

So that the show might be able to go on, I would be happy to move this discussion to another place where backtesting is possible and further innovation and improvements can be assessed. QuantConnect is mentioned fairly often, so I just wanted to get a sense of whether people are thinking about migrating there and would be interested in continuing the discussion (e.g., @Tentor, @Dan, @Vladimir, @Thomas, @Radu, and others). I am sure that there are still plenty of opportunities for improvement and innovation in the in & out-type of strategies, and then plenty of opportunities in terms of combining them with great stock selection strategies.

Hi Peter,

The game should go on. :-)

I was with QuantConnect about two years ago, when Quantopian closed live trading. But I found QC's platform quite different from Quantopian's, so I gave up and kept backtesting here on Quantopian while live trading via IBridgePy.

If QC migrates Quantopian to their platform, I will think about whether to come back to QC.

I see this:

https://factset.quantopian.com/

Maybe we will use this later and meet again here in Quantopian? :-)

check out www.cloudquant.com or reach out to me to talk about our platform: [email protected]

Is CloudQuant Quantopian-like, or does it use zipline?

Hey guys,
when I saw that Vladimir was already working on an implementation for In & Out on QC I created a thread there to continue this one:
The In & Out Strategy - Continued from Quantopian

Hope to see you all there!

Amazing post from QuantConnect, what a perfect replication. And the results from QC seem more realistic since they are based on minute resolution and L1 data, plus all the slippage is modelled. Very real.