A Cloud & AI Strategy

Imagine it's 2005 and you somehow knew that cloud, data, and AI computing would be a major theme for the next 10 to 20 years, or even longer.
You handpicked a number of big companies and you ran your backtest.

Or maybe you want to run this strategy going forward, and wonder how those companies performed in the past.

Selected Stocks

  • MSFT, for Azure AI Services
  • ADBE, for its Analytics Suite
  • AMZN, for AWS AI Services
  • IBM, for IBM Watson
  • GOOG, for Google Cloud AI Services
  • CRM, for buying AI startups
  • SAP, for participating in AI research

Results

What you get: a return above 1300 percent, low variance, and a tax-efficient portfolio.

Details

  • Hold only stocks that are in the pre-selected list and in the Q1500 universe
  • Rebalance if a stock enters or exits the list, or every 270 days after the last rebalance
  • Rebalance by splitting the cash, or the value of the portfolio, equally across the selected stocks (a minimal sketch follows below)
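
A minimal sketch of that rebalance rule (the complete algorithm is attached at the bottom of this post; the 270-day counter and the entry/exit check are simplified here):

def rebalance(context, data):
    # stocks that are both in the hand-picked list and in the current screen
    selected = [s for s in context.security_list if s in context.cloud_tickers]

    # close anything that dropped out of the selection
    for stock in list(context.portfolio.positions.keys()):
        if stock not in selected:
            order_target_percent(stock, 0)

    # split the portfolio value equally across the remaining names
    if selected:
        weight = 1.0 / len(selected)
        for stock in selected:
            order_target_percent(stock, weight)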

Caveats

  • With the benefit of hindsight everyone can pick stocks
  • There is no clear rule when to ditch a stock, e.g. if it falls too much or it does not make a lot of progress
  • Weighting is equal, which means when you rebalance, you give more to stocks that underperformed

Disclaimer

This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by the author or anyone else. Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. This disclaimer was adapted from Quantopian's own disclaimer.

import quantopian.algorithm as algo

from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume, SimpleMovingAverage, CustomFactor
from quantopian.pipeline.filters import QTradableStocksUS, StaticAssets, Q1500US

import numpy as np
import pandas as pd

def initialize(context):
    context.i = 0        # trading days since the last rebalance
    context.changes = 0  # number of rebalances so far
    # parameters:
    # which cloud stocks we want to use as our universe
    # (broader list, including infrastructure names, kept here for reference):
    # context.cloud_tickers = symbols('MSFT', 'ADBE', 'AMZN', 'VMW', 'IBM', 'GOOG', 'CRM', 'SAP', 'RHT', 'ORCL', 'RAX')
    # data & AI only, no infra
    context.cloud_tickers = symbols('MSFT', 'ADBE', 'AMZN', 'IBM', 'GOOG', 'CRM', 'SAP')
    
    # how many days back we use average dollar volume as a filter
    context.average_dollar_volume_formation_period = 270
    
    algo.schedule_function(
        rebalance,
        algo.date_rules.every_day(),
        algo.time_rules.market_open(hours=1),
    )
 
    algo.schedule_function(
        record_vars,
        algo.date_rules.every_day(),
        algo.time_rules.market_close(),
    )
 
    algo.attach_pipeline(make_pipeline(context), 'pipeline')
 
 
def make_pipeline(context):
    q1500us = Q1500US()
    # average dollar volume over the formation period, used as a sanity check in rebalance
    dollar_volume = AverageDollarVolume(mask=q1500us,
                                        window_length=context.average_dollar_volume_formation_period)

    cloud_tickers = StaticAssets(context.cloud_tickers)

    # keep only the hand-picked cloud/AI names that are also in the Q1500 universe
    screen = (cloud_tickers & q1500us)
    pipe = Pipeline(columns={'dv': dollar_volume})
    pipe.set_screen(screen)
    return pipe
 
 
def before_trading_start(context, data):
    """
    Called every day before market open.
    """
    context.output = algo.pipeline_output('pipeline')['dv']
    context.security_list = context.output.index
 
def rebalance(context, data):
    context.i += 1
    # select stocks with a positive average dollar volume from the pipeline output
    selected_stocks = []
    for stock in context.output.index:
        v = context.output.at[stock]
        if v > 0:
            selected_stocks.append(stock)

    # close positions in stocks that are no longer selected
    for stock in list(context.portfolio.positions.keys()):
        if stock not in selected_stocks:
            print("closing position for " + str(stock))
            order_target_percent(stock, 0)
    
    old_stocks = list(context.portfolio.positions.keys())
    cond_1 = set(old_stocks) != set(selected_stocks)  # the selection changed
    cond_2 = (context.i > 270)                        # or 270 days have passed since the last rebalance
    if cond_1:
        print("old stocks != new stocks")
    if cond_2:
        print("rebalance after 270 days")
    cond = cond_1 or cond_2 
    if cond:
        context.i = 0
        context.changes += 1
        for stock in selected_stocks:
            v = 1.0/len(selected_stocks)
            print("open position for " + str(stock) + " " + str(v))
            order_target_percent(stock, v)
             
def record_vars(context, data):
    """
    Plot variables at the end of each day.
    """
    record(leverage=context.account.leverage)
    record(changes=context.changes)
 
 
def handle_data(context, data):
    """
    Called every minute.
    """
    pass
13 responses

I got interested in Stefan's trading strategy after seeing the "Cumulative Return on Logarithmic Scale" chart in a tearsheet. It showed alpha generation, represented by the steady widening of the spread between the algo and its benchmark.

I understand that this is a niche trading strategy specifically oriented toward cloud and AI computing. Nonetheless, we should look at the stock market with a long-term perspective, and forecasting that we will need more from our machines is, if anything, an understatement. With the advent of 5G, this trend will accelerate and enable all new kinds of devices (IoT) requiring even more storage and services. Therefore, such a niche market should continue to prosper over the years.

Stefan's analysis and disclaimer are accepted.

Nonetheless, I opted to reengineer the strategy as I did in the Robot Advisor thread. I started by appraising a Buy & Hold scenario for the same stocks over a slightly different period: from 2003-09-12 to 2018-09-14, in order to compare the program's modifications over the same duration. Also upped the initial capital to 10M.

This scenario produced the following results:

However, this should be compared to letting the strategy do its trading thing, to answer the question: was the Buy & Hold more or less productive than the original trading strategy with the added capital?

As can be observed, the Buy & Hold outperformed the 13-month rebalancing. It should be noted that no leverage was applied and that all Q default frictional costs were included.

This trading strategy provides a trading environment which chooses periodic rebalancing as its main course of action. And as Stefan explained, the equal weights forced the selling of part of the rising stocks to increase positions in the laggards, which also explains why the Buy & Hold outperformed the strategy.
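
To see the effect in miniature, here is a toy two-stock example (made-up numbers, not taken from any backtest above): when one stock keeps outperforming, rebalancing back to equal weights keeps trimming it, so Buy & Hold pulls ahead as long as the trend persists.

# Toy illustration with made-up numbers: one stock doubles each period, one stays flat.
periods = 3
winner_growth, flat_growth = 2.0, 1.0

# Buy & Hold: half the capital in each stock, never touched
buy_and_hold = 0.5 * winner_growth ** periods + 0.5 * flat_growth ** periods   # 4.5

# Rebalanced back to 50/50 at the start of every period
rebalanced = 1.0
for _ in range(periods):
    rebalanced *= 0.5 * winner_growth + 0.5 * flat_growth                      # x1.5 per period

print(buy_and_hold, rebalanced)   # 4.5 vs 3.375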

Now the reengineering part.

Even if the rising equity curve and the outperformance of the benchmark were interesting, I still did not find it enough, especially since the strategy did not outperform a simple Buy & Hold in a niche market that was made to prosper and will continue to do so.

I'll skip some of the steps, since reengineering the code requires quite a few simulations just to verify that it does not crash, does what it is supposed to do, and has no bugs. I reengineered twice since I was not satisfied with the limitations of the first method, even if it provided more than interesting results.

Whatever trading strategy you have, you need to have it financed and scalable. The code was there to make the strategy scalable, but financing was limited to the 10M of initial capital. In order to increase the impact, some leveraging would be needed. I know some hate leveraging, but it has its use in scenarios where you can outperform the averages: in a way, putting more pressure on and improving one's trading edge, which was already built into the stock selection itself.

So, I put in some downside protection and added some leveraging to the mix. I changed the orientation of the strategy itself and allowed it to go short at times. The payoff matrix of a trading strategy can be summarized in two numbers: the number of trades over the trading interval and the average net profit per trade. Both numbers are provided when using the "round_trips=True" option of the tearsheets.

Therefore, the objective is to increase either or both of these numbers in order to increase overall CAGR performance. Other numbers might not have much impact on final results; in a compounding game, the final outcome, the end game, is what really matters. Evidently, you will try to stay within acceptable volatility and drawdown constraints, making them as low as you can without destroying your strategy's potential. To reduce volatility and drawdowns, I increased the number of stocks to 25, thereby reducing the bet size.
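
For reference, the decomposition reduces to a simple identity, and the round-trip numbers come out of the pyfolio tearsheet; a minimal sketch with purely illustrative figures (the backtest ID is a placeholder):

# Payoff decomposition: total net profit = number of trades x average net profit per trade.
n_trades = 1200           # illustrative figure only
avg_net_profit = 8500.0   # illustrative average net profit per round trip, in dollars
total_net_profit = n_trades * avg_net_profit
print(total_net_profit)   # 10,200,000 with these made-up inputs

# In the research environment, the round-trip statistics come from the pyfolio tearsheet:
# bt = get_backtest('<backtest-id>')             # placeholder id
# bt.create_full_tear_sheet(round_trips=True)    # adds the round-trip tables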

With the first method, it generated the following:

Interesting, but not enough. Not enough trades:

I wanted more.

I changed the trading methods in order to accelerate the thing. It also meant increasing the use of leverage in order to supply more funds to the methodology and accepting shorts.

It generated more trades and provided a higher overall return. Beta decreased, while both max drawdown and volatility slightly increased. The average net profit per trade decreased, but this was compensated by the higher number of trades, resulting in more generated profits. The average gross leverage came in at 1.48.

I opted to be more aggressive. It raised the gross leverage from 1.48 to 1.57. At the same time, it lowered the max drawdown, the volatility, and the beta, which came in at 0.48 on average. An overall improvement.

This was counterintuitive. To do so, I allowed a lot more shorts into the picture in a rising market, to the point that shorts dominated in number of trades. It was as if allowing more shorts was enabling the longs to profit more.

Not only did the number of trades increase considerably, but the average net profit per trade also went up, resulting in much higher profits.

There is the cost of leveraging that has to be addressed, but I will keep that for later on. Nonetheless, for the last test the formula would be: \( 1.57 \cdot F_0 \cdot (1 + 0.629 - 0.57 \cdot 0.08)^{15} \), or \( 1.57 \cdot 10\mathrm{M} \cdot (1 + 0.6285)^{15} \), on the basis that leveraging fees would be at 8%. It will make a difference over the long term, but it should be considered as some kind of frictional cost. You want more, you have to do more than your peers, and there is a cost to it.
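
As a sanity check on the arithmetic, here is one possible reading of that expression in code; the interpretation of each term, with the fee charged only on the borrowed fraction, is my assumption and not stated explicitly in the post:

# One possible reading of the formula above (assumed interpretation):
#   final = L * F0 * (1 + r - (L - 1) * f) ** years
# where L is gross leverage, F0 the initial capital, r the annual growth rate,
# f the annual leveraging fee charged on the borrowed fraction, over 15 years.
L, F0, r, f, years = 1.57, 10_000_000, 0.629, 0.08, 15

final_value = L * F0 * (1 + r - (L - 1) * f) ** years
print(f"{final_value:,.0f}")   # roughly 1.5e10 with these inputs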

By accepting more in the gross leveraging department, you could push for even more with not that big a deterioration in the volatility, beta and max drawdown figures, as illustrated in the following:

The portfolio metrics for the above came in at:

Gross leverage went from 1.57 to 1.62. Average beta went from 0.48 to 0.53, while max drawdown went up from -30.5% to -31.4%. All these measures were more than acceptable for the added overall performance. Nonetheless, some might not like operating at these levels, but it should be noted that even if we started with an average beta of 1.07, we still ended with an average beta of 0.53, meaning that the overall strategy swings less than the market as a whole. The statistics for this version of the program came in at:

The point being made in this demonstration is that a trading strategy can start with a specified architecture, and then be modified to do even more by putting the emphasis on two numbers: the number of trades and the average net profit per trade. A strategy can evolve to do more; it is up to us to make it do so. And as said before: if you want more, you will have to do more.

I would point out that it is not the numbers that were the most important; it was the process itself, the trading methods used to survive in a sea of variance. I hope it was instructive just to study the process described in this post.

Thanks Guy,
Very interesting. Just wondering if you could share the code for your backtests. I did not get how you chose the short positions.

Thanks,

Stefan

@Stefan, sorry about your request, but at this level, the trade mechanics become IP.

However, of note, your trading strategy started scalable by design. You could push on its pressure points in order to increase the number of trades and the average net profit per trade. These were modulated. Most of it was done by leveraging and adding protective measures for when the equity line decreased, by either reducing position sizes or going short.

The reengineered strategy mechanics were added to see how far I could push the outcome before it blew up for some reason or other. I have not reached that limit yet since there is still room to increase performance even further. Because the strategy is scalable, all I need to do is increase the leveraging a little bit more. The leveraging is compounding which leads to higher performance results over the long term. Evidently, there is a limit to this, but I have not pushed that far yet.

Notwithstanding, I also did the test of removing all the leveraging with the following results:

The above, compared to the last equity chart presented in my previous post, does show the impact the leveraging had on the strategy's outcome. This did not affect the protective measures, which stayed in place. However, this move showed that the high number of shorts was not there to make money, even if they did, but to protect the portfolio's downside, as illustrated in the following chart:

Nonetheless, the strategy generated more trades simply due to its reduced bet size, but still managed to be interestingly productive profit-wise.

The more relaxed trade mechanics of the strategy can also be seen in the portfolio metrics as illustrated below:

As expected, after removing the leveraging, gross leverage went down to 0.97. This also had an impact on average volatility, which went down to 19.2%, while max drawdown also improved to -25.0%. In all, interesting numbers. Portfolio stability is at 0.99 and beta was down to 0.14.

If the theoretical CAPM expectation for a trading strategy is to get the long-term market average, as in \( E[F(t)] = r_f + \beta \cdot (E[r_m] - r_f) \), we should expect that with a beta of 0.14 we should not get there. At the very least, I should not be able to present the above numbers.
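
Plugging illustrative numbers into that expectation (the risk-free rate and market return below are my assumptions, chosen only to show the order of magnitude):

# CAPM expectation with the strategy's beta of 0.14 (r_f and E[r_m] are assumed values)
r_f  = 0.03   # assumed risk-free rate
E_rm = 0.10   # assumed long-term expected market return
beta = 0.14   # the strategy's measured beta

expected_return = r_f + beta * (E_rm - r_f)
print(round(expected_return, 4))   # 0.0398, i.e. roughly 4% a year under CAPM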

But that is the whole point of trading. We should not take the math of a Buy & Hold scenario and force it on a trading system which has its own mechanics and which tends to recycle its ongoing profits, as in this strategy. We should question how and why we force our programs to do things.

From these tests, I was able to push the performance limits due to the added trade mechanics, the protection measures, and the leveraging. Once I know the limits, which I have not attained yet, I could scale back to whatever level I feel more comfortable with, knowing that I could still apply pressure where I wanted and that the strategy would support it. The whole process forced me to add better protective measures which the strategy could benefit from even if no leveraging was applied.

Notice the value of the leveraging in this case. The leveraging added 24.6 times more profits than the no leveraging scenario. And in numbers, it added $24.6B to the portfolio just because you scaled it up by using some leveraging. There would be more than enough to cover the leveraging fees.

Those are all choices we have to make. It is why we make these tests to see how far we can go, what are the mechanics of the trade, what is it you really want to do, and how much risk are you willing to take. One thing is sure, if you want more than your peers, you will have to do more than they do.

@Guy: It is totally up to you whether you decide to share your algorithm or not, it's your IP! That said, could you please start a new thread with your commentary and link back to this one? Given the length and level of detail of your comments, I think you may have inadvertently 'overtaken' this thread with your content and it becomes hard for anyone else to contribute, especially others who might want to iterate on Stefan's idea and publicize their version.

I'd just like to be respectful to Stefan who started the thread!

P.S. You can always share a pyfolio tearsheet by sharing a notebook without sharing the algorithm code. It's usually a more reader-friendly method than posting large screenshots.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Hi Stefan,

Thanks for sharing. As you suggest, hindsight is 20/20. The ROI would also be phenomenal if, in hindsight, we had chosen the correct lottery ticket. Alas, if that were possible, in practice the winnings would end up being split among so many winners as to leave us with a loss.

It's important to be aware of selection bias and recency/framing bias. It's not entirely outside of the realm of possibility that one year from now we'll all be laughing at the failure of the cloud computing investment thesis.

I think it's difficult to algorithmically decide which secular themes will win out and have not yet been "priced in"; at least from a statistical perspective, there simply wouldn't be enough data points to draw a significant conclusion. The literature would indicate that you can't turn to stock analysts for this prediction -- they're only right ~49% of the time. (But of course there are the Peter Lynches of the world... but how are we to know who to trust?)

I think your example does show that the market did not fully appreciate the information it had in 2005. In hindsight, Bezos was pretty clear about his plan. So was the market pricing in a risk that didn't materialize? Or did it simply under-appreciate the market opportunity? Or what happened?

For me it's crystal clear that the next huge secular trend will be AI. Much of what we do -- including things we can't even imagine yet -- will be replaced or at least augmented by AI tools. This will be powered by both cloud AND edge computing. Does this make AI a good investment thesis? Is it already priced in? Many players are already very expensive, though the market opportunity may not be fully appreciated yet. Hard to know. Are the risks large? There are geopolitical risks -- even the trade war with China is having a huge negative impact on share prices of many AI names, even though in the long run it won't matter at all. There are market risks -- volatility, global economic health, etc. Ultimately, I do think it is a good investment thesis, despite the risks. However, I don't think it makes any sense in the context of a Quantopian long-short hedged algo strategy, because it's likely to incur short-term volatility (on top of the selection bias issues).

Your example clearly demonstrates that even with 20/20 hindsight, investing in the biggest secular trend of the last decade only resulted in a 0.89 Sharpe ratio. That's nothing to write home about. The drawdown is pretty rough (though the market's drawdown is similar). I think 0.11 alpha is probably pretty damn good for a large-cap universe, but if you removed market beta from the equation the returns would not be so utterly amazing. The 1300% return is largely thanks to the power of time, modest outperformance, and the incredible power of compounding. And hindsight, of course.
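
As a rough illustration of the compounding point (the length of the backtest window below is my approximation, roughly 2005 through 2018):

# A 1300% total return means the portfolio grew to 14x its starting value.
total_return = 13.0    # +1300%
years = 13.5           # approximate backtest window (assumed)
cagr = (1 + total_return) ** (1 / years) - 1
print(round(cagr, 3))  # about 0.216, i.e. roughly a 21-22% annual growth rate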

@Viridian, you say:

"It's important to be aware of selection bias and recency/framing bias. It's not entirely outside of the realm of possibility that one year from now we'll all be laughing at the failure of the cloud computing investment thesis."

How silly can one get? If I am allowed to use that word?

Isn't that like the statement made in 1895 by the director of the US Patent Office who quit his job on the grounds that everything had already been invented and therefore there was no more need for him or his job? Or something by the president of IBM in 1962 saying that they would probably sell at most 4 or 5 computers in the coming years.

Am I to understand you view computing as a fad and probably by next year it will all be over? Could I add the words: think again!

You do not need 20/20 hindsight to state, “guess”, or whatever, that in the coming years computing will continue to expand. Not only that, it will continue to expand at an exponential rate.

I don't want to get too far off on a tangent, but put simply: just because something is a popular product does not mean it is a good investment. Xerox is a great example. It changed society... but not so much for its investors. Increased competition could lead to margin pressure. Commoditization. Supply chain disruption. Rise of edge computing. Multiple compression. There are risks to any investment thesis, including this one. Read again what I wrote and you may realize your response is a total non sequitur.

I will not fall into that trap. I will let everybody determine their own views, expectations, and on what they should bet going forward. It is their choice, it is not mine to make.

Nonetheless, I do wish you extreme luck in shorting this non sequitur computing “fad”.

Nobody is talking about shorting anything. My contribution to the discussion was about (un)certainty/risk and cognitive biases. Please put forth an effort, read carefully, and do not misconstrue. Once again you are hijacking a thread.

I'm a great believer in the tech industry as well. Unfortunately my basket of tech stocks didn't do quite as well:

MicronPC
Netscape
Palm
Napster
TheGlobe.com
Compaq
InfoSpace
GeoCities
eToys.com
iOmega
AltaVista
Polaroid
Wang Laboratories
Pets.com
3dfx
MySpace
Commodore
Syquest
Gateway Computers
Iridium
Atari

Thanks though to @Stefan for the original post. I find it both interesting and helpful!

@Joakim, if you added just 2 years to Q's datasets, thereby going back to 2000, you could add some 1,300+ stocks to your list.

Is there risk playing the markets? YES. Will the earth stop turning tomorrow? NO. Will all the people on the planet go into an economic coma tomorrow? NO. Will all cars or computers or whatever stop working tomorrow? NO. How long should that list be?

If something has a probability of occurrence of less than 0.000000000000000000000000000000000000001, how, and incredibly why, did it ever enter into your day-to-day trading considerations?

Nonetheless, the whole point is that it is our job to identify what we want to play with. And if we are playing long and cannot realize that our stock list is going down the drain, it is not the market that is to blame but our stock selection process and our trading methods.

For instance, take the next chart. Should you get out today, or would your trading program have acknowledged that it should have done so much much sooner?

Yes, easy! To quote Will Rogers:

“Don't gamble; take all your savings and buy some good stock and hold it till it goes up, then sell it. If it don't go up, don't buy it.”

@Joakim,

Will Rogers was right. I used that same quote on my website years ago, but I read it differently. And I think Mr. Buffett also adheres closely to that same pun.

We should indeed buy stocks that are going up, and not what is going down. Better yet, we should buy stocks that have shown they have been going up and continue to have positive future prospects. It is our business to find and isolate the stocks that will tend to go in the same direction as our bets, whether those be long or short.

Berkshire Hathaway went from about $10 to about $320,000 over the past 50+ years while following in step with the vagaries of the market.

How long should it have taken to notice it was on the rise at some point along the way?

From 2002 to today, it went from 74k to 320k. That is an increase of $246,000 per share.

Yet, I have never seen it traded in any of the simulations presented on Q. Already, prior to 2002, it had gone up from $10 to $74,000, all the time making high after high over some 30 years. This gave anyone over 10,000 days to figure out that, well, "maybe", it was going up. It has been going up now for over 18,000 days. I hope no one kept a mean-reversion strategy on this one, or held long-term shorts.

Worse yet, the general market has been going up too, on average, for over 88,000 days. Should you wait another 88,000 days because you have doubts about where the market will go over the next 10 to 20 years?

We need to observe that we should buy good stuff that is going up and wait for its price appreciation.

IF the price does not generally go up, you get rid of that stock.

Over the past 50 years, world population has grown from 3.6 billion to 7.5 billion. And over the next 30 years, we will add another 2.0 billion. The world will continue to prosper, companies will have to supply what people need and/or want. A lot of companies will prosper in doing so, and others won't. But, it has always been like that, it is the nature of business. So account for it, even in your trading strategies.

We are all doing backtests over past data in order to figure out what has a higher probability of helping us going forward, where it will count or not, in real money that is, and not simulation money. But, it will all depend on you and your trading programs.

It is not the market that is throwing you a curveball (it has been going up on average over the long term and most highly probable will continue to do so), it is your strategy design that might be ill-suited, misfitted, or poorly designed for its future job.

What I am waiting for is for you to simply challenge the math I presented. I can take it. There are equal signs all over the place. You will need to demonstrate a not-equal sign and provide the grounds on which it is based. Surprisingly, in the process, you will be helping me design even better systems. But I think that is too much to ask? Anyway, consider it an open invitation. Mostly, I consider my math stuff to be more like: 2+2 = 4, or is it 4 = 2+2.

My first post in this thread showed that the strategy over the same time interval did NOT even outperform a simple Buy & Hold scenario using the same stocks.

The initial trading strategy had nothing special. It lived in a simple and basic long-term Markowitz 54-week rebalancing act, the type commonly found in so many Quantopian strategies using the scheduled rebalancing function.

Therefore, whatever the stock selection, the bar was set, and my early simulations clearly demonstrated that it was preferable to adhere to a Buy & Hold scenario than to execute the initial strategy, since: \( \sum (H_{spy} \cdot \Delta P) > \sum (H_a \cdot \Delta P) \).

The objective became to reengineer this code, resulting in a new strategy \( H_b \), so that the following could prevail: \( \sum (H_b \cdot \Delta P) > \sum (H_{spy} \cdot \Delta P) > \sum (H_a \cdot \Delta P) \), thereby outperforming not only the original trading script but also the Buy & Hold scenario, which implicitly became the strategy's own benchmark.
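
For readers less familiar with this notation, the payoff-matrix sum can be written directly in a few lines of numpy (the shapes and random arrays below are illustrative assumptions, not data from any of the backtests):

import numpy as np

# H holds the position in shares for each day (rows) and each stock (columns);
# dP holds the corresponding day-to-day price changes.
# The strategy's total profit is the element-wise product summed over everything.
np.random.seed(0)
days, stocks = 250, 7                                           # illustrative shapes
H  = np.random.randint(0, 100, (days, stocks)).astype(float)    # share holdings
dP = np.random.normal(0.0, 1.0, (days, stocks))                 # daily price changes

total_profit = np.sum(H * dP)   # corresponds to sum(H . deltaP) in the inequality above
print(total_profit)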

The mathematical artistry came from generating alpha over and above this benchmark, based on whatever trading methods I could use, even if I needed to revamp almost the whole strategy relative to its initial Quantopian-like template.

Just more stuff to think about.