Built Robo Advisor

To adjust this yourself, clone the algorithm and start editing. To see my thought process, check out the Medium post about this Robo Advisor. Link to come soon!

[Clone Algorithm (196 clones) — backtest results widget not captured in this export]
# Put any initialization logic here.  The context object will be passed to  
# the other methods in your algorithm.  
import datetime  
import pytz  
import numpy as np



def initialize(context):
    # Initialize algorithm parameters
    # setting symbol lists
    core_series = symbols('VTI', 'VXUS', 'BND', 'BNDX')
    crsp_series = symbols('VUG', 'VTV', 'VB', 'VEA', 'VWO', 'BSV', 'BIV', 'BLV', 'VMBS', 'BNDX')
    #universe risk based allocation
    core_series_weights = {0: (0,0,0.686,0.294),
                           1: (0.059,0.039,0.617,0.265),
                           2: (0.118,0.078,0.549,0.235),
                           3: (0.176,0.118,0.480,0.206),
                           4: (0.235,0.157,0.412,0.176),
                           5: (0.294,0.196,0.343,0.147),
                           6: (0.353,0.235,0.274,0.118),
                           7: (0.412,0.274,0.206,0.088),
                           8: (0.470,0.314,0.137,0.059),
                           9: (0.529,0.353,0.069,0.029),
                           10: (0.588,0.392,0,0)}
    
    crsp_series_weights = {0: (0,0,0,0,0,0.273,0.14,0.123,0.15,0.294),
                           1: (0.024,0.027,0.008,0.03,0.009,0.245,0.126,0.111,0.135,0.265),
                           2: (0.048,0.054,0.016,0.061,0.017,0.218,0.112,0.099,0.12,0.235),
                           3: (0.072,0.082,0.022,0.091,0.027,0.191,0.098,0.086,0.105,0.206),
                           4: (0.096,0.109,0.03,0.122,0.035,0.164,0.084,0.074,0.09,0.176),
                           5: (0.120,0.136,0.038,0.152,0.044,0.126,0.07,0.062,0.075,0.147),
                           6: (0.143,0.163,0.047,0.182,0.053,0.109,0.056,0.049,0.06,0.118),
                           7: (0.167,0.190,0.055,0.213,0.061,0.082,0.042,0.037,0.045,0.088),
                           8: (0.191,0.217,0.062,0.243,0.071,0.055,0.028,0.024,0.030,0.059),
                           9: (0.215,0.245,0.069,0.274,0.079,0.027,0.014,0.013,0.015,0.029),
                           10: (0.239,0.272,0.077,0.304,0.088,0,0,0,0,0)}
    
    #set universe and risk level
    context.stocks = crsp_series
    risk_based_allocation = crsp_series_weights  
    risk_level = 1
    #Saves the weights to easily access during rebalance
    context.target_allocation = dict(zip(context.stocks, risk_based_allocation[risk_level]))
    #To make initial purchase
    context.bought = False
    #Calculates the distance vector every day before trading starts
    schedule_function(
        func=before_trading_starts,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_open(hours=1))


def before_trading_starts(context, data):
    #total value of portfolio (portfolio_value already includes cash)
    value = context.portfolio.portfolio_value
    #calculating current weights for each position
    for stock in context.stocks:
        if (context.target_allocation[stock] == 0):
            continue
        current_holdings = data.current(stock,'close') * context.portfolio.positions[stock].amount
        weight = current_holdings/value
        growth = float(weight) / float(context.target_allocation[stock])
        #if weights of any position exceed threshold, trigger rebalance
        if (growth >= 1.05 or growth <= 0.95):
            rebalance(context, data)
            break


def rebalance(context, data):
    for stock in context.stocks:
        current_weight = (data.current(stock, 'close') * context.portfolio.positions[stock].amount) / context.portfolio.portfolio_value
        target_weight = context.target_allocation[stock]
        distance = current_weight - target_weight
        if (distance > 0):
            amount = -1 * (distance * context.portfolio.portfolio_value) / data.current(stock,'close')
            if (int(amount) == 0):
                continue
            log.info("Selling " + str(int(-amount)) + " shares of " + str(stock))
            order(stock, int(amount))
    for stock in context.stocks:
        current_weight = (data.current(stock, 'close') * context.portfolio.positions[stock].amount) / context.portfolio.portfolio_value
        target_weight = context.target_allocation[stock]
        distance = current_weight - target_weight
        if (distance < 0):
            amount = -1 * (distance * context.portfolio.portfolio_value) / data.current(stock,'close')
            if (int(amount) == 0):
                continue
            log.info("Buying " + str(int(amount)) + " shares of " + str(stock))
            order(stock, int(amount))
            
 
def handle_data(context, data):
    #initial purchase of portfolio
    if not context.bought:
        for stock in context.stocks:
            #Allocate cash based on weight, and then divide by price to buy shares
            amount = (context.target_allocation[stock] * context.portfolio.cash) / data.current(stock,'price')  
            #only buy if cash is allocated
            if (amount != 0):
                order(stock, int(amount))
                #log purchase
                log.info("buying " + str(int(amount)) + " shares of " + str(stock))
        #now won't purchase again and again
        context.bought = True

For those following along - here's the Medium post from Hackernoon on how I built the advisor: link. I'd like to give a major shoutout to Quantopian for assembling the open source community that allowed me to find all the information I needed. Part of writing the post was to collect all the answers I found across the forum for anyone else stumbling through their first algo trading steps (like me).

Great work Rao! Can you please elaborate on how the risk_based_allocation is calculated?

Hey Rob,

At this point, those are numbers taken directly from a Vanguard portfolio. Vanguard calculates them using mean-variance optimization. My next step is to implement mean-variance optimization in the algorithm itself, rather than just hard-coding it as I've done here.
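To make the idea concrete, here is a minimal sketch of mean-variance optimization in its simplest form: the closed-form minimum-variance weights for two assets. The function name and the variance/covariance numbers are made up for illustration; they are not Vanguard's inputs.

```python
def min_variance_weights(var_a, var_b, cov_ab):
    """Closed-form two-asset minimum-variance weights, with w_a + w_b = 1."""
    w_a = (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)
    return w_a, 1.0 - w_a

# Hypothetical example: a volatile stock fund vs. a quieter bond fund.
w_stock, w_bond = min_variance_weights(var_a=0.04, var_b=0.01, cov_ab=0.002)
# The quieter asset gets the larger weight, as in the conservative rows
# of the risk_based_allocation tables.
```

With more than two assets and a target-return constraint, the same problem becomes the quadratic program that a solver like CVXOPT handles.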

Here's the link

I know there are quite a few people following this thread. I went ahead and ported my work from Quantopian to zipline, and expanded on the robo-advisor. Once again, I wrote about it. For all those interested, here's the link

Here's an updated backtest where the universe uses modern portfolio theory for allocation

[Clone Algorithm (98 clones) — backtest results widget not captured in this export]
import zipline
from zipline.api import (history, 
                         set_slippage, 
                         slippage,
                         set_commission, 
                         commission, 
                         order_target_percent)

import numpy as np
import cvxopt as opt
from cvxopt import blas, solvers
import pandas as pd
solvers.options['show_progress'] = False

def initialize(context):
    context.stocks = symbols('NVDA', 'AAPL', 'FB', 'NFLX')
    #context.stocks = symbols( 'VXUS', 'VTI', 'BND', 'BNDX')
    context.weights = False
    context.target_allocation = {}
    context.tick = 0
    schedule_function(
        func=before_trading_starts,
        date_rule=date_rules.month_end(),
        time_rule=time_rules.market_open(hours=1))
    
def before_trading_starts(context, data):
    #total value of portfolio
    value = context.portfolio.portfolio_value
    #calculating current weights for each position
    for stock in context.stocks:
        if (context.target_allocation[stock] == 0):
            continue   
        current_holdings = data.current(stock,'close') * context.portfolio.positions[stock].amount
        weight = current_holdings/value
        growth = float(weight) / float(context.target_allocation[stock])
        #if weights of any position exceed threshold, trigger rebalance
        if (growth >= 1.05 or growth <= 0.95):
            rebalance(context, data)
            break
    else:
        #for/else: only log when the loop finished without triggering a rebalance
        log.info("No need to rebalance!")

def rebalance(context, data):
    allocate(context,data)
    for stock in context.stocks:
        current_weight = (data.current(stock, 'close') * context.portfolio.positions[stock].amount) / context.portfolio.portfolio_value
        target_weight = context.target_allocation[stock]
        distance = current_weight - target_weight
        if (distance > 0):
            amount = -1 * (distance * context.portfolio.portfolio_value) / data.current(stock,'close')
            if (int(amount) == 0):
                continue
            log.info("Selling " + str(int(amount * -1)) + " shares of " + str(stock))
            order(stock, int(amount))
    for stock in context.stocks:
        current_weight = (data.current(stock, 'close') * context.portfolio.positions[stock].amount) / context.portfolio.portfolio_value
        target_weight = context.target_allocation[stock]
        distance = current_weight - target_weight
        if (distance < 0):
            amount = -1 * (distance * context.portfolio.portfolio_value) / data.current(stock,'close')
            if (int(amount) == 0):
                continue
            log.info("Buying " + str(int(amount)) + " shares of " + str(stock))
            order(stock, int(amount))

def allocate(context, data):
    prices = data.history(context.stocks, 'price', 500, '1d').dropna()
    returns = prices.pct_change().dropna()
    context.weights, _, _ = optimal_portfolio(returns.T)
    log.info(str(context.weights))
    context.target_allocation = dict(zip(context.stocks, tuple(context.weights)))
    
def handle_data(context, data):
    # Make the initial purchase once the first optimized allocation is
    # computed; context.weights starts as False and becomes an array
    # after the first call to allocate(), so this runs only once.
    if type(context.weights) == bool:
        allocate(context, data)
        for stock in context.stocks:
            amount = (context.target_allocation[stock] * context.portfolio.cash) / data.current(stock,'price')  
            log.info(str(context.portfolio.starting_cash))
            #only buy if cash is allocated
            if (amount != 0):
                order(stock, int(amount))
                #log purchase
                log.info("buying " + str(int(amount)) + " shares of " + str(stock))

def optimal_portfolio(returns):
    n = len(returns)
    returns = np.asmatrix(returns)
    
    N = 500
    mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
    
    # Convert to cvxopt matrices
    S = opt.matrix(np.cov(returns))
    pbar = opt.matrix(np.mean(returns, axis=1))
    
    # Create constraint matrices
    G = -opt.matrix(np.eye(n))   # negative n x n identity matrix
    h = opt.matrix(-0.15, (n, 1))  # with G = -I, this enforces a minimum weight of 0.15 per asset
    
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)
    
    # Calculate efficient frontier weights using quadratic programming
    portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x'] 
                  for mu in mus]
    ## CALCULATE RISKS AND RETURNS FOR FRONTIER
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
    ## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
    m1 = np.polyfit(returns, risks, 2)
    x1 = np.sqrt(m1[2] / m1[0])
    # CALCULATE THE OPTIMAL PORTFOLIO
    wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
    return np.asarray(wt), returns, risks

@Rao, you must be aware that your robo-advisor can do more by feeding more to the CVXOPT optimizer.

One could say the stock selection suffers from a selection bias. Nonetheless, they might have been selected since those stocks had been quite visible at the time.

I haven't read all the code to see the extent of the methodology, only applied my preliminary acid tests on some of the pressure points to see if I would be interested to investigate further. A way of saying: is there something in there?

The attached tearsheet says there is.

Since this is a long only scenario, its objectives are not constrained by the contest rules. Changed some numbers and added a bit of leverage, fees which the strategy could pay off easily.

I think this thing can be pushed some more. So now, I will have to read the code.

[Tearsheet notebook attached — preview unavailable]

Especially for a long-only strategy, in order to claim robustness, the algo should be tested against periods like late 2018.
Also, since market dynamics change, and per the contest rules, it is worth comparing a run over the latest 2 years to your simulation above.

@Lim, I think you are missing it on all points.

First, it was stated that the strategy was not intended for the contest and did not need to follow the contest's constraints. In fact, it gets zero on all constraint scores. The test lasted 5 and a half years, which is still not much, but a lot better than just the last two.

It was mentioned that there might be a selection bias, but then, I do remember having suggested some of those stocks to a friend at the time. So, I do not see the selection as that biased. It is more like, yeah, those could have been selected.

Some do develop for other purposes than the contest. A strategy like the one presented could be mixed with other low volatility strategies to reduce the combined overall volatility and still reap the rewards.

In an exploration phase of someone else's trading strategy, you do want to know if there is something there. If not, you just say: next. It is why I first start with my own acid tests. If a trading strategy can pass those (few do), then I might explore some more.

A simulation is a “what if” my trading strategy did this or that. What would have happened? It does not say what it will do in the future since that is still to unfold. But it will give you all the averages you want to extract and on which you could apply pressure to amplify or contract specific behaviors.

Such changes have been applied to @Rao's strategy. Just changing a few numbers and it increased performance by almost a factor of 10. That is not a 10% increase, but a 1,000% improvement on what was.

One thing it does show is that a trading strategy's logic might not be enough when just changing a few numbers will produce a lot more than a change in the trading logic. And it puts emphasis on whatever those numbers were since other trading strategies might also be impacted by them in a similar way.

Whatever the trading strategy is, try to find what makes it tick. Then determine if anything in it could be of use in your own strategy design. There are a lot of people here doing a lot of great stuff. I thank them all for sharing so generously. That they tell me something is good or that something is bad, I appreciate both. Evidently.

Ok, fair point on not intending it for the contest; sorry to have missed that. I still think my first point is valid, and I do not think you need to overreact to my comment.
If you think your strategy generates real alpha, I see no reason to argue so strongly against testing it over other time ranges.
And if you want to take others' opinions, once again, there is no reason not to take critical advice.

Hmm, a strategy that only goes long 4 specific stocks that everyone knows performed quite well during this period... Fine, but the ‘benchmark’ should then be a long basket of these 4 stocks, not the SPY.

The past is easy to predict. The future on the other hand... different story.

@Lim, I do not think the strategy, as is, could be extended prior to the existence of FB. I assumed that was clear too since the stock selection was hard coded. Nonetheless, when you try, the program simply crashes. And since it was an exploratory phase, I had no motivation to even consider correcting it.

I did not design that trading strategy. I still have not read the code to better understand what it does.

However, the notebook results do provide some motivation to do so. Nevertheless, it is of absolutely no use for anyone not interested in what it does or has no intention of using such trading methods. Why should someone waste their time on that! We all know that stocks going up can make you some money should you go long.

I do not think it is the best one can do. But, I will only know that after pushing on other pressure points or modifying the program's trading logic. Maybe some of those procedures might apply elsewhere. Will need to find the time.

@Joakim, any portfolio manager at the end of 2012 knew about AAPL, FB, NVDA, NFLX, and many others. These were prominently in the news, doing well, were highly liquid, and had good prospects for the following years. Anybody following the market could have seen the same.

In retrospect, we can say nobody saw them coming all we want, and that whatever we select now is with hindsight.

But somewhere for whatever reason, we had to make choices and those choices were as good as any other comparable choices at the time.

My interest is not in the stock selection process but in the trading procedures used.

I can always change to another stock selection process later.

However, initially using only a few stocks speeds up the development process. It might be sufficient to show the merits of the trading methodology. If you see something interesting, then you can explore the limits of the trading procedures much faster. Later you can expand the stock selection to whatever. The big advantage of this method is the lower probability of over-fitting. If your trading script was developed using only 4 stocks and you switch to 100, then 96 of them will not have seen your program. That is a harsh acid test for any trading strategy.

As I said before, I did not design this trading strategy.

Nonetheless, changing a few numbers increased performance almost 10-fold. Now, that is of interest since those same procedures could be applied elsewhere and benefit other trading strategies too. It enables scaling a trading strategy at will and that is important.

It also makes the point that whatever we think initially of what our trading strategy can do performance wise, it could do even better with only minor changes.

I think we have different perspectives when looking at the trading problem. I do not look at the market for the next few days, it has little interest. I want to design trading strategies that will last years, and therefore, will concentrate on trading methods that can. I do not intend to play all the stocks out there either. It is not the mission. Too many of those stocks do not outperform the averages anyway.

One of the simplest ways to outperform a benchmark is to concentrate the stock selection on stocks that already outperform the market. Those stocks are easy to find, they live above the benchmark, say something like SPY for instance. It is something like: the average living above the market average is my friend. To stay selectable, each stock has to remain above the benchmark. If the average of your stock selection outperforms the benchmark, you are outperforming the market which should be one of the primary objectives of the game.
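The selection rule described above can be sketched in a few lines; the tickers and return figures here are made-up illustrations, not real data.

```python
def outperformers(cum_returns, benchmark_cum_return):
    """Keep only tickers whose cumulative return lives above the benchmark's."""
    return [t for t, r in cum_returns.items() if r > benchmark_cum_return]

# Hypothetical universe with cumulative returns over some lookback window.
universe = {'AAA': 0.45, 'BBB': 0.12, 'CCC': 0.30}
spy_like = 0.20  # assumed benchmark cumulative return over the same window
selected = outperformers(universe, spy_like)  # keeps 'AAA' and 'CCC'
```

A real implementation would re-run this filter periodically, since a stock has to remain above the benchmark to stay selectable.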

An update to the last tearsheet. Wanted to see if the alpha would be maintained if more time was given to the strategy.

[Tearsheet notebook attached — preview unavailable]

The program showed it could handle more time, another test was to show it could handle more funds. The purpose of the attached tearsheet.

[Tearsheet notebook attached — preview unavailable]

Changed a single character in the program knowing it would have a significant impact.

Here are the results.

[Tearsheet notebook attached — preview unavailable]

Again, changed a single character in the program knowing it would have a significant impact.

Here are the results.

[Tearsheet notebook attached — preview unavailable]

There are similarities between the above tearsheet and the procedures discussed in the following thread: https://www.quantopian.com/posts/the-capital-asset-pricing-model-revisited

First, both scenarios relied on the CVXOPT optimizer to make trades. You had this black box that had for mission to take what it was fed and come out with a profit, if it could.

In the CAPM thread, the optimizer worked with randomly generated price series to which were added a long-term linear drift “factor”. Therefore, we could view price series as long-term trends applied over what should technically be considered market noise. Even a small long-term drift was sufficient to have the optimizer extract profits. If there were no long-term trends, there were no overall profit extraction. That was also sufficiently demonstrated.

Nonetheless, the more pronounced that trend was, the more the optimizer could extract profits. And for those considering that a simple long-term trend was not enough to generate profits, you have the notebook with the program where you could redo these tests at will. It is all controlled using just a few numbers.
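The randomly generated series with a drift "factor" can be sketched as follows. The drift and volatility numbers are arbitrary placeholders, not the values used in the CAPM thread; the point is only that, holding the noise fixed, the drifted series ends higher.

```python
import random

def drifted_series(n, drift, sigma, seed=42, start=100.0):
    """Random multiplicative walk with a constant per-step drift added."""
    rng = random.Random(seed)
    prices = [start]
    for _ in range(n):
        prices.append(prices[-1] * (1 + drift + rng.gauss(0, sigma)))
    return prices

# Same noise (same seed), with and without a small long-term drift.
with_drift = drifted_series(2520, drift=0.0004, sigma=0.01)  # roughly 10 years of days
no_drift = drifted_series(2520, drift=0.0, sigma=0.01)
```

Because the same random draws are reused, every step of the first series multiplies by a slightly larger factor, which is the long-term trend the optimizer can latch onto.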

On the other hand, the tearsheet in the previous post above worked on real market prices that had an upward trend with also a lot of randomness in their price movements. The optimizer could extract profits in the same manner, if it could, as it did over randomly generated prices.

By having a limited number of stocks at play, it enabled the study of what the strategy was doing as it progressed over its 14-year simulation. Thereby providing a better understanding of how the trading strategy behaved during up or down markets and over an extended period of time.

The same overall principles were applied to the real market data as they were over the randomly generated price series where we could simulate thousands of portfolios with hundreds of shares each.

Whereas with market data, you had this one shot, this one sample out of trillions and trillions of possibilities. The sample you had might not even be representative of the whole. But, that is not the point since we certainly would not dare make trillions of simulations. At times, you have to make choices with what is literally little information.

Where is the bacon in the above strategy? It is in the way it trades. And this, without really knowing how it trades since I do not think the way the strategy was built had those intended purposes. The strategy trades over what should be considered its core positions giving it the ability to profit from price swings and from long-term appreciation. You have a part of the strategy behaving like a Buy & Hold, and another part almost trading randomly over the process. Thereby generating more funds to trade with as it goes along. This has a snowball effect as the net bet size increases generating a kind of positive feedback loop. See the following tearsheet:

https://www.quantopian.com/posts/built-robo-advisor#5c687f392be7cd0e55dd3e7f
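The "trading over core positions" mechanism described above can be illustrated with a toy model; the prices, tranche size, and band are invented numbers, and this is a caricature of the behavior, not the strategy's actual logic.

```python
def trade_over_core(prices, shares=100, tranche=20, band=0.05):
    """Hold a core stake; sell a tranche into strength, buy it back on a dip."""
    cash, anchor = 0.0, prices[0]
    for p in prices[1:]:
        if p >= anchor * (1 + band) and shares > tranche:
            cash += tranche * p      # sell a tranche into the rise
            shares -= tranche
            anchor = p
        elif p <= anchor * (1 - band) and cash >= tranche * p:
            cash -= tranche * p      # buy the tranche back cheaper
            shares += tranche
            anchor = p
    return shares, cash

# One round trip: sell 20 shares at 106, buy them back at 100.
end_shares, end_cash = trade_over_core([100, 106, 100])
```

The round trip leaves the core share count intact plus banked cash; recycled into more shares over an uptrend, that banked cash is the snowball effect the post describes.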

Note that the strategy has no protection. That point needs to be remedied. It would reduce volatility and drawdowns. The use of leverage was explored in order to evaluate how much could the strategy support before it becomes uncomfortable to then reduce it to a more comfortable level. You cannot know where the limits are until you push a strategy towards and sometimes beyond those limits.

Pushing The Envelope

The last comment in the previous post was: You cannot know where the limits are until you push a strategy towards and sometimes beyond those limits.

My next step was to add a few stocks. The stock selection was admittedly small but relatively easy to make. They were all stocks I followed or played during the 5 years prior to the simulation start date. So, to me, such a selection was admissible, they were part of my stocks of interest at the time. And the test would show what would have happened had I had the capital and such a trading strategy to manage it.

Loading notebook preview...
Notebook previews are currently unavailable.

The above trading strategy was changed one step at a time, not only to operate differently but also to have a different mission. I do not see it as over-fitting, for the simple reason that you could not make all those changes at once. You could not have coded it in one shot either. I looked at it more like exploring the strategy's potential and, mostly, its limits: trying to find out how, where, and why it could break down.

The CVXOPT optimizer is at the center of this trading strategy. And at each step, it was gradually forced to reconsider its mission, by adding more time, or more funds, or requesting that it do more. It tried to maximize what it was given: a few price series going back 14 years. It is a trading interval long enough to show if a strategy has something in it or not.

The CVXOPT optimizer does not give away money for free. It is designed to solve quadratic programming problems, or find the best mix of a set of price series should they exhibit some kind of trend. It only solves a mathematical problem. See my other tests using randomly generated price series, which showed that you did not need much to have the optimizer extract profits.

Finally Read The Code

I have finally read the trading script. Nice work. My first observation to a friend was: “I would not have dared to do those modifications had I started by reading the code”.

Prior to reading the code, I still had another test to make which was to hike the ante to answer the question: will it scale up to that level? So, the initial capital was raised to $50M.

Some observations: a long-term chart gives a sense of perspective. For instance, during the financial crisis the strategy had a -79% drawdown (see the 4th interval, just below the green 58645.22%). It is barely noticeable. Notwithstanding, the strategy maintained a part of its positions and was able to recover much faster, having had those positions in place prior to the recovery. The dips on the last two intervals, which appear as small blips, are on the order of 100 to 200 million in size. However, note that 50M at 58645.22% is 29B, to keep some sense of proportion.
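For reference, the drawdown figure quoted above is the largest peak-to-trough decline of the equity curve, which can be computed with a short helper; the sample curve here is invented to mirror the -79% number, not taken from the backtest.

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline, returned as a negative fraction."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = min(worst, v / peak - 1.0)
    return worst

# A curve that falls from a peak of 100 to 21 before recovering
# shows a -79% drawdown, regardless of what happens afterwards.
dd = max_drawdown([80, 100, 60, 21, 90, 150])
```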

At any time you could have quit the game, for whatever reason, and collected what the blue line was saying. A backtest is just a backtest. It is made to show what could have happened if. And once you know that, you can plan, based on the strategy's trade mechanics, for what you want your trading strategy or your set of trading strategies to do. It is always your choice.

Until I add the protective measures, such a strategy as illustrated above needs to be gamed or strategized with others of lower drawdowns. But that has already been covered in prior posts.

Now, back to the drawing board to design those protective measures. The objective will be mainly to reduce drawdowns. However, I still want the positive feedback loop to remain operational.

Is A Black Box Running The Show?

The trading strategy illustrated in my last post goes on a simple premise. If there is cash in the trading account, or, its equivalent, it stands ready to buy shares for its stock portfolio according to its CVXOPT optimizer recommendations. It would also do so if you increased available cash reserves by either selling some shares, adding extra funds, or using leverage.

When you look under the hood, you do not see why a trade is taken. All you see is that it was executed. The optimizer took care of it all. You had no control over the prices or the quantities to be traded either. What you knew however was that the optimizer had for function to optimize for the best outcome should it find actionable data.

The trading decision process was passed on to the optimizer which followed its own internal mathematical functions. In a sense, the program delegated all the trading activity to this black box. And therefore, for all intents and purposes, this black box was or is running the show.

The optimizer does not consult you or ask for your opinion or advice. It just executes, and based on its inputs, it will issue trades: longs, shorts or none at all. It is its choice, not yours. If the optimizer does not see anything actionable, it will not trade.

Notwithstanding, the optimizer has no notion of what you are doing, or of your intentions. It is just a mathematical contraption accepting some inputs and giving out its optimized solution that it be good or bad. You could feed it other time series than price series and it would provide some other output.

Hybrid Strategy

I transformed the original public strategy from its initial trading stance only to a form of hybrid: it trades and invests for the long term. Thereby profiting from its short-term trading activity and its longer-term holdings.

The whole operation is similar to my DEVX8 program. I view it as an evolutionary step in my program development. It is why I started calling this new program DEVX10 since it can go much farther than its sibling using the same concepts and trading philosophy.

This hybrid strategy, when it sees a profit, might take it and return the cash to the account, which gives it the ability to buy more shares later. Over time it ends up with a fluctuating inventory ratcheting upward, all the while accumulating shares as stock prices go up. This feedback loop is enough to feed the strategy, as if in a type of reinforcement process for its good behavior. The strategy buys more of the stocks that go up the most or are the most volatile. You can see this process explained in my book: Building Your Stock Portfolio.

A view of how this optimizer is used was also provided in another book of mine: Beyond the Efficient Frontier. In it, the same optimizer was fed randomly generated price series. In it was shown that even a small long-term upward drift was sufficient to extract profits. Adding some alpha to the mix would push performance levels even higher.

Simulations were done on thousands of portfolios containing hundreds of stocks each. Link to the original program was provided for anyone wishing to test the trade mechanics.

The tearsheets and charts presented are not illusions. They used the same tools as any other program executed on the Quantopian website. The architecture and structure of the program needs to be better understood before accepting such phenomenal performance levels.

There Is No Magic

This is not magic, there is no secret sauce, it is just a different way of looking at the portfolio management problem. Not uniquely as a trading program but as a hybrid able to trade and invest for the long term.

It is not by having the CVXOPT optimizer predicting where stocks are going that the strategy is making its money. It is mostly by holding tight and waiting. From the last tearsheet, the average holding time was about 6.16 years (1553 trading days) on its 14-year backtest.

The optimizer does not have that much of a forward vision. It is a short-term myopic predictor at best, and in this case, practically operating on quasi-random price fluctuations. A way of saying that trading decisions are made almost as if on market noise (based on parameter changes at the 8th decimal). This can be acceptable if the trading account is growing, meaning that the strategy is making money regardless.

In any strategy where you know you have a better than 50/50 win rate, you could apply a Kelly or half-Kelly fraction to the process to enhance performance, making larger bets on average, which should produce higher performance levels. The same kind of notion is applied here as well, but without the use of the Kelly criterion itself.
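For readers unfamiliar with the Kelly criterion mentioned above, here is a minimal sketch of the classic bet-sizing formula for a binary bet. The numbers are hypothetical illustrations, not figures taken from this strategy:

```python
def kelly_fraction(win_prob, win_loss_ratio):
    """Classic Kelly formula f* = p - (1 - p) / b for a binary bet,
    where p is the win probability and b the win/loss payoff ratio."""
    return win_prob - (1.0 - win_prob) / win_loss_ratio

# Hypothetical example: 55% win rate with a 1:1 payoff
full_kelly = kelly_fraction(0.55, 1.0)   # bet 10% of capital
half_kelly = 0.5 * full_kelly            # the more conservative half-Kelly
print(full_kelly, half_kelly)
```

The half-Kelly variant trades some expected growth for a much smoother equity curve, which is why it is often preferred in practice.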

The following chart shows the probability of making a profitable trading decision, even if it is understood that the strategy might be mostly playing on some market noise.

This trading strategy is a variation on a theme. It belongs to those strategies that take core positions and trade over those same positions, of which there are a great number. In fact, there are millions of methods out there, each with its own set of procedures and objectives. And there are still more undiscovered or undisclosed methods waiting to be explored.

Still, this one is different in its way of looking at things. It has a long-term perspective, it can be controlled from the outside just as the DEVX8 system can, and to top it off, you can make it really fly.

Related File:

Managing Stocks For The Long Term

A robo-adviser should be utterly simple if it is to be of any use to the masses. Robo advice usually consists of extremely simple and robust asset allocation strategies using various weighting routines and periodic rebalancing.

The name of this trading strategy has absolutely no relevance whatsoever. It is just a thread title. It could have been called: “Tiptoe In The Tulips”. How a strategy is named does not make you any money.

All I saw in the original program was that it did not crash but also did not have that much appeal. Nonetheless, I saw something in it I could use and therefore wanted to explore.

The original poster requested: "go ahead and adjust this by yourself, clone the algorithm and go ahead and start editing". Therefore, presenting modifications to this trading strategy stayed on subject. The subject at hand was not the title but the strategy itself: how it behaved and how you could make it behave.

What I generated were not just unsubstantiated opinions, but actual simulation results with corroborating tearsheets as evidence of what the program did. There is no way to generate those tearsheets without first executing a program on Q's website, at whatever stage of development it might be.

The modifications brought to the original strategy make it a totally different animal. As I have said in my last post, those changes are so considerable that it changed the very nature of the program itself, and therefore, I started calling my modified version: DEVX10. The name will change again as I add its protective measures.

Still, what was presented was a step-by-step transformation of the original strategy into this DEVX10. Part of the same program structure has been kept, but its mission has totally changed. Some might like it, others not. For those who do not see the merits of such a thing, it is not my problem, but know that those tearsheets are real.

I would say: you can take what appears to be an ordinary trading strategy and make it do a lot better by guiding it to where you want it to go. And that has to lead to higher profits, otherwise, why would you put any work in it?

A program that plays with an initial stake of $50 million is most certainly not for the masses.

All the masses could do is grab some shares of whatever fund could operate such a strategy, at least, they would get part of the strategy's CAGR. But, I seriously doubt that any firm would be interested in sharing at that performance level.

This program is utterly simple, but then, few seem to understand how simple it can be. From my standpoint, its highest recommendation was: sit on your hands and wait, but also trade over the process. My first DEVX series programs date back to at least 2011 (see those simulations on my website) and are based on programs dating as far back as 2003-04. I do not see the transformation I have made as new; it is mostly old stuff. For me, the modifications I brought to this trading strategy are a confirmation of my previous endeavors toward the same objectives.

The optimizer used (CVXOPT) is a black box.

As a matter of fact, so is the Quantopian Optimizer API. Therefore, most Quantopian users and all contest participants are operating using a black box at the core of their respective trading strategies. Here is a strange concept for some. If a trading strategy can last and prosper for years, any fund could be interested in it, whatever its composition, as long as it generates more than the long-term market averages. Over 75% do not even manage that. They all need help. It is why we have so many people on Quantopian hoping to be of help in that domain.

I reengineered the above strategy template to make it a long-term portfolio builder while still using the CVXOPT optimizer. It is now a hybrid trading strategy made to profit from short to mid-term trades as well as from long-term holdings.

There was so much to say about the transformations that I wrote a 199-page book to describe and show what it can do. The result is a highly profitable and innovative approach to managing one's portfolio.

The strategy is controlled by administrative procedures and entirely driven by equations, which makes it a different breed of long-term trading strategy.

As a preview, here is an extract from chapter 8 of my book showing the outcome of the DX-08 version of the program with $10M as initial capital and showing a $23B total profit over the 14-year simulation period.

Since the strategy was designed to be scalable, starting with $100,000 as initial capital would have generated the following using the exact same program.

But that is not the most exciting. The following chart is, for what it implies:

The above chart puts the many improvements brought to the trading strategy into context with the traditional expected optimal portfolio residing on the Capital Market Line tangent to the efficient frontier. The green line, as it goes up, represents the successive modifications made by adding desired attributes, protective measures, and features to the controlling equation:

\(\displaystyle{q_{i, j} \cdot p_{i, j} = \frac{\bar \gamma_j \cdot F_0 \cdot \kappa_j \cdot (1 + \bar \gamma_j \cdot (\bar r_m + \bar \alpha_j + \bar \alpha_r + \bar \psi_j + \bar \varphi_j) )^t}{max(j)}}\)

The above expression (equation 12.12 from the book) shows how a position's value evolves over time while in the portfolio. Notice that it is an exponential function, not a linear one, nor is it a constant. And, as an exponential function, multiple position values will tend to generate a cumulative returns chart looking like the one below:
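To make the compounding nature of equation (12.12) concrete, here is a small numeric sketch of the expression. Every parameter value below is a hypothetical placeholder for illustration, not a calibrated figure from the book:

```python
# Hypothetical evaluation of equation (12.12):
# q * p = (gamma * F0 * kappa / max_j) * (1 + gamma * (r_m + alphas))**t
F0 = 100_000        # initial capital (hypothetical)
max_j = 6           # number of stocks in the portfolio
gamma = 1.0         # leverage factor (1.0 = no leverage)
kappa = 1.0         # extra scaling term (1.0 = no effect)
r_m = 0.08          # assumed long-term market return
alpha_sum = 0.05    # assumed sum of the alpha, psi, and phi premiums
t = 14              # years, matching the 14-year backtests discussed

growth = (1 + gamma * (r_m + alpha_sum)) ** t
position_value = gamma * F0 * kappa * growth / max_j
print(round(position_value, 2))
```

The point of the sketch is the exponent `t`: even a modest premium added to the market return compounds into a large difference over a 14-year horizon.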

This makes the DX-08 version of that program an awesome trading strategy. And, I think, anyone with some programming skills could do even better.

Extracted from my new book: Reengineering Your Stock Portfolio. You want more. Then make it happen.

Hi Guy, I will take this opportunity to wish you much success in your endeavour. As you may know, it is a zero-sum game and, as someone mentioned on this forum, it is also take-no-prisoners. Oddly enough, I read another post today on a related subject: https://www.quantopian.com/posts/how-much-money-is-an-algorithm-worth

Good luck, Savio

Some might think that the above trading strategy is unrealistic or some kind of hype. It is not. Once understood, some people could do even better with this innovative approach. I know I can.

It is just a different kind of trading strategy that, instead of using technical indicators or some alpha factor(s), relies on equations, administrative procedures, and the CVXOPT optimizer to make its trading decisions.

This innovative trading strategy has a lot going for it. As a hybrid trading/investing strategy, it has a long-term view of things even if it can trade short to medium-term. Its main function is to build a long-term portfolio by accumulating shares over the entire trading interval if the variance of those stocks permits. And it can do this by reinvesting the proceeds and profits resulting from its trading activity, thereby creating a feedback loop that gives the strategy a source of additional funding.

The optimizer used is totally trade and stock agnostic. It is there to crunch the numbers it is given. It has no sentiment, no preferences, no built-in alphas, no stress, no beliefs or disbeliefs, and no presumptions. It is neither for you nor against you. It will just crunch the numbers it is given and give back its answer, be it good or bad. It will not know, care, or even be aware of the outcome of its trading decisions.

But, it will make them nonetheless.

Based on equation (12.12) provided, it could be said: evidently,

and I do say, evidently, if you do not use a \(\varphi_j \), then \(\bar \varphi_j = 0\). If you do not use \(\psi_j\), you have \(\bar \psi_j = 0\). If you do not consider \(\alpha_r\) , then \( \bar \alpha_r = 0\).

If you neglect the stock selection process, you might have the stock selection premium tending to zero \( \bar \alpha_j \to 0\). Should you not leverage the thing, then \( \bar \gamma_j \leq 1.0\), meaning no leverage is used.

And, if you do not add \(\kappa_j \), then \( \kappa_j = 1.0\) and it will also have no impact.

You would get the same result as if you had not designed any one of these features in.

Overall, the total impact of equation (12.12) would be reduced to:

\(\displaystyle{q_{i, j} \cdot p_{i, j} = \frac{F_0 \cdot (1 + \mathsf{E}[\bar r_m] )^t}{max(j)}}\)

And the above translates to the position's value having the expectation of reaching market averages. That should not be what you want or are looking for. There is certainly more than that available, and the performed simulations are a demonstration that there is definitely more available. As my book often says, it is a matter of choice: do you want it or not?

If you do, then make it happen.

Design your own version of the program using the same principles and make it your own. This way you will understand what you are doing and will gain the confidence needed to keep your trading strategy on course in bad and good times. Because it will be your own strategy design where you become responsible for what you do and what you achieve.

@Guy

Can you give us a base off of which to build, for me at least the maths you're using is almost completely obscure and as a result not of much help. I doubt very many people understand what exactly it is that you are doing, what would help though (in the spirit of Quantopian) is a basic version of what you've done. I at least find it far easier to understand code than maths :)

@Savio, in a zero-sum game, why on earth should anyone even bother playing the game?

There is a zero reward expectation, and therefore, a game not worth playing. Unless you like the entertainment value. Who knows, you might be lucky. Would you put your retirement fund or anybody else's funds, for that matter, on the table on such a bet?

The same goes for anyone developing stock trading strategies on this premise, again, why bother? You are doomed from the start since your long-term expectation is still zero. Why waste years of your time in such a pursuit? This is the same as some critics who advocate that all trading strategies will fail with time. What nonsense! If they all fail with time, why design a trading strategy to play the game in the first place?

Badly designed and misfitted trading strategies do fail simply because they are badly designed. If the developer of such a strategy does not see it, it is nobody's fault but their own. It is why you should do simulations over extended time periods to see how your trading strategy will behave under adverse conditions.

Over the long run, those who lose are the gamblers. They design casino-like trading strategies and then are surprised they lost their capital. Let it be said: if you design a stock trading strategy as if it were a casino game, do not expect more than from playing heads or tails (another zero-expectancy, zero-sum game).

The solution is to develop an edge, whatever it is. And it must be sustainable.

And once you do have that edge, it becomes a non-zero-sum game. The long-term player knows this. All he/she has to do is hold the equivalent of the market average for a long time, and they will win the game, almost assuredly.

If you have an edge, you can show and demonstrate that you do have that edge simply by winning the game. If you do not win, then you did not have that "secret sauce" of yours, or you were simply fooling yourself with some past anomalies that could only have occurred in the past and which might be non-existent going forward. Or simply almost impossible to detect when they do occur.

On the subject of how much a trading strategy is worth, I would point out that Quantopian, within its contest, is ready to pay 10% of the generated profits for strategies that meet their selection criteria. This prices those trading strategies at 10% of net profits.

In the trading strategy I presented, should it fit some future Quantopian criteria (meaning when they have more relaxed rules), this would value the above strategy at $2.3B over a 14-year period. Should it be given away?

I would add: if you do the same thing as everybody else, you should not be surprised in achieving the same results.

@Jamie, clone the second strategy from the top. It is the basic template on which I built this strategy. My first observation after @Rao posted it was: “...you must be aware that your robot-advisor can do more by feeding more to the CVXOPT optimizer.”

Read my posts in this forum and in: https://www.quantopian.com/posts/the-capital-asset-pricing-model-revisited. Also, look at the attached tearsheets. They will help you understand what is being done in this unique strategy and how the optimizer is put to use. There are quite a few articles on my website explaining the trading philosophy behind all this.

Regardless, I will not be putting out the current code. It has value. However, I do think that anyone could reengineer, or reverse-engineer, a variant on the same theme which could do even better. My latest version still has more to offer, I have not reached its limits.

I would prefer that people develop their own designs based on their own convictions. This way they would understand and be responsible for their own code, and be able to control and manage it better.

Most of all, I do not want to be responsible for anyone misunderstanding whatever trading strategy I design and their further misuse of it. Whereas, if you reengineer your own version, it becomes your responsibility. You will be in charge of your own trading strategy and its misuse if any.

What I wanted to show is that it was possible, scalable, and doable. That is what is demonstrated in my latest book. Note that this strategy is simply putting in practice what was developed and demonstrated in my prior book, Beyond the Efficient Frontier, where jumping over the efficient frontier was illustrated using the CVXOPT optimizer on randomly generated price data.

The math thing is not new. It is about the same equation as published in my 2007 paper, which operated on the same theme in order to control the portfolio's evolution. Note that this time it took a 199-page book to explain what was being done to this strategy and what all the math implied. I can hardly summarize all of it in a few sentences.

You're hiding the positions, am I correct in assuming it's still the original basket?

@Jamie, the initially selected stocks have been changed. I could not put in more than 6 stocks, otherwise the program would crash (I do not know why). But even with 6 stocks, it was enough to show how the trading strategy behaved. And there was no loss of generality.

It is one of my tasks going forward to find ways to increase the selected number of stocks since that would help reduce volatility and drawdowns.

Three of the original stocks were kept. FB was dropped: no data before its IPO. And FDX, BKNG, AMZN were added since they were stocks I followed in years prior to the simulation's start date.

Note that my estimate for selectable stocks is about 800, and even there, my interest will be for the top 50 stocks or so if the selection method, the optimizer, and the program can handle it.

Disclaimer

Didn't think I would need to say this, but please don't clone the algorithm, it's supposed to be overfit! It won't necessarily work :)

@Guy

I'm a little confused about what it is you've done here, you're trading 6 stocks which all went on incredible runs. How does the strategy perform when you put in 6 stocks that were average performers? Or 6 stocks that underperformed. I can't help but feel that this strategy is massively overfitted. It's fairly easy to say that these were stocks you followed for years prior to the simulation start date, but as you say yourself:

[you might be] simply fooling yourself with some past anomalies that could only have occurred in the past and which might be non-existent going forward

There is no reason for AAPL, AMZN or any of these companies to necessarily continue their run, it could even become less likely for them to continue given their massive size. If the plan is to add in different stocks to get the same returns going forward, that is just stock picking and becomes subject to the same fallibility.

If I wanted to, I could go back and find the 10 stocks which have performed the best from 2003-2019 and create a strategy off of that, and I don't see too much of a difference between that and the 6 stocks you've picked.

Perhaps a good approach may be to have a pipeline create a rolling basket of 6 stocks (or more; I've managed to get CVXOPT to work with a significantly larger number, you just have to use try/except statements). That way you can guarantee there is no overfitting. I'd be very interested to see your results.
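The try/except suggestion above can be sketched as follows. The quadratic program mirrors the shape of the `optimal_portfolio` routine posted below, and falling back to equal weights when the solver fails is one assumed recovery choice among many:

```python
import numpy as np

def safe_optimal_weights(returns):
    """Attempt a CVXOPT mean-variance quadratic program on an
    (n_assets, n_obs) return matrix; if cvxopt is unavailable or the
    solver fails, fall back to equal weights so the algorithm keeps
    running instead of crashing."""
    n = returns.shape[0]
    try:
        import cvxopt as opt
        from cvxopt import solvers
        solvers.options['show_progress'] = False
        S = opt.matrix(np.cov(returns))            # covariance matrix
        pbar = opt.matrix(np.mean(returns, axis=1))  # mean returns
        G = -opt.matrix(np.eye(n))                 # -w <= 0, i.e. long-only
        h = opt.matrix(0.0, (n, 1))
        A = opt.matrix(1.0, (1, n))                # weights sum to 1
        b = opt.matrix(1.0)
        sol = solvers.qp(S, -pbar, G, h, A, b)
        if sol['status'] != 'optimal':
            raise ValueError('solver did not converge')
        return np.asarray(sol['x']).flatten()
    except Exception:
        return np.full(n, 1.0 / n)                 # fallback: equal weights
```

Either branch returns a valid long-only weight vector summing to one, so the calling code never has to special-case a solver failure.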

Obviously CVXOPT plus whatever you're doing is adding alpha, but it should be acknowledged that the basket you were trading also generated significant alpha.

Good luck with your research!

The strategy below was at 1x leverage, compared to your 1.4x average in your latest post (which spiked to 2x plus, I believe?). So I don't know how your strategy performs once leverage is brought back down to 1x.

# NOTE: sid, symbols, order, log, schedule_function, date_rules, and
# time_rules are provided by the Quantopian/zipline environment.
import numpy as np
import cvxopt as opt
from cvxopt import blas, solvers

# Quiet CVXOPT's solver progress output
solvers.options['show_progress'] = False

def initialize(context):
    context.stocks =  [sid(19917), sid(16841), sid(19725), sid(23709), sid(2765), sid(24)]
    #context.stocks = symbols( 'VXUS', 'VTI', 'BND', 'BNDX')
    context.weights = False
    context.target_allocation = {}
    context.tick = 0
    schedule_function(
    func=before_trading_starts,
    date_rule=date_rules.month_end(),
    time_rule=time_rules.market_open(hours=1))
    
def before_trading_starts(context, data):
    # Skip the drift check until target weights have been computed
    if not context.target_allocation:
        return
    # total value of portfolio
    value = context.portfolio.portfolio_value
    # calculating current weights for each position
    for stock in context.stocks:
        if context.target_allocation[stock] == 0:
            continue
        current_holdings = data.current(stock, 'close') * context.portfolio.positions[stock].amount
        weight = current_holdings / value
        growth = float(weight) / float(context.target_allocation[stock])
        # if the weight of any position drifts past the threshold, trigger a rebalance
        if growth >= 1.05 or growth <= 0.95:
            rebalance(context, data)
            break
    else:
        # for/else: only logged when no position crossed the threshold
        log.info("No need to rebalance!")

def rebalance(context, data):
    allocate(context,data)
    for stock in context.stocks:
        current_weight = (data.current(stock, 'close') * context.portfolio.positions[stock].amount) / context.portfolio.portfolio_value
        target_weight = context.target_allocation[stock]
        distance = current_weight - target_weight
        if (distance > 0):
            amount = -1 * (distance * context.portfolio.portfolio_value) / data.current(stock,'close')
            if (int(amount) == 0):
                continue
            log.info("Selling " + str(int(amount * -1)) + " shares of " + str(stock))
            order(stock, int(amount))
    for stock in context.stocks:
        current_weight = (data.current(stock, 'close') * context.portfolio.positions[stock].amount) / context.portfolio.portfolio_value
        target_weight = context.target_allocation[stock]
        distance = current_weight - target_weight
        if (distance < 0):
            amount = -1 * (distance * context.portfolio.portfolio_value) / data.current(stock,'close')
            if (int(amount) == 0):
                continue
            log.info("Buying " + str(int(amount)) + " shares of " + str(stock))
            order(stock, int(amount))

def allocate(context, data):
    prices = data.history(context.stocks, 'price', 500, '1d').dropna()
    returns = prices.pct_change().dropna()
    context.weights, _, _ = optimal_portfolio(returns.T)
    log.info(str(context.weights))
    context.target_allocation = dict(zip(context.stocks, tuple(context.weights)))
    
def handle_data(context, data):
    # On the first bar, compute the target allocation and take the
    # initial positions; later rebalancing is handled by the scheduled
    # monthly drift check in before_trading_starts.
    if context.weights is False:
        allocate(context, data)
        log.info(str(context.portfolio.starting_cash))
        for stock in context.stocks:
            amount = (context.target_allocation[stock] * context.portfolio.cash) / data.current(stock, 'price')
            # only buy if cash is allocated to this stock
            if int(amount) != 0:
                order(stock, int(amount))
                log.info("Buying " + str(int(amount)) + " shares of " + str(stock))

def optimal_portfolio(returns):
    n = len(returns)
    returns = np.asmatrix(returns)
    
    N = 500
    mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
    
    # Convert to cvxopt matrices
    S = opt.matrix(np.cov(returns))
    pbar = opt.matrix(np.mean(returns, axis=1))
    
    # Create constraint matrices:
    # G x <= h with G = -I and h = -0.15 enforces a minimum weight of
    # 15% per asset; A x = b forces the weights to sum to 1.
    G = -opt.matrix(np.eye(n))   # negative n x n identity matrix
    h = opt.matrix(-0.15, (n, 1))
    
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)
    
    # Calculate efficient frontier weights using quadratic programming
    portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x'] 
                  for mu in mus]
    ## CALCULATE RISKS AND RETURNS FOR FRONTIER
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
    ## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
    m1 = np.polyfit(returns, risks, 2)
    x1 = np.sqrt(m1[2] / m1[0])
    # CALCULATE THE OPTIMAL PORTFOLIO
    wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
    return np.asarray(wt), returns, risks

@Jamie, the modifications I brought to the original program are substantial. They even changed the nature of the program and its mission.

First off, on the stock selection process: we could make the same argument for any trading strategy. We could classify any stock selection method as hindsight, since prior to the selection we did not know where the stocks were going, but now we do have their history. It is only after they have gone somewhere that we find some related past data to justify our selection. This makes it a selection bias coupled with a survivorship bias. So, on that argument, we are on the same page. I do understand your reasoning and objections; I had the same.

Here, the selection process was frozen over the same duration in order to evaluate the strategy's behavior, thereby insulating all the simulations from a dynamic selection process and putting all performance comparisons on the same basis. A way of saying that if you added some alpha over and above your initial selection, it would not have been due to the selected stocks, but to your portfolio management skills.

The strategy, with my modifications, became a long-term investment strategy that uses trading as an added funding mechanism. Meaning that the trading complements the acquisition process on which the strategy relies to reach its superior alpha. And this implies that the stocks you might select do have long-term prospects, that they could survive and prosper for years and years. This simple fact will limit your stock selection process.

I estimate there are about 800 stocks that could be of interest in this type of trading strategy. That is about 10% of the USEquityPricing dataset. The rest is of no interest. The reason is simple. The program was reengineered to accumulate shares over the long term.

Certainly, it should not do that with stocks that are going nowhere or are going down the drain. If, for some reason, you do not find positive prospects for the next five years and more, then that stock should not be in your portfolio in the first place.

Furthermore, if the stocks you selected cannot show they can make new highs, they do not deserve to stay in your portfolio. You monitor that your selected stocks do indeed make new highs, and continue to have positive long-term prospects. This can be done on a daily basis.
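The daily new-high monitoring described above can be sketched as a simple rolling-maximum test. The 252-day lookback and 5% tolerance below are assumed illustration parameters, not values taken from the strategy:

```python
import numpy as np

def makes_new_highs(prices, lookback=252, tolerance=0.05):
    """Return True if the latest price is within `tolerance` of its
    rolling high over the last `lookback` bars -- a simple proxy for
    the 'continues to make new highs' test described above."""
    window = np.asarray(prices[-lookback:], dtype=float)
    return window[-1] >= (1.0 - tolerance) * window.max()

# Hypothetical use: a steadily rising series passes, a collapsing one fails
rising = np.linspace(100, 180, 300)
print(makes_new_highs(rising))        # True
falling = np.concatenate([np.linspace(100, 180, 150),
                          np.linspace(180, 90, 150)])
print(makes_new_highs(falling))       # False
```

A stock failing this kind of check for an extended period would, per the premise above, be a candidate for removal from the portfolio.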

It is like asking the question: can NVDA continue to prosper going forward? I have answered yes to that question for years and years. Can BB do the same? On that one, I answered no a year or so after the release of the iPhone. So, I would not accumulate shares in BB, it would simply go against the premise of this trading strategy.

Because the stock selection was fixed, I needed stocks that survived the last 14 years in order to demonstrate the trading mechanics of this strategy.

BTW, the selected stocks were given in the above tearsheet: https://www.quantopian.com/posts/built-robo-advisor#5c6d8a5c8d26700facd50187. Surprisingly, it was neither AAPL nor AMZN that turned out to be the best performer.

You have the program template. What is left is to modify the code to make it exceed the 200,000% total return. To accomplish that will be independent of the stock selection since it will be the same. However, you will be able to extract the why of the alpha and then be able to apply it to other stock selections going forward. One thing you will know is that it will not be the stock selection, the duration of the simulation, nor the initial capital that will generate those CAGR levels since the strategy is scalable. The difference will be the skills you put in. The simulations I have done are a proof of concept that it can be done and with relative ease.

My new book: Reengineering Your Stock Portfolio will cover all this in much more detail.

My new book is out (197 pages). Here is part of its intro.

Reengineering Your Stock Portfolio starts with a friend, with whom I often discuss my trading strategies, saying: why not write a book on this one? A few days later, I sat down and started writing without needing a plan, knowing where I wanted to go and what I needed to do.

My take was: go ahead, simply do it. Modify the original program found on the web as need be and document what you see. At times, I even had a simulation running in the background while I was writing about the coming test results, knowing they would be positive. I would then take snapshots of the results I found interesting and document what I saw.

Reengineering Your Stock Portfolio will take you from the simple to the more sophisticated, as in a crescendo to the finish. Each step of the way gives you the building blocks to transform the initial trading strategy, which is freely available on the web, just like the open-source Python optimizer library CVXOPT used in these simulations.

The main innovation of Reengineering Your Stock Portfolio is giving the strategy the ability to jump over the portfolio efficient frontier as if it were a simple line in the sand, and thereby generate much higher returns than would be expected playing this game. Jumping over the efficient frontier barrier will put us in unexplored risk-return territories and might force us to reconsider our attachment to old portfolio management theories.

Reengineering Your Stock Portfolio will be using the CVXOPT optimizer library and will make it responsible for trading decisions. Based on its inputs, the optimizer will determine how many shares will be traded and when, if any. The strategy effectively transfers the trade decision-making, as well as the final outcome, to this black box: the CVXOPT optimizer.

The singularity of this trading strategy will be that it is all controlled by equations. No technical indicators nor alpha-factors will be used.

By itself, the optimizer is totally trade and stock agnostic. It is there to crunch the numbers it is given. It has no sentiment, no preferences, no pressure, no stress, no beliefs, no presumptions. It is neither for you nor against you. It will just crunch the numbers it is given and give back its answer, be it good or bad. It will not know or even be aware of the outcome of its trading decisions.

The initial trading strategy will be modified, reengineered, and controlled entirely by equations, as if we added features or administrative procedures. And since we will be adding these equations ourselves, it will give us control over the trading strategy.

The latest version of this trading strategy is exceptional, not only performance-wise but especially considering what it relies on. You have this black box optimizer not knowing at all what you are doing, and yet it is able to extract outstanding results from the market.

This goes beyond traditional trading methods.

I simply went for the practical side of things knowing beforehand that it could be done. This book will provide you with not only proof of concept but also the building blocks to reengineer this trading strategy and make it your own.

The most comprehensive equation is at the end, but most likely you will need to read how I got there in order to better understand where it came from.

Your task, should you accept it, will be to reengineer this strategy template to your own liking, making it do what you want it to do just as I did make it my own in these pages. The solution provided is not the only solution. There are a multitude of other strategies that can operate as variations on the same theme, and do as good if not better than this one.

@Guy,

Congratulations on your new book! What is this, the third or fourth book you have authored off of algorithms posted in the Quantopian Community Forum?

Anyway, I hope you have prudently included the standard disclaimers inside your book:

Important Risk Disclosure  
THE RISK OF TRADING CAN BE SUBSTANTIAL AND EACH INVESTOR AND/OR TRADER MUST CONSIDER WHETHER THIS IS A SUITABLE INVESTMENT. PAST PERFORMANCE IS NOT NECESSARILY INDICATIVE OF FUTURE RESULTS.

IT SHOULD NOT BE ASSUMED THAT THE METHODS, TECHNIQUES, OR INDICATORS PRESENTED IN THESE PRODUCTS WILL BE PROFITABLE OR THAT THEY WILL NOT RESULT IN LOSSES. PAST RESULTS ARE NOT NECESSARILY INDICATIVE OF FUTURE RESULTS. EXAMPLES PRESENTED ON THESE SITES ARE FOR EDUCATIONAL PURPOSES ONLY. THESE SET-UPS ARE NOT SOLICITATIONS OF ANY ORDER TO BUY OR SELL. THE AUTHORS, THE PUBLISHER, AND ALL AFFILIATES ASSUME NO RESPONSIBILITY FOR YOUR TRADING RESULTS. THERE IS A HIGH DEGREE OF RISK IN TRADING.

Risk Disclosure Statement  
THE RISK OF LOSS IN TRADING FUTURES, OPTIONS AND SECURITIES CAN BE SUBSTANTIAL. YOU SHOULD THEREFORE CAREFULLY CONSIDER WHETHER SUCH TRADING IS SUITABLE FOR YOU IN LIGHT OF YOUR FINANCIAL CONDITION.

PAST PERFORMANCE DOES NOT GUARANTEE FUTURE RESULTS. WE DO NOT PROMOTE ANY STOCKS, OPTIONS, OR COMMODITIES OF ANY KIND ON THIS SITE. THE PRODUCTS AND/OR SERVICES ON THIS SITE GENERATE MECHANICAL TRADING SYSTEM SIGNALS AND IS NOT INVESTMENT ADVICE. WE DO NOT AND HAVE NOT RECEIVED ANY COMPENSATION FROM ANY COMPANY WHOSE STOCK APPEARS ON THIS SITE OR IN OUR NEWSLETTERS. WE HAVE NO FINANCIAL INTEREST IN THE OUTCOME OF ANY TRADES MENTIONED HEREIN. THERE IS SUBSTANTIAL RISK OF LOSS TRADING FUTURES, STOCKS AND OPTIONS, YOU NEED TO DETERMINE YOUR OWN SUITABILITY TO TRADE FUTURES, STOCKS AND/OR OPTIONS AND THERE MAY BE TAX CONSEQUENCES FOR SHORT TERM PROFITS/LOSS ON TRADES. CONSULT YOUR TAX ADVISOR FOR DETAILS ON THIS IF APPLICABLE.

Hypothetical Performance Disclaimer  
HYPOTHETICAL PERFORMANCE RESULTS HAVE MANY INHERENT LIMITATIONS, SOME OF WHICH ARE DESCRIBED BELOW. NO REPRESENTATION IS BEING MADE THAT ANY ACCOUNT WILL OR IS LIKELY TO ACHIEVE PROFITS OR LOSSES SIMILAR TO THOSE SHOWN. IN FACT, THERE ARE FREQUENTLY SHARP DIFFERENCES BETWEEN HYPOTHETICAL PERFORMANCE RESULTS AND THE ACTUAL RESULTS SUBSEQUENTLY ACHIEVED BY ANY PARTICULAR TRADING PROGRAM.

ONE OF THE LIMITATIONS OF HYPOTHETICAL PERFORMANCE RESULTS IS THAT THEY ARE GENERALLY PREPARED WITH THE BENEFIT OF HINDSIGHT. IN ADDITION, HYPOTHETICAL TRADING DOES NOT INVOLVE FINANCIAL RISK, AND NO HYPOTHETICAL TRADING RECORD CAN COMPLETELY ACCOUNT FOR THE IMPACT OF FINANCIAL RISK IN ACTUAL TRADING. FOR EXAMPLE, THE ABILITY TO WITHSTAND LOSSES OR TO ADHERE TO A PARTICULAR TRADING PROGRAM IN SPITE OF TRADING LOSSES ARE MATERIAL POINTS WHICH CAN ALSO ADVERSELY AFFECT ACTUAL TRADING RESULTS. THERE ARE NUMEROUS OTHER FACTORS RELATED TO THE MARKETS IN GENERAL OR TO THE IMPLEMENTATION OF ANY SPECIFIC TRADING PROGRAM WHICH CANNOT BE FULLY ACCOUNTED FOR IN THE PREPARATION OF HYPOTHETICAL PERFORMANCE RESULTS AND ALL OF WHICH CAN ADVERSELY AFFECT ACTUAL TRADING RESULTS.  

@James, thanks. I always put disclaimers at the start of each book. Not in caps or as elaborate as the ones you provided, but disclaimers no less. Also, all over the text, you will find things like: common sense needs to prevail. Not only that, it is often said: it is your choice to do it or not.

The point is: if you cannot first convince yourself on the benefits of a particular long-term trading strategy, why on earth would you even try to apply it? It is why you do simulations in the first place, to see if at least over an extended past it could have been beneficial.

My modifications to the original program have made it probably the most amazing and innovative stock trading strategy on this site.

As said in a previous post: the strategy has become a long-term investment strategy that uses trading as an added funding mechanism. Meaning that the trading is there to complement the acquisition process on which the strategy relies to reach its superior alpha.

You have the program template (take @Jamie's version, and go from there). What is left is to modify that code so that you too could make it exceed the 200,000% in total return.

I made the mistake of commenting on this post - does anyone know how I can stop receiving updates by email?

@Savio, that is easy. You just press on “listening” at the top right of the thread.

Thank you, Guy, much appreciated, and best wishes with your book.

@Savio,

You can unclick 'Listening' at the top right hand side. That should do it I think.

@Guy,

Have any of your recent books been peer-reviewed and critiqued by finance professors or other quant finance experts? When was the last time you published any of your work in a major research journal? Doing a quick search on your name as the author on SSRN.com or arxiv.org doesn't return anything. There are other places though I suppose.

Past absolute returns don't mean much in my book. How confident are you that your strategy will outperform on a risk-adjusted basis going forward? That should always be the question, and the only question. In my opinion anyway. Is it likely to return 200,000% total return in the next 14 years, and other than your nice backtest, what proof do you have to support this? The stocks in your portfolio would be gigantic by then if that were the case.

Backtesting is just a tool to see how well a strategy would have performed with trading friction on historical data. It's relatively easy to make a backtest look fantastic on historical data. That doesn't impress me one bit. Again, it's future performance that's all that matters.

Frankly and respectfully, I think some of your posts and the books you're plugging do more harm than good. That's my honest opinion.

Thank you, Joakim, that worked perfectly. Much appreciated. The person who started this post, Rao Vinnakota, has been strikingly silent. Anyway, I best get back to work.

@Joakim, the modifications to the original trading strategy are all within trading rules admissible using the Quantopian software. There is no look ahead. In fact, based on my last tearsheet, the average holding period was about 550 days. The strategy is an EOD strategy that sends calculated weights to the optimizer which executes trades. This puts the trading logic in the hands of the optimizer. It will execute its market orders the next day at 10:30 am. It uses Quantopian's commissions and slippage default settings.

My modifications have been chronicled live in this thread using the basic template provided. It was progressively transformed to do something other than what it was designed for. Which is what everybody here does to any trading strategy they clone. They bring modifications of their own, hoping that the backtest they run will perform better than the original version of the program.

I am not looking for peer review. I am a practitioner. It is very simple: a trading strategy works or it does not. I do not expect the future to be the same as the past. But I do expect that if I program a strategy to take a 10% profit target on a trade, it will be executed as well in the future as it was in the past, whatever the stock might be or might do.

The trade mechanics of a trading program are important. The backtest is not the important thing, I agree. What is, however, is this mechanic: what the program is programmed to do. And since you do not have access to future data, you simulate your trading procedures and software routines using historical market data. If your trading procedures have some value, they will show it over past data too by generating profits at whatever level. You are then the one to decide if it is worth it or not. This does not stop you from designing crap. But it does help you disregard and discard such strategies. I have seen a lot of those poorly designed strategies over the years.

You express an opinion, when none of the things I do are harmful or undoable. I do agree that the charts presented are based on quite a different rationale than what we usually see on Quantopian. But having a different kind of trading strategy is not wrong, it is only different. And if such a trading strategy can prosper better than others, the more acceptable and sought-after this difference should be.

I could ask: if you delegate trading decisions to the CVXOPT optimizer, is it cheating in some way? Or is it not perfectly acceptable, whatever the mixture or the origin of your weighting scheme?

I do expect that the CVXOPT optimizer will do over future price data as it does over past market data. Is that wrong? I do not think so. If your trading strategy is different from everyone else's, is it harmful? I do not think so. It is just different, and it can prosper like any other strategy might.

You make the case that backtesting is technically worthless <...Past absolute returns don't mean much in my book>. You add: <...It's relatively easy to make a backtest look fantastic on historical data>.

You have a real opportunity here. Make this one fly! At least, I know I can.

This trading strategy has become the most unusual and innovative strategy on Quantopian. It does not fit in any of the trading categories on this site. It is not pairs trading, nor is it so much concerned with volatility reduction. It does not use factors. It ignores betas, sigmas and the very notion of risk. It is not market-neutral or sector-neutral. It does not rebalance all the time, meaning it does not trade all the time. It is controlled and managed by equations and administrative procedures. And, it is at the mercy of the CVXOPT optimizer, its proxy decision maker.

However, it does make this huge bet on America, as Mr. Buffett puts it, in that, as a premise, it expects a long-term continuation of the prosperity of a nation. As such, it will favor the buy side of the investment equation. It is not nonsense to say: buy stocks that have positive long-term prospects. There are a lot more than 6 stocks that will qualify under such a selection criterion.

This trading strategy is so simple that it can be reduced to 2 numbers: \(n \cdot \bar x \). One is a trade counter and the other the average net profit (or loss) per executed trade.
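That "two numbers" view can be sketched in a few lines of Python. The trade P&L figures and initial capital below are invented for illustration; this is not code from the strategy or the book:

```python
# The payoff of any trading strategy, summarized by two numbers:
# n, the count of executed trades, and x_bar, the average net
# profit (or loss) per trade. Final equity = initial + n * x_bar.

def payoff_summary(trade_pnls, initial_capital):
    n = len(trade_pnls)              # trade counter
    x_bar = sum(trade_pnls) / n      # average net profit per trade
    final_equity = initial_capital + n * x_bar
    return n, x_bar, final_equity

# Hypothetical trade results for illustration only.
n, x_bar, equity = payoff_summary([120.0, -40.0, 95.0, 10.0, 65.0], 10_000.0)
print(n, x_bar, equity)  # 5 50.0 10250.0
```

Note that n · x̄ is just the sum of all trade P&Ls, so raising either the trade count or the average profit per trade raises the total by the same arithmetic.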

The strategy's premises advocate buying for the long term, which it does. But there is a limit. If you buy for the long term, you are in a buy & hold scenario and all you should expect are market averages. That is not that high. It is sufficient to look at the S&P 500 over the last 20 to 30 years; that would be your expected market return over the next 20 to 30 years, obtainable just by buying an index tracker.

To change that outlook, you opt to trade, but with the twist of deliberately accumulating shares over the trading process. You want the trading to generate profits that will be reinvested in other trades. The trading profits provide you with additional funds that can be used to continue trading and accumulating more shares.

It is like injecting new funds into your trading portfolio all the time. And yes, it will increase your overall performance: you are adding extra cash all the time. What should you expect other than to see your portfolio grow and achieve higher returns?

You have this trading strategy with a positive future outlook. It could just sit tight and wait for its long-term market average expectancy, or it could trade hoping to exceed such a limitation. However, we know that simply trading is not enough. Most of the traders and professionals at this game fail to even exceed the market averages. We should not expect to do better with our own variations on all those “professional” trading themes.

We need to be different in order to really differentiate ourselves from the pack. If you do the same thing as everybody else, or a variation thereof for that matter, should you not expect to obtain about the same results?

Why does this innovative trading strategy have such a high total return? That is simple to answer too.

It is all about the power of compounding. First off, it is a compounding game that is being played. Second, you are reinvesting the generated profits into other trades that are expected to also be profitable. You intend to do a lot of trades over the long term.

So the profit from a single trade is reinvested into other trades, which generate profits that are, in turn, reinvested in still other trades, and so on. It becomes a feedback loop where thousands of trades generate compounding revenue streams.

You are compounding on compounding on compounding for years and years over thousands of trades. The impact is simple to observe. You get a compounded equity line as illustrated in the presented equity charts.

The high total return level is not even a surprise. It is the inevitable outcome of the trade mechanics and the strategy's long-term methodology.
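The difference between banking profits and reinvesting them can be illustrated with a toy calculation. The 2% average gain per trade and the trade count below are made-up numbers for illustration, not figures from any strategy discussed in this thread:

```python
# Illustrative only: reinvested profits compound, banked profits do not.

def simple_total(capital, gain_per_trade, n_trades):
    # Profits are banked and never reinvested: growth is linear in n.
    return capital + capital * gain_per_trade * n_trades

def compounded_total(capital, gain_per_trade, n_trades):
    # Each trade's profit is rolled into the next: growth is geometric.
    return capital * (1 + gain_per_trade) ** n_trades

print(simple_total(10_000, 0.02, 100))      # 30000.0
print(compounded_total(10_000, 0.02, 100))  # roughly 72,446
```

Over 100 hypothetical trades the reinvested version already more than doubles the banked version; over thousands of trades the gap becomes the dominant effect, which is the feedback loop described above.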

I was going to hold off on replying again to avoid hijacking the thread, but I can't let you say:

This trading strategy has become the most unusual and innovative strategy on Quantopian

without a reprimand. The strategy is clearly overfit: you pick 6 stocks with a spectacular track record and seem to think you've spun gold when those stocks, believe it or not, produce a spectacular backtest!

If one could get someone from 2033 to come back and tell us which stocks performed best from 2019 to 2033, maybe this strategy would have merit; unfortunately, however, to the best of my knowledge the opaque maths you're filling the forums with doesn't hold the key to time travel. Maybe it does, though; it's not like any of the variables have been defined. If that is the case, this trading strategy is definitely the most innovative on Quantopian!

As it stands, however, all that's been achieved is some mild optimisation on stocks with spectacular track records, which has been fitted over 30+ backtests into an attractive-looking return curve so you can plug your book.

@Joakim, @Jamie

I am wondering how you can comment on the strategy if you have never seen the code?
Maybe it is really the most unusual and innovative strategy on Quantopian.
Guy Fleury has an unusual ability to make any strategy several times more productive by changing one or two parameters.

@Guy

Congratulations on your new book!
If the code is in your new book, I will definitely buy it for only $9.99 per month on Amazon.com.

@Vladimir

You have a fair point, but given that at base he is holding 6 stocks with exceptional returns, we don't really need to see the code to comment on 'overfittedness' which was really all Joakim and I have commented on.

But this is now hijacking the thread, so I'm going to leave it at that.

@Guy, have you applied non-zero trading fees to your backtest?
From my experience, they can have a significant impact on the performance of the algorithm.

@Jamie,

You tried to reproduce Guy Fleury's strategy on the same stocks, but his results are 35 times higher than yours, with a lower drawdown.
Isn't that unusual?

@Vladimir,

Not at all. Apply a bit of leverage (not so much that drawdown will kill it of course; 1.3 - 1.6x might be ok), as Mr. Fleury has done in his strategy, and allow the magic of compounding to do its thing.

From his latest tear-sheet:

Anyway, I'm done with this thread.

@Joakim,

It is not about the Feb 16, 2019 notebook:

Total Return 32,049%, Max Drawdown 78%

Those results I can easily beat myself.

But the Apr 24, 2019 post:

Total Return 244,862%, Max Drawdown 27%

@Vladimir,

Easy. Right before the GFC, switch to stocks that held up reasonably well during the GFC, and then switch back to the 'winners' from March 2009 or thereabouts. This way you can crank up leverage even further.

Hindsight is always 20/20. I have no interest in creating an overfitted, selection biased, and survivorship biased backtest just to prove a point though. That's just a distraction and a complete waste of time in my book.

Note, I'm only posting these 'critiques' of this 'amazing' strategy mostly as a warning to new and/or somewhat naive users not to take everything they see on here as gospel. This, frankly, in my view, is nothing but snake oil.

Alright, that's it, I'm done. No more posts from me on this thread, I promise.

I understand people could be skeptical. I would too.

It was my major concern even before writing the book. It took 197 pages to explain the development of this innovative trading strategy, and only part of it was chronicled live on this website. My very first comment was: “@Rao, you must be aware that your robo-advisor can do more by feeding more to the CVXOPT optimizer.”

What I presented is a long-term trading strategy, and as such, it does not have a short-term perspective even if it will profit from one. Look at the equity chart. The first 3 years appear almost flat while the last 3 are on an exponential curve. My interest is mainly in those last 3 years and beyond, where it counts the most. But to get there, you had to be there all the way, letting the strategy accumulate trading profits and shares.

@Vladimir, intentionally, the code is not in the book. The book's objective is to provide the tools to transform your best trading strategies using controlling equations to guide your portfolio to higher levels. Not to blindly follow someone else's strategy, but to transform your own into something more. This way, you would better understand what your trading strategy will be required to do and gain the confidence needed to apply it, since it would then be your own creation, on your own machine, where you could control it at will.

I do not want to be responsible for anyone misusing or misunderstanding my strategies. If people created their own on the same theme, it becomes their responsibility to manage them as they see fit, not mine. All I wanted to do was provide an example of what can be done which could be replicated in many other ways by anyone once they understood how it is done. I expect everyone's solution to be implemented differently, and that is great.

The presented strategy is governed by equations. I added components one at a time, at first with a total disregard for protective measures, and later on gradually bringing in the protective measures designed to lower the total drawdown. Not necessarily the volatility or the beta, even if it did reduce those too, especially the beta.

@Jamie, I do not see your comments as hijacking. In a way, they turn out to be legitimate questions.

Is the trading strategy overfitted or not? I answer no. The optimizer is in charge of the trading activity. Therefore, the question should be: Can I overfit the optimizer? For sure, I can influence it, but is that overfitting? I still do not know when it will trade, which stocks will be affected, how many shares will be bought or sold and when. If I am in the dark as to what the optimizer will trade, if it does, am I overfitting?

The whole code section of the optimizer was not changed. And, as said, I have chronicled the additions of several equation components I used as part of the administrative procedures taken. I certainly could not have done that in a single step. But see my post: https://www.quantopian.com/posts/built-robo-advisor#5c6d8a5c8d26700facd50187 where the progression of some of the steps was provided. Add a feature, and if the program crashes, debug, and repeat.

Performance started relatively low and went up progressively as I added the driving components of the equations. This was not optimizing parameters; it was adding functionality (features) expressed as long-standing equations. And I did them one at a time. The reason is simple: it is easier to debug as an iterative process. Again, you make some modifications, re-run the program to see that it terminates without a crash. Then, you continue.

These equations span the entire trading interval. It was not just a trade here and there that was affected; it was all of them, every single one, oftentimes just by pennies.

Another point: it did not matter which stocks were picked to show the trade mechanics of this strategy, as long as they fitted my selection criteria. With the same basic version of the program, which gave a 7,967% total return on the same six stocks using the same optimizer, the question is:

How can you add alpha to a group of stocks that already showed alpha? And how can you raise this alpha to reach 200,000%+ while keeping it scalable, sustainable, marketable, and executable?

What you learn in doing that will also serve you with other stock selection schemes. The real added alpha here is from the 7,967% total return strategy that is gradually being raised to 200,000%+.

Nobody here is trading using equations, or administrative procedures, for that matter. Nor have I seen anyone discussing controlling their trading strategies. And this is sufficient to call my version of the original program innovative and unique on Quantopian. I use regulators, amplifiers, dampers, and booster functions and equations. Never seen any of that here. I even have a cruise control made to override the system by accelerating or slowing down the whole stock acquisition process. Again, stuff never mentioned on Quantopian.

Does using an equation equate to overfitting? What a philosophical question! Is a positive monotonic equation a mystical creature because it only goes up? Or, because it only goes up, must there be something wrong with it? How about this: you just designed it to do that.

This trading strategy, my version of it anyway, is operating in a totally different category. It exists beyond the efficient frontier, where it is seen to thrive. All it did was jump over that barrier. When I have corrected the 6-stock limitation, it will do as much with a lower volatility setting, as demonstrated in my prior book, which again was based on stuff found on Quantopian but does provide the theoretical background for this strategy. This strategy is simply the practical application of what was presented in that prior book.

Here is a funny observation: because you found some stuff on Quantopian that you could use or transform, does it make it false?

No one needs to buy my book. I did not write it for the money. However, I think I have provided enough stuff here and on my website over the past 8 years for anyone to understand what is being done and do even better.

The last page of my book ends with:

Your task, going forward, will be to monitor your program and see that your trading strategy behaves correctly and according to your plan.

Since you are in control, it becomes the game you play, the one you constructed, within the game.

You want more, then prepare, and definitely do make it happen.

At some point, someone will understand what this strategy is all about and create their own to do even better. And I can assure you, they will not make it public.

Not to be misinterpreted or misunderstood, the last version used in my simulations did have outstanding results, but, personally, I was not impressed. The strategy simply did what it should have done and was directed to do. It was not my main concern.

What was, however, was the trading dynamics. And if the statistics of the mechanics of a trade were interesting, then the strategy overall would have to show a profit, no matter what its size. Evidently, the larger the better since it would also show a kind of proof of concept that this innovative methodology (guiding equations) was a viable alternative to usual long-term trading methods.

My first interest in that strategy was its architecture. I saw a template for doing more.

Right off, with no code modification, it was scalable. This is part of my collection of acid tests. If a trading strategy is not scalable, what real use can it have? You want your equity to grow with time and the strategy cannot handle it? Then, what are its future prospects?

Another of my acid tests is longevity. Can the strategy you are looking at, at the very least, withstand past extended trading intervals? Again, if it cannot, then what are its chances going forward?

These are extremely simple strategy tests. The first is solved by throwing more or less money at it, while the second by giving it more time. Both these measures are independent of the trading strategy itself since they are set outside the strategy and prior to its execution. Yet, you need those answers before you even put any effort into improving the strategy's code or its trading philosophy.

They are rudimentary tests, and if you do not do them, you are not doing your homework, or you are ready to waste a lot of your time. However, I view that as your choice, not mine.

Some pointed out that the trading strategy operated from hindsight. All trading strategy simulations do. They operate on historical data.

Due to my age and experience, I went back to my earlier simulations during the dot-com bubble era and found some of the same stocks being analyzed in trading strategies. Someone younger, not having lived through that period, has less of an understanding of it. For me, most of the selected stocks have been in my selectable set for over 20 years. Why should that change now because I am doing a new simulation? At least, I now have a glimpse of what might have been. Also, since I started my website in 2011, the same trading philosophy (accumulate shares over the long term and trade over the process), even if it was developed earlier, has been at the heart of every simulation made.

What is important is that a trading strategy has a signature. It is made to do what it does: some internal dynamics that will trigger trades in a specific fashion. For instance, say you set a 10% profit target on a trade, irrespective of which stock it might be. You expect your program to execute that trade, whether on past or future data. Going forward, you might not know which stock will hit its profit target or when, but it does not matter. Whenever such a target is hit, you want your program to execute the trade. That is part of the strategy's mechanics.

You could make another stock selection, it would not change the trade dynamics. It would certainly change the overall results since the daily opportunity set would be completely different. And whichever tradable stocks you might want to use, they should follow your own selection criteria set.

Innovations, at first, are usually tossed aside. But this, I can assure you, is not rocket science. Slowly, with time, it will become more acceptable to more people.

My question is: has anyone here using the given strategy, with no change to the optimizer, found a way to reach the 50,000%, or 100,000%+ total return mark using the same initial capital, the same stocks, over the same 14-year time interval?

To answer some of the questions that have not been asked, here is an extract from my book.

Is this strategy for real? Yes. Definitely. Furthermore, those performance levels could be duplicated by anyone. It uses the same tools as available to any other Quantopian user. The original script, which you can also modify, is available free.

Can the strategy last? Yes. This has been demonstrated by its 14-year backtest where you see it improving all the time. And it is built to go even further.

Why not publish your version of this trading script for all to see? No way. You must be kidding. The answer to that should have been obvious! Also, the few times I have tried that, within hours, they were misused, and then I was blamed for “them” not doing their homework. The program is an intricate and sophisticated trading strategy. Reengineer it yourself. This way, you will understand better what it does and be responsible for your own work.

Is it new? No, not at all. You will see the same underlying trade mechanics as I have proposed over the last 12 years or so. The structure of this program is similar to my DEVX8 program which has been illustrated on my website since 2015 with its origin going further back. Since it uses the same underlying trading philosophy, I started to call this one DEVX10 as if a continuation and improvement over DEVX8.

Can it trade long and short? Yes, if the CVXOPT optimizer wants to. It is the optimizer that determines the trading activity. And it was shown at least once (on the Quantopian website) that it could even prefer not to short at all. It remains that it is the optimizer that makes the calls, and it will go long or short if needed.

Does it use leverage? Yes. Since it has a positive upward bias, the strategy will place larger bets on the stocks performing best. Leverage in the last simulation presented averaged 1.27, meaning 27% of the equity, on average, was subject to leveraging fees. At the CAGR level we see this trading strategy at, it can more than afford to pay those fees. Nonetheless, the fees will leave a trace by slowing down the CAGR by some 2 to maybe 4%. Each scenario will be different. Going forward remains an unknown.
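A back-of-the-envelope check of that 2% to 4% drag: with average leverage of 1.27, borrowed funds average 27% of equity, so the annual drag on returns is roughly that fraction times the margin rate. The margin rates below are assumptions for illustration; the thread does not state which rate applies:

```python
# Rough estimate of the CAGR drag from leveraging fees.
# avg_leverage = 1.27 means borrowed funds average 27% of equity,
# so annual drag on returns is about 0.27 * margin_rate.

def leverage_drag(avg_leverage, margin_rate):
    borrowed_fraction = avg_leverage - 1.0   # fraction of equity borrowed
    return borrowed_fraction * margin_rate   # approximate annual CAGR drag

# Assumed margin rates, for illustration only.
for rate in (0.08, 0.12, 0.15):
    print(f"margin rate {rate:.0%} -> drag {leverage_drag(1.27, rate):.2%}")
```

At an assumed 8% margin rate the drag comes out near 2.2%, and at 15% near 4.1%, consistent with the 2% to 4% range cited.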

Can it use all available capital? Yes, all the time, and more. It is one of its best features. The strategy has full market exposure. And in most cases presented it has over-exposure since some leverage was used.

How can it flip that much volume? It only flips part of its inventory as it progresses in time, this way generating cash that can be used to buy more shares later on. As one's portfolio grows, it becomes harder and harder to flip large quantities of stock without having a price impact. The strategy bypasses much of this simply by not flipping that much. As the portfolio grows, even the partial flipping is reduced. The average turnover is less than 1%.

Does it have weaknesses? Yes. Some of which have not been taken care of as yet. For example, it still needs more downside protection. It needs to handle more than the 6-stock limitation encountered. But all that can be corrected. It is part of the next stage in this development process. I first wanted to know how far it could go before installing better protective measures and solve the 6-stock limitation.

Does it pay a lot of commissions? It does pay commissions, but not as much as would be expected. A large part of the positions are still in inventory; therefore, commissions have been paid only once, for the entry. For the completed trades, commissions were, at most, marginal expenses compared to overall returns. As said before, in these simulations, commissions and slippage were all accounted for since the Quantopian default settings were used. Also, as noted, the average turnover is less than 1%.

Can it still grow? Yes. It is a simple matter of control. You want it to grow more, push more on its pressure points. Trade more (increase n), increase its trade aggressiveness, force it to accumulate more shares for the long term, increase its profit margin. You see the point.

How is it controlled? On this one, it is like asking me for the trading script. And that will not happen. I would prefer you figure it out. Since once you do, you might be more inclined to accept what you see on your own machine. I can hardly expect you to put millions on the table without you knowing what you could see with your own eyes when doing your own simulations.

What is the use of such a trading strategy? The strategy is made to build large long-term portfolios. It is using trading as a funding mechanism to accumulate large long-term stock inventories. The same principles could be applied by anyone building their own portfolios. Any organization or individual could use such a portfolio management methodology to increase their long-term performance. This is further discussed in my book: Building Your Stock Portfolio.

Is this trading strategy for everyone? No. First, it is made to handle large sums and for a long time. However, the principles used, even after being scaled down, even by a factor of 100, will still be more than useful since the strategy is scalable. This is demonstrated in the book in the scaling-down section. Not everyone is looking for long-term trading/investing strategies. We all have our own preferences. Nonetheless, even if I say so myself, this strategy with its modifications can really fly.

About Testing the CVXOPT Optimizer

My second-to-last post ended with a question: “has anyone here using the given strategy, with no change to the optimizer, found a way to reach the 50,000%, or 100,000%+ total return mark using the same initial capital, the same stocks, over the same 14-year time interval?”

I opted to make a new simulation based on the last reengineered version of that program (ver.: DX-08) using the same 14-year time interval with the same initial capital.

However, this time, I would change all the stocks in the portfolio.

Before even attempting this, it raised some questions. For one, I have not yet solved the 6-stock limitation encountered by that program. And second, whatever selection I made would also be quite unique. How unique, you ask?

Taking 6 stocks, as if at random, out of some 8,300 (USEquityPricing dataset) gives one chance in 4.4 ∙ 10^20 of picking any particular set. If drawn from some 2,000 stocks (QtradableStocksUS dataset), the answer would be one in 8.8 ∙ 10^16. And if drawn from 800 stocks that passed some selection criteria (my own), you would still have only one chance in 3.5 ∙ 10^14 of picking a particular set of stocks.
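These counts can be verified with Python's exact integer combinatorics. A quick sanity check, using the approximate universe sizes quoted above ("some 8,300", "some 2,000", about 800):

```python
# Verify the 6-stock combination counts quoted in the text.
# math.comb(n, k) returns the exact integer n! / (k! * (n - k)!).
import math

for n in (8300, 2000, 800):
    c = math.comb(n, 6)          # number of distinct 6-stock portfolios
    print(f"C({n:>5}, 6) = {float(c):.2e}")

# C( 8300, 6) = 4.53e+20
# C( 2000, 6) = 8.82e+16
# C(  800, 6) = 3.57e+14
```

The exact values round to the figures in the text (the slight difference on the first one comes from "some 8,300" being an approximate universe size).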

This is saying that whatever would be picked would be so unique that you will always have someone saying that those stocks had some sort of selection biases, or were handpicked for the purpose of the simulation, just as was expressed by someone in an earlier post.

There does not seem to be a compromise other than running all 3.5 ∙ 10^14 possible portfolios, analyzing their respective outcomes, and extracting some statistics. But then, that would be a total waste of time, because even if you did this, going forward you would still be facing the uncertainty of the right edge of a price chart.

You could opt to first solve the 6-stock limitation, which would give you the ability to sustain a larger number of stocks in your portfolio and thereby, at the same time, reduce the portfolio's overall volatility. Say you solve that problem and select the top 100 stocks from your selection criteria, which produced about 800 selectable stocks. Those 100 stocks would have one chance in 3.4 ∙ 10^129 of occurring. Faced with such a number, whatever picks you make become so unique that you could dedicate a million lifetimes using all the computing power on the planet and not even make a dent in such a gargantuan number.

Yes, your stock selection matters. Yes, the way you will trade also matters. But, underneath it all, there are some mathematical considerations that will prevail no matter what you intend to do.

A major consideration is that a simulation can only give you some indication of what might be: no guarantees and no certitudes. Nonetheless, the trading procedures in your program would still apply going forward, giving whatever answers they are programmed to give.

For instance, say you design a procedure to accept a 10% profit target. That is what the program will execute, whether over past, present, or future price data. Your program does not know anything else but to take that 10% profit when reached.

Your problem becomes a statistical one. What you want to know is how often it will occur over some time interval. You can get those statistics from your simulation over past data, but you have no indication that the stocks you selected will produce as many going forward. You can be assured, though, that if your simulation did some 10,000 such 10%-profit-target trades over its trading interval, that count will most certainly not drop to zero going forward.
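That statistical question can be sketched with a toy Monte Carlo. This is not the author's program: the return model (i.i.d. normal daily returns), the drift, volatility, and holding-window parameters below are all made-up assumptions for illustration only.

```python
# Toy Monte Carlo: estimate how often a fixed 10% profit target is hit
# within a 60-day holding window, assuming i.i.d. normal daily returns.
# All parameters are illustrative, not calibrated to any real strategy.
import random

random.seed(42)  # reproducible runs

def hits_target(days=60, mu=0.0004, sigma=0.015, target=0.10):
    """Simulate one price path; True if it touches the +10% target."""
    price = 1.0
    for _ in range(days):
        price *= 1.0 + random.gauss(mu, sigma)
        if price >= 1.0 + target:
            return True
    return False

trials = 5_000
hit_rate = sum(hits_target() for _ in range(trials)) / trials
print(f"estimated hit frequency: {hit_rate:.3f}")
```

The point is not the number it prints; it is that the hit frequency is a property you can only estimate, and an estimate from past data carries no guarantee for the stocks you will actually hold going forward.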

If your stock selection is a one of a kind in 3.4 ∙ 10^129, you can also be assured that all the 3.4 ∙ 10^129 portfolios will give different answers. Therefore, my suggestion is to just make a choice and then live with it. You know it will be unique, but you also know that you cannot test all the possibilities. Will your selection be above or below average? There is absolutely no way of knowing beforehand.

What will really matter is how your program will handle all its trading procedures no matter what your stock selection may be. I find it preferable to refine the trade mechanics of a trading portfolio management system than to worry about what was the stock selection method.

It is why I reengineered that trading strategy to change its short-term perspective into a long-term portfolio builder.

Selecting Stocks For A Trading Strategy

We should separate the problem into two parts. One for selecting over historical data and one where the data is forthcoming (some future data). These two will turn out to be quite different problems. Simulating the future should be viewed as either a walk-forward or some form of paper trading. Both of which do not produce any money and therefore are just other forms of simulations. You could paper trade for years if you wanted to. But, in the end, you would still find yourself at the right edge of a price chart with an unknown future.

All historical data is, de facto, known. It is recorded history. Whereas all future data is yet to unravel. Assumptions you make based on historical data might not carry forward that well, especially if they have little economic foundation relative to a future trading environment.

For instance, you might elect to choose the top 100 market value stocks each time you rebalance your portfolio. On historical data, there is no problem. But, you would get only one answer: the actual top 100 stocks at that time.

In fact, you would have settled for a single-occurrence scenario where everybody else would get the same answer had they also picked the top 100 by market value. And from that data, you could add any other criteria you want. In a way, that means everyone using that selection process would be designing strategies on the same theme, based on the same 100 stocks.

Whereas, if you looked at the possibilities, your selection universe would be much, much larger. So large, in fact, that even a 100,000-run Monte Carlo simulation would be totally irrelevant; I would add, no matter how many more iterations or how much computing power you had available.

The number of combinations for taking 100 stocks, as if at random, out of some 8,300+ (USEquityPricing dataset) turns out to be 4.7 ∙ 10^233. Therefore, picking the top 100 market-value stocks is only one solution among the 4.7 ∙ 10^233 − 1 other possibilities.

Could we say that that selection process did not cover the data so well? Yet, it was a reasonable assumption to make, especially for trading purposes, since to trade efficiently you need liquidity on both sides of the trade. High-market-value stocks in most cases do provide this liquidity.

From time to time, especially when using an optimizer, you will rebalance your portfolio. This means that each time you rebalance, you will be faced with a new one-shot pick out of 4.7 ∙ 10^233 possibilities.

Say you opt to diversify more and take the top 200 stocks using some criteria. The number of combinations from the same selectable universe would be 7.4 ∙ 10^407. It does not reduce the size of the problem; it amplifies it to much larger proportions. If you further increased the number of stocks in your portfolio to 300 or 400 in order to diversify risk even more, you would get 7.3 ∙ 10^558 and 3.8 ∙ 10^694 combinations respectively. These are extraordinarily large numbers. The last is one chance in 3.8 ∙ 10^670 trillion trillions.
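As a sanity check, the exact sizes of these larger selection universes can also be computed with `math.comb`. They are printed as digit counts because the largest, C(8300, 400) ≈ 3.8 ∙ 10^694, overflows a double-precision float:

```python
# Exact combination counts for picking k stocks out of a ~8,300-stock
# universe; printed as digit counts since these integers exceed float range.
import math

for k in (100, 200, 300, 400):
    c = math.comb(8300, k)
    print(f"C(8300, {k}) has {len(str(c))} digits")

# C(8300, 100) has 234 digits   (~10^233)
# C(8300, 200) has 408 digits   (~10^407)
# C(8300, 300) has 559 digits   (~10^558)
# C(8300, 400) has 695 digits   (~10^694)
```

Python's arbitrary-precision integers make these exact counts trivial to obtain, which is a handy way to keep such order-of-magnitude claims honest.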

If you picked one combination out of 4.7 ∙ 10^233, 7.4 ∙ 10^407, 7.3 ∙ 10^558 or 3.8 ∙ 10^694 possible combinations, it is still so unique that, as a sample of its population (the USEquityPricing dataset), we cannot say it was even close to representative of what was available. We cannot even say that, “on average”, the selected stocks did this or that from a selection perspective.

This should raise a lot of questions.

Will the one selection process you took out of 4.7 ∙ 10^233 other possible choices over some historical data pan out going forward?

How representative of the market is such a selection?

Will your selection method behave the same going forward?

What kind of comparative justification can you give to your selection method?

If the more you diversify, the more you amplify the selection problem, then how will your selection method compare to others? Will you even be able to enumerate those other methods?

It is why I concentrate on the math and the mechanics of the trade, since those can be carried forward. It is by reengineering the mechanics of the trade that you can force your trading strategy to produce more, as was demonstrated in previous posts.

No matter what the trading method, it will obey the following payoff matrix equation: F(t) = F(0) + Σ(H∙ΔP). With one caveat: Σ(H∙ΔP) > q_0 ∙ (p_t – p_0)_SPY. Meaning that whatever your trading strategy, it should at least outperform holding SPY for the duration, or beat the quasi buy & hold scenario of a low-cost index fund. Otherwise, why bother trading?
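The payoff matrix equation can be made concrete with a toy example. Every number below (prices, holdings, and the SPY quantity and prices) is made up purely for illustration:

```python
# Toy illustration of the payoff matrix equation F(t) = F(0) + sum(H * dP),
# and the caveat that trading P&L should beat a SPY buy & hold.
# All figures below are hypothetical.
F0 = 100_000.0                      # initial capital F(0)

# Price series for two stocks over four periods (hypothetical data)
prices = [[10.0, 50.0],
          [11.0, 49.0],
          [12.5, 52.0],
          [13.0, 55.0]]

# Holdings (shares of each stock) carried through each period transition
holdings = [[1000, 200],
            [1200, 200],
            [1200, 300]]

# sum(H * dP): holdings times price changes, summed over periods and stocks
trading_pnl = sum(
    h * (prices[t + 1][i] - prices[t][i])
    for t in range(len(holdings))
    for i, h in enumerate(holdings[t])
)

F_t = F0 + trading_pnl              # payoff matrix equation

# The caveat: q_0 * (p_t - p_0) for a hypothetical SPY buy & hold
q0_spy, p0_spy, pt_spy = 200, 280.0, 300.0
benchmark_pnl = q0_spy * (pt_spy - p0_spy)

print(trading_pnl, F_t, benchmark_pnl)   # 4700.0 104700.0 4000.0
```

With these made-up numbers the trading P&L (4,700) clears the SPY benchmark (4,000), so this hypothetical strategy would satisfy the caveat; whether a real one does is exactly what the whole discussion above is about.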