Best performing algorithms

Hi all,

I am new to Quantopian and am deciding whether to invest my time further. I have some questions which maybe the community could help with:

  1. For the most successful algos, what sort of margin would they expect? 10% per annum? Obviously this depends on whether we are in a bull or a bear market, so how would they then perform vs the market?
  2. How long does it take to get something "ready" that is profitable? What do you think the minimum would be?
  3. I saw some numbers that Grant K posted, but I guess it differs if you have a privately backed fund with £1m+.

Thanks in advance!

Cheers,

Chee.

47 responses

First off, welcome!

As you probably already guessed, there's no single absolute answer to "whether I invest my time a lot further". My view is that the single biggest factor in deciding whether to take the leap into Quantopian (or algorithmic trading in general) is whether you enjoy it. Do you enjoy coding, creating algorithms, analyzing problems, researching and developing ideas, going where no man has gone before (and so on)? If software and finance aren't things you already do, enjoy doing, and have an aptitude for, then you may find the learning curve too steep for the rewards and will probably lose interest fairly quickly. I'd speculate (but would really like to know if Q would share) that the 'turnover' on Quantopian is very high. Of the 100,000 purported users, perhaps only 1,000 or so stick with it for more than 6 months. Of those, maybe half participate in the contests and/or live trade their algorithms. I'd venture the main driver for those individuals is a Sheldon-esque passion for the process (though maybe I'm just speaking for myself).

The second biggest factor that plays into "whether I invest my time a lot further" however, is one's capital base. How much money do you have available to invest? If you have $5000 and really enjoy it, then by all means, jump in! You probably won't get rich but you'll have fun and maybe make a bit of profit along the way. If you DON'T really enjoy it, then go do something you love. One won't succeed where there isn't passion.

However, if we're talking $500,000+ and one is expecting to live off their investments at some point, then the calculus changes a bit. It's my opinion that there are only three reasonable approaches to investing: (1) passive investing (buy a stock and a bond ETF and rebalance quarterly, or something similarly passive), (2) actively managed investing (hire a money manager to more actively manage your funds, but make sure they are netting you more than the market with less volatility and their name isn't 'Bernie'), or (3) algorithmic trading. If you have the capital and are expecting to earn a large portion of your income from that capital, then it may make sense to explore option number 3 if one is at all inclined towards programming and finance.

So, with that in mind, here are my answers...

  1. For the most successful algos, what sort of margin would they expect? 10% per annum? Obviously this depends on whether we are in a bull or a bear market, so how would they then perform vs the market?
    I read your question as "what is a reasonable expectation for annual return? Is 10% reasonable?" and "how do algorithms perform vs the 'market'?". What's missing in the question is volatility and drawdown, and how long that drawdown lasts. Without stating one's tolerance for volatility, returns are meaningless. If one is willing to put up with a 50% drawdown for a year (ie start with $100,000, see it drop to $50,000, and not be back to breakeven for a year) then you could get a 50% average annual return (simply invest in XIV). However, most mortals don't have the stomach for that and would have thrown in the towel after a couple months of losses. So, I'll posit a more palatable drawdown of 12%. That's less than the volatility and drawdown of the S&P 500 over the past 5 years. I'd say one should expect to do about 50% better than the market (ie SPY). Over the past 5-7 years the market has averaged about 12%, so a reasonable algorithm should return about 18%. Attached is a VERY simple algorithm which hits these targets in a bull market like we've had lately. Containing the downside risk when the market isn't doing so well is the challenge, however. Here is a link to a contest-winning algorithm: https://www.quantopian.com/posts/contest-8-winner-robert-shanks. Look at the notebook in the post, which shows an annual average return of 16% and volatility of 11%. That might be a good benchmark.

  2. How long does it take to get something "ready" that is profitable? What do you think the minimum would be?
    My advice is to start trading as soon as possible with a simple algorithm (maybe similar to the one attached) to get the 'feel' of trading. Then incrementally make improvements. First focus on reducing the volatility, max drawdown, and drawdown time. Then focus on increasing returns. Look through these forums for ideas and help. You will find that returns are often the easy part (just add a little XIV or leverage to any strategy :) ). If you have a bit of aptitude and spend a few hours a couple times a week, my guess is that you'll feel comfortable with the tools and have made some solid headway in 6 months.
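A quick aside on the risk metrics from answer 1: annualized return and max drawdown are easy to compute yourself from a daily return series. Here's a minimal sketch, using made-up constant returns rather than any real algorithm's results:

```python
import numpy as np

def annualized_return(daily_returns, periods_per_year=252):
    """Geometric annualized return from a series of daily returns."""
    total_growth = np.prod(1.0 + np.asarray(daily_returns))
    years = len(daily_returns) / periods_per_year
    return total_growth ** (1.0 / years) - 1.0

def max_drawdown(daily_returns):
    """Largest peak-to-trough decline of the compounded equity curve."""
    equity = np.cumprod(1.0 + np.asarray(daily_returns))
    running_peak = np.maximum.accumulate(equity)
    return (equity / running_peak - 1.0).min()

# A hypothetical steady 0.07%/day compounds to roughly 19% a year with
# zero drawdown -- real strategies trade extra return for extra drawdown.
steady = np.full(252, 0.0007)
print(round(annualized_return(steady), 3))  # 0.193
print(max_drawdown(steady))                 # 0.0
```

Run your backtest's daily returns through both functions and you can compare any algorithm to the 18%-return / 12%-drawdown targets above.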

Good luck.

ps: I can post a 'Robinhood friendly' version of this algorithm which you could use out of the box. No guarantees but I have traded with it. Let me know.

Clone Algorithm (2976 clones)
'''
A simple algorithm showing how diversification and rebalancing
can make dramatic improvements to volatility and returns.
Note that this trades in 3X leveraged ETFs to get increased returns.
The diversification however, keeps the volatility in check.
'''

def initialize(context):
    """
    Called once at the start of the algorithm.
    """   
    
    # Here are any algorithm 'constants' we'll be using
    context.target_leverage = 1.0
    
    # Here are the ETFs we want to trade along with the weights 
    # Ensure they add to 1.00
    context.etfs = {
        symbol('TYD'): 0.1, # Daily 7-10 Year Treasury Bull 3X Shares
        symbol('TMF'): 0.2, # Daily 20+ Year Treasury Bull 3X Shares
        symbol('EDZ'): 0.2, # Daily MSCI Emerging Markets Bear 3X Shares
        symbol('SPXL'): 0.5, # Daily S&P 500 Bull 3X Shares

    }
    
    # Set commission model for Robinhood
    set_commission(commission.PerShare(cost=0.0, min_trade_cost=0.0))
 
    # Rebalance our portfolio to maintain target weights
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(minutes = 35))


def rebalance(context, data):
    for stock, weight in context.etfs.items():
        order_target_percent(stock, weight*context.target_leverage)
                


Hey Dan,

Firstly, thank you for taking the time to write such a detailed post. It is very much appreciated.

So I have always had an interest in share trading, and I work with data more from an analytical perspective than a coding one (although I did code some moons ago). My work in my current industry (digital marketing) is about to end, and I am looking to investigate new opportunities and new ways to invest money. The plan is actually a combination of things:
1. Lower risk options (property, managed funds)
2. Hedge fund opportunities / algo trading

One area that interests me is combining regression-tested quantitative data with additional qualitative data for extra signals vs what pure algo trading would provide. I am trying to understand which platforms are best. Q seems to have a lot of the architecture you would need, so it would really kick-start the algo trading part. I also have investors who would be interested in investing in other people's algos, which makes this platform interesting.

The ROI actually seems reasonable, although I assume the risks are slightly elevated depending on how the drawdown is managed.

One other question I had: a lot of the posts I have seen with forecasts tend to show performance against a bull market. Do others compare performance in bear or flat markets and achieve positive ROI through short selling?

Thanks,

Chee.

Some thoughts on the questions above...

One other question I had: a lot of the posts I have seen with forecasts tend to show performance against a bull market.
I'm guessing you are referring to the posted backtests and the fact that they may typically only go back 5-7 years. You are correct that one should probably backtest over a longer period and ideally through the 2007-2010 downturn to get a better idea of performance.

Don't just look at the raw backtest results though. Quantopian provides a nice tool called 'Alphalens' to slice and dice the results and show performance during various bull and bear time periods. See https://www.quantopian.com/posts/alphalens-a-new-tool-for-analyzing-alpha-factors

Do others ... achieve positive ROI through short selling?
Short selling is certainly one very solid strategy for weathering a bear market. Swinging in and out of bond funds or inverse ETFs can also work.

How long does it take to get something "ready" that is profitable? What do you think the minimum would be?

It depends on what that "something" is, but generally, I think 6-12 months of out-of-sample backtesting/paper trading, followed by perhaps 6-12 months of real-money trading, to actually know if the thing works. If you are wanting to attract outside capital, you probably need 2-3 years of real-money trading at a relevant scale (this, I believe, is why Quantopian needs Point72 and their $250M seed-money commitment--otherwise, they might never get off the ground and reach the $10B level).

My sense is that Quantopian is solely about the hedge fund, so if you are interested in just jumping in and learning, and having the greatest overlap with their training and support, see:

https://www.quantopian.com/allocation
https://blog.quantopian.com/a-professional-quant-equity-workflow/

It may be a long-shot, but if you are just wanting to get into the field, trying to write algos for them, versus for yourself, might be the best way to go. If you have something that looks pretty good, you can contact them, and they might give you some feedback. Or you can post a pyfolio tearsheet to the forum, without revealing the code. Personally, for a variety of reasons, I don't trade my own money; I just fiddle around trying to learn, as a hobby, and get on Quantopian's nerves from time-to-time with my opinions and off-topic meanderings.

Interesting discussion here. I would also challenge the assumption "Obviously this depends on whether we are in the bull or bear market . . . ."

Here at Quantopian, we are looking for algorithms that are indifferent to the market's movements. We are looking for low-beta algorithms with a moderate risk-adjusted return. The algorithms we are making allocations to are ones that find success in both bull and bear markets.

Note that I'm not saying that there is only one "right" way to invest - but there is only one way to get an allocation from us.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

@ Dan D. -

It would be interesting to see some data detailing the extent to which the algos you've funded and have in the evaluation queue conform to:

https://www.quantopian.com/allocation
https://blog.quantopian.com/a-professional-quant-equity-workflow/

I'm particularly curious if there has been any tangible success in users writing the kind of multi-factor, cross-sectional, giant, swirling, super-duper strategies that Jonathan describes, or if you see more single-factor type algos. My original understanding of the Q fund was that you'd be combining lots of single- or few-factor type algos (and perhaps this was the original concept), but we've landed on your needing self-contained, scalable, institutional-grade algos from individual users, implementing perhaps 5-10 factors, combined with ML and then run through the optimizer API--a bit daunting, in my opinion, for non-professionals. I get the impression that it is the sort of thing that, within a traditional hedge fund, would not fall on one individual but on a team (but maybe I'm mistaken here--perhaps it is common for individual coders/traders/managers to write and deploy soup-to-nuts strategies at tens of millions in capital single-handedly). And in some respects, you've introduced external competition by offering signals, such as https://www.quantopian.com/posts/alpha-vertex-precog-dataset . It feels like a pretty high bar.

Dan Whitnable,
I definitely fall into the watching and asking category...
If you don't mind, I would like to have the RH-friendly code. The code above is simple enough for me to conceptualize, tinker with, and implement if warranted. I work on a couple of asset allocation models in thinkscript but don't see a real future with their integration on that platform.
Thanks and happy trading.

@Hunt

Here's the Robinhood-friendly code for the above. Basically, it places the sells before the buys so there is money available to buy (Robinhood requires you to have enough available cash on hand). It also controls cash better by placing limit orders for specific quantities of shares rather than relying on market orders or the built-in 'order_target_percent' method.

One note of caution: a big reason why this algo does well in the backtest is that both bonds and stocks did well from 2011-2016.

Clone Algorithm (806 clones)
'''
ETF trading algorithm for Robinhood
v1.6 - changed MIN_ADJUST_AMT from a percent to fixed dollar amount
v1.5 - changed adjust_buy_orders_per_available_cash to account for potential negative cash
v1.4 - changed program to account for added gold buying power
v1.3 - cleaned up some code to fix the copy warning
v1.2 - added retry for canceled orders
v1.1 - added logging to view available cash
'''

# import pipeline methods 
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline

# import built in factors and filters
import quantopian.pipeline.factors as Factors
import quantopian.pipeline.filters as Filters

# import any datasets we need
from quantopian.pipeline.data.builtin import USEquityPricing 

# import numpy and pandas just in case
import numpy as np
import pandas as pd


# Set any algorithm 'constants' you will be using
MIN_CASH = 25.00
MIN_ADJUST_AMT = 200.00
ROBINHOOD_GOLD_BUYING_POWER = 0.00


# Here we specify the ETFs and their associated weights
# Ensure the weights sum to 1.0
MY_ETFS = pd.DataFrame.from_items(
        [
        (symbol('TYD'), [0.1]), # Daily 7-10 Year Treasury Bull 3X Shares
        (symbol('TMF'), [0.2]), # Daily 20+ Year Treasury Bull 3X Shares
        (symbol('EDZ'), [0.2]), # Daily MSCI Emerging Markets Bear 3X Shares
        (symbol('SPXL'), [0.5]), # Daily S&P 500 Bull 3X Shares
        ], 
        columns = ['weight'], orient='index')


def initialize(context):
    """
    Called once at the start of the algorithm.
    """   
    
    # Create a list of daily orders. Initially it's empty.
    context.todays_orders = []
    
    # Set commission model for Robinhood
    set_commission(commission.PerShare(cost=0.0, min_trade_cost=0.0))

    # Ensure no short trading in Robinhood (just a precaution)
    set_long_only()
    
    # Create and attach pipeline to get data
    attach_pipeline(my_pipeline(context), name='my_pipeline')
    
    
    # Try to place orders
    schedule_function(enter_sells, date_rules.every_day(), time_rules.market_open(minutes = 10))
    schedule_function(enter_buys, date_rules.every_day(), time_rules.market_open(minutes = 30))

    # Retry any cancelled orders 3 times for 10 minutes
    schedule_function(retry_cancelled_order, date_rules.every_day(), time_rules.market_open(minutes = 35))
    schedule_function(retry_cancelled_order, date_rules.every_day(), time_rules.market_open(minutes = 45))
    schedule_function(retry_cancelled_order, date_rules.every_day(), time_rules.market_open(minutes = 55))
    
    # Record tracking variables at the end of each day.
    schedule_function(my_record_vars, date_rules.every_day(), time_rules.market_close())

    
def my_pipeline(context):
    '''
    Define the pipeline data columns
    '''
    
    # Create filter for just the ETFs we want to trade
    universe = Filters.StaticAssets(MY_ETFS.index)
           
    # Create any factors we need
    # latest_price is just used in case we don't have current price for an asset
    latest_price = Factors.Latest(inputs =[USEquityPricing.close], mask=universe)
       
    return Pipeline(
            columns = {
            'latest_price' : latest_price,
            },
            screen = universe,
            )


def before_trading_start(context, data):
    
    # Clear the list of todays orders and start fresh
    # Would like to do 'context.todays_orders.clear()' but list.clear() isn't supported in Python 2
    del context.todays_orders[:]
    
    # Get the dataframe
    context.output = pipeline_output('my_pipeline')
    
    # Add other columns to the dataframe for storing qty of shares held, etc
    context.output = context.output.assign(
        held_shares = 0,
        target_shares = 0,
        order_shares = 0,
        target_value = 0.0,
        order_value = 0.0,
        weight = MY_ETFS.weight,
        last_price = 0.0
    )
    
                 
def enter_sells(context, data):
    
    # get the current prices and calculate order values and shares
    update_stock_data(context, context.output, data)
    
    # If order shares is negative (ie a sell) and value is greater than our min adjust threshold
    rules = 'order_shares < 0 and order_value > @MIN_ADJUST_AMT'
    sells = context.output.query(rules).index.tolist()
    
    for stock in sells:
        order_id = order(stock, 
              context.output.get_value(stock, 'order_shares'),
              style=LimitOrder(context.output.get_value(stock, 'latest_price'))
              )
        # store the order id in case we need to retry the order
        context.todays_orders.append(order_id)


def enter_buys(context, data):
    
    # get the current prices and calculate order values and shares
    update_stock_data(context, context.output, data)
    adjust_buy_orders_per_available_cash(context, data)
    
    # Order shares is positive (ie a buy) and value greater than our min adjust threshold
    rules = 'order_shares > 0 and order_value > @MIN_ADJUST_AMT'
    buys = context.output.query(rules).index.tolist()
    
    for stock in buys:
        order_id = order(stock, 
              context.output.get_value(stock, 'order_shares'),
              style=LimitOrder(context.output.get_value(stock, 'latest_price'))
              )
        # store the order id in case we need to retry the order
        context.todays_orders.append(order_id)


def update_stock_data(context, output_df, data):
    
    '''
    Get the current holdings and price. Then calculate target portfolio weights and 
    the quantity of each ETF to buy or sell to hit those targets.
    '''
   
    # Update the shares held for any security we hold
    # If we don't hold a security held_shares keeps the default value of 0
    for security, position in context.portfolio.positions.items(): 
        output_df.set_value(security, 'held_shares', position.amount)
        
    # Get the latest prices for all our securities
    # May want to account for possibility of price being NaN or 0?
    output_df.latest_price = data.current(output_df.index, 'price')
    
    # Determine portfolio value we want to call '100%'
    target_portfolio_value = context.portfolio.portfolio_value + ROBINHOOD_GOLD_BUYING_POWER - MIN_CASH
    context.target_portfolio_value = target_portfolio_value
 
    # Calculate amounts (note // is Python's floor division operator)
    output_df.target_value = output_df.weight * target_portfolio_value
    output_df.target_shares = output_df.target_value // output_df.latest_price

    output_df.order_shares = output_df.target_shares - output_df.held_shares
    output_df.order_value = output_df.order_shares.abs() * output_df.latest_price


def adjust_buy_orders_per_available_cash(context, data):
    '''
    This adjusts the order shares for any buys to stay within the available cash.
    There may not be enough cash to buy everything we want. This will happen if a previous 'sell'
    order didn't execute (so no cash available from that sale).
    This scales all buys down equally.
    '''
    # Calculate the sum of all the order values for any buys we want to make.
    required_cash = (context.output
                     .query('order_shares > 0')
                     .order_value.sum(axis=0)
                     )
    # Calculate how much cash we have to spend.
    net_cash = context.portfolio.cash + ROBINHOOD_GOLD_BUYING_POWER - MIN_CASH
    
    # Check if we have enough cash
    if required_cash < net_cash and net_cash > 0.0:
        # We're good to go
        pass
    
    elif required_cash > 0.0 and net_cash > 0.0:
        # If we got here then we don't have enough cash.
        # Scale all the buy order shares down by the ratio of required to available cash.
        reduce_by_ratio = required_cash / net_cash 
        context.output.order_shares = (context.output.query('order_shares > 0')
                                       .order_shares // reduce_by_ratio
                                       )
    else:
        # Net cash is negative so we can't buy anything. Set order shares to 0.
        context.output.order_shares = (context.output.query('order_shares > 0')
                                       .order_shares * 0.0
                                       )    
    
    # Calculate the new order_value since we changed the order shares.
    # Do the calc for the whole dataframe but really just need to do for 'order_shares > 0'
    context.output.order_value = context.output.order_shares.abs() * context.output.latest_price

    # Here we just do a check that our math worked.
    # Calculate the sum of all the adjusted order values
    adjusted_cash = (context.output
                     .query('order_shares >= 0')
                     .order_value.sum(axis=0)
                     )
    # Check to make sure we now have enough cash
    # This should never be true and is really for debugging.
    if adjusted_cash >= net_cash:
        log.info('got a problem %f  %f' % (adjusted_cash, net_cash))
        
        
def retry_cancelled_order(context, data):
    '''
    Every once in a while Robinhood orders get mysteriously cancelled. This checks the order status
    and re-submits any cancelled orders.
    '''
    for order_id in context.todays_orders[:]:
        original_order = get_order(order_id)
        if original_order and original_order.status == 2 :
            # The order was somehow cancelled so retry
            retry_id = order(
                original_order.sid, 
                original_order.amount,
                style=LimitOrder(original_order.limit)
                )
            
            log.info('order for %i shares of %s cancelled - retrying' 
                         % (original_order.amount, original_order.sid))
            
            # Remove the original order id (note we iterate over a copy via [:]) and store the new order
            context.todays_orders.remove(order_id)
            context.todays_orders.append(retry_id)


def my_record_vars(context, data):
    """
    Plot variables at the end of each day.
    """
            
    record(cash=context.portfolio.cash)

If you want an indication of how it did during 2007-08, take a look at this backtest. I swapped in equivalent non-leveraged ETFs (to get a longer history) and leveraged them 3x to simulate the performance of the original algorithm. You can see it takes a >50% drawdown. (Drawdown is worse with 3x ETFs than with 1x ETFs leveraged 3x, by the way.)
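The volatility-decay effect behind that parenthetical can be illustrated with a toy example (a made-up choppy market of alternating +2%/-2% days, not real ETF data): a 3x daily-rebalanced product compounds three times each day's move, while margin-levering the 1x fund scales the cumulative return.

```python
import numpy as np

# Stylized choppy year: the 1x index alternates +2% and -2% daily.
base = np.tile([0.02, -0.02], 126)

# A 3x daily-rebalanced ETF compounds 3x each day's return, so chop erodes it.
etf_3x = np.cumprod(1 + 3 * base)

# Levering the 1x fund 3x on margin scales the cumulative return instead.
margin_3x = 1 + 3 * (np.cumprod(1 + base) - 1)

def max_dd(equity):
    """Worst peak-to-trough decline of an equity curve."""
    running_peak = np.maximum.accumulate(equity)
    return (equity / running_peak - 1).min()

print(round(max_dd(etf_3x), 3))     # -0.401
print(round(max_dd(margin_3x), 3))  # -0.196
```

Same underlying index and same nominal leverage, yet roughly twice the drawdown for the daily-rebalanced product in a sideways market.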

Clone Algorithm (184 clones)
'''
A simple algorithm showing how diversification and rebalancing
can make dramatic improvements to volatility and returns.
Note that this trades in 3X leveraged ETFs to get increased returns.
The diversification however, keeps the volatility in check.
'''

def initialize(context):
    """
    Called once at the start of the algorithm.
    """   
    
    # Here are any algorithm 'constants' we'll be using
    context.target_leverage = 3.0
    
    # Here are the ETFs we want to trade along with the weights 
    # Ensure they add to 1.00
    context.etfs = {
        #symbol('TYD'): 0.1, # Daily 7-10 Year Treasury Bull 3X Shares
        #symbol('TMF'): 0.2, # Daily 20+ Year Treasury Bull 3X Shares
        #symbol('EDZ'): 0.2, # Daily MSCI Emerging Markets Bear 3X Shares
        #symbol('SPXL'): 0.5, # Daily S&P 500 Bull 3X Shares
        
        symbol('IEF'): 0.1,
        symbol('TLT'): 0.2,
        symbol('EUM'): 0.2,
        symbol('SPY'): 0.5,
    }
    
    # Set commission model for Robinhood
    set_commission(commission.PerShare(cost=0.0, min_trade_cost=0.0))
 
    # Rebalance our portfolio to maintain target weights
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(minutes = 35))


def rebalance(context, data):
    for stock, weight in context.etfs.items():
        order_target_percent(stock, weight*context.target_leverage)
                


Here is "My all weather trio" very similar to Dan's diversified portfolio.
It does not need to be leveraged 3x or rebalanced daily, unless you want that.

Clone Algorithm (188 clones)
# My all weather trio 
# ---------------------------------------------------------------------------------
ASSETS = {symbol('QQQ'): 0.5, symbol('TLT'): 0.4, symbol('EFA'): -0.1,}; LEV = 1.0;
# ---------------------------------------------------------------------------------
def initialize(context):
    schedule_function(trade, date_rules.month_start(), time_rules.market_open(minutes = 65))

def trade(context, data):
    for sec, weight in ASSETS.items(): order_target_percent(sec, weight*LEV)
        

@Vladimir great algo! Super simple and effective

@Vladimir How did you get the exact weights? Could they be overfitted?

@Vladimir I used your All Weather Trio to make some modifications, rebalancing at the end of every month.
Returns are not really good, but the max drawdown was cut in half (10%).
I'd love to have your feedback.

ps: sorry for the code, I've just started to learn Python

Clone Algorithm (4 clones)
def initialize(context):
    context.qqq=sid(19920)
    context.tlt=sid(23921)
    context.efa=sid(22972)
    schedule_function(trade, 
                      date_rules.month_end(), 
                      time_rules.market_open(minutes = 65))
    
def trade(context, data):

    price_qqq=data.history(context.qqq,'close',30,'1d')
    price_tlt=data.history(context.tlt,'close',30,'1d')
    price_efa=data.history(context.efa,'close',30,'1d')
    std_qqq=1/price_qqq.std()
    std_tlt=1/price_tlt.std()
    std_efa=1/price_efa.std()
    sum=(std_qqq+std_tlt+std_efa)

    order_target_percent(context.qqq,std_qqq/sum)
    order_target_percent(context.tlt,std_tlt/sum)
    order_target_percent(context.efa,-std_efa/sum)
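One observation on the snippet above: it weights by the inverse standard deviation of raw prices, which depends on each asset's price level (a $300 ETF shows a bigger dollar std than a $30 one with identical percentage moves). A common variant, sketched here as a suggestion rather than a fix to the original, uses daily returns so the measure is scale-free:

```python
import numpy as np
import pandas as pd

def inverse_vol_weights(prices, lookback=30):
    """Weight each asset by the inverse of its recent daily-return volatility."""
    returns = prices.tail(lookback).pct_change().dropna()
    inv_vol = 1.0 / returns.std()
    return inv_vol / inv_vol.sum()

# Toy data: asset B moves twice as much (in percent terms) as asset A,
# so it should receive roughly half of A's weight.
days = np.arange(60)
prices = pd.DataFrame({
    'A': 100 * (1 + 0.01 * np.sin(days)),
    'B': 100 * (1 + 0.02 * np.sin(days)),
})
weights = inverse_vol_weights(prices)
print(weights.round(2))  # A gets roughly 2/3, B roughly 1/3
```

Inside the algo, the same idea would just replace the three `price.std()` calls with the std of `price.pct_change()`.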

Hi @Vladimir,

Thanks for sharing, very interesting one. What's the economic rationale for shorting against Europe Developed, UK & Japan?

@Vladimir, thanks for the advice!

Here is performance tearsheet of "Volatility adaptive all weather trio".


Hi @Vladimir,

Thanks a lot for your explanation. Now I understand it and it makes total sense. It was very useful.

Hi @Marc, please help me understand how shorting 'EFA' makes sense.

Hi @Ritam,

As @Vladimir explained, this is a dollar-neutral all weather portfolio, but instead of shorting 50% of the portfolio, you buy bonds and short just 15-20% of the portfolio. The reason to short is to hedge and protect against market falls, and as the allocation is not that big, it doesn't matter that much which index you short (as long as you don't short something that contains most of the stocks of the Nasdaq 100, in this case).

With that in mind, I guess that the more negative (or closer to zero) the correlation between the Nasdaq 100 and the index you short, the better (@Vladimir might confirm that).
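Whichever way that intuition goes, the correlation itself is easy to check before committing to a hedge. A minimal sketch with made-up return series (in practice you'd pull the two indices' actual daily returns):

```python
import numpy as np

def return_correlation(returns_a, returns_b):
    """Pearson correlation between two daily-return series."""
    return float(np.corrcoef(returns_a, returns_b)[0, 1])

rng = np.random.default_rng(42)
market = rng.normal(0.0005, 0.01, 252)        # hypothetical long leg (eg QQQ)
tracker = market + rng.normal(0, 0.002, 252)  # candidate that moves with it
unrelated = rng.normal(0.0002, 0.01, 252)     # candidate that doesn't

print(return_correlation(market, tracker))    # close to +1
print(return_correlation(market, unrelated))  # close to 0
```

Running a few candidate short indices through this against your long leg makes the hedging trade-off concrete instead of a guess.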

A variant of @Vladimir's algo with leveraged ETFs to seek higher returns (it comes with higher drawdown).
Full credit to this work: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3549581

# ---------------------------------------------------------------------------------
ASSETS = {symbol('TQQQ'): 0.33, symbol('TMF'): 0.14, symbol('TLT'): 0.33, symbol('EFA'): -0.2}
LEV = 1.0
# ---------------------------------------------------------------------------------
def initialize(context):
    schedule_function(trade, date_rules.month_start(), time_rules.market_open(minutes=65))

def trade(context, data):
    # rebalance monthly to the fixed target weights, scaled by leverage
    for sec, weight in ASSETS.items():
        order_target_percent(sec, weight * LEV)

Hi @Vadim,

What's the point of using 20Y US Treasuries both levered 3x (TMF) and unlevered (TLT)?

@Marc,

I followed the paper:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3549581
It would probably be worth checking whether unlevered TLT is really needed.

@Vadim,
If you accept a max drawdown of -30%, this setup is more favorable than the one described in the paper:

ASSETS = {symbol('TQQQ'): 0.50, symbol('TMF'): 0.35, symbol('EDZ'): 0.15}; LEV = 1.0;  

Total Returns: 1757.83%
Benchmark Returns: 225.39%
Alpha: 0.25
Beta: 0.43
Sharpe: 1.48
Sortino: 2.14
Volatility: 0.21
Max Drawdown: -29.87%

But for myself I would prefer this setup:

ASSETS = {symbol('TQQQ'): 0.35, symbol('TMF'): 0.25, symbol('EDZ'): 0.10, symbol('IEF'): 0.30}; LEV = 1.00;  

Total Returns: 858.8%
Benchmark Returns: 225.39%
Alpha: 0.20
Beta: 0.26
Sharpe: 1.51
Sortino: 2.19
Volatility: 0.15
Max Drawdown: -22.15%

Or this:

ASSETS = {symbol('TQQQ'):0.40, symbol('TMF'):0.30, symbol('EDZ'):0.20, symbol('IEF'):0.10}; LEV = 1.00;  

Total Returns: 902.43%
Benchmark Returns: 225.39%
Alpha: 0.24
Beta: 0.02
Sharpe: 1.37
Sortino: 1.99
Volatility: 0.18
Max Drawdown: -19.87%

Here is the performance tear sheet for the modified Robust Leveraged ETF Portfolios.


Hi @Vladimir,

In your setups you are not adjusting weights by inverse volatility and you are not hedging with a short, right?

Thanks

@Marc,

In your setups you are not adjusting weights by inverse volatility and you are not hedging with a short, right?

In all 3 setups above the weights are not adjusted by inverse volatility, but I will try that later.
I am sure the returns will be higher and the max drawdown smaller.

I hedge 3x TQQQ with a long position in -3x EDZ (Direxion Daily MSCI Emerging Markets Bear 3X Shares),
which allows the strategy to be used in retirement accounts.

For the Robust Leveraged ETF Portfolio, Volatility adaptive, I used just 3 symbols (a trio):

ASSETS = symbols('TQQQ','TMF', 'EDZ');  

Here is the performance tear sheet for my Robust Leveraged ETF Portfolio, Volatility adaptive.


Hi @Vladimir,

Thanks for sharing. Do you look back 1 month to compute the inverse volatility to re-weight?

Thanks

ASSETS = symbols('TQQQ','TMF', 'EDZ');
But I applied mild leverage, on average 1.25.


Hi @Vadim,

At some point you are long 92.19% TQQQ. Apart from leverage, are you adjusting weights by inverse volatility here?

@Marc,

Yes, I adjust weights by inverse volatility, based on your algorithm.

@Vadim,

But I applied mild leverage, on average 1.25.

What is the reason?

If you applied mild leverage of 1.5, the results would be even better:

Total Returns: 5697.78%
Benchmark Returns: 206.77%
Alpha: 0.42
Beta: 0.21
Sharpe: 1.48
Sortino: 2.15
Volatility: 0.30
Max Drawdown: -28.67%

To compare different algorithms, they should run at the same leverage (1.0).

Hi @Vadim,

Using std**2 would be better, matching the Kelly criterion (expected return over variance) ;)

Hi @Louis,

Actually, std**2 accelerates returns, but drawdown also increases.


A further improvement:

the TQQQ allocation is adjusted by recent highs, that is,

raw_wt_qqq proportional to ((recent_high_qqq/current_qqq))/(R_qqq.std()**2)

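A worked numeric example of the adjustment above (with made-up inputs): the raw weight grows both when the price sits further below its recent high and when recent volatility is low.

```python
def raw_weight(recent_high, current, ret_std):
    """raw_wt ~ (recent_high / current) / std**2, as in the post above."""
    return (recent_high / current) / (ret_std ** 2)

# At the recent high with a 2% daily-return std:
at_high = raw_weight(recent_high=100.0, current=100.0, ret_std=0.02)
# 20% below the high at the same volatility: a 25% larger raw weight.
below_high = raw_weight(recent_high=100.0, current=80.0, ret_std=0.02)
```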

Hi @Vadim

I believe your idea is based more on mean reversion in

raw_wt_qqq proportional to ((recent_high_qqq/current_qqq))/(R_qqq.std()**2)

Then you can slightly increase the return (but also the drawdown and volatility) by reducing the hedge in a bull market:

raw_wt_edz = 0.15*(recent_low_edz/current_edz)/(R_edz.std()**2)

I saw your tear sheet result, but when I ran the same idea, I got more drawdown and less return. Would you mind sharing your code so I can do some more comparison?

Hi @Louis,

Sure, here it is. It's a little dirty right now, as I continue to hack on it.

STD = 14   # lookback (days) for the volatility estimate
LEV = 1

def initialize(context):
    # date_rules.week_start(days_offset=0): 3102%, -23%
    # date_rules.every_day()
    set_commission(commission.PerShare(cost=0.00, min_trade_cost=0))
    schedule_function(trade, date_rules.week_start(days_offset=0), time_rules.market_open(minutes=33))

# with base symbols (QQQ, TLT, EEM) from 1/1/2011: 1642%, -20%
# lev 1.25: 3255.2%, -24.57% at 0.5/0.25/0.25
def trade(context, data):
    qqq_symbol = symbol('TQQQ')
    tmf_symbol = symbol('TMF')
    edz_symbol = symbol('EDZ')

    prices_qqq = data.history(qqq_symbol, 'close', STD, '1d')
    prices_tmf = data.history(tmf_symbol, 'close', STD, '1d')
    prices_edz = data.history(edz_symbol, 'close', STD, '1d')

    R_qqq = prices_qqq.pct_change()[1:-1]
    R_tmf = prices_tmf.pct_change()[1:-1]
    R_edz = prices_edz.pct_change()[1:-1]

    current_qqq = data.current(qqq_symbol, 'price')

    MAXH = 252
    max_qqq = data.history(qqq_symbol, 'price', MAXH, '1d').max()

    # 40 - 35 - 25 for lower drawdown
    # TQQQ gets a mean-reversion boost (distance below its 252-day high);
    # all weights scale with inverse variance
    raw_wt_qqq = 0.50 * (max_qqq / current_qqq) / (R_qqq.std() ** 2)
    raw_wt_tmf = 0.25 / (R_tmf.std() ** 2)
    raw_wt_edz = 0.25 / (R_edz.std() ** 2)
    wt = abs(raw_wt_qqq) + abs(raw_wt_tmf) + abs(raw_wt_edz)

    wt_qqq = raw_wt_qqq / wt
    wt_tmf = raw_wt_tmf / wt
    wt_edz = raw_wt_edz / wt

    record(qqq_symbol=wt_qqq)
    record(tmf_symbol=wt_tmf)
    record(edz_symbol=wt_edz)

    # boost the hedge when its share is already large, then renormalize
    if wt_edz > 0.25:
        wt_edz = 2 * wt_edz / 0.25 * wt_edz

    wt = abs(wt_qqq) + abs(wt_tmf) + abs(wt_edz)
    wt_qqq = wt_qqq / wt
    wt_tmf = wt_tmf / wt
    wt_edz = wt_edz / wt

    order_target_percent(qqq_symbol, wt_qqq * LEV)
    order_target_percent(tmf_symbol, wt_tmf * LEV)
    order_target_percent(edz_symbol, wt_edz * LEV)

    record(lever=context.account.leverage)

Hi @Louis,

I see that using variance instead of standard deviation speeds up returns while increasing drawdown, but how did you conclude it improves on the Kelly formula, (P*W-L)/P?

Hi @Marc,

Actually I use the stock version:

(drift - r)/std**2

The test above had a typo, so I deleted it. It is now corrected, with returns actually increased by mean-variance weighting and an increased hedge at market bottoms.

I tried a trend-following version by changing max/current of TQQQ to current/max, but it was not as good.

Any ideas on drawdown control?
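For reference, the "stock version" of the Kelly fraction that Louis cites, f* = (drift - r)/std**2, can be sketched directly (illustrative annualized inputs, not taken from any algorithm in this thread):

```python
def kelly_fraction(mu, r, sigma):
    """Continuous-time single-asset Kelly fraction: excess drift over variance."""
    return (mu - r) / sigma ** 2

# 10% drift, 2% risk-free rate, 20% volatility -> a fraction of 2,
# i.e. the criterion can recommend leverage well above 1.
f = kelly_fraction(mu=0.10, r=0.02, sigma=0.20)
```

This is one reason fractional-Kelly sizing is often preferred in practice when drawdown matters.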

def initialize(context):
    schedule_function(trade, date_rules.month_start(), time_rules.market_open(minutes=33))
    context.symbols = {symbol("TQQQ"): 0.50, symbol("TMF"): 0.35, symbol("EDZ"): 0.15}

def trade(context, data):
    wt = {}
    for tick, weigh in context.symbols.items():
        prices = data.history(tick, 'open', 21, '1d')
        var = prices.pct_change()[-21:-1].std() ** 2
        if tick == symbol("TQQQ"):
            # mean-reversion boost: distance below the recent high
            wt[tick] = weigh * (prices.max() / prices[-1]) / var
        else:
            # hedge boost: distance above the recent low
            wt[tick] = weigh * (prices.min() / prices[-1]) / var
    total_wt = sum(wt.values())

    for tick, weigh in wt.items():
        order_target_percent(tick, weigh / total_wt)

Hi @Louis,

Here is my variation of your version:

def initialize(context):
    schedule_function(trade, date_rules.week_start(days_offset=0), time_rules.market_open(minutes=33))
    context.symbols = {symbol("TQQQ"): 0.50, symbol("TMF"): 0.25, symbol("EDZ"): 0.25}

def trade(context, data):
    wt = {}
    for tick, weigh in context.symbols.items():
        prices = data.history(tick, 'open', 21, '1d')
        pricesl = data.history(tick, 'open', 252, '1d')
        var = prices.pct_change()[-14:-1].std() ** 2
        if tick == symbol("TQQQ"):
            # boost by distance below the 252-day high
            wt[tick] = weigh * (pricesl.max() / prices[-1]) / var
        else:
            wt[tick] = weigh / var
    total_wt = sum(wt.values())

    for tick, weigh in wt.items():
        order_target_percent(tick, weigh / total_wt)

The next tuning


Hi there,

Very nice paper, and even better symbol picks and discussion ;).

One quick suggestion so weights add up to 100%:

        #order_target_percent(tick, weigh/total_wt)  
        order_target_percent(tick, wt[tick]/total_wt)  

But more importantly: what else would we need to trade it live? E.g.
- Schedule on a daily basis to catch fast market movements
- Round the order target percents (e.g. to 5%, 10%, ...) to reduce slippage and commissions
- For each bull, bond, and bear symbol, add a second symbol to broaden the basis even more
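The rounding suggestion above can be sketched as follows (a minimal illustration assuming a 5% grid; the step size is a free parameter):

```python
def round_weights(weights, step=0.05):
    """Snap each weight to the nearest multiple of `step`, then renormalize
    so the rounded weights still sum to 1."""
    rounded = {k: round(w / step) * step for k, w in weights.items()}
    total = sum(rounded.values())
    return {k: w / total for k, w in rounded.items()}

# Small weekly drifts in the inverse-volatility weights now map to the
# same grid point and so generate no new orders.
w = round_weights({'TQQQ': 0.52, 'TMF': 0.27, 'EDZ': 0.21})
```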

Hello everyone,

This discussion is really awesome. @Vadim, I was wondering what you tweaked in your "next tuning". It's an interesting change: the Sharpe ratio dropped a bit, but you increased returns with roughly the same drawdown. If you wouldn't mind sharing the edit you made, that would be awesome!

@Austin,

Here is the code. I would like to hear feedback on how to improve it.

STD = 21    # history window (days)
LEV = 1
MAXH = 252  # lookback (days) for the recent high

def initialize(context):
    # date_rules.week_start(days_offset=0)
    # date_rules.every_day()
    set_commission(commission.PerShare(cost=0.00, min_trade_cost=0))
    schedule_function(trade, date_rules.week_start(days_offset=0), time_rules.market_open(minutes=33))

def trade(context, data):
    qqq_symbol = symbol('TQQQ')
    tmf_symbol = symbol('TMF')
    edz_symbol = symbol('EDZ')

    prices_qqq = data.history(qqq_symbol, 'close', STD, '1d')
    prices_tmf = data.history(tmf_symbol, 'close', STD, '1d')
    prices_edz = data.history(edz_symbol, 'close', STD, '1d')

    W = 14  # returns window for the variance estimate
    R_qqq = prices_qqq.pct_change()[-W:-1]
    R_tmf = prices_tmf.pct_change()[-W:-1]
    R_edz = prices_edz.pct_change()[-W:-1]

    current_qqq = data.current(qqq_symbol, 'price')
    max_qqq = data.history(qqq_symbol, 'price', MAXH, '1d').max()

    # the tuning: square the distance-below-high factor for TQQQ
    raw_wt_qqq = 0.5 * ((max_qqq / current_qqq) ** 2) / (R_qqq.std() ** 2)
    raw_wt_tmf = 0.20 / (R_tmf.std() ** 2)
    raw_wt_edz = 0.30 / (R_edz.std() ** 2)

    wt = abs(raw_wt_qqq) + abs(raw_wt_tmf) + abs(raw_wt_edz)

    wt_qqq = raw_wt_qqq / wt
    wt_tmf = raw_wt_tmf / wt
    wt_edz = raw_wt_edz / wt

    record(qqq_symbol=wt_qqq)
    record(tmf_symbol=wt_tmf)
    record(edz_symbol=wt_edz)

    # boost the hedge when its share is already large, then renormalize
    if wt_edz > 0.25:
        wt_edz = 2 * wt_edz / 0.25 * wt_edz

    wt = abs(wt_qqq) + abs(wt_tmf) + abs(wt_edz)
    wt_qqq = wt_qqq / wt
    wt_tmf = wt_tmf / wt
    wt_edz = wt_edz / wt

    order_target_percent(qqq_symbol, wt_qqq * LEV)
    order_target_percent(tmf_symbol, wt_tmf * LEV)
    order_target_percent(edz_symbol, wt_edz * LEV)

    record(lever=context.account.leverage)

This algorithm has worked very well over the last 10 years.
But what if the US market enters a 2-3 year bear market and doesn't rally for the next 10 years?

Have you considered other ETFs as trading vehicles? Would you still pick EDZ?
Have you considered adding UGLD to the portfolio?

If I add UGLD and allocate like this: TQQQ 45%, TMF 25%, EDZ 15%, UGLD 15%,
how do I change the program?

Thanks
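For what it's worth, in the fixed-weight version posted earlier in the thread, adding UGLD at the allocation proposed above would only require extending the ASSETS dict (an untested sketch with the questioner's hypothetical weights; the trade loop itself needs no changes):

```python
# On Quantopian it would look like:
# ASSETS = {symbol('TQQQ'): 0.45, symbol('TMF'): 0.25,
#           symbol('EDZ'): 0.15, symbol('UGLD'): 0.15}; LEV = 1.0
# The same idea with plain floats, to check the allocation sums to 100%:
assets = {'TQQQ': 0.45, 'TMF': 0.25, 'EDZ': 0.15, 'UGLD': 0.15}
total = sum(assets.values())
```

For the volatility-adaptive versions, a fourth symbol would be added to the inverse-variance loop in the same way.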

Vladimir added some comments in another thread [link missing], which may be of interest for the "My all weather trio" strategy: among other things, the sensitivity to the choice of ETFs and the frequency of rebalancing.