First pass from AI-derived trading signals looking for multi-sigma events

Using our AI, we applied deep learning to generate signals. These are first-pass results using a naive trading approach.

def initialize(context):
    context.log = optimizeLog()
    context.startDate = None

    context.spy = sid(8554)    # SPY: S&P 500 ETF
    context.svxy = sid(41968)  # SVXY: inverse VIX ETF (unused below)
    context.xiv = sid(40516)   # XIV: inverse VIX ETN
    context.sds = sid(32382)   # SDS: 2x inverse S&P 500 ETF
    # context.sh = sid(32268)  # SH: 1x inverse S&P 500 ETF
    # Rebalance every day at market open.
    schedule_function(my_rebalance, date_rules.every_day(), time_rules.market_open())
    # set_benchmark(sid(23921))

def handle_data(context, data):
    # Chart the account leverage on every bar.
    record(l=context.account.leverage)
    
# Scheduled to run once per day at market open.
def my_rebalance(context, data):
    now = get_datetime()
    if context.startDate is None:
        context.startDate = now
    evt = findEvent(context.log, now, context.startDate)
    if evt is not None:
        log.info(evt["date"].strftime('%Y-%m-%d') + "  " + evt["position"])
        if evt["position"] == "L":
            # Long signal: drop the short hedge, split between SPY and XIV.
            order_target_percent(context.sds, 0.0)
            xivPosition = 0
            if data.can_trade(context.xiv):
                xivPosition = 0.3
                order_target_percent(context.xiv, xivPosition)
            order_target_percent(context.spy, 1 - xivPosition)
        elif evt["position"] == "S":
            # Short signal: exit longs and move fully into the 2x inverse ETF.
            order_target_percent(context.spy, 0)
            if data.can_trade(context.xiv):
                order_target_percent(context.xiv, 0)
            order_target_percent(context.sds, 1.0)
        elif evt["position"] == "B":
            # Buy signal: exit all hedges and hold 100% SPY.
            if data.can_trade(context.xiv):
                order_target_percent(context.xiv, 0)
            order_target_percent(context.sds, 0)
            order_target_percent(context.spy, 1.0)

        evt["handled"] = True

def findEvent(log, now, start):
    # Return the most recent unhandled signal dated on or before `now`.
    # On the first call (start == now) the latest historical signal seeds the
    # initial position; afterwards only signals after the start date qualify.
    e = None
    for d in log:
        if d["date"] > now:
            break
        if "handled" in d:
            continue
        if start == now or d["date"] > start:
            e = d
    return e
            
def loadPredictionLog():
    # Hard-coded signal log generated offline by the deep learning model:
    # 'L' = long (SPY plus XIV), 'S' = short (SDS), 'B' = buy and hold SPY.
    return [{'date': '2002-01-02', 'position': 'B'},
 {'date': '2002-06-11', 'position': 'L'},
 {'date': '2002-06-17', 'position': 'S'},
 {'date': '2002-07-19', 'position': 'L'},
 {'date': '2002-07-20', 'position': 'L'},
 {'date': '2002-08-26', 'position': 'S'},
 {'date': '2002-09-24', 'position': 'L'},
 {'date': '2002-09-25', 'position': 'L'},
 {'date': '2002-11-06', 'position': 'B'},
 {'date': '2003-03-04', 'position': 'L'},
 {'date': '2003-04-13', 'position': 'B'},
 {'date': '2003-08-05', 'position': 'L'},
 {'date': '2003-09-05', 'position': 'B'},
 {'date': '2008-04-02', 'position': 'L'},
 {'date': '2008-05-04', 'position': 'B'},
 {'date': '2008-09-03', 'position': 'S'},
 {'date': '2008-10-09', 'position': 'L'},
 {'date': '2008-10-10', 'position': 'L'},
 {'date': '2008-10-13', 'position': 'S'},
 {'date': '2008-10-15', 'position': 'L'},
 {'date': '2008-10-17', 'position': 'L'},
 {'date': '2008-10-21', 'position': 'S'},
 {'date': '2008-10-27', 'position': 'L'},
 {'date': '2008-10-28', 'position': 'L'},
 {'date': '2008-10-29', 'position': 'S'},
 {'date': '2008-11-17', 'position': 'L'},
 {'date': '2008-11-18', 'position': 'L'},
 {'date': '2009-01-02', 'position': 'S'},
 {'date': '2009-01-07', 'position': 'L'},
 {'date': '2009-01-08', 'position': 'L'},
 {'date': '2009-01-28', 'position': 'S'},
 {'date': '2009-02-28', 'position': 'L'},
 {'date': '2009-03-01', 'position': 'L'},
 {'date': '2009-06-21', 'position': 'B'},
 {'date': '2009-06-22', 'position': 'L'},
 {'date': '2009-08-30', 'position': 'B'},
 {'date': '2009-10-28', 'position': 'L'},
 {'date': '2009-12-06', 'position': 'B'},
 {'date': '2010-01-29', 'position': 'L'},
 {'date': '2010-03-12', 'position': 'B'},
 {'date': '2010-07-02', 'position': 'L'},
 {'date': '2010-08-18', 'position': 'B'},
 {'date': '2010-08-20', 'position': 'L'},
 {'date': '2010-10-01', 'position': 'B'},
 {'date': '2011-07-12', 'position': 'S'},
 {'date': '2011-09-01', 'position': 'B'},
 {'date': '2011-09-22', 'position': 'L'},
 {'date': '2011-10-27', 'position': 'B'},
 {'date': '2011-11-25', 'position': 'L'},
 {'date': '2011-12-30', 'position': 'B'},
 {'date': '2012-11-09', 'position': 'L'},
 {'date': '2012-12-17', 'position': 'B'},
 {'date': '2013-06-30', 'position': 'L'},
 {'date': '2013-07-31', 'position': 'B'},
 {'date': '2014-02-05', 'position': 'L'},
 {'date': '2014-03-09', 'position': 'B'},
 {'date': '2016-01-27', 'position': 'L'},
 {'date': '2016-03-20', 'position': 'B'},
 {'date': '2016-05-19', 'position': 'L'},
 {'date': '2016-06-19', 'position': 'B'}]

def optimizeLog():
    # Convert the date strings in the prediction log to tz-aware UTC
    # datetimes so they compare cleanly against get_datetime().
    import datetime
    import pytz
    log = loadPredictionLog()
    newLog = []
    for e in log:
        newDate = datetime.datetime.strptime(e["date"], '%Y-%m-%d').replace(tzinfo=pytz.utc)
        newLog.append({"date": newDate, "position": e["position"]})
    return newLog

                
9 responses

Here's the notebook for the above model.


Hi Pej, thanks for sharing your work. If you are interested in being considered for an allocation to the strategy, I'd recommend:

  • Screen a large, dynamic universe, e.g. the Q1500
  • Reduce the position concentration; I'd suggest a cap of 5% or below to start (a sketch follows below)
  • Update the strategy to be cross-sectional with low common factor exposure (sector, Fama-French, beta)

Then I'd run analyses to find the optimal trading time. The market open window tends to have higher spreads and thus costs, which are not modeled in the simulation. I'd examine how the algo performs at different trading intervals and on different days.

Dan goes further into our selection criteria in this latest post: https://www.quantopian.com/posts/getting-an-allocation-june-2017-update. Good luck!
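
For reference, a minimal sketch of the position-cap and cross-sectional constraints using Quantopian's Optimize API might look like the following. The `alpha` pipeline column is a hypothetical stand-in for whatever per-stock signal the strategy produces; this is an illustration of the constraints, not a working strategy.

import quantopian.algorithm as algo
import quantopian.optimize as opt

def rebalance(context, data):
    # Hypothetical per-stock scores from a pipeline screened to the Q1500US.
    alpha = context.pipeline_data.alpha
    objective = opt.MaximizeAlpha(alpha)
    constraints = [
        opt.MaxGrossExposure(1.0),
        opt.DollarNeutral(),  # cross-sectional: longs funded by shorts
        opt.PositionConcentration.with_equal_bounds(-0.05, 0.05),  # 5% cap per name
    ]
    algo.order_optimal_portfolio(objective, constraints)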


Hi Pej,

Thank you for sharing your work. Impressive stuff indeed. Did you consider using a larger universe and minimizing the drawdown? I will probably use your algorithm as the base for my own AI algorithm.

Cheers,

Pieter

Hey Pej, very fascinating. Could you explain the hypothesis behind your algo? Looking through it, it seems you're looking for multi-sigma events across four specific ETFs.

My questions:
1) What are multi-sigmas?
2) Why deep learning vs. regression vs. RL?
3) Why these four specific ETFs?

Ah, I see what your strategy was: you're rotating between buying the S&P, shorting the S&P, and shorting volatility. But it looks like most of the returns were coming from XIV, the inverse-volatility ETN, which went belly up in 2018. Investors were flocking to it because of the 1200% returns, but it was highly risky. It's interesting that your beta remained as low as it did; technically the beta of your portfolio is accurate, but because the underlying instrument carried other risks, the risk of this portfolio was even higher than the beta was letting on.

https://seekingalpha.com/article/4145353-xiv-sorry-tell-told

Can you get the same results with just stocks, no ETFs? It looks like XIV was very controversial and was discontinued.

It looks to me like a simple rule based on momentum plus a stop loss may yield similar results to the AI. Do you have a baseline that does that? Thanks.

Sorry, I hadn't paid attention to this thread because it was a while ago. This multi-sigma predictor was covered by Waters Technology Magazine back when we first created it:

https://www.waterstechnology.com/3464301

The premise is to predict 'events' of >= 3 sigma over 5-, 10- and 20-day windows. This is not a stand-alone trading strategy. Volatility instruments and primary market ETFs are used for signal validation as well as historical interpretation. In addition to building predictive models, supervised ML is highly effective in other areas: algorithm feature selection (parameter tuning), universe selection, and portfolio construction (offsetting implementation shortfall).
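
As a rough illustration of that premise, a labeling pass for supervised training might look something like this. The sqrt-time volatility scaling and the lookback are my assumptions, not the author's implementation.

import pandas as pd

def label_multi_sigma_events(prices, windows=(5, 10, 20), lookback=252, sigmas=3.0):
    # prices: pd.Series of daily closes. Flags dates where the forward return
    # over any window is >= `sigmas` standard deviations of trailing volatility.
    daily = prices.pct_change()
    labels = pd.Series(False, index=prices.index)
    for w in windows:
        fwd = prices.pct_change(w).shift(-w)              # forward w-day return
        vol = daily.rolling(lookback).std() * (w ** 0.5)  # sqrt-time scaled vol (assumption)
        labels |= (fwd.abs() >= sigmas * vol).fillna(False)
    return labels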

I run a high-capacity market neutral portfolio with approximately 450-550 constituents selected from a pool of ~1000 candidates comprised of mid- to mega-cap companies with deep liquidity. It is similar to the Q1500 but based on my own criteria: having tried to cross our universe with the Q1500, I found far too many tradable names missing that are neither LPs nor ADRs. The absence of some of those names seems almost random, or at the very least incongruent with the realities of running a market neutral book with a base wide enough that simulated results are statistically relevant and realistic upon going into production (taking into consideration short locates and carry costs).
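
For what it's worth, a toy version of that candidate-pool screen might look like the following; the column names and cutoffs are illustrative assumptions, not the author's criteria.

import pandas as pd

def candidate_pool(snapshot: pd.DataFrame) -> pd.DataFrame:
    # snapshot: one row per ticker with hypothetical columns
    # 'market_cap' and 'adv_dollars' (trailing average daily dollar volume).
    liquid = snapshot[
        (snapshot["market_cap"] >= 2e9)      # mid cap and above (assumed cutoff)
        & (snapshot["adv_dollars"] >= 1e7)   # deep liquidity (assumed cutoff)
    ]
    # Keep roughly the 1000 most liquid names; the strategy then selects
    # its ~450-550 constituents from this pool by its own criteria.
    return liquid.nlargest(1000, "adv_dollars")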

Running a two-sided portfolio with a wide constituency presents logistical challenges as well as opportunities. Far too often, aspiring PMs neglect to consider strategies designed to manage cash flows at the portfolio level when running a true high-capacity portfolio with hundreds of widely covered names. On the other hand, implementing a strategy overlay designed to harvest cash from existing inventory could reduce or entirely offset portfolio carry costs, including short locate fees and dividends paid on short positions. Often, portfolio inflows (interest on credit balances, dividends earned) are not sufficient to naturally cover outflows. There are two ways to harvest cash: lending stock, which would require a substantial portfolio, and selling covered volatility against existing positions, which potentially exposes the book to outlier risk.
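
To make the cash-flow point concrete, here is a back-of-the-envelope calculation; every number is purely illustrative and none of the rates come from the post.

# Hypothetical $100M book, $50M short / $50M long; all rates are made up.
gross_short = 50e6
locate_fees     = 0.0075 * gross_short   # 75 bps avg locate fee paid
short_dividends = 0.0150 * gross_short   # dividends paid on short positions
credit_interest = 0.0100 * gross_short   # interest earned on the short credit balance
net_carry = credit_interest - locate_fees - short_dividends   # -$625k: a drag

gross_long = 50e6
covered_premium = 0.0250 * gross_long    # assumed net premium from covered vol selling
print(net_carry + covered_premium)       # +$625k: the overlay offsets the carry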

This is where Multi-Sigma plays a key role. To harvest cash, I actively sell covered vola against many of the positions in inventory and use Multi-Sigma as an early-warning radar. If the signal fires, the algo begins to close out short vola positions where the underlying could get called away, or where the option position is only slightly profitable because it was recently put on. Further, depending on the strength of the signal, I may buy vola to gain long gamma exposure. In this context, the multi-sigma predictor is used as an overlay strategy against a delta neutral portfolio.
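
Sketching that decision logic in plain Python, with made-up position fields and thresholds (this is my paraphrase, not the author's code):

def overlay_actions(signal_strength, short_vol_positions):
    # signal_strength: 0 = no event predicted; higher = stronger warning.
    # Each position is a dict with hypothetical keys: 'moneyness' (spot/strike
    # of the short call) and 'pnl_pct' (P&L as a fraction of premium received).
    actions = []
    if signal_strength <= 0:
        return actions                          # keep harvesting premium
    for pos in short_vol_positions:
        near_strike = pos["moneyness"] >= 0.98  # underlying could get called away
        fresh = 0.0 < pos["pnl_pct"] < 0.10     # recently put on, barely profitable
        if near_strike or fresh:
            actions.append(("close", pos))      # buy back the short vola
    if signal_strength >= 2:
        actions.append(("buy_vol", None))       # take on long gamma exposure
    return actions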

Beta management is an entirely different process, unrelated to premia capture operations. This is another area where supervised ML generates alpha. First, my model deconstructs the market into custom factors and measures the beta load of each factor relative to portfolio beta, and also vs. the S&P 500 and MSCI Total World. In practice, however, a dogmatic adherence to "beta neutrality vs. the S&P 500" is not always in the best interest of the portfolio.
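
The factor beta measurement can be sketched as a plain OLS regression of portfolio returns on factor returns; this is a generic decomposition, not the author's proprietary model.

import numpy as np

def beta_loads(port_returns, factor_returns):
    # port_returns: (T,) array; factor_returns: (T, K) array of custom factors.
    X = np.column_stack([np.ones(len(port_returns)), factor_returns])
    coef, *_ = np.linalg.lstsq(X, port_returns, rcond=None)
    return coef[1:]   # per-factor beta loads (intercept dropped)

# Running the same regression against S&P 500 or MSCI Total World returns
# yields the headline beta that each factor load is compared against.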

Then there's capacity: there are two ways to fill capacity, one of them being through the application of leverage. A strategy such as this is considered through a different risk matrix by prime brokers and capital providers. In the case of PBs, they may allow the use of their balance sheet to increase the size of the book, going as high as 20x and sometimes more. But just because it's on offer doesn't mean one has to use it; we haven't gone beyond 15x leverage.


Remarkable! That's the highest Sharpe I've ever seen! I presume you're uploading custom data nightly, no?