First pass at AI-derived trading signals looking for multi-sigma events

Using our AI platform, we applied deep learning to generate trading signals. These are first-pass results using a naive trading approach.

def initialize(context):
    context.log = optimizeLog()
    context.startDate = None
    context.spy = sid(8554)
    context.svxy = sid(41968)
    context.xiv = sid(40516)
    context.sds = sid(32382)
    # = sid(32268)
    # Rebalance every day, an hour and a half after market open (11:00 ET).
    schedule_function(my_rebalance, date_rules.every_day(), time_rules.market_open(hours=1, minutes=30))
    # set_benchmark(sid(23921))

def handle_data(context, data):
    pass

# This function is scheduled to run once per day at 11AM ET.
def my_rebalance(context, data):
    now = get_datetime()    
    if context.startDate is None:
        context.startDate = now
    evt = findEvent(context.log, now, context.startDate)
    if evt is not None:
        log.info(evt["date"].strftime('%Y-%m-%d') + "  " + evt["position"])
        if evt["position"] == "L":
            # Long signal: exit the inverse-S&P hedge, put 30% in
            # inverse volatility (XIV) and the rest in SPY.
            order_target_percent(context.sds, 0.0)
            xivPosition = 0
            if data.can_trade(context.xiv):
                xivPosition = 0.3
                order_target_percent(context.xiv, xivPosition)
            order_target_percent(context.spy, 1 - xivPosition)
        elif evt["position"] == "S":
            # Short signal: move everything into SDS (2x inverse S&P).
            order_target_percent(context.spy, 0)
            if data.can_trade(context.xiv):
                order_target_percent(context.xiv, 0)
            order_target_percent(context.sds, 1.0)
        elif evt["position"] == "B":
            # Base signal: fully invested in SPY.
            if data.can_trade(context.xiv):
                order_target_percent(context.xiv, 0)
            order_target_percent(context.sds, 0)
            order_target_percent(context.spy, 1.0)

        evt["handled"] = True


def findEvent(log, now, start):
    # Return the most recent unhandled event dated at or before `now`
    # (the log is assumed to be sorted by date).
    e = None
    for d in log:
        if d["date"] > now:
            break
        if "handled" in d:
            continue
        if start == now or d["date"] > start:
            e = d
    return e


def loadPredictionLog():
    return [{'date': '2002-01-02', 'position': 'B'},
 {'date': '2002-06-11', 'position': 'L'},
 {'date': '2002-06-17', 'position': 'S'},
 {'date': '2002-07-19', 'position': 'L'},
 {'date': '2002-07-20', 'position': 'L'},
 {'date': '2002-08-26', 'position': 'S'},
 {'date': '2002-09-24', 'position': 'L'},
 {'date': '2002-09-25', 'position': 'L'},
 {'date': '2002-11-06', 'position': 'B'},
 {'date': '2003-03-04', 'position': 'L'},
 {'date': '2003-04-13', 'position': 'B'},
 {'date': '2003-08-05', 'position': 'L'},
 {'date': '2003-09-05', 'position': 'B'},
 {'date': '2008-04-02', 'position': 'L'},
 {'date': '2008-05-04', 'position': 'B'},
 {'date': '2008-09-03', 'position': 'S'},
 {'date': '2008-10-09', 'position': 'L'},
 {'date': '2008-10-10', 'position': 'L'},
 {'date': '2008-10-13', 'position': 'S'},
 {'date': '2008-10-15', 'position': 'L'},
 {'date': '2008-10-17', 'position': 'L'},
 {'date': '2008-10-21', 'position': 'S'},
 {'date': '2008-10-27', 'position': 'L'},
 {'date': '2008-10-28', 'position': 'L'},
 {'date': '2008-10-29', 'position': 'S'},
 {'date': '2008-11-17', 'position': 'L'},
 {'date': '2008-11-18', 'position': 'L'},
 {'date': '2009-01-02', 'position': 'S'},
 {'date': '2009-01-07', 'position': 'L'},
 {'date': '2009-01-08', 'position': 'L'},
 {'date': '2009-01-28', 'position': 'S'},
 {'date': '2009-02-28', 'position': 'L'},
 {'date': '2009-03-01', 'position': 'L'},
 {'date': '2009-06-21', 'position': 'B'},
 {'date': '2009-06-22', 'position': 'L'},
 {'date': '2009-08-30', 'position': 'B'},
 {'date': '2009-10-28', 'position': 'L'},
 {'date': '2009-12-06', 'position': 'B'},
 {'date': '2010-01-29', 'position': 'L'},
 {'date': '2010-03-12', 'position': 'B'},
 {'date': '2010-07-02', 'position': 'L'},
 {'date': '2010-08-18', 'position': 'B'},
 {'date': '2010-08-20', 'position': 'L'},
 {'date': '2010-10-01', 'position': 'B'},
 {'date': '2011-07-12', 'position': 'S'},
 {'date': '2011-09-01', 'position': 'B'},
 {'date': '2011-09-22', 'position': 'L'},
 {'date': '2011-10-27', 'position': 'B'},
 {'date': '2011-11-25', 'position': 'L'},
 {'date': '2011-12-30', 'position': 'B'},
 {'date': '2012-11-09', 'position': 'L'},
 {'date': '2012-12-17', 'position': 'B'},
 {'date': '2013-06-30', 'position': 'L'},
 {'date': '2013-07-31', 'position': 'B'},
 {'date': '2014-02-05', 'position': 'L'},
 {'date': '2014-03-09', 'position': 'B'},
 {'date': '2016-01-27', 'position': 'L'},
 {'date': '2016-03-20', 'position': 'B'},
 {'date': '2016-05-19', 'position': 'L'},
 {'date': '2016-06-19', 'position': 'B'}]

def optimizeLog():
    # Parse the log's ISO date strings into tz-aware datetimes once, at startup.
    import datetime
    import pytz
    log = loadPredictionLog()
    newLog = []
    for e in log:
        newDate = datetime.datetime.strptime(e["date"],'%Y-%m-%d').replace(tzinfo=pytz.utc)
        newLog.append({"date":newDate, "position":e["position"]})
    return newLog
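For readers outside Quantopian, the event-selection intent can be exercised standalone. This is a sketch, not the author's exact code: it reimplements findEvent with plain datetimes (no pytz), assuming the log is sorted by date and that handled or future events should be skipped:

```python
from datetime import datetime

def find_event(log, now, start):
    """Return the latest unhandled event dated at or before `now`
    that is strictly after `start` (or any event when start == now)."""
    e = None
    for d in log:                    # log is assumed sorted by date
        if d["date"] > now:
            break                    # future events are not visible yet
        if "handled" in d:
            continue                 # skip events already acted on
        if start == now or d["date"] > start:
            e = d
    return e

log = [{"date": datetime(2002, 6, 11), "position": "L"},
       {"date": datetime(2002, 6, 17), "position": "S"}]
start = datetime(2002, 6, 1)
evt = find_event(log, datetime(2002, 6, 18), start)
print(evt["position"])  # -> S (the latest signal at or before `now`)
```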


Here's the notebook for the above model.

Notebook previews are currently unavailable.

Hi Pej, thanks for sharing your work. If you are interested in being considered for an allocation to the strategy, I'd recommend:

  • Screen a large, dynamic universe, e.g. the Q1500US.
  • Reduce position concentration; I'd suggest starting with a cap of 5% or below per position.
  • Update the strategy to be cross-sectional with low common-factor exposure (sector, Fama-French, beta).

Then I'd run analyses to find the optimal trading time. The market-open window tends to have higher spreads and thus higher costs, which are not modeled in the simulation. I'd examine how the algo performs at different trading intervals and on different days.
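To see why trading time matters, here's a back-of-the-envelope cost model; the half-spread figures below are invented for illustration, not measured spreads:

```python
# Illustrative only: assumed half-spread costs, not measured data.
def annual_cost_drag(half_spread_bps, turnover_per_rebalance, rebalances_per_year):
    """Approximate annual return drag from crossing the spread."""
    per_trade = half_spread_bps / 10_000 * turnover_per_rebalance
    return per_trade * rebalances_per_year

# Trading at the open (wider spreads, assumed 5 bps) vs midday (assumed 1 bp),
# with 20% of the book turned over on each of ~252 daily rebalances.
open_drag = annual_cost_drag(5, 0.20, 252)    # ~2.5% per year
midday_drag = annual_cost_drag(1, 0.20, 252)  # ~0.5% per year
```

Even with these toy numbers, an edge of a few percent per year can disappear entirely into the spread if every rebalance crosses it at the open.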

Dan goes further into our selection criteria in his latest post. Good luck!


The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Hi Pej,

Thank you for sharing your work. Impressive stuff indeed. Did you consider using a larger universe and minimizing the drawdown? I will probably use your algorithm as the base for my own AI algorithm.



Hey Pej, very fascinating. Could you explain the hypothesis behind your algo? Looking through the code, it seems you're looking for multi-sigma events across four specific ETFs (SPY, SVXY, XIV, SDS).

My Questions:
1) What are multi-sigma events?
2) Why deep learning vs. regression vs. RL?
3) Why these four specific ETFs?
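On (1): "multi-sigma" usually means a daily return several standard deviations away from its recent mean. A minimal z-score sketch (the window and threshold here are arbitrary assumptions, not the author's parameters):

```python
import statistics

def sigma_events(returns, window=20, threshold=3.0):
    """Flag days whose return sits more than `threshold` standard
    deviations from the trailing `window`-day mean."""
    events = []
    for i in range(window, len(returns)):
        hist = returns[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.stdev(hist)
        if sd > 0 and abs(returns[i] - mu) > threshold * sd:
            events.append(i)
    return events

# 20 quiet days followed by one large negative shock
rets = [0.001, -0.001] * 10 + [-0.05]
print(sigma_events(rets))  # -> [20]
```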

Ah, I see what your strategy was: you're rotating between buying the S&P, shorting the S&P, and shorting volatility. But it looks like most of the returns came from XIV, the inverse-volatility ETN, which went belly up in 2018. Investors had flocked to it because of the 1200% returns, but it was highly risky. It's interesting that your beta remained as low as it did; technically the beta of your portfolio is accurate, but because the underlying ETN carried other risks, the risk of this portfolio was even higher than the beta was letting on.

Can you get the same results with just stocks, no ETFs? It looks like XIV was very controversial and was discontinued.

It looks to me like a simple rule based on momentum plus a stop loss might yield similar results to the AI approach. Do you have a baseline that does that? Thanks!
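For concreteness, a baseline like the one suggested could look like this: go long when price is above a trailing moving average, exit to cash on a fixed stop loss. All parameter values are illustrative assumptions, not a tuned strategy:

```python
def momentum_stop_baseline(prices, ma_window=10, stop_loss=0.05):
    """Long when price > trailing MA; exit to cash if the position
    falls more than `stop_loss` below its entry price."""
    position, entry = 0, None
    signals = []
    for i in range(ma_window, len(prices)):
        ma = sum(prices[i - ma_window:i]) / ma_window
        if position == 1 and prices[i] < entry * (1 - stop_loss):
            position, entry = 0, None          # stop-loss triggered
        elif position == 0 and prices[i] > ma:
            position, entry = 1, prices[i]     # momentum entry
        elif position == 1 and prices[i] < ma:
            position, entry = 0, None          # momentum exit
        signals.append(position)
    return signals

# Enter on the breakout above the MA, then get stopped out on the crash.
print(momentum_stop_baseline([100] * 10 + [101, 90]))  # -> [1, 0]
```

Comparing the AI signals against something this simple would show how much of the edge comes from the model rather than from general trend-following.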