Contest Results Plot

I just found out about Quantopian last week. This is an incredible place! I'm envious of those that are young and starting out...having a resource like this freely available to play with is invaluable. All one needs is a cheap Chromebook and an Internet connection to get going.

There is a steep learning curve, and I'm wondering whether it's a worthy investment of my time to learn this stuff and experiment with algorithms, or whether I'll just be chasing a unicorn.

I decided to look at how the real-money ports are doing. They don't make it easy, and some data mining is involved. Here's a notebook that plots their cumulative returns using pandas and matplotlib. It's fairly dirty; I've done some Python but have never used these modules. Supposedly there's some way of running this stuff through Pyfolio, but I haven't figured that out yet. All in due time.
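For anyone who wants to try the same thing, a minimal sketch of the notebook's core step, compounding each contestant's daily returns into a cumulative-return curve, might look like this. The column layout and the tiny synthetic frame are assumptions; in the notebook the frame would come from the mined contest data (e.g. `pd.read_csv("winners-real-money-daily-returns.csv", index_col=0, parse_dates=True)`):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# Tiny synthetic frame of daily returns, one column per contestant; in practice
# this would be loaded from the mined contest data instead.
daily = pd.DataFrame(
    {"Winner A": [0.01, -0.005, 0.002], "Winner B": [-0.002, 0.004, 0.001]},
    index=pd.date_range("2016-01-04", periods=3),
)

# Compound each day's return into a cumulative-return curve per contestant.
cumulative = (1 + daily).cumprod() - 1

ax = cumulative.plot(title="Contest winners: cumulative returns")
ax.set_ylabel("Cumulative return")
```

The same `(1 + r).cumprod() - 1` idiom works whether the returns come from a CSV or from the research environment.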

From the looks of it, Michael Van Kleeck is doing something right. And Spencer Singleton is hanging in there.

[Attached notebook; preview unavailable.]
13 responses

It looks like the winner of contest 3, Jeff Koch, is missing.

Man this whole beating the market thing is no easy task.

I'm not sure what happened to Jeff Koch's algo. He did post his winning entry: Contest 3

The contest performance stopped being updated on December 31. I wonder how the contestants performed during the first day of the 2016 trading year with the 1-2% decline in the broad market.

This has been updated to 1/12/2016.

Also included is a full tear sheet of one of the algorithms.

[Attached notebook; preview unavailable.]

Nice, always interesting to see how other contest winners are doing. You should add in SPY buy-and-hold to add perspective as well. That would be cool in my opinion.
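A buy-and-hold benchmark is easy to add once you have SPY prices: rebase the price series to a cumulative return starting at zero, then plot it on the same axes. A minimal sketch, with hypothetical SPY closes standing in for real data:

```python
import pandas as pd

# Hypothetical SPY closing prices; in a Quantopian notebook these would come
# from the pricing API rather than being hard-coded.
spy = pd.Series([200.0, 198.0, 202.0],
                index=pd.date_range("2016-01-04", periods=3))

# Rebase prices to a buy-and-hold cumulative return starting at 0, so the
# series is directly comparable to the contestants' cumulative-return curves.
spy_cumret = spy / spy.iloc[0] - 1
```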

Here's an update to my winning algo. --Grant

[Attached backtest: clone link and summary metrics (total returns, max drawdown, benchmark returns, and 1/3/6/12-month alpha, beta, Sharpe, Sortino, volatility, and max drawdown) not rendered in this preview.]
import numpy as np
import pandas as pd

def initialize(context):
    context.stocks = [ sid(7792),
                       sid(1637) ]
    context.m = len(context.stocks)
    context.eps = 1.0028 # change epsilon here
    context.b_t = np.ones(context.m) / context.m
    schedule_function(trade, date_rules.every_day(), time_rules.market_open(minutes=60))
def handle_data(context,data):
    pass
def trade(context,data):
    prices = history(20*390,'1m','price')
    prices = pd.ewma(prices, span = 390).as_matrix(context.stocks)
    # skip bar if any orders are open
    for stock in context.stocks:
        if bool(get_open_orders(stock)):
            return
    sum_weighted_port = np.zeros(context.m)
    sum_weights = 0
    for n in range(0,len(prices[:,0])+1):
        (weight,weighted_port) = get_weighted_port(data,context,prices,n)
        sum_weighted_port += weighted_port
        sum_weights += weight
    allocation_optimum = sum_weighted_port/sum_weights
    rebalance_portfolio(context, allocation_optimum)
def get_weighted_port(data,context,prices,n):
    # update portfolio
    for i, stock in enumerate(context.stocks):
        context.b_t[i] = context.portfolio.positions[stock].amount*data[stock].price
    denom = np.sum(context.b_t)
    # test for divide-by-zero case
    if denom == 0.0:
        context.b_t = np.ones(context.m) / context.m
    else:
        context.b_t = np.divide(context.b_t,denom)

    x_tilde = np.zeros(context.m)

    b = np.zeros(context.m)
    # find the predicted price relative (mean price / latest price) for each security
    for i, stock in enumerate(context.stocks):
        mean_price = np.mean(prices[-n:,i])
        x_tilde[i] = mean_price/prices[-1,i]
    # Inside of OLMAR (algo 2)

    x_bar = x_tilde.mean()

    # Calculate terms for lambda (lam)
    dot_prod =, x_tilde)
    num = context.eps - dot_prod
    denom = (np.linalg.norm((x_tilde-x_bar)))**2

    # test for divide-by-zero case
    if denom == 0.0:
        lam = 0 # no portfolio update
    else:
        lam = max(0, num/denom)
    b = context.b_t + lam*(x_tilde-x_bar)

    b_norm = simplex_projection(b)
    weight =, x_tilde)
    return (weight,weight*b_norm)

def rebalance_portfolio(context, desired_port):
    for i, stock in enumerate(context.stocks):
        order_target_percent(stock, desired_port[i])

def simplex_projection(v, b=1):
    """Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by [email protected] AT
Optimization Problem: min_{w}\| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = z, w_{i}\geq 0

Input: A vector v \in R^{m}, and a scalar z > 0 (default=1)
Output: Projection vector w

>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print proj
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print proj.sum()
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2012 by Thomas Wiecki ([email protected]).
    """
    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p+1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho+1)])
    w = (v - theta)
    w[w<0] = 0
    return w
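For reference, the update that `get_weighted_port` implements is the OLMAR step (notation as in Li and Hoi's OLMAR paper, with the prediction x̃ built from mean prices and the threshold ε taken from `context.eps`):

```latex
% Predicted price relative for asset i: mean of recent prices over latest price
\tilde{x}_{i} = \frac{\bar{p}_{i}}{p_{i,t}}

% Step size: zero unless the current portfolio's predicted return falls below \epsilon
\lambda_{t+1} = \max\!\left(0,\;
    \frac{\epsilon - b_{t}^{\top}\tilde{x}_{t}}
         {\lVert \tilde{x}_{t} - \bar{x}_{t}\mathbf{1} \rVert^{2}}\right)

% Update, then project back onto the portfolio simplex \Delta_{m}
b_{t+1} = \Pi_{\Delta_{m}}\!\left(b_{t} + \lambda_{t+1}
          \left(\tilde{x}_{t} - \bar{x}_{t}\mathbf{1}\right)\right)
```

`simplex_projection` supplies the final projection, so the weights stay non-negative and sum to one.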
We have migrated this algorithm to work with a new version of the Quantopian API. The code is different than the original version, but the investment rationale of the algorithm has not changed. We've put everything you need to know here on one page.
There was a runtime error.
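As a quick standalone sanity check of the projection step, here is a self-contained re-implementation of `simplex_projection` (same truncate-then-project logic as in the algorithm above) applied to the docstring's example vector:

```python
import numpy as np

def simplex_projection(v, z=1.0):
    """Project v onto {w : w >= 0, sum(w) = z} (Duchi et al., ICML 2008)."""
    v = np.asarray(v, dtype=float)
    v = np.clip(v, 0, None)                      # drop negative entries first, as above
    u = np.sort(v)[::-1]                         # sort descending
    sv = np.cumsum(u)
    rho = np.where(u > (sv - z) / np.arange(1, len(v) + 1))[0][-1]
    theta = max(0.0, (sv[rho] - z) / (rho + 1))  # shrinkage threshold
    w = np.clip(v - theta, 0, None)
    return w

proj = simplex_projection([0.4, 0.3, -0.4, 0.5])
# proj is non-negative and sums to 1, so it is directly usable as portfolio weights
```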

The leverage spike on Aug 24th is clearly visible. I am live-trading a version of this myself now, among others.

Pretty classic. Outperforms massively in-sample, doesn't do so well OOS.

A few papers to consider looking at:

Backtesting (Harvey and Liu), Pseudocharlatanism, and Quantifying Backtest Overfitting

This all fits under the area of p-hacking or irreproducibility in the sciences.

Usually simpler is better in statistics. Much simpler is much better in algorithmic trading.

"Usually simpler is better in statistics. Much simpler is much better in algorithmic trading. "

Correct, but if you say that a couple more times, the folks here will quote Investopedia at you, telling you that active trading adds some diversification which is hard to achieve by the buy-and-holders...they will probably also try to convince you that SPY is not the right benchmark, etc., etc.

Here is how I know this:

winners-real-money-daily-returns.csv was last updated on 4/19/2016

By that date the results were:

Grant Kiehne -10.16%
Simon Thornington -7.70%
Szilvia Hegyi -10.63%
Michael Van Kleeck 4.21%
Spencer Singleton -2.26%
Pravin Bezwada 8.73%
Robert Shanks -1.26%
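Putting those numbers into a Series makes the ranking and the win rate easy to read off (figures copied from the list above, in percent):

```python
import pandas as pd

# Final cumulative returns as of 4/19/2016, taken from the list above (percent).
returns = pd.Series({
    "Grant Kiehne": -10.16, "Simon Thornington": -7.70, "Szilvia Hegyi": -10.63,
    "Michael Van Kleeck": 4.21, "Spencer Singleton": -2.26,
    "Pravin Bezwada": 8.73, "Robert Shanks": -1.26,
})

# Rank the winners; only two of the seven are positive out of sample.
ranked = returns.sort_values(ascending=False)
```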

@Nick Firoozye
The correct title of the second paper is not Pseudocharlatanism but Pseudo-Mathematics and Financial Charlatanism.

Updated to 6-29-16

[Attached notebook; preview unavailable.]

The beta of this strategy is almost 1, so in effect there is no difference between the strategy and the benchmark. It's typical overfitting.
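For anyone who wants to check this on their own tear sheet, beta is just the regression slope of strategy returns on benchmark returns. A sketch with synthetic daily returns standing in for the real series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily returns: a benchmark series plus a strategy that mostly
# tracks it, standing in for real tear-sheet data.
benchmark = rng.normal(0.0005, 0.01, 252)
strategy = 0.95 * benchmark + rng.normal(0.0, 0.002, 252)

# Beta = Cov(strategy, benchmark) / Var(benchmark); a value near 1 means the
# strategy's returns are mostly market exposure.
beta = np.cov(strategy, benchmark)[0, 1] / np.var(benchmark, ddof=1)
```

A beta near 1 with little residual alpha means the equity curve is mostly repackaged market exposure.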