This algo is an implementation of Universal Portfolios, described in a paper by Professor Thomas M. Cover of Stanford, whose book is one of the standard textbooks in information theory. For the implementation tricks that bring the theory into practice, please refer to the comments in the code.

This model makes no statistical assumption about the distribution of asset prices (e.g. normal, log-normal). It also leaves little room for backtest-fitting, as the only parameters are (i) the choice of assets in the portfolio and (ii) the look-back period. I chose eight very basic ETFs and a one-year period.

Interestingly, in late 2008 the model decided to own none of the 8 ETFs and just held cash through the financial crisis. Even more interestingly, the model again sold all ETFs on August 26, 2015, so we are simply holding $3M in cash today (having started with $1M 10 years ago)...

Note: It is long-only and has high beta - not suitable for the contest. But it could potentially be a useful framework for allocating funds across algos in a hedge fund.
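The core weighting scheme can be sketched outside Quantopian with plain NumPy. This is a minimal illustration, not the attached algo itself: it uses synthetic random returns for 3 hypothetical assets, builds the 10%-increment weight grid with `itertools.product` rather than the recursive `binnings` helper, and weights each candidate portfolio by its raw cumulative wealth (Cover's formulation, which keeps the averaged weights non-negative and summing to one), whereas the attached code weights by excess return and clips negatives afterward.

```python
import numpy as np
from itertools import product

def weight_grid(n_assets, intervals):
    """All long-only weight vectors on a 1/intervals grid that sum to 1."""
    grid = [w for w in product(range(intervals + 1), repeat=n_assets)
            if sum(w) == intervals]
    return np.array(grid, dtype=float) / intervals

def universal_weights(returns, intervals=10):
    """Wealth-weighted average over every constant-rebalanced grid portfolio."""
    X = returns + 1.0                       # gross daily returns, shape (days, assets)
    B = weight_grid(returns.shape[1], intervals)  # candidates, shape (k, assets)
    S = np.prod(B @ X.T, axis=1)            # cumulative wealth of each candidate
    return (S @ B) / S.sum()                # average weights, weighted by wealth

# Synthetic one-year history for 3 hypothetical assets
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(252, 3))
w = universal_weights(returns)
print(w, w.sum())  # weights are non-negative and sum to 1
```

Because each row of B sums to one and the wealth values S are positive, the averaged weight vector automatically sums to one; the clipping step in the algo below only matters because it divides by sum(|S|) of excess returns instead.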


'''
---------------------------------------
UNIVERSAL PORTFOLIOS
---------------------------------------
Implementation of a strategy inspired by the paper by Thomas M. Cover, information theorist
Implementation authored by Naoki Nagai, 2015

Description:
This algo is a Quantopian Python implementation of Universal Portfolios, described in
a paper by Professor Thomas M. Cover from Stanford. The Universal Portfolio is
mathematically proven to achieve a return close to that of the optimal constantly
rebalanced portfolio chosen in hindsight.

Methodology:
Let us construct regularly rebalanced portfolios with fixed weights given to each
security (e.g. 40% equity, 40% bond, 20% gold). What would be the optimal weight for
each asset? In this methodology, we evaluate a portfolio for every possible
combination of weights and calculate the return of each. Our Universal Portfolio is
then the weighted average of all of these possible portfolios, weighted by the
performance of each. We make no statistical assumptions about the underlying
distribution of prices; it is based purely on historical pricing data.

Proof:
Professor Cover's paper shows that the return S^ generated by this methodology
approaches S*, the return of the regularly rebalanced portfolio with the optimal
constant weights selected in hindsight. Even though we select the universal portfolio
before knowing how it turns out, it approaches the optimal portfolio that is selected
only after the performance is known.
(for the proof, see http://www-isl.stanford.edu/~cover/papers/paper93.pdf)

Analogy:
The algo works something like this: we have tens of thousands of portfolio managers,
each deciding their own allocation. Looking at their performance over the past year,
we allocate our investment funds in proportion to each manager's past one-year
return. You can imagine why this probably works.

Implication:
Perhaps the Q fund could allocate its entire fund across algos using this methodology.
'''
import numpy as np

def initialize(context):
    set_symbol_lookup_date('2015-01-01')
    context.equities = symbols(          # Equity
        'VV',   # US large cap
        'VO',   # US mid cap
        'VB',   # US small cap
    )
    context.fixedincome = symbols(       # Fixed income
        'TLT',  # Long-term government bond
        'IEF',  # Mid-term government bond
        'LQD',  # Corporate bond
    )
    context.realasset = symbols(         # Commodity and REIT
        'GLD',  # Gold
        'VNQ',  # US REIT
    )
    context.securities = context.equities + context.fixedincome + context.realasset

    context.period = 252                 # One year to evaluate past performance
    context.lever = 2.0                  # Leverage
    context.allocation_intervals = 10    # Allocation intervals (100% / 10 = 10% increments)

    context.weights = dict.fromkeys(context.securities, 0)
    context.shares = dict.fromkeys(context.securities, 0)

    # The analyze function determines the target weights for the week
    schedule_function(analyze, date_rules.week_start(days_offset=2), time_rules.market_close())

    # The rebalance function determines the target shares for the day
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(minutes=60))

def analyze(context, data):
    # History of prices
    prices = history(bar_count=context.period + 1, frequency='1d', field='price', ffill=True)

    # Returns are the daily changes; drop empty securities and the leading NaN row
    returns = prices.pct_change().dropna(how='all', axis=1)[1:]

    # Convert to NumPy for faster calculation
    X = np.array(returns)
    X[np.isnan(X)] = 0

    # Transpose and add 1 (i.e. change -0.01 -> 0.99)
    X = X.transpose() + 1.0
    (n, m) = X.shape

    # In theory, we are supposed to integrate wealth over all possible portfolios.
    # That cannot be done in practice, so we approximate the integral discretely,
    # varying each weight in 10% steps due to memory constraints.
    # B is a matrix containing the weights of every possible portfolio.
    B = binnings(context.allocation_intervals, n) / context.allocation_intervals

    # We try every combination of weights for the n securities; there are
    # C(intervals + n - 1, n - 1) such portfolios = 19,448 for 8 securities
    log.info('--- Universal Portfolio: evaluated %d possible portfolios for %d assets' % B.shape)

    # S is the wealth vector corresponding to each portfolio, calculated as follows:
    # - B contains the weight vector of every portfolio, X the daily return of each asset
    # - By matrix algebra, BX gives the daily returns of each portfolio over the past year
    # - The product of BX along axis 1 (time) is the annual return of each portfolio
    S = np.prod(np.dot(B, X), axis=1) - 1

    # Finally, the weights are the weighted average over all possible portfolios,
    # using the past one year of returns as the weight. We can do this as SB/|S|
    W = np.dot(S, B) / sum(abs(S))

    # Store the weights in the context variable. We calculate this weekly;
    # the actual ordering of shares is performed in the rebalance function.
    i = 0
    for sec in returns:
        log.info('%4s: % 2.1f (%s)' % (sec.symbol, W[i] * 100, sec.security_name))
        if sec in data:
            # We keep the weights long-only. After taking the weighted average of all
            # portfolio returns, the weighted average for a security can end up
            # negative. That does not mean we should short it; it means we should
            # simply not invest in securities with negative weight.
            context.weights[sec] = max(0, W[i])
        i = i + 1

# From the target weights, calculate how many shares we should be owning
def rebalance(context, data):
    # Take a 3-day average to avoid over-reacting to daily price fluctuations
    prices = history(3, frequency='1d', field='price', ffill=False).mean()
    for sec in context.weights:
        # Target weight for this asset
        target_weight = context.weights[sec] * context.lever
        # How many shares should we be holding?
        target_share = context.portfolio.portfolio_value * target_weight / prices[sec]
        # Record the target shares
        context.shares[sec] = target_share

def execute(context, data):
    # Average daily trading volume over the last 3 days
    tradingvolume = history(3, frequency='1d', field='volume', ffill=True).mean()
    for sec in context.shares:
        # If the security has no data, skip
        if sec not in data:
            continue
        # If we still have outstanding orders, skip
        if sec in get_open_orders():
            continue
        # How many shares are we targeting?
        target_share = context.shares[sec]
        # How many shares do we have now?
        current_share = context.portfolio.positions[sec].amount
        # The shares to trade are the gap between the current and target shares
        trade_share = target_share - current_share
        # The trade cannot exceed a fraction of the recent per-minute trading volume
        trade_share = min(trade_share, tradingvolume[sec] / 390 / 5)   # for buying shares
        trade_share = max(trade_share, -tradingvolume[sec] / 390 / 5)  # for selling shares
        # Don't trade less than $1000, to save commissions
        if abs(trade_share * data[sec].price) < 1000:
            continue
        # Place the order
        order_target(sec, current_share + trade_share)

def handle_data(context, data):
    w = context.weights
    record(equities=sum(w[s] for s in context.equities if w[s] > 0))
    record(fixedincome=sum(w[s] for s in context.fixedincome if w[s] > 0))
    record(realassets=sum(w[s] for s in context.realasset if w[s] > 0))
    record(cash=max(0, context.portfolio.cash) / context.portfolio.portfolio_value)
    execute(context, data)

# Thanks to the smart implementation by the user 'bar' from Stack Overflow
# http://stackoverflow.com/questions/6750298/efficient-item-binning-algorithm-itertools-numpy
def binnings(n, k, cache={}):
    if n == 0:
        return np.zeros((1, k))
    if k == 0:
        return np.empty((0, 0))
    args = (n, k)
    if args in cache:
        return cache[args]
    a = binnings(n - 1, k, cache)
    a1 = a + (np.arange(k) == 0)
    b = binnings(n, k - 1, cache)
    b1 = np.hstack((np.zeros((b.shape[0], 1)), b))
    result = np.vstack((a1, b1))
    cache[args] = result
    return result
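Stripped of the Quantopian API, the binnings helper can be sanity-checked standalone (plain NumPy, no platform dependencies): distributing n grid units across k bins should enumerate every composition, i.e. C(n+k-1, k-1) rows, each summing to n. For the algo's settings (10 intervals, 8 securities) that is C(17, 7) = 19,448 candidate portfolios, matching the comment in analyze.

```python
import numpy as np
from math import comb

def binnings(n, k, cache={}):
    """All ways to distribute n identical units into k ordered bins."""
    if n == 0:
        return np.zeros((1, k))
    if k == 0:
        return np.empty((0, 0))
    args = (n, k)
    if args in cache:
        return cache[args]
    a = binnings(n - 1, k, cache)
    a1 = a + (np.arange(k) == 0)                     # add one unit to the first bin
    b = binnings(n, k - 1, cache)
    b1 = np.hstack((np.zeros((b.shape[0], 1)), b))   # first bin stays empty
    result = np.vstack((a1, b1))
    cache[args] = result
    return result

B = binnings(10, 8)
print(B.shape)                 # (19448, 8), i.e. C(17, 7) portfolios
print(B.sum(axis=1).min(), B.sum(axis=1).max())  # every row sums to 10
```

Dividing B by the interval count, as the algo does, turns each row into a long-only weight vector on a 10% grid summing to 100%. Note the mutable-default cache is shared across calls, which is exactly what makes the recursion cheap here.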