How to Get an Allocation in 2019

At Quantopian, we continue to gain experience and deeper insight into how to run our fund and what types of algorithms work best in our investment process. In addition, we have frequent discussions with signed authors, where we learn about their challenges crafting investment algorithms and their questions about the fund. With the benefit of these lessons, we can provide actionable guidance on what to focus on and what to set aside when developing algorithms.

Our approach has changed quite a bit. Initially, we ran your licensed algorithm pretty much exactly as you coded it, as its own standalone portfolio. This had several downsides, the most severe being the lofty requirements it placed on the algorithm to appropriately manage trading and risk. That approach also made it very difficult for us to effectively manage risk across multiple strategies.

Our new investment approach is called "factor combination", where we do not view your algorithm as something that emits trades we execute, but rather as a "factor" (i.e. a daily scalar score for every stock in your universe that should be predictive of future returns). While we can't directly observe the underlying factors in your algorithm, we use the end of day (EOD) holdings of your algorithm as an approximation. In this analysis, we completely ignore individual trades and also recompute your returns based on your EOD holdings.
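The holdings-based recomputation described above can be sketched in a few lines of pandas. The tickers, weights, and returns below are purely hypothetical, and this is a simplification of whatever pipeline Quantopian actually runs:

```python
import pandas as pd

# Hypothetical EOD portfolio weights: rows are trading days, columns are stocks.
weights = pd.DataFrame(
    {"AAA": [0.5, 0.6], "BBB": [-0.5, -0.4]},
    index=pd.to_datetime(["2019-01-02", "2019-01-03"]),
)

# Close-to-close stock returns realized on the following days.
returns = pd.DataFrame(
    {"AAA": [0.01, 0.02], "BBB": [-0.02, 0.01]},
    index=pd.to_datetime(["2019-01-03", "2019-01-04"]),
)

# The return attributed to day t's EOD holdings is the weighted sum of each
# stock's return from t's close to the next close -- individual trades
# during the day are ignored entirely.
recomputed = (weights.shift(freq="B") * returns).sum(axis=1)
```

Note that only the end-of-day snapshot enters the calculation, which is why intraday execution details drop out of the analysis.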

We then select multiple factors from the community, potentially do some transformations on them (e.g., to manage turnover), and then combine them together into a single factor. We then plug this combined factor into our optimizer to form our final portfolio. This approach gives us more flexibility and significantly lowers the requirements on individual strategies, allowing us to license new strategies faster and in greater numbers than before.
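As a toy illustration of this combination step (the scores and the equal weighting below are hypothetical; the actual transformation and weighting of community factors is not disclosed):

```python
import pandas as pd

# Hypothetical scores from two community factors on a five-stock universe.
factor_a = pd.Series({"AAA": 2.0, "BBB": 1.0, "CCC": 0.0, "DDD": -1.0, "EEE": -2.0})
factor_b = pd.Series({"AAA": 0.3, "BBB": -0.1, "CCC": 0.5, "DDD": -0.2, "EEE": -0.4})

def zscore(f):
    return (f - f.mean()) / f.std()

# Equal-weighted combination of standardized factors; a production system
# would also transform the factors (e.g. to manage turnover) and weight
# them by quality and mutual correlation.
combined = (zscore(factor_a) + zscore(factor_b)) / 2

# Turn the combined factor into dollar-neutral weights with unit gross
# exposure (a real optimizer additionally enforces turnover and other
# constraints).
weights = combined - combined.mean()
weights /= weights.abs().sum()
```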

Given this shift in our investment process, we also want to explain how we are thinking about the contest. We are not immediately making changes to the contest rules or guidelines for the "factor combination" approach. Even with our new approach, the contest continues to be our best resource for finding authors to fund; we look closely at the authors who enter the contest, given the skill and effort it demands. In addition, changing the contest rules is a costly procedure (time and resources for Quantopian engineers and community members) and we'd like to see how the suggestions in this post are received before we make any rule changes.

New Suggestions for Aspiring Fund Authors

With this updated investment process, we have some new suggestions for developing new algorithms / factors:

  • Your primary focus should be on coming up with new factors with positive, medium-term alpha (see below on what that means).
  • Find alpha in new places: As price-derived factors have been around for ages, your chances of finding alpha that is likely to persist into the future will be maximized if you look at alternative datasets like estimates. You might also consider uploading your own data via self-serve.
  • Set your trades to execute 1-2 hours before the close: since we evaluate your algorithm purely based on its EOD holdings, it does not matter how you got into your portfolio by the close. Whether you trade at the open or at the close, we won't see a difference, even though it will look different to you in the backtester. By setting your trades to execute near the close, they are almost certain to get filled, and your backtester performance will look similar to what our own analysis shows. Finally, when your strategy's performance depends on the trade time, that's indicative of short-term alpha, which we are currently not trying to capture (more on that further below). Here is some example code: schedule_function(rebalance, date_rules.every_day(), time_rules.market_close(hours=1))
  • Be mindful of alpha decay: we try to have our final portfolio follow the combined factor as closely as possible, but there is always going to be some delay due to turnover restrictions. As such, if your portfolio only exhibits alpha on the first day, it might actually detract on subsequent days, when it actually makes its way into our portfolio. It is thus critical to know how predictive your factor is when it is delayed by a few days. This alpha decay analysis is the central point of our new tearsheet: in fact, it is what we use to evaluate your algorithm, so it should be your primary tool, as it shows the alpha decay front and center. Unfortunately, the backtest analysis screen will not show you this information: if you have a factor with lots of short-term alpha, it will look great in the backtester (which assumes no trading delay), but to us it will not look attractive, as we want positive alpha over several days (anywhere from 5 to 10). If you do have short-term alpha that you want to extend, one possible approach is to increase the lookback of your factor (e.g. use 6-month instead of 3-month momentum). Keep in mind that there is no easy trick for finding alpha with slower decay rates; that is where having innovative ideas is most valuable.
  • If you need to reduce turnover, average your factor over multiple days rather than subsampling (e.g. only trading weekly): Related to the point above, we don't target 100% turnover per day; we try to build exposure to your portfolio subject to all constraints, and we would rather not trade into a stale portfolio that you artificially subsampled. A better method to reduce turnover is to apply a moving average (e.g. 5 days) to your factor scores. Pipeline makes this very easy (SimpleMovingAverage(inputs=[factor], window_length=5)); that way turnover is reduced while the factor still updates daily. You can use the alpha decay analysis to iterate on how much you can slow down the factor while still capturing alpha. In general, turnover <20% and alpha decay that doesn't drop below 0.5 in the first 5-10 days is ideal; 20-30% turnover is a grey area.
  • Worry less about transaction costs: The impact of one factor in a large mix of other factors where you don't know how it's traded is impossible to estimate. As such, you should worry less about the cost to trade, since our portfolio is now less sensitive to a single fill price. An exception to this rule is an algorithm that only trades small, illiquid names; that portfolio is harder to scale, so we have a harder time funding an algorithm with that universe.
  • Use the optimizer with care: while it's tempting to squash all risk by relying on the optimizer, doing so can have very detrimental effects on your alpha. For example, one thing we often observe is that, due to certain constraint settings, the resulting portfolio ends up equal-weighted. Your original factor, which assigns high scores to stocks where it has a lot of confidence and low scores where it doesn't, then loses all that valuable sensitivity in the optimizer. Remember that we only see your final EOD holdings, not your actual factor scores, so try to make your final portfolio the most accurate representation of the original factor. To achieve this, use the optimizer as little as possible and don't worry too much about exposures, especially if specific returns look good. Code-wise, you should not use MaximizeAlpha and should instead use TargetWeights. This is a good place to start: order_optimal_portfolio(opt.TargetWeights(weights), constraints=[])
  • Try to keep your universe as large as possible: Evaluating your factor on a large universe (e.g., the whole QTU) will make it harder for you to overfit and give us more optionality. It's natural for your signal to work better on stocks where factor scores are extreme, but if you keep the sensitivity of your signal (see the previous point) most of your returns will come from those extremes anyway as they will get the largest allocation if you don't squash things to equal-weight with the optimizer.
  • Use proper hold-out testing: overfitting is still the biggest risk you can run into and you should be paranoid about it. One easy safeguard is to never evaluate your factor on the last 1 to 2 years of data until you are absolutely happy with it, and then test it only once on that hold-out period. If it fails, you should suspect that you overfit somewhere in your process, or that your factor favors certain market regimes.
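The alpha decay idea in the suggestions above can be illustrated with a lagged information coefficient on simulated data. Everything below is synthetic (an AR(1) factor with a small, genuinely predictive component), and the real tearsheet analysis is more involved, but it shows how delaying a persistent factor erodes its predictiveness:

```python
import numpy as np

rng = np.random.default_rng(7)
n_days, n_stocks = 250, 100

# Hypothetical persistent factor: an AR(1) score per stock (autocorr 0.8).
factor = np.empty((n_days, n_stocks))
factor[0] = rng.standard_normal(n_stocks)
for t in range(1, n_days):
    factor[t] = 0.8 * factor[t - 1] + rng.standard_normal(n_stocks)

# Next-day returns contain a small dose of yesterday's factor plus noise.
returns = np.zeros_like(factor)
returns[1:] = 0.2 * factor[:-1] + rng.standard_normal((n_days - 1, n_stocks))

def mean_ic(delay):
    # Average cross-sectional correlation between the factor `delay` days
    # ago and today's returns -- the lagged information coefficient.
    ics = [
        np.corrcoef(factor[t - 1 - delay], returns[t])[0, 1]
        for t in range(1 + delay, n_days)
    ]
    return float(np.mean(ics))

# IC at delays 0..5: it should shrink roughly geometrically with the delay.
decay = [mean_ic(d) for d in range(6)]
```

A factor whose `decay` stays clearly positive out to 5-10 days of delay is the kind of medium-term alpha the post asks for; one whose IC collapses after a day or two is short-term alpha the fund cannot capture.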

We hope that these pointers clarify what we are looking for, help focus your efforts, and keep you from wasting time on the wrong things. It will not be the last time that we learn new things and provide updated advice. Thanks for your eagerness to learn, and please let us know if there are any questions. Thanks also to our fund authors for proofreading and providing valuable feedback on this post.


The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

45 responses

Thanks. I did have several algos that did well through alphalens but performed poorly after using the optimizer.

So, in the new setup the factors are significantly more important than the optimized portfolio. This is an extreme departure from the current setup, where adhering to constraints (optimization) was necessary to be part of the contest/allocation. It renders all attempts to minimize exposure to style/sector factors futile. But any change towards a greater goal is welcome!!

I am still skeptical of the new approach!

While we can't directly observe the underlying factors in your
algorithm, we use the end of day (EOD) holdings of your algorithm as
an approximation

Use the optimizer with care: while it's tempting to squash all risk by
relying on the optimizer, doing so can have very detrimental effects
on your alpha

The above change will only focus on extreme alpha values from individual contributors (as you will only have a subset of alpha values via holdings). An intermediate alpha with great risk-reducing capability will not be part of this mix. So how will your optimizer pick risk-reducing trades given that you will have no alpha value at all for some stocks (which could potentially be great in terms of risk reduction)?

Isn't it better to just skip the rigmarole and ask contributors to submit alpha factors for all stocks in QTU? Once you have this info, it becomes much easier to create a portfolio using in-house optimizer, eliminate highly correlated alphas, do style/sector analysis of alpha factors amongst other benefits. What's the harm in this approach?

Isn't it better to just skip the rigmarole and ask contributors to submit alpha factors for all stocks in QTU? Once you have this info, it becomes much easier to create a portfolio using in-house optimizer, eliminate highly correlated alphas, do style/sector analysis of alpha factors amongst other benefits. What's the harm in this approach?

In a world with unlimited engineering resources that's what we'd do.

I don't understand why it would need unlimited engineering resources. For the contributor, it's just one step short of submitting a portfolio (submit the alpha factor). For Q, the process would still be the same (simpler, rather: no need to derive any alpha from holdings). Am I missing something?

This leads all attempts to minimize exposure to style/sector factors as futile.

You still need to deliver specific (as opposed to common) returns. In many cases the optimizer provides the easiest way to orthogonalize your alpha to the common risk factors it may have incidental exposure to.

As I understand it, the new setup discourages the use of the optimizer. And while the optimizer did control risk exposure, there were several shortcomings to using MaximizeAlpha (no risk control, no trade-size control, quadratic penalties). I am saying this because I was working on creating an optimizer that did better than vanilla MaximizeAlpha. Seems like that won't be useful anymore, or maybe it will.

So if I understand correctly, we need some kind of alpha-weighted portfolio. Are there any constraints other than turnover one should care about in the new setup? For example:

  1. Number of Names
  2. Maximum Position Size
  3. Beta Exposure, Dollar Exposure etc.

Or do the current constraints stand as they are?

We do not discourage the use of the optimizer, it's the best tool to convert your factor into a portfolio with the right properties. What we do say is to use it with care. Specifically, do not use MaximizeAlpha with very tight risk limits and low maximum position concentration as it will probably just destroy any alpha (or lead to overfitting) and put you in an equal-weight portfolio.

Rather, start by using TargetWeights without risk constraints (it's fine to use dollar-neutrality and a maximum position size that allows for variability in weights), look at your exposures, and try to understand them. Then, if one is dominating and contributing a lot of common returns, try to place a risk limit on it to rein it in some. Even better is to start by just analyzing your raw factor using the alpha NB and iterating on that. Then, when you're happy with it, try to form a portfolio out of it that is an accurate representation of that underlying factor.
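The difference between a factor-proportional portfolio and the equal-weight portfolio that a tightly constrained MaximizeAlpha often degenerates into can be seen in a small sketch (the scores are hypothetical, and this only mimics the weighting behavior, not the actual optimizer):

```python
import numpy as np
import pandas as pd

# Hypothetical factor scores for six stocks.
scores = pd.Series({"A": 3.0, "B": 1.0, "C": 0.5, "D": -0.5, "E": -1.0, "F": -3.0})

# TargetWeights-style: weights proportional to the demeaned score, with
# unit gross exposure -- the factor's confidence ordering is preserved.
tw = scores - scores.mean()
tw /= tw.abs().sum()

# What a tightly constrained MaximizeAlpha often collapses into:
# equal weight long the positive names, short the negative ones.
ew = np.sign(scores) / len(scores)
```

In `tw`, the high-conviction stock A gets three times the weight of B; in `ew` that information is gone, which is exactly the "lost sensitivity" the post warns about.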

@Thomas -- I know you're guiding more towards trading as large a universe as possible, but let's say the reality is that we find an alpha signal that's only predictive for 50 stocks or so and/or only predictive on either the long or short side. Everything else is volatile noise. Does alpha like that become at all valuable to Quantopian under the new paradigm?

@Viridian: Yes, it's still valuable, although 50 is definitely on the low side, and the threshold for it to be included would be higher as well (fewer stocks make for a noisier signal, which is easier to overfit). We have seen a pattern where quants start with a large universe and then narrow it further and further until it starts to look good, which is very susceptible to overfitting.

On a somewhat tangential topic.

It seems that Quantopian would benefit from transparency around the fund performance. Currently the fund is a black box even for signed authors, let alone the rest of the community. Is there a particular reason why Q would not be transparent about its performance and challenges?

Which leads me to the next question. Given the proposed changes, one can only deduce that the fund is likely not performing as expected by Q and/or investors. Since Quantopian's future is tied to the performance of its fund (this is what presumably pays the salaries), how can we be assured that Quantopian is actually here to stay for many years to come and that it is worth investing our time into Quantopian's model of doing things?


I just tested two recommendations from your post:
Set your trades to execute 1-2 hours before the close: since we evaluate your algorithm purely based on its EOD holding,
Code-wise, you should not use MaximizeAlpha and instead TargetWeights.
The first notebook uses the original MaximizeAlpha and rebalances at market_open(minutes=30).


The second notebook uses the same alpha factor but, instead of MaximizeAlpha, uses TargetWeights(orth_alpha_norm); the constraints are the same and it rebalances at market_close(minutes=65).

Can you explain in detail why I should use TargetWeights and end-of-day rebalancing if I am seeing that MaximizeAlpha does 5x better?


It seems by design that the new approach cannot take advantage of strategies that rely on the particulars of timing entry into and exit from positions. This seems like quite a broad range of strategies to rule out. Can you elaborate on why Quantopian is not interested in capturing short-term alpha? Unless I'm missing something, it effectively implies you have more faith in the reliability of predicting market direction than market behavior, which in turn means you reject the efficient market hypothesis and most theories of market invariants I've come across. This seems fairly well stacked against the prevailing wisdom from academic studies (at least those I seem to come across).

Is this about right or have I completely misunderstood something?

@Vladimir, I don't think he was suggesting that TargetWeights is a drop-in replacement for MaximizeAlpha, but rather that TargetWeights will allow you to create a portfolio that is weighted according to the strength of your alpha as opposed to a more equal-weighted portfolio generated by MaximizeAlpha. The larger point is that an alpha-weighted portfolio of 500 stocks is more valuable to Quantopian than an equal-weighted portfolio of 100 stocks due to how they will use your signal to combine with others. Your example would indicate that your alpha signal is not as good an indicator of future returns as equal-weighting is. So while your screen seems effective, you could probably find a better weighting scheme. Or perhaps you could look into winsorizing?

@Tim, Prices only become efficient on factors that are known and heavily traded off of. It would appear that (in addition to momentum, value, growth, size) there are other undiscovered/undisclosed market inefficiencies. At least that is the premise here. If there were no market inefficiencies, how could hedge funds justify their existence?

As for short-term alpha -- the capacity wouldn't be reliably large enough to be able to efficiently enter and exit trades for the amount of capital Quantopian is managing. You can imagine the difficulty of moving half a million into a position within a short time span -- sometimes the liquidity simply wouldn't be there at that moment and slippage would shoot through the roof or you'd miss your window of opportunity. Longer-term alpha gives Quantopian the flexibility to move into the position more gradually as needed in order to minimize market impact.

@Vladimir: Because we won't be able to capture that alpha. What is odd in your case, however, is that there seems to be some medium-term alpha in your first version that's not present in the second. Can you use MaximizeAlpha but execute at the close?

@Tim: That's right, it can't take advantage of short-term alpha. Every fund is designed to operate on a certain time-horizon, and that is what we are currently operating under. One day we might run a separate factor-combination approach that is focused on more short-term alpha (or even longer-term), but that's not our current focus.


Can you use MaximizeAlpha but execute at the close?

Here it is:

MaximizeAlpha rebalanced at market_close(minutes=1).

It is only slightly worse than MaximizeAlpha rebalanced at market_open(minutes=65),
but much better than TargetWeights(orth_alpha_norm) rebalanced at market_close(minutes=65).


@Vladimir: That is quite surprising, as the NB just uses EOD holdings anyway, so somehow you must be trading into a different portfolio depending on when you execute your trades. Also, you can see in the lower left corner that MaximizeAlpha just leads to an equal-weighted portfolio. It seems that the actual ordering of your factor is detrimental to the final portfolio. You could try to just binarize your factor and use TargetWeights; that should produce the same outcome.

@Viridian, you've essentially stated that the efficient market hypothesis is false; it is falsified by the existence of identifiable inefficiencies, whether or not they are on the basis of less-traded signals.

Obviously all traders think the market has inefficiencies within a given timeframe or they wouldn't have a strategy. Short-term inefficiencies are easy to square with the efficient market hypothesis by believing it is not instantaneous but takes a certain time to relax to the optimum (by which point the optimal position will have moved again, which is why prices are forever changing in liquid markets). Essentially, the longer-term you believe you can predict, the longer you believe it takes the market to reach its optimum, but that is an expected feature of less liquid markets. So there's a kind of contradiction (or, looked at another way, a happy balance Quantopian is hoping to identify).

This leads me nicely on to my question for @Thomas: can you say any more about the time horizons Quantopian is looking to trade on, and the kind of liquidity Quantopian would be happy with over those horizons? Presumably more long-term alpha is likely to be found in less liquid stocks, which in turn increases the risk of not being able to execute the desired positions.

Knowing more about what you're looking for will help us direct our digging for the signals you're really looking to build your portfolio off.

Hi Thomas,

thank you for the update.
when you say "Whether you trade at the open or at the close, we won't see a difference, even though it will look different to you in the backtester.", what do you mean?
The effects of trading at the open should be "visible" at the EOD of the same day, and those of trading at the close should be visible at the close of the following day, so in either case they would be noticed by you. Besides this, I think that if you are looking for slowly decaying alphas it should actually be better to trade later rather than earlier.

As for the preference for medium-term alphas, how can you select algorithms according to this criterion if the contest rules do not enforce it?
Besides this, the alpha decay can be heavily contaminated by the optimization, as you mention, so only alphalens can give this kind of information (as you also mentioned), but you only have access to the backtest results of the strategies, not to the alphalens results (which may not even exist if the alpha factor was never tested in research).
I think the new model is similar to that of other online platforms that also combine different alphas, but if this is the new direction, it seems like the contest scoring algorithm should rely on alphalens more than on the backtesting environment.
From the point of view of your new business model that would be more efficient, but from a pedagogical point of view contest participants would learn less, since the whole process of portfolio optimization would no longer be required.
This is a bit like applying Taylorism to quantitative finance, which is how other funds work, I think: making each quant just a small gear in a big machine they do not fully understand (to better protect IP). But I guess the artisanal age of quantitative finance, in which a single quant takes care of everything, has been gone for a while already.

Regarding data sets, are you planning to add options data, or at least implied volatility?



when you say "Whether you trade at the open or at the close, we won't see a difference, even though it will look different to you in the backtester." what do you mean?
The effects of trading at the open should be "visible" at the EOD of the same day, and those of trading at the close should be visible at the close of the following day, so in either case they would be noticed by you. Besides this, I think that if you are looking for slowly decaying alphas it should actually be better to trade later rather than earlier.

They could be noticed by us, but we don't look at that. We don't use your strategy backtest returns in any way (in this new framework). We recompute them using your EOD holdings. So how you got into those EOD holdings won't show in our analysis. You can check the alpha notebook which uses price data to recompute factor returns.

Hi Thomas,
to minimize trading costs, turnover, and overlap between different strategies, why don't you run a PCA analysis of all the alphas you like in order to construct a few super-alphas?

Hi Johnny,

My understanding of PCA is that it maximizes the orthogonality of components and doesn't pay any attention to the sign of the alpha (i.e. pos/long or neg/short). The only way I see "super-alphas" being useful is if you couple them with some form of supervised machine learning that can generate weights that are ultimately attuned to the alpha sign.

Please correct me if I misinterpreted how you suggested to use PCA. I'd love to be proven wrong here and uncover a new alpha combination tool!

Since every alpha should be market-neutral by construction, any linear combination obtained from PCA will also be market-neutral.
PCA would basically avoid overlap between different strategies, and consequently minimize turnover and trading costs.
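A minimal sketch of this PCA suggestion on simulated, partially overlapping alphas. Note the sign-ambiguity issue raised above: a principal component can come out flipped, so in practice it would need to be sign-aligned with the input alphas before use. All data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks = 100

# Four hypothetical alphas that partially overlap: each is a shared base
# signal plus independent noise.
base = rng.standard_normal(n_stocks)
alphas = np.column_stack(
    [base + 0.5 * rng.standard_normal(n_stocks) for _ in range(4)]
)

# PCA via SVD of the demeaned alpha matrix: the first principal component
# captures the signal the four alphas have in common.
centered = alphas - alphas.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
super_alpha = centered @ vt[0]

# Share of total variance explained by the first component: high when the
# alphas overlap heavily, which is the redundancy PCA would remove.
evr = s[0] ** 2 / (s**2).sum()
```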

Worry less about transaction costs

Does it mean it's okay to set commission to 0 in backtests for the contest?

set_commission(commission.PerShare(cost=0, min_trade_cost=0))  

And what about slippage? Can we set it to 0, as suggested in this post?

@Costantino, I believe the contest uses default slippage+commissions on your entry no matter what you manually set it to.

That's true: according to the official contest rules:

Other Rules [...]
2. Algorithms submitted to the Contest will be run with the default commission model that charges $0.001/share, with no minimum trade
3. Algorithms submitted to the Contest will be run using the default 5 basis point fixed slippage model. The details of the fixed basis points slippage model are provided in our help documentation

I'm quite confused... what should we use for our contest submission... are the old guidelines for slippage and commissions still valid or not?

Read the original post again. It states:

We are not immediately making changes to the contest rules or guidelines

The no slippage/commissions guidance pertains to allocations.

Okay, now I understand.
Anyway - as pointed out by Bernard's post above - it's not yet clear to me how you can select algorithms according to the new criteria if the contest rules do not enforce them.

@Costantino: You can submit algorithms that meet the criteria outlined above to the contest (although they will be run with slippage there) and increase your chances of getting an allocation. We are also not forced to only license algorithms from the contest.

@Thomas, as I understand from the discussions above, the Style and Sector constraints will no longer be applicable for the author at an individual algorithm level, but would be taken care of at the fund level itself. Is this understanding correct?

Will the minimum leverage constraint also no longer be applicable? This would mean that algorithms might hold nothing at all during unfavorable times.

@Rainbow Parrot: That is correct, you should not do any risk management on your factor. The only thing where the risk model is helpful is in seeing how original your idea is.

Leverage should still be steady at 1 all the time, unless you have a confidence signal included which controls leverage in a clever way.

The advice contained here is still intact for 2020, with the addition that we are most closely monitoring the challenges we started posting in the forums.

In addition, we are starting to look only at the last backtest of your algorithm. So if you want to increase your chances of getting allocation, make sure that the last backtest of your algorithm is the latest and greatest version.

@Thomas, thanks. Any consideration as to the number of backtests done? For example, a strategy with 500 backtests might have a higher Sharpe ratio than a strategy with only 5 backtests, but it may not be as robust, and may not work as well on future data, as the one with only 5 backtests. In other words, do you look at 'deflated SR/IR' at all?

@Joakim: Not really, I did look for these patterns in the DB but there is no strong effect.
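The selection-bias concern behind the question can still be illustrated on pure noise: the best of many random backtests looks good by chance alone. The returns below are entirely simulated, with zero true alpha:

```python
import numpy as np

rng = np.random.default_rng(42)

# 500 "strategies" of pure noise: 252 days of zero-mean daily returns each.
daily = rng.standard_normal((500, 252)) * 0.01

# Annualized Sharpe ratio of each noise strategy.
sharpe = daily.mean(axis=1) / daily.std(axis=1) * np.sqrt(252)

# The more variants you try, the better the best one looks -- despite
# every strategy having no skill whatsoever.
best_of_5 = sharpe[:5].max()
best_of_500 = sharpe.max()
```

This is the intuition behind deflated Sharpe ratios: the bar a backtest must clear should rise with the number of trials behind it.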

make sure that the last backtest of your algorithm is the latest and greatest version.

I sometimes re-run a backtest to check on the OOS performance. Presumably Q is looking for strategies that have a good OOS track record. But if you only look at the most recent backtest, won't you miss that previous versions have accumulated much valuable OOS history, whereas the latest backtest looks like it was written yesterday?

@Viridian: Thanks for letting me know, I hadn't thought about that. So yes, we would just think that your algo has few OOS days which is not ideal. I suppose we could add a flag whether the code changed or not between backtests.

@thomas Is there a plan to remove the style and sector constraints from the contest?


Thomas, can the backtest UI be extended to support a "preferred" checkmark, or maybe some Quantopian-supplied standard keyword could be added by the user in the user-controlled backtest description field to indicate the preferred version (when it is not the latest)? I have mostly submitted the preferred versions to the contest, but sometimes I have tried a few more ideas and not deleted those later backtest IDs.

Maybe in addition to pulling the Backtest ID number, Q can pull in something like the Backtest Name too?

Thomas -- It seems like the emphasis on low turnover is encouraging people to smooth their algorithm's turnover. Wouldn't it be wise, much as you now encourage algorithm authors not to squash their risk exposure, to likewise not squash their turnover spikes? Instead, the tearsheet could be programmed to simulate the portfolio smoothing for assessment purposes.

I'm thinking that if it takes Quantopian 5 days to leg into a position, but that algorithm also has a ~5-day SMA already applied to it (solely for the sake of having better turnover stats), it's cumulative, so you're legging into positions where the signal is now 10 days stale. Likewise, with signal combination, if the signals are pre-smoothed they're going to sometimes interact in less than optimal ways, foregoing some of the benefits of the signal combination approach. It can create situations where you start legging into a position that is already stale, only to have to start legging out of it once the data catches up.

It seems that having the strongest, most pristine signals to feed into the signal combination stage, and then smoothing the turnover on the combined signal would be preferable.

Perhaps ideally all your alpha would have little to no decay, in which case none of this is an issue. But then you're needlessly leaving a ton of alpha on the table.
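The compounding-lag effect described in this comment can be demonstrated with a toy step signal: one 5-day average on top of another (simulating author-side smoothing plus a hypothetical 5-day fund-side legging-in) doubles the delay before the portfolio reflects a move:

```python
import numpy as np
import pandas as pd

# A synthetic signal that switches from 0 to 1 on day 20.
signal = pd.Series(np.where(np.arange(60) >= 20, 1.0, 0.0))

def days_to_half(s):
    # Days after the switch until the series first reaches 0.5.
    return int((s >= 0.5).idxmax()) - 20

once = signal.rolling(5).mean()   # author-side 5-day smoothing
twice = once.rolling(5).mean()    # plus a simulated 5-day fund-side legging-in

lag_once = days_to_half(once)     # lag from a single smoothing pass
lag_twice = days_to_half(twice)   # the two passes compound
```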

@Viridian: You raise good points, everything the user does takes optionality away from us and smoothing signals ourselves is definitely something we've done. I won't change our messaging based on that yet but it's something I'll incorporate into my thinking. Thanks!

@Thomas Wiecki

Are allocations still a thing as of this year?

@Thomas Wiecki all this is quite exciting, thanks for the great work! I agree that it makes more sense structurally if people submit alpha factors and Q does the optimisation centrally, which is a lot more similar to a modern quant manager. Ultimately I guess it depends on where you think everyone's relative edge is. I'd be more inclined to say that the "crowd" in aggregate has an edge in constructing uncorrelated alpha factors, but perhaps less so when it comes to alpha combination and optimisation (which is much more mechanical and technical).

In any case, I think it'd be incredibly beneficial for you guys to make an example notebook/algorithm demonstrating exactly what kind of setup you want. I am currently in a position where I have a number of pipeline signals which look very promising in alphalens, and I'd love to be able to drag-and-drop them into an algorithm that meets your criteria. Personally, I find the part of the workflow involving the development of factors that capture an economic hypothesis to be much more fun than the subsequent part of figuring out rebalancing logic.