Hello folks,

TL;DR: Post your tearsheets here and we'll review them.

We recently released our new backtest analysis screen to provide better feedback on your strategies. We also recently released a new feature which allows you to upload your custom signals to the platform and evaluate them in alphalens. We still get many questions about how to interpret the metrics we provide, and what you should do to improve your strategy. Whether or not you're looking to receive an allocation, we'd like to try to provide more ways to get feedback on your strategy.

Please leave your tearsheets here so that Jess, our head of portfolio management and research, can review them via a recorded webinar. She can provide feedback based on what we're looking for, and also in general what quants tend to aim for in strategies. We'll run webinars whenever we have enough tearsheets to review, up to a reasonable total number of tearsheets. We're not sure of the response we'll get here so please work with us as we adapt to the level of interest. If you can't make it to the webinar we'll put links to recordings in this thread so you can catch what you missed.

In order to submit, reply to this post with your tearsheet notebook attached. Please also include a brief description of why you expect the strategy to make money (what is your model's rationale).

For instructions on how to create a tearsheet see this tutorial lesson.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

I think the "Analyze Backtest" button described in that tutorial lesson is from the old backtest page. In the new backtest it's called "Notebook." Might be confusing to people who are following the instructions.

Hey, great catch. We're fixing and will deploy ASAP.

Here's an example tearsheet so others can see what it should look like. This is just the default algorithm from the Getting Started Tutorial, so while it satisfies the risk constraints, the performance isn't amazing.

Economic Rationale: The hypothesis is that stocks which have particularly positive sentiment will outperform in the near future. This strategy attempts to buy stocks which have more positive than negative sentiment and short the opposite.

Hi Delaney -

How do we find the in-sample/out-of-sample periods for a given algo that has been captured in your system (by running a full backtest)? I'll post some tear sheets, but I'd like to include the in-sample/out-of-sample dates and the cone thingy.

Hi Delaney,

I would be interested to receive feedback on the attached tearsheet of my algorithm. To give a little bit of background, the algo combines a number of fundamental and technical factors and rebalances weekly based on these signals. There is currently very limited logic in terms of ordering (I simply use TargetWeights through the Optimizer), and this is an area I would like to improve in the future when I have a little more time (trailing stop loss / stock black-list, for example). As an aside, the algo performs very well from 2004 (earliest date available in the backtester), but I cannot load a full 15-year backtest into a notebook so I am posting a notebook based on a relatively shorter time horizon.

I am not a programmer by trade so my coding skills are somewhat rudimentary compared to others on Quantopian, but I work in Private Equity so my skills in terms of fundamental company analysis are somewhat better (relatively!).

Will

@Grant, for simplicity's sake, I would set the live date as either the date you entered the algo in the contest or the date on which the backtest was run.

Thanks
Josh

Here's one of my contest entries.

Please also include a brief description of why you expect the strategy to make money (what is your model's rationale).

I'll just say that there are multiple pipeline factors, combined simply (no fancy ML). I have it as a task to dig into the individual factors using Alphalens to get a better feel for what might be going on.

@Grant, bravo, those are great numbers. What is your handle so I can follow on the scoreboard? With such numbers you should be near the top of the list if not the top.

Happy Fourth of July, Quantopians!

Here's my non-contest tearsheet. A bit high on volatility for Q's liking, but consistent with the basic investment principle of "high risk, high return". If there's one factor that will bring high returns, it is volatility, as proven by "God mode", trading with perfect hindsight.

Economic rationale: statistical arbitrage using a mix of fundamentals and technicals.

@Grant, impressive! One thing I noticed is that your algorithm seems to consistently push your tech exposure towards -18% as much as it can, which ends up bringing your total returns down. Is this because your algorithm is applying the same factors to all sectors and they happen to erroneously disfavor tech, or is it because the optimizer is doing something funky in order to get something like beta under control?

@James Villa, why isn't this in the contest?

This is a brand new one I'm working on -- it's still pretty early. It attempts to sort stocks into two groups depending on whether they're a disruptor or a disruptee. Different combinations of fundamental and technical signals are applied to each group.

The first issue is that the categorization of the stocks is vague, subjective, and introduces look-ahead bias. The second obvious problem is that macro-trends like these get overheated and eventually revert, at least somewhat, but hopefully the other signals I'm using overcome that issue. The other obvious problem I have yet to tackle is that it currently biases heavily towards long tech.

@Viridian Hawk,

While the algo meets all contest requirements, based on my reading of what Q is looking for in an allocation, it falls short because its annual volatility is 10.5%. Unless Dr. Jess Stauth says otherwise, I think they are looking for volatility in the range of 2-3%, which in my estimation translates to around 4-6% annual returns, a notch above risk-free rates. Normally, the industry utilizes a market neutral long/short equity strategy as a hedge against market risk and as a component of the overall hedge fund. But I've also read somewhere in the forum that Q's intention is to leverage this strategy 5-6 times. If this is true, then my algo's 10.5% volatility could also scale to 50-60%.

Right now, I'll just wait for the feedback and then decide whether to enter it in the contest.

@Q folks,

Should we run them with round_trips=True and any other options (a start date for OOS data?), or does it not matter?

I’m in the middle of moving interstate so haven’t had much time to spend here lately, but I’d like to post a tear sheet here too in the next few days.

Impressive strategies posted so far!

@James,

The overall volatility will be minimal in a portfolio of strategies with uncorrelated return streams, so I don't think individual high-volatility strategies are necessarily bad (or not preferred), as long as the returns are worth the high volatility (essentially, a consistently high Sharpe ratio).

The beauty with a portfolio of strategies with uncorrelated return streams is that you get all (100%) of the returns from all strategies, with reduced overall volatility in the portfolio.
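Joakim's point can be illustrated with a quick simulation (illustrative parameters only, not Q's actual weighting scheme): combining k equally weighted, uncorrelated return streams of similar volatility cuts portfolio volatility by roughly the square root of k.

```python
import numpy as np

rng = np.random.default_rng(42)

n_days, n_strategies = 252, 9
daily_vol = 0.10 / np.sqrt(252)  # each stream: ~10% annualized volatility

# Nine independent (uncorrelated) daily return streams, equal capital weights.
returns = rng.normal(0.0, daily_vol, size=(n_days, n_strategies))
portfolio = returns.mean(axis=1)

ann_vol_single = returns[:, 0].std() * np.sqrt(252)
ann_vol_port = portfolio.std() * np.sqrt(252)

# Theory: volatility falls by ~sqrt(9) = 3 for nine uncorrelated streams.
print(f"single strategy vol:  {ann_vol_single:.1%}")
print(f"9-strategy portfolio: {ann_vol_port:.1%}")
```

Levering the combined portfolio back up, as a fund manager would, then recovers the return at a fraction of the original volatility.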

@Joakim, the answer is yes. One should definitely use the round_trips=True option.

It is the only option the tearsheet has which will answer the following portfolio payoff matrix equation:

F(t) = F(0) + Σ(H∙ΔP) = F(0) + n∙x_bar = F(0) + (n - λ)∙AW + λ∙AL

where AW is the average net profit per winning trade, AL the average (negative) net loss per losing trade, λ the number of losing trades, and n the total number of trades. Not having these numbers is like operating blind. In the end, it is the account's balance that will matter, and the above equation says how you got there.
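In code, the identity is only a couple of lines. The trade statistics below are made up for illustration; AL is entered as a negative number.

```python
def terminal_value(f0, n, lam, avg_win, avg_loss):
    """F(t) = F(0) + (n - lam)*AW + lam*AL, with avg_loss negative."""
    return f0 + (n - lam) * avg_win + lam * avg_loss

def x_bar(n, lam, avg_win, avg_loss):
    """Average net profit per trade: n * x_bar equals the total trade PnL."""
    return ((n - lam) * avg_win + lam * avg_loss) / n

# Hypothetical stats: 10,000 trades, 4,800 losers, $500 avg win, -$450 avg loss.
f0, n, lam, aw, al = 1_000_000, 10_000, 4_800, 500.0, -450.0

# Both forms of the identity agree.
assert terminal_value(f0, n, lam, aw, al) == f0 + n * x_bar(n, lam, aw, al)
print(terminal_value(f0, n, lam, aw, al))  # prints 1440000.0
```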

@Joakim,

I agree with your assessment of the relationship between uncorrelated return streams and volatility in a portfolio of strategies. What I'm trying to deduce from the design of the contest, as it relates to the prospect of an allocation, is the intended usage of Q's version of a market neutral long/short equity strategy. From observation of past contest winners, it seems to me that the metrics used, and their weighting in the ranking score, are skewed towards favoring low-volatility and low-beta outcomes. These then translate to return streams of approximately 4-6%. So essentially we are looking at 4-6% returns on 2-3% volatility, which is consistent with the performance of typical market neutral long/short strategies whose intended usage is as a buffer against market risk in a portfolio of other strategies, mainly as a risk-mitigating component.

Point72, the investor behind the $250M allocable fund, is the investment vehicle of Steven Cohen, the controversial hedge fund manager who employs high-leverage strategies. cohen-point72-s-reveals-high-leverage-as-firm-recruits-new-money. This is what is throwing me off: is this the intended usage of Q's market neutral strategy? If so, then why not design the contest to reflect that (x times leverage)?

I already posted a tearsheet, but I figured I would post this one here as well just for the sake of contrast. (For some reason the forum will only let me post simple tear sheets or else I get an error when trying to attach the notebook.) This algorithm is based on the premise that much of the market moves together, but some stocks are more inclined to march to their own drummer. For its longs it looks for stocks that are typically resilient on down days while participating in rallies, and for its shorts it looks for stocks that typically don't participate in rallies but nonetheless fall on down days. It doesn't do anything fancy -- just a 1-yr look-back window and a score. Obviously predicting future returns shouldn't be that easy, so it's no surprise that it backtests at a measly 0.5 Sharpe. I wouldn't expect to get an allocation for this. However, the reason I wanted to post it here is that this algorithm appears to be favored by Quantopian's grading system -- it's currently in 5th place in the contest, and a couple of weeks ago its score was 1.041. Maybe that gives people some context.

@Joakim, regarding individual volatility being a non-issue in a fund of non-correlated algorithms, I don't believe that Quantopian holds the same viewpoint. As I understand it, the Q Fund is weighted such that the algorithms with the lowest volatility receive the largest allocations. I suspect it's because they want to be able to shut down algos that are no longer operating within expected parameters without locking in significant losses. With a high-volatility algorithm you might have -15% losses by the time you realize its alpha signal has failed. Ouch. That obviously won't work.

It's great to see so many tearsheets being shared on this thread! I'm looking forward to going through them at an upcoming webinar!

@James, you asked a question about why we want to evaluate strategies at unit (1.0) leverage. Specifically: "This is what is throwing me off, is this the intended usage of Q's market neutral strategy? If so, then why not design the contest to reflect that (x times leverage)?"

The answer is that it makes our task of evaluating strategies at scale that much simpler if we can assume a fairly consistent leverage profile across all candidate strategies. In our investment process we apply leverage at the portfolio level, and we assign a weight in the portfolio to each individual algorithm. So we think about weights and leverage separately in our process. While it's certainly true that we could try to back out leverage applied at different levels by different users, that can get complicated if people use widely varying leverage over time in their strategies. There's nothing wrong with that approach in principle, but it not only makes our evaluation problem harder, it makes combining such a strategy into a portfolio of strategies more challenging as well. Under the current contest design, the way I think about it is that we're creating a level playing field of max leverage = 1 and allowing people to compete to achieve the best results possible given that (and several other) constraint(s). Hopefully that's helpful context, and please feel free to clarify if it doesn't answer your question fully.

Best regards,
Jess

Hi Jessica,

Thanks for your prompt response. I perfectly understand your thought process as you explained it. At the contest level, you are trying to level the playing field by setting max leverage at 1. At the portfolio level, you have a separate weighting mechanism, perhaps proprietary and reflective of the real intended leverage. I also totally get the complexity of analyzing strategies with different levels of leverage.

However, I'd just like to point out a salient perspective with regard to optimizing with leverage and other constraints in financial modelling. When optimizing variables, it is often best practice, in my opinion, to define the objective function and corresponding constraints to be as close to the intended execution of the strategy as possible, because leverage can easily change the dynamics of the system. In your case, you are applying a two-tier approach: the different market neutral strategies are evaluated at one unit of leverage, then combined, weighted, and executed at x times leverage. So I'm assuming that the process of weighting the different strategies drawn from the contest is highly dependent on the contest's metrics and/or its scoring mechanism.

The algorithm attempts to develop a winning trading strategy for those QTradableStocksUS which it has determined to have a repeating "rhythm" of price movement over a designated time frame. Based on those results, a Signal Strength is determined for each tested stock (awarding higher values to continued good performance). Finally, stocks within each market sector are sorted by Signal Strength, and those with sufficiently high Signal Strengths are traded. If a stock's profit performance continues on track, its Signal Strength will remain high, and it will remain in the list of tradable stocks. Should a stock's performance begin to deteriorate, however, its Signal Strength will quickly decline, and the stock will be demoted into a non-tradable category. The underlying concept: trade only the stocks with the highest Signal Strengths -- they are the ones performing the best right now, and they have the highest probability of near-term continued good performance. The success of this method of trading is predicated on the algorithm's ability to generate profitable trading signals that adapt quickly to changing market conditions. This approach to short-term trading, in addition to a custom long/short balancing strategy, produces low-beta returns with minimal drawdown across multiple years of varying market action. Markets change, and this algorithm adapts quickly to those changes.

Folks,

Here is one that keeps bouncing out of the contest for various small reasons, yet looks good to me!

Basic Economic Thesis: Select stocks by sector, from covariance clustering, then score clusters using a Gaussian copula instead of a Gaussian normal. (For more info, see: https://www.quantopian.com/posts/my-last-shared-algorithm-good-luck-all)

alan

@Bryan Richardson, I'm loving the trading logic of the algo and its results. I wrote an algo on another platform a few years ago with similar logic that focuses on adaptability to changing market conditions using Evolutionary Algorithms (Particle Swarm Optimization). If you're interested in reading about it, here's the link: Price Evolution and Trade Adaptation Theory

@Guy Flury - I'm "Off-White Seal" in the contest. I've made about $1k so far; lately, my algos are struggling to get back into the top 10.

@Viridian Hawk -

One thing I noticed is your algorithm seems to consistently push your tech exposure towards -18% as much as it can, which ends up bringing your total returns down. Is this because your algorithm is applying the same factors to all sectors and they happen to erroneously disfavor tech, or is it because the optimizer is doing something funky in order to get something like beta under control?

Not sure what's going on. I am running all factors over a subset of the QTradableStocksUS which perhaps is the explanation. I am not doing anything to explicitly disfavor tech.

I accidentally posted for help in understanding the full tear sheet results before I saw this thread; perhaps someone here could take a look and help out. :)

A bit late to the party (pun intended), but here's my slightly improved, or possibly more overfit, PARTY Algo :). I wanted to use a live_start_date of 06-18-2018 but got lots of errors when I tried it, so I left it out (I've had it work before on shorter backtests).

We've been moving from Sydney up to Brisbane so haven't had much time to spend on Q since late June, but plan to get back in to it again soon.

The profitability rate on the short side (about 40%) is still too low for my liking, which is the main problem with the algo as I see it.

Economic Rationale: Risk-neutral, relative-value based strategy with momentum, trend, volume, volatility, and sentiment components, buying/selling under/over-valued stocks once they are on the move in the desired direction. The hypothesis is that the market tends to overreact in the short term but tends to correct itself in the longer term. 'Short-term' can vary quite a bit though (and be a loooong time for some stocks), so the other components are also used to determine appropriate entry/exit points. In summary, it's a longer-term 'reversal/contrarian/correction' value-based strategy, with shorter-term 'momentum' type components.

EDIT: Updated and expanded on Economic Rationale to more accurately reflect the hypothesis behind the strategy.

@Joakim, I like the relatively low-volatility, low-beta equity curve of your tearsheet.

However, if I may, I would make the following observations. They are not a critique of your program, just some observations.

First, putting the numbers from round_trips=True into the portfolio equation:

F(t) = F(0) + Σ(H∙ΔP) = F(0) + n∙x_bar = F(0) + (n - λ)∙AW + λ∙AL

F(T) = 10M + (182333 – 89428)∙561.30 + 89428∙(-524.06) = 15,281,939

which corroborates the tearsheet. The number of trades is sufficiently large to make the averages representative of the whole. The equation is sufficiently accurate (within rounding errors) to validate itself.
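For anyone who wants to reproduce the check, here is the same computation as a quick script, using the round_trips statistics quoted above:

```python
# Round-trip stats from the tearsheet under discussion.
f0 = 10_000_000            # starting capital
n, lam = 182_333, 89_428   # total trades, losing trades
aw, al = 561.30, -524.06   # average win, average (negative) loss per trade

# Guy's payoff identity: F(T) = F(0) + (n - lam)*AW + lam*AL
ft = f0 + (n - lam) * aw + lam * al
print(f"F(T) = {ft:,.0f}")  # F(T) = 15,281,939
```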

The percent profitable at 0.51 indicates that, overall, the strategy seems to be playing on market noise. The market itself offered a hit rate of about 0.52 to 0.54 just by flipping a coin. As such, I cannot reject the null hypothesis that the 0.51 might be mostly random-like.

Your average net profit, based on your average trading unit, is about 0.116%. Not much, but a profit is still a profit, especially with an almost flat equity line. Note that your average win and your average loss are pretty close, making it almost a binary thing (+1, -1). Consider it as if it were a fixed-profit-target, fixed-stop-loss strategy, since yours will behave about the same. A 0.116% return can easily be wiped out if frictional costs were not included. I do hope they were; otherwise it might impact the strategy negatively, and even destroy it.

Nonetheless, the handling of shorts is dismal. It is a total drain on your trading strategy. Sure, you need to hedge your portfolio if you want those low-volatility, low-beta outcomes, but there is surely something you could do to alleviate this. Reduce the number of short trades, even if it forces you to take a low net-positive market exposure to do so.

The portfolio equation says where the efforts should be put in order to improve the total payout. One: increase n, the number of trades. All things being equal, you would make more money since the added trades would behave like your averages. It would also help to compensate for the deteriorating x_bar as seen in the charts. Two: reduce the number of losing trades λ. Without seeing your program, it is hard to say where and how it should be done. Nonetheless, you should concentrate your efforts there. Three: increase your average net profit per trade. This should be easy, increase the average holding time for longs and reduce it for the shorts. Four: reduce the average net loss per trade. Already, if you can reduce the number of such trades λ, you would be on the right track. You can also reduce the impact by reducing the time they are held in the portfolio which will tend to reduce the average loss.
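A quick sensitivity sketch makes the levers concrete. The +10% and 1% adjustments below are illustrative only, and assume the new or flipped trades behave like the existing averages:

```python
def payoff(f0, n, lam, aw, al):
    # Guy's identity: F(t) = F(0) + (n - lam)*AW + lam*AL  (AL negative)
    return f0 + (n - lam) * aw + lam * al

base = dict(f0=10_000_000, n=182_333, lam=89_428, aw=561.30, al=-524.06)
print("base:            %14.0f" % payoff(**base))

# Lever 1: 10% more trades at the same averages (losers scale proportionally).
more = dict(base, n=int(base["n"] * 1.1), lam=int(base["lam"] * 1.1))
print("+10%% trades:     %14.0f" % payoff(**more))

# Lever 2: turn 1% of losing trades into winners.
flipped = int(base["lam"] * 0.01)
fewer = dict(base, lam=base["lam"] - flipped)
print("1%% fewer losses: %14.0f" % payoff(**fewer))
```

Either lever lifts the terminal value; in practice the hard part is moving these numbers without degrading the averages themselves.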

You are dealing with simple math here. A few numbers to take care of. Whatever else you do in your program that does not affect those numbers should be viewed as purely cosmetic code, nothing more, since it will have no impact on the final results.

Thanks for sharing, appreciated, but for me, it is not enough. You can do better. Go for it.

As a side note:

NEU is an outlier in your program. For some reason, it is allocated more weight than the others. You have at least one position that lasted 7.24 years in the portfolio. That is not a bad thing; on the contrary, holding longs in an up market is the thing to do. However, you are also holding a short for 4.77 years, and there, it had better be a downer. Otherwise, you should at least investigate why it lasted that long.

Hi @Guy,

Thank you for your feedback. I appreciate all of it and agree with most of it.

The algo is using the default commission and slippage. I’d be happy to attach a sheet with 0 trading costs, to prove it. Just need to find some time between unpacking of boxes, IKEA trips, and changing of nappies. :)

Question: If an algo indeed had a 51% edge over 'just flipping a coin,' as the house (roughly) does at the blackjack, craps, and baccarat tables in a casino, what would that look like in a Pyfolio tearsheet, in your view?

Dr. Jess Stauth will be reviewing the submitted tearsheets during a live webinar on Thursday, July 26th at 3:00PM EDT. During the webinar, Dr. Stauth will provide feedback based on what we're looking for and give general advice on how to improve your strategies.

Please note that tearsheets sent in for evaluation may be shared on-screen.

@Joakim, the US market by itself is long-term biased to the upside, in the range of 0.52 to 0.54 depending on the academic paper you read. So, by flipping a fair coin (50/50), that is what you should get, something in the range of 0.52 to 0.54. A runs-test would answer that.
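For the curious, a minimal Wald-Wolfowitz runs test on a binary win/loss sequence might look like the sketch below (this is an illustration, not Guy's code; statistics packages ship packaged versions):

```python
import math

def runs_test_z(outcomes):
    """Wald-Wolfowitz runs test z-score for a sequence of 1s (wins) and
    0s (losses). |z| well above ~2 suggests the ordering is non-random."""
    n1 = sum(outcomes)            # wins
    n2 = len(outcomes) - n1       # losses
    runs = 1 + sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1)
    )
    return (runs - mean) / math.sqrt(var)

# A strictly alternating sequence has far too many runs to be random:
print(round(runs_test_z([1, 0] * 50), 2))  # z well above 2
```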

Now, if your trading strategy is predictive, it should do better than 0.52 to 0.54, and thereby show its positive edge which I think your tearsheet failed to show.

So to me, there is no 51% edge over flipping a coin (50%) in that strategy. It is more as if it slightly underperformed what a fair coin would have provided, but not by enough to say it is bad or non-random. Only that it could be indistinguishable from a random-walk scenario. If the trading decision acts like a random walk, or close to it, can we still give it predictive powers, or shouldn't we consider it almost random-like?

You are trying to predict the outcome of the next trade in general. Not only on one trade but on 182,333 trades. It is a large enough sample to be representative. And you get it right 51% of the time. If I had a program doing that, how would you describe it to me in such a way that the word "prediction" would make sense?

No one should come up with a hit rate close to 50% and claim that their program has predictive powers. Anyone could flip a coin or ask their machine to do it. However, if the original series is biased downward (<0.50) and you still get 51%, then the claim becomes more credible the larger the spread.

Note that a sure way of achieving this close-to-50% hit rate is to have the underlying series very close to random-like itself. In such a case, whatever you do, using whatever method, you would get close to a 50/50 proposition, plus whatever intrinsic bias was in the original series, which might be none.

The demonstration has not been made that the underlying trading decision process is predictive. And that is what the tearsheet showed.

Regardless, you could increase the "predictive" powers of that strategy by implementing the suggestions in my last post, since your hit rate, (n – λ)/n, depends on how you trade, and the portfolio's payoff equation tells you what is important. This is like saying: you could play with what impacts those numbers, and your trading strategy could become "predictive" just by increasing n and reducing λ. But then I would not call it predictive; I would call it skill at following the math of the game.

@Guy,

I have neither the time nor the desire to get into a long back and forth on this, so I think we'll just have to agree to disagree.

I do value your feedback and appreciate your work, and I’d be interested to follow your progress in the contest if you’d be willing to share your ‘colored animal’ handle. If you PM it to me directly you have my word that I won’t share it with anyone.

I'm mixed on whether we should move specific conversations like this to a new thread, I'll let others weigh in on that. In the meantime here's my opinion.

TL;DR: If you're risk constrained, make money consistently, make a lot of trades (a high sample size), and have a solid rationale for why your strategy works, you probably shouldn't read into the profit/loss rate too much.

I'm not sure I agree with the conclusion that your strategy should produce profitable trades 52-54% of the time. That assumes that the effect your strategy is trying to take advantage of is the market itself. If your strategy is designed to be completely independent of the market, and is placing lots of shorts and longs, then I don't think you can argue that the market's bias towards upside should be a benchmark for your upside bias.

In fact, you don't even need to be > 50% profitable to be profitable overall. If your average profitable trade makes $1000 and your average losing trade loses $500, then you can have twice as many losing trades as profitable trades and still break even. A lot of analysis assumes symmetric profit/loss, which is not always the case (especially, I've heard, for discretionary trading with stop-losses).
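The break-even hit rate for asymmetric payoffs falls out of a one-line formula; a quick sketch with the numbers from the example above:

```python
def breakeven_win_rate(avg_win, avg_loss):
    """Win rate at which expected PnL per trade is zero.
    Both arguments are entered as positive magnitudes."""
    return avg_loss / (avg_win + avg_loss)

# $1000 average win vs. $500 average loss: one win in three breaks even.
print(breakeven_win_rate(1000, 500))  # 0.333...
```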

The main thing to me is whether:

a. Your model has predictive capacity when viewed as a pure alpha.
b. That predictive capacity survives execution and makes its way to real trades.

This tearsheet is measuring b., with 182,333 trades (although the true number of independent samples is likely much lower than that due to confounding factors). What you really want to look at is the distribution of PnL trade by trade, and test whether it is drawn from a distribution with a mean of zero. That will take into account effects like asymmetric profit/loss and the statistical significance of your profit/loss rate, instead of just viewing 51% out of context. Of course, put a heavy discount on the results, because the number of truly independent samples is likely far less than 182,333.
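As a sketch of that test (the per-trade PnL series below is synthetic, stdlib only; in practice you'd use something like scipy.stats.ttest_1samp on the real series, then discount heavily for non-independent samples):

```python
import math
import random
import statistics

random.seed(7)

# Hypothetical per-trade PnL: a small positive edge buried in a lot of noise.
pnl = [random.gauss(5.0, 520.0) for _ in range(20000)]

# One-sample t-statistic for H0: mean per-trade PnL is zero.
mean = statistics.fmean(pnl)
sd = statistics.stdev(pnl)
t_stat = mean / (sd / math.sqrt(len(pnl)))
print(round(t_stat, 2))  # |t| beyond ~2 rejects a zero mean at roughly the 5% level
```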

I think there is a misunderstanding of the basic math of this game. Whatever you do trading, whatever the methodology used, none of it will escape the portfolio payoff matrix:

F(t) = F(0) + Σ(H∙ΔP) = F(0) + n∙x_bar = F(0) + (n - λ)∙AW + λ∙AL

and even if the payoff matrix: Σ(H∙ΔP) has some 1,827+ rows by some 420+ columns, it all boils down to 2 numbers: the number of trades, and the average net profit per trade. Those two numbers are agnostic and their product does carry a dollar sign. You can break these two numbers down as per the above equation, but it won't make the math disappear.
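That boil-down is easy to check numerically (the counts and averages are the ones quoted later in this discussion):

```python
# Check that n * x_bar and (n - lam) * AW + lam * AL are the same number.
n, lam = 182333, 89428        # total trades, losing trades
AW, AL = 561.30, -524.06      # average win, average loss

total = (n - lam) * AW + lam * AL
x_bar = total / n             # average net profit per trade

assert abs(n * x_bar - total) < 1e-6  # identical by construction
print(round(total))  # -> 5281939
```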

On a day-to-day basis, especially for trading strategies that rebalance every day on the whim of hundreds of weights with 12-decimal precision, you can be sure that there will be a lot of randomness in all those weight variations. Designing a trading strategy that at times can take advantage of that randomness is perfectly fine. But you would have to prove that it is more than pure coincidence and that the future will comply with such a scheme.

In the strategy presented, x_bar is already degrading, which will lead to CAGR degradation going forward.

Now, the thing to do is to prove me wrong. And I do not think that that equation is wrong.

Why are you people so afraid to look at price series for what they are? If there is some randomness in there, and I do think there is a lot of it, why not accept it and deal with it?

When I read a tearsheet and see a hit rate close to 50% and AW close to |AL|, I can recognize the almost gambling nature of the program, as if it were operating on white noise (market noise, that is).

If it wasn't, the strategy would have much better numbers than that to show.

So let's go with the numbers presented:

(n - λ)∙AW + λ∙AL = (182333 – 89428)∙561.30 + 89428∙(-524.06) = 5,281,939

That is a profit, and that is good. Can profits be extracted in an up market? Well, evidently.

However, as @Delaney mentioned, if AW was 1,000 and AL was -500 you would get:

(n - λ)∙AW + λ∙AL = (182333 – 89428)∙1000 + 89428∙(-500) = 48,191,000

But it CERTAINLY is not the case here.

Furthermore, that tearsheet has shown absolutely nothing that would even suggest that it could post such numbers. And if it could have AW = 1,000 and AL = -500, then you can be sure the strategy would not do 182,333 trades. It would be a lot less. The strategy would act as if having set its price target at a 1,000 dollar profit and with a stop-loss of -500. There is only this big IF: will the market comply with the same hit rate? I do not think so.
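For completeness, here are both scenarios side by side, holding the trade counts fixed:

```python
n, lam = 182333, 89428  # total trades, losing trades

def payoff(AW, AL):
    """Total profit for a given average win AW and average loss AL."""
    return (n - lam) * AW + lam * AL

print(round(payoff(561.30, -524.06)))  # 5281939: the tearsheet as presented
print(round(payoff(1000.0, -500.0)))   # 48191000: @Delaney's hypothetical
```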

When @Delaney says:

In fact, you don't even need to be > 50% profitable to be profitable overall. If your average profitable trade makes $1000 and your average losing trade loses $500, then you can have twice as many losing trades as profitable trades and still break even.

I would quantify that: with AW = 1,000 and AL = -500, break-even requires λ = 2∙(n - λ), that is, a hit rate of only 1/3.
Again, all I read in the tearsheet is a hit rate of 0.51, with AW = 561.30 and AL = -524.06. If the author of the program could have generated AW = 1,000 with AL = -500, I think he would already have done so and won more than the contest.

P.S.:

@Delaney, am I to understand that my analysis of people's tearsheet might not be welcomed, or appreciated, or should only be done by Q?

@Joakim, I disagree. We can't agree to disagree on this for a very simple reason. You are presented with an equation, it is not an opinion, a maybe or an if, it is a statement bearing the most unequivocal symbol: an equal sign.

F(t) = F(0) + Σ(H∙ΔP) = F(0) + n∙x_bar = F(0) + (n - λ)∙AW + λ∙AL

The thing to do is prove that the equal sign is not valid in that equation and make it a not-equal sign. And I have not seen anyone come up with the not-equal as yet. For anyone who would like to try, what you need to prove is any one of the three below:

F(t) ≠ F(0) + Σ(H∙ΔP), or
F(t) ≠ F(0) + n∙x_bar, or
F(t) ≠ F(0) + (n - λ)∙AW + λ∙AL

I think anyone would be more than hard-pressed to prove the above inequalities.

The first equation must have been around for centuries! It is nothing new. But it does show the simplicity of the game. You have it all in one equation, all that matters is there, from start to finish. In the end, those numbers will prevail and totally describe the outcome of a trading strategy whatever it is.

Whatever the design of your trading strategy it will end up with the above equality. So, what matters is finding ways to increase n, reduce λ, increase AW and reduce AL. Whatever you do, there is nothing else that matters if it does not have an impact on those numbers. The equation stands, and it will stand for the centuries to come. That is not an opinion. It is an old statement with an equal sign declaring and setting it in stone.

IMHO.

And, I will not “bother” you again.

Good going, Joakim + congrats on top 10 you cream of all animals!

Look forward to Jess' feedback! Yes +1 thanks to this thread!

Thanks @Karl!

“The new phone book is here! The new phone book is here!”

And:

“51% of the time, it works every time!”

Guess the movies. :)

Moving on.. Joakim :o) fig is not my jam.

I echo Leo M's remark: low volatility and low drawdown are awesome! I'm officially jealous of your zero-ish risks and $100MM scale-up! I think for OOS interest it would be good to get the live_start_date thingie sorted out with Q support, as OOS is all that matters. Cheers

Fair enough Karl! :) Thanks for your kind words as always.

@Joakim - I'm curious whether you are using MaximizeAlpha or TargetWeights as an objective? The reason I ask is that you consistently carry 400 stocks in your portfolio, which suggests that you are using TargetWeights. I'd started to use it, and probably bumped into the bug first reported here. If you are using TargetWeights, are there any "tricks" to getting it to work properly?

I deleted my earlier post as I was not sure if users are allowed to comment in this thread. Since I have been referenced in another post (by Karl) I will add it back: "@Joakim Impressive results. I like the consistently low drawdowns below 2%"

Thanks Leo!

Hi @Grant, MaximizeAlpha. I don’t think I’ve ever used TargetWeights, but I’d like to as well, especially for trying to run multiple (equally weighted) weekly rebalancing portfolios, rebalancing on different days of the week. I’d say you are more likely to figure that out before me though.

There's been a lot of good discussion here, and one thing I wanted to make sure of is that we're preserving a good experience for folks using this thread. A question I had is whether folks should comment on this thread, or create a new thread for each discussion that pops up. I'm not sure, so I made a survey for everybody to vote; please let me know and we'll use the results to set the tone of this thread going forward. Thanks. https://goo.gl/forms/Ug5ECtlEhqZiFLrG3 Also, I'm looking to do a few interviews with users to chat about what Quantopian could do better in general.
This thread is attracting some of our most engaged folks, so I'd like to take the opportunity to invite people to email [email protected] and say they'd like to do a call with me. We'll do our best to schedule a time when you can provide some feedback. Past versions of this have been very helpful. Thanks in advance.

Economic rationale: the alpha factor is made up of weekly mean reversion supported by volume changes, Spearman rank correlation, sentiment data, and fundamental indicators created by feature engineering.

• this is my first post on Quantopian so I hope I did it well

@Vedran, Very nice strategy! Miles better than my first one for sure (which was just trying to get it to meet all the contest criteria).

@Leo M., Great strategy as well! I especially like the consistently high rolling Sharpe ratio! Just eye-balling the graph, there doesn't appear to be too much alpha decay going on (is there a better way of measuring alpha decay?). I also like the 'Cumulative returns volatility matched to benchmark' and all the graphs/diagrams showing the consistency of daily, weekly, monthly, and yearly returns (I'm assuming slippage and commission are the default ones?). Well done!

Did you run out of memory with the (round_trips = True) option? Personally I find this option very useful to see the profitability hit rate on the longs and shorts, as well as overall. Perhaps also running it with (hide_positions=True) may reduce memory usage(?), with the added benefit that it makes it a lot harder to reverse engineer the strategy. Personally I don't look much at the positions, only whether there are any 'extremes/outliers'.

@Bryan, Man I'm just jealous! Possibly the best strategy on this thread, in my view anyway. I might be wrong, but it looks like you're exiting/profit-taking (and not just rebalancing) at 2pm every day (from the 9:30 morning position entries), which I find interesting.
If it doesn't intrude too much on your 'secret sauce', I'm wondering if you hold many/any positions overnight? If you don't, do you think it's possible that the Q Risk model doesn't accurately reflect intraday trading risks in your strategy?

During the webinar, Dr. Stauth will provide feedback based on what we're looking for and give general advice on how to improve your strategies.

On the Get Funded page, there are a couple of requirements that are challenging, in my opinion:

• Low Correlation to Peers
• Strategic Intent

As was pointed out here recently, and as I've noted in the past, authors have no way of assessing the correlation-to-peers requirement. Presumably, for the algos that Jess will discuss, part of the commentary could include the results of this assessment. My assumption is that Quantopian has internal analysis tools to compute the pairwise correlation of algos already in the portfolio with ones that are under consideration, and could provide the results as part of the webinar. Personally, I'd like to know if my algo is within the +/- 30% pairwise correlation range, and I suspect others would like to know the same for their algos.

For the strategic intent requirement, I have some sense of what is required, but it would be interesting if Jess would assess whether the economic rationales provided by authors above are sufficient to meet it. This requirement is kind of odd, in that it is not clear how it is to be formulated by the author in such a way that it can be definitively assessed by Quantopian. The other issue is that if the author uncovers a true, scalable arbitrage opportunity and describes it fully to Quantopian, then he might as well share his code. So, for the economic rationales provided above, I recommend that Jess comment on their adequacy, and on how Quantopian would verify that the algo actually works as described, by quantitative measures.
Any feedback is appreciated.

Many thanks to all who submitted their algorithms for review. The recording has been posted to Quantopian's channel on YouTube: https://youtu.be/tGnLqg_5TBk Feedback from attendees was very positive. We think you'll find it very useful!

Thanks Josh. I ended up not making it to tune in to the webinar live, but just listened on YouTube and enjoyed it. Thanks for the feedback.

I was thinking that the code I posted on this other thread might be useful to some of the algos posted here (1, 6, and 7) that suffered from high peak turnover: https://www.quantopian.com/posts/daily-and-weekly-rebalancing-of-separate-alpha-factors-using-optimize-api#5b5b0e6b988bc6689fb1cca5 It's a rough proof of concept and can surely be optimized and improved, but the basic idea is that it takes a weekly rebalanced alpha factor and diversifies it across days of the week on a rolling basis. Perhaps it can be adapted to what you guys are doing to solve the peak turnover issue.

@Joakim Thank you for the kind words. The strategy primarily makes swing trades lasting 3 to 5 days in duration, on average. There aren't any intraday trades, although it might appear that there are. The trade activity you observed on the open and the close is a result of the fact that the system trades some stocks on the open and others on the close. The system decides which stocks should be traded and at what time of day (open/close). Also, a symbol may be traded on the open for some time, then the system will change its mind and switch to trading it on the close. One might not think that this would make a meaningful difference; however, I have found that it does. The reason for the orders beginning to be executed at 2:00pm is to ensure that the positions get filled before the EOD. Even though the stocks all have pretty high minimum volumes, I've noticed that exiting and reestablishing a $120K position on the close, for example, can require a sizable cushion of time.
Now, if I can just get my algos to stop randomly timing out weeks after being entered in the contest I might be in business................ ; )

@Bryan: if you send an email to [email protected] with the name of your entry that timed out, we can look into the issue and potentially re-qualify it.


@Joakim, thanks for encouraging words!

Not sure if there are any tools to measure alpha decay? It seems like something that would depend on the alpha: how easy it is to find, or how well publicized it may be. Perhaps Quantopian long-timers or platform experts might know.

Regarding the positions in the tear sheet, I am not too much worried that the strategy can be reverse engineered from the positions. Impossible to do so in my personal opinion as I do some crazy math.

Out of memory: yes, memory has been a concern in generating 15-year tearsheets with 1k positions. Splitting into two parts has made it possible to cover the full 15-year period. I'm not sure I can get actionable intel from round_trips; it's more a lack of time (vs. things to do) and a lack of documentation on what one can realistically deduce from the round_trips output. Besides, my understanding from Jess's comments in the webinar is that Quantopian is looking for predictable positive alpha (more than any other nuance), so I would not be too concerned with the round_trips output.

@Jess,

Thanks for the webinar and your feedback. It has given me a better understanding of what to focus on, much of which has been articulated by @Leo M above and by @Olive Coyote in his Risk Focused Algo thread as their takeaways. Correct me if I'm wrong, but in short, it is mainly about containing risks, as reflected by the portfolio's volatility, through highly diversified holdings (in the hundreds, if not thousands) with low turnover, after neutralizing beta to market, sector, and style exposures to extract alpha. Guidance on alpha is consistency in a 'Q Sharpe ratio' >= 1.0, which does not account for the risk-free rate. I am assuming that Q does not account for the risk-free rate in the Sharpe ratio calculation because their intended usage of a portfolio of independent, low-risk, market-neutral strategies is to leverage it many times to enhance returns while containing risks at manageable levels.

As to hold-out data and cross-validation techniques: coming from an AI background, I found Walk Forward Analysis to be stringent enough to satisfy validation on OOS data in financial time series, which are nonlinear and non-stationary.
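For anyone unfamiliar with it, walk-forward analysis just rolls a training window and a subsequent test window through time, so each model is only validated on data it has never seen. A minimal sketch (the window lengths below are arbitrary):

```python
def walk_forward_splits(n_periods, train_len, test_len):
    """Yield (train_range, test_range) index pairs that roll forward in time."""
    start = 0
    while start + train_len + test_len <= n_periods:
        train = range(start, start + train_len)
        test = range(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len  # roll the whole window forward by one test block

splits = list(walk_forward_splits(n_periods=10, train_len=4, test_len=2))
for train, test in splits:
    print(list(train), list(test))
```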

Thanks Jess for the webinar I just watched (while speed-walking on my treadmill, so I can't say that I picked up on everything, but subconsciously, perhaps it all sank in).

I'll provide a set of feedback later.

In case folks want to play around with it, here's some robust code to control turnover via the Optimize API (if a try-except is not used, there's a risk the algo will crash for a low max-turnover constraint):

    if context.init:
        # First rebalance: no turnover constraint yet
        # (set context.init = True in initialize(context)).
        order_optimal_portfolio(
            objective=objective,
            constraints=constraints,
        )
        context.init = False
        return
    # Try progressively looser max-turnover caps until the optimizer succeeds.
    turnover = np.linspace(0.15, 0.65, num=100)
    for max_turnover in turnover:
        constraints.append(opt.MaxTurnover(max_turnover))
        try:
            order_optimal_portfolio(
                objective=objective,
                constraints=constraints,
            )
            record(max_turnover=max_turnover)
            return
        except:
            # Infeasible at this cap; drop it and try the next, looser one.
            constraints = constraints[:-1]


A version of the code is here:

https://github.com/quantopian/research_public/blob/master/code_snippets/increasing_max_turnover_snippet

Note that there is an error. It should read:

# set context.init = True in initialize(context)


Q support: the code on github could be updated, if you want. I think the code I posted here is improved (and perhaps others have improvements, as well).

One thought would be to adjust the max turnover constraint dynamically, based on an indicator (e.g. recent market volatility). The idea is to allow lots of turnover, if there could be lots of money to be made, and less turnover, if the indicator predicts that profits will be slim.
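A toy sketch of that idea (the volatility thresholds and turnover bounds below are made up for illustration; in a real algo the returned cap would feed an opt.MaxTurnover constraint):

```python
def max_turnover_for(recent_vol, low=0.10, high=0.65):
    """Map recent market volatility to a max-turnover cap: quiet markets
    trade less, volatile markets are allowed more turnover.
    The mapping and its bounds are illustrative, not calibrated."""
    # Linearly interpolate between `low` and `high` as annualized
    # volatility moves from 10% to 40%, clamping outside that range.
    frac = (recent_vol - 0.10) / (0.40 - 0.10)
    frac = min(max(frac, 0.0), 1.0)
    return low + frac * (high - low)

print(round(max_turnover_for(0.10), 2))  # quiet market    -> 0.10
print(round(max_turnover_for(0.40), 2))  # volatile market -> 0.65
```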

@Bryan,

Very interesting and cool! Proves how linear and boxed my thinking can be sometimes.

The timeout issue used to happen to me all the time as well, and I also thought it was due to inefficient coding on my part. Turns out it was most likely due to this BTS Timeout issue, which many other users had experienced as well. I haven't had this happen since the fix was implemented, and I can also attest to Jamie's comment that if you send Q Support an email with the algo name, they most likely will re-qualify the algo that timed out. @Viridian Hawk also did this recently, which bumped him up to the middle of the top 10 leaderboard for a while (possibly only short-term, and partly thanks to the 0 returns floor for that particular algo I believe, but still...).

Hey @Leo,

Regarding the positions in the tear sheet, I am not too much worried
that the strategy can be reverse engineered from the positions.
Impossible to do so in my personal opinion as I do some crazy math.

Yeah, fair enough. I don't worry much about reverse engineering either. I was mostly thinking that perhaps by hiding the positions the notebook would use less memory, and perhaps enabling you to run it with round_trips = True. Anyway, great algo!

Hi everybody,

Some highlights:
- Around 500 positions always.
- Low volatility (2-2.5).
- Sharpe around 2; it only briefly drops below 0 on a 6-month rolling basis.
- Risk controlled for common risk factors as defined by Quantopian.
- Max drawdown over latest 68 months: 2.1%.

Hope everyone is having an excellent day.


A few take-aways:

1. Recommended to have >= 100 stocks in the portfolio, at any given time. The more the merrier, it would seem (unclear why this makes sense, for a crowd-sourced fund with broad quant participation).
2. Limit daily turnover to 0.1-0.15, and avoid large spikes in turnover.
3. Sharpe ratio of >= 1.0 (this requirement needs clarification, since the Q Sharpe ratio computation does not include the risk-free rate, whereas conventionally, it is subtracted from the returns).
4. Consistent risk factor exposure is frowned upon, even if it is within the contest/fund limits.
5. Each algo is being viewed as more-or-less stand-alone, and needs to be highly diversified and scalable. Not really worried about low correlation to peers requirement (looks like this may have been dropped?).
6. Q working on update to slippage model, to improve accuracy.
7. FactSet data is expected to benefit the Q Community, but some data will not be accessible up to the present, creating a kind of imposed hold-out period (Q would be able to run algos through the present). Unclear how this would work for the contest. Also unclear how Community quants would have a path to access data up to the present without paying (e.g. a perk of getting a fund allocation?).
8. Over-fitting is a big challenge, given that the current Q evaluation process requires a quant to wait 6 months or longer to know if an algo is decent and potentially fundable. Longer backtests?

@Grant, Good point 8. on longer backtests. In his recent book Systematic Trading, author Robert Carver mentions the average number of years of historical data needed over which backtesting has to be done to be reasonably certain that a strategy with a particular Sharpe Ratio will actually be profitable out-of-sample. He determines this using the classic T-test for statistical significance. Check out Table 5 in Chapter 3: Fitting. On an average, Sharpe Ratio 1.0 strategies need to have been backtested over at least 6 years of historical data and Sharpe Ratio 0.7 strategies need to have been backtested over at least 10 years of historical data.
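The t-test logic behind that table can be sketched as t = SR * sqrt(years), solved for the number of years needed to reach a given t. This is a simplified version; Carver's published figures are somewhat higher, as his test is more conservative:

```python
def years_needed(sharpe, t_crit=2.0):
    """Backtest length (years) at which an annualized Sharpe ratio becomes
    statistically distinguishable from zero, using t = SR * sqrt(T)."""
    return (t_crit / sharpe) ** 2

print(round(years_needed(1.0), 1))  # 4.0 years for SR 1.0
print(round(years_needed(0.7), 1))  # 8.2 years for SR 0.7
```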

Grant, regarding point 6 on the accurate slippage model.

A universal estimate like 5 basis points is a risk, in my opinion.

This enhancement will benefit everyone when there is more certainty and confidence in the modeling (measurements).

Hi @Grant,

For 7. my understanding of how it would work is that Community users wouldn't have access to the most recent data (e.g. last 12 months or whatever they agree to with FactSet) in Research and backtest IDE, but that it would be accessible in live paper trading so the datasets can still be used in the contest.

Grant, regarding 3. (Sharpe ratio >= 1.0): does that matter much? I personally don't care if Q ignores the risk-free rate for Sharpe. If they take a range, say [0..4], and recalibrate it to [0.5..4.5], at the end of the day, when comparing users relative to each other, a range start offset does not matter (the risk-free rate is the same for all users). Does Q fixing the Sharpe ratio to match the industry standard affect your algorithm in any way? Make it more profitable, less volatile?

Derived metrics like Sharpe, Calmar, and Sortino will not translate to live performance. The cost model will be more important to get right, because that will affect the viability of a strategy.

The original backtest I ran said this wasn't in the QTU, but I know for a fact that the factors derive from the QTU, and I have added the QTU (to double-check) to all my Pipeline screens and still got that result, so I'm not sure why.

Economic Rationale: The main hypothesis is that stocks which are well "loved", will continue to be loved in the future. This strategy also blends multiple sub-strategies so that volatility from any one strategy or factor is reduced.


@ Leo M. - The Sharpe ratio reference was removed from https://www.quantopian.com/get-funded, so we'll assume it is no longer a firm requirement, in which case the definition doesn't matter.

@ Rainbow Parrot - Yes, I read that same book, and recall the discussion of relatively long backtests required to determine if a SR is real. I also recall reading that a long-term SR of ~ 1.0 is about as good as one might expect, with ~ 0.5 being more typical. I kinda wonder if Q is inflating things a bit...

This is an updated version of my previous post (I couldn't find a way to change the attached object), where I fixed a small part of my code dealing with allocation so that it uses the originally intended allocations.

"The original backtest I run said this wasn't in the QTU, but I know for a fact that the factors derive from the QTU, and have added on the QTU (to double-check) to all my Pipeline screens, and still got that result, so I'm not sure why.

Economic Rationale: The main hypothesis is that stocks which are well "loved", will continue to be loved in the future. This strategy also blends multiple sub-strategies so that volatility from any one strategy or factor is reduced.
"


Update on how to handle discussions. 4 people voted they'd prefer a new post for each discussion, while 3 voted that they wouldn't. Given the divide and low response rate I won't enforce anything in this thread, instead please use your judgement on whether a discussion would be better served as a new post linked in a comment. Thanks.

We have the so-called style risk factors (Momentum, ShortTermReversal,
Size, Value, Volatility ), and the webinar emphasized the importance
of low exposure to them. However, if they are examined with Alphalens,
would one conclude that they are consistently predictive factors at all, or just
noise? And for ones that are just noise, would it make sense to drop
them from the Optimize API constraint (that presumably most folks are
using to keep their algos in-check with the contest/fund rules)?

This tearsheet is based on the first strategy found here: https://www.quantopian.com/posts/fundamentals-up-the-wazoo

I presumed that “up the wazoo” was a derogatory expression for stuff we should not mention. However, the more I study the mechanics and structure of this strategy, the more merit I find in it. Not in the way Quantopian presently wants to look at strategies, but I do see a diamond in the rough, where I have to get rid of the roughness: chip away at its holdbacks, instill some protective measures (it is lacking in that department), emphasize its better procedures, and let it fly. At least, it is of sufficient duration to have seen most types of markets.

The economic rationale of this trading strategy is that it buys stocks on the way up in a rising market. It has found a complicated excuse to do so, but that is fine. Somehow, it does issue buying orders whatever its excuse might be. Note that I have not as yet completely read the code, but I do not need that yet to put pressure on the variables of its payoff matrix equation.

My intervention up to now has been to increase the number of trades and the number of selectable stocks. I have not started changing the code's logic yet. I expect that doing so might slow down the strategy's CAGR, and on the other hand might improve on what is presented.

The game remains a CAGR game. 5% over 30 years gives 4.32 times initial capital, while 29% over 30 years gives 2,078.22 times. So one is comparing games (strategies) where you will have to work for 30 years: on a $1 million stake, one payoff matrix tallies $4.32 million while the other ends with $2,078.22 million. The same amount of work will be done by one's “machine”, but it will still take 30 years to get there.
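The compounding numbers check out (per dollar of initial capital):

```python
# 30-year compound growth at two CAGRs.
slow = 1.05 ** 30  # 5% CAGR
fast = 1.29 ** 30  # 29% CAGR
print(round(slow, 2))  # 4.32
print(round(fast, 2))  # ~2078.2
```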

I think it is worth the effort to push strategies beyond their limits to see which barriers we should not cross. But until we do find those limits, who could say where they are?


The strategy seems to have been discovered by the markets, though; the returns have been tapering off recently.

@Quant Trader, yes, it is tapering off. But, it is not because the strategy has been discovered and that too many traders are playing it.

The tapering off is a weakness of this strategy which can be corrected, or at least alleviated. The charts show market exposure is gradually tapering off with time to end at around 40%. Meaning that only 40% of the capital is being used. The remedy is to unrestrict the stock selection process in order to allow more stocks to be chosen which will result in more trades, thereby increasing n, the number of trades, in the payoff matrix equation. Increasing the number of trades will help sustain its CAGR.

I've only started modifying the strategy. I might end up not liking it, but thus far, it shows potential, so I will forge ahead and try to reduce its drawbacks and enhance its strengths further.

On the topic of mitigating over-fitting, Quantopian has user meta-data in their toolbox, as well. For example, in 2016, Quantopian published a paper, which was discussed on https://www.quantopian.com/posts/q-paper-all-that-glitters-is-not-gold-comparing-backtest-and-out-of-sample-performance-on-a-large-cohort-of-trading-algorithms. There's also the possibility to have a computer review the contents of code, as we see with checks for use of the optimize API for ordering, specific slippage and trading cost models, etc. (presumably, the terms of use limit Quantopian's application of automated code review to these sorts of structural checks?). Taking a more comprehensive view, it would be interesting to hear how the presumably anonymous black-box algo performance ("exhaust") data are melded with user meta-data (if at all) and the self-reported Strategic Intent (discussed here), to make a funding decision.

I realize that this may be a bit off-topic here, however, over-fitting was a concern discussed by Jess in the webinar, and so I figured I'd bring in some considerations from the Quantopian perspective regarding additional factors that could be brought to bear, within their site terms of use.

@Grant, the question should be: is a trading strategy over-fitted if it does not know when, why, what, and how much it will trade?

This strategy has quite a complicated trade setup. Only a limited number of stocks can pass through the loops. To the point that there is an initial limit to the number of trades it will take and then it tapers off.

Presently, my efforts to increase the number of trades have stalled. Even if I tell the program that it has more trade opportunities, none are taken. And there is a reason for it. It is in the structure of the program itself. The design constraints will not allow more stocks to be selected even if there are more selectable, which makes it another technical problem to solve: expand those constraints rather than restrict them further. This is about overall strategy design, not code: what do you want the strategy to do, and how should it do it?

The strategy's trading decision matrix will form a trade decision maze, or net, whether it be over past or future data. It cannot tell which of the many parameters is the trading mechanism, even more so when calculations are done to 12 decimals. A fraction of a penny change somewhere in those parameters over the past year or so and you have a trade!

We can transform the strategy's payoff matrix equation to include its decision matrix. It would result in the following:

F(t) = F(0) + Σ(HD∙ΔP) - Σ(Exp)

where we will find the decision matrix D composed of series of ones and zeros, indicating market exposure or not. However, you will not find any rationale for its behavior except maybe for some averages per trading interval.

This maze that the strategy gives to its price series is “predetermined”, not predictable, but predetermined by the trading logic and its constraints. This does not make the strategy predictable since it still does not know why it is trading that many shares in that stock at that time. It is just an ensemble of circumstances and infinitesimal variations in an ensemble of factors. However, it is consistent in buying stocks in a rising market. And this excuse is sufficient for it to profit.

I cannot say what exactly triggered a trade in the past, nor will I be able to in the future. But then, the question might be: does it really matter? The money is rolling in. Maybe a better question could be: is it acceptable that it trades this way? Or, should I modify the code to correct what I see as design flaws and weaknesses here and there, and force the program to do what I want to see?

If I do not know why, how or when a trade will be triggered, am I not responding to a quasi-random phenomenon? And then the question becomes: can I over-fit randomness?

The attached backtest is the same as the last one, except for one number. Instead of using a 10-day average trading volume, I raised it to 15 days. A strategy should not be that sensitive to such a minor move. And yet...


This is the same strategy as in the previous post, where added lag was introduced to increase profits. In this iteration, lag is reduced on 2 other parameters to again increase performance, this time by some $44 million. From the first modifications to this strategy, which, btw, is not mine, it went from taking 2,722 trades to 4,000, and in the process added $443.9 million to the pot. All by changing program numbers, not touching any of the program's trading logic.

The point I would like to make is: a trading strategy has a predetermined and hard-coded view of the world it has to live in. I view the developer's task as extracting from this random-like maze what he thinks he can.

You simply improve on the strategy's payoff matrix, and you know the numbers that can have an impact on the strategy's end results, therefore, all the pressure should be put on those in order to push the strategy to where you want it to go.

The portfolio's payoff matrix equation is: F(t) = F(0) + Σ(H∙ΔP) - Σ(Exp) = F(0) + n∙x_bar.

It only required increasing the number of trades and increasing the average net profit per trade to get better results, which is exactly what has been done in this iteration.
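The payoff identity above can be illustrated numerically. This is a minimal sketch with hypothetical numbers; the function names `payoff` and `payoff_from_trades` are invented here for illustration:

```python
import numpy as np

def payoff(f0, holdings, price_changes, expenses):
    # F(t) = F(0) + sum(H * dP) - sum(Exp): initial capital, plus holdings
    # times price moves summed over all trades, minus total expenses.
    holdings = np.asarray(holdings, dtype=float)
    price_changes = np.asarray(price_changes, dtype=float)
    return f0 + np.sum(holdings * price_changes) - np.sum(expenses)

def payoff_from_trades(f0, n, x_bar):
    # Equivalent form F(t) = F(0) + n * x_bar: the same total expressed as
    # the number of trades times the average net profit per trade.
    return f0 + n * x_bar

# Two trades: long 10 shares gaining $2, long 5 shares losing $1, $3 costs.
end_equity = payoff(100.0, [10, 5], [2.0, -1.0], [3.0])
# Same total via n = 2 trades at an average net profit of $6 per trade.
same_equity = payoff_from_trades(100.0, 2, 6.0)
```

The second form is the point of the argument above: pushing on either n (more trades) or x_bar (more net profit per trade) raises the end result.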


You could push for more if you wanted to, as long as you have not yet found the strategy's limits. How could you tell where they are if you do not even search for them? The point being: do exceed the limits and then scale back some, to a level you might find more acceptable for whatever reason. At least you will know where the limits are, and that you should not cross them, if you have not already.

In this iteration of the strategy, I again sought to reduce the time lag of the trade triggering mechanism, knowing that it should enable more trades to be taken. The changes are minor, again just a few numbers, nothing extraordinary, but still numbers that can have an impact on the strategy's payoff matrix. I changed 4 or 5 this time. I do not know which one had the most impact, or which combination thereof helped generate the added profits. This could be isolated and identified by making more simulations while changing one factor at a time, but that is too much work at the present time. Better things to do.

The modifications are not looking to make an impact here and there; the objective is to have an impact on all trades, however small it might be. Since you are dealing with thousands of trades, the profits will all add up to n∙x_bar.


Should you push for more? It is an interesting question. And, how much is enough is another.

This trading strategy is peculiar. If I had to describe it from the outside, I would say: it buys stocks on the way “up” in a rising market. And that phrase summarizes all it does. It is wrong about 35-37% of the time, giving it a 63-65% hit rate. The trade triggering mechanism is so complicated that you do not really know what is triggering a trade, and this goes for any one of them.

Its redeeming “quality” is that it will participate in the market often. Albeit not with definite precognitive notions of what it is doing, but should it really matter if it is able to profit anyway, simply as a side effect of its participation?

There is no explicit stop-loss in this strategy. Nonetheless, there is still one present that will trigger an exit according to the very structure of the trading procedures and its stock selection process. Again, you do not know why or how, but it will do it. You have to live with the consequences, and in this case, it gives a positive portfolio payoff matrix.

It must show that I have started reading the program.


@Guy Fleury,

With all due respect, I suggest you move your posts regarding this particular algo you're trying to showcase to a separate thread because this thread is specifically geared towards algos that qualify for allocations following a long/short market neutral strategy. This algo is a long only strategy and therefore would not qualify for the contest or for allocation. Let us please be mindful and respectful of Delaney's request:

Given the divide and low response rate I won't enforce anything in this thread, instead please use your judgement on whether a discussion would be better served as a new post linked in a comment.

Thank you.

@James, with all due respect, I suggest you re-read the first paragraph where it is said:

Whether or not you're looking to receive an allocation, we'd like to
try to provide more ways to get feedback on your strategy.

which might have a different interpretation.

@Guy,

Again, with all due respect, re-read the first sentence in the first paragraph you're referring to, which states:

We recently released our new backtest analysis screen to provide better feedback on your strategies.

The new backtest analysis is tailored to follow and analyze a long / short market neutral strategy, with constraints and thresholds that give guidance for the new contest. I believe this is the purpose of this tearsheet review thread.

@James, then again, with all due respect.

We recently released our new backtest analysis screen to provide

It does not specify any particular type of trading strategy. It only states “your strategies”.

Furthermore, the backtest analysis is applicable to any type of trading strategy whatsoever. It is a generalized tool that you can use or not. It was not designed just for strategies attempting to qualify in the contest.

You don't like what I say, my recommendation... don't read my posts.

What I would like better is someone attempting to demolish what I presented and provide some reasonable explanation for doing so.

You might not be interested in trading the way that strategy is designed; that is perfectly fine. I am not selling anything. However, at some point in the future, Quantopian will be searching for this type of trading strategy or something similar. There can be a lot of benefit in having one or two around, even in the current Quantopian scenarios. I posted something to that effect some time last year or so.

@Guy,

I have no desire to further squabble with you on the interpretation of textual context of the purpose of this thread:

Please leave your tearsheets here so that Jess, our head of portfolio management and research, can review them via a recorded webinar. She can provide feedback based on what we're looking for, and also in general what quants tend to aim for in strategies.

What is Q looking for? A long / short market neutral strategy. Why did they redesign the backtest analysis? To reflect what they are looking for!
What do you think is the first thing Dr. Jess Stauth will say in her next tearsheet review webinar to your showcased algo? Convert it to long / short market neutral first, then we can talk about it.

I merely suggested moving your posts to a separate thread as I found them to be out of sync with the intended purpose of this thread. I did not comment on the contents or on what you are trying to do with that particular algo. The minute I saw it was a long only strategy, I knew it was outside the realm of intent. Do not lose your marbles over trying to rationalize it.

@Guy, would you have an original strategy to share in this thread, one that passes the new contest criteria? The thread can become difficult to navigate if users start modifying strategies posted in this thread and then themselves start posting several different versions of modified tear sheets of what appear to be transformations in leverage and exposure that are way outside the bounds of the new contest.

It is an open forum, and folks will post what they will, within the bounds of the site terms of use. Guy's posts, although not of particular interest to me at this point, are generally in line with the request for tear sheets of strategies. It would be interesting to hear from the Q powers-that-be on the potential institutional market for long-only algos, and then to consider what Guy has shared in that context. Q has mentioned that at some point, they'll be accepting other styles into the 1337 Street Fund. Maybe long-only will be one of them, and Guy's strategy would be a perfect fit? Or maybe not? There's a "teachable moment" here, either way.

There are markets for many different kinds of strategies. Long only, long biased, market neutral, beta driven, futures, etc. The one that Quantopian has chosen to engage in for our first funding process encompasses pure alpha, market neutral, low volatility, and low risk exposure. In the long-term future, we are planning on developing other processes to fund other styles of algorithms. Right now we are still developing our first process and focusing purely on the style of algorithms that will pass the contest criteria. If you have ideas for strategies that don't satisfy the contest criteria, we'd love for you to continue to learn, test, and develop them on the platform. Maybe, in the process of developing a long-only strategy, you'll discover an anomaly that could be useful in a market neutral strategy, or you might find a long-only model that can be combined with a short-only model to be market neutral.

For the sake of this particular thread, I will ask that people keep the focus on the style of algorithms that pass the contest criteria. I agree that otherwise the thread becomes confusing. I would recommend starting another thread that focuses on a separate style of algorithms if you'd like to discuss that.

Hi Jess -

Regarding your feedback on the number of stocks traded, I'm wondering if you might be able to sort out how to support combining algos, for the contest/fund? Say I write three more algos, each supporting ~25 stocks. When combined with the one described by my tear sheet above, we'd be up to the ~100 stock mark.

This approach could potentially also address the issue described here:

https://www.quantopian.com/posts/daily-and-weekly-rebalancing-of-separate-alpha-factors-using-optimize-api

So, by supporting the combination of multiple algos into one, I think you could "kill two birds with one stone" if not more.

Hi Grant,

In my view, combining multiple alpha signals (or even algorithms treated as signals) is much more powerful when there is overlapping coverage across a large number of stocks.

This is not to say that it is impossible for folks to come up with a piecemeal coverage effect, but in that case something like sector-specific, or eventually market/geography-specific, signals are more likely to do well.

While adding up four 25-stock portfolios does improve diversification, and thus decreases the idiosyncratic risk driven by one or a few positions, you aren't getting the effect of signal diversification in a single "bet," which can be really powerful.

My advice for anyone looking to build alpha signals would be to use the full extent of the QTradableStocksUS and attempt to "score every stock" with each signal and with their final combo signal if they are combining. From there if you achieve a per-stock ranking or alpha signal with the broadest possible coverage you have preserved a lot of flexibility in how you (or we) construct a portfolio to trade.
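The "score every stock" advice above can be sketched with a toy cross-section. This is a hedged illustration, not Quantopian pipeline code: the factor names and the `combine_and_rank` helper are invented here, standing in for whatever per-stock signals you have.

```python
import pandas as pd

def combine_and_rank(factors):
    """Z-score each signal cross-sectionally so they are comparable,
    average them into a combo signal, and return a percentile rank so
    every stock in the universe gets a score (broadest coverage)."""
    zscores = {name: (s - s.mean()) / s.std() for name, s in factors.items()}
    combined = pd.DataFrame(zscores).mean(axis=1)
    return combined.rank(pct=True)  # percentile rank in (0, 1]

# Toy cross-section of three stocks with two hypothetical signals.
momentum = pd.Series({'AAA': 1.0, 'BBB': 2.0, 'CCC': 3.0})
value = pd.Series({'AAA': 1.0, 'BBB': 2.0, 'CCC': 4.0})
ranked = combine_and_rank({'momentum': momentum, 'value': value})
```

Because the output is a full per-stock ranking rather than a short list of picks, the portfolio construction step (yours or Quantopian's) retains the flexibility Jess describes.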

Hi Jess -

I kinda figured that would be your answer. For the algo I used to generate the tearsheet above, I actually restricted the universe to high volatility stocks (something like the 75-95 percentile range), and jacked up the exposure limit to 4.5%. But hey, I won ~$1K in the contest (would be nice if I could be trading the $1K on Robinhood on Q now... but I guess you aren't bringing that back). Using the entire QTU and going to 1% exposure kills the algo. No matter. I've decided to start from scratch, deleting everything on Q. Gets boring after a while.

It would be interesting to know if any of the algos you've funded follow the multi-factor, rank-across-the-QTU approach, or if they are more idiosyncratic. But I guess you just look at the "exhaust" and get a little glimpse via the strategic intent description, so it may be hard to tell. Has the internal Q team, just for yucks, tried to write a viable algo using the prescribed workflow? Is there any empirical evidence it works?

"It would be interesting to know if any of the algos you've funded follow the multi-factor, rank-across-the-QTU approach, or if they are more idiosyncratic."

We've made allocations to both types, with the largest allocations going to algorithms that use "the multi-factor, rank-across-the-QTU approach". Going forward I expect we'll continue to give strong preference to strategies with low single position concentration. I plan to review the tearsheet of one such algo in my next webinar, which trades roughly 1200 positions in total.

Thanks Jess!

If the information is available, and can be disclosed, it would be interesting to also understand if allocation algos tend to use premium datasets in addition to the free ones? In other words, do you find higher quality strategies among algos that make use of premium datasets?

Since average holdings >= 100 seems to be a requirement for allocation, would this criterion be introduced as a constraint for the contest too? That way the allocation criteria and contest criteria would be better aligned, and we could concentrate more on getting an allocation, rather than tailoring our algorithms to win in the contest only.

My general impression is that Q needs to do a better job of articulating what they actually need, and the Low Position Concentration requirement on the Get Funded page is a good example:

Low position concentration
Strategies cannot have more than 5% of their capital invested in any one asset.

The requirement is fleshed out here for the contest (which presumably reflects the fund requirement):

Low position concentration
Contest entries cannot have more than 5% of their capital invested in any one asset. This is checked at the end of each trading day. Algorithms may exceed this limit and have up to 10% of their capital invested in a particular asset on up to 2% of trading days in the backtest used to check criteria.

Presumably, the requirement is now baked into the backtester, and one can check against it there, along with a tear sheet.

The problem I see is that on the one hand, the message is 5% would be just fine, and on the other, Jess is saying:

Going forward I expect we'll continue to give strong preference to strategies with low single position concentration. I plan to review the tearsheet of one such algo in my next webinar, which trades roughly 1200 positions in total.

Presumably, this is the $50M algo in the headlines, and the tear sheet review will be of that algo (as Fawce mentions here). If this algo is being held up as exemplary of a typical kind of algo that would be funded, its position concentration is very much lower than the 5% limit in the contest/fund requirements. So, it is not clear how the "Low position concentration" requirement is to be interpreted. Perhaps it is just meant as a guard against transient spikes in concentration, while the average concentration should be much lower?

I gather that perhaps what Q really needs is the scoring/ranking across the broad QTU universe in the form of a combined alpha vector, versus full-up tradable algos (the 1200-stock algo is basically passing through the QTU ranking, I figure). Q quants would focus on the red alpha-generating circles in the quant workflow. I guess if I were Jess and wanting to construct a fund, I'd just ask for the daily alpha vectors across the QTU, and then have my team of whiz kids and my prime broker handle the rest (including an additional alpha combination step, across all of the alpha vectors from the crowd). This would seem to make sense, versus all of the dinking around with the optimize API, etc.

Trading 1200 stocks would suggest either a broad market inefficiency was uncovered, or a relatively large number of factors is being applied, each operating on sub-universes. It would be interesting to get some sense of which it might be, or if something else is going on. I'm thinking 100-200 stocks per factor, so maybe there are 6-12 factors?

I'd also note that given the algo development cycle time of at least 6 months, it is really important that Q provide good guidance, in the form of a set of written requirements for a fundable algo. I would never have guessed that a 1200-position algo would be desirable, based on the "Low position concentration" requirement of 5%, and I may have taken a different approach 6 months ago.
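The arithmetic behind this puzzle is simple: for an equally weighted book, the average per-position concentration is just 1/N, so the 5% cap and a 1200-position book are an order of magnitude apart. A small sketch (the helper name is invented here; the numbers are from the thread):

```python
def equal_weight_concentration(n_positions):
    # Average fraction of capital per position in an equally weighted book.
    return 1.0 / n_positions

# The 5% contest cap corresponds to a book of only 20 equally weighted names,
# while the 1200-position algo averages roughly 0.08% per name -- which is
# why the cap reads more like a guard rail than a target.
cap_equivalent_positions = 1.0 / 0.05
algo_concentration = equal_weight_concentration(1200)
```

On this reading, an algo can satisfy the written requirement while having a concentration profile sixty times tighter than the limit suggests.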
@Grant, I have found this lecture particularly useful: Lecture 25, "Position Concentration Risk". I remember reading somewhere in the forums that the contest limits are set to be not overly restrictive, so as to allow a wide array of strategies to qualify.

"Trading 1200 stocks would suggest either a broad market inefficiency was uncovered, or a relatively large number of factors is being applied, each operating on sub-universes. It would be interesting to get some sense of which it might be, or if something else is going on. I'm thinking 100-200 stocks per factor, so maybe there are 6-12 factors?"

@Grant, the answer to the above question is there in Jess's previous post:

"My advice for anyone looking to build alpha signals would be to use the full extent of the QTradableStocksUS and attempt to "score every stock" with each signal and with their final combo signal if they are combining."

@Jess, in the webinar you mention that the peak turnover at any point in time should not be >= 100%. What is the maximum guidance value on this? Can this value also be added to the risk constraints in the backtester? That way we can better develop strategies meeting all your requirements. Currently, the backtester checks only the average turnover over the last 63 days, not the peak turnover at any single point in time. If this is not added to the risk constraints in the backtester, a strategy implementing this constraint will be at a disadvantage in the contest compared to strategies not implementing it, because every added constraint brings down performance by some amount.

Here is an update on my attempts to restrict the peak turnover value at any point in time, by passing a constraint defined like the one below to order_optimal_portfolio along with the other needed constraints.
```python
import quantopian.optimize as opt

# algorithm code goes here

max_turnover = opt.MaxTurnover(
    1.0  # constraint restricting the maximum turnover value to 100%
)
```

Note that the turnover value can go up to 200%, as explained here: https://www.quantopian.com/contest/rules

"Turnover is defined as amount of capital traded divided by the total portfolio value. For algorithms that trade once per day, Turnover ranges from 0-200% (200% means the algorithm completely moved its capital from one set of assets to another)."

So, in the above code snippet, where I restrict the peak turnover value to 100%, I am allowing up to 50% of the portfolio to be reshuffled. Needing to change 50% of the portfolio in a day is quite a probable scenario if you backtest over a longer duration, like say 10 years. Then what happens if at any time you exceed this maximum of 50% change? order_optimal_portfolio fails to find a portfolio that satisfies all the constraints because of the max turnover constraint. And from that moment onwards, because the signals are likely to change even more in the days after order_optimal_portfolio first failed, the portfolio cannot be reshuffled any more, resulting in a deadlock.

Since this constraint is on the maximum allowable turnover for any day, it does not make sense to handle the exception and increase the max turnover value just to find an optimal portfolio. In conclusion, the current contest criterion of restricting turnover based on the rolling mean daily turnover definitely makes a lot of sense, but restricting the max turnover on any single day could result in the problem mentioned above.

In my experience, if you use the turnover constraint, you need to put order_optimal_portfolio in a try-except structure. If it fails, then you can either drop the turnover constraint, or successively increase the allowed turnover, until all of the optimization constraints can be met.
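The deadlock and the try-except workaround described above can be sketched outside the Quantopian environment. Everything below is hypothetical scaffolding: `fake_order` and the tuple-based constraint stand in for `order_optimal_portfolio` and `opt.MaxTurnover`, and `ValueError` stands in for the optimizer's infeasibility error.

```python
import pandas as pd

def daily_turnover(prev_weights, new_weights):
    # Contest definition: capital traded / total portfolio value. For
    # weight vectors this is the sum of absolute weight changes, 0-200%.
    assets = prev_weights.index.union(new_weights.index)
    prev = prev_weights.reindex(assets, fill_value=0.0)
    new = new_weights.reindex(assets, fill_value=0.0)
    return float((new - prev).abs().sum())

def order_with_relaxed_turnover(order_func, objective, constraints,
                                start_cap=1.0, step=0.25, max_cap=2.0):
    # Try the tight turnover cap first; on infeasibility, loosen the cap
    # in steps, and as a last resort drop the turnover constraint entirely.
    cap = start_cap
    while cap <= max_cap:
        try:
            return order_func(objective, constraints + [("MaxTurnover", cap)])
        except ValueError:  # stand-in for the optimizer's infeasible error
            cap += step
    return order_func(objective, constraints)

# Demo: completely moving capital from one pair of names to another
# is 200% turnover, which is why a 1.0 cap permits only a half reshuffle.
turnover = daily_turnover(pd.Series({'A': 0.5, 'B': 0.5}),
                          pd.Series({'C': 0.5, 'D': 0.5}))

# Demo: a stand-in optimizer that is infeasible below a 150% cap.
def fake_order(objective, constraints):
    caps = [c[1] for c in constraints if c[0] == "MaxTurnover"]
    if caps and caps[0] < 1.5:
        raise ValueError("infeasible")
    return caps[0] if caps else "unconstrained"

relaxed_cap = order_with_relaxed_turnover(fake_order, None, [])
```

The relaxation loop trades off the letter of the turnover constraint for never deadlocking, which matches the practical advice at the end of the post above.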
You do raise a good point that the next time all hell breaks loose in the market, as we saw at the start of the recent Great Recession, there might be a reason to allow for transient high turnover. The dot-com/telecom bubble might be another example, in recent times, when high turnover might be justified. My guess is that Q assumes that the market will somehow be "well-behaved," but even in my lifetime, I can recall multiple instances of multi-day "crises" that perhaps could be profitable at higher turnover, if it is allowed. I guess the risk is that if higher transient turnover is allowed, then the trading could also be unprofitable.

Hi folks,

I wanted to bump this thread and refresh the call for you to share tearsheets here for feedback. I'll be doing another feedback webinar in a few weeks (tentatively targeting October 10 at 1pm ET) and I would love to see what strategies you are all working on!

If you joined the last webinar (or have watched it on youtube since) you'll recall that we touched on a few themes:

(1) We find a lot of strategies in the community which take long exposure to mean reversion / short term reversal style and short exposure to volatility -- I would love to see what strategies you are working on that take advantage of fundamental data (value, quality, etc. type signals).

(2) Diversify! We reviewed a lot of strategies with concentrated portfolios; I would love to see your strategies that score a wider breadth of stocks (>100 positions ideally).

If you submitted your tearsheet but were not included in the first feedback webinar, no need to resubmit; I'll make every effort to get to your strategy this time around. Last but not least, if you submitted your tearsheet but haven't heard from us directly, not to worry: we are working through a backlog of promising submissions and we will be reaching out directly to a number of you about next steps in the allocation process!

Best,
Jess

Hi Jess,

Thanks for the bump.
After your first webinar on tearsheet feedback, I now have a clearer and better understanding of what Q is looking for in terms of possible fund allocation. The tearsheet below is my interpretation.

This algo uses 4 different fundamental factors and 5 technical factors that are then combined and ranked/scored across the QTU universe. I took extra care in choosing fundamental factors, given their issues with availability and frequency. Because my backtest starts in 2008, I made sure fundamental data is available on that start date. PEG_ratio, which is a good fundamental factor, was dropped because its availability starts later. I touched upon this issue here.

I have a changeable variable, Total Positions, that controls position concentration. Anyway, this algo has an overall average of daily holdings of ~440 stocks, which translates to an average position concentration of 0.227%.

Click to load notebook preview

To illustrate the changeable variable for position concentration on the same algo above, this one has an overall average of daily holdings of ~135 stocks, or a position concentration of ~0.75%. If I were to compare the results of the two algos based on Q Sharpe ratio and Q contest scoring, this one with the higher position concentration (albeit still trading over 100 stocks) would be the better choice. Is this a correct assessment, given that the final execution at the fund level is to leverage it 3-8 times?

Click to load notebook preview

Hi All,

I'll be attaching a tear sheet too in the next few days, with a strategy I've been working on (still work in progress), trying to incorporate Jess' feedback from her previous webinar:

• Trying to minimize overfitting (I'm sure I still have plenty)
• Minimizing daily average turnover, but >= 5%
• Minimizing position concentration

Regarding position concentration, I'm curious if anyone else attended Jess' webinar yesterday with The Data Incubator?
At around 17 min into the webinar, Jess discusses how a strong/quality strategy/factor should have an increasing Information Ratio (and Sharpe??) as the number of bets/positions increases. I have noticed that my strategies have a slightly decreasing Sharpe (in sample) the more positions they trade. The decline is not nearly as dramatic as with the example strategies in the video, but I'm wondering if I should be concerned about this? Is it an indication that my strategy/factor(s) are overfit?

I've also noticed that if I increase the number of bets/positions too much, the daily average turnover will dip below 5% and the strategy fails the contest criterion for daily average turnover. I've found that for this particular strategy, the max I'm able to have is 800 positions (400 each side) without turnover dipping below 5%... Is this a concern?

Hi Experts,

Do you know how to extract the "average daily holdings, overall" from the tearsheet in a notebook? I can only extract returns. I looked through the Quantopian help but couldn't see the holdings parameter.

```python
bt.create_full_tear_sheet()
total_returns = bt.attributed_factor_returns['total_returns']
ann_total_ret = ep.annual_return(total_returns, period='daily')
print 'ann_total_ret {}'.format(ann_total_ret)
```

Hi all,

We've just announced the upcoming tearsheet review with Jess Stauth, which will take place on Wednesday, October 10th at 4:00PM ET. Please submit your tearsheet by 5pm ET on October 9 so Jess has proper time to review. You can register for the webinar here. If you want to submit your tearsheet but cannot make the live webinar, please note that it will be recorded and made available for viewing on the Quantopian Channel!

Let us know if you have any questions. Thanks!
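On the question raised earlier in this thread about extracting "average daily holdings, overall": pyfolio-style tearsheets are built from a positions DataFrame (one column per asset plus a 'cash' column), and the average can be computed from that frame directly. A hedged sketch; the helper name and the sample frame are made up here:

```python
import pandas as pd

def average_daily_holdings(positions):
    # positions: DataFrame indexed by date, one column per asset, plus an
    # optional 'cash' column (pyfolio convention). Count non-zero holdings
    # per day and average over the backtest.
    pos = positions.drop(columns=['cash'], errors='ignore')
    return (pos != 0).sum(axis=1).mean()

# Hypothetical two-day positions frame: one holding on day 1, two on day 2.
positions = pd.DataFrame(
    {'AAA': [1000.0, 500.0], 'BBB': [0.0, 250.0], 'cash': [100.0, 100.0]},
    index=pd.to_datetime(['2018-10-01', '2018-10-02']),
)
avg_holdings = average_daily_holdings(positions)
```

If the backtest object only exposes returns, the positions frame usually has to be pulled from the backtest result itself rather than from the tearsheet output.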
Update: I made a mistake here. I thought I was averaging Grant's algo with our copula algorithm, yet I forgot the average, so this ends up being pretty much Grant's algo. I'm withdrawing this version and will submit a revised version if it pans out. Sorry for the confusion.

alan

@Paige, @Jess,

A bit late, yet here is a tearsheet to analyze. It is a mixture of the factors from @Grant's new multi-factor algo (https://www.quantopian.com/posts/multi-factor-alphalens-example-1) and our version of copula.

alan

Click to load notebook preview

Hopefully I'm not too late, but... here's my MARTY Algo: Mostly Alpha Risk pariTY. :)

Sales pitch: "52% of the time, it works EVERY time!"

With this one, I've tried to get 'mostly' specific returns without risk constraining the style factors. I've also tried to consciously minimize overfitting, daily average turnover, and position concentration.

Main concerns as I see them:

• Drawdowns are way too long (though not very deep)
• Returns appear to be degrading with time (as might be expected, as inefficiencies get 'discovered'?)
• Losing quite a bit on volatility

It's still a work in progress; I have a bit more OOS testing to do on individual factors. Most of the time before the start of this backtest is still available for OOS testing for most of the factors. Parameter overfitting I've tried to minimize as well, though I don't know how best to do this, other than minimizing the number of parameters used. Are Kalman filters the best way to reduce parameter overfitting?

Click to load notebook preview

@Joakim,

Very nice! MARTY beats Magic, LOL!

What is the cut off for submitting tearsheets?

Daniel: I think it's safe to say that if you can get your tearsheet up by 5pm ET today, I should be able to at least take a look. If you don't get it up in time to get into the webinar list, then just flag that and I'll make sure to get back to you 1/1 with feedback as soon as I can.

Thanks for all the submissions so far!
It has been exciting to see the evolution of thinking and results in this thread. I'm looking forward to putting together the second installment of this review for tomorrow!

Jess

Thanks Jess,

Just want to add my latest version of the MAGIC algo :-) Sorry, I'll miss your live webinar due to other commitments in that time slot, but I look forward to the recorded version.

Click to load notebook preview

Hi Jess,

I am trying to share my tearsheet, but each time I try to attach it, it crashes and a window saying "contact support" pops up. Is there another way to send you the notebook?

Thanks,
David

@David,

That's happened to me a couple of times too. Try saving the tearsheet in a new NB, save and close the NB and kill all open NBs, then try to attach it. That seems to have worked for me.

Thanks for the suggestion @Joakim. @David, if you still have trouble, you can also email your tearsheet to us at [email protected] and we will make sure it gets to Jess.

@Joakim @Paige,

Thanks for the advice and solution. So this is a simple algo, based on a single factor computed from the closing price of the last 5 days (it uses a 3rd-order predictor with a 2-point stencil). No fitting at all. I would not have bet a penny that it could work, but it ranks 12 today LOL.

Click to load notebook preview

@Jess, thank you for the support.

Here's a tear sheet of a work-in-progress. I am currently exploring the impact of adding a few technical indicators into the strategy. Not too sure which ones specifically would help, but open to ideas.

@Joakim @James... I don't have a sweet name for it yet, but you've inspired me to get creative haha

Click to load notebook preview

This submittal is late now, yet it is the average of two factors: @Grant's new multi-factor algo (https://www.quantopian.com/posts/multi-factor-alphalens-example-1) and our version of a copula factor. This corrects the error I mentioned above... the results are similar, yet different.
alan

Click to load notebook preview

You can now watch the latest tearsheet review, "How to Get Funded," with Dr. Jess Stauth below. Attendees received feedback from Jess on how to improve their strategies and increase their chances of placing in the Quantopian Daily Contest. Specifically, Jess discussed fundamentals-based signals and fundamentals-based cross-sectional equity algorithms. Please continue to submit your tearsheets to this thread and we will include them in our next live tearsheet review.

Thanks @Jess, the webinar provided a lot of useful insights!

A question from my side: If I have a strategy that on average makes 2% to 3% annually, would it be possible to leverage it profitably based on your borrowing costs? For example, let's say my strategy makes 2% annually and is leveraged 2 times. If your annual borrowing cost is also 2% on the leveraged amount, the net expected return on the leveraged part is 0%. So, what is the guidance value for the minimum average annual return that makes leveraging economically viable for the strategy, given your borrowing costs for the leveraged capital? I believe that with interest rates rising in the economy, as is happening now, this would become even more important for us to consider in our strategy design.

@Jess & @Paige,

My feedback:

• For me, these tearsheet review webinars are incredibly helpful. I like the ability to have a dialogue with the investment team on a regular basis, and being able to ask and get questions answered. Thank you for doing these!
• Doing them quarterly gives me enough time to research, backtest, and incorporate feedback from the previous session. Anything less frequent and people might forget or lose interest.
• Simple template algos of what you're looking for are great! More of those please. :)
• Sometimes Jess talks about someone's strategy, but the next slide is showing in the presentation. Perhaps Paige or someone can help ensure the correct slide is showing?
• I like that Jess answered questions throughout the session, rather than only at the end.
• I also like that Jess answered ALL the questions. This time there weren't that many questions and we finished early, but sometimes there isn't enough time to answer everything, which can be a bit frustrating. It would be great if you could schedule a 30-minute overflow buffer to ensure everyone's question is answered (just my opinion).
• Personally, I would prefer a new/separate thread for each tearsheet review webinar, mostly because this thread gets difficult to navigate as it grows longer. I can no longer view this thread on my phone due to crashes (iOS 12, Safari). Maybe I'm the only one with this issue? I can understand that you may want to keep everything in one place though.

Lastly, a (loaded perhaps) question: if we score most stocks in the QTU and send our 'alpha factor' to the Optimize API (both required in the contest), wouldn't you, in theory at least, be able to assign whichever position concentration level and risk constraints (and objective) you want? Effectively, since you control the Optimize API, wouldn't you be able to 'overwrite' the constraints that we assign, without actually looking at the 'secret sauce alpha-scoring' code that feeds the Optimizer? Just curious really. :)

+1 Rainbow Parrot in respect of "annual borrowing cost" aka "Cost of Capital"

PS: Cost of Capital may consist of a mix of equity + debt, in which Cost of Equity depends on the shareholders' required rate of return (or hurdle rate), whereas Cost of Debt (or borrowing cost) depends on the source of funds. For example, venture capital that takes an equity position is regarded as the most expensive CoC vs. bank borrowing at competitive market rates. See WACC (Weighted Average Cost of Capital).

I don't think prime brokers actually charge for borrowing costs and leverage.
I believe they tend to make money mostly from trading and settlement fees, potentially from selling order flow to market makers, HFT firms, and dark pools, and perhaps from exchange rebates as well. In short, they want to encourage (not penalise) hedge funds to use their capital (and long inventory) as much as possible, closely risk-managed of course. Just my understanding, could be wrong...

@Jess

One little question: what should the daily turnover of a strategy "ideally" be for it to be workable? Thanks in advance.

Hi all,

Thank you for the feedback and questions!

@rainbow parrot - almost more important than the size of the annual returns (assuming they are positive) is that the returns be consistent (a high hit rate of positive days/months/years vs. negative) and, as much as possible, idiosyncratic to common risk factors. Financing, borrow, and other costs can be kept fairly low for quant funds because they are set based on the global portfolio risk characteristics (among other things). This is one of the operational advantages of running a dollar- and market-neutral portfolio.

tl;dr yes, a strategy that returns a consistent 2-3% per year on unit leverage (with application of the default 5bps fixed cost penalty we use in the contest) with reasonable turnover and risk exposures is of potential interest to us

@joakim - will take all your suggestions on thread/formatting - agree!

Lastly, a (loaded perhaps) question: If we score most stocks in the QTU and send our 'alpha factor' to the Optimize API (both required in the contest), wouldn't you, in theory at least, be able to assign whichever position concentration level and risk constraints (and objective) you want? Effectively, since you control the Optimize API, wouldn't you be able to 'overwrite' the constraints that we assign, without actually looking at the 'secret sauce alpha-scoring' code that feeds the Optimizer?
Basically yes

@David Daverio: ideal is probably between 5% and 20% average daily turnover, but anything below 50% average daily turnover is probably workable. One note - because this value is computed as an average across days, you can get to those values a couple of ways. We'd rather see consistent turnover of 20% per day as opposed to, say, 100% turnover once per week.

@Quant Trader,

Turnover a whopping 832%, it wouldn't even draw the charts! What's going on here?

Hmm, I'm a little bit confused now. The backtest only has turnover at 170% ish; I don't know where the tear sheet got the 800% number? The algorithm's just a 'turbo-charged' mean reversion algorithm with a twist, and the results were significantly better than I was expecting! It's only ~150 lines of code. I don't know how to get the leverage up to 1 though; I get really weird spikes in it, between 0.2 and 1.0, all throughout the backtest.

yes, a strategy that returns a consistent 2-3% per year on unit leverage (with application of default 5bps fixed cost penalty we use in the contest) with reasonable turnover and risk exposures is of potential interest to us

Thank you for the clarification! This will allow us to work on having a greater number of holdings without worrying too much about diluting returns.

@Jess

Indeed, one wants to keep the turnover as constant as possible; I am working with pen and paper on this problem currently :-). You might be able to give me a little help with it (turnover computation is still not clear to me). So let's say I want to reach a turnover of 20%. I have a portfolio with capital of X.

First question: being dollar neutral means I short X and long X (not 2X, as some people told me in the forum, as one has to take the margins into account - is 100% usual? or is it more?). So the question is: is the 20% turnover objective 0.2X, or 20% of 2X = 0.4X? Put differently, to get the 20%, should I rebalance 20% of my shorts and 20% of my longs, or only 20-X% of my shorts and X% of my longs?
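Part of the confusion in the turnover exchange above is that conventions differ on what counts as "traded value" relative to the book. A rough sketch of the arithmetic (my own illustration; `daily_turnover` is hypothetical and not pyfolio's or Quantopian's exact definition):

```python
def daily_turnover(traded_value, gross_exposure):
    """Fraction of the gross book traded in one day.

    traded_value:   total dollars bought plus dollars sold that day
    gross_exposure: sum of absolute position values (longs + shorts)
    """
    return traded_value / gross_exposure


# Hypothetical dollar-neutral book: $1.0M long, $1.0M short.
long_leg = short_leg = 1_000_000
gross = long_leg + short_leg              # $2.0M gross exposure

# Replacing 20% of each leg means a sell plus a buy on each side:
traded = 2 * 0.20 * long_leg + 2 * 0.20 * short_leg   # ~$800k traded

print(daily_turnover(traded, gross))      # ~0.4 under this convention
```

Counting only one side of each replacement (buys only, or sells only) would instead report 0.2, which is one reason quoted turnover figures can differ by roughly a factor of two for the same trading behavior.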
The point is that my factor selection evidently depends on the turnover objective ;-). I was mainly focusing on 1-2 day predictions, which led to a 60% turnover...

Then a final question about allocation. There are algorithms which perform better when the market goes down (which is what I was focusing on lately). So my question is: do you prefer an algo which combines algorithms that perform well in both cases (market down, market up), or don't you care, since you can balance algorithms in the overall portfolio?

Then, lol, I have to share it with you: happy day, position number 10 in the contest :-). @Joakim: thanks, I would not have managed this without some of your posts!

Attached is a tearsheet of my first attempt with the new FactSet data - I'm quite happy with the data quality compared to Morningstar. The rationale is based on fundamental valuation; the algorithm uses several FactSet fundamental factors and one OHLCV factor. Thoughts/feedback highly welcome.

Click to load notebook preview

Algorithm based on a basket of fundamental and statistical factors, each individually analyzed for predictability and robustness in different market conditions, weighted by machine learning classifiers.

Click to load notebook preview

I have sent my tearsheet of a new algorithm (which I have been working on since the middle of this year) to Jess. Posting here to hold my position in the queue for the next webinar.

Hi,

Here's a tearsheet of my latest algo, hScalp. The default fees are used. It uses limit orders so there is no slippage. It doesn't meet the contest criteria, but I would be interested to hear what the Quantopian team has to say about it. It doesn't reinvest profits, so you should ignore the annual return. Intraday leverage is under one for the vast majority of the test. Although it's only a two-year backtest, this algo has 496.1% daily turnover and trades about ten securities at a time, so there are lots of trades being put through, which makes shorter backtests more reliable. Starting cash is $1,000,000.

I'm not for giving up much info with regards to my 'economic rationale', sorry. I will say I've been doing my best to ensure that there are no major flaws in the backtesting. Slippage, fees, intraday leverage, limit order fill models, I've been analyzing all of the things that can throw a backtest off.

Start date 2016-11-02
End date 2018-11-01
Total months 24

Cumulative returns 542.3%
Annual volatility 34.3%
Sharpe ratio 2.88
Calmar ratio 6.33
Stability 0.94
Max drawdown -24.2%
Omega ratio 2.05
Sortino ratio 5.92
Skew 5.89
Kurtosis 94.90
Tail ratio 1.27
Daily value at risk -3.9%
Gross leverage 0.21
Daily turnover 496.1%
Alpha 0.99
Beta -0.04

Regards,
Warren Harding


@Warren, those are really impressive numbers. Great work.

Could you show the same with the round_trips=True option?

Having recently discovered a silly little flaw in a silly little system presented in a research notebook I am more and more disinclined to take any notice of any system presented here without the full trading logic being available.

I suppose, as far as quantopian is concerned it doesn't really matter. They will follow algos put forward on the contest and those algos will get discarded when they mess up.

Nonetheless my suspicion is that systems with fatal flaws can prove profitable for considerable periods through sheer random chance. Every system will eventually get screwed anyway so perhaps the fact one particular system has a fatal flaw in logic so that it does not operate as its coder intended is irrelevant.

I think I am probably completely anal. Perhaps it is the fact I'm an ex lawyer which makes me dig my teeth in and not let go until I have established that there are no errors and a system works as planned.

I would therefore find it incredibly hard to rent black box algos as quantopian does.

But then I'm anal and Quantopian is the clever boy who has managed to get an allocation from Big Steve not me.

Success in life works in strange ways. It's often as much about networking and persuading people as technical expertise.

But I waffle, as usual.

Hi Zenothestoic,

I tend to agree, at least in part. If I run 1 million random (shorter-term) backtests, a few of them will most likely look pretty good. However, longer backtests (10 years or more) are more likely to cover different market regimes, and if the strategy still holds up reasonably well during a longer backtest (decent Sharpe without a downward trend), then I'd say the strategy is a lot less likely to be fitted on noise.

Combine this (long, decent backtests) with the 1-year holdout period from FactSet data (as the first OOS test, after any other OOS testing done while creating the model), and if the strategy still does well, I think it's even more likely to be a 'good strategy'.

The next OOS test to me is the 'live paper trading' in the Q contest. Again, if the model holds up reasonably well for 3-6 months in the contest, it's even less likely to be overfit.

However, the 'final' (and REAL) OOS test will be trading those strategies that have passed the first three tests (long backtests, FactSet holdout, and 3-6 months of contest performance) LIVE in the REAL market with REAL money! It's relatively easy to create strategies that do well in the past, or that do well in OOS tests in simulation. The real market is a different story however...

Oh, and in the spirit of attaching tear-sheets, here's one of a strategy I've been working on. Not too shabby I think, but could be overfit (the live_start_date is just the start of my own OOS test). I'm tempted to submit it to the contest to do the '2nd OOS test' using the FactSet holdout period, and to see how it does in the first 63 days of the contest. :)


As requested by Guy Fleury I posted a tear sheet with round_trips = True. I didn't want to clutter this thread so I posted it here.

Hi everyone,

We will be hosting another live tearsheet review with Dr. Jess Stauth on Thursday, January 24th at 2:00pm ET. Please submit your tearsheet by 5pm ET on January 22nd so Jess has proper time to review. You can register for the webinar here.

If you want to submit your tearsheet but cannot make the live webinar, please note that it will be recorded and made available for viewing on the Quantopian Channel.

Let us know if you have any questions. Thanks!

Hi

Here is my algo perf.

Thank you
L


Hello all,

Algorithm is making trading decisions based on alpha consisting of 8 fundamental factors and 2 momentum factors.

Thank you,
Vedran


Hey @Antony,

Thank you, really appreciate it!

I agree that it does look quite impressive IN sample (IS). However, I do believe it's at least somewhat overfit. My original 'training period' was 01-01-2010 till 01-01-2017. I then started to 'test' it out-of-sample (OOS) on 01-01-2017 till 14-12-2017 (all recent data available to me), but I made the mistake of also starting to 'train' using this period.

Fortunately I still had most of 2005-2010 available to me untouched/unseen (I haven't trained on it at all), and testing the strategy during this period I get an average sharpe of 2.46. Maybe 'some' decline/variation in average Sharpe should be expected, and I'm not sure if this would be reasonably within those Bayesian bounds.

It's not horrible I suppose. However, when I submitted it to the Q Contest, and got access to the FS holdout period of last year, it performs even worse during this time - average Sharpe of 1.4 in 2018! Which to me IS borderline horrible... I believe Jess has said to expect a drop of 1.0 in average Sharpe between IS and OOS, and that some people argue that a drop of 2.0 is more appropriate. Seems quite accurate in this case.

The strategy uses mostly fundamental data, which I'm hoping/rationalizing is less prone to overfitting, and I've taken precautions in the design to minimize the risk of overfitting, but still... not looking too good OOS. I'll leave it running in the contest for at least 63 trading days, to see how it does going forward, 'hoping' for a 'dead-cat-bounce.' :) This strategy is also somewhat/reasonably uncorrelated with my current top strategy in the contest (which does perform really well in 2018) so maybe they will complement each other in my 'contest portfolio.' :)

Back to the drawing board though I think... I might create a separate thread where I'll post the OOS test tearsheets. I don't want to 'pollute' this thread too much with multiple tearsheets of my own (it takes ages to load fully already, and crashes when I try to view it on my phone... )

PS: Does anyone actually read my long rambles? ;)

PPS: The strategy's Economic Rationale: "Warren Buffett On The Move!" Buying 'great companies at reasonable prices' when they are 'on the move' [credit: Andreas Clenow - a fellow Swede :)] in the right direction. Shorting the opposite.

Haha, no worries Antony, I didn't even notice till you pointed it out. :)

This is for review by Dr. Stauth, yet comes with a question about usage intent, not the usual performance questions.

The algo is what I'll term a Phantom Algo (zero everything), fueled by a fundamentals-based alpha factor pipeline, blended with a sector-based residuals factor.

It's been in the contest for 2 months, so I included that as the out-of-sample test.

With Microsoft and BlackRock colluding on retirement annuities for the masses, it got my brother Jeff and I thinking about these Phantom algos Q has us developing.

So the question to Dr. Stauth is:

Can I take as my Cumulative Return for a Phantom Algo the return I get on what it took me to fund the true execution of the algo, instead of on my initial cash position?

Obviously there is risk there, yet like an annuity, I might be able to get "insurance" to cover that risk.
In the example presented above, the Phantom algo makes 5% on $10M over 2 years, yet never uses more than $500K. If I run this Phantom algo with a $500K investment, I make a 100% cumulative return over 2 years... so which is it... run for two years with $9.5M cash sitting in a brokerage account, or fund the whole thing with $500K and use $500K more for insurance/risk mitigation... hence use $1M to make $500K over two years.
Happy New Year!
alan
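For what it's worth, the two figures in Alan's question are the same dollar profit measured against two different capital bases. A toy restatement of the numbers in the post (hypothetical sketch only):

```python
def cumulative_return(profit, capital):
    """Cumulative return as profit over whichever capital base you choose."""
    return profit / capital


notional = 10_000_000      # the algo's initial cash position
peak_deployed = 500_000    # the most capital the "Phantom" algo ever uses
profit = 500_000           # 5% of the $10M notional over two years

print(cumulative_return(profit, notional))       # 0.05 on notional
print(cumulative_return(profit, peak_deployed))  # 1.0 on deployed capital
```

The question, then, is which denominator an allocator should accept, given that the unused $9.5M still backs the tail risk of the positions.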


fund the whole thing with $500K and use $500K more for insurance/risk mitigation.

What method of insurance/risk mitigation did you have in mind? I spent months testing options and concluded it was a complete waste of time.

Although mind you, with the sort of returns you are talking about, we are into a whole different ballgame. The cost of options is nothing compared to 50% returns. It is amusing to imagine what options one might use given that one has adopted a "neuter everything" approach in the constraints part of the optimization.

We are almost down to chaos theory but trying to imagine what kind of event to guard against.

Are we worried about butterfly wings? Black swans flapping? What?

The neutralization approach has in fact introduced great complexity into the algorithm. It becomes difficult to know quite what risk you are now guarding against and hence what hedge might be most appropriate.

Also I have found it interesting in my research to note that you can produce absolutely HORRIBLE algorithms using optimize + constraints and yet still pass all the Quantopian risk tests.

It so happens that I have also produced some algorithms which look wonderful in back testing also.

But neutering the algo, at least with the current list of constraints, does not prevent a disaster if you use the wrong "alpha". And of course today's "alpha" might become tomorrow's "omega".

So if you have any useful ideas on insurance let me know!

@Zenothestoic,

When mentioning insurance, I was thinking in the meta-sense, not in terms of an options straddle,
more in the sense of what kind of insurance/due-diligence would a funder of this algo want...
e.g. what would it take to get Lloyd's of London to insure it, and what process does an insurance company go through when packaging a product?
alan

Model Name: Complexity Simplified
$10m notional and the Quantopian default cost model. Passes all Quantopian Contest constraints.
Backtest period # 1 is 01.05.2015 to 01.09.2018
Backtest period # 2 is 02.02.2005 to 12.30.2014 (For backward validation, looking for consistencies)

Economic Hypotheses
I use two FactSet fundamentals, a delta of one factor, and a "special" price-based transformation with a central theme: complexity can possibly be solved with simplicity. Basically, I take a FactSet fundamental factor and its delta, with confirmation from another fundamental factor, combined with the price-based indicator, which is then ranked across the board. To disclose the central theme would expose the IP and trading logic of the algo, so I'll leave it at that.

Portfolio Construction
Configured to trade approximately 1,000 stocks with a bin-weighting technique targeted to minimize volatility and drawdowns while hurdling all constraints.
The main focus is to find consistency in performance metrics over long periods with various regime changes. Let's just say that, given the way the Q framework of Optimize with constraints works, I consider these two backtests as in-sample results.

Since I'm using FactSet data there is the one-year holdout period, which I will consider as out-of-sample data to see if there is overfitting. So, I just entered this algo in the contest fully aware that, if it holds, it is suboptimal with regard to the contest because the in-sample volatility (1.4-1.5%) is less than the minimum threshold for volatility. Be that as it may, it may still be an interesting component for the Q fund.

Short backtest tearsheet:


Here's the longer backtest tearsheet:


What is your definition of "delta" in this context? And a transform of price, as in a moving average, if it's simplicity you are after? Given the vast complexity of the optimisation/constraints, I suspect the simpler the alpha the better. As you say!


@Zenothestoic,

Delta, in this context, means the simple difference of a factor between time periods, e.g., daily, monthly, quarterly, yearly. Moving averages as a price transform smooth the price series but introduce some time lag, which I don't want. My "special" price transform has more to do with causal relationships between stocks/assets. To say more would be giving away the whole kitchen sink! Chuckle...
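As defined above, a delta is just a lagged difference, which in pandas is a one-liner (an illustrative sketch only; the `revenue` series is made up):

```python
import pandas as pd

# Hypothetical quarterly fundamental series for one stock.
revenue = pd.Series(
    [100.0, 104.0, 103.0, 110.0],
    index=pd.period_range("2018Q1", periods=4, freq="Q"),
)

# The "delta": simple difference between consecutive periods.
delta = revenue.diff()   # NaN, 4.0, -1.0, 7.0
```

Note that `diff()` leaves the first observation as NaN, so in a pipeline-style factor the delta only becomes usable once enough history is available.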

If not, I'd like to know why?

Possibly! I guess time will tell. Although since time is a continuum, that caveat is perhaps a little disingenuous; on that basis it would of course be hard to ever know. Nonetheless, well done to you for the attempt. I wish you well in the competition.

@Z, Thanks! Indeed, time will tell. Cheers!

Throwing my most recent one into the mix. This one is fundamentals-heavy, using a mix of Morningstar and FactSet data to pick winners and losers every morning.

Maintaining 1000 positions and keeping volatility at around/below 2%.

Let me know any thoughts.


Review Entry:

DIVA Derived Intrinsic Value Alpha

Economic Basis

The algorithm is inspired by the fundamental investment principles and methodology of intrinsic value investing – in the tradition of Warren Buffett, Charlie Munger, Benjamin Graham et al.

Algorithm Design

The algorithm uses a company’s financial performance metrics for calculating intrinsic values as inputs into financial equations to derive value-based factors – expressed as ratios and z-scored for selection by normal distribution, typically at μ ± Xσ thresholds.

DIVA Portfolio Features

Given the intrinsic alphas, the algorithm selects any stocks that fulfil the distribution thresholds, ignoring any sector, style, market capitalisation, price level or liquidity.

• High specific returns – aiming at near-zero common/risk returns.
• Low drawdown, low volatility – aiming to keep these proportionally lower as the portfolio scales up in size.
• Neither style- nor sector-specific; no exclusion by capitalisation, price or volume – aiming to be as cross-sectional as possible.
• Risk-focused – aiming to achieve near-zero risk exposures.
• Avoid overfitting – aiming to minimise it via the distribution thresholds, viz. μ ± Xσ.

At this point, the algorithm has been backtested back to 2009 for portfolios of 20~250 names, all from the QTU universe, at < 10% average daily turnover.
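The μ ± Xσ selection step described above is a standard cross-sectional filter; here is a minimal sketch of the idea (`select_by_zscore` and the threshold are my own illustration, not DIVA's actual code):

```python
import numpy as np


def select_by_zscore(factor, x=1.0):
    """Z-score a cross-sectional factor and keep names beyond mu +/- x*sigma.

    Returns boolean masks (longs, shorts) over the input array.
    """
    z = (factor - factor.mean()) / factor.std()
    return z > x, z < -x


# Toy cross-section of factor values for 500 names.
rng = np.random.default_rng(0)
factor = rng.normal(size=500)
longs, shorts = select_by_zscore(factor, x=1.0)
# With x = 1 on roughly normal data, about 16% of names fall
# beyond each one-sigma tail.
```

Raising `x` trades breadth for conviction: a higher threshold admits fewer, more extreme names, which is one lever for the 20~250 portfolio-size range mentioned above.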

Concise posting of texts only – hopefully the community can read this post at the bottom of the thread – my PC returns memory errors on a blank page all too often!

Tear sheet and write-up have been emailed to Paige, as suggested.

Karl

@Karl, nicely captured the specs! Good Luck! Mysterious though....

Mysterious though....

To the extent that, Karl, one wonders as to the intent of the post? Particularly as it is unaccompanied by any tear sheet or write up!

Hi Paige,

Concise posting of texts only – hopefully the community can read this
post at the bottom of the thread – my PC returns memory errors on a
blank page all too often!

I'd suggest creating a new thread for the next tearsheet review webinar; this page crashes and keeps reloading when I try to view it on my phone, and even on a PC with a proper browser it takes forever to load completely.

Also, I wanted to post a tearsheet of my slightly amended strategy of the one I had posted earlier (walked-back and hopefully a lot less overfit), but I'm not sure I'll have time to work on it before the webinar (I'm currently on full time daddy-duty in Singapore). Please use this one if I don't post anything before the review due date. Thank you.

Joakim

Hi all,

Thanks to those of you who reached out about this thread getting too long. We've created a new thread to use regarding the upcoming live tearsheet review on January 24th. All discussion and feedback should be posted to the new thread going forward.

If you already submitted your tearsheet, you do not have to re-submit (unless you'd prefer to do so, of course).

Don't forget to register for the webinar!