Digging Deeper Into Long-Short Equity

I have yet two more questions regarding the long-short strategy. 1) Some hedge funds will use their ranking scheme to identify which securities will be placed in the long basket and which in the short basket. The dollar exposure is up to them: they can, say, put 70% in the long basket and 30% in the short basket, or be market neutral at 50/50? Then they can use portfolio optimization to further sort out securities in more detail. So funds will sometimes use both?
2) My other question is about the short basket. The securities in it are obviously the securities that you think will underperform, but I get confused on this. Do you choose securities that will underperform relative to the long-basket securities, or are these stocks supposed to be uncorrelated with the long securities and chosen because you think they will tank in general? Thanks, and sorry for being kind of confusing.


Hey Nick. You are correct, the amount of leverage and basket weighting funds use will vary. Because leverage is measured as a percentage of the original capital base, if the strategy is running on borrowed capital you can sometimes get up to numbers like 300%/300% in the long and short baskets.

The securities in the short basket are the securities you think will perform worst. The nature of an alpha factor (ranking scheme) is that you sort by relative predicted value, so the stuff that is going to be worth more and have higher returns should go to the top, and vice versa. The important thing is that the shorts perform worse than the longs, not that they perform poorly in an absolute sense. In an up market both baskets may be up, but the long basket more than the short, which is still fine. In fewer words, pick securities you think will do relatively worse, not necessarily ones that will tank in general. Also, yes, you want to reduce correlation between the securities in both baskets to decrease volatility. This lecture discusses that: https://www.quantopian.com/lectures#Position-Concentration-Risk

One approach is to select a set of, say, 400 stocks to short and 400 to long, then pick 200 from each based on an optimization that minimizes forecasted correlation. Another, similar approach would do the same but by adjusting weights among the 800 total. A rough sketch of the first approach follows.
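To make that first approach concrete, here is an illustrative sketch only: it uses simulated returns and a greedy trim rather than a formal optimizer, and the universe size and return statistics are made up. It cuts a 400-name candidate basket down to 200 names by repeatedly dropping the name most correlated with the rest.

import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns for 400 candidate names (252 days x 400 names).
returns = rng.normal(0, 0.01, size=(252, 400))
corr = np.corrcoef(returns, rowvar=False)

selected = list(range(400))
while len(selected) > 200:
    sub = corr[np.ix_(selected, selected)]
    # Average absolute correlation of each remaining name to the others.
    avg_corr = (np.abs(sub).sum(axis=1) - 1) / (len(selected) - 1)
    # Drop the most correlated name and repeat.
    selected.pop(int(np.argmax(avg_corr)))

print(len(selected), "names kept")

In practice you would estimate the correlations from the historical returns of the actual candidate lists and probably use a proper optimizer, but the intent is the same: keep the names that are least correlated with each other.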

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Hi Delaney -

I'm guessing you and Thomas are singing off of the same sheet of paper. I asked a question of him on https://www.quantopian.com/posts/machine-learning-on-quantopian regarding this business of ranking and then going short and long, regardless of the forecast. If I have 50 stocks, and my forecast is that all of them will have higher prices in 5 days, should I still rank the forecasts, and go long on the top 25 and short on the bottom 25 (even though they'll go up, not down, if the prediction is correct)? It would seem that I'd be ignoring my forecast, no? Or is the assumption that if I have enough stocks that have sufficient volatility and lack of correlatedness (correlatitude?), I'll always have forecasts that predict both positive and negative returns (and then I won't have to feel foolish shorting stocks that I've predicted to go up or vice versa)?

It is just not very intuitive how this ranking business is supposed to work. What's the point of the ranking? Why not just use the forecasts directly?

If you have a binary forecast of up/down, then a long/short ranking scheme will not work. If you believe that your forecast is somewhat monotonic, in that a higher forecasted value implies a higher real future value, then you can go long/short as the stuff at the top should go up more than the stuff at the bottom. You are not ignoring your forecast as much as using it to get at a different type of return stream.
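As a toy illustration of that point (the tickers and forecast numbers below are made up), ranking lets you trade the spread even when every forecast is positive:

import pandas as pd

# Hypothetical per-stock return forecasts, all positive.
forecasts = pd.Series({'AAA': 0.04, 'BBB': 0.03, 'CCC': 0.02, 'DDD': 0.01})
ranks = forecasts.rank()  # 1 = lowest forecast, 4 = highest
n = len(ranks)

weights = pd.Series(0.0, index=ranks.index)
weights[ranks > n / 2] = 0.5 / (n / 2)    # long the top half
weights[ranks <= n / 2] = -0.5 / (n / 2)  # short the bottom half
print(weights.sum())  # 0.0 -- dollar neutral, betting only on the spread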

There is absolutely nothing to stop an investor from making a long only portfolio of the 50 stocks. In an up market that might do very well as the absolute upwards motion of the stocks may exceed the difference between the top and the bottom sets. The reason you go long and short is that you are protecting yourself against market volatility and crashes. In practice it's incredibly difficult to predict the next time the market dips, and being long only means you're exposed to that. Ideally your alpha factor will spread stocks out enough that you aren't sacrificing returns in the long run when you compare a long only portfolio with crashes and a less volatile long/short portfolio.

Lastly, in general it's much easier to make relative value forecasts than absolute value forecasts. With an alpha factor you are not betting that a company's share price will be $101 next month; you are comparing two companies and betting that one will be more valuable than the other. This tends to be much easier because both companies have the same comparable metrics, and you can focus on the difference between the companies as opposed to trying to incorporate all the other factors you might need for an absolute value prediction (market conditions, sector, global factors, etc.).

Effectively you can use a simpler model that relies on comparing comparable metrics across companies or over time. 538 uses this a lot in their models for politics and sports: rather than comparing two different polls and trying to get a sense of the absolute state of opinion, they compare each poll with earlier polls from the same pollster, since the bias will likely be similar across both (they also select polls that have consistent bias). This leads to a more apples-to-apples comparison. If you haven't read Nate Silver's book "The Signal and the Noise", I highly recommend it. It goes over a lot of key concepts in the statistical construction and validation of forecasts.

Part of my confusion is why not use a hedging instrument, such as SPY (or a basket of ETFs)? Then, I can go all long or all short, and hedge with the instrument. Then, one can simply forecast in an absolute sense, and hedge, as separate operations. Is there a reason the focus is on all-equity, excluding ETFs and such? In fact, in your crowd-sourced fund of algos, couldn't you just add up all of the shorts and longs, and then make up the difference with a hedging instrument? Why have users write beta neutral strategies at all, since the market neutral constraint could be met (perhaps more effectively and efficiently) at the algo combination step. Maybe in the end, over N algos, you'll be market neutral by default?

Part of my confusion is why not use a hedging instrument, such as SPY (or a basket of ETFs)? Then, I can go all long or all short, and hedge with the instrument

The idea is that hedging with a broad-market ETF is not as optimal as hedging with stocks (let's say you're hedging a long position for this example) that will do worse than SPY, for example. It's quite simple, actually, if you think about it.

Hmm? Just seems like if I'm trying to get the SPY'ness (i.e. beta) out of a basket of stocks, I should use SPY directly. I'm thinking about it, but still don't get the argument.

Another alternative way to run the long-short equity strategy is to get rid of the ranking scheme entirely and pick the stocks directly with some sort of technical or fundamental analysis. Buy LEAP call options on the stocks you analyzed to rise in value, and buy LEAP put options on the stocks you analyzed to decrease in price. This gives you more leverage and some protection, because you can only lose what you paid for the options' premium, and it also provides nice leverage if you don't have the capital to buy thousands of shares outright.
Here are some very good videos that will explain in further detail:
https://www.youtube.com/watch?v=HjJ9c-Ufbnw
https://www.youtube.com/watch?v=q3M0Y8y4Gqs

@ Grant

Something to consider from a cost minimization perspective...Say you are forecasting only the short side of the equation, and hedging with SPY on the long side:

a) When hedging with SPY, are any of your shorts already in the S&P 500?
i) If yes, then when buying SPY you are going neutral on those stocks, paying commissions on both sides of the trade, and paying the 9 basis point management fee for SPY. It's probably a good idea to skip paying money to brokers and ETF managers simply to get zero exposure on a portion of your portfolio in return.
ii) If no, then you are still paying the management fee to SPY. Although only 9 basis points, this figure will add up over time once compounding takes effect. More importantly, SPY trades about 85 million shares a day, a figure matched by just the top 5 stocks in the S&P 500. So by hedging with SPY rather than coding your hedge on the individual stocks in the S&P 500, you give up the much greater liquidity you would get by spreading the orders around the individual securities.

From an economic perspective...it doesn't seem like you will have a good shot at reducing your portfolio's correlation to other strategies if half of the exposure is in an index that garners so much passive investment from the investment world.

Just some thoughts.

Frank -

Yeah, I wasn't talking about going all long or all short, necessarily (could be 70/30, for example, with SPY to bring beta to zero). I see your point regarding duplication of securities. Maybe that is the rationale, plus maybe if one is managing a huge long-short portfolio, it'll all come out in the wash. You'll always end up 50/50 long-short, with the longs forecast to go up and the shorts forecast to go down, in absolute terms.

Anyway, it would be interesting to hear from the Q folks, if they care one way or the other. In the algo I'm working on, adding in SPY allows me to completely null out beta, which for the contest at least, may be an advantage. But if everyone did as I am, then Q would have a bunch of buys/sells of SPY, which might not make sense when they combine N algos in the fund (but then, I guess they have to have a means to manage the redundant buys/sells of stocks, so perhaps it won't matter?). Maybe they want all stocks, and will do their own beta nullification, if any beta is left, with some super-efficient institutional investment thingys.

Grant

Delaney,

Can you prove by some simple backtest or other means that your hypothetical recommendations above:

300%/300% in the long and short baskets
select a set of say 400 stocks to short and 400 to long, then pick 200 from each
do the same but just by adjusting weights amongst the 800 total

will be practically fruitful, let's say for a $10M account?

@ Delaney -

Thanks for the book reference (for those who missed it, it was):

If you haven't read Nate Silver's book "The Signal and the Noise", I highly recommend it. It goes over a lot of key concepts in statistical construction and validation of forecasts.

I'd also suggest "Against the Gods: The Remarkable Story of Risk" by Peter L. Bernstein. One thing I recall from the book is that sound statistical thinking is relatively new on the intellectual scene, from a historical perspective. So, go easy on us Neanderthals; we don't understand your modern ways.

Vladimir's point has been raised numerous times on the forum, but it tends to be ignored. From a pedagogical standpoint, though, it would seem to be valid. Reading between the lines, there seems to be a collective vision at Quantopian that you know what you are doing, that there is an underlying recipe for success. With a little research, users can construct multi-factor multi-security long-short equity algos that will yield (1+r)^n type returns (i.e. like a bank CD, except with finite Sharpe ratio). I guess this is where you are headed with the workflow and it is just not yet possible to provide an existence proof, since the machinery is not yet in place. Fair enough. Still, though, I'd think the Quantopian team could pull together a start-to-finish cobbled together example, just to lend some credence to the approach, kinda like a physics professor starting a lecture with a demonstration and then doing a deep dive to explain how it works. It is a tried-and-true pedagogical approach.

Making relative measurements is nothing new. There is a whole approach called "gage R&R" (from the 6 sigma discipline) that is used to assess measurements (and ultimately, to include the intrinsic measurement error in making pass/fail decisions in manufacturing, for example). At a minimum, one wants to be working with instruments that are linear in response, so that accurate relative measures can be made regardless of the scale (e.g. I don't care about the offset on an ohmmeter if I just want to find the difference in two resistance values). Even if the response is nonlinear, so long as it is monotonic, I can compare one widget to another. However, if the response is non-monotonic or the response and/or noise level varies from day to day, for example, then I'm gonna have problems. It seems, analogously, one needs to establish alpha factors that are good instruments; if they suck at making measurements of individual securities, or the market as a whole, the entire effort will be hopeless. The problem, it seems, is that the alpha factors are transient--sometimes they are good instruments, and sometimes they are buggy. One needs to establish, point-in-time, which alpha factors are reasonable to use, and which ones need to be ignored.

Hi everybody,

I am not sure if anyone has thought about the danger of going SHORT. I have an algo which does both long and short. As I do the backtesting, I find the return drops rapidly from +400% to -500% in just one day! Incredible!

The reason is that the price of a stock which I shorted exploded from about 3.5 to above 30.0 USD in one day.

I am thinking about how to avoid such a catastrophe when shorting but haven't found any idea, since any stock's price, even Apple's or Google's, can explode, if not by 10 times in one day.

Cheers

I wanted to address a few points.

Grant and others had discussed why you can't just hedge against an instrument. First of all, whereas the SPY is something we use to measure market exposure, we aren't concerned about the SPY as much as what it is measuring. Therefore hedging against it is really an indirect way of trying to reduce market exposure, and in weird circumstances the SPY can fail to follow the market and lock up due to illiquidity and whatnot.
In practice trying to put a large amount of capital through one hedging instrument will result in a ton of illiquidity, slippage, and risk. As such it's far better to split that hedge out across the broad market by shorting a ton of different names, increasing the capital capacity of the strategy and decreasing the risk should a single asset do something weird.
Lastly, even if your predictions are that everything will go up, which may be the case in an up market, the short is a form of insurance. You expect to lose money on it and pay a small(ish) amount every month while the market is up. In return you are covered when the market decides to crash.

Most of these points were touched on, I just wanted to clarify my opinion.

In response to Thomas Chang's concern about shorts blowing up: yes, that is entirely possible with a short. However, it tends to happen more with very low-priced stocks, which arguably should be filtered out of your trading universe. Still, it is possible with things you shorted in a reasonable universe, and really the best way to get around that is investing in more assets. In general you have to accept that on any individual bet there is a tail probability of things going south, and the more bets you make, the lower the chance that you hit enough tails simultaneously to ruin your portfolio. See this lecture.

https://www.quantopian.com/lectures#Position-Concentration-Risk
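As a back-of-the-envelope illustration of that diversification point, assume (purely for the sake of the example) that each bet independently has a 55% chance of being on the right side. The probability that more than half of the bets go wrong shrinks as the number of bets grows. Real positions are correlated, so treat this as an upper bound on the benefit.

from scipy.stats import binom

for n in [10, 50, 200, 800]:
    # Probability that fewer than half of n independent 55%-accuracy bets are correct.
    p_majority_wrong = binom.cdf(n // 2 - 1, n, 0.55)
    print(n, "bets -> P(majority wrong) =", round(float(p_majority_wrong), 4))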

Here's the conundrum related to the long-short strategy that I have been wrestling with virtually ever since I joined Quantopian.

To make the strategy work, you have to do some stock picking. Yes, in a relative sense of course:
-- Your long positions should outperform your short ones, in RELATIVE terms, when the market is on the up;
-- Conversely, your short positions should drop more than your long ones in times of market decline.

The problem is how to establish the relative values of the momentum of the individual stocks.

There are many ways of measuring momentum, but one that follows the definition of geometric Brownian motion, the model used in the derivation of the Black-Scholes formula, is the following.

import numpy  # at the top of the algorithm

# Inside a scheduled function, where `data` and `context` are available:
P = data.history(context.secs, 'price', 252, '1d').dropna(axis=1)  # one year of daily prices
p = numpy.log(P).dropna(axis=1)  # log prices
r = p.diff().dropna()            # daily log returns
x = r.mean()                     # mean log return per stock
s = r.std()                      # volatility of log returns per stock
mu = x + s * s / 2               # drift under geometric Brownian motion
z = mu / s                       # ratio of trend to volatility per stock
z = z.max()                      # largest ratio in the universe
record(z=z)

If you do this for a universe like the Q500, you will see that the maximum of z, which is the ratio of the stocks' trend to their volatility, never reaches beyond about 0.3!

The movement of a stock, any stock, is therefore largely determined by its random component, especially in the short term. Reliably finding stocks that outperform others is virtually impossible.

Can someone describe a way out of this impasse?

Hey Tim,

That may be true for that particular model, but in general I don't think it's a big issue for the following reasons.

  • There are other models than momentum.
  • There are other data sources than price. In general using prices to predict prices these days is pretty tricky due to how many other people are trying the same thing.
  • Even with a weak predictor, you can combine it with many other weak predictors to achieve a good predictor. The sigma component of that equation will decrease when combined with many other independent signals, while the mu will not.
  • Following on the previous point, the idea behind an alpha factor which ranks all stocks by some criterion is not to reliably predict the motion of any individual asset. Instead it is to say that over a large number of assets, the chance that you get more than 50% of them wrong is low enough to be usable. As such, even models which only weakly predict the movement of individual stocks can produce reasonable spreads when applied to many stocks in a ranking fashion (a rough numerical sketch follows below).
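A quick simulation of the "many weak signals" point, purely illustrative (the drift and noise numbers are made up): averaging k independent signals with the same small mean leaves the mean alone while the noise shrinks like 1/sqrt(k), so the combined signal-to-noise ratio improves.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n_days = 0.01, 0.20, 100000

for k in [1, 4, 16, 64]:
    signals = rng.normal(mu, sigma, size=(n_days, k))  # k independent weak signals
    combined = signals.mean(axis=1)                    # naive equal-weight combination
    print(k, "signals -> mean/std:", round(float(combined.mean() / combined.std()), 3))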

Does this make sense?

Hi Delaney,

I find that many of the stocks that drop or explode rapidly are from the pharma, biotech, or oil industries. I wonder how to filter out such a sector or industry group, assuming I use the pipeline? I remember reading that one can do this but forget where I saw the article.

Cheers

Thank you, Delaney,

for your insightful answer.

The problem, however, is that no matter what kind of features you use to predict the returns or how you combine them, the actual movement of the price will largely remain random.

The geometric Brownian motion model may not be the only one available, but it is certainly well established, and it is hard to believe that any other model would give a substantially different ratio between the trend component of the price movement and the random one.

Hi Tim, I've updated my previous answer to include a final point. Basically that even if the ratio of noise to signal is high, by using the same model on many assets you can achieve a reasonable ratio when considering the movement of the portfolio as a whole.

Thomas, it makes sense that those sectors, especially pharma and bio, would be the biggest culprits. I'm not sure of an example of doing this off the top of my head, but I believe it is possible. I'd start here to see if it's helpful. https://www.quantopian.com/lectures#Case-Study:-Traditional-Value-Factor

@Thomas: To filter out securities from a specific sector, you can build a Sector classifier, create a filter using .eq(sector_number), invert it using ~, and add the filter as a screen to your pipeline. These two lessons from the Pipeline Tutorial should be helpful (a minimal sketch follows the links):

https://www.quantopian.com/tutorials/pipeline#lesson4
https://www.quantopian.com/tutorials/pipeline#lesson8
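A minimal sketch of what those lessons describe, assuming the Morningstar Sector classifier (the sector code used here, 206 for healthcare, is an assumption you should verify against the reference):

from quantopian.pipeline import Pipeline
from quantopian.pipeline.classifiers.morningstar import Sector

def make_pipeline():
    sector = Sector()
    # Keep everything that is NOT in the healthcare sector (code assumed to be 206).
    not_healthcare = ~sector.eq(206)
    return Pipeline(columns={'sector': sector}, screen=not_healthcare)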


Hi Delaney,

Indeed, diversification is an important part of the answer to my question. But I do not think it is the whole story.

However, allow me to first clarify why I seem to be insisting on the so-called geometric Brownian motion model. This is because another way of describing it is simply stating one's belief that returns are distributed (more or less) normally. The mean of this distribution happens to be much smaller than its standard deviation; that's the rub.

This fact cannot be escaped, unless you either assume that there is temporal autocorrelation present between the returns, or, as you have pointed out, you bet on many different assets simultaneously. It would also help to trade less frequently (once a week, or once a month), since the increase in the price due to the trend scales proportionally with the time, whereas the random contribution scales as its square root. Profitability gets affected by trading less often, though.
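A quick numerical check of that scaling argument, under an assumed daily drift and volatility (the numbers are made up): the trend component grows linearly with the holding period while the noise grows with its square root, so the per-bet signal-to-noise ratio improves roughly as the square root of the holding period.

import numpy as np

mu_daily, sigma_daily = 0.0005, 0.01  # assumed daily drift and volatility

for days in [1, 5, 21, 63]:
    trend = mu_daily * days
    noise = sigma_daily * np.sqrt(days)
    print(days, "days -> trend/noise:", round(trend / noise, 3))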

Autocorrelation can be determined empirically and is pretty much non-existent.

The diversification approach should work, except if the different equity prices are correlated. And this is unfortunately often the case, especially during a major market downturn, when all the stocks fall very much in unison.

The situation is different when trading futures, because they can refer to very different types of underlyings. Momentum following thus works well in the futures markets and is widely used by CTAs. But with equities it is very difficult to get good results, not to mention the cost associated with borrowing many stocks for a diversified short leg of such a strategy.

This all being said, it is obvious that a long-short equity strategy (involving many assets) can be profitable, because it is used by many hedge funds and is probably also the approach followed by some of the Quantopian fund algorithms. The question is thus: what am I missing in my reasoning? I would love to get a hint, after all the hours, days and nights I seem to have spent in vain tackling this subject. But I do not think it is diversification alone that does it.

Hi Delaney -

Regarding using SPY as a hedge, that's kinda what I figured. However, I see that it has an average daily volume of 85,482,213 shares and net assets of $200B. Say I had a $5M allocation from Q and just dumped it all into SPY. That's ~ 25,000 shares out of 85,500,000, or 0.03%. Would there really be concern about "illiquidity, slippage, and risk" as you say above? Is this really the reason not to have an ETF like SPY in a long-short algo that trades in equities? It just seems like a simple solution, to get beta down to as close to zero as possible, at every moment in time, if that is the requirement for each stand-alone algo.

Not to be sarcastic, but have you, or anyone there at Quantopian for that matter, ever seen one of these multi-factor long-short institutional algos in action, trading real money for 5-10 years? Theoretically, it sounds wonderful, but does it actually work, and have you seen at least one in the wild? I hear that hedge funds do this sort of thing, but then I also hear that they are very secretive. Maybe they just call their funds long-short blah, blah, blah, but they are actually doing something completely different. How do we know this isn't a unicorn hunt?

Hi Grant,

Since I have been talking a lot here lately :-), I'll risk an answer to your SPY question as well.

I think what Q is afraid of is that if they "allow" such an approach, everybody would do it, since it is easier than a full long-short algorithm. That could result in the fund as a whole being more or less in SPY at a given moment -- the whole 250 million of it. In reality, probably nothing catastrophic, but a hedge fund simply cannot afford this kind of situation from the point of view of risk management, etc.

@ Tim -

That could result in the fund as a whole being more or less in SPY at a given moment

Well, it might be ~ 50% of the total portfolio value. If the market takes a steep, protracted drop (a "crash"), for example, then all of the factors could lock into the overall market downtrend, so you'd have mostly short positions in stocks. To balance them, you'd have a large position in SPY. If everyone's algo were constructed this way, then the Q fund could end up ~ 50% in SPY. Even at $125M, it should be no big deal, since SPY is a $200B ETF (Q/Point72 would only own 0.06%).

My guess is that although there is no explicit rule against using hedging instruments, my read from Delaney's comments, and one by James Christopher on https://www.quantopian.com/posts/machine-learning-on-quantopian, is that hedging instruments within equity long-short multi-factor algos are undesirable. Looking ahead, if this glorious crowd-sourced concept works out, Q will generate a stream of buy/sell signals across a large universe of stocks. In an order management system, it will be pooled with orders from other sources (including Point72, per my worldview), in an overall optimization. Part of that optimization would be to beat down any remaining market risk, more efficiently than using the retail SPY. There will be some institutional derivative thingys, I figure, that would be a better choice. So, if the signals include SPY and other hedging instruments, then they have to be managed as well. I think what Q will want to do in the end is just buy/sell on the stock signals, but then, for example, how would I be rewarded as a "manager" if none of my SPY orders are executed?

The Q500US and Q1500US don't contain any so-called "hedging instruments" and the material that Q has been pumping out seems to imply that they don't want them included. Delaney, is this a correct assumption? If so, why? Not trying to beat a dead horse here. It just hasn't been articulated as part of the workflow ( https://blog.quantopian.com/a-professional-quant-equity-workflow/ ). Why all stocks and no ETFs?

@ Delaney -

I've been trying to fill in the gap between Jonathan Larkin's high-level https://blog.quantopian.com/a-professional-quant-equity-workflow/ and what you guys are talking about below the Quantopian executive suite.

Conceptually, the universe definition step is straightforward. Point-in-time, I pick a set of stocks, subject to a set of criteria. But then I'm totally lost with language like:

...successful cross-sectional strategies balance a tension between price dispersion and self-similarity in the universe. By definition, cross-sectional strategies extract relative value across securities and, in order to be able to rank something intelligently, there needs to be some degree of uniformity in the characteristics of the things being ranked.

  1. What is meant by "cross-sectional"? Does this mean across industry sectors? Or something else?
  2. How are the terms "price dispersion" and "self-similarity" defined in this context? Do I need to measure them, to determine that my universe is good-to-go?
  3. What does "extract relative value" mean? Does "value" mean returns? And why would I be interested in relative returns? Relative to what? In the end, I want absolute dollars, right?
  4. Conceptually, I think I understand "...in order to be able to rank something intelligently, there needs to be some degree of uniformity in the characteristics of the things being ranked." If I evaluate apples based on crunchiness and oranges based on juiciness, then do an overall apple/orange ranking, it won't make sense, since apples aren't juicy and oranges aren't crunchy--it would be the classic apples to oranges comparison. I guess there is something similar being said with regard to stocks, but how would I know that I have enough uniformity, but apparently not too much?
  5. The use of ranking seems to be ubiquitous in this domain. To me, this is mysterious. Is it a kind of brute-force way of dealing with whacky fat-tail distributions? Apparently, it is an accepted and useful technique, but it seems like a lot of information is being lost. For example, say I administer a test to a group of students. Everyone in the class scores below 50%, except for a couple whiz kids who score above 95%. If I rank and ignore the raw scores, I'll lose a lot of useful information. Ranking seems like a scary way of analyzing data.

I have a lot more questions, but I figured I'd start at the very beginning of the workflow.

Nice questions, Grant, particularly as regards ranking. I know I rank in terms of performance to judge momentum, but ranking in bulk across so many different factors does, intuitively, sound suboptimal.

I'm not clear yet on ranking. It may be fine, under certain assumptions. Delaney seems like a real numbers guy, so maybe he can articulate it.

I'm being a bit of a devil's advocate, since I get the feeling that the Q crew has a directive to just do things like everyone else in the industry. It is very easy to move forward quickly on received knowledge, but if it is wrong, then you can end up in a bad place. The flip side is that if they pull together the workflow with enough modularity and flexibility built-in, then the crowd can supply the innovation. Presumably, the competition doesn't have a platform like Q's (although Point72 is both a customer/investor and a competitor...and eventually, the effective owner, perhaps, should they reach the $250M mark and no other customers emerge).

Grant,

I agree with you about ranking/rotational strategy architecture. Personally I prefer a model which only holds positions that meet relatively rare setup conditions which actually have favorable odds, and am happy to leave portfolio slots unoccupied when those conditions are not occurring.

Thomas,

To filter out biotech and pharma specifically (which is a good idea), you need more granularity than the sector level. Here is how I coded it:

# Requires: from quantopian.pipeline.data import morningstar
group = morningstar.asset_classification.morningstar_industry_group_code.latest
# Exclude the two Morningstar industry group codes for biotech and pharma
filter = filter & ~(group.eq(20635) | group.eq(20636))

What I'd like even better would be a filter for only companies that live or die based on FDA approval, since those are the ones with the real risk of 300% gains or 80% losses overnight, and not only if low-priced (see ICPT in January 2014).

One thing I am curious about is that the big hedge funds must have an architecture that allows them to add/drop factors on a dynamic basis. I'm guessing that once they are up and running, you have analysts working to sort out which factors should be dropped, and which ones added, to eke out a bit more return. It would be fairly low-risk to drop in a new factor to see how it plays out with real-money trading, if it only ties up a little bit of capital and won't blow up the whole effort. I wonder if most of the effort is put in at the factor discovery level, where an army of relatively low-compensated worker bees can buzz away at proposing new factors. My guess is that in the big leagues, you don't assign single individuals the task of writing soup-to-nuts multi-factor, many-stock long-short algos, taking responsibility for the entire workflow as outlined on https://blog.quantopian.com/a-professional-quant-equity-workflow/ and then deploying them in a black-box fashion, as Quantopian is proposing.

It seems like what Quantopian needs is a stream of signals, which would come from writing factors instead of entire algos. In the end, that's what they'll get anyway. In the limit of a large number of black-box algos in their fund, they'll get minutely buy/sell signals across a large number of stocks, which will be signals to be combined, etc. If they could compensate users who contribute good factors, then it would seem that they would be able to pay out to a lot more users. There aren't that many slots for $10M mega-algos (only 25, given the $250M on the table from Point72).

...have you or anyone there at Quantopian for that matter, ever seen one of these multi-factor long-short institutional algos in action, trading real money for 5-10 years? Theoretically, it sounds wonderful, but does it actually work and have you seen at least one in the wild?

At Quantopian we seek algos which will continue to have a high Sharpe ratio out-of-sample. We don't purport to have a monopoly on ideas about how to achieve this. You are welcome to subscribe to any process you like in pursuit of this goal. However, the process which I describe at a high level, and which Delaney and others have expanded on with content and concrete examples, is indeed employed successfully at hedge funds, proprietary trading firms, and asset managers which have distinguished themselves with top performance over many market cycles. The lecture series (as well as blog posts by myself and others and periodic posts in these forums) exists to distill information which supports this process from first principles. We also realize the lecture series is incomplete, and are working hard to expand it to cover these concepts more accurately and thoroughly.

Have no fear that you are on a fool's errand in subscribing to this process (or the concept of long/short in general); you can rest assured that you are in the company of the most successful quantitative traders and investors, large and small, in the history of this industry.

Best wishes to all in this pursuit.


Thanks Jonathan -

Glad to see you decided to take the dive and participate on the forum. Fawce used to man the forum/help desk himself, so you need to pay your dues as a new executive. You'll earn brownie points with your boss!

Grant

There have been a lot of good points raised here. Rather than typing out long answers I'm gonna hold a Q&A webinar on the 18th. Here's the link, we'll post the recording as a followup.

https://attendee.gotowebinar.com/register/7843670039773376004

Also have a good weekend everybody!

Any chance this webinar will be recorded in order to be available for us people in time zones far far away?

Perhaps it would be interesting to look at the evidence. I have not yet done so but hope to report back when I have analysed the performance of long short funds to the extent such information is in the public domain.

Perhaps the appropriate benchmark against which to measure their success would be an annual rebalance of 50% short-term treasuries and 50% S&P.

Anthony -

Yes. If you really want to make quantitative folks anxious, say things like:

Have no fear that you are on a fool's errand in subscribing to this process (or the concept of long/short in general); you can rest assured that you are in the company of the most successful quantitative traders and investors, large and small, in the history of this industry.

From the outside, it really seems like the finance industry is a mess as an intellectual enterprise. Take, for example, the recent subprime mortgage debacle, and the global melt-down, freezing of credit markets, government bail-out, etc. And the 2007 quant crisis, the dot-com bubble, LTCM, and the list goes on. My earliest memory is Black Monday, 1987. CalPERS moving away from hedge funds is another example that comes to mind. The assumption that one should do what everyone else has done is not so clear to me, but then I don't work in the industry.

My training is in physics, and engineering (by trade). It is safest not to think you know what you are doing, unless you have data to support it, and there is a pretty high bar to actually "prove" something (e.g. the tremendous effort to show that gravitational waves, as predicted by Einstein, actually do exist). Even the term "first principles" has certain connotations. If I'm solving a problem in electricity & magnetism, for example, it might mean laying out certain assumptions to justify use of Maxwell's equations, and then applying them rigorously. In the field of finance, since I don't work in it, I don't know what "first principles" implies.

It is nothing personal, but it just doesn't matter who the authority is or what everyone else is doing. The Richard Feynman of finance could declare that multi-factor long-short equity strategies work, but until I see at least an existence proof that has traded real money for 5-10 years (or even a backtest on Quantopian), I'll be skeptical. I'm supposing that Jonathan has actually worked on these things, but by NDA, can't give a personal account, or even answer the question of whether he's seen one in action first-hand and what it looked like.

One problem here is that there may be a survivorship bias at work. The raw data on multi-factor long-short equity strategies over the last 30 years may be that typically they fail (or grossly under-perform relative to a benchmark). My hunch is that Quantopian is hoping that they can be one of the winners, and beat the statistics.

@ Delaney -

I won't be able to participate in the webinar, even though you have conveniently scheduled it over lunchtime. That said, I'll be thinking of a list of questions to post here that might be helpful. One thing that is not clear is what the alpha combination step is doing. I'm familiar with design of experiments/response surface methodology, which systematically deals with multiple factors and their interactions (and disciplined approaches to avoid "over-fitting"). Are you concerned here with just the diversification effect, to bring up the Sharpe ratio? Or is the secret sauce in the interactions? Or both?

For all of this to work, it seems that the underlying system needs to be stable in time. If I have a drunken monkey in my system, turning knobs and pushing buttons at will, then I won't be able to do jack until I get that monkey out of there. I guess the assumption here is that the equity market has a monkey in it, but he's not drunk, and so there's hope that he may be doing something understandable with his little monkey brain. Starting from first principles, why would I think that there are any statistical arbitrage opportunities in the first place, across the broad market (e.g. within your Q500US or Q1500US universes)? Is there any evidence that we aren't working with a drunken monkey here? And if not, on what time scale can we make predictions? If we are looking to extract weak, transient signals from a lot of noise, is there any way to show that those signals are there in a generic sense (not specific factor analysis)? That we aren't in the realm of pure Brownian motion?

Long/short and market neutral are not the same thing. I believe that Point72 is looking for market neutral and a low beta to hedge systemic risk.

Market neutrality can be accomplished by going long stocks that will outperform their sectors and shorting stocks that will underperform their sectors.

These strategies are the dream of every fund manager because they are known to have the lowest correlation with other popular strategies.

I highly doubt that such strategies could be found by machine learning due to stochasticity and other issues we discussed in the relevant thread.

The way this is usually done is by employing two groups of analysts that specialize in identifying strong and weak candidates based on analysis involving fundamentals and technicals.

The main problem with machine learning is that the relevant features change constantly, and classification based on a fixed set of features tends to generate noisy results over the longer term.

This is a task for human intelligence and that is the edge. I hope this will save you a lot of time and maybe money.

I traded long/short with Merrill Lynch as the broker for a Swiss Fund during the whole of 2008.

If you go short and there is an acquisition, you will take a large hit. I was lucky: a major S&P 500 acquisition was in the long part, and I got a cushion that helped me minimize losses due to the crash; the signal was generated by my software, which I used to generate both the long and short signals. My approach was purely technical: select the universe (the S&P 500 in that case) and generate long and short signals.

I traded up to 10 positions each way with max 0.2% risk each, for a total portfolio heat of 4% at any given moment. Turnover was high, with an average holding period of 1 to 5 days.

I finished 2008 with a small gain. The fund manager hit his cumulative stop-loss on his long/short portfolio when long positions continued to lose money and short positions rallied due to short covering.

I told this story because the major risk of market-neutral long/short is when "someone" or the "market" sells your longs and at the same time buys your shorts. Sound risk management is required. Optimization algos will filter out these events in most cases, but in reality they will happen.

Hi Michael,

Many thanks for your valuable account.

You say that you

... traded up to 10 positions each way with max 0.2% risk each for a total portfolio heat of 4% at any given moment.

Allow me a few inexpert questions, please.

How was risk defined in this case? Was the limit achieved by preselecting a universe of stocks with limited volatility? And what is the portfolio "heat"?

Thanks again!

Hello Delaney -

Some comments and questions for you to consider:

  1. Above Jonathan comments "You are welcome to subscribe to any process you like in pursuit of this goal." As a theoretical statement, that may be true, but practically, the Q platform and the tools you are developing have limitations. In some instances, one could argue that users could write custom code (or adapt Q code available on github) and run it on the Q platform, to have access to the data. But taking pipeline as a specific example, it is a closed API that runs only on daily OHLCV bars. It would be impossible for a user to reproduce it. One confusion for me is that pipeline is not capable of incorporating price and volume information from your minutely OHLCV database. For example, still keeping with the daily frequency, a database of summary statistics for each day, derived from the minute bar database could be set up. Then, factors could operate on the new database, which should do a much better job of representing the price and volume of each stock for a given day. Limiting pipeline to daily bars doesn't make sense to me, but maybe it is a known fact that daily OHLCV bars are good enough for equity long-short strategies (I'm not implying high turnover, but that a lot of potentially valuable information is available but not accessible to users)? Since other data are low-frequency (e.g. fundamentals), the return and its error (volatility) can be estimated adequately from daily OHLCV bars?
  2. As Michael Harris comments above, "...relevant features change constantly and classification based on a fixed set of features tends to generate noise results in longer-term." Also, above, you say "Even with a weak predictor, you can combine it with many other weak predictors to achieve a good predictor. The sigma component of that equation will decrease when combined with many other independent signals, while the mu will not." The issue I see is that there needs to be some way, point-in-time, to determine if a given predictor is weak, or if it is not a predictor at all, which is an important distinction, it would seem. I'm a bit confused about how you are implementing the workflow. There is the Alpha Discovery step, which would seem to be this point-in-time determination of the validity of a set of predictors--weak ones are acceptable, but ones that just spit out noise should be rejected. The tool is Alphalens, I think, but I don't see how it integrates with a point-in-time workflow that would take in multiple factors and combine them. Or is the combination step meant to make the distinction between weak and totally useless factors? Should I be able to feed in any ol' factor to the combination step, and it will accept/reject automatically and reliably?
  3. I see a direct analogy with control charting ( https://en.wikipedia.org/wiki/Control_chart ) used for process control. In order to use a given factor, it would seem that I'd want to chart its predictions versus time. Once I see that it is rigorously under control, I could proceed to use it in combination with other factors. Ideally, I would also look at the control chart trend, to see if the factor might be trending out of control (i.e. forecast the control state of the factor). It seems pretty cut-and-dried, but perhaps it doesn't work in this field?
  4. The focus seems to be on forecasting returns, but not the volatility in returns. Shouldn't a forecast include its error bars?
  5. In the most general sense, what should be the input to the portfolio construction step? For each stock in the universe, should I have an absolute price forecast (and perhaps its error bars) that was derived from a combination of all factors that have been "under control" in a statistical sense, and would be expected to remain in control? Then, within the portfolio construction step, I would decide how to optimally combine all of the forecasts, subject to constraints (e.g. minimize turnover, track minimum SR goal, beta = 0, etc.). One constraint could be that I only pick N of M stocks, and that I have N/2 long and N/2 short. To me, at least, it seems like you are embedding some of the constraints and optimizations earlier in the process. For the universe selection, I guess this makes sense, since you need the universe defined to crank out the factors. But it would seem that the alpha combination step should simply generate absolute forecasts on a stock-by-stock basis. Otherwise, the guys in the portfolio optimization department aren't gonna be able to do their job. Also, one can imagine having a variety of portfolio optimization "modules" depending on the need. It seems that the most general input to such a module would be the absolute stock-by-stock forecasts (plus some other stuff, probably). It is not clear that you are architecting the workflow in a modular fashion, as I see it.
  6. I don't understand how you are handling the scaling of the problem versus the number of factors. For example, as described on http://www.itl.nist.gov/div898/handbook/pri/section3/pri336.htm , if you want to detect curvature, you need a minimum of 3^k "measurements". So, if you want 20 factors, that is 3,486,784,401 data points, which sounds completely intractable. How many factors can be handled?

@Tim

"Allow me a few inexpert questions, please.

How was risk defined in this case? Was the limit achieved by preselecting a universe of stocks with limited volatility? And what is the portfolio "heat"?"

These are NOT inexpert questions. They show a deeper understanding of the practical issues of trading. One could write a whole book based on your questions above.

In my trading I kept things simple by using well-defined stop-loss levels generated by my machine learning program (which, by the way, I started developing in the early 2000s and have improved significantly since). In this way I could calculate the number of shares to maintain a fixed risk per position as a percentage of available equity, assuming of course the stop was not run through by the market. Positions were equally weighted (0.2% of available equity) because this was very short-term trading. When holding periods increase, one must choose a different method of calculating risk and weightings based on volatility, covariance and expected mean returns. This is not a trivial problem. In my case "portfolio heat" was kept at a maximum of 4%, which was the maximum percentage of equity I could lose at any moment if all positions lost money. Again, that assumed that I would be able to exit losing positions at the stop price, which was the case most of the time.

This gets us to the problem of effective capital utilization. I was trading long/short but I did not have a requirement for market neutral. Note that it is not always possible to find high quality shorts to be market neutral. This is a real and practical problem. If not able, then capital is underutilized. If you force short positions because you must utilize capital, then the risk of loss is high.

In my trading, because of its short-term nature, I could always identify 3-4 shorts minimum at a time and many more longs. When I could not find enough longs, capital was underutilized, but this did not last more than two or three days. Then I would get plenty of signals to choose from.

These are practical issues and while most quants concentrate on classification and ML code, these and other issues come to haunt them and affect performance. Other practical issues involve how you measure returns. I have a brief article about the abuse of return calculations by naive signal providers that includes a few simple formulas. Here is the link.

@ Delaney -

Another consideration is highlighted by Michael Harris' comment:

This gets us to the problem of effective capital utilization. I was trading long/short but I did not have a requirement for market neutral. Note that it is not always possible to find high quality shorts to be market neutral. This is a real and practical problem. If not able, then capital is underutilized. If you force short positions because you must utilize capital, then the risk of loss is high.

If I understand the Q paradigm, you want every algo to be constrained to beta = 0 at every instant of time. In other words, if you saw a given algo going all long or all short (or even with abs(beta) exceeding 0.3), you'd flag it and maybe shut it down? This may not be optimal, right? Another approach would be to let contributors do as they see fit, and then impose the market neutrality constraint at the fund level.

This brings to light another question I have regarding the whole "crowd-sourcing" concept, and the idea that each individual user would write a free-standing, soup-to-nuts algo, as outlined in the workflow. This would not be the only approach to get Point72 (and other customers) a long-short fund that is market neutral. One could imagine that you'd be 80% of the way there with just a stable of really good factors, and you take things from there. Further comments along these lines on https://www.quantopian.com/posts/how-much-can-the-top-player-slash-developer-can-earn-in-quantopian-1 :

It is a researchy/blue sky/academic question that is hard to "sell" and maybe wouldn't work at all, but have you put any thought into what it would look like if you could engage and pay all 90,000+ users (rather than plucking out a handful of semi-professionals)? For example, what if you paid for individual "alpha factors" rather than algos that employed many factors, across hundreds of stocks? It is just not clear how much value the crowd will add to the presumably standard Alpha Combination, Portfolio Construction, and Execution steps (and I suspect that in large trading "shops" these later steps have been reduced to practice, and the focus is on the earlier steps, in an attempt to keep the fund return on-track with a target).

Any thoughts on paying for "signals" versus paying for full-up soup-to-nuts algorithms?

@Michael,

Thank you very much for your exhaustive reply. It is both fascinating and immensely valuable to have this kind of information from someone with actual, first hand experience of the workings of a real hedge fund.

For example, I am struck by the fact that you only traded 4% of the total equity at your disposal at any given moment and that most of your trades were short-term ones. On the Q competition pages one can see from time to time the data on the leading strategies that have excellent Sharpe ratios, extremely low drawdowns, and relatively small but still decent CAGR, combined with very low volatility. Your account goes a long way towards explaining these results, which I always found almost magical, since I have been backtesting in vain almost exclusively strategies that are fully invested at all times and are trading (rebalancing) tens or hundreds of stocks once a week or once a month -- because I assumed, perhaps wrongly, that this was the kind of strategy we were supposed to look for according to the Q fund criteria and the related examples and instructions.

I would really like someone from the Q team to comment on this, if possible.

The way this is usually done is by employing two groups of analysts that specialize in identifying strong and weak candidates based on analysis involving fundamentals and technicals.

The main problem with machine learning is that relevant features change constantly and classification based on a fixed set of features tends to generate noise results in longer-term.

This is a task for human intelligence and that is the edge. I hope this will save you a lot of time and maybe money.

This is exactly what my impression of "vanilla" long/short funds is (the human element is highly involved), and I'm quite surprised to see that Q is advocating ONLY long/short strategies built on naive mechanical factor-based systems and/or naive machine learning methods.

It will be interesting to see whether these kinds of strategies, (direct quote) "driven by underlying economic reasoning" and always market neutral with a nice Sharpe, etc., are actually to be found, or whether the focus will change once this proves futile.

My apologies, Michael, I now see that I got it wrong assuming that you were only invested with 4% of the total equity at any given time. I now realize that when you say "positions were equally weighted (0.2% of available equity)" you mean this in terms of risk.

I think your point about the occasional unavailability of shorts for a completely neutral strategy is very important. Conversely, when events like the 2008 sub-prime mortgage crisis happen, it may be difficult, for a while at least, to find suitable longs if one's approach involves longer holding times, since the correlations between stocks increase.

Tim, my mistake, I should have written : 0.2% of 1/20 of capital. Each position was allocated 1/20 of capital.

Now, this is the point, as I explain in my book Fooled by Technical Analysis (ML is advanced TA, BTW): if the stop-loss percent is equal to the risk percent, then allocated capital is fully utilized. I call the ratio of risk percent to stop-loss percent the risk ratio, RR. The number of shares is equal to capital times RR divided by price (which, BTW, you never know exactly; this is a source of error known as slippage). Theoretically, you want RR equal to 1.

If my stop-loss percent were 0.2%, then my equity would be fully utilized (RR = 1) and I would still have a maximum portfolio heat of 4%. But this can be done only in fast timeframes where noise is high. There is no free lunch in short-term trading. In order to avoid frequent losses I had to keep a stop-loss of about 2% or more. In this case RR is 0.1, and I utilize only 10% of capital to maintain 4% total risk when I have 20 positions. With fewer positions, capital is even more underutilized.
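
A minimal sketch of that arithmetic (the numbers are illustrative, not my actual account figures):

```python
# Position sizing with the risk ratio RR = risk percent / stop-loss percent.
# Shares = capital * RR / price; RR = 1 means the allocation is fully deployed.

def shares_for_position(capital, risk_pct, stop_pct, price):
    """Size the position so that hitting the stop loses risk_pct of the allocation."""
    rr = risk_pct / stop_pct
    return int(capital * rr / price)

allocation = 1_000_000 / 20          # each position gets 1/20 of capital
risk_pct = 0.002                     # risk 0.2% of the allocation per position
price = 50.0

# Stop as tight as the risk: RR = 1, the whole $50k allocation is deployed.
print(shares_for_position(allocation, risk_pct, stop_pct=0.002, price=price))  # 1000 shares

# Wider 2% stop: RR = 0.1, only $5k (10%) of the allocation is deployed.
print(shares_for_position(allocation, risk_pct, stop_pct=0.02, price=price))   # 100 shares
```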

Quants with no trading experience must understand that transitioning from a simple fully invested system, or a simple rotational system with rebalancing, to multiple signals with associated risk management leads to a conundrum you have to deal with to maximize returns. Most fund managers have decided to take the strategic approach (passive allocation and infrequent rebalancing) to avoid having to deal with this. It is not a trivial problem, especially when you try to optimize position size using volatility when holding periods are longer. Human input is required, as automation can lead to errors and unforeseen situations.

Another question:

I don't understand how Alphalens integrates into the workflow. It seems that one needs an algo module that can be run point-in-time to determine which (if any) of the multitude of factors are predictive. Yet Alphalens appears to be a manual tool for the research platform, which I suppose is fine if the assumption is that factors are stable. I would find a set of factors that are consistently predictive back to 2002, and then use them in my algo as a fixed set. However, I'm skeptical that such a relatively large set exists (I think we're talking 20-30 factors to be in the land of milk and honey statistically). If I throw up my hands, and just use them all anyway at each point-in-time, then I run the risk of garbage-in/garbage-out when I combine them (using ML or whatever). It just increases the risk of over-fitting. How is this envisioned to work?
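
For what it's worth, here is roughly what I mean by a manual, research-platform check (a sketch only; it assumes the alphalens API as documented at http://quantopian.github.io/alphalens/ and uses a toy 5-day reversal factor as a stand-in for whatever is actually being tested):

```python
# Sketch of vetting a single factor with alphalens in the research environment.
import alphalens as al
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.filters import Q1500US
from quantopian.research import run_pipeline

universe = Q1500US()
my_factor = -Returns(window_length=5, mask=universe)   # toy short-term reversal

pipe = Pipeline(columns={'factor': my_factor}, screen=universe)
factor = run_pipeline(pipe, '2013-01-01', '2015-01-01')['factor']

# get_pricing is a research-notebook built-in
prices = get_pricing(factor.index.levels[1],
                     start_date='2013-01-01', end_date='2015-03-01',
                     fields='close_price')

# Align factor values with forward returns and inspect predictive power.
factor_data = al.utils.get_clean_factor_and_forward_returns(
    factor, prices, quantiles=5, periods=(1, 5, 10))

ic = al.performance.factor_information_coefficient(factor_data)
print(ic.mean())                          # mean rank IC per holding period
al.tears.create_full_tear_sheet(factor_data)
```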

Keep the questions coming. I'll try to provide answers during the webinar that address the broadest swath of questions possible. It will also be recorded and the recording will be posted here for those in inconvenient time zones.

Delaney,

Keep the questions coming
Here is the one, not answered, from a week ago:

Can you prove, by some simple backtest or other means, that your hypothetical recommendations above:

300%/300% in the long and short baskets
select a set of say 400 stocks to short and 400 to long, then pick 200 from each
do the same but just by adjusting weights amongst the 800 total

will be practically fruitful, let's say for a $10M account?

This paper refers to bazillions of alphas:

https://arxiv.org/ftp/arxiv/papers/1601/1601.00991.pdf

Some examples are given. It is mysterious how such a contraption could work, and it is a far cry from your stated Strategic Intent:

We are looking for algorithms that are driven by underlying economic reasoning. Examples of economic reasoning could be a supply and demand imbalance, a mixture of technical and fundamental drivers of stock price or a structural mispricing between an index ETF and its constituent stocks.

Am I correct in thinking that you are headed in the same direction as the paper--the combining of many factors? If so, how does one envision and articulate an economic reasoning? What is the intuitive picture of how such a thing might work?

Hi Delaney -

I just posted a reply to https://www.quantopian.com/posts/kitchen-sink-data-set-ml-algo which has some bearing on the workflow development. It basically deals with the idea that you may be over-constrained in your development. The workflow needs to work on the present platform. But this may not be the optimum solution. Maybe you should be carrying out research that justifies an evolved platform, rather than trying to get things to work on the present one? Is there any evidence that changes to the platform would help? What is the cost-benefit?

Hi Delaney -

Do you envision Quantopian providing any sort of vetting of alpha factors? For example, on https://www.quantopian.com/posts/machine-learning-on-quantopian , Thomas W. says "here is a library of commonly used alpha factors...we will include these in a library you can just import." Wonderful. Although your lawyers will cringe, it does imply some potential utility of said factors; why would you want users wasting their time on dead-end factors? How are you deciding which factors to release into the library? What is the vetting process? What are the accept/reject criteria?

The flip side would be, say I have a bunch of factors, proven to be totally junk by the same vetting process. One would not expect to be able to do as you say above:

Even with a weak predictor, you can combine it with many other weak predictors to achieve a good predictor.

This would be a test of the work flow, right? If I put in garbage, and get out gold, then the work flow isn't working. Is this a valid assumption? In other words, if my predictors aren't even weak, then in combination, I shouldn't see any predictability at all. Or should I expect some magic in the alpha combination step?

The key to machine learning is the quality of features, feature selection and new feature constructions. The ML algos are standard.

Start with something like SPY and a few well-known features. There are several academic papers on this, and also on multi-security ML. You will soon find out that doing better than noise is very hard.

More importantly, classification is far from enough for profitability. You can get 10 TRUEs, win 10 cents on the first 9, and lose $1 on the last one. What matters is a positive, stationary expectation, which is difficult to get from classifiers. You need a MODEL for that, and ML is not really required, as this is more factor regression than supervised learning. ML is the wrong direction as far as I can tell.
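
A toy illustration of why accuracy alone is not enough (purely hypothetical numbers):

```python
# 90% of trades are winners, yet the expectancy per trade is negative.
wins, losses = 9, 1
avg_win, avg_loss = 0.10, 1.00       # win 10 cents nine times, lose $1 once

accuracy = wins / float(wins + losses)
expectancy = (wins * avg_win - losses * avg_loss) / (wins + losses)

print(accuracy)      # 0.9
print(expectancy)    # -0.01 -- an "accurate" classifier, a losing strategy
```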

Yeah, I don't yet understand how the workflow should work point-in-time. It is not a one-time, set-it-and-forget-it approach. Within an algo, I need to embed some way to weight the quality of factors, based on past and projected stability. Maybe this is the supervised learning approach? I need a predictive model for which factors are good point-in-time, and my factor combination model needs to include time as well? I gather that the present approach being considered by Q doesn't do this?

Thanks to everybody who attended the webinar. The recording will be here once the video has processed.

https://www.youtube.com/watch?v=J0HjE1jKOlA

We didn't get through as many questions as I would have liked, so I've scheduled a second webinar on the same topics for the same time next week. Hope to see you there.

https://attendee.gotowebinar.com/register/2024333226197828866

Thanks Delaney,

I listened to the webinar. A few quick comments:

  1. Regarding ranking, I'm still unclear what it is doing, why it is used, and how it might be applied incorrectly. It seems that one needs a relatively large number of stocks/factors/whatever. If I have a small number of things (say 2-5), and I convert to ranks, then maybe I'll have problems. There are some monotonicity considerations, too, I think. Also, if instead of having tails, my distribution is rectangular (e.g. https://en.wikipedia.org/wiki/Rectangular_function ), then intuitively, there is no benefit in ranking. There is some underlying operation that is beneficial in this context, which perhaps you can explain. You want the extreme movers-and-shakers, but you push them through a rank-o-lator transform which obviously throws out information. On the surface, it seems like a pretty blunt approach, but maybe the idea is that it works pretty well in most cases. Why does it work?
  2. I'm still confused about how the alpha factors are supposed to work over time. You imply that one should expect that individual ones would go in and out of effectiveness. How is this to be managed? Is the idea that I would periodically run alphalens, and stop my algo to do an add/drop of factors?
  3. I think you mentioned that Quantopian, if you stumbled upon good factors, wouldn't publish them. Are you actively looking for good ones? I understand that you aren't in the business of competing with your users, so what would you do with them? It would be a shame if you just tucked them away in your desk drawer.
  4. Your discussion of the use of hedging instruments such as SPY wasn't so clear to me. Presumably, there are cheap sources of liquid beta out there that institutions use as a market hedge (I guess SPY isn't one of them). I recall Justin Lent mentioning a specific type of instrument. Presumably, if you are an institution trading a bazillion dollar long-short fund, and you want to add on a market hedge, there is a way to do it, no? Maybe Jonathan L. knows? I thought beta was like water from the tap, no?

Hello Delaney -

I did a quick read through https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient and it seems reasonably intuitive. As a first cut, you want to answer the question: does a monotonic relationship exist between my factor and returns, i.e., is there any hope whatsoever that I can forecast? I would run something like alphalens periodically within my algo, set a statistical significance level, and apply a hypothesis test across all of my factors. I would not use the factors whose Spearman rank correlation coefficients can't be distinguished from zero (on a relevant time scale).
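
Something like the following is what I have in mind (just a sketch; `factor_values` and `fwd_returns` are hypothetical, pre-aligned date-by-stock tables, not an existing API):

```python
# Keep only factors whose Spearman rank correlation with forward returns
# is statistically distinguishable from zero over the lookback window.
from scipy import stats

SIGNIFICANCE_LEVEL = 0.05

def significant_factors(factor_values, fwd_returns):
    """factor_values: dict of factor name -> DataFrame aligned with fwd_returns."""
    keep = []
    for name, values in factor_values.items():
        rho, p_value = stats.spearmanr(values.values.ravel(),
                                       fwd_returns.values.ravel(),
                                       nan_policy='omit')
        if p_value < SIGNIFICANCE_LEVEL:
            keep.append((name, rho))
    return keep    # use only these factors until the next periodic re-check
```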

Looking at the nice flowchart Jonathan presents on https://blog.quantopian.com/a-professional-quant-equity-workflow/ , I think this is the implication. I would automatically select only known-good alpha factors for trading. Since they might go in and out of goodness, I need an automatic test that can be implemented in an algo.

Alternatively, I suppose I could create a look-up table for backtesting. For example, I would use alphalens every quarter back to 2002. Then, I could paste into my algo which factors to use when. Going forward, I would stop my algo every quarter, and update the look-up table (or maybe fetcher would work?).

Is this the correct interpretation?

Another question is why is there a line connecting the data to the risk model, on:

https://blog.quantopian.com/a-professional-quant-equity-workflow/

It seems that in addition to a public library of common alpha factors, users would benefit from private, personal libraries, that can be imported into research notebooks and algos. Otherwise, there will be a lot of nasty copy-paste, version control issues.

I have a factor in mind that I'd like to start playing with, but not being able to include it in a private library will be awkward.

Delaney -

Another question is what is an appropriate point-in-time benchmark, if any, for a multi-factor equity market neutral long-short algo? For example, there are public data:

http://news.morningstar.com/fund-category-returns/long-short-equity/$FOCA$LO.aspx

Are they at all relevant? Or is it simply a matter of using an ideal compound interest (1+r)^n benchmark (where r is dictated by the customer, e.g. Point72 and eventually others)?

On a separate note, I've heard you mention the theoretical wonderfulness of independent/uncorrelated/orthogonal factors, which intuitively makes sense for increasing the Sharpe ratio (mainly through the denominator, I suppose). If such factors exist, wouldn't it be apparent in the returns data? Should I be able to find point-in-time clusters of uncorrelated stocks in the Q500US/Q1500US (even though I might not know why)? If my hypothesis is that some geese fly north-south, and some fly east-west, then maybe I could just look up in the sky to see if that might be the case? Or is the computational problem too gnarly?
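
To be concrete about the "look up in the sky" idea, something like this is what I imagine (a sketch; `daily_returns` is a hypothetical DataFrame of daily returns, one column per stock in the universe):

```python
# Hierarchically cluster stocks on return correlation; stocks in different
# clusters are the candidates for relatively uncorrelated groups.
import scipy.cluster.hierarchy as sch
from scipy.spatial.distance import squareform

corr = daily_returns.corr()
dist = squareform(1.0 - corr.values, checks=False)   # correlation -> distance
links = sch.linkage(dist, method='average')
labels = sch.fcluster(links, t=0.7, criterion='distance')

for cluster_id in sorted(set(labels)):
    members = corr.columns[labels == cluster_id]
    print(cluster_id, list(members)[:5])             # peek at each cluster
```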

Another question is, what is a realistic long-term Sharpe ratio for such algos? I plugged my way through this book:

Systematic Trading: A unique new method for designing trading and investing systems
by Robert Carver
Link: https://amzn.com/0857194453

The author emphasizes that SR > ~ 1.0 is probably an indication that one is smokin' dope, so to speak. Is this a fair assessment? I know that there's the Holy Grail of sources of "new alpha" but I'm talking about a full-up algo, per the workflow, with many stocks, multiple factors, and mucho dinero.

See this comment about the book you mentioned:

https://www.amazon.com/gp/customer-reviews/R1WCUAHZEDQ96E/ref=cm_cr_arp_d_rvw_ttl?ie=UTF8&ASIN=0857194453

Books may distract you and cause damage to your career and goals. Stick to common sense and to Quantopian. The future is here. The Vanguards like these books.

I have spoken to Rob Carver and read some of what he has to say. I have downloaded and studied his open-source, Python-based backtest harness. Mr Carver spent 7 years at AHL.

On what grounds do you say that he is guilty of "failed performance"? How is Mr Carver a failed trader? Has his personal trading failed? Has AHL failed? Has a program run or designed by Carver for AHL failed?

You really need to be rather more forthcoming with details. You probably need to keep a watchful eye on the law!

Although you may well have signed up using the Tor browser and a disposable email address.

What are you talking about? I was talking about some failed traders in general who write books. I did not say Carver specifically has failed. I spoke in general terms. Since you are so sensitive, I have changed "failed" to "some". The point is that more people try to discourage traders than empower them. This site tries to empower them. Saying that a Sharpe > 0.5 is impossible: does that make you an expert? Have you seen the Sharpe of some successful funds? Aren't you tired of people telling you this or that is impossible?

Aren't you tired of people telling you this or that is impossible?

No because, regrettably, I believe they are largely correct. Anyway, probably sensible of you to have qualified your post: it pays to be cautious!

I do not like it when people get upset and I edited the post. But i wonder what you are doing here since you believe the claims are correct.

I wasn't upset, but Carver might have been. I have done quite a bit of coding here, but through long experience I know that Sharpe on a backtest is highly misleading. MAR over the universe of CTAs is tiny: 0.2, something like that. In the long term... it isn't easy.

Anyway, what do I know. Try it for a few years, see how you go. Perhaps you will beat the lot of them.

Well, you guys can bicker back and forth, but my original question still stands:

Another question is, what is a realistic long-term Sharpe ratio for such algos?

The algos in question are the kind that might be written by professionals at your typical hedge fund per the proposed workflow. Long-term means, I don't know, 5-10 years, or more? For a mature industry that is up to its ears in data, I'd think that there would be an answer, e.g. X +/- delta_X.

Hi Ricardo -

The point is that more people try to discourage traders than empower them.

I think of the SR guidance kinda like the guidance one might have in carrying out a physics experiment. Say I'm a student tasked with measuring the speed of light. Well, I'd better get something in the ballpark of c = 3x10^8 m/s, or I've botched the experiment. As I recall in Carver's book, he attempts to provide guidance so that aspiring quants do a serious head-scratch if long-term backtests are suggesting they've hit the jackpot with an exceptionally high SR (he also discusses the more obvious risks of short backtests). Batting 0.300 as a professional baseball player is achievable, but 0.600 would seem to be impossible, based on a large data set ( http://www.baseball-reference.com/leaders/batting_avg_career.shtml ). I'd say it is empowering not to be deluded.

I'd say it is empowering not to be deluded.

And I would agree. Look at IASG.com and you can work out the long-term Sharpe ratio and MAR of the longest-running CTAs. It is not impressive - and it does NOT account for survivorship bias.

Don't get me wrong - I'm working hard on ML and (together with my computer scientist partner) am loosely co-operating with a small team of roboticists down in California.

But I would not necessarily expect to come up with anything better than Winton, AHL or indeed Point72. Still, no harm having a go.

I will agree with Carver and Anthony here. One of my best systems, the PSI5 (mean reversion based on a formula out of a probability book; see the equity curve here: http://www.priceactionlab.com/Blog/systems/), is up 12.3% year-to-date with a Sharpe of 1.93. But over the longer term, since SPY inception, its Sharpe is 0.61 and Calmar is 0.34. For SPY, Sharpe is 0.47 and Calmar is just 0.16. Although over all these years I have learned not to discount anything in terms of performance, a Sharpe above 0.75 seems extra difficult over the longer term.

BTW, CTAs are beta not alpha due to high diversification and liquidity constraints. Their performance has been hit due to that in recent years as my studies show: http://www.priceactionlab.com/Blog/2015/11/cta-underperformance/

The signal-to-noise ratio in financial markets is too low to allow a high Sharpe ratio. The problem is that significance, as measured by the t-statistic, is related to Sharpe in closed form. About 6 years of performance is needed to guarantee significance at the 5% level in the case of a unique hypothesis, such as my PSI5 strategy. But if the hypothesis is not unique and is suggested by the data, as in ML, the minimum required Sharpe must be adjusted to guarantee significance. As it turns out, the required adjustments due to bias rule out significance, and one must rely on luck or experience.
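
One standard version of that closed form: for an annualized Sharpe ratio SR measured over T years, the t-statistic of the mean return is roughly SR times the square root of T (under the usual i.i.d. assumptions). A quick back-of-the-envelope:

```python
# Years of track record needed for two-sided 5% significance (t >= 1.96),
# as a function of annualized Sharpe. Illustrative only.

def years_needed(sharpe, t_crit=1.96):
    return (t_crit / sharpe) ** 2        # from t ~ sharpe * sqrt(T)

for sr in (0.5, 0.8, 1.0, 2.0):
    print(sr, round(years_needed(sr), 1))
# 0.5 -> 15.4, 0.8 -> 6.0, 1.0 -> 3.8, 2.0 -> 1.0  (years)
```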

Thanks Michael -

A Sharpe above 0.75 seems extra difficult over the longer term

This would seem to be consistent with the beta ~ 0, long-short mean reversion algo I've been working on, that uses a relatively large base universe (e.g. Q1500US). After reading Carver's book, I realized that the 2-year backtest approach incentivized by the Quantopian contest is a bad practice; generally, I've been trying to take a broader view, back to 11/1/2002 (the earliest date supported by the Q500US/Q1500US universes). The SR tends to be skewed by the Great Recession market crash (and may be inaccurate, since reportedly there was a period over which there was a short ban on a large number of stocks).

The signal-to-noise ratio in financial markets is too low to allow a high Sharpe ratio. The problem is that significance, as measured by the t-statistic, is related to Sharpe in closed form. About 6 years of performance is needed to guarantee significance at the 5% level in the case of a unique hypothesis, such as my PSI5 strategy.

I'd think that hedge funds would have a hard time getting up and running with large, sophisticated institutional investors. They'd need 5 years of real-money trading before anyone would take them seriously. But I guess there are other sources of speculative capital out there to prime the pump (e.g. Point72). It would also seem to have implications for access to leverage, but maybe the broker covers his risks not by looking at the track record, but by charging a sufficiently high fee for the loan and being able to pull the plug at any time, based on real-time portfolio monitoring.

But if the hypothesis is not unique and is suggested by the data, as in ML, the minimum required Sharpe must be adjusted to guarantee significance. As it turns out, the required adjustments due to bias rule out significance, and one must rely on luck or experience.

Sorry, I don't quite follow this statement. Are you saying that if one does not have a specific "strategic intent" (per https://www.quantopian.com/fund ), and instead a more general ML model is used (e.g. across a large number of "alpha factors") then testing for significance is hopeless? It would seem that I could still pick an approach and compare in-sample and out-of-sample. If the latter is real-money trading at-scale then eventually one could establish significance, no?

Here's a tear sheet of my algo referenced above. Note that I only cover about half of the recent "New Normal" period, due to a memory error (the sense by Quantopian support is that it is due to the number of transactions, although my universe is only 200 stocks, and I trade weekly...odd, since I can load the transactions into the research platform, but the backtester is overloaded...the limitations kinda suggest Q is not so interested in long-term performance?). Note also that I use SPY as a hedging instrument (which, per some comments above may not be the best practice?).


Hello all, you may have received notifications that we've moved the webinar on this to Nov 15th, when I'll have time to read up on the questions post-QuantCon SG. In the meantime I'm keeping an eye on the thread and will make a summary of the questions asked and figure out how best to address them on the 14th.

@Grant "It would seem that I could still pick an approach and compare in-sample and out-of-sample. If the latter is real-money trading at-scale then eventually one could establish significance, no?"

Yes, but it's an expensive way of doing it in case you are wrong. The whole point of statistics is to infer population parameters from sample parameters. Samples must be sufficient. You will need about 5 years of forward testing with real money to find out whether the model is good, assuming the Fed does not distort markets like it has in the past few years. Strategy developers must rely on backtesting; otherwise the tuition is very high. Most funds actually forward test with OPM.

On the other hand, if out-of-sample is historical and ML is used, you run into this problem: https://en.wikipedia.org/wiki/Multiple_comparisons_problem

The meat is in the features. I was just talking to a fund manager who has bought my software, which among other things determines features in unsupervised mode. He trades the S&P 100 and uses the features my program calculates to run an SVM daily to select 6 stocks to buy and 6 to short. He also does some feature engineering. What Quantopian is trying to do, some smart people are already doing, and successfully. I did this 8-10 years ago trading for a fund. As long as you have the proper features, the task is easy.

ML can find (classify) best choices out of good options only. If the options are bad, the choices will also be bad, no matter how hard one tries. It will be an exercise in futility. Developing features to use in ML is the tough job and a CS degree is not enough for that. This is not a data science job. Data science is support for this job. It takes more than that and some people are successful with it.

I think I am right in saying Michael that your software uses price only in its calculations. That must surely limit the choice of "options"? As opposed to Q who are using many fundamental pieces of information in addition to price.

Most funds actually forward test with OPM

Presumably, OPM stands for other people's money. Well, the implication is that early customers are either unsophisticated (e.g. don't know about the rule of thumb of 5 years of real-money trading), duped by a sales pitch, or fully understand the risk and chose to speculate. Or they attempt to compensate for the risk by requiring a better deal (e.g. a bargain, better than the typical 2/20).

The meat is in the features.

What does this mean? How are you defining 'feature'? And what is 'feature engineering'? Presumably, you are looking for persistent patterns in the price time series? A price forecast model? Are 'features' synonymous with 'alpha factors'?

ML can find (classify) best choices out of good options only.

This is part of my confusion. The overall guidance seems to be lots of stocks and lots of factors (even if they are weak and transient). ML will sort it out point-in-time. But this would seem to run the risk of garbage-in-garbage-out. What one really wants is a finite set of strong, consistent arbitrage factors, particularly "new alpha" ones. Presumably, Q/Point72 can code up every alpha factor known to mankind, and write their own strategy. The real value in crowd-sourcing would be to identify new inefficiencies, I would think, not to write an algo with 30 sketchy factors.

What one really wants is a finite set of strong, consistent arbitrage factors, particularly "new alpha" ones.

If you have that, then simple scanning and scoring will do the job; no need for ML. The idea is that ML will, hopefully, find gold where there are otherwise no strong features by constructing new ones. Cross-sectional momentum worked well for years with scoring based on 6-12 month returns. It also depends on the timeframe you are looking at. Short-term trading with fundamental factors does not make any sense.

@Anthony

That must surely limit the choice of "options"? As opposed to Q who
are using many fundamental pieces of information in addition to price.

The fundamental premise of technical analysis is that all information is reflected in price action. But it also depends on objectives and timeframes. If you want to operate in the medium to longer term, then fundamental factors may make sense, but if you want to operate in the short term, then these fundamental factors are redundant and can actually hurt performance. Some of the price action features my program generates are very complicated, but the lookback period is 9 bars maximum because, according to my studies, short-term price-action memory diminishes fast after that.

The idea is that ML will, hopefully, find gold where there are otherwise no strong features by constructing new ones.

I'll have to think about that one. I recall that zero times any number is still zero (i.e. making something out of nothing sounds too good to be true). I guess the idea is like picking up on facial features, and correlating them with personal preferences (e.g. slightly bushy eyebrows combined with sorta funny looking ears combined with a bit of a double chin predicts a preference for Scotch over beer). The question for Delaney is what does Quantopian really need to be successful? It would seem that new, predictive factors would get them 95% of what they need; everything else has been reduced to standard practice. Or maybe the thought is that Quantopian users would add value at all steps of the proposed workflow? Where is the big pay-off to Q/Point72 expected to come from? The approach is not yet clear to me, since if Quantopian just solicited factors and paid users for them, then they could apply a single, high-horsepower platform to combine them, etc.

the lookback period is 9 bars maximum because, according to my studies, short-term price-action memory diminishes fast after that.

This is roughly consistent with my unsystematic study of price mean reversion (assuming you are referring to daily bars). My comment to Delaney and Quantopian, which I will repeat here, is that if the time scale is relatively short, then using daily bars doesn't make sense when a minute-bar database is available (or perhaps some combination of daily bars for longer time scales and minute bars for shorter ones would be appropriate).

Short-term trading with fundamental factors does not make any sense.

Well, it would seem that they could play into universe selection, right? For example, trading in the S&P 100, as referenced above, is using fundamentals, just on a very long time scale. Presumably, if the strategy started to falter, the client might try it on other pools of stocks, selected on the basis of fundamentals. One could imagine doing this on a quarterly/annual basis, while at the same time looking at "price action" on a much shorter time scale.

IMO fundamental factors are useful for valuation purposes, M&As, etc. I have used descriptive factors, such as capitalization and average volume, to construct a universe. Then, technical factors for feature engineering. Finally, I discovered that most technical factors can be reduced to pure price action. Now I use only price action, and most of the people I work with do the same. The objective is simple in principle: if you can get a high accuracy/low error rate in classifying according to next-period (daily) returns, then you can use either directional or long/short strategies. I just finished working on a project that involves feature engineering. You can see some examples here for directional trading: http://bit.ly/2fhXu4f

I have often wondered about M&A arb using fundamental "information". Statistics such as how many offers get completed, does it make a difference who advises the target and acquirer, all that sort of stuff. It would take endless work acquiring the data since it is probably available in a very piecemeal fashion but it might be worth it.

Or again, US secondary offerings. Does it pay to short? If so at what stage? Does it pay to go long the day before assuming the worst is out and the stock will bounce?

You could conduct similar research on the IPO market, although I never needed to do this, it just "happens" when the markets are hot enough and you stick to bulge bracket underwriters and the US market.

Not the subject of this thread I guess....but....

Hi Delaney -

Another topic to consider was brought up on https://www.quantopian.com/posts/event-driven-algorithms-vs-ranking-algorithms . I suppose the general question is, can every trading strategy somehow be shoe-horned into the workflow? For example, is the expectation that any strategy, with some thought, could be converted to an alpha factor? And do you expect to use such a workflow for the hedge fund as a whole? Or would there be something else used to amalgamate the various algos (since they all wouldn't necessarily conform to the workflow)?

Grant

As a quick note, many strategies will end up fitting well into the workflow. At the core of every strategy is a forecasting model. That model can be applied within a universe, and that universe can be ranked, leaving us back at the ability to run a cross-sectional long-short strategy. In general you want to invest in as many assets as your capital base and universe allow; one of the reasons why is detailed in this lecture.

https://www.quantopian.com/lectures#Position-Concentration-Risk

We respect that there are algorithms that will not fit into the workflow. Our goal on Quantopian is to build tools that support as many different types of strategies as possible, and we will not lock allocation to a specific type of strategy. However, we do have to prioritize how our product is developed, and we believe our best bang for buck initially will be in allowing people to elegantly implement cross sectional factor based strategies. In the same way that you might want to learn Newtonian physics before delving into more esoteric engineering disciplines, I think those interested in learning quant finance should at least understand and try an implementation of a factor based strategy. There may be many ways you might be able to do better, but at least it serves as a well studied foundation. I'll be back later with more info when the webinar happens.
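
For anyone new to the idea, here is a bare-bones sketch of turning a cross-section of forecasts into ranked, dollar-neutral weights (pandas; `scores` is a hypothetical Series of model forecasts indexed by stock, and the quantile cutoffs and 50/50 gross split are arbitrary choices for illustration):

```python
import pandas as pd

def rank_based_weights(scores, long_frac=0.2, short_frac=0.2):
    """Equal-weight, dollar-neutral weights from a cross-section of forecasts."""
    ranks = scores.rank()                     # 1 = worst forecast, N = best
    n = len(ranks)
    longs = ranks > n * (1 - long_frac)       # top quintile of forecasts
    shorts = ranks <= n * short_frac          # bottom quintile of forecasts

    weights = pd.Series(0.0, index=scores.index)
    weights[longs] = 0.5 / longs.sum()        # 50% gross long
    weights[shorts] = -0.5 / shorts.sum()     # 50% gross short
    return weights                            # nets to ~0: dollar neutral
```

The point is only that once a forecast can be ranked across a universe, the rest of the workflow (combination, portfolio construction, execution) can proceed in the same way.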

At the core of every strategy is a forecasting model. That model can be applied within a universe, and that universe can be ranked, leaving us back at the ability to run a cross sectional long short strategy.

I understand this part, at least conceptually (although I don't yet understand converting to ranks). One thing that is a little murky is the distinction between "analog" and "digital" forecasting. The analog case would be akin to an asset allocation approach. There is no herky-jerky in-the-market, out-of-the-market action (at least with respect to the forecasts; other steps in the workflow may clip at specified levels). For each stock in my universe, I have a smooth function that says how much return is to be had at any point in time. For the digital case, however, it is all-or-nothing. I suppose that so long as a forecast return can be assigned to the "all" state (and a zero return to the "nothing" state), we have yet another factor to throw into the mix. Event-based factors are o.k. so long as they provide forecasts across their respective universes.

Our goal on Quantopian is to build tools that support as many different types of strategies as possible, and we will not lock allocation to a specific type of strategy.

In my mind, although you say this, it is not the impression you are giving. When the hedge fund concept was introduced, I had thought that there would be much more overlap with the kinds of algos that individuals would run on their own money, and that Quantopian would sorta leech off of the crowd. In combination across the hundreds (thousands?) of algos, you would achieve a high SR and return. However, the focus seems to be for each prospective manager to construct a mini institutional-style long-short algo, soup to nuts, that would be stand-alone and scale to $10M or more in capital (at 6X leverage, perhaps $60M?). You are looking for something very specific, I think, no? You are sending the message that there will be no entry point for single-factor, few-stock algos. But then, maybe you looked across all of the real-money retail algos on Quantopian and the collective alpha is zilch, and so you decided to go the mini-institutional route?

"One thing that is a little murky is the distinction between "analog" and "digital" forecasting."

This is an interesting point. But keep in mind that there is actually no "analog" approach. Returns live in a discrete domain: minute, daily, weekly, etc. Think about it this way: what is easier to predict, one month ahead or one day ahead? I go for one day. Who knows what happens one month down the road. The digital approach is taking forecasting one step at a time.

On top of this, forecasting direction is much "easier" than forecasting returns. But in effect, if you can correctly forecast direction for 5 days, then you have essentially forecast the return of those 5 days, and maybe much more. And so on...

I think there is more emphasis on the ML process here when in fact the emphasis should be on the economic value of the attributes/features. This article has an example of a long/short ETF strategy and makes a few points about attributes, ML, etc. It could provide a starting point for those who are new to this. However, the emphasis there is on attribute construction/derivation, and ML is the routine part. In other words, the edge is already in the attributes, not in ML.

Hi Delaney -

Perhaps outside of your theoretical/academic domain, but I'm curious if the first go at the workflow will support both pipeline-based factors (returns based on daily bars) and ones from functions within an algo (which would have access to minute bars, via history)? If the former only, then it would seem that running algos on minute data would be kinda pointless, given that as I understand, the idea is to chuck everything into an order management system. Is the idea to have algos queue up daily allocations before the market opens, and then hand them off to an order management system to combine them?

Just trying to follow the story line here...

Hello all,

Really sorry about this, but in the process of rescheduling the webinar I forgot to update the date in our system to today. As a result, anybody signing up signed up for one that "happened" on Oct 25th, and there's no event for right now. As our system doesn't allow an immediate event (and people wouldn't be around for one anyway), I've scheduled another one for Nov 29th. Hopefully this gives people enough time to sign up. Due to the large backlog of questions, any questions asked after this post will likely not be answered in the upcoming session. Apologies again for the mix-up.

https://attendee.gotowebinar.com/register/3423415593670387716

Thanks Delaney,

I'll likely end up listening to the recorded webinar, but that's o.k. Hopefully the feedback from me and others is helpful.

the emphasis should be on the economic value of the attributes/features.

@ Michael Harris -

What do you mean by "attributes/features"? Just another word for what Quantopian is calling alpha factors? Or something else?

The standard terminology in machine learning for the independent variables is "attribute". These are also called features.

http://www.cse.unsw.edu.au/~billw/mldict.html#attribute

I suppose that an "alpha factor" refers to attributes but that sounds more like a marketing term. It is fine as long as you know the basic terminology.

Thanks Michael -

I'll need to get my hands on a good ML primer, to get a feel for this stuff. Per the workflow description, "An alpha is an expression, applied to the cross-section of your universe of stocks, which returns a vector of real numbers where these values are predictive of the relative magnitude of future returns." I'm guessing "alpha" is trader lingo for "attribute" or "feature" but I'm not sure.

I'm confused by the approach here, since there appears to be enthusiasm for lots of factors ( e.g. 101 Alphas and Thomas W.'s use of a relatively large number of factors in a ML example). On the other hand, there seems to be guidance that the quality of the factors is critical, with each carefully researched and screened (e.g. using alphalens). You seem to be saying that the way to go about this is to exclude all but the highest quality factors, which would seem to make intuitive sense, but runs counter to the message I'm getting from the Quantopian folks.

As the number of alphas (factors, features, attributes, predictors) increases, so does the probability of over-fitting and a failure of the ML process. My work indicates that the optimum number of attributes is between 3 and 7. Here is an example with R code; the attributes I use are proprietary. Known alphas lower the probability of finding an edge; there has been relentless data-mining for attributes that generate alpha. My opinion is that, after splitting the data into train and test samples and fitting your model on the former, you should be able to get classification accuracy in the test sample better than 55% to have a chance. The problem is that if you keep testing various attributes in train and test, at the end you may be fooled by a non-significant high accuracy (low logloss) due to data snooping. I prefer to keep things simple: binary logistic regression, a few proprietary attributes, and I look for good accuracy in the test sample. If I do not get that, I move on. I never try to force things, as the signal-to-noise ratio in market data is very low and over-fitting is always a possibility.
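
A Python analogue of that recipe (the example linked above uses R; here `X` would hold the 3-7 attribute columns and `y` the binary next-day direction labels, both hypothetical):

```python
# Train/test split, binary logistic regression, and a ~55% test-accuracy hurdle.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=False)      # keep time order: no shuffling

model = LogisticRegression().fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, model.predict(X_test))
print(test_accuracy)

# If the hurdle is not cleared, move on; repeatedly re-testing new attribute
# sets against the same test sample is exactly the data-snooping trap above.
if test_accuracy <= 0.55:
    print("No edge by this rule of thumb.")
```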

@Michael Harris

Thank you very much for your input. I've been reading your posts here and your blog and have learned so much. Your software is a valuable tool. It's so rare to come across someone who can explain these terms with such clarity.

More food for thought. If I'm following, the initial release of the workflow will be based on daily bars, and the portfolio update will be computed prior to the open. Is this workable? It seems like it puts a pretty heavy burden on forecasting, given that overnight/weekend changes are not available. Does anyone in the industry do this, or do they incorporate intraday prices into the mix, prior to deciding how to tweak the portfolio?

Interesting, too, that the backtester no longer supports daily bars. If the initial release of the workflow will only support daily bars, it might be worth re-considering bringing daily bar support back.

Thank you to everybody who came out to the webinar today. Unfortunately I just checked and I forgot to correctly record the webinar, so I will not be able to post it. I really apologize for this, and will post links to content that was covered here for reference.

https://www.quantopian.com/posts/quantopian-plus-chat-with-traders

https://blog.quantopian.com/a-professional-quant-equity-workflow/
http://quantopian.github.io/alphalens/
https://www.quantopian.com/lectures#Spearman-Rank-Correlation
https://www.quantopian.com/lectures#Position-Concentration-Risk
https://www.quantopian.com/lectures#Example:-Long-Short-Equity-Algorithm

https://www.quantopian.com/posts/machine-learning-on-quantopian-part-2-ml-as-a-factor
https://www.quantopian.com/posts/pipeline-factor-library-for-data

Hi Delaney -

No problem. I'm sure there will be follow-on opportunities.

Grant

Hi Delaney and all -

Not sure where to direct this question; this thread seems as good as any. The workflow seems to suggest that one would want to have a single universe, and then apply all factors to all stocks in that universe. However, one could also have factors that work best on specific universes (or even single stocks). What are the pipeline mechanics for doing this? Basically, each factor would have an assigned universe (with some factors having a given universe in common).

Hi Grant,

"However, one could also have factors that work best on specific universes (or even single stocks)."

You do not want this in general because of selection bias, which is a major component of data-mining bias. Also, you do not want to filter your universe after applying the factors and looking at the result, because this opens the door to data-snooping bias, which is another major component of data-mining bias. If you do these things, you will essentially be trying to fit results to past data.

Instead, you want to start with a reasonable universe and a set of alphas you think are good enough, and then test your strategy in-sample. If it does not work, you have limited room to maneuver. Changing factors and universes increases data-mining bias even if the out-of-sample results turn out to be good, because those changes are now part of the process.

For example, see this article

Happy Holidays to all!

Hello Michael -

I get your point regarding picking a single, generic, relatively large universe, and then running all factors over it. I agree that there is the risk of data snooping bias, if one just iterates through N mini-universes for a given factor until the returns look good. But say I wanted to have one set of factors for stocks, and one set for ETFs, for example? Or maybe a factor that just operates on recent IPOs? Etc.

It just occurred to me that it is not obvious how the framework handles more than one universe. I guess the alpha combination step would need to be performed N+1 times, where N is the number of universes? After the first combination, we'd end up with N mini-alphas, which would then be combined into a final mega-alpha.
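
Something like this is what I'm picturing (a sketch only, not anything the platform currently provides; `factor_values` is a hypothetical dict mapping each universe name to a stocks-by-factors DataFrame):

```python
import pandas as pd

def combine_within_universe(df):
    """Equal-weight combination of z-scored factors for one universe (a 'mini-alpha')."""
    z = (df - df.mean()) / df.std()
    return z.mean(axis=1)                     # one score per stock

def combine_across_universes(factor_values):
    """N within-universe combinations, then one final combination across them."""
    mini_alphas = [combine_within_universe(df) for df in factor_values.values()]
    combined = pd.concat(mini_alphas)
    # Re-standardize so no universe dominates just because its raw scores
    # happen to sit on a different scale.
    return (combined - combined.mean()) / combined.std()
```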