algo framework for event-driven strategies?

Recently, I learned that there is something called an event-driven strategy:

An event-driven strategy is a type of investment strategy that attempts to take advantage of temporary stock mispricing that can occur before or after a corporate event takes place.

Sorta makes sense: the information is flawed or incomplete, the market's interpretation of it is knuckle-headed, the timing or diffusion of the information is inefficient, the event is leaked (intentionally or inadvertently) ahead of time, etc. The price ends up out of whack, but eventually comes back into alignment with the "efficient" market and finds its happy place (relative to whatever irrational exuberance, FUD, or funk du jour grips the market).

The question in my mind is how to apply the Q framework (nicely summarized in A Professional Quant Equity Workflow) to combine event-driven alphas with a variety of other types of alphas, all on different time scales. For events, one has a set of step functions, whereas for other alpha sources (e.g. fundamentals) the changes are continuous, and thus a regular updating of the portfolio weights makes sense (e.g. daily ranking and rebalancing with a turnover constraint).
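One way I could imagine bridging the two, as a rough sketch (the exponential half-life is an arbitrary choice of mine, not anything from Q): decay each event "step" into a daily series, so it lives on the same daily grid as the continuous factors.

```python
import pandas as pd

def decay_events(event_signal: pd.Series, half_life_days: float = 5.0) -> pd.Series:
    """Turn a sparse event signal (nonzero only on event days) into a daily
    series that decays toward zero, so it can be ranked and rebalanced daily
    alongside continuous alphas. The half-life is purely illustrative."""
    decay = 0.5 ** (1.0 / half_life_days)
    level, out = 0.0, []
    for x in event_signal.fillna(0.0):
        level = level * decay + x  # a new event bumps the decayed level
        out.append(level)
    return pd.Series(out, index=event_signal.index)

# Toy example: a single positive earnings surprise on the fourth day.
dates = pd.date_range("2019-01-02", periods=10, freq="B")
events = pd.Series(0.0, index=dates)
events.iloc[3] = 1.0
print(decay_events(events).round(3))
```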

The Q guidance is to dig into the new event-driven FactSet Estimates data, but it is not so clear how to meld it into a framework seemingly designed for continuous ranking and rebalancing. Any thoughts?

2 responses

Hmm, for me, event-driven strategies require a lot of discretion in the selection process and are not suitable for quant strats.

Let me explain with a risk arbitrage strategy: this type of strategy aims to extract alpha from the realization of corporate events. Let's take an example of such an event: company A places a bid on company B at $45 per share. Before the announcement, company B stock was trading around $38. Immediately after the announcement, the price rallies toward the bid price, but it will not reach exactly $45, as there is still some uncertainty about whether the takeover will take place. The reasons a takeover can fail are multiple (target shareholders' approval may not be obtained, anti-takeover defenses, regulatory issues such as antitrust laws...). This is where event-driven fund managers intervene. They evaluate the probability that the event really takes place and decide to buy or sell company B stock accordingly. If they 'price in' a takeover, they will buy company B stock at, let's say, $44-$44.50 and expect to make a gain of $0.50-$1.00 per share (without leverage). Sometimes they also hedge the deal with the acquirer's stock.
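To put numbers on that, a small sketch of the expected P&L as a function of the market-implied probability that the deal closes (the $38 "break" price, i.e. where the stock falls back if the deal fails, is my assumption):

```python
def expected_pnl(entry: float, bid: float, break_price: float, p_close: float) -> float:
    """Expected per-share P&L: win (bid - entry) with probability p_close,
    lose (entry - break_price) otherwise."""
    return p_close * (bid - entry) + (1.0 - p_close) * (break_price - entry)

# Entry at $44.50 against a $45 bid, falling back to $38 on a break.
# Breakeven: p * 0.5 = (1 - p) * 6.5  ->  p ~ 0.93
for p in (0.90, 0.93, 0.99):
    print(f"p(close) = {p:.2f}  E[P&L] = ${expected_pnl(44.50, 45.00, 38.00, p):+.2f}")
```

The asymmetry is the point: the trade makes $0.50 when the deal closes but loses $6.50 when it breaks, so the manager's edge has to come from estimating the closing probability better than the market does.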

In short, event-driven strategies (Activist, Credit Arbitrage, Distressed Restructuring, Risk Arbitrage, Private Issue/Reg. D) require a large degree of independent qualitative evaluation across multiple areas of expertise (management, complex valuations, regulatory...).

Maybe I'm wrong, but in my opinion, relying solely on quantitative data is not appropriate for such strategies.

@Mathieu -

Thanks for your feedback. I haven't dug into it yet, but the context here is that Quantopian is implying that the new FactSet Estimates data may have some new alpha in it for their fund. They give an example factor:

The surprise factor is defined to be the percent difference between the estimated EPS and the actual EPS from the most recently published quarterly report (FQ0).

So, as I understand it at this point, the report of the actual EPS would be a corporate event around which an event-driven strategy could be built (along with the analyst data that anticipates the actual EPS).
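For concreteness, here is the surprise factor as I read that definition, in plain pandas (the series names and numbers are made up for illustration, not the FactSet schema):

```python
import pandas as pd

def eps_surprise(estimated_eps: pd.Series, actual_eps: pd.Series) -> pd.Series:
    """Percent difference between the estimated and actual EPS for FQ0.
    Hypothetical helper; inputs are illustrative, not the FactSet schema."""
    return (actual_eps - estimated_eps) / estimated_eps.abs()

# Toy cross-section: consensus estimate vs. reported EPS.
est = pd.Series({"AAA": 1.00, "BBB": 0.50, "CCC": -0.20})
act = pd.Series({"AAA": 1.10, "BBB": 0.45, "CCC": -0.15})
print(eps_surprise(est, act))  # AAA +10%, BBB -10%, CCC +25%
```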

At this point, I'm less concerned with whether the Quantopian intuition is correct (that there is new alpha for the Q fund in the FactSet Estimates data) than with the mechanics of combining sporadic alpha factors with alpha factors that are effectively continuous in time (where, subject to a turnover constraint and taking trading costs into account, an algo would rank and rebalance continuously, e.g. daily). The reason I'm asking is that Q incentivizes holistic algos; my hypothesis is that quants who are successful in combining lots of different alpha streams will be the ones who are rewarded (this has been the guidance from Q to date, but maybe that's changing, now that the Q fund is getting fleshed out).
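For the combination step itself, a minimal sketch of what I have in mind (the z-scoring and equal weights are my assumptions, not Q guidance): standardize each stream so that a mostly-zero event alpha and a dense fundamental alpha live on the same scale, then take a weighted sum and re-rank.

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std()

def combine_alphas(continuous: pd.Series, event_decayed: pd.Series,
                   w_cont: float = 0.5, w_event: float = 0.5) -> pd.Series:
    """Cross-sectional combine on one rebalance date: z-score each stream,
    take a weighted sum, and return centered percentile ranks."""
    combined = w_cont * zscore(continuous) + w_event * zscore(event_decayed)
    ranks = combined.rank(pct=True)
    return ranks - ranks.mean()

# Toy cross-section of five stocks on one day; the event stream is mostly zero.
idx = ["A", "B", "C", "D", "E"]
fundamental = pd.Series([0.2, -0.1, 0.4, 0.0, -0.3], index=idx)
surprise = pd.Series([0.0, 0.0, 0.9, 0.0, -0.6], index=idx)
print(combine_alphas(fundamental, surprise))
```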

To address your point, I'm not so sure. Taking a general machine learning (ML) approach, there is an awful lot of corporate-event training data out there, going back decades. I'd think that an application-specific ML algo could be devised that would work well enough. Keep in mind that the cost of ML scales favorably relative to full-time professional staff in offices. There's an awful lot of inertia in setting up shop with humans, whereas one doesn't even have to own the machines; just pay the bill from the cloud computing provider.

This would apply to the so-called human "analysts" who provide the estimates, too. One has to wonder how much human analysis goes into the estimates in the first place. I suspect there is a trend toward machines doing most of the work, if not all of it. So one has machines providing the estimates, and then machines trying to profit off flaws in the machines that made the estimates (perhaps all using the same cloud computing provider). If one looks at major hedge funds, they have relatively few employees: Bridgewater, 1,700; Citadel, 1,400; AQR, 1,000; Renaissance, 290; Point72, 1,400. So we are kinda already there: not a lot of staff to run some big operations.