Quantopian Lecture Series: VaR and CVaR (Expected Shortfall)

This is the first lecture co-written by our new CIO, Jonathan Larkin.

Conditional Value at Risk (CVaR) is one of the most powerful tools in modern risk management. It estimates an answer to the question "On the worst p percent of days, how much money can I expect to lose?" It is a way to check whether your current portfolio meets risk tolerance levels and to compare multiple portfolios when selecting assets. It is also usable for portfolio optimization, as we will discuss in future lectures.
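A minimal historical (non-parametric) sketch of the two measures, using simulated returns rather than the notebook's data (the function names here are illustrative, not the lecture's):

```python
import numpy as np

def value_at_risk(returns, p=0.05):
    """Historical VaR: the p-quantile of the return distribution."""
    return np.percentile(returns, 100 * p)

def cvar(returns, p=0.05):
    """Historical CVaR: the mean return on days at or below the VaR cutoff,
    i.e. the expected loss on the worst p percent of days."""
    var = value_at_risk(returns, p)
    return returns[returns <= var].mean()

# Simulated daily returns (normal, purely for illustration).
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 2500)

# CVaR is always at least as extreme as VaR, since it averages the tail.
print(value_at_risk(returns))
print(cvar(returns))
```

Note that CVaR averages over the tail beyond the VaR cutoff, which is why it is the more conservative (and more informative) of the two numbers.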

All lectures can be found here.


The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

6 responses

I read an excellent paper from JP Morgan about how to model for non-normality in returns. One of the interesting steps they take in preparing the data prior to evaluating CVaR is to eliminate any serial correlation from the data.

In assuming that the returns don't trend, it is possible to misrepresent the risk.

When you "de-mean" the results in this notebook, is this what is being approximated?

Often, eliminating serial correlation refers to the process of taking a delta, described in the Integration, Cointegration, and Stationarity lecture.

Is that what is done in the JPM paper? I'm not actually clear on how you would 'eliminate' auto-correlation, do they then measure CVaR on the new series and try to transform that number back into returns space?

If the returns do not trend, most statistical methods for risk evaluation will be more reliable. However, it is still possible to misrepresent the risk, as future conditions may change. It is always an estimate, never a certain answer, and a very stable period may be followed by a regime change or a period of high volatility.

De-meaning is usually used for normalization, to bring all series into the same space. It shouldn't affect any trend in the data, since subtracting a mean preserves the trend.
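A quick check of that point: de-meaning a trending series shifts its level but leaves its successive differences, and hence the trend, untouched:

```python
import numpy as np

# A deterministic upward trend, purely for illustration.
t = np.arange(100)
series = 0.5 * t + 10
demeaned = series - series.mean()

# Successive differences (the trend's slope) are unchanged by de-meaning;
# only the level of the series has shifted.
print(np.allclose(np.diff(series), np.diff(demeaned)))
```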

Hey, thanks for the reply. JPM use the Ljung-Box Q-statistic to determine whether or not to reject the null hypothesis that the returns are independently distributed (the alternative being that they exhibit serial correlation) and, if rejected, apply a variation of the Fisher-Geltner-Webb unsmoothing approach.

It is then this set of returns they use in the CVaR calculation. (If you Google for AM_Non-normality_of_market_returns.pdf you'll find the same document, pages 7, 8 and then 14, 15)
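For readers curious what such an unsmoothing step might look like, here is a simplified Geltner-style first-order sketch; the exact Fisher-Geltner-Webb variant in the JPM paper may differ, and the smoothing coefficient here is simply estimated from the lag-1 autocorrelation:

```python
import numpy as np

def unsmooth(returns):
    """First-order unsmoothing (a simplified Geltner-style sketch).

    Treats observed returns as r_t = (1 - a) * r*_t + a * r_{t-1}
    and inverts for the underlying return r*_t, with the smoothing
    coefficient a estimated from the lag-1 autocorrelation."""
    r = np.asarray(returns, dtype=float)
    a = np.corrcoef(r[:-1], r[1:])[0, 1]   # lag-1 autocorrelation estimate
    return (r[1:] - a * r[:-1]) / (1 - a)

# Build an artificially smoothed series, then recover a series whose
# serial correlation is far weaker than the smoothed one's.
rng = np.random.default_rng(1)
true_r = rng.normal(0, 0.01, 2000)
smoothed = np.empty_like(true_r)
smoothed[0] = true_r[0]
for t in range(1, len(true_r)):
    smoothed[t] = 0.6 * true_r[t] + 0.4 * smoothed[t - 1]

recovered = unsmooth(smoothed)
```

The CVaR calculation would then be run on the unsmoothed series, which has fatter-looking swings than the smoothed one and so gives a less optimistic tail estimate.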

I am familiar with Ljung-Box statistics, which are the standard way to measure autocorrelation and are used in this lecture in the acf implementation.

I am unfamiliar with the unsmoothing method, but after briefly looking over the paper you mention it seems like an interesting approach. I unfortunately do not have time currently to read through the paper. One thing I would mention is that different assets will behave differently, and you'd want to ensure that the universe from which your portfolio is drawn behaves in all the ways you expect, not just all equities (unless you're broadly invested in all equities). You also want to ensure that you aren't running into multiple comparisons bias when deciding whether each asset, or set of assets, is autocorrelated.
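One way to guard against that multiple comparisons issue is to correct the per-asset p-values before declaring anything autocorrelated; the p-values below are made up purely for illustration:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical per-asset Ljung-Box p-values (illustrative numbers only).
pvalues = np.array([0.04, 0.20, 0.01, 0.03, 0.50, 0.002])

# Bonferroni correction: scale each p-value by the number of tests run,
# so only results strong enough to survive the correction are flagged.
reject, corrected, _, _ = multipletests(pvalues, alpha=0.05,
                                        method="bonferroni")
print(reject)     # which assets still look autocorrelated after correction
print(corrected)  # original p-values scaled by the number of tests
```

Note how 0.04 and 0.03, which look significant on their own at the 5% level, no longer survive once six tests are accounted for.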

Very good points on multiple comparisons and checking if the approach is applicable. But yes, it is an interesting technique to add to the toolkit.

Certainly. I also updated my post, as I realized I had forgotten to include a reference to a use case for Ljung-Box in another lecture.