I am comparing two methods for predicting volatility:
- Standard deviation of the last 10 days of daily returns: model-free; one parameter (the lookback window)
- GARCH(1,1) on the last year of daily returns: a model with 3 parameters
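For concreteness, method 1 is just the trailing sample standard deviation. A minimal numpy sketch, using synthetic returns in place of real market data (the `rng` series below is purely illustrative):

```python
import numpy as np

# Synthetic daily returns stand in for real market data.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)

# Method 1: the trailing 10-day sample standard deviation is
# tomorrow's volatility forecast -- it assumes current vol persists.
vol_forecast = np.std(returns[-10:], ddof=1)
```

Method 2 would typically be fitted with a dedicated package rather than by hand, since maximum-likelihood estimation of the three GARCH parameters is more involved.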
I use method 1 in my trading, but I see it can get carried away in periods of elevated volatility: it simply assumes the current volatility continues. In reality, elevated volatility typically reverts to the mean rather quickly. GARCH(1,1) solves this neatly, at the expense of two further parameters to estimate.
Reading a few papers, it seems clear that the GARCH model does better with lookbacks longer than one year. Intuitively this makes sense: method 1 needs at least 10 days for its one parameter, so method 2 might need on the order of 10**3 days (about 4 years) for 3 parameters (assuming the parameters are roughly orthogonal, etc.).
Combining the above two thoughts, and the re-formulation of GARCH(1,1) in this paper:
Prediction of tomorrow's variance = unconditional variance + beta * (today's prediction of variance - unconditional variance)
You can then substitute the long-term sample variance (for example, over the last 4 years of daily returns) for the unconditional variance, and the short-term sample variance (for example, over the last 10 days of daily returns) for today's prediction of variance. A simple linear regression then determines beta. I'm only estimating one parameter explicitly, though of course two more (4 years and 10 days) are implicit in the choice of windows.
Prediction of tomorrow's variance = var_4_years + beta * ( var_10_days - var_4_years )