The general definition of beta is how much your asset's or portfolio's returns vary (i.e., how volatile they are) relative to the market. From this conceptual notion I gather that beta is the ratio of the volatility of, say, asset A to the volatility of the market, say B. So fundamentally, I would expect beta to be Var(A) / Var(B) or Std(A) / Std(B). I also wonder whether it could be expressed as Cov(A, B).

My first question is: what is the practical difference between Cov(A, B) and the ratio of the variances? I presume that Var(A) and Var(B) refer only to the total variance over time, and the ratio simply compares these overall volatilities, whereas covariance, by the way it is calculated, looks at each point in time (say, each day) and relates A and B. I would presume that Cov(A, B) is therefore the better measure of the notion of beta as how volatile your asset's returns are relative to the market, and I would further presume that r(A, B), the correlation coefficient, is an even better measure, since it is the "standardized" version of covariance. Please critique this reasoning.
However, neither Cov(A, B) nor r(A, B) is the official definition or formula. The actual formula, given asset A and market B, is Cov(A, B) / Var(B), which is equivalent to r(A, B) * (Std(A) / Std(B)).
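To make the equivalence above concrete, here is a quick numerical check (a sketch using hypothetical simulated returns, not real market data) that Cov(A, B) / Var(B) and r(A, B) * Std(A) / Std(B) give the same number:

```python
import numpy as np

# Hypothetical daily returns for asset A and market B (simulated, illustrative only)
rng = np.random.default_rng(0)
b = rng.normal(0.0005, 0.01, 250)           # market returns
a = 1.5 * b + rng.normal(0.0, 0.005, 250)   # asset returns, correlated with the market

# Definition: beta = Cov(A, B) / Var(B)
cov_ab = np.cov(a, b, ddof=1)[0, 1]
var_b = np.var(b, ddof=1)
beta_def = cov_ab / var_b

# Equivalent form: beta = r(A, B) * Std(A) / Std(B)
r_ab = np.corrcoef(a, b)[0, 1]
beta_alt = r_ab * np.std(a, ddof=1) / np.std(b, ddof=1)

print(beta_def, beta_alt)  # the two forms agree to floating-point precision
```

The equivalence is just algebra: substituting r = Cov(A, B) / (Std(A) * Std(B)) into the second form cancels Std(A) and leaves Cov(A, B) / Std(B)^2 = Cov(A, B) / Var(B).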
My second question is: why can't beta be expressed simply as the correlation coefficient r? Why does r need to be multiplied by the ratio of the standard deviations of A and B? Doesn't r already take the standard deviations of A and B into account, since r = Cov(A, B) / (Std(A) * Std(B))?
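One way to see why correlation alone cannot be beta is a toy case (purely illustrative numbers) where asset A moves in perfect lockstep with market B but with twice the magnitude: correlation is 1, yet beta is 2, because correlation is scale-free while beta keeps the size of the moves.

```python
import numpy as np

# Toy example: asset A is exactly twice the market B in every period.
b = np.array([0.01, -0.02, 0.015, 0.005, -0.01])
a = 2.0 * b

r = np.corrcoef(a, b)[0, 1]                             # correlation: direction only
beta = np.cov(a, b, ddof=1)[0, 1] / np.var(b, ddof=1)   # beta: direction and magnitude

print(r)     # 1.0 -- A and B are perfectly correlated
print(beta)  # 2.0 -- but A moves twice as much as B
```

Multiplying r by Std(A) / Std(B) is exactly what restores the magnitude information that standardizing the covariance threw away.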
Any insight would be greatly appreciated.