Financial institutions need to take volatility clustering into account:
1. To avoid taking on an undesirable level of risk
2. To know the right level of capital they need to hold
3. To meet regulatory requirements
4. To account for mean reversion in returns
Volatility clustering leads to levels of current volatility that can differ significantly from long-run averages. When volatility is running high, institutions need to shed risk; when it is running low, they can afford to increase returns by taking on more risk for a given amount of capital. An institution's response to changes in volatility can be to adjust risk, capital, or both. Accounting for volatility clustering helps institutions manage their risk and capital, and therefore statements 1 and 2 are correct.
Regulatory requirements do not require volatility clustering to be taken into account (at least not yet). Therefore statement 3 is not correct, and neither is statement 4, which is completely unrelated to volatility clustering: mean reversion in returns is a separate phenomenon from persistence in volatility.
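The effect described above can be illustrated with a simulated GARCH(1,1) process, a standard model of volatility clustering. The parameter values below are hypothetical, chosen only for illustration; the point is that conditional volatility wanders well away from its long-run average for extended stretches.

```python
import numpy as np

# Hypothetical GARCH(1,1) parameters, for illustration only
omega, alpha, beta = 0.00002, 0.10, 0.85
long_run_var = omega / (1 - alpha - beta)  # unconditional (long-run) variance

rng = np.random.default_rng(42)
n = 2000
r = np.empty(n)       # simulated daily returns
sigma2 = np.empty(n)  # conditional variance
sigma2[0] = long_run_var
r[0] = rng.standard_normal() * np.sqrt(sigma2[0])

for t in range(1, n):
    # Yesterday's shock and variance drive today's variance: clustering
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = rng.standard_normal() * np.sqrt(sigma2[t])

print(f"long-run vol:            {np.sqrt(long_run_var):.4f}")
print(f"max conditional vol:     {np.sqrt(sigma2.max()):.4f}")
print(f"min conditional vol:     {np.sqrt(sigma2.min()):.4f}")
```

The spread between the minimum and maximum conditional volatility illustrates why an institution calibrated to the long-run average can be badly over- or under-risked at any given moment.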
Which of the following statements are true:
1. Recovery rate assumptions can be easily made fairly accurately given past data available from credit rating agencies.
2. Recovery rate assumptions are difficult to make given the effect of the business cycle, nature of the industry and multiple other factors difficult to model.
3. The standard deviation of observed recovery rates is generally very high, making any estimate likely to differ significantly from realized recovery rates.
4. Estimation errors for recovery rates are not a concern as they are not directionally biased and will cancel each other out over time.
Recovery rates vary a great deal from year to year and are difficult to predict, so statement 3 is true. Any attempt to predict them is hamstrung by a high standard deviation, which can be as high as the historical mean itself. The errors do not cancel out over time because the business cycle makes them directionally biased: recoveries tend to be systematically lower in downturns, when defaults are most frequent. Thus statement 4 is false.
Statement 2 is true, as the business cycle, the nature of the industry, and other hard-to-model factors all make forecasting recovery rates for any credit risk model rather difficult. Statement 1 is false because recovery rates are difficult to predict and assumptions are not easy to make.
As the persistence parameter under EWMA is lowered, which of the following would be true:
The persistence parameter, λ, is the coefficient of the prior day's variance in the EWMA recursion. A higher value of λ tends to 'persist' the prior value of variance for longer. Consider an extreme example: if λ = 1, the variance under EWMA will never change in response to returns.
1 − λ is the coefficient of the most recent squared return. As λ is lowered, 1 − λ increases, giving greater weight to recent market returns or shocks. Therefore, as λ is lowered, the model reacts faster to market shocks and gives higher weights to recent returns, while reducing the weight on prior variance, which persists for a shorter period.
Which of the following is not a limitation of the univariate Gaussian model to capture the codependence structure between risk factors used for VaR calculations?
In the univariate Gaussian model, each risk factor is modeled separately, independently of the others, and the dependence between the risk factors is captured by the covariance matrix (or, equivalently, the combination of the correlation matrix and the individual variances). Risk factors could include interest rates of different tenors, different equity market levels, etc.
While this is a simple enough model, it has a number of limitations.
First, it fails to fit the empirical distributions of the risk factors, notably their fat tails and skewness. Second, a single covariance matrix is insufficient to describe the fine codependence structure among risk factors, as non-linear dependencies and tail correlations are not captured. Third, estimating the covariance matrix becomes an extremely difficult task as the number of risk factors increases, because the number of covariances grows with the square of the number of variables.
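The parameter-explosion point can be made concrete: an n-factor covariance matrix has n variances plus n(n−1)/2 distinct covariances, i.e. n(n+1)/2 parameters to estimate in total. A quick count:

```python
def n_cov_params(n):
    # n variances on the diagonal plus n*(n-1)/2 distinct off-diagonal
    # covariances: n*(n+1)/2 parameters in total
    return n * (n + 1) // 2

for n in (10, 100, 1000):
    print(f"{n:5d} risk factors -> {n_cov_params(n):7d} parameters to estimate")
```

Going from 10 factors to 1,000 multiplies the number of parameters by roughly 10,000, which is why estimation becomes impractical for large risk-factor sets.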
But an inability to capture linear relationships between the factors is not one of the limitations of the univariate Gaussian approach - in fact it is able to do that quite nicely with covariances.
A way to address these limitations is to model the joint distribution of the risk factors directly, capturing the dynamic relationships between them. Correlation is then no longer a static number across the entire range of outcomes: the risk factors can behave differently with respect to each other in different parts of the distribution, for example in the tails.
Which of the following belong to the family of generalized extreme value distributions:
1. Frechet
2. Gumbel
3. Weibull
4. Exponential
Extreme value theory focuses on the extreme and rare events, and in the case of VaR calculations, it is focused on the right tail of the loss distribution. In very simple and non-technical terms, EVT says the following:
1. Pull a number of large iid random samples from the population,
2. For each sample, find the maximum,
3. Then the distribution of these maximum values will follow a Generalized Extreme Value distribution.
(In some ways, this is parallel to the central limit theorem, which says that the mean of a large number of random samples pulled from any population follows a normal distribution, regardless of the distribution of the underlying population.)
Generalized Extreme Value (GEV) distributions have three parameters: ξ (shape parameter), μ (location parameter) and σ (scale parameter). Based upon the value of ξ, a GEV distribution may be either a Frechet (ξ > 0), a Weibull (ξ < 0) or a Gumbel (ξ = 0). These are the only three types of extreme value distributions.
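The block-maxima recipe above can be sketched with a quick simulation. Assuming Exp(1) draws, for which the limiting distribution of block maxima is the Gumbel (the ξ = 0 member of the GEV family) with location ln(m) and scale 1:

```python
import numpy as np

rng = np.random.default_rng(0)
block_size, n_blocks = 1000, 5000

# Steps 1-2: pull iid samples and take the maximum of each block
samples = rng.exponential(scale=1.0, size=(n_blocks, block_size))
maxima = samples.max(axis=1)

# For Exp(1), block maxima converge to a Gumbel with location ln(m) and
# scale 1, whose mean is ln(m) + the Euler-Mascheroni constant (~0.5772)
expected_mean = np.log(block_size) + 0.5772
print(f"sample mean of block maxima: {maxima.mean():.3f}")
print(f"Gumbel prediction:           {expected_mean:.3f}")
```

The two numbers land close together, consistent with step 3 of the recipe: the distribution of block maxima follows a GEV distribution, here its Gumbel member.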