
    Extreme Value Theory versus traditional GARCH approaches applied to financial data: a comparative evaluation

    Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. Sometimes, however, these fluctuations become decisive, especially when unforeseen large drops in asset prices result in huge losses or even in market crashes. The evidence shows that such events happen far more often than would be expected under the generalised assumption of normally distributed financial returns. It is therefore crucial to model the distribution tails properly in order to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey in 2000 and combine GARCH-type models with extreme value theory (EVT) to estimate the tails of three financial index return series (S&P 500, FTSE 100 and NIKKEI 225) representing three important financial areas of the world. Our results indicate that EVT-based conditional quantile estimates are more accurate than those from conventional GARCH models with normal or Student's t innovations, both in-sample and out-of-sample. Moreover, these results are robust to alternative GARCH model specifications. The findings should be useful to investors in general, whose goal is to forecast unforeseen price movements and take advantage of them by positioning themselves in the market accordingly.
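
    As an illustration of the two-step conditional approach described above, the sketch below filters the return series with a GARCH(1,1) model, fits a generalised Pareto distribution (GPD) to the upper tail of the standardized residuals, and rescales the resulting tail quantile with the one-step-ahead mean and volatility forecasts. This is a minimal sketch under stated assumptions, not the authors' code: the pandas Series `returns`, the 90th-percentile threshold and the use of the `arch` and `scipy` packages are illustrative choices.

```python
# Minimal sketch of the McNeil-Frey (2000) conditional EVT approach.
# Assumes a pandas Series `returns` of daily (percentage) returns (hypothetical input).
import numpy as np
from arch import arch_model
from scipy.stats import genpareto

def conditional_evt_var(returns, q=0.99, u_quantile=0.90):
    """One-step-ahead VaR at level q via GARCH filtering plus a GPD tail fit."""
    losses = -returns  # work with losses so the relevant tail is the upper one
    # Step 1: filter the series with a GARCH(1,1) to obtain roughly i.i.d. residuals.
    am = arch_model(losses, mean='Constant', vol='Garch', p=1, q=1, dist='normal')
    res = am.fit(disp='off')
    z = (res.resid / res.conditional_volatility).dropna()   # standardized residuals
    # Step 2: fit a GPD to the standardized residuals exceeding a high threshold u.
    u = np.quantile(z, u_quantile)
    exceedances = z[z > u] - u
    xi, _, beta = genpareto.fit(exceedances, floc=0)         # shape and scale (xi = 0 limit not handled here)
    # Step 3: EVT estimate of the q-quantile of the residuals (peaks-over-threshold formula).
    n, n_u = len(z), len(exceedances)
    z_q = u + (beta / xi) * (((1 - q) * n / n_u) ** (-xi) - 1)
    # Step 4: rescale by the one-step-ahead conditional mean and volatility forecasts.
    f = res.forecast(horizon=1)
    mu = f.mean.iloc[-1, 0]
    sigma = np.sqrt(f.variance.iloc[-1, 0])
    return mu + sigma * z_q  # conditional VaR, expressed as a loss
```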

    Dependent bootstrapping for value-at-risk and expected shortfall

    Estimation in extreme financial risk is often faced with challenges such as the need for adequate distributional assumptions, considerations for data dependencies, and the lack of tail information. Bootstrapping provides an alternative that overcomes some of these challenges. It does not assume a distributional form and asymptotically replicates the empirical density of the resampled data. Moreover, advanced bootstrapping can cater for dependencies and stationarity in the data. In this paper, we evaluate the use of dependent bootstrapping, both for the original financial time series and for its GARCH innovations (under Gaussian and Student's t noise assumptions), in forecasting value-at-risk and expected shortfall. We also assess the effect of using different window sizes for these procedures. The two datasets used are daily returns of the S&P 500 from the NYSE and the ALSI from the JSE.
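
    The sketch below illustrates one way a dependent (stationary block) bootstrap can produce value-at-risk and expected shortfall estimates, in the spirit of the procedure described above. It is a hedged illustration rather than the paper's implementation: the NumPy array `returns`, the block size of 20 and the use of `arch.bootstrap.StationaryBootstrap` are assumptions made for the example.

```python
# Minimal sketch: stationary (dependent) bootstrap estimates of VaR and ES.
# Assumes a one-dimensional NumPy array `returns`; block size and rep count are illustrative.
import numpy as np
from arch.bootstrap import StationaryBootstrap

def var_es(sample, alpha=0.01):
    """Empirical VaR and ES (as losses) at tail level alpha from a single sample."""
    losses = -np.asarray(sample)
    var = np.quantile(losses, 1 - alpha)
    es = losses[losses >= var].mean()                   # average loss beyond VaR
    return np.array([var, es])

def bootstrap_var_es(returns, alpha=0.01, block_size=20, reps=1000):
    """Resample the dependent series in blocks and average the VaR/ES statistics."""
    bs = StationaryBootstrap(block_size, returns)       # preserves serial dependence within blocks
    stats = bs.apply(lambda x: var_es(x, alpha), reps)  # (reps x 2) array of [VaR, ES] per resample
    return stats.mean(axis=0)                           # bootstrap point estimates
```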

    A robust VaR model under different time periods and weighting schemes

    This paper analyses several volatility models by examining their ability to forecast Value-at-Risk (VaR) for two different time periods and two capitalization weighting schemes. Specifically, VaR is calculated for large and small capitalization stocks, based on Dow Jones (DJ) Euro Stoxx indices, and is modeled for long and short trading positions using non-parametric, semi-parametric and parametric methods. In order to choose one model among the various forecasting methods, a two-stage backtesting procedure is implemented. In the first stage, the unconditional coverage test is used to examine the statistical accuracy of the models. In the second stage, a loss function is applied to investigate whether the differences between the models that calculated VaR accurately are statistically significant. Under this framework, the combination of a parametric model with historical simulation produced robust results across sample periods, market capitalization schemes, trading positions and confidence levels, and therefore provides a reliable risk measure. Copyright Springer Science+Business Media, LLC 2007. Keywords: Asymmetric power ARCH, Backtesting, Extreme value theory, Filtered historical simulation, Value-at-Risk.
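
    The sketch below illustrates the first stage of such a backtesting procedure: an unconditional coverage (Kupiec) likelihood-ratio test that checks whether the observed VaR violation rate matches the nominal level. It is a minimal example under stated assumptions (arrays of realized returns and one-step-ahead VaR forecasts expressed as losses), not the paper's own code.

```python
# Minimal sketch of the unconditional coverage (Kupiec) backtest.
# Assumes arrays of realized returns and VaR forecasts expressed as positive losses.
import numpy as np
from scipy.stats import chi2

def kupiec_test(returns, var_forecasts, alpha=0.01):
    """LR test that the VaR violation rate equals the nominal tail level alpha."""
    violations = -np.asarray(returns) > np.asarray(var_forecasts)  # loss exceeds forecast VaR
    n, x = len(violations), int(violations.sum())
    if x in (0, n):                         # degenerate cases: the LR statistic is unbounded
        return np.nan, np.nan
    pi_hat = x / n                          # observed violation rate
    lr = -2 * (x * np.log(alpha / pi_hat)
               + (n - x) * np.log((1 - alpha) / (1 - pi_hat)))
    p_value = chi2.sf(lr, df=1)             # reject correct coverage for small p-values
    return lr, p_value
```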