
    Bootstrap prediction for returns and volatilities in GARCH models.

    A new bootstrap procedure is proposed to obtain prediction densities of returns and volatilities of GARCH processes. Financial market participants have shown an increasing interest in prediction intervals as measures of uncertainty, and accurate predictions of volatilities are critical for many financial models. The advantages of the proposed method are that it incorporates parameter uncertainty and does not rely on distributional assumptions. The finite sample properties are analyzed by an extensive Monte Carlo simulation. Finally, the technique is applied to the Madrid Stock Market index, IBEX-35.
    Acknowledgements: We are very grateful to three anonymous referees, the editor Stephen Pollock, and seminar participants at the Universities of Valladolid, New South Wales and Canterbury, the June 2001 Time Series Workshop of Arrabida, the September 2001 International Conference on Modelling Volatility (Perth) and the June 2002 International Symposium on Forecasting (Dublin) for their helpful comments. We are also grateful to Gregorio Serna for providing the data set analyzed in this paper and to Dolores Redondas for helping us with the figures. Financial support was provided by projects DGES PB96-0111 and BEC2002-03720 from the Spanish Government and Cátedra de Calidad from BBVA.
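
    A minimal sketch of the residual-bootstrap idea (not the authors' exact algorithm, which also resamples the estimation step to capture parameter uncertainty): with GARCH(1,1) parameters treated as already estimated (here they are fixed, purely illustrative values), prediction densities of returns and volatilities are built by resampling the standardized residuals, so no distributional assumption on the innovations is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GARCH(1,1) parameters; in practice these would be
# (quasi-)ML estimates from the observed return series.
omega, alpha, beta = 0.05, 0.10, 0.85

# Simulate a toy return series from the model itself, for illustration.
T = 500
sigma2 = np.empty(T)
r = np.empty(T)
sigma2[0] = omega / (1.0 - alpha - beta)      # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Standardized residuals: the bootstrap resamples these.
z = r / np.sqrt(sigma2)

# Bootstrap h-step-ahead prediction densities for returns and volatilities.
B, h = 2000, 5
boot_r = np.empty((B, h))
boot_s2 = np.empty((B, h))
for b in range(B):
    rb, s2b = r[-1], sigma2[-1]
    for j in range(h):
        s2b = omega + alpha * rb ** 2 + beta * s2b
        rb = np.sqrt(s2b) * rng.choice(z)     # resample a standardized residual
        boot_s2[b, j] = s2b
        boot_r[b, j] = rb

# 95% prediction interval for the h-step-ahead return.
lo, hi = np.percentile(boot_r[:, -1], [2.5, 97.5])
```

    The empirical quantiles of the bootstrap replicates play the role of the prediction density; the same `boot_s2` draws give a density for the future volatility.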

    Prediction intervals in conditionally heteroscedastic time series with stochastic components.

    Differencing is a very popular stationarity transformation for series with stochastic trends. When the differenced series is heteroscedastic, it is commonly modelled with an ARMA-GARCH model, and the corresponding ARIMA-GARCH model is then used to forecast future values of the original series. However, the heteroscedasticity observed in the stationary transformation may be generated by the transitory and/or the long-run component of the original data. In the former case, the shocks to the variance are transitory, and the prediction intervals should converge to homoscedastic intervals as the prediction horizon grows. We show that, in this case, the prediction intervals constructed from ARIMA-GARCH models can be inadequate because they never converge to homoscedastic intervals. All of the results are illustrated using simulated and real time series with stochastic levels.
    Keywords: ARIMA-GARCH models; Local level model; Nonlinear time series; State space models; Unobserved component models
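
    The convergence the abstract refers to can be seen in a short numeric sketch (illustrative parameter values only): for a stationary GARCH(1,1), the h-step-ahead variance forecast decays geometrically from a shocked level back to the unconditional variance, so prediction intervals eventually become homoscedastic.

```python
import numpy as np

# Hypothetical GARCH(1,1) parameters and a current (shocked) conditional variance.
omega, alpha, beta = 0.05, 0.10, 0.85
sigma2_uncond = omega / (1.0 - alpha - beta)   # long-run (homoscedastic) level
sigma2_now = 4.0                               # variance well above its long-run level

# h-step-ahead variance forecasts follow the recursion
#   E[sigma^2_{t+h}] = omega + (alpha + beta) * E[sigma^2_{t+h-1}],
# so the effect of a shock dies out at rate (alpha + beta) < 1.
h_max = 200
forecasts = np.empty(h_max)
s2 = sigma2_now
for h in range(h_max):
    s2 = omega + (alpha + beta) * s2
    forecasts[h] = s2
```

    When the variance of the level itself follows the GARCH (the long-run component case), this decay does not occur and the ARIMA-GARCH interval widths never settle down, which is the inadequacy the paper documents.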

    Prediction intervals for fractionally integrated time series and volatility models

    Two of the main formulations for modeling long-range dependence in the volatilities of financial time series are the fractionally integrated generalized autoregressive conditional heteroscedastic (FIGARCH) and hyperbolic generalized autoregressive conditional heteroscedastic (HYGARCH) models. Traditional methods of constructing prediction intervals for volatility models either employ a Gaussian error assumption or are based on asymptotic theory. However, many empirical studies show that the distribution of the errors exhibits leptokurtic behavior, so the traditional prediction intervals developed for conditional volatility models yield poor coverage. An alternative is to employ residual bootstrap-based prediction intervals. One goal of this dissertation research is to develop methods for constructing such prediction intervals for both returns and volatilities under FIGARCH and HYGARCH model formulations. In addition, the methodology is extended to obtain prediction intervals for autoregressive moving average (ARMA) and fractionally integrated autoregressive moving average (FARIMA) models with a FIGARCH error structure. The residual resampling is done via a sieve bootstrap approach, which approximates the ARMA and FARIMA portions of the models with an AR component. The AIC criterion is used to select the order of the finite AR approximation to the conditional mean process. The advantage of the sieve bootstrap method is that it does not require knowledge of the order of the conditional mean process; however, the order of the FIGARCH part is assumed known. Monte Carlo simulation studies show that the proposed methods provide coverages close to the nominal values. --Abstract, page iv
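
    The AR-sieve step can be sketched as follows (a toy AR(1) series stands in for the conditional-mean part; `fit_ar` is an illustrative OLS helper, not code from the dissertation): fit AR(p) approximations over a range of orders and keep the one minimizing AIC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy series with short-memory dynamics; in the dissertation's setting this
# would be the ARMA/FARIMA conditional-mean part of the model.
T = 400
x = np.empty(T)
x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

def fit_ar(x, p):
    """OLS fit of an AR(p); returns coefficients and residual variance."""
    Y = x[p:]
    X = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    return coef, resid.var()

# Choose the sieve order by minimizing AIC = n * log(sigma2_hat) + 2 * p.
best_p, best_aic = None, np.inf
for p in range(1, 11):
    _, s2 = fit_ar(x, p)
    n = T - p
    aic = n * np.log(s2) + 2 * p
    if aic < best_aic:
        best_p, best_aic = p, aic
```

    The residuals of the selected AR(`best_p`) fit are then what the sieve bootstrap resamples; no knowledge of the true ARMA/FARIMA orders is needed.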

    A component GARCH model with time varying weights

    We present a novel GARCH model that accounts for time-varying, state-dependent persistence in the volatility dynamics. The proposed model generalizes the component GARCH model of Ding and Granger (1996). The volatility is modelled as a convex combination of unobserved GARCH components whose combination weights vary over time as a function of appropriately chosen state variables. In order to make inference on the model parameters, we develop a Gibbs sampling algorithm. Adopting a fully Bayesian approach allows us to easily obtain medium- and long-term predictions of relevant risk measures such as value at risk and expected shortfall. Finally, we discuss the results of an application to a series of daily returns on the S&P500.
    Keywords: GARCH, persistence, volatility components, value-at-risk, expected shortfall
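
    A simulation sketch of the core idea, under illustrative assumptions (two components, a logistic weight driven by the last return; the paper's actual state variables and Gibbs sampler are not reproduced here): volatility is a convex combination of two unobserved GARCH(1,1) components, one transitory and one persistent, with a weight that moves with the state.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical GARCH(1,1) components: one transitory (low persistence)
# and one persistent; (omega, alpha, beta) values are illustrative only.
params = [(0.10, 0.15, 0.60), (0.02, 0.05, 0.93)]

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

T = 1000
s2 = np.array([p[0] / (1 - p[1] - p[2]) for p in params])  # component variances
r_prev = 0.0
sigma2 = np.empty(T)
r = np.empty(T)
w = np.empty(T)
for t in range(T):
    # Update each unobserved GARCH component with the common shock.
    for i, (om, al, be) in enumerate(params):
        s2[i] = om + al * r_prev ** 2 + be * s2[i]
    # State-dependent weight: here a logistic function of the last return,
    # so large shocks shift mass toward the persistent component.
    w[t] = logistic(2.0 * abs(r_prev) - 1.0)
    sigma2[t] = (1 - w[t]) * s2[0] + w[t] * s2[1]   # convex combination
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    r_prev = r[t]
```

    Because the weights stay in (0, 1), the combined variance inherits positivity from the components, while its effective persistence varies with the state.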

    Bootstrap prediction intervals for VaR and ES in the context of GARCH models

    In this paper, we propose a new bootstrap procedure to obtain prediction intervals of future Value at Risk (VaR) and Expected Shortfall (ES) in the context of univariate GARCH models. These intervals incorporate the parameter uncertainty associated with the estimation of the conditional variance of returns. Furthermore, they do not depend on any particular assumption about the error distribution. Alternative bootstrap intervals previously proposed in the literature incorporate the first but not the second source of uncertainty when computing the VaR and ES. We also consider an iterated smoothed bootstrap with better properties than traditional ones when computing prediction intervals for quantiles; however, this latter procedure depends on parameters that have to be chosen arbitrarily and is computationally very demanding. We analyze the finite sample performance of the proposed procedure and show that its coverage is closer to the nominal level than that of the alternatives. All the results are illustrated by obtaining one-step-ahead prediction intervals of the VaR and ES of several real time series of financial returns.
    Keywords: Expected Shortfall, Feasible Historical Simulation, Hill estimator, Parameter uncertainty, Quantile intervals, Value at Risk
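
    The basic objects involved can be sketched as follows (`var_es` is a hypothetical helper, and heavy-tailed draws stand in for the bootstrap return distribution that the paper's GARCH resampling would generate): VaR is a left-tail quantile, ES is the mean loss beyond it, and a percentile interval for the VaR itself can be formed by resampling.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for bootstrap draws of the one-step-ahead return distribution;
# in the paper these come from resampled GARCH paths that also reflect
# parameter uncertainty.
returns = rng.standard_t(df=5, size=20000)

def var_es(x, level=0.01):
    """VaR (left-tail quantile) and ES (mean loss beyond the VaR)."""
    var = np.quantile(x, level)
    es = x[x <= var].mean()
    return var, es

var1, es1 = var_es(returns, 0.01)

# A simple percentile interval for the VaR itself, by resampling the draws.
B = 500
boot_vars = np.array([np.quantile(rng.choice(returns, size=returns.size), 0.01)
                      for _ in range(B)])
var_lo, var_hi = np.percentile(boot_vars, [2.5, 97.5])
```

    By construction the ES estimate lies below (is more negative than) the VaR at the same level, and the interval `[var_lo, var_hi]` quantifies the sampling uncertainty of the VaR point estimate.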

    Using conditional kernel density estimation for wind power density forecasting

    Of the various renewable energy resources, wind power is widely recognized as one of the most promising. The management of wind farms and electricity systems can benefit greatly from the availability of estimates of the probability distribution of wind power generation. However, most research has focused on point forecasting of wind power. In this paper, we develop an approach to producing density forecasts for the wind power generated at individual wind farms. Our interest is in intraday data and prediction from 1 to 72 hours ahead. We model wind power in terms of wind speed and wind direction. In this framework, there are two key uncertainties. First, there is the inherent uncertainty in wind speed and direction, and we model this using a bivariate VARMA-GARCH (vector autoregressive moving average-generalized autoregressive conditional heteroscedastic) model, with a Student t distribution, in the Cartesian space of wind speed and direction. Second, there is the stochastic nature of the relationship of wind power to wind speed (described by the power curve), and to wind direction. We model this using conditional kernel density (CKD) estimation, which enables a nonparametric modeling of the conditional density of wind power. Using Monte Carlo simulation of the VARMA-GARCH model and CKD estimation, density forecasts of wind speed and direction are converted to wind power density forecasts. Our work is novel in several respects: previous wind power studies have not modeled a stochastic power curve; to accommodate time evolution in the power curve, we incorporate a time decay factor within the CKD method; and the CKD method is conditional on a density, rather than a single value. The new approach is evaluated using datasets from four Greek wind farms.
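
    A simplified sketch of CKD with a time decay factor (one conditioning variable, synthetic data, and an illustrative `ckd` helper; the paper conditions on a full speed-and-direction density rather than a single value): each observation is weighted by a kernel in the conditioning variable times a geometric decay in its age, and the weighted kernel sum over power values is normalized to a density.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: wind power as a noisy function of wind speed (a stochastic
# power curve), with observation times recorded oldest first.
n = 800
speed = rng.uniform(0, 25, n)
power = np.clip(np.tanh((speed - 8) / 4), 0, None) + 0.05 * rng.standard_normal(n)
t_obs = np.arange(n)

def ckd(y_grid, x0, x, y, t, hx=1.0, hy=0.05, decay=0.999):
    """Conditional kernel density f(y | x = x0) with exponential time decay.

    Older observations get geometrically smaller weights, so the estimated
    power curve can drift over time.
    """
    w = np.exp(-0.5 * ((x - x0) / hx) ** 2)          # kernel weight in speed
    w = w * decay ** (t.max() - t)                   # time-decay factor
    dens = np.array([(w * np.exp(-0.5 * ((y - yg) / hy) ** 2)).sum()
                     for yg in y_grid])
    dens /= dens.sum() * (y_grid[1] - y_grid[0])     # normalize to a density
    return dens

y_grid = np.linspace(-0.3, 1.5, 200)
dens = ckd(y_grid, x0=12.0, x=speed, y=power, t=t_obs)
```

    In the paper's two-stage scheme, such conditional densities are averaged over Monte Carlo draws of speed and direction from the VARMA-GARCH model to produce the final wind power density forecast.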

    Bootstrap for Value at Risk Prediction

    We evaluate the predictive performance of a variety of value-at-risk (VaR) models for a portfolio consisting of five assets. Traditional VaR models, such as historical simulation with bootstrap and filtered historical simulation methods, are considered. We suggest a new method for estimating Value at Risk: the filtered historical simulation GJR-GARCH method, based on bootstrapping the standardized GJR-GARCH residuals. The predictive performance is evaluated in terms of three criteria: the tests of unconditional coverage, independence and conditional coverage, and a quadratic loss function. The results show that the classical methods are inefficient under moderate departures from normality and that the new method produces the most accurate forecasts of extreme losses.
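
    A one-asset sketch of filtered historical simulation with a GJR-GARCH filter (illustrative parameter values, with the toy series simulated from the model itself rather than estimated from data): returns are standardized by the fitted conditional volatility, the standardized residuals are bootstrapped, and each draw is rescaled by the one-step-ahead volatility forecast to read off the VaR.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical GJR-GARCH(1,1) parameters; gamma adds extra weight to
# negative shocks (the leverage effect). In practice these are QML estimates.
omega, alpha, gamma, beta = 0.02, 0.05, 0.08, 0.88

# Toy return series simulated from the model, for illustration.
T = 1000
sigma2 = np.empty(T)
r = np.empty(T)
sigma2[0] = omega / (1 - alpha - 0.5 * gamma - beta)  # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    neg = 1.0 if r[t - 1] < 0 else 0.0
    sigma2[t] = omega + (alpha + gamma * neg) * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Filtered historical simulation: bootstrap the *standardized* residuals
# and rescale them by the one-step-ahead volatility forecast.
z = r / np.sqrt(sigma2)
neg = 1.0 if r[-1] < 0 else 0.0
sigma2_next = omega + (alpha + gamma * neg) * r[-1] ** 2 + beta * sigma2[-1]
simulated = np.sqrt(sigma2_next) * rng.choice(z, size=10000)
var_1pct = np.quantile(simulated, 0.01)
```

    The filtering step is what distinguishes this from plain historical simulation: the bootstrap operates on i.i.d.-looking standardized residuals rather than on raw, heteroscedastic returns.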

    Evaluating Value-at-Risk models via Quantile Regression

    This paper is concerned with evaluating value-at-risk estimates. It is well known that using only binary variables, such as whether or not there was an exception, sacrifices too much information. However, most of the specification tests (also called backtests) available in the literature, such as Christoffersen (1998) and Engle and Manganelli (2004), are based on such variables. In this paper we propose a new backtest that does not rely solely on binary variables. It is shown that the new backtest provides a sufficient condition to assess the finite sample performance of a quantile model, whereas the existing ones do not. The proposed methodology allows us to identify periods of increased risk exposure based on a quantile regression model (Koenker & Xiao, 2002). Our theoretical findings are corroborated through a Monte Carlo simulation and an empirical exercise with daily S&P500 time series.
    Keywords: Value-at-Risk, Backtesting, Quantile Regression
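
    For context on the binary-variable backtests the paper improves upon, here is a sketch of the standard unconditional-coverage likelihood-ratio test (Kupiec-style): it compares the observed violation frequency against the nominal VaR level and discards all information about violation magnitudes, which is exactly the loss of information criticized above.

```python
import math

def kupiec_uc_test(exceptions, n, p):
    """Unconditional-coverage likelihood-ratio test for VaR violations.

    exceptions: number of VaR violations observed, n: sample size,
    p: nominal VaR level. Returns the LR statistic and its chi2(1) p-value.
    """
    pi_hat = exceptions / n
    if pi_hat in (0.0, 1.0):                  # guard against log(0)
        pi_hat = min(max(pi_hat, 1e-10), 1 - 1e-10)
    ll_null = exceptions * math.log(p) + (n - exceptions) * math.log(1 - p)
    ll_alt = exceptions * math.log(pi_hat) + (n - exceptions) * math.log(1 - pi_hat)
    lr = -2.0 * (ll_null - ll_alt)
    p_value = math.erfc(math.sqrt(lr / 2.0))  # chi2(1) survival function
    return lr, p_value

# A well-calibrated 1% VaR: about 10 violations in 1000 days.
lr_ok, p_ok = kupiec_uc_test(10, 1000, 0.01)
# A badly calibrated one: 30 violations in 1000 days.
lr_bad, p_bad = kupiec_uc_test(30, 1000, 0.01)
```

    A test built only on these counts cannot distinguish a model whose violations are barely beyond the VaR from one whose violations are catastrophic, which motivates the quantile-regression backtest proposed in the paper.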