Forecasting Daily Variability of the S&P 100 Stock Index using Historical, Realised and Implied Volatility Measurements
The increasing availability of financial market data at intraday frequencies has not only led to the development of improved volatility measurements but has also inspired research into their potential value as an information source for volatility forecasting. In this paper we explore the forecasting value of historical volatility (extracted from daily return series), of implied volatility (extracted from option pricing data) and of realised volatility (computed as the sum of squared high-frequency returns within a day). First we consider unobserved components and long memory models for realised volatility, which is regarded as an accurate estimator of volatility. The predictive abilities of realised volatility models are compared with those of stochastic volatility models and generalised autoregressive conditional heteroskedasticity models for daily return series. These historical volatility models are extended to include realised and implied volatility measures as explanatory variables for volatility. The main focus is on forecasting the daily variability of the Standard & Poor's 100 stock index series, for which almost seven years of tick-by-tick trading data are analysed. The forecast assessment is based on testing whether a forecast model is outperformed by alternative models. In particular, we use superior predictive ability tests to investigate the relative forecast performances of some models. Since volatilities are not observed, realised volatility is taken as a proxy for actual volatility and is used for computing the forecast error. A stationary bootstrap procedure is required for computing the test statistic and its p-value. The empirical results show convincingly that realised volatility models produce far more accurate volatility forecasts than models based on daily returns. Long memory models seem to provide the most accurate forecasts.
Keywords: Generalised autoregressive conditional heteroskedasticity model; Long memory model; Realised volatility; Stochastic volatility model; Superior predictive ability; Unobserved components
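The realised volatility measure described in the abstract, the sum of squared high-frequency returns within a day, can be sketched in a few lines. The synthetic five-minute returns below are illustrative stand-ins, not the paper's tick-by-tick S&P 100 data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intraday log-returns for one trading day
# (78 five-minute intervals); in the paper these would come
# from tick-by-tick S&P 100 trading data.
intraday_returns = rng.normal(0.0, 0.001, size=78)

# Realised variance: the sum of squared high-frequency
# returns within the day; realised volatility is its root.
realised_variance = np.sum(intraday_returns ** 2)
realised_volatility = np.sqrt(realised_variance)
```

Because this estimator is computed from within-day data, it can serve both as a model input and, as the abstract notes, as the proxy for actual volatility when computing forecast errors.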
Monte Carlo Likelihood Estimation for Three Multivariate Stochastic Volatility Models
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.
Keywords: Importance sampling; Monte Carlo likelihood; Stochastic volatility
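The core importance-sampling idea behind Monte Carlo likelihood estimation can be illustrated on a deliberately simplified one-observation SV-style toy problem. The model, the Gaussian importance density and all parameter values below are assumptions chosen for illustration, not the paper's construction (which builds a far more efficient proposal for full multivariate SV models):

```python
import numpy as np

rng = np.random.default_rng(1)

def norm_pdf(x, mean, sd):
    # Gaussian density, written out to keep the sketch self-contained
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Toy setup: latent log-volatility h ~ N(0, sigma_h^2),
# observation y | h ~ N(0, exp(h)).
sigma_h = 0.5
y = 0.3

# Crude importance density g(h): zero-mean Gaussian with inflated sd.
g_sd = 1.0
h_draws = rng.normal(0.0, g_sd, size=100_000)

# Likelihood estimate: average of p(y | h) weighted by p(h) / g(h).
weights = norm_pdf(h_draws, 0.0, sigma_h) / norm_pdf(h_draws, 0.0, g_sd)
likelihood_hat = np.mean(norm_pdf(y, 0.0, np.exp(h_draws / 2)) * weights)
```

Maximising such a simulated likelihood over the model parameters is what "Monte Carlo maximum likelihood" refers to; the efficiency of the method rests on how closely the importance density matches the true smoothing density.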
Monte Carlo Estimation for Nonlinear Non-Gaussian State Space Models
We develop a proposal or importance density for state space models with a nonlinear non-Gaussian observation vector y ∼ p(y|θ) and an unobserved linear Gaussian signal vector θ ∼ p(θ). The proposal density is obtained from the Laplace approximation of the smoothing density p(θ|y). We present efficient algorithms to calculate the mode of p(θ|y) and to sample from the proposal density. The samples can be used for importance sampling and Markov chain Monte Carlo methods. The new results allow the application of these methods to state space models where the observation density p(y|θ) is not log-concave. Additional results are presented that lead to computationally efficient implementations. We illustrate the methods for the stochastic volatility model with leverage. Copyright 2007, Oxford University Press.
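The mode computation at the heart of a Laplace approximation reduces, in the scalar case, to Newton-Raphson on the log posterior. The Poisson observation density and the parameter values below are a hypothetical stand-in for the state space models treated in the paper; they only illustrate the mode-plus-curvature construction of the proposal:

```python
import numpy as np

# Scalar illustration: Gaussian signal theta ~ N(0, prior_var),
# non-Gaussian observation y | theta ~ Poisson(exp(theta)).
y = 4
prior_var = 1.0

# Newton-Raphson on log p(theta) + log p(y | theta).
theta = 0.0
for _ in range(50):
    grad = -theta / prior_var + (y - np.exp(theta))   # score
    hess = -1.0 / prior_var - np.exp(theta)           # curvature
    theta -= grad / hess

mode = theta
# The Laplace proposal is Gaussian, centred at the mode,
# with variance given by the negative inverse curvature.
laplace_var = -1.0 / hess
```

In the paper's setting θ is a vector and these Newton steps are carried out with Kalman filtering and smoothing recursions rather than scalar arithmetic, but the fixed point being sought is the same.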
Likelihood-based Analysis for Dynamic Factor Models
We present new results for the likelihood-based analysis of the dynamic factor model that possibly includes intercepts and explanatory variables. The latent factors are modelled by stochastic processes. The idiosyncratic disturbances are specified as autoregressive processes with mutually correlated innovations. The new results lead to computationally efficient procedures for the estimation of the factors and parameter estimation by maximum likelihood and Bayesian methods. An illustration is provided for the analysis of a large panel of macroeconomic time series.
Keywords: EM algorithm; Kalman filter; Forecasting; Latent factors; Markov chain Monte Carlo; Principal components; State space
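A minimal sketch of the state space machinery involved, assuming a single AR(1) factor, illustrative loadings, i.i.d. idiosyncratic noise, and sequential (series-by-series) Kalman updating; none of these simplifications are taken from the paper, which allows autoregressive, mutually correlated disturbances:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dynamic factor model (illustrative values):
#   f_t = phi * f_{t-1} + eta_t,  eta_t ~ N(0, q)
#   y_t = lam * f_t + eps_t,      eps_t ~ N(0, diag(R)), N series
T, N = 200, 5
phi, q = 0.8, 1.0
lam = rng.normal(1.0, 0.2, size=N)   # factor loadings
R = np.full(N, 0.5)                  # idiosyncratic variances

# Simulate a panel of N series driven by one latent factor.
f = np.zeros(T)
y = np.zeros((T, N))
for t in range(T):
    if t > 0:
        f[t] = phi * f[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = lam * f[t] + rng.normal(0.0, np.sqrt(R))

# Kalman filter for the latent factor, initialised at the
# stationary distribution of the AR(1) process.
a, p = 0.0, q / (1 - phi ** 2)
filtered = np.zeros(T)
for t in range(T):
    a, p = phi * a, phi ** 2 * p + q        # prediction step
    for i in range(N):                      # sequential updating
        v = y[t, i] - lam[i] * a            # prediction error
        s = lam[i] ** 2 * p + R[i]          # its variance
        k = p * lam[i] / s                  # Kalman gain
        a += k * v
        p -= k * lam[i] * p
    filtered[t] = a
```

Running the filter yields the estimated factor path; maximum likelihood estimation of `phi`, `lam`, `q` and `R` would maximise the prediction-error decomposition of the likelihood built from the `v` and `s` terms above.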
On Importance Sampling for State Space Models
We consider likelihood inference and state estimation by means of importance sampling for state space models with a nonlinear non-Gaussian observation y ∼ p(y|α) and a linear Gaussian state α ∼ p(α). The importance density is chosen to be the Laplace approximation of the smoothing density p(α|y). We show that computationally efficient state space methods can be used to perform all necessary computations in all situations. This requires new derivations of the Kalman filter, the smoother and the simulation smoother which do not rely on a linear Gaussian observation equation. Furthermore, results are presented that lead to a more effective implementation of importance sampling for state space models. An illustration is given for the stochastic volatility model with leverage.
Keywords: Kalman filter; Likelihood function; Monte Carlo integration; Newton-Raphson; Posterior mode estimation; Simulation smoothing; Stochastic volatility model