
    General to specific modelling of exchange rate volatility: a forecast evaluation

    The general-to-specific (GETS) methodology is widely employed in the modelling of economic series, but less so in financial volatility modelling, owing to the computational complexity that arises when many explanatory variables are involved. This study proposes a simple way of avoiding this problem when the conditional mean can appropriately be restricted to zero, and undertakes an out-of-sample forecast evaluation of the methodology applied to the modelling of weekly exchange rate volatility. Our findings suggest that GETS specifications perform comparatively well in both ex post and ex ante forecasting, as long as sufficient care is taken with respect to functional form and to how the conditioning information is used. Our forecast comparison also provides an example of a discrete-time explanatory model being more accurate than realised volatility ex post in one-step forecasting.
    Keywords: Exchange rate volatility, General to specific, Forecasting
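    General-to-specific selection starts from a deliberately rich model and iteratively deletes the least significant regressor. A minimal backward-elimination sketch in plain OLS (not the authors' exact volatility specification; variable names and the t-threshold are illustrative):

```python
import numpy as np

def gets_select(y, X, names, t_crit=1.96):
    """Backward elimination: refit OLS and drop the regressor with the
    smallest |t| until every remaining coefficient has |t| >= t_crit."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (len(y) - len(keep))
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        tstats = beta / np.sqrt(np.diag(cov))
        worst = int(np.argmin(np.abs(tstats)))
        if abs(tstats[worst]) >= t_crit:
            break                    # all survivors are significant
        keep.pop(worst)              # delete the least significant regressor
    return [names[i] for i in keep]
```

    In a volatility application, y would be a volatility proxy (e.g. log squared returns, with the mean restricted to zero as in the paper) and X the candidate explanatory variables.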

    A component GARCH model with time varying weights

    We present a novel GARCH model that accounts for time-varying, state-dependent persistence in the volatility dynamics. The proposed model generalizes the component GARCH model of Ding and Granger (1996). The volatility is modelled as a convex combination of unobserved GARCH components whose combination weights vary over time as a function of appropriately chosen state variables. To make inference on the model parameters, we develop a Gibbs sampling algorithm. Adopting a fully Bayesian approach makes it easy to obtain medium- and long-term predictions of relevant risk measures such as value-at-risk and expected shortfall. Finally, we discuss the results of an application to a series of daily returns on the S&P 500.
    Keywords: GARCH, persistence, volatility components, value-at-risk, expected shortfall
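    The core recursion can be sketched as follows. The logistic weight function and the parameter values are illustrative assumptions, not the paper's exact specification (which is estimated by Gibbs sampling rather than filtered with fixed parameters):

```python
import numpy as np

def tv_weight_component_garch(r, s, params):
    """Two-component GARCH with time-varying combination weights:
    h_t = w_t*h1_t + (1-w_t)*h2_t, with w_t = sigmoid(c + d*s_{t-1}).
    r: returns, s: state variable driving the weights."""
    o1, a1, b1, o2, a2, b2, c, d = params
    n = len(r)
    h1, h2, h, w = (np.empty(n) for _ in range(4))
    h1[0] = o1 / (1 - a1 - b1)   # start components at their
    h2[0] = o2 / (1 - a2 - b2)   # unconditional variances
    w[0] = 1.0 / (1.0 + np.exp(-c))
    h[0] = w[0] * h1[0] + (1 - w[0]) * h2[0]
    for t in range(1, n):
        h1[t] = o1 + a1 * r[t-1] ** 2 + b1 * h1[t-1]
        h2[t] = o2 + a2 * r[t-1] ** 2 + b2 * h2[t-1]
        w[t] = 1.0 / (1.0 + np.exp(-(c + d * s[t-1])))
        h[t] = w[t] * h1[t] + (1 - w[t]) * h2[t]
    return h, w
```

    Because the weights stay in (0, 1) and each component variance is positive, the combined variance is positive by construction.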

    Modelling Financial High Frequency Data Using Point Processes

    In this paper, we give an overview of the state of the art in the econometric literature on the modeling of so-called financial point processes. The latter are associated with the random arrival of specific financial trading events, such as transactions, quote updates, limit orders or price changes, observable in financial high-frequency data. After discussing fundamental statistical concepts of point process theory, we review duration-based and intensity-based models of financial point processes. Whereas duration-based approaches are mostly preferable for univariate time series, intensity-based models provide powerful frameworks for modeling multivariate point processes in continuous time. We illustrate the most important properties of the individual models and discuss major empirical applications.
    Keywords: Financial point processes, dynamic duration models, dynamic intensity models
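    In the duration-based branch, the workhorse is the ACD model, in which the conditional expected duration follows a GARCH-like recursion. A minimal ACD(1,1) filter with an exponential error distribution; the starting value and parameters are illustrative:

```python
import numpy as np

def acd_filter(x, omega, alpha, beta):
    """ACD(1,1): conditional expected duration
    psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}."""
    n = len(x)
    psi = np.empty(n)
    psi[0] = omega / (1.0 - alpha - beta)  # unconditional mean as start value
    for i in range(1, n):
        psi[i] = omega + alpha * x[i-1] + beta * psi[i-1]
    return psi

def acd_loglik_exp(x, psi):
    """Log-likelihood when x_i / psi_i is standard exponential."""
    return float(np.sum(-np.log(psi) - x / psi))
```

    Intensity-based models instead specify the instantaneous arrival rate in continuous time, which is what makes the multivariate case tractable.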

    Modelling Financial High Frequency Data Using Point Processes

    In this chapter, written for a forthcoming Handbook of Financial Time Series to be published by Springer-Verlag, we review the econometric literature on dynamic duration and intensity processes applied to high-frequency financial data, which was boosted by the work of Engle and Russell (1997) on autoregressive duration models.
    Keywords: Duration, Intensity, Point process, High frequency data, ACD models

    BAYESIAN CLUSTERING OF SIMILAR MULTIVARIATE GARCH MODELS

    We consider the estimation of a large number of GARCH models, say on the order of several hundred. Especially in the multivariate case, the number of parameters is extremely large. To reduce this number and render estimation feasible, we regroup the series into a small number of clusters. Within a cluster, the series share the same model and the same parameters. Each cluster should therefore contain similar series. What makes the problem interesting is that we do not know a priori which series belongs to which cluster. The overall model is therefore a finite mixture of distributions, where the weights of the components are unknown parameters and each component distribution has its own conditional mean and variance specification. Inference is done by the Bayesian approach, using data augmentation techniques. Illustrations are provided.
    Keywords: Large financial systems, Multivariate GARCH, Clustering, Bayesian methods, Gibbs sampling, Finite mixture distributions
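    Within such a data-augmentation scheme, the unknown cluster labels are drawn series by series from their full conditional distribution given the current cluster parameters and mixture weights. A sketch of that single Gibbs step; the log-likelihood matrix is assumed to have been computed beforehand from each cluster's (GARCH) parameters:

```python
import numpy as np

def sample_allocations(loglik, weights, rng):
    """One Gibbs step for mixture allocations.
    loglik: (n_series, n_clusters) log-likelihood of each series under
    each cluster's model; weights: current mixture weights."""
    logp = loglik + np.log(weights)           # unnormalised log posterior
    logp -= logp.max(axis=1, keepdims=True)   # stabilise before exponentiating
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(p.shape[1], p=row) for row in p])
```

    The max-subtraction before exponentiating is the standard log-sum-exp trick; without it, the likelihoods of long series underflow to zero.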

    On marginal likelihood computation in change-point models

    Change-point models are useful for modelling time series subject to structural breaks. For interpretation and forecasting, it is essential to estimate the number of change points in this class of models correctly. In Bayesian inference, the number of change points is typically chosen by the marginal likelihood criterion, computed by Chib's method. This method requires selecting a value in the parameter space at which the computation is done. We explain in detail how to perform Bayesian inference for a change-point dynamic regression model and how to compute its marginal likelihood. Motivated by our results from three empirical illustrations, a simulation study shows that Chib's method is robust with respect to the choice of the parameter value used in the computations, among the posterior mean, mode and quartiles. Furthermore, the performance of the Bayesian information criterion, which is based on maximum likelihood estimates, in selecting the correct model is comparable to that of the marginal likelihood.
    Keywords: BIC, change-point model, Chib's method, marginal likelihood
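    Chib's method rests on the identity log m(y) = log f(y|θ*) + log π(θ*) − log π(θ*|y), which holds at any point θ* in the parameter space; in practice only the posterior ordinate must be estimated from MCMC output. The invariance to θ* can be checked exactly in a toy conjugate-normal model, where the posterior ordinate is available in closed form (data and prior below are hypothetical):

```python
import numpy as np

def log_norm_pdf(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

def chib_log_marglik(y, sigma2, mu0, tau02, theta_star):
    """Chib's identity for y_i ~ N(theta, sigma2) with conjugate prior
    theta ~ N(mu0, tau02). The exact posterior ordinate stands in for
    the MCMC estimate used in real change-point applications."""
    n = len(y)
    post_var = 1.0 / (n / sigma2 + 1.0 / tau02)
    post_mean = post_var * (y.sum() / sigma2 + mu0 / tau02)
    loglik = log_norm_pdf(y, theta_star, sigma2).sum()
    logprior = log_norm_pdf(theta_star, mu0, tau02)
    logpost = log_norm_pdf(theta_star, post_mean, post_var)
    return loglik + logprior - logpost
```

    Evaluating the identity at different θ* (mean, mode, a quartile) returns the same number, which is precisely the robustness property the simulation study examines in the more difficult change-point setting.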

    General to Specific Modelling of Exchange Rate Volatility: a Forecast Evaluation

    The general-to-specific (GETS) approach to modelling is widely employed in the modelling of economic series, but less so in financial volatility modelling, owing to the computational complexity that arises when many explanatory variables are involved. This study proposes a simple way of avoiding this problem and undertakes an out-of-sample forecast evaluation of the methodology applied to the modelling of weekly exchange rate volatility. Our findings suggest that GETS specifications are especially valuable in conditional forecasting, since the specification that employs actual values of the uncertain information performs particularly well.
    Keywords: Exchange Rate Volatility, General to Specific, Forecasting

    Efficient importance sampling for ML estimation of SCD models

    Evaluating the likelihood function of the stochastic conditional duration (SCD) model requires computing an integral whose dimension equals the sample size. We apply the efficient importance sampling (EIS) method to compute this integral. We compare EIS-based ML estimation with QML estimation based on the Kalman filter. We find that EIS-ML estimation is statistically more precise, at the cost of an acceptable loss of computational speed. We illustrate this with simulated and real data. We also show that the EIS-ML method is easy to apply to extensions of the SCD model.
    Keywords: Stochastic conditional duration, importance sampling
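    The idea behind efficient importance sampling is to choose the auxiliary sampler so that the variance of the weighted integrand is minimal. A toy illustration with a known answer, not the SCD integral itself: for m = E[exp(X)] with X ~ N(0,1) (true value e^{1/2}), the optimally tilted proposal N(1,1) makes every weighted draw exactly constant, i.e. a zero-variance estimator, while naive Monte Carlo is noisy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Naive Monte Carlo for m = E[exp(X)], X ~ N(0,1).
x = rng.standard_normal(n)
naive_draws = np.exp(x)

# Importance sampling from the tilted proposal N(1,1):
# weight w(z) = phi(z; 0, 1) / phi(z; 1, 1) = exp(0.5 - z),
# so exp(z) * w(z) = exp(0.5) for every draw: zero variance.
z = rng.normal(1.0, 1.0, n)
eis_draws = np.exp(z) * np.exp(0.5 - z)

print(naive_draws.mean(), eis_draws.mean())
```

    In the SCD application the same principle is applied sequentially, fitting an auxiliary sampler for each latent duration state to approximate the high-dimensional likelihood integral.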

    Bayesian Inference in Dynamic Disequilibrium Models: an Application to the Polish Credit Market

    We review Bayesian inference for dynamic latent variable models using the data augmentation principle. We detail the difficulties of simulating dynamic latent variables in a Gibbs sampler. We propose an alternative specification of the dynamic disequilibrium model which leads to a simple simulation procedure and renders Bayesian inference fully operational. Identification issues are discussed. We conduct a specification search, using the posterior deviance criterion of Spiegelhalter, Best, Carlin, and van der Linde (2002), for a disequilibrium model of the Polish credit market.
    Keywords: Latent variables, Disequilibrium models, Bayesian inference, Gibbs sampler, Credit rationing
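    In a disequilibrium model the observed quantity is the short side of the market, q_t = min(d_t, s_t), so data augmentation must complete the unobserved side before the usual Gibbs updates. A minimal sketch of one such augmentation draw, using simple rejection sampling for the truncated normal; the parameterisation and helper names are illustrative:

```python
import numpy as np

def draw_latent_above(mu, sigma, lower, rng):
    """Draw x ~ N(mu, sigma^2) truncated to x >= lower by rejection
    sampling (fine for illustration; inefficient far in the tail)."""
    while True:
        x = rng.normal(mu, sigma)
        if x >= lower:
            return x

def augment_demand(q, supply_binds, mu_d, sigma_d, rng):
    """Impute latent demand: where supply binds (q_t = s_t < d_t),
    demand is only known to exceed q_t; elsewhere it is observed."""
    d = q.copy()
    for t in np.flatnonzero(supply_binds):
        d[t] = draw_latent_above(mu_d[t], sigma_d, q[t], rng)
    return d
```

    A symmetric step imputes latent supply where demand binds; the completed data then make the remaining conditional posteriors standard.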