
    Bootstrapping financial time series

    It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behavior of returns, inference and prediction methods based on independent and/or Gaussian observations may be inadequate. As bootstrap methods are not, in general, based on any particular assumption about the distribution of the data, they are well suited for the analysis of returns. This paper reviews the application of bootstrap procedures for inference and prediction of financial time series. In relation to inference, bootstrap techniques have been applied to obtain the sample distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility, and the predictive ability of technical trading rules. On the other hand, bootstrap procedures have been used to estimate the distribution of returns, which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers where the bootstrap procedures used are not adequate.
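The survey's point that resampling must respect dependence can be illustrated with a moving-block bootstrap, one simple scheme that preserves short-range dependence such as volatility clustering. This is a minimal sketch under invented assumptions (synthetic "returns", arbitrary block length and replication count), not any specific procedure from the paper.

```python
import random
import statistics

def block_bootstrap(returns, block_len, n_boot, stat):
    """Moving-block bootstrap: resample overlapping blocks so that
    dependence within each block (e.g. volatility clustering) is
    preserved, unlike the i.i.d. bootstrap."""
    n = len(returns)
    blocks = [returns[i:i + block_len] for i in range(n - block_len + 1)]
    draws = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:
            sample.extend(random.choice(blocks))
        draws.append(stat(sample[:n]))
    return draws

random.seed(0)
# Toy "returns": alternating calm and volatile stretches mimic clustering.
rets = [random.gauss(0.0, 0.01 if (i // 50) % 2 == 0 else 0.03)
        for i in range(500)]
draws = sorted(block_bootstrap(rets, block_len=20, n_boot=200,
                               stat=statistics.mean))
lo, hi = draws[4], draws[194]   # rough 95% percentile interval
print(f"bootstrap 95% interval for the mean return: [{lo:.4f}, {hi:.4f}]")
```

The same resampling loop can feed any statistic (autocorrelations, VaR quantiles) through the `stat` argument.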

    The Nobel Memorial Prize for Robert F. Engle

    I review and interpret two of Robert Engle's most important contributions: the theory and application of cointegration, and the theory and application of dynamic volatility models. I treat the latter much more extensively, de-emphasizing technical aspects and focusing instead on the intuition, nuances and importance of the work.

    The Nobel Memorial Prize for Robert F. Engle

    Engle’s footsteps range widely. His major contributions include early work on band-spectral regression, development and unification of the theory of model specification tests (particularly Lagrange multiplier tests), clarification of the meaning of econometric exogeneity and its relationship to causality, and his later stunningly influential work on common trend modeling (cointegration) and volatility modeling (ARCH, short for AutoRegressive Conditional Heteroskedasticity). More generally, Engle’s cumulative work is a fine example of best-practice applied time-series econometrics: he identifies important dynamic economic phenomena, formulates precise and interesting questions about those phenomena, constructs sophisticated yet simple econometric models for measurement and testing, and consistently obtains results of widespread substantive interest in the scientific, policy, and financial communities. Keywords: Econometric Theory, Finance

    Practical Volatility Modeling for Financial Market Risk Management

    Choosing the most suitable volatility model and distribution specification is a demanding task. This paper introduces an evaluation procedure that uses the Kullback-Leibler information criterion (KLIC) as a statistical tool to evaluate and compare the predictive abilities of possibly misspecified density forecast models. The main advantage of this tool is that censored likelihood functions are used to compute the KLIC in the tails, so that the performance of density forecast models can be compared specifically in the tail region. We include an illustrative simulation and an empirical application comparing a set of distributions, including symmetric and asymmetric distributions, and a family of GARCH volatility models. We illustrate the approach on a daily index, the Kuala Lumpur Composite Index (KLCI). Our results show that the choice of conditional distribution appears to be a more dominant factor in determining the adequacy of density forecasts than the choice of volatility model. Furthermore, the results support a skewed distribution for KLCI returns. Keywords: Density forecast; Conditional distribution; Forecast accuracy; KLIC; GARCH models
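The censored-likelihood idea behind tail-focused density forecast comparison can be sketched in a few lines: an observation inside the tail region of interest contributes its full log density, while an observation outside it contributes only the log of the forecast's mass beyond the threshold. The helper names, the normal forecast densities, and the simulated data are illustrative assumptions, not the paper's setup.

```python
import math
import random

def norm_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def censored_log_score(obs, pdf, cdf, r):
    """Average censored log-likelihood focused on the left tail (-inf, r]:
    tail observations contribute log f(y); all others contribute only the
    log of the forecast probability of lying above r."""
    total = 0.0
    for y in obs:
        total += math.log(pdf(y)) if y <= r else math.log(1.0 - cdf(r))
    return total / len(obs)

random.seed(1)
returns = [random.gauss(0.0, 1.5) for _ in range(2000)]   # true scale 1.5
r = -1.5                                                  # left-tail threshold
score_true  = censored_log_score(returns,
                                 lambda y: norm_pdf(y, 0.0, 1.5),
                                 lambda y: norm_cdf(y, 0.0, 1.5), r)
score_wrong = censored_log_score(returns,
                                 lambda y: norm_pdf(y, 0.0, 1.0),
                                 lambda y: norm_cdf(y, 0.0, 1.0), r)
print(score_true, score_wrong)  # the correctly scaled forecast scores higher
```

Averaged score differences of this kind are what a KLIC-based comparison of two density forecasts in the tail would be built on.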

    Capturing the zero: a new class of zero-augmented distributions and multiplicative error processes

    We propose a novel approach to model serially dependent positive-valued variables which realize a non-trivial proportion of zero outcomes. This is a typical phenomenon in financial time series observed at high frequencies, such as cumulated trading volumes or the time between potentially simultaneously occurring market events. We introduce a flexible point-mass mixture distribution and develop a semiparametric specification test explicitly tailored for such distributions. Moreover, we propose a new type of multiplicative error model (MEM) based on a zero-augmented distribution, which incorporates an autoregressive binary choice component and thus captures the (potentially different) dynamics of both zero occurrences and strictly positive realizations. Applying the proposed model to high-frequency cumulated trading volumes of liquid NYSE stocks, we show that the model captures both the dynamic and distributional properties of the data very well and is able to correctly predict future distributions. Keywords: High-frequency Data, Point-mass Mixture, Multiplicative Error Model, Excess Zeros, Semiparametric Specification Test, Market Microstructure. JEL Classification: C22, C25, C14, C16, C5
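A zero-augmented multiplicative error process can be simulated with a few lines: the innovation has a point mass at zero and is otherwise drawn from a positive distribution scaled to keep unit mean, so the MEM mean recursion retains its usual interpretation. This sketch deliberately omits the paper's autoregressive binary-choice component (zeros here occur independently), and all names and parameter values are invented for illustration.

```python
import random

def simulate_zamem(n, omega=0.05, alpha=0.2, beta=0.7, p_zero=0.3, seed=42):
    """Illustrative zero-augmented MEM: x_t = mu_t * eps_t, where eps_t
    is 0 with probability p_zero (point mass at zero) and otherwise
    exponential with mean 1/(1 - p_zero), so that E[eps_t] = 1."""
    rng = random.Random(seed)
    mu = omega / (1.0 - alpha - beta)   # start at the unconditional mean
    x = []
    for _ in range(n):
        eps = 0.0 if rng.random() < p_zero else rng.expovariate(1.0 - p_zero)
        x.append(mu * eps)
        mu = omega + alpha * x[-1] + beta * mu   # MEM recursion
    return x

vols = simulate_zamem(5000)
zero_share = sum(v == 0.0 for v in vols) / len(vols)
print(f"share of exact zeros: {zero_share:.2f}")   # close to p_zero = 0.3
```

The simulated series is non-negative with a realistic share of exact zeros, the feature a standard MEM with a continuous innovation cannot reproduce.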

    Approaches for multi-step density forecasts with application to aggregated wind power

    The generation of multi-step density forecasts for non-Gaussian data mostly relies on Monte Carlo simulations, which are computationally intensive. Using aggregated wind power in Ireland, we study two approaches to multi-step density forecasting which can be obtained from simple iterations, so that intensive computations are avoided. In the first approach, we apply a logistic transformation to normalize the data approximately and describe the transformed data using ARIMA--GARCH models so that multi-step forecasts can be iterated easily. In the second approach, we describe the forecast densities by truncated normal distributions which are governed by two parameters, namely, the conditional mean and conditional variance. We apply exponential smoothing methods to forecast the two parameters simultaneously. Since the underlying model of exponential smoothing is Gaussian, we are able to obtain multi-step forecasts of the parameters by simple iterations and thus generate forecast densities as truncated normal distributions. We generate forecasts for wind power from 15 minutes to 24 hours ahead. Results show that the first approach generates superior forecasts and slightly outperforms the second approach under various proper scores. Nevertheless, the second approach is computationally more efficient and gives more robust results under different lengths of training data. It also provides an attractive alternative since one is allowed to choose a particular parametric density for the forecasts, which is valuable when there are no obvious transformations to normalize the data. Comment: Corrected version includes updated equation (18). Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/), http://dx.doi.org/10.1214/09-AOAS320, by the Institute of Mathematical Statistics (http://www.imstat.org).
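The second approach can be sketched in a few lines: exponentially smooth a conditional mean and variance, then report a normal density truncated to [0, ∞) with those two parameters. The recursions, initializations, smoothing constants, and the toy series below are simplified illustrative assumptions, not the paper's exact smoothing equations.

```python
import math

def smooth_params(series, a=0.2, b=0.1):
    """Exponentially smooth a conditional mean (level) and a conditional
    variance (EWMA of squared one-step errors) -- a simplified stand-in
    for simultaneous exponential smoothing of the two parameters."""
    mu, var = series[0], 1.0
    for y in series[1:]:
        e = y - mu
        mu = mu + a * e                    # level update
        var = (1.0 - b) * var + b * e * e  # variance update
    return mu, var

def truncnorm_pdf(x, mu, var):
    """Density of a normal(mu, var) truncated to [0, inf)."""
    if x < 0.0:
        return 0.0
    sigma = math.sqrt(var)
    z = (x - mu) / sigma
    phi = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    mass = 1.0 - 0.5 * (1.0 + math.erf((0.0 - mu) / (sigma * math.sqrt(2.0))))
    return phi / mass

# Toy wind-power-like series (non-negative, slowly oscillating).
series = [max(0.0, 5.0 + 2.0 * math.sin(t / 10.0)) for t in range(200)]
mu, var = smooth_params(series)
# Sanity check: the forecast density integrates to ~1 on [0, inf).
step = 0.01
total = sum(truncnorm_pdf(k * step, mu, var) * step for k in range(5000))
print(round(total, 2))
```

Because the smoothing recursions are linear, iterating them forward gives the multi-step parameters directly, with no Monte Carlo simulation needed.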

    Volatility forecasting

    Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1
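For the GARCH paradigm of Section 3, multi-step variance forecasts follow from a standard one-line recursion that mean-reverts to the unconditional variance; the parameter values below are purely illustrative.

```python
def garch11_forecast(omega, alpha, beta, sigma2_next, horizon):
    """Multi-step GARCH(1,1) variance forecasts via the standard recursion
    E[sigma^2_{t+h}] = omega + (alpha + beta) * E[sigma^2_{t+h-1}],
    which converges to the unconditional variance omega/(1-alpha-beta)
    when alpha + beta < 1."""
    forecasts = [sigma2_next]
    for _ in range(horizon - 1):
        forecasts.append(omega + (alpha + beta) * forecasts[-1])
    return forecasts

omega, alpha, beta = 0.05, 0.08, 0.90            # illustrative parameters
long_run = omega / (1.0 - alpha - beta)          # unconditional variance = 2.5
fc = garch11_forecast(omega, alpha, beta, sigma2_next=1.0, horizon=50)
print(f"h=1: {fc[0]:.3f}, h=50: {fc[-1]:.3f}, long-run: {long_run:.3f}")
```

Starting below the long-run level, the forecasts rise monotonically toward it, the mean-reversion pattern that makes GARCH term structures of volatility decay smoothly with horizon.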