
    Nonfractional Memory: Filtering, Antipersistence, and Forecasting

    The fractional difference operator remains the most popular mechanism for generating long memory, owing to the existence of efficient algorithms for its simulation and forecasting. Nonetheless, there is no theoretical argument linking the fractional difference operator with the presence of long memory in real data. In this regard, one of the predominant theoretical explanations for the presence of long memory is cross-sectional aggregation of persistent micro units. Yet, the type of processes obtained by cross-sectional aggregation differs from the one due to fractional differencing. Thus, this paper develops fast algorithms to generate and forecast long memory by cross-sectional aggregation. Moreover, it is shown that the antipersistent phenomenon that arises for negative degrees of memory in the fractional difference literature is not present for cross-sectionally aggregated processes. In particular, while the autocorrelations for the fractional difference operator are negative for negative degrees of memory by construction, this restriction does not apply to the cross-sectional aggregation scheme. We show that this has implications for long memory tests in the frequency domain, which will be misspecified for cross-sectionally aggregated processes with negative degrees of memory. Finally, we assess the forecast performance of high-order AR and ARFIMA models when the long memory series are generated by cross-sectional aggregation. Our results are of interest to practitioners developing forecasts of long memory variables like inflation, volatility, and climate data, where aggregation may be the source of long memory.
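    As a rough illustration of the aggregation mechanism described above (a minimal sketch in the spirit of Granger-style cross-sectional aggregation of AR(1) micro units, not the fast algorithms developed in the paper; all parameter values and the Beta coefficient distribution are illustrative assumptions):

        import numpy as np

        def aggregate_long_memory(n_units=2000, n_obs=1500, beta_a=1.0, beta_b=1.4,
                                  burn_in=500, seed=0):
            """Cross-sectionally aggregate AR(1) micro units (toy sketch).

            Each unit follows x[i, t] = alpha[i] * x[i, t-1] + eps[i, t], with
            alpha[i] drawn from a Beta(beta_a, beta_b) distribution on (0, 1).
            The cross-sectional average inherits long memory whose degree is
            governed by how much mass the coefficient density puts near 1.
            This is a naive O(n_units * n_obs) loop, not a fast algorithm.
            """
            rng = np.random.default_rng(seed)
            alphas = rng.beta(beta_a, beta_b, size=n_units)
            total = n_obs + burn_in
            x = np.zeros((n_units, total))
            eps = rng.standard_normal((n_units, total))
            for t in range(1, total):
                x[:, t] = alphas * x[:, t - 1] + eps[:, t]
            return x[:, burn_in:].mean(axis=0)

        y = aggregate_long_memory()
        # Slow, hyperbolic decay of the sample autocorrelations is the long-memory signature.
        acf = [np.corrcoef(y[:-k], y[k:])[0, 1] for k in range(1, 51)]
        print(np.round(acf[:5], 3), np.round(acf[45:], 3))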

    Combining long memory and level shifts in modeling and forecasting the volatility of asset returns

    We propose a parametric state space model of asset return volatility with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts and measurement errors. The Kalman filter is used to construct the state-augmented likelihood function and subsequently to generate forecasts, which are mean- and path-corrected. We apply our model to eight daily volatility series constructed from both high-frequency and daily returns. Full sample parameter estimates reveal that random level shifts are present in all series. Genuine long memory is present in high-frequency measures of volatility, whereas there is little remaining dynamics in the volatility measures constructed using daily returns. From extensive forecast evaluations, we find that our ARFIMA model with random level shifts consistently belongs to the 10% Model Confidence Set across a variety of forecast horizons, asset classes, and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
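    The three ingredients of the model described above can be sketched in a simple simulation: ARFIMA(0,d,0) long-memory dynamics, a random level shift component, and measurement error. This is only an assumed toy data-generating process; the paper's state-augmented Kalman filter estimation and forecast corrections are not reproduced, and all parameter values are made up for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        def arfima_0d0(n, d, rng):
            """ARFIMA(0, d, 0) via its truncated MA(infinity) representation,
            with weights psi[0] = 1 and psi[k] = psi[k-1] * (k - 1 + d) / k."""
            psi = np.ones(n)
            for k in range(1, n):
                psi[k] = psi[k - 1] * (k - 1 + d) / k
            eps = rng.standard_normal(n)
            return np.convolve(eps, psi)[:n]

        def random_level_shifts(n, p_shift, shift_std, rng):
            """Rare Bernoulli(p_shift) jumps of normal size, accumulated into a
            step function that shifts the level of the series."""
            jumps = rng.binomial(1, p_shift, size=n) * rng.normal(0.0, shift_std, size=n)
            return np.cumsum(jumps)

        # Illustrative parameter values only (memory d, shift probability, noise scales).
        n = 2000
        latent_log_vol = arfima_0d0(n, d=0.4, rng=rng) + random_level_shifts(n, 0.005, 1.0, rng)
        observed_log_vol = latent_log_vol + rng.normal(0.0, 0.2, size=n)  # measurement error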

    On Hodges and Lehmann's "6/π result"

    While the asymptotic relative efficiency (ARE) of Wilcoxon rank-based tests for location and regression with respect to their parametric Student competitors can be arbitrarily large, Hodges and Lehmann (1961) have shown that the ARE of the same Wilcoxon tests with respect to their van der Waerden or normal-score counterparts is bounded from above by 6/π ≈ 1.910. In this paper, we revisit that result and investigate similar bounds for statistics based on Student scores. We also consider the serial version of this ARE. More precisely, we study the ARE, under various densities, of the Spearman-Wald-Wolfowitz and Kendall rank-based autocorrelations with respect to the van der Waerden or normal-score ones used to test (ARMA) serial dependence alternatives.
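    For reference, the location-model version of this comparison can be written with the standard rank-test efficacies (a textbook-style sketch; the serial extension studied in the paper is not covered here). With f the underlying density, F its distribution function, and φ, Φ the standard normal density and distribution function:

        % Wilcoxon vs. van der Waerden (normal-score) ARE under density f,
        % and the Hodges-Lehmann bound the title refers to:
        \[
          \mathrm{ARE}_{W,\mathrm{vdW}}(f)
            = \frac{12\left(\int f^{2}(x)\,dx\right)^{2}}
                   {\left(\int \frac{f^{2}(x)}{\phi\!\left(\Phi^{-1}(F(x))\right)}\,dx\right)^{2}},
          \qquad
          \sup_{f}\,\mathrm{ARE}_{W,\mathrm{vdW}}(f) = \frac{6}{\pi} \approx 1.910 .
        \]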

    Rank-based estimation for all-pass time series models

    An autoregressive-moving average model in which all roots of the autoregressive polynomial are reciprocals of roots of the moving average polynomial, and vice versa, is called an all-pass time series model. All-pass models are useful for identifying and modeling noncausal and noninvertible autoregressive-moving average processes. We establish asymptotic normality and consistency for rank-based estimators of all-pass model parameters. The estimators are obtained by minimizing the rank-based residual dispersion function given by Jaeckel [Ann. Math. Statist. 43 (1972) 1449--1458]. These estimators can have the same asymptotic efficiency as maximum likelihood estimators and are robust. The behavior of the estimators for finite samples is studied via simulation, and rank estimation is used in the deconvolution of a simulated water gun seismogram. Comment: Published at http://dx.doi.org/10.1214/009053606000001316 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
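    As a hypothetical illustration of the estimation criterion (Jaeckel's rank-based residual dispersion with Wilcoxon scores, applied here to a plain linear regression rather than the all-pass residuals of the paper; the data and optimizer choice are assumptions):

        import numpy as np
        from scipy.optimize import minimize

        def jaeckel_dispersion(beta, X, y):
            """Jaeckel (1972) rank-based residual dispersion with Wilcoxon scores
            a(i) = sqrt(12) * (i / (n + 1) - 1/2):  D(beta) = sum_i a(R_i) * e_i."""
            e = y - X @ beta
            n = e.shape[0]
            ranks = np.argsort(np.argsort(e)) + 1        # ranks of the residuals
            scores = np.sqrt(12.0) * (ranks / (n + 1) - 0.5)
            return np.sum(scores * e)

        # Hypothetical heavy-tailed regression data, not the all-pass setting.
        rng = np.random.default_rng(2)
        X = rng.standard_normal((200, 2))
        y = X @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=200)

        # The dispersion is continuous and convex but not smooth, so a
        # derivative-free optimizer is a reasonable choice for a sketch.
        fit = minimize(jaeckel_dispersion, x0=np.zeros(2), args=(X, y), method="Nelder-Mead")
        print(fit.x)  # rank-based slope estimates (an intercept is not identified by D)

    Because the Wilcoxon scores sum to zero, the dispersion is invariant to an intercept, so only slope parameters are estimated; robustness comes from weighting residuals by their ranks rather than their squared magnitudes.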

    Efficient Gibbs Sampling for Markov Switching GARCH Models

    We develop efficient simulation techniques for Bayesian inference on switching GARCH models. Our contribution to the existing literature is threefold. First, we discuss different multi-move sampling techniques for Markov Switching (MS) state space models, with particular attention to MS-GARCH models. Our multi-move sampling strategy is based on Forward Filtering Backward Sampling (FFBS) applied to an approximation of the MS-GARCH model. Another important contribution is the use of multi-point samplers, such as the Multiple-Try Metropolis (MTM) and the Multiple-Trial Metropolized Independent Sampler, in combination with FFBS for the MS-GARCH process. In this sense, we extend to MS state space models the work of So [2006] on efficient MTM samplers for continuous state space models. Finally, we suggest further improving the sampler's efficiency by introducing the antithetic sampling of Craiu and Meng [2005] and Craiu and Lemieux [2007] within the FFBS. Our simulation experiments on the MS-GARCH model show that our multi-point and multi-move strategies allow the sampler to gain efficiency when compared with single-move Gibbs sampling. Comment: 38 pages, 7 figures.
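    A generic Forward Filtering Backward Sampling step for a discrete-state hidden Markov chain is sketched below; it is the building block the abstract refers to, but the GARCH approximation, the multi-point MTM moves and the antithetic coupling are not reproduced, and the two-regime Gaussian example is purely illustrative.

        import numpy as np

        def ffbs(log_lik, trans, init, rng):
            """Forward Filtering Backward Sampling for a K-state hidden Markov chain.

            log_lik : (T, K) log-likelihood of each observation under each regime
            trans   : (K, K) transition matrix, trans[i, j] = P(s_t = j | s_{t-1} = i)
            init    : (K,)   initial regime distribution
            Returns one joint draw of the regime path s_0, ..., s_{T-1}.
            """
            T, K = log_lik.shape
            lik = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))  # rescaled per row for stability
            filt = np.zeros((T, K))

            # Forward filtering: filt[t] = p(s_t | y_{1:t})
            p = init * lik[0]
            filt[0] = p / p.sum()
            for t in range(1, T):
                p = (filt[t - 1] @ trans) * lik[t]
                filt[t] = p / p.sum()

            # Backward sampling: draw s_{T-1}, then s_t given s_{t+1} and y_{1:t}
            states = np.empty(T, dtype=int)
            states[-1] = rng.choice(K, p=filt[-1])
            for t in range(T - 2, -1, -1):
                p = filt[t] * trans[:, states[t + 1]]
                states[t] = rng.choice(K, p=p / p.sum())
            return states

        # Two-regime toy example with Gaussian measurement densities (illustrative only).
        rng = np.random.default_rng(3)
        path = np.cumsum(rng.random(300) < 0.03) % 2               # crude persistent regime path
        y = rng.normal(loc=np.where(path == 1, 2.0, 0.0), scale=1.0)
        log_lik = np.column_stack([-0.5 * (y - m) ** 2 for m in (0.0, 2.0)])
        trans = np.array([[0.97, 0.03], [0.03, 0.97]])
        regime_draw = ffbs(log_lik, trans, init=np.array([0.5, 0.5]), rng=rng)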