
    Algorithms for Linear Time Series Analysis: With R Package

    Our ltsa package implements the Durbin-Levinson and Trench algorithms and provides a general approach to the problems of fitting, forecasting and simulating linear time series models, as well as fitting regression models with linear time series errors. For computational efficiency, both algorithms are implemented in C and interfaced to R. Examples are given that illustrate the efficiency and accuracy of the algorithms. We provide a second package, FGN, which illustrates the use of the ltsa package with fractional Gaussian noise (FGN). It is hoped that the ltsa package will provide a base for further time series software.
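The Durbin-Levinson recursion named above can be sketched in a few lines. The following Python version is an illustration only (the ltsa package itself implements this in C): given autocovariances gamma[0..p], it solves the Yule-Walker/Toeplitz system recursively, producing the AR(p) coefficients and the innovation variance without any matrix inversion.

```python
def durbin_levinson(gamma):
    """Durbin-Levinson recursion: solve the Toeplitz (Yule-Walker)
    system for AR coefficients from autocovariances gamma[0..p]."""
    p = len(gamma) - 1
    phi = [gamma[1] / gamma[0]]        # order-1 AR coefficient
    v = gamma[0] * (1 - phi[0] ** 2)   # innovation variance at order 1
    for k in range(1, p):
        # reflection (partial autocorrelation) coefficient at order k+1
        acc = gamma[k + 1] - sum(phi[j] * gamma[k - j] for j in range(k))
        kappa = acc / v
        # update all lower-order coefficients, then append the new one
        phi = [phi[j] - kappa * phi[k - 1 - j] for j in range(k)] + [kappa]
        v *= (1 - kappa ** 2)
    return phi, v
```

For an AR(1) with coefficient 0.5 and unit innovation variance, the theoretical autocovariances are 4/3, 2/3, 1/3 at lags 0, 1, 2, and the recursion recovers the coefficient 0.5 (with a zero second coefficient) and variance 1.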

    Computational Aspects of Maximum Likelihood Estimation of Autoregressive Fractionally Integrated Moving Average Models

    We discuss computational aspects of likelihood-based estimation of univariate ARFIMA(p,d,q) models. We show how efficient computation and simulation are feasible, even for large samples. We also discuss the implementation of analytical bias corrections.
    Keywords: long memory; bias; modified profile likelihood; restricted maximum likelihood estimator; time-series regression model likelihood.
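A computational building block of any ARFIMA(p,d,q) routine is expanding the fractional difference operator (1 - B)^d into its infinite moving-average form. The weights follow a simple one-term recursion, sketched here in Python for illustration (this is the standard binomial recursion, not code from the paper):

```python
def fracdiff_weights(d, n):
    """First n coefficients pi_j in the expansion
    (1 - B)^d = sum_j pi_j B^j, via pi_0 = 1 and
    pi_j = pi_{j-1} * (j - 1 - d) / j."""
    pi = [1.0]
    for j in range(1, n):
        pi.append(pi[-1] * (j - 1 - d) / j)
    return pi
```

As a sanity check, d = 1 reproduces ordinary first differencing (weights 1, -1, 0, 0, ...), while a long-memory value such as d = 0.4 yields slowly decaying weights -0.4, -0.12, and so on.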

    A Generalized ARFIMA Process with Markov-Switching Fractional Differencing Parameter

    We propose a general class of Markov-switching-ARFIMA processes in order to combine strands of the long memory and Markov-switching literature. Although the coverage of this class of models is broad, we show that these models can be easily estimated with the proposed DLV algorithm, which combines the Durbin-Levinson and Viterbi procedures. A Monte Carlo experiment reveals that the finite-sample performance of the proposed algorithm for a simple mixture model of a Markov-switching mean and an ARFIMA(1, d, 1) process is satisfactory. We apply the Markov-switching-ARFIMA models to the U.S. real interest rates, the Nile river level, and the U.S. unemployment rates, respectively. The results are all highly consistent with the conjectures made or empirical results found in the literature. In particular, we confirm the conjecture in Beran and Terrin (1996) that observations 1 to about 100 of the Nile river data seem to be more independent than the subsequent observations, and that the value of the differencing parameter is lower for the first 100 observations than for the subsequent data.
    Keywords: Markov chain; ARFIMA process; Viterbi algorithm; long memory.
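The Viterbi half of the DLV algorithm finds the single most likely hidden-state path of the Markov chain by dynamic programming. A generic log-domain sketch (illustrative Python, not the authors' implementation; here the emission log-likelihoods are supplied as a precomputed table):

```python
import math

def viterbi(log_pi, log_A, log_B):
    """Most likely state path. log_pi: initial log-probs (length S);
    log_A: S x S log transition matrix; log_B: T x S emission
    log-likelihoods, one row per time step."""
    T, S = len(log_B), len(log_pi)
    delta = [log_pi[s] + log_B[0][s] for s in range(S)]
    back = []
    for t in range(1, T):
        new, ptr = [], []
        for s in range(S):
            best = max(range(S), key=lambda r: delta[r] + log_A[r][s])
            ptr.append(best)
            new.append(delta[best] + log_A[best][s] + log_B[t][s])
        delta = new
        back.append(ptr)
    # backtrack from the best terminal state
    path = [max(range(S), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With sticky transitions and emissions that clearly favor state 0 for the first half of the sample and state 1 for the second, the decoded path switches regime exactly once, as expected.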

    The Finite-Sample Properties of Autoregressive Approximations of Fractionally-Integrated and Non-Invertible Processes

    This paper investigates the empirical properties of autoregressive approximations to two classes of process for which the usual regularity conditions do not apply; namely, the non-invertible and fractionally integrated processes considered in Poskitt (2006). In that paper the theoretical consequences of fitting long autoregressions under regularity conditions that allow for these two situations were considered, and convergence rates for the sample autocovariances and autoregressive coefficients were established. We now consider the finite-sample properties of alternative estimators of the AR parameters of the approximating AR(h) process and corresponding estimates of the optimal approximating order h. The estimators considered include the Yule-Walker, Least Squares, and Burg estimators.
    Keywords: autoregression; autoregressive approximation; fractional process.
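Of the estimators compared, Yule-Walker is the simplest to state: plug sample autocovariances (with divisor n, which keeps the autocovariance matrix positive semi-definite) into the Yule-Walker equations. A minimal sketch for the AR(1) case (illustrative Python, not the paper's code):

```python
def sample_autocov(x, max_lag):
    """Biased sample autocovariances (divisor n); the divisor-n
    convention keeps the implied autocovariance matrix positive
    semi-definite, which Yule-Walker estimation relies on."""
    n = len(x)
    m = sum(x) / n
    return [sum((x[t] - m) * (x[t + h] - m) for t in range(n - h)) / n
            for h in range(max_lag + 1)]

def yule_walker_ar1(x):
    """Yule-Walker estimate for an AR(1):
    phi_hat = gamma_hat(1) / gamma_hat(0)."""
    g = sample_autocov(x, 1)
    return g[1] / g[0]
```

For a general AR(h) the same sample autocovariances feed the Durbin-Levinson recursion instead of this single ratio.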

    Bayesian Lattice Filters for Time-Varying Autoregression and Time-Frequency Analysis

    Modeling nonstationary processes is of paramount importance to many scientific disciplines, including environmental science, ecology, and finance, among others. Consequently, flexible methodology that provides accurate estimation across a wide range of processes is a subject of ongoing interest. We propose a novel approach to model-based time-frequency estimation using time-varying autoregressive models. In this context, we take a fully Bayesian approach and allow both the autoregressive coefficients and the innovation variance to vary over time. Importantly, our estimation method uses the lattice filter and is cast within the partial autocorrelation domain. The marginal posterior distributions are of standard form and, as a convenient by-product of our estimation method, our approach avoids undesirable matrix inversions. As such, estimation is extremely computationally efficient and stable. To illustrate the effectiveness of our approach, we conduct a comprehensive simulation study that compares our method with other competing methods and find that, in most cases, our approach is superior in terms of average squared error between the estimated and true time-varying spectral density. Lastly, we demonstrate our methodology through three modeling applications: insect communication signals, environmental data (wind components), and macroeconomic data (US gross domestic product (GDP) and consumption).
    Comment: 49 pages, 16 figures.
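Working in the partial autocorrelation domain is attractive because partial autocorrelations (reflection coefficients) map one-to-one onto AR coefficients via the step-up (Levinson) recursion, and the stationarity constraint reduces to each coefficient lying in (-1, 1). A sketch of that mapping (illustrative Python; the paper's Bayesian lattice-filter sampler is far richer than this):

```python
def pacf_to_ar(kappas):
    """Step-up recursion: map partial autocorrelations
    kappa_1..kappa_p (each in (-1, 1) for a stationary model)
    to the AR(p) coefficients phi_1..phi_p."""
    phi = []
    for k, kappa in enumerate(kappas, start=1):
        # update the order-(k-1) coefficients, then append kappa
        phi = [phi[j] - kappa * phi[k - 2 - j] for j in range(k - 1)] + [kappa]
    return phi
```

For example, the AR(2) model with coefficients (0.3, 0.2) has partial autocorrelations (0.3/(1 - 0.2), 0.2) = (0.375, 0.2), and the recursion maps these back to (0.3, 0.2).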
    Penalized Estimation of Autocorrelation

    This dissertation explores penalized estimation of the autocorrelation (ACF) and partial autocorrelation (PACF) functions, addressing the problem that the sample (partial) autocorrelation underestimates the magnitude of the (partial) autocorrelation in stationary time series. Although finite-sample bias corrections can be found under specific assumed models, no general formulae are available. We introduce a novel penalized M-estimator for the (partial) autocorrelation, with the penalty pushing the estimator toward a target selected from the data. This both encapsulates and differs from previous attempts at penalized estimation of autocorrelation, which shrink the estimator toward the target value of zero. Unlike the regression case, in which the least squares estimator is unbiased and shrinkage is used to reduce mean squared error by introducing bias, in the autocorrelation case the usual estimator is already biased toward zero. The penalty can be chosen so that the resulting estimator of autocorrelation is asymptotically normally distributed. Simulation evidence indicates that the proposed estimators of the (partial) autocorrelation tend to alleviate the bias and reduce mean squared error compared with the traditional sample ACF/PACF, especially when the time series has strong correlation. One application of the penalized (partial) autocorrelation estimator is portmanteau tests in time series. Target and tuning parameters can be selected to improve such tests: shrinking small-magnitude correlations toward zero controls Type I error, while increasing larger-magnitude correlations improves power. Specific data-based choices of target and tuning parameters are provided for general classes of time series goodness-of-fit tests. Asymptotic properties of the proposed test statistics are obtained. Simulations show that power is improved for all of the most prevalent tests from the literature, and the proposed methods are applied to data.
    Another application of the penalized ACF/PACF considered in this dissertation is optimal linear prediction of time series. We exploit ideas from high-dimensional autocorrelation matrix estimation, using tapering and banding as well as a regularized Durbin-Levinson algorithm, to derive new predictors based on the penalized correlation estimators. We show that the proposed estimators reduce the error in linear prediction of time series. The performance of the proposed methods is demonstrated on simulated data and applied to data.
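The shrink-toward-a-data-driven-target idea can be illustrated with a deliberately simplified linear-shrinkage stand-in. This is not the dissertation's penalized M-estimator; it is just a convex combination of the sample autocorrelations and a target vector, with a ridge-style weight, to show the mechanics of pulling small correlations toward zero and larger ones toward a nonzero target:

```python
def shrink_acf(r, target, lam):
    """Illustrative linear shrinkage of sample autocorrelations r
    toward a target vector: (1 - w) * r + w * target, with
    weight w = lam / (1 + lam) (lam >= 0; lam = 0 means no shrinkage).
    A stand-in for the dissertation's penalized M-estimator."""
    w = lam / (1.0 + lam)
    return [(1 - w) * ri + w * ti for ri, ti in zip(r, target)]
```

With a target of zero this deflates every correlation (the classical shrinkage direction); with a data-selected nonzero target it can instead inflate correlations the sample estimator has attenuated, which is the bias-correcting direction the dissertation exploits.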

    The Multistep Beveridge-Nelson Decomposition

    Get PDF
    The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. The paper introduces the multistep Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, which optimizes the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multistep Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification, and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth.
    Keywords: trend and cycle; forecasting; filtering.
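The contrast between the two forecast functions can be sketched in a few lines. In the iterated approach, one AR model is chained forward h times; in the direct approach, x[t+h] is regressed directly on current information. The Python below is illustrative only (AR(1), no intercept, plain least squares), not the paper's estimation procedure:

```python
def iterated_forecast(phi, x, h):
    """h-step forecast from an AR(p) with coefficients phi,
    obtained by chaining one-step predictions (the chain rule)."""
    hist = list(x)
    for _ in range(h):
        hist.append(sum(p * hist[-j - 1] for j, p in enumerate(phi)))
    return hist[-1]

def direct_ar1_forecast(x, h):
    """Direct h-step AR(1): regress x[t+h] on x[t] by least squares
    (no intercept, for brevity), then forecast from the last value."""
    num = sum(x[t] * x[t + h] for t in range(len(x) - h))
    den = sum(x[t] ** 2 for t in range(len(x) - h))
    beta = num / den
    return beta * x[-1]
```

When the AR(1) model is correctly specified the two agree (the direct slope estimates phi^h); the paper's point is that under misspecification the direct projection, being tuned to horizon h, can extract a better trend.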