
    Testing for flexible nonlinear trends with an integrated or stationary noise component

    This paper proposes a new test for the presence of a nonlinear deterministic trend approximated by a Fourier expansion in a univariate time series for which there is no prior knowledge as to whether the noise component is stationary or contains an autoregressive unit root. Our approach builds on the work of Perron and Yabu (2009a) and is based on a Feasible Generalized Least Squares procedure that uses a super-efficient estimator of the sum of the autoregressive coefficients α when α = 1. The resulting Wald test statistic asymptotically follows a chi-square distribution in both the I(0) and I(1) cases. To improve the finite sample properties of the test, we use a bias-corrected version of the OLS estimator of α proposed by Roy and Fuller (2001). We show that our procedure is substantially more powerful than currently available alternatives. We illustrate the usefulness of our method via an application to modelling the trend of global and hemispheric temperatures.
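    The Fourier approximation of a nonlinear trend can be illustrated with a simple OLS fit: the trend is represented by a constant, a linear term, and sine/cosine regressors at low frequencies. This is only a minimal sketch of the trend specification, not the paper's FGLS/Wald testing procedure; the function name and the single-frequency default are my own choices.

    ```python
    import numpy as np

    def fourier_trend_fit(y, n_freq=1):
        """Fit a Fourier-approximated deterministic trend by OLS.

        Regressors: constant, linear trend, and sin/cos terms for
        frequencies k = 1..n_freq of the sample span. Illustrative
        only; the paper's test uses an FGLS procedure on top of a
        specification like this.
        """
        y = np.asarray(y, dtype=float)
        T = len(y)
        t = np.arange(1, T + 1)
        cols = [np.ones(T), t]
        for k in range(1, n_freq + 1):
            cols.append(np.sin(2 * np.pi * k * t / T))
            cols.append(np.cos(2 * np.pi * k * t / T))
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta, X @ beta  # coefficients and fitted trend
    ```

    With `n_freq=1` the design has four columns, enough to capture a single smooth break or cycle in the trend over the sample.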

    A sharp analysis on the asymptotic behavior of the Durbin-Watson statistic for the first-order autoregressive process

    The purpose of this paper is to provide a sharp analysis of the asymptotic behavior of the Durbin-Watson statistic. We focus our attention on the first-order autoregressive process where the driven noise is also given by a first-order autoregressive process. We establish the almost sure convergence and the asymptotic normality both for the least squares estimator of the unknown parameter of the autoregressive process and for the serial correlation estimator associated with the driven noise. In addition, the almost sure rates of convergence of our estimates are also provided. This allows us to establish the almost sure convergence and the asymptotic normality of the Durbin-Watson statistic. Finally, we propose a new bilateral statistical test for residual autocorrelation.
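    The Durbin-Watson statistic itself is elementary to compute: the sum of squared first differences of the residuals divided by their sum of squares. A minimal sketch (the function name is mine; the paper's contribution is the asymptotic analysis, not the formula):

    ```python
    import numpy as np

    def durbin_watson(residuals):
        """Durbin-Watson statistic for a residual series.

        DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2. Values near 2
        suggest no first-order serial correlation; values near 0
        indicate positive, and near 4 negative, autocorrelation.
        """
        e = np.asarray(residuals, dtype=float)
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    ```

    For example, a perfectly alternating residual series gives a statistic well above 2 (strong negative serial correlation), while a constant nonzero series gives 0.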

    Interpolation, outliers and inverse autocorrelations

    The paper addresses the problem of estimating missing observations in linear, possibly nonstationary, stochastic processes when the model is known. The general case of any possible distribution of missing observations in the time series is considered, and analytical expressions for the optimal estimators and their associated mean squared errors are obtained. These expressions involve solely the elements of the inverse or dual autocorrelation function of the series. This optimal estimator (the conditional expectation of the missing observations given the available ones) is equal to the estimator that results from filling the missing values in the series with arbitrary numbers, treating these numbers as additive outliers, and removing the outlier effects from the invented numbers using intervention analysis.
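    The role of the inverse autocorrelation function can be seen in the simplest case: one interior missing value in a zero-mean AR(1) process with parameter φ. There the inverse autocorrelation at lag 1 is -φ/(1+φ²), and the conditional expectation reduces to a weighted average of the two neighbours. This is a textbook special case used here only for illustration; the function name and AR(1) restriction are my own, not the paper's general formulas.

    ```python
    def interpolate_ar1(x, t, phi):
        """Optimal estimate of a single missing value x[t] in a
        zero-mean AR(1) series with known parameter phi.

        The inverse autocorrelation of AR(1) at lag 1 is
        -phi / (1 + phi**2), so the conditional expectation given
        the rest of the series is
            E[x_t | rest] = phi / (1 + phi**2) * (x[t-1] + x[t+1]).
        """
        w = phi / (1.0 + phi ** 2)
        return w * (x[t - 1] + x[t + 1])
    ```

    For φ = 0, the series is white noise and the interpolator correctly returns 0 (the unconditional mean), since the neighbours carry no information.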