3,828 research outputs found

    How Informative are In-Sample Information Criteria to Forecasting? The Case of Chilean GDP

    There is no standard economic forecasting procedure that systematically outperforms the others at all horizons and with any dataset. A common way to proceed, in many contexts, is to choose the best model within a family based on a fitting criterion, and then forecast. I compare the out-of-sample performance of a large number of autoregressive integrated moving average (ARIMA) models with some variations, chosen by three commonly used information criteria for model building: Akaike, Schwarz, and Hannan-Quinn. I perform this exercise to identify how to achieve the smallest root mean squared forecast error with models based on information criteria. I use the Chilean GDP dataset, estimating with a rolling window sample to generate one- to four-step-ahead forecasts. I also examine the role of seasonal adjustment and the Easter effect on out-of-sample performance. After the estimation of more than 20 million models, the results show that Akaike and Schwarz are the better criteria for forecasting purposes, with the traditional ARMA specification preferred. Accounting for the Easter effect improves forecast accuracy only with seasonally adjusted data, and second-order stationarity works best.
    Keywords: data mining; forecasting; ARIMA; seasonal adjustment; Easter effect
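
    A minimal sketch (not from the paper) of the workflow this abstract describes: pick an ARIMA order on a training window by an in-sample information criterion, then produce rolling forecasts and score them by root mean squared error. The series `y`, the window length, and the order grid are illustrative placeholders; statsmodels is assumed.

```python
# Sketch: information-criterion-based ARIMA selection with rolling forecasts.
# Assumes a quarterly series `y` (e.g. log GDP); statsmodels is required.
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def select_order(train, criterion="aic", max_p=4, max_q=4, d=1):
    """Return the (p, d, q) order minimizing the chosen in-sample criterion."""
    best_order, best_value = (0, d, 0), np.inf
    for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
        try:
            res = ARIMA(train, order=(p, d, q)).fit()
        except Exception:
            continue  # some orders may fail to converge
        value = {"aic": res.aic, "bic": res.bic, "hqic": res.hqic}[criterion]
        if value < best_value:
            best_order, best_value = (p, d, q), value
    return best_order

def rolling_rmse(y, window=60, criterion="bic", horizon=1):
    """Rolling window: re-select and re-fit at each step, forecast `horizon` ahead."""
    errors = []
    for start in range(len(y) - window - horizon + 1):
        train = y[start:start + window]
        order = select_order(train, criterion=criterion)
        fcst = ARIMA(train, order=order).fit().forecast(steps=horizon)[-1]
        errors.append(y[start + window + horizon - 1] - fcst)
    return float(np.sqrt(np.mean(np.square(errors))))

# Example with simulated data standing in for the GDP series:
# y = np.cumsum(np.random.default_rng(0).normal(size=120))
# print({c: rolling_rmse(y, criterion=c) for c in ("aic", "bic", "hqic")})
```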

    Computational Aspects of Maximum Likelihood Estimation of Autoregressive Fractionally Integrated Moving Average Models

    We discuss computational aspects of likelihood-based estimation of univariate ARFIMA(p,d,q) models. We show how efficient computation and simulation are feasible, even for large samples. We also discuss the implementation of analytical bias corrections.
    Keywords: Long memory, Bias, Modified profile likelihood, Restricted maximum likelihood estimator, Time-series regression model likelihood
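
    As a hedged illustration of likelihood-based estimation in this model class, here is a sketch that uses the Whittle frequency-domain approximation (not the exact or modified profile likelihoods discussed in the paper) to estimate the memory parameter d of an ARFIMA(0,d,0) process:

```python
# Sketch: Whittle (frequency-domain) estimation of d for ARFIMA(0, d, 0).
# An approximation to exact ML, shown here only to illustrate the idea.
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Estimate the long-memory parameter d by minimizing the Whittle objective."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Periodogram at Fourier frequencies lambda_j = 2*pi*j/n, j = 1..m
    m = (n - 1) // 2
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)

    def objective(d):
        # ARFIMA(0,d,0) spectrum is proportional to |2 sin(lambda/2)|^(-2d)
        g = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)
        sigma2 = np.mean(periodogram / g)           # profiled-out scale
        return np.log(sigma2) + np.mean(np.log(g))  # Whittle criterion (up to constants)

    return minimize_scalar(objective, bounds=(-0.49, 0.49), method="bounded").x

# Example: white noise should give d close to 0.
# print(whittle_d(np.random.default_rng(0).normal(size=2000)))
```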

    A globally convergent matricial algorithm for multivariate spectral estimation

    In this paper, we first describe a matricial Newton-type algorithm designed to solve the multivariable spectrum approximation problem. We then prove its global convergence. Finally, we apply this approximation procedure to multivariate spectral estimation and test its effectiveness through simulation. The simulations show that, in the case of short observation records, this method may provide a valid alternative to standard multivariable identification techniques such as MATLAB's PEM and N4SID.
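
    For context only, a sketch of a standard smoothed-periodogram (Welch cross-spectra) estimate of a multivariate spectral density matrix; this is a conventional baseline, not the matricial Newton-type algorithm of the paper, and the example series is a placeholder.

```python
# Sketch: baseline estimate of a bivariate spectral density matrix via Welch
# cross-spectra. NOT the paper's Newton-type approximation algorithm.
import numpy as np
from scipy.signal import csd

def spectral_matrix(y, fs=1.0, nperseg=256):
    """Return frequencies and the (nfreq, k, k) array of cross-spectral estimates."""
    y = np.asarray(y)                  # shape (n_samples, k)
    k = y.shape[1]
    freqs, _ = csd(y[:, 0], y[:, 0], fs=fs, nperseg=nperseg)
    S = np.empty((len(freqs), k, k), dtype=complex)
    for i in range(k):
        for j in range(k):
            _, S[:, i, j] = csd(y[:, i], y[:, j], fs=fs, nperseg=nperseg)
    return freqs, S

# Example with a coupled bivariate series:
# rng = np.random.default_rng(0)
# e = rng.normal(size=(2048, 2)); y = e.copy()
# for t in range(1, len(y)): y[t] = 0.6 * y[t - 1][::-1] + e[t]
# freqs, S = spectral_matrix(y)
```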

    Computing and estimating information matrices of weak ARMA models

    Numerous time series admit "weak" autoregressive-moving average (ARMA) representations, in which the errors are uncorrelated but not necessarily independent nor martingale differences. The statistical inference of this general class of models requires the estimation of generalized Fisher information matrices. We give analytic expressions and propose consistent estimators of these matrices, at any point of the parameter space. Our results are illustrated by means of Monte Carlo experiments and by analyzing the dynamics of daily returns and squared daily returns of financial series.
    Keywords: Asymptotic relative efficiency (ARE); Bahadur's slope; Information matrices; Lagrange Multiplier test; Nonlinear processes; Wald test; Weak ARMA models
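
    A hedged, minimal illustration of the model class only (not of the paper's information-matrix estimators): errors of the form e_t = z_t * z_{t-1} with z_t i.i.d. are uncorrelated but not independent, so an ARMA model driven by them is "weak". The coefficients below are arbitrary placeholders.

```python
# Sketch: simulate a "weak" ARMA(1,1) whose errors are uncorrelated but dependent.
import numpy as np

def weak_arma_11(n, phi=0.5, theta=0.3, seed=0):
    """y_t = phi*y_{t-1} + e_t + theta*e_{t-1}, with weak white noise e_t = z_t*z_{t-1}."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n + 1)
    e = z[1:] * z[:-1]              # uncorrelated, but e_t and e_{t+1} share z_t: not independent
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]
    return y, e

def acf(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

y, e = weak_arma_11(100_000)
print("corr(e_t, e_{t-1})      ~ 0:", round(acf(e, 1), 3))       # uncorrelated
print("corr(e_t^2, e_{t-1}^2) != 0:", round(acf(e ** 2, 1), 3))  # but dependent
```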

    Bootstrap Approximation to Prediction MSE for State-Space Models with Estimated Parameters

    We propose a simple but general bootstrap method for estimating the Prediction Mean Square Error (PMSE) of the state vector predictors when the unknown model parameters are estimated from the observed series. As is well known, substituting the model parameters by the sample estimates in the theoretical PMSE expression that assumes known parameter values results in under-estimation of the true PMSE. Methods proposed in the literature to deal with this problem in state-space modelling are inadequate and may not even be operational when fitting complex models, or when some of the parameters are close to their boundary values. The proposed method consists of generating a large number of series from the model fitted to the original observations, re-estimating the model parameters for each series using the same method as used for the observed series, and then estimating separately the component of PMSE resulting from filter uncertainty and the component resulting from parameter uncertainty. Application of the method to a model fitted to sample estimates of employment ratios in the U.S.A., which contains eighteen unknown parameters estimated by a three-step procedure, yields accurate results. The procedure is applicable to mixed linear models that can be cast into state-space form. (Updated 6 October 2004.)
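
    A minimal sketch of one simple variant of this idea, assuming a local level model fitted with statsmodels: the naive PMSE (filter uncertainty at the estimated parameters) is augmented by a parametric-bootstrap estimate of the parameter-uncertainty component. It is not the paper's exact estimator, and the model and sample sizes are placeholders.

```python
# Sketch: parametric-bootstrap correction of prediction MSE for a local level model.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

def bootstrap_pmse(y, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    mod = UnobservedComponents(y, level="local level")
    res = mod.fit(disp=False)
    params = dict(zip(mod.param_names, res.params))

    # Component 1: filter uncertainty at the estimated parameters (the "naive" PMSE).
    naive = res.filtered_state_cov[0, 0, :]
    level0 = res.filtered_state[0, 0]

    # Component 2: parameter uncertainty, via re-estimation on simulated series.
    states = []
    for _ in range(n_boot):
        # Simulate y*_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t from the fitted model.
        eta = rng.normal(0.0, np.sqrt(params["sigma2.level"]), size=len(y))
        eps = rng.normal(0.0, np.sqrt(params["sigma2.irregular"]), size=len(y))
        y_star = level0 + np.cumsum(eta) + eps
        # Re-estimate on the bootstrap series, then filter the ORIGINAL series with
        # the bootstrap parameters to isolate the effect of parameter error.
        res_star = UnobservedComponents(y_star, level="local level").fit(disp=False)
        states.append(mod.filter(res_star.params).filtered_state[0])
    param_component = np.var(np.asarray(states), axis=0)

    return naive + param_component  # corrected PMSE of the filtered level, per period

# Example:
# rng = np.random.default_rng(1)
# y = np.cumsum(rng.normal(size=200)) + rng.normal(size=200)
# print(bootstrap_pmse(y, n_boot=50)[-5:])
```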

    Using Subspace Methods for Estimating ARMA Models for Multivariate Time Series with Conditionally Heteroskedastic Innovations

    This paper deals with the estimation of linear dynamic models of the ARMA type for the conditional mean of time series with a conditionally heteroskedastic innovation process, as widely used in modelling financial time series. Estimation is performed using subspace methods, which are known to have computational advantages compared to prediction error methods based on criterion minimization. These advantages are especially strong for high-dimensional time series. The subspace methods are shown to provide consistent estimators. Moreover, asymptotic equivalence to prediction error estimators in terms of the asymptotic variance is proved. Order estimation techniques are also proposed and analyzed. The estimators are not efficient, as they do not model the conditional variance. Nevertheless, they can be used to obtain consistent estimators of the innovations. In a second step, these estimated residuals can be used to alleviate the problem of specifying the variance model, in particular in the multi-output case. This is demonstrated in an ARCH setting, where it is proved that the estimated innovations can be used in place of the true innovations for testing in a linear least squares context, in order to specify the structure of the ARCH model without changing the asymptotic distribution.
    Keywords: Multivariate models, conditional heteroskedasticity, ARMA systems, subspace methods
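
    A minimal sketch of the two-step idea, with an ordinary ARMA maximum-likelihood fit standing in for the subspace estimator and statsmodels' ARCH LM test used to probe the variance structure of the estimated innovations; the simulated data and coefficients are placeholders.

```python
# Sketch of the two-step idea: (1) fit the conditional mean and extract residuals,
# (2) test the residuals for ARCH structure. A plain ARMA MLE stands in here for
# the subspace estimator used in the paper.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(0)
n = 2000

# Simulate an ARMA(1,1) with ARCH(1) innovations.
e = np.zeros(n)
z = rng.normal(size=n)
for t in range(1, n):
    e[t] = z[t] * np.sqrt(0.2 + 0.5 * e[t - 1] ** 2)   # ARCH(1) innovation
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + e[t] + 0.3 * e[t - 1]       # ARMA(1,1) conditional mean

# Step 1: estimate the conditional mean and recover the innovations.
resid = ARIMA(y, order=(1, 0, 1)).fit().resid

# Step 2: use the estimated innovations in place of the true ones to test for
# (and help specify) ARCH effects in the conditional variance.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(resid, nlags=4)
print(f"ARCH LM test: stat={lm_stat:.1f}, p-value={lm_pvalue:.3g}")
```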

    Adaptive identification and control of structural dynamics systems using recursive lattice filters

    A new approach to adaptive identification and control of structural dynamic systems is presented, using the least-squares lattice filters that are widely used in signal processing. Testing procedures for interfacing the lattice-filter identification methods with a modal control method for stable closed-loop adaptive control are also presented. The methods are illustrated for a free-free beam and for a complex flexible grid, with the basic control objective being vibration suppression. The approach is validated using both simulations and the experimental facilities available at the Langley Research Center.
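
    A hedged sketch of on-line identification of a lightly damped vibration mode, using ordinary recursive least squares on an AR model rather than the least-squares lattice recursion described in the paper; the model order, forgetting factor, and simulated data are illustrative assumptions.

```python
# Sketch: recursive least-squares (RLS) identification of an AR(2) vibration model.
# Plain RLS is used here, not the least-squares lattice recursion of the paper.
import numpy as np

def rls_ar(y, order=2, lam=0.99):
    """Track AR coefficients of y on-line with exponential forgetting factor lam."""
    theta = np.zeros(order)                 # AR coefficient estimates
    P = 1e4 * np.eye(order)                 # inverse information matrix
    history = []
    for t in range(order, len(y)):
        phi = y[t - order:t][::-1]          # regressor: [y_{t-1}, ..., y_{t-order}]
        err = y[t] - phi @ theta            # one-step prediction error
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * err
        P = (P - np.outer(gain, phi) @ P) / lam
        history.append(theta.copy())
    return np.asarray(history)

# Example: a damped oscillation, y_t = 1.8*y_{t-1} - 0.9*y_{t-2} + noise.
rng = np.random.default_rng(0)
y = np.zeros(1000)
for t in range(2, len(y)):
    y[t] = 1.8 * y[t - 1] - 0.9 * y[t - 2] + 0.1 * rng.normal()
print(rls_ar(y)[-1])   # estimates should approach [1.8, -0.9]
```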

    Forecasting trends with asset prices

    In this paper, we consider a stochastic asset price model where the trend is an unobservable Ornstein-Uhlenbeck process. We first review some classical results from Kalman filtering. As expected, the choice of the parameters is crucial to put it into practice. For this purpose, we obtain the likelihood in closed form and provide two on-line computations of this function. Then, we investigate the asymptotic behaviour of statistical estimators. Finally, we quantify the effect of a bad calibration with the continuous-time mis-specified Kalman filter. Numerical examples illustrate the difficulty of trend forecasting in financial time series.
    Comment: 26 pages, 11 figures
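
    A minimal, hedged sketch of the kind of filtering the abstract describes: after discretization, returns r_t = mu_t*dt + sigma*sqrt(dt)*eps_t carry an unobserved Ornstein-Uhlenbeck trend mu_t, which a scalar Kalman filter tracks while accumulating the Gaussian log-likelihood. The parameter values below are placeholders, not the paper's calibration.

```python
# Sketch: scalar Kalman filter for returns r_t = mu_t*dt + sigma*sqrt(dt)*eps_t,
# where the trend follows a discretized Ornstein-Uhlenbeck process
# mu_{t+1} = mu_t + a*(b - mu_t)*dt + eta*sqrt(dt)*xi_t.
import numpy as np

def ou_trend_filter(returns, a=2.0, b=0.0, sigma=0.2, eta=0.3, dt=1 / 252):
    """Return filtered trend estimates and the Gaussian log-likelihood."""
    m, P = b, eta ** 2 / (2 * a)            # start from the stationary distribution
    H, R, Q, F = dt, sigma ** 2 * dt, eta ** 2 * dt, 1.0 - a * dt
    trend, loglik = [], 0.0
    for r in returns:
        # Predict the trend one step ahead.
        m_pred = F * m + a * b * dt
        P_pred = F ** 2 * P + Q
        # Update with the observed return.
        v = r - H * m_pred                  # innovation
        S = H ** 2 * P_pred + R             # innovation variance
        K = P_pred * H / S                  # Kalman gain
        m = m_pred + K * v
        P = (1.0 - K * H) * P_pred
        loglik += -0.5 * (np.log(2 * np.pi * S) + v ** 2 / S)
        trend.append(m)
    return np.asarray(trend), loglik

# Example with simulated daily returns:
# r = 0.2 * np.sqrt(1 / 252) * np.random.default_rng(0).normal(size=1000)
# mu_hat, ll = ou_trend_filter(r)
```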