
    The Multistep Beveridge-Nelson Decomposition

    The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. The paper introduces the multistep Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, optimizing the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multistep Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification, and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth.
    Keywords: Trend and Cycle; Forecasting; Filtering.
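    A minimal sketch of the two forecast functions contrasted above, assuming an AR(1) fitted to the differenced series and a finite truncation of the eventual forecast; the function names and the truncation horizon are illustrative choices, not taken from the paper.

```python
import numpy as np

def _slope(x, z):
    """OLS slope of z on x (both demeaned)."""
    x, z = x - x.mean(), z - z.mean()
    return np.dot(x, z) / np.dot(x, x)

def iterated_ar1_forecasts(dy, horizon):
    """One-step AR(1) on the differenced series, iterated ahead via the chain rule."""
    mu = dy.mean()
    phi = _slope(dy[:-1], dy[1:])
    return mu + phi ** np.arange(1, horizon + 1) * (dy[-1] - mu)

def direct_ar_forecasts(dy, horizon):
    """Direct approach: a separate regression of dy_{t+h} on dy_t for every horizon h."""
    mu = dy.mean()
    return np.array([mu + _slope(dy[:-h], dy[h:]) * (dy[-1] - mu)
                     for h in range(1, horizon + 1)])

def bn_trend(y, growth_forecasts):
    """Truncated Beveridge-Nelson trend: the current level plus the cumulated
    deviations of forecast growth from its long-run mean."""
    return y[-1] + np.sum(growth_forecasts - np.diff(y).mean())
```

    Feeding the direct forecasts into bn_trend gives a multistep trend; feeding the iterated ones gives the standard trend, both truncated at the chosen horizon.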

    Non-Parametric Direct Multi-step Estimation for Forecasting Economic Processes

    We evaluate the asymptotic and finite-sample properties of direct multi-step estimation (DMS) for forecasting at several horizons. For forecast accuracy gains from DMS in finite samples, mis-specification and non-stationarity of the DGP are necessary, but when a model is well-specified, iterating the one-step-ahead forecasts may not be asymptotically preferable. If a model is mis-specified for a non-stationary DGP, omitting either negative residual serial correlation or regime shifts, DMS can forecast more accurately. Monte Carlo simulations clarify the non-linear dependence of the estimation and forecast biases on the parameters of the DGP, and explain existing results.
    Keywords: Adaptive estimation, multi-step estimation, dynamic forecasts, model mis-specification.
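    As a rough illustration of the mis-specification case mentioned above, the sketch below fits an AR(1) to data from an ARMA(1,1) DGP with negative residual serial correlation and compares the h-step MSPE of the iterated and direct estimators; the DGP parameters, sample size, and horizon are arbitrary placeholder values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arma11(n, phi=0.9, theta=-0.7):
    """ARMA(1,1) with a negative MA term: the omitted residual correlation case."""
    e = rng.standard_normal(n + 1)
    y = np.zeros(n)
    y[0] = e[1] + theta * e[0]
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t + 1] + theta * e[t]
    return y

def ols_slope(x, z):
    """OLS slope of z on x (both demeaned)."""
    x, z = x - x.mean(), z - z.mean()
    return np.dot(x, z) / np.dot(x, x)

def compare_mspe(h=4, n=200, reps=2000):
    """MSPE of the iterated AR(1) forecast versus the direct h-step projection."""
    err_iter, err_direct = [], []
    for _ in range(reps):
        y = simulate_arma11(n + h)
        train, target = y[:n], y[n + h - 1]
        phi_hat = ols_slope(train[:-1], train[1:])     # one-step fit, then iterate
        beta_hat = ols_slope(train[:-h], train[h:])    # direct regression at horizon h
        err_iter.append(target - phi_hat ** h * train[-1])
        err_direct.append(target - beta_hat * train[-1])
    return np.mean(np.square(err_iter)), np.mean(np.square(err_direct))
```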

    The use of preliminary data in econometric forecasting: an application with the Bank of Italy Quarterly Model

    This paper considers forecasting by econometric and time series models using preliminary (or provisional) data. The standard practice is to ignore the distinction between provisional and final data. We call the forecasts that ignore such a distinction naive forecasts, which are generated as projections from a correctly specified model using the most recent estimates of the unobserved final figures. It is first shown that in dynamic models a multistep-ahead naive forecast can achieve a lower mean square error than a single-step-ahead one, intuitively because it is less affected by the measurement noise embedded in the preliminary observations. The best forecasts are obtained by combining, in an optimal way, the information provided by the model with the new information contained in the preliminary data. This can be done in the state space framework, as suggested in the literature. Here we consider two simple methods to combine, in general suboptimally, the two sources of information: modifying the forecast initial conditions via standard regressions and using intercept corrections. The issues are explored with reference to the Italian national accounts data and the Bank of Italy Quarterly Econometric Model (BIQM). A series of simulation experiments with the model shows that these methods are quite effective in reducing the extra volatility of prediction due to the use of preliminary data.
    Keywords: preliminary data, macroeconomic forecasting, Bank of Italy Quarterly Econometric Model
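    A stylised sketch of the second combination device mentioned above, an intercept correction that shifts the model's forecast path by the latest gap between the preliminary figure and the model's fitted value; the function and its decay parameter are illustrative, not part of the BIQM set-up.

```python
import numpy as np

def intercept_corrected_forecast(model_forecasts, preliminary_latest, fitted_latest, decay=1.0):
    """Shift the forecast path by the most recent gap between the preliminary figure
    and the model's fitted value; decay < 1 lets the correction die out with the horizon.
    All names and the decay scheme are hypothetical illustrations."""
    gap = preliminary_latest - fitted_latest
    horizons = np.arange(1, len(model_forecasts) + 1)
    return np.asarray(model_forecasts) + gap * decay ** horizons
```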

    Approximately normal tests for equal predictive accuracy in nested models

    Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure.
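    A compact sketch of the kind of adjusted MSPE comparison described here, assuming one-step-ahead forecasts and omitting any serial-correlation correction of the variance; variable names are illustrative.

```python
import numpy as np

def mspe_adjusted_stat(y, f_small, f_large):
    """MSPE-adjusted comparison of nested models: subtract from the larger model's
    squared errors the term reflecting estimation noise, then form a t-type statistic
    to be compared with standard normal critical values (one-sided, reject the null
    of equal accuracy for large positive values)."""
    e1 = y - f_small                      # errors of the parsimonious null model
    e2 = y - f_large                      # errors of the larger nesting model
    adj = (f_small - f_large) ** 2        # noise from estimating zero-valued parameters
    f = e1 ** 2 - (e2 ** 2 - adj)         # adjusted loss differential
    n = len(f)
    return np.sqrt(n) * f.mean() / f.std(ddof=1)
```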

    Forecasting high waters at Venice Lagoon using chaotic time series analysis and nonlinear neural networks

    Time series analysis based on nonlinear dynamical systems theory and multilayer neural network models has been applied to the sequence of water level data recorded every hour at 'Punta della Salute' in the Venice Lagoon during the years 1980-1994. The first method is based on the reconstruction of the state space attractor using time delay embedding vectors and on the characterisation of the invariant properties which define its dynamics. The results suggest the existence of a low-dimensional chaotic attractor with a Lyapunov dimension, DL, of around 6.6 and a predictability between 8 and 13 hours ahead. Furthermore, once the attractor has been reconstructed, it is possible to make predictions by mapping local neighbourhood to local neighbourhood in the reconstructed phase space. To compare the prediction results with another nonlinear method, two nonlinear autoregressive models (NAR) based on multilayer feedforward neural networks have been developed. The study shows that nonlinear forecasting produces adequate results for the 'normal' dynamic behaviour of the water level of the Venice Lagoon, outperforming linear algorithms; however, both methods fail to forecast the 'high water' phenomenon more than 2-3 hours ahead.
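    A minimal sketch of the local-neighbourhood predictor in the reconstructed phase space; the embedding dimension, delay, and number of neighbours below are placeholder values, not those estimated from the Punta della Salute record.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct the state space with time-delay embedding vectors of dimension dim and lag tau."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_neighbourhood_forecast(x, dim=7, tau=1, k=10, steps=3):
    """Predict `steps` hours ahead by averaging the observed evolution of the k nearest
    neighbours of the current embedding vector (zeroth-order local predictor)."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    current = emb[-1]
    candidates = emb[: len(emb) - steps]          # neighbours whose future is already observed
    dists = np.linalg.norm(candidates - current, axis=1)
    idx = np.argsort(dists)[:k]
    futures = np.asarray(x)[(dim - 1) * tau + idx + steps]   # value `steps` ahead of each neighbour
    return futures.mean()
```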