Confidence Intervals for Half-life Deviations from Purchasing Power Parity
According to the Purchasing Power Parity (PPP) theory, real exchange rate fluctuations are mainly caused by transitory shocks. The theory fits one empirical feature of the data well, namely the short-run volatility of real exchange rates, but it also implies that shocks should die away in one to two years (the time interval compatible with price and wage stickiness). Existing point estimates of half-life deviations from PPP are on the order of 3 to 5 years, too large to be reconciled with PPP. The aim of this paper is to assess how much uncertainty there is around these point estimates. We construct confidence intervals that are robust to high persistence in the presence of small sample sizes. The empirical evidence suggests that the lower bound of the confidence interval is around 4 to 6 quarters for most currencies. With a few exceptions, the results show that the data are not inconsistent with the PPP theory, although we cannot provide conclusive evidence in favor of PPP either.
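As a rough illustration (not from the paper), the half-life figures above follow from modeling the real exchange rate as an AR(1) process with persistence rho, for which the half-life of a shock is ln(0.5)/ln(rho). The specific rho values below are illustrative, not estimates from the paper.

```python
import math

def half_life(rho):
    """Half-life (in periods) of a shock in an AR(1) process with coefficient rho."""
    return math.log(0.5) / math.log(rho)

# Quarterly persistence of about 0.95 implies a half-life of roughly
# 13.5 quarters, i.e. the 3-to-5-year range reported in the literature.
print(round(half_life(0.95), 1))   # ~13.5 quarters

# A half-life of 4-6 quarters (one to two years, the PPP-compatible range)
# corresponds to rho roughly between 0.84 and 0.89.
print(round(half_life(0.84), 1))   # ~4.0 quarters
```

The mapping makes clear why the confidence-interval question matters: small changes in the estimated persistence near one translate into large changes in the implied half-life.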
Are Exchange Rates Really Random Walks? Some Evidence Robust to Parameter Instability
Many authors have documented that it is challenging to explain exchange rate fluctuations with macroeconomic fundamentals: a random walk forecasts future exchange rates better than existing macroeconomic models. This paper applies newly developed tests for nested models that are robust to the presence of parameter instability. The empirical evidence shows that for some countries we can reject the hypothesis that exchange rates are random walks. This raises the possibility that economic models were previously rejected not because the fundamentals are completely unrelated to exchange rate fluctuations, but because the relationship is unstable over time and, thus, difficult to capture by Granger Causality tests or by forecast comparisons. We also analyze forecasts that exploit the time variation in the parameters and find that, in some cases, they can improve over the random walk. Keywords: forecasting, exchange rates, parameter instability, random walks.
Has models' forecasting performance for US output growth and inflation changed over time, and when?
We evaluate various models' relative performance in forecasting future US output growth and inflation on a monthly basis. Our approach takes into account the possibility that the models' relative performance varies over time. We show that the models' relative performance has, in fact, changed dramatically over time, both for revised and real-time data, and investigate possible factors that might explain such changes. In addition, this paper establishes two empirical stylized facts: most predictors for output growth lost their predictive ability in the mid-1970s and became essentially useless in the last two decades; when forecasting inflation, instead, fewer predictors are significant (notably, capacity utilization and unemployment), and their predictive ability worsened significantly around the time of the Great Moderation. Keywords: Output Forecasts, Inflation Forecasts, Model Selection, Structural Change, Forecast Evaluation, Real-time Data.
Impulse Response Confidence Intervals for Persistent Data: What Have We Learned?
This paper is a comprehensive comparison of existing methods for constructing confidence bands for univariate impulse response functions in the presence of high persistence. Monte Carlo results show that Kilian (1998a), Wright (2000), Gospodinov (2004) and Pesavento and Rossi (2005) have favorable coverage properties, although they differ in terms of robustness at various horizons, median unbiasedness, and reliability in the possible presence of a unit or mildly explosive root. On the other hand, methods like Runkle's (1987) bootstrap, Andrews and Chen (1994), and regressions in levels or first differences (even when based on pre-tests) may not have accurate coverage properties. The paper makes recommendations as to the appropriateness of each method in empirical work. Keywords: local-to-unity asymptotics, persistence, impulse response functions.
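To fix ideas, a minimal sketch of the kind of band being compared: a naive residual-bootstrap percentile band for the impulse response rho**h of an AR(1). This is exactly the sort of simple bootstrap that can undercover when rho is near one, which is why the bias-corrected and local-to-unity methods cited above exist; the function name and all defaults here are illustrative, not from any of the papers.

```python
import numpy as np

def ar1_irf_band(y, horizons=12, n_boot=500, alpha=0.10, seed=0):
    """Percentile bootstrap band for the impulse response rho**h of an AR(1).

    A naive residual bootstrap, shown only as a baseline: it can undercover
    for highly persistent data, the situation the paper focuses on.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    x, z = y[:-1], y[1:]
    rho = x @ z / (x @ x)                      # OLS estimate of rho
    resid = z - rho * x
    resid -= resid.mean()
    hs = np.arange(1, horizons + 1)
    draws = np.empty((n_boot, horizons))
    for b in range(n_boot):
        e = rng.choice(resid, size=len(y))     # resample centered residuals
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):             # rebuild series under fitted AR(1)
            yb[t] = rho * yb[t - 1] + e[t]
        xb, zb = yb[:-1], yb[1:]
        rb = xb @ zb / (xb @ xb)
        draws[b] = rb ** hs                    # bootstrap impulse response path
    lo = np.percentile(draws, 100 * alpha / 2, axis=0)
    hi = np.percentile(draws, 100 * (1 - alpha / 2), axis=0)
    return rho ** hs, lo, hi
```

For persistent series, grid-bootstrap or local-to-unity constructions of the kind compared in the paper are preferable to this plain percentile band.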
Out-of-sample forecast tests robust to the choice of window size
This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. The authors show that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability. Keywords: forecasting.
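The core idea, evaluating predictive ability over a range of window sizes rather than one arbitrary choice, can be sketched as below. This traces how out-of-sample RMSE varies with the rolling estimation window for a simple one-regressor model; the paper's tests aggregate this evidence formally, which the sketch does not attempt, and the function name is hypothetical.

```python
import numpy as np

def rmse_by_window(y, x, windows):
    """Out-of-sample RMSE of one-step-ahead forecasts, per window size.

    Illustrative only: rolling OLS (no intercept) of y[t+1] on x[t],
    re-estimated over the most recent w observations at each date.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    out = {}
    for w in windows:
        errs = []
        for t in range(w, len(y) - 1):
            xs, ys = x[t - w:t], y[t - w + 1:t + 1]   # last w (x[s], y[s+1]) pairs
            beta = xs @ ys / (xs @ xs)                # rolling OLS slope
            errs.append(y[t + 1] - beta * x[t])       # one-step-ahead error
        out[w] = float(np.sqrt(np.mean(np.square(errs))))
    return out
```

Plotting the resulting RMSE against w makes the paper's point visible: conclusions drawn from a single window size can flip when the window changes.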
Model Comparisons in Unstable Environments
The goal of this paper is to develop formal techniques for analyzing the relative in-sample performance of two competing, misspecified models in the presence of possible data instability. The central idea of our methodology is to propose a measure of the models' local relative performance: the "local Kullback-Leibler Information Criterion" (KLIC), which measures the relative distance of the two models' (misspecified) likelihoods from the true likelihood at a particular point in time. We discuss estimation and inference about the local relative KLIC; in particular, we propose statistical tests to investigate its stability over time. Compared to previous approaches to model selection, which are based on measures of "global performance", our focus is on the entire time path of the models' relative performance, which may contain useful information that is lost when looking for a globally best model. The empirical application provides insights into the time variation in the performance of a representative DSGE model of the European economy relative to that of VARs. Keywords: Model Selection Tests, Misspecification, Structural Change, Kullback-Leibler Information Criterion.
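A stylized sketch of the object being tracked: the local relative KLIC at time t is the expected difference in the two models' log-likelihoods, which can be crudely estimated by a rolling average of pointwise log-likelihood differences. The smoothing scheme and function name here are simplifying assumptions; the paper develops formal estimation and inference for this path.

```python
import numpy as np

def local_relative_klic(loglik_a, loglik_b, window=20):
    """Rolling estimate of the local relative performance of model A vs. B.

    Positive values indicate model A fits better locally. A plain moving
    average of log-likelihood differences, used only to illustrate the idea
    of a time path of relative performance.
    """
    d = np.asarray(loglik_a, float) - np.asarray(loglik_b, float)
    kernel = np.ones(window) / window
    return np.convolve(d, kernel, mode="valid")   # length len(d) - window + 1
```

A global criterion would average this path into a single number; plotting the path instead reveals episodes in which the ranking of the two models reverses.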
Small Sample Confidence Intervals for Multivariate Impulse Response Functions at Long Horizons
Existing methods for constructing confidence bands for multivariate impulse response functions depend on auxiliary assumptions on the order of integration of the variables. Thus, they may have poor coverage at long lead times when variables are highly persistent. Solutions that have been proposed in the literature may be computationally challenging. The goal of this paper is to propose a simple method for constructing confidence bands for impulse response functions that are robust to the presence of highly persistent processes. The method uses alternative approximations based on local-to-unity asymptotic theory and allows the lead time of the impulse response function to be a fixed fraction of the sample size. Monte Carlo simulations show that our method has better coverage properties than existing methods. We also investigate the properties of the various methods in terms of the length of their confidence bands. Finally, we show, with empirical applications, that our method may provide different economic interpretations of the data. Applications to real GDP and to nominal versus real sources of fluctuations in exchange rates are discussed.
Detecting and Predicting Forecast Breakdowns
We propose a theoretical framework for assessing whether a forecast model estimated over one period can provide good forecasts over a subsequent period. We formalize this idea by defining a forecast breakdown as a situation in which the out-of-sample performance of the model, judged by some loss function, is significantly worse than its in-sample performance. Our framework, which is valid under general conditions, can be used not only to detect past forecast breakdowns but also to predict future ones. We show that the main causes of forecast breakdowns are instabilities in the data generating process and relate the properties of our forecast breakdown test to those of existing structural break tests. The main differences are that our test is robust to the presence of unstable regressors and that it has greater power than previous tests to capture systematic forecast errors caused by recurring breaks that are ignored by the forecast model. As a by-product, we show that our results can be applied to forecast rationality tests and provide the appropriate asymptotic variance estimator that corrects the size distortions of previous forecast rationality tests. The empirical application finds evidence of a forecast breakdown in the Phillips curve forecasts of U.S. inflation, and links it to inflation volatility and to changes in the monetary policy reaction function of the Fed.
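The breakdown definition above can be caricatured in a few lines: compare average out-of-sample loss with average in-sample loss and form a t-statistic on the difference. This plain statistic ignores the estimation-error correction that the paper's asymptotic variance provides, so it is only a sketch of the idea, and the function name is hypothetical.

```python
import numpy as np

def forecast_breakdown_stat(in_sample_losses, oos_losses):
    """Naive t-statistic for a forecast breakdown.

    Large positive values signal out-of-sample losses significantly worse
    than the in-sample average. The paper's test replaces the simple
    standard error below with a corrected asymptotic variance.
    """
    d = np.asarray(oos_losses, float) - np.mean(in_sample_losses)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

A stable model yields a statistic near zero; a model whose losses deteriorate after the estimation sample pushes the statistic well above conventional critical values.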