120 research outputs found

    Forecast Comparisons in Unstable Environments

    Get PDF
    We propose new methods for comparing the relative out-of-sample forecasting performance of two competing models in the presence of possible instabilities. The main idea is to develop a measure of the relative "local forecasting performance" for the two models, and to investigate its stability over time by means of statistical tests. We propose two tests (the "Fluctuation test" and the test against a "One-time Reversal") that analyze the evolution of the models' relative performance over historical samples. In contrast to previous approaches to forecast comparison, which are based on measures of "global performance", we focus on the entire time path of the models' relative performance, which may contain useful information that is lost when looking for the model that forecasts best on average. We apply our tests to the analysis of the time variation in the out-of-sample forecasting performance of monetary models of exchange rate determination relative to the random walk.
    Keywords: Predictive Ability Testing, Instability, Structural Change, Forecast Evaluation
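
    As a rough illustration of the "local performance" idea, the sketch below computes a rolling standardized mean of the loss differential between two forecasting methods. It is a minimal sketch, not the paper's implementation: the function name is invented, the plain sample standard deviation stands in for the HAC long-run variance the test requires, and the resulting sequence would still need to be compared with the appropriate critical values.

```python
import numpy as np

def rolling_fluctuation_stat(loss1, loss2, m):
    """Standardized rolling mean of the loss differential d_t = loss1_t - loss2_t.

    Large absolute values in some window suggest that the models' relative
    performance is unstable over time.
    """
    d = np.asarray(loss1, dtype=float) - np.asarray(loss2, dtype=float)
    sigma = d.std(ddof=1)                      # naive; a HAC estimator is preferable
    csum = np.concatenate(([0.0], np.cumsum(d)))
    window_sums = csum[m:] - csum[:-m]         # sum of d over each window of length m
    return window_sums / (sigma * np.sqrt(m))  # one statistic per window position
```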

    Tests of Conditional Predictive Ability

    Get PDF
    We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We propose an alternative framework for out-of-sample comparison of predictive ability which delivers more practically relevant conclusions. Our approach is based on inference about conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature. Compared to previous approaches, our tests are valid under more general data assumptions (heterogeneity rather than stationarity) and estimation methods, and they can handle comparison of both nested and non-nested models, which is not currently possible.
    Keywords: Forecast Evaluation, Asymptotic Inference, Parameter-reduction Methods
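
    One common way to operationalize a conditional predictive ability test for one-step forecasts is a Wald test that instruments dated t do not predict the loss differential at t+1. The sketch below assumes a constant plus lagged loss differentials as instruments; that instrument choice, and the absence of a HAC correction (valid only for one-step-ahead comparisons), are assumptions of this illustration, not details from the paper.

```python
import numpy as np
from scipy import stats

def conditional_pa_wald(d, lags=1):
    """Wald test of E[d_{t+1} | h_t] = 0 with instruments h_t = (1, d_t, ...).

    d : sequence of loss differentials between the two forecasting methods.
    Valid as written only for one-step-ahead forecasts (no HAC correction).
    """
    d = np.asarray(d, dtype=float)
    n_obs = len(d) - lags
    H = np.column_stack(
        [np.ones(n_obs)] +
        [d[lags - 1 - j : len(d) - 1 - j] for j in range(lags)]  # lagged differentials
    )
    Z = H * d[lags:, None]                 # Z_t = h_t * d_{t+1}
    zbar = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False)            # sample covariance of Z
    W = n_obs * zbar @ np.linalg.solve(S, zbar)
    return W, stats.chi2.sf(W, df=Z.shape[1])
```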

    Model Comparisons in Unstable Environments

    Get PDF
    The goal of this paper is to develop formal techniques for analyzing the relative in-sample performance of two competing, misspecified models in the presence of possible data instability. The central idea of our methodology is to propose a measure of the models' local relative performance: the "local Kullback-Leibler Information Criterion" (KLIC), which measures the relative distance of the two models' (misspecified) likelihoods from the true likelihood at a particular point in time. We discuss estimation and inference about the local relative KLIC; in particular, we propose statistical tests to investigate its stability over time. Compared to previous approaches to model selection, which are based on measures of "global performance", our focus is on the entire time path of the models' relative performance, which may contain useful information that is lost when looking for a globally best model. The empirical application provides insights into the time variation in the performance of a representative DSGE model of the European economy relative to that of VARs.
    Keywords: Model Selection Tests, Misspecification, Structural Change, Kullback-Leibler Information Criterion
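
    A deliberately simplified stand-in for estimating the local relative KLIC is a rolling mean of the two models' log-likelihood differences; the paper's estimator and stability tests are more involved, so this is only a sketch of the object being tracked.

```python
import numpy as np

def local_relative_klic(logf, logg, m):
    """Rolling-window mean of the log-likelihood difference of two models.

    logf, logg : per-observation log-likelihoods of the two candidate models.
    A positive local mean favors model f at that point in the sample.
    """
    d = np.asarray(logf, dtype=float) - np.asarray(logg, dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(d)))
    return (csum[m:] - csum[:-m]) / m  # one local estimate per window position
```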

    Detecting and Predicting Forecast Breakdowns

    Get PDF
    We propose a theoretical framework for assessing whether a forecast model estimated over one period can provide good forecasts over a subsequent period. We formalize this idea by defining a forecast breakdown as a situation in which the out-of-sample performance of the model, judged by some loss function, is significantly worse than its in-sample performance. Our framework, which is valid under general conditions, can be used not only to detect past forecast breakdowns but also to predict future ones. We show that the main causes of forecast breakdowns are instabilities in the data generating process and relate the properties of our forecast breakdown test to those of existing structural break tests. The main differences are that our test is robust to the presence of unstable regressors and that it has greater power than previous tests to capture systematic forecast errors caused by recurring breaks that are ignored by the forecast model. As a by-product, we show that our results can be applied to forecast rationality tests and provide the appropriate asymptotic variance estimator that corrects the size distortions of previous forecast rationality tests. The empirical application finds evidence of a forecast breakdown in the Phillips curve forecasts of U.S. inflation, and links it to inflation volatility and to changes in the monetary policy reaction function of the Fed.
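
    The definition suggests a natural sample statistic: the mean "surprise", i.e., out-of-sample loss minus average in-sample loss. The sketch below uses a naive standard deviation where the paper derives the correct asymptotic variance (accounting for parameter estimation error), so it illustrates the idea rather than reproducing the actual test.

```python
import numpy as np

def forecast_breakdown_stat(in_sample_losses, oos_losses):
    """t-type statistic on the mean 'surprise': out-of-sample loss minus
    the average in-sample loss. Large positive values signal a breakdown."""
    surprise = np.asarray(oos_losses, dtype=float) - np.mean(in_sample_losses)
    n = len(surprise)
    return np.sqrt(n) * surprise.mean() / surprise.std(ddof=1)
```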

    Economic theory and forecasting: lessons from the literature

    Full text link
    Does economic theory help in forecasting key macroeconomic variables? This article aims to provide some insight into the question by drawing lessons from the literature. The definition of "economic theory" includes a broad range of examples, such as accounting identities, disaggregation and spatial restrictions when forecasting aggregate variables, cointegration, and forecasting with Dynamic Stochastic General Equilibrium (DSGE) models. We group the lessons into three themes. The first discusses the importance of using the correct econometric tools when answering the question. The second presents examples of theory-based forecasting that have not proven useful, such as theory-driven variable selection and some popular DSGE models. The third set of lessons discusses types of theoretical restrictions that have shown some usefulness in forecasting, such as accounting identities, disaggregation and spatial restrictions, and cointegrating relationships. We conclude by suggesting that economic theory might help in overcoming the widespread instability that affects the forecasting performance of econometric models by guiding the search for stable relationships that could be usefully exploited for forecasting.

    Detecting and predicting forecast breakdowns

    Full text link
    We propose a theoretical framework for assessing whether a forecast model estimated over one period can provide good forecasts over a subsequent period. We formalize this idea by defining a forecast breakdown as a situation in which the out-of-sample performance of the model, judged by some loss function, is significantly worse than its in-sample performance. Our framework, which is valid under general conditions, can be used not only to detect past forecast breakdowns but also to predict future ones. We show that the main causes of forecast breakdowns are instabilities in the data generating process and relate the properties of our forecast breakdown test to those of existing structural break tests. The empirical application finds evidence of a forecast breakdown in the Phillips curve forecasts of U.S. inflation, and links it to inflation volatility and to changes in the monetary policy reaction function of the Fed.

    Mixtures of t-distributions for Finance and Forecasting

    Get PDF
    We explore convenient analytic properties of distributions constructed as mixtures of scaled and shifted t-distributions. A feature that makes this family particularly desirable for econometric applications is that it possesses closed-form expressions for its anti-derivatives (e.g., the cumulative density function). We illustrate the usefulness of these distributions in two applications. In the first application, we use a scaled and shifted t-distribution to produce density forecasts of U.S. inflation and show that these forecasts are more accurate, out-of-sample, than density forecasts obtained using normal or standard t-distributions. In the second application, we replicate the option-pricing exercise of Abadir and Rockinger (2003) using a mixture of scaled and shifted t-distributions and obtain comparably good results, while gaining analytical tractability.
    Keywords: ARMA-GARCH models, neural networks, nonparametric density estimation, forecast accuracy, option pricing, risk neutral density
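
    To see the tractability being exploited, note that a finite mixture's density and distribution function are just weighted sums of the component t densities and CDFs, each available in closed form. A minimal sketch, with the parameterization (weights, locations, scales, degrees of freedom) assumed for illustration:

```python
import numpy as np
from scipy.stats import t as student_t

def mixture_t_pdf(x, weights, locs, scales, dfs):
    """Density of a finite mixture of scaled and shifted t-distributions."""
    return sum(w * student_t.pdf(x, df, loc=m, scale=s)
               for w, m, s, df in zip(weights, locs, scales, dfs))

def mixture_t_cdf(x, weights, locs, scales, dfs):
    """CDF of the same mixture: a weighted sum of closed-form component CDFs."""
    return sum(w * student_t.cdf(x, df, loc=m, scale=s)
               for w, m, s, df in zip(weights, locs, scales, dfs))
```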

    European Central Bank

    Get PDF
    The dynamic behavior of the term structure of interest rates is difficult to replicate with models, and even models with a proven track record of empirical performance have underperformed since the early 2000s. On the other hand, survey expectations are accurate predictors of yields, but only for very short maturities. We argue that this is partly due to the ability of survey participants to incorporate information about the current state of the economy as well as forward-looking information such as that contained in monetary policy announcements. We show how the informational advantage of survey expectations about short yields can be exploited to improve the accuracy of yield curve forecasts given by a base model. We do so by employing a flexible projection method which anchors the model forecasts to the survey expectations in segments of the yield curve where the informational advantage exists and transmits the superior forecasting ability to all remaining yields. The method implicitly incorporates into yield curve forecasts any information that survey participants have access to, without the need to explicitly model it. We document that anchoring delivers large and significant gains in forecast accuracy for the whole yield curve, with improvements of up to 52% over the years 2000-2012 relative to the base model.
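
    One simple way to implement this kind of anchoring, assuming the base model delivers a joint forecast mean and covariance across maturities, is the conditional-mean projection below; whether this matches the paper's exact projection method is an assumption, but it shows how an advantage on surveyed short yields propagates to the rest of the curve.

```python
import numpy as np

def anchor_forecasts(mu, Sigma, idx_s, survey):
    """Anchor model forecasts to survey values on a subset of maturities.

    mu, Sigma : model forecast mean vector and covariance across maturities.
    idx_s     : indices of the surveyed (e.g., short-maturity) yields.
    survey    : survey expectations for those yields.
    Remaining yields are adjusted by mu_r + Sigma_rs Sigma_ss^{-1} (survey - mu_s).
    """
    idx_s = np.asarray(idx_s)
    idx_r = np.setdiff1d(np.arange(len(mu)), idx_s)
    S_ss = Sigma[np.ix_(idx_s, idx_s)]
    S_rs = Sigma[np.ix_(idx_r, idx_s)]
    out = np.array(mu, dtype=float)
    out[idx_s] = survey                                    # impose the survey values
    out[idx_r] += S_rs @ np.linalg.solve(S_ss, survey - np.asarray(mu)[idx_s])
    return out
```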

    Comparing Density Forecasts via Weighted Likelihood Ratio Tests: Asymptotic and Bootstrap Methods

    Full text link
    This paper proposes and analyzes tests that can be used to compare the accuracy of alternative conditional density forecasts of a variable. The tests are also valid in the broader context of model selection based on out-of-sample predictive ability. We restrict attention to the case of density forecasts derived from non-nested parametric models, with known or estimated parameters. The evaluation makes use of scoring rules, which are loss functions defined over the density forecast and the realizations of the variable. In particular, we consider the logarithmic scoring rule, which leads to the development of asymptotic and bootstrap 'weighted likelihood ratio' tests. The name comes from the fact that the tests compare weighted averages of the scores over the available sample, as a way to focus attention on different regions of the distribution of the variable. For a uniform weight function, the asymptotic test can be interpreted as an extension of Vuong (1989)'s likelihood ratio test for non-nested hypotheses to time series data and to an out-of-sample testing framework. A Monte Carlo simulation explores the size and power properties of this last test in finite samples. An application using S&P500 daily returns shows how the tests can be used to compare the performance of density forecasts obtained from GARCH models with different distributional assumptions.
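
    For one-step-ahead density forecasts, the statistic can be sketched as a t-test on the weighted mean of the log-score differences; uniform weights recover the unweighted comparison. The plain standard deviation below is an assumption of this sketch and should be replaced by a HAC long-run variance for multi-step forecasts.

```python
import numpy as np
from scipy import stats

def weighted_lr_test(logf, logg, weights=None):
    """Weighted likelihood ratio test via the logarithmic scoring rule.

    logf, logg : log predictive densities of the two models at the realizations.
    weights    : optional weight w(y_t) emphasizing a region of the distribution.
    """
    d = np.asarray(logf, dtype=float) - np.asarray(logg, dtype=float)
    if weights is not None:
        d = np.asarray(weights, dtype=float) * d
    n = len(d)
    t_stat = np.sqrt(n) * d.mean() / d.std(ddof=1)   # naive variance; use HAC for h > 1
    return t_stat, 2 * stats.norm.sf(abs(t_stat))
```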

    Models, inattention and expectation updates

    Get PDF
    We formulate a theory of expectation updating that fits the dynamics of accuracy and disagreement in a new survey dataset where agents can update at any time while observing each other’s expectations. Agents use heterogeneous models and can be inattentive but, when updating, they follow Bayes’ rule and assign homogeneous weights to public information. Our empirical findings suggest that agents do not herd and, despite disagreement, they place high faith in their models, whereas during a crisis they lose this faith and undergo a paradigm shift. This simple, “micro-founded” theory could enhance the explanatory power of macroeconomic and finance models.
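
    As a purely illustrative aside (not the paper's model), the homogeneous weighting of public information can be pictured as a textbook precision-weighted Bayes update of normal beliefs, where a common signal precision plays the role of the homogeneous weight:

```python
def bayes_update(prior_mean, prior_prec, signal, signal_prec):
    """Normal-normal Bayes update: combine a model-based prior with a public
    signal, weighting each by its precision. Illustrative sketch only."""
    post_prec = prior_prec + signal_prec
    post_mean = (prior_prec * prior_mean + signal_prec * signal) / post_prec
    return post_mean, post_prec
```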