
Model Comparisons in Unstable Environments

Abstract

The goal of this paper is to develop formal techniques for analyzing the relative in-sample performance of two competing, misspecified models in the presence of possible data instability. The central idea of our methodology is to propose a measure of the models' local relative performance: the "local Kullback-Leibler Information Criterion" (KLIC), which measures the relative distance of the two models' (misspecified) likelihoods from the true likelihood at a particular point in time. We discuss estimation and inference about the local relative KLIC; in particular, we propose statistical tests to investigate its stability over time. Compared to previous approaches to model selection, which are based on measures of "global performance", our focus is on the entire time path of the models' relative performance, which may contain useful information that is lost when looking for a globally best model. The empirical application provides insights into the time variation in the performance of a representative DSGE model of the European economy relative to that of VARs.

Keywords: Model Selection Tests, Misspecification, Structural Change, Kullback-Leibler Information Criterion
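To fix ideas, a minimal sketch of the object in question, with notation assumed here rather than taken from the paper (f and g denote the two competing models' likelihoods, and theta* and gamma* their pseudo-true parameters): the local relative KLIC at time t can be written as

    \Delta\mathrm{KLIC}_t \;=\; \mathbb{E}\big[\, \ln f(y_t \mid x_t;\, \theta^*) \;-\; \ln g(y_t \mid x_t;\, \gamma^*) \,\big],

so that a positive value indicates the first model is locally closer to the true likelihood at time t. Under these assumptions, a natural estimator would average the sample log-likelihood differences over a rolling window around t, and stability tests would ask whether this path is constant over the sample.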
