
Forecast evaluation of small nested model sets

Abstract

We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require re-estimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that examines the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation.

JEL Classification: C32, C53, E37

Keywords: inflation forecasting, multiple model comparisons, out-of-sample prediction, testing
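To make the two proposed statistics concrete, the following Python sketch computes the Clark-West adjusted loss differentials for each alternative model and then forms the maximum t-statistic and the chi-squared statistic. It is a minimal illustration, not the paper's implementation: function names are hypothetical, one-step-ahead forecasts are assumed (so a simple sample variance stands in for the HAC estimator longer horizons would require), and the critical values for the max-t statistic, which must be obtained by simulation or bootstrap, are omitted.

    import numpy as np

    def clark_west_adjusted(y, f_bench, f_alt):
        """Clark-West (2007) adjusted loss differential for one alternative.

        y       : realized values, shape (P,)
        f_bench : benchmark (nested) model forecasts, shape (P,)
        f_alt   : alternative (nesting) model forecasts, shape (P,)
        Returns the adjusted MSPE difference series, shape (P,).
        """
        e_bench = y - f_bench
        e_alt = y - f_alt
        # The (f_bench - f_alt)^2 term removes the noise the larger model
        # introduces by estimating parameters that are zero under the null.
        return e_bench**2 - (e_alt**2 - (f_bench - f_alt)**2)

    def max_t_and_chi2(y, f_bench, f_alts):
        """Max-t and chi-squared statistics for a small set of nested alternatives.

        f_alts : list of forecast arrays, one per alternative model (at least two).
        Returns (max_t, chi2). Uses a simple (non-HAC) variance, which is
        appropriate only for one-step-ahead forecasts.
        """
        F = np.column_stack([clark_west_adjusted(y, f_bench, f) for f in f_alts])
        P, m = F.shape
        fbar = F.mean(axis=0)
        # Per-model t-statistics on the adjusted differentials; take the maximum.
        t_stats = fbar / (F.std(axis=0, ddof=1) / np.sqrt(P))
        # Wald-type statistic, chi-squared with m degrees of freedom under the null.
        V = np.cov(F, rowvar=False)
        chi2 = P * fbar @ np.linalg.solve(V, fbar)
        return t_stats.max(), chi2

Because the alternative models nest the benchmark, the max-t statistic is compared against critical values simulated from the joint null distribution of the t-statistics rather than against a standard normal quantile; the sketch above only produces the test statistics themselves.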
