This paper outlines a testing procedure for assessing the relative out-of-sample predictive accuracy of multiple conditional distribution models, and surveys existing related methods in the area of predictive density evaluation, including methods based on the probability integral transform and the Kullback-Leibler Information Criterion. The procedure is closely related to Andrews' (1997) conditional Kolmogorov test and to White's (2000) reality check approach, and involves comparing squared (approximation) errors associated with models i, i = 1, ..., n, by constructing weighted averages over U of E[(F_i(u|Z^t, θ_i†) − F_0(u|Z^t, θ_0))²], where F_0(·|·) and F_i(·|·) are the true and approximate distributions, u ∈ U, and U is a possibly unbounded set on the real line. Appropriate bootstrap procedures for obtaining critical values for tests constructed using this measure of loss in conjunction with predictions obtained via rolling and recursive estimation schemes are developed. We then apply these bootstrap procedures to the case of obtaining critical values for our predictive accuracy test. A Monte Carlo experiment comparing our bootstrap methods with methods that do not include location bias adjustment terms is provided, and results indicate coverage improvement when our proposed bootstrap procedures are used. Finally, an empirical example comparing alternative predictive densities for U.S. inflation is given.
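The loss measure above can be sketched numerically. The following is a minimal, hedged illustration only: it suppresses the conditioning on Z^t, takes the "true" distribution F_0 to be standard normal, and uses two hypothetical candidate models with made-up pseudo-true parameters and a uniform weight over a bounded grid standing in for U; none of these choices come from the paper itself.

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    # standard-normal-family CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# U approximated by a bounded grid of evaluation points u
u_grid = [-3.0 + 0.05 * k for k in range(121)]

def squared_cdf_loss(mu_i, sigma_i, mu0=0.0, sigma0=1.0):
    # weighted average over U of (F_i(u) - F_0(u))^2, uniform weights
    diffs = [(norm_cdf(u, mu_i, sigma_i) - norm_cdf(u, mu0, sigma0)) ** 2
             for u in u_grid]
    return sum(diffs) / len(diffs)

# hypothetical candidates: model 1 misstates the mean, model 2 the variance
loss_1 = squared_cdf_loss(0.1, 1.0)
loss_2 = squared_cdf_loss(0.0, 1.5)
# the model with the smaller weighted squared CDF distance is preferred
```

In the paper's setting the analogous quantities are estimated from out-of-sample predictions under rolling or recursive schemes, and inference on the loss differentials is conducted via the proposed bootstrap; this sketch only illustrates the population loss being compared.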