
Predictive density accuracy tests

Abstract

This paper outlines a testing procedure for assessing the relative out-of-sample predictive accuracy of multiple conditional distribution models, and surveys existing related methods in the area of predictive density evaluation, including methods based on the probability integral transform and the Kullback-Leibler Information Criterion. The procedure is closely related to Andrews' (1997) conditional Kolmogorov test and to White's (2000) reality check approach, and involves comparing squared (approximation) errors associated with models $i$, $i = 1, \ldots, n$, by constructing weighted averages over $U$ of $E\left[\left(F_i(u|Z^t, \theta_i^\dagger) - F_0(u|Z^t, \theta_0)\right)^2\right]$, where $F_0(\cdot|\cdot)$ and $F_i(\cdot|\cdot)$ are the true and approximate distributions, $u \in U$, and $U$ is a possibly unbounded set on the real line. Appropriate bootstrap procedures for obtaining critical values for tests constructed using this measure of loss, in conjunction with predictions obtained via rolling and recursive estimation schemes, are developed. We then apply these bootstrap procedures to obtain critical values for our predictive accuracy test. A Monte Carlo experiment compares our bootstrap methods with methods that do not include location bias adjustment terms; the results indicate improved coverage when the proposed bootstrap procedures are used. Finally, an empirical example comparing alternative predictive densities for U.S. inflation is given.
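To make the loss comparison concrete, the following is a minimal sketch, not the authors' implementation, of how such a squared-error comparison and a naive bootstrap for critical values might be coded. Because $F_0$ is unknown, a standard device in this literature is to replace it with the indicator $1\{y_{t+1} \le u\}$: the resulting expected squared error differs from $E[(F_i - F_0)^2]$ by a term common to all models, so loss *differentials* across models identify relative accuracy. All names here (`loss_differential`, `u_grid`, `weights`, the block-bootstrap parameters) are illustrative assumptions, and the simple moving-block bootstrap below omits the location bias adjustment terms that the paper's procedures include.

```python
# Hypothetical sketch of a squared-error predictive density comparison with
# a naive block bootstrap for critical values. Assumes the user supplies
# out-of-sample realizations and model CDF evaluations from a rolling or
# recursive estimation scheme.
import numpy as np

def loss_differential(y, cdf_bench, cdf_alt, u_grid, weights):
    """Weighted average over u of the per-period squared-error loss
    difference between a benchmark CDF and an alternative CDF.

    y         : (P,) realized out-of-sample values y_{t+1}
    cdf_bench : (P, K) benchmark F_1(u | Z^t, theta_hat) on the u grid
    cdf_alt   : (P, K) alternative F_i(u | Z^t, theta_hat) on the u grid
    """
    ind = (y[:, None] <= u_grid[None, :]).astype(float)  # 1{y_{t+1} <= u}
    d = (ind - cdf_bench) ** 2 - (ind - cdf_alt) ** 2    # per-t, per-u difference
    return d @ weights                                   # (P,) averaged over U

def test_statistic(y, cdf_bench, cdf_alts, u_grid, weights):
    """Max over alternative models of sqrt(P) times the mean loss differential."""
    P = len(y)
    return max(np.sqrt(P) * loss_differential(y, cdf_bench, c, u_grid, weights).mean()
               for c in cdf_alts)

def block_bootstrap_critical_value(y, cdf_bench, cdf_alts, u_grid, weights,
                                   block_len=10, n_boot=499, alpha=0.10,
                                   seed=None):
    """Moving-block bootstrap of the recentred max statistic.

    Resamples blocks of the per-period loss differentials and recentres at
    the sample mean; it does NOT include the location bias adjustment for
    parameter estimation error developed in the paper.
    """
    rng = np.random.default_rng(seed)
    P = len(y)
    diffs = np.column_stack([loss_differential(y, cdf_bench, c, u_grid, weights)
                             for c in cdf_alts])         # (P, n_alts)
    means = diffs.mean(axis=0)
    n_blocks = int(np.ceil(P / block_len))
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, P - block_len + 1, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)).ravel()[:P]
        boot_stats[b] = np.sqrt(P) * np.max(diffs[idx].mean(axis=0) - means)
    return np.quantile(boot_stats, 1.0 - alpha)
```

Under this sketch, one would reject equal predictive accuracy in favor of some alternative model when `test_statistic` exceeds the bootstrap quantile returned by `block_bootstrap_critical_value`.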
