
    Validating Predictions of Unobserved Quantities

    The ultimate purpose of most computational models is to make predictions, commonly in support of some decision-making process (e.g., for design or operation of some system). The quantities that need to be predicted (the quantities of interest or QoIs) are generally not experimentally observable before the prediction, since otherwise no prediction would be needed. Assessing the validity of such extrapolative predictions, which is critical to informed decision-making, is challenging. In classical approaches to validation, model outputs for observed quantities are compared to observations to determine if they are consistent. By itself, this consistency only ensures that the model can predict the observed quantities under the conditions of the observations. This limitation dramatically reduces the utility of the validation effort for decision-making because it implies nothing about predictions of unobserved QoIs or for scenarios outside the range of observations. However, there is no agreement in the scientific community today regarding best practices for validation of extrapolative predictions made using computational models. The purpose of this paper is to propose and explore a validation and predictive assessment process that supports extrapolative predictions for models with known sources of error. The process includes stochastic modeling, calibration, validation, and predictive assessment phases in which representations of known sources of uncertainty and error are built, informed, and tested. The proposed methodology is applied to an illustrative extrapolation problem involving a misspecified nonlinear oscillator.
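    The failure mode the abstract describes can be illustrated with a minimal sketch (the model forms, parameter values, and QoI below are illustrative assumptions, not the paper's actual setup): a misspecified linear-stiffness model is calibrated against data generated by a "true" nonlinear oscillator at low forcing amplitude, fits those observations well, and is then used to extrapolate a QoI at a higher amplitude where the neglected nonlinearity matters.

    ```python
    # Hypothetical sketch: calibrate a misspecified linear model to low-amplitude
    # observations from a "true" Duffing-like oscillator, then extrapolate an
    # unobserved QoI (peak displacement at 5x the forcing amplitude).
    import numpy as np

    def simulate(k, k3, amp, n=2000, dt=0.01):
        """Explicit-Euler integration of x'' + x' + k*x + k3*x**3 = amp*cos(t)."""
        x, v = 0.0, 0.0
        xs = np.empty(n)
        for i in range(n):
            a = amp * np.cos(i * dt) - v - k * x - k3 * x ** 3  # acceleration
            x += dt * v
            v += dt * a
            xs[i] = x
        return xs

    rng = np.random.default_rng(0)
    # "Truth" is nonlinear (k3 > 0); observations are taken at low amplitude,
    # where the cubic term contributes little, with small measurement noise.
    obs = simulate(k=1.0, k3=0.5, amp=0.2) + rng.normal(0.0, 0.01, 2000)

    # Calibration phase: fit the misspecified model's stiffness k (k3 fixed at 0)
    # by grid search on mean-squared misfit to the observations.
    ks = np.linspace(0.5, 2.0, 151)
    misfits = [np.mean((simulate(k, 0.0, 0.2) - obs) ** 2) for k in ks]
    k_cal = ks[int(np.argmin(misfits))]

    # Predictive assessment: extrapolate the QoI outside the observed regime.
    qoi_true = simulate(1.0, 0.5, 1.0).max()    # what nature would do
    qoi_pred = simulate(k_cal, 0.0, 1.0).max()  # what the calibrated model says
    print(f"calibrated k = {k_cal:.2f}, extrapolated QoI error = "
          f"{abs(qoi_pred - qoi_true):.3f}")
    ```

    The calibrated model matches the observed data closely, which is exactly the consistency that classical validation checks, yet it carries no information about the cubic term that dominates at the extrapolation condition.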

    The limits to stock return predictability

    We examine predictive return regressions from a new angle. We ask what observable univariate properties of returns tell us about the "predictive space" that defines the true predictive model: the triplet (λ, R²ₓ, ρ), where λ is the predictor's persistence, R²ₓ is the predictive R-squared, and ρ is the "Stambaugh correlation" (between innovations in the predictive system). When returns are nearly white noise and the variance ratio slopes downward, the predictive space can be tightly constrained. Data on real annual US stock returns suggest limited scope for even the best possible predictive regression to out-predict the univariate representation, particularly over long horizons.
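    The predictive system behind the triplet can be sketched as the standard two-equation setup: r_{t+1} = β·x_t + u_{t+1} and x_{t+1} = λ·x_t + v_{t+1}, with ρ = corr(u, v). The simulation below (parameter values are illustrative assumptions, not the paper's estimates) shows how a persistent predictor with a strongly negative Stambaugh correlation can produce returns that are nearly white noise with a downward-sloping variance ratio, even though a small predictive R-squared is present.

    ```python
    # Hedged sketch of the standard predictive system:
    #   r_{t+1} = beta * x_t + u_{t+1}   (return equation)
    #   x_{t+1} = lam  * x_t + v_{t+1}   (AR(1) predictor)
    # with corr(u, v) = rho, the "Stambaugh correlation".
    import numpy as np

    rng = np.random.default_rng(1)
    T, lam, rho, beta = 100_000, 0.95, -0.9, 0.05  # illustrative values

    # Correlated innovations (u, v): unit variances, correlation rho.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    u, v = rng.multivariate_normal([0.0, 0.0], cov, size=T).T

    r = np.zeros(T)
    x = np.zeros(T)
    for t in range(T - 1):
        r[t + 1] = beta * x[t] + u[t + 1]
        x[t + 1] = lam * x[t] + v[t + 1]

    # Predictive R-squared: share of return variance explained by x.
    r2 = 1.0 - np.var(r[1:] - beta * x[:-1]) / np.var(r[1:])

    def variance_ratio(r, q):
        """VR(q) = Var(q-period overlapping sums) / (q * Var(one-period))."""
        sums = np.convolve(r, np.ones(q), mode="valid")
        return np.var(sums) / (q * np.var(r))

    print(f"R2_x = {r2:.3f}, VR(20) = {variance_ratio(r, 20):.2f}")
    ```

    A negative ρ makes the predictable component and the return innovations offset each other at longer horizons, which is what drives the variance ratio below one while leaving one-period returns close to serially uncorrelated.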