    Introducing doubt in Bayesian model comparison

    There are things we know, things we know we don't know, and then there are things we don't know we don't know. In this paper we address the latter two issues in a Bayesian framework, introducing the notion of doubt to quantify the degree of (dis)belief in a model given observational data in the absence of explicit alternative models. We demonstrate how a properly calibrated doubt can lead to model discovery when the true model is unknown.
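    The idea of quantifying (dis)belief in a model without explicit alternatives can be sketched in a few lines. The following Python snippet is an illustrative toy, not the paper's method: it treats "doubt" as the posterior probability of an unspecified alternative model X alongside the known models, where the prior doubt `p(X)` and the stand-in evidence `p(d|X)` are hypothetical inputs (the paper's point is that such quantities require careful calibration).

    ```python
    def posterior_doubt(known_evidences, known_priors, prior_doubt, unknown_evidence):
        """Posterior probability of an unspecified alternative model X.

        known_evidences : Bayesian evidences p(d|M_i) of the known models
        known_priors    : model priors p(M_i); with prior_doubt they sum to 1
        prior_doubt     : p(X), the prior degree of doubt
        unknown_evidence: p(d|X), an assumed stand-in evidence for X
        """
        # Weighted evidence of the known models
        known = sum(p * e for p, e in zip(known_priors, known_evidences))
        # Contribution of the unspecified alternative
        unknown = prior_doubt * unknown_evidence
        # Bayes' theorem over the full model set {M_1, ..., M_n, X}
        return unknown / (unknown + known)

    # If the unspecified alternative would explain the data roughly ten times
    # better than the best known model, even a 1% prior doubt grows noticeably:
    d = posterior_doubt([0.1, 0.05], [0.495, 0.495], 0.01, 1.0)
    # d is about 0.12, i.e. doubt grows from 1% to roughly 12%
    ```

    The numbers here are invented for illustration; the point of the sketch is only the mechanism by which poorly fitting known models inflate the posterior doubt.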