
Computing with confidence: a Bayesian approach

Abstract

Bayes’ rule is introduced as a coherent strategy for multiple recomputations of classifier system output, and thus as a basis for assessing the uncertainty associated with a particular system result, i.e. a basis for confidence in the accuracy of each computed result. We use a Markov chain Monte Carlo method for efficient selection of recomputations to approximate the computationally intractable elements of the Bayesian approach. The estimate of the confidence to be placed in any classification result provides a sound basis for rejecting some classification results. We present uncertainty envelopes as one way to derive these confidence estimates from the population of recomputed results. We show that a coarse SURE or UNSURE confidence rating, based on a threshold on the proportion of agreeing classifications, works well: it not only pinpoints reliable results but also indicates input-data problems, such as corrupted or incomplete data, or the application of an inadequate classifier model.
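The coarse SURE/UNSURE rating described above can be sketched as follows. This is a minimal illustration under assumed details: the function name, the 0.9 agreement threshold, and the example labels are hypothetical, and the recomputed labels are taken as given (in the paper they would come from classifiers whose parameters are drawn via MCMC from the posterior).

```python
from collections import Counter

def confidence_rating(labels, threshold=0.9):
    """Coarse SURE/UNSURE rating for one input from its recomputed labels.

    labels: class labels produced by repeated recomputations of the
            classifier output (e.g. under MCMC-sampled parameters).
    threshold: assumed minimum fraction of agreeing labels for SURE.
    """
    counts = Counter(labels)
    modal_label, n_agree = counts.most_common(1)[0]
    agreement = n_agree / len(labels)
    rating = "SURE" if agreement >= threshold else "UNSURE"
    return modal_label, agreement, rating

# 20 recomputations with strong agreement -> SURE
print(confidence_rating(["cat"] * 18 + ["dog"] * 2))
# 20 recomputations with weak agreement -> UNSURE (candidate for rejection)
print(confidence_rating(["cat"] * 11 + ["dog"] * 9))
```

An UNSURE rating flags a result for rejection; as the abstract notes, a cluster of UNSURE ratings can also signal corrupted or incomplete input data or an inadequate classifier model.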
