Subjective Bayesian testing using calibrated prior probabilities

Abstract

This article proposes a calibration scheme for Bayesian testing that coordinates analytically derived statistical performance considerations with expert opinion. In other words, the scheme provides an effective and meaningful way to incorporate objective elements into subjective Bayesian inference. It explores a novel role for default priors as anchors for calibration rather than as substitutes for prior knowledge. Ideas are developed for use with multiplicity adjustments in multiple-model contexts and to address the prior sensitivity of Bayes factors. Along the way, the performance properties of an existing multiplicity adjustment related to the Poisson distribution are clarified theoretically. Connections of the overall calibration scheme to the Schwarz criterion are also explored. The proposed framework is examined and illustrated on a number of existing data sets related to problems in clinical trials, forensic pattern matching, and log-linear models methodology.

This is a manuscript of an article published as Spitzner, Dan J. "Subjective Bayesian testing using calibrated prior probabilities." Brazilian Journal of Probability and Statistics 33, no. 4 (2019): 861-893. Posted with permission of CSAFE.
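As general background for the Schwarz-criterion connection noted in the abstract (this is standard theory, not the article's specific calibration scheme), the Schwarz (1978) approximation relates a Bayes factor between two models $M_0$ and $M_1$ to a BIC difference:

$$\log B_{10} \;\approx\; \log\frac{L_1(\hat\theta_1)}{L_0(\hat\theta_0)} \;-\; \frac{d_1 - d_0}{2}\log n, \qquad \text{equivalently} \qquad -2\log B_{10} \;\approx\; \mathrm{BIC}_1 - \mathrm{BIC}_0,$$

where $L_k(\hat\theta_k)$ is the maximized likelihood under model $M_k$, $d_k$ its parameter dimension, $n$ the sample size, and $\mathrm{BIC}_k = -2\log L_k(\hat\theta_k) + d_k \log n$.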
