The application of artificial intelligence (AI) models in fields such as
engineering is limited by the known difficulty of quantifying the reliability
of an AI's decision. A well-calibrated AI model must correctly report its
accuracy on in-distribution (ID) inputs, while also enabling the detection of
out-of-distribution (OOD) inputs. A conventional approach to improving
calibration is the application of Bayesian ensembling. However, owing to
computational limitations and model misspecification, practical ensembling
strategies do not necessarily enhance calibration. This paper proposes an
extension of variational inference (VI)-based Bayesian learning that integrates
calibration regularization for improved ID performance, confidence minimization
for OOD detection, and selective calibration to ensure a synergistic use of
calibration regularization and confidence minimization. The scheme is
constructed successively by first introducing calibration-regularized Bayesian
learning (CBNN), then incorporating out-of-distribution confidence minimization
(OCM) to yield CBNN-OCM, and finally also integrating selective calibration to
produce selective CBNN-OCM (SCBNN-OCM). Selective calibration rejects inputs
for which the calibration performance is expected to be insufficient. Numerical
results illustrate the trade-offs between ID accuracy, ID calibration, and OOD
calibration attained by both frequentist and Bayesian learning methods. Among
the main conclusions, SCBNN-OCM is seen to achieve the best ID and OOD
performance compared to existing state-of-the-art approaches, at the cost of
rejecting a
sufficiently large number of inputs.
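
The overall construction can be illustrated with a short sketch. The code
below is not the authors' implementation: it uses Monte Carlo dropout as a
crude stand-in for full variational inference, a simple confidence-versus-
accuracy penalty as a stand-in for the paper's calibration regularizer, a
KL-to-uniform term for confidence minimization on OOD inputs, and a plain
confidence threshold in place of the learned selective-calibration mechanism.
All names (loss_cbnn_ocm, lam_cal, lam_ocm, conf_threshold) are hypothetical.

    # Minimal sketch, assuming a PyTorch implementation; illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DropoutMLP(nn.Module):
        """Small classifier; dropout is kept active at prediction time so
        that repeated forward passes mimic sampling from a variational
        posterior (an assumption, not the paper's exact VI scheme)."""
        def __init__(self, d_in=20, d_hidden=64, n_classes=10, p=0.2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p),
                nn.Linear(d_hidden, n_classes))

        def forward(self, x):
            return self.net(x)

    def mc_probs(model, x, n_samples=8):
        """Monte Carlo predictive distribution: average the softmax outputs
        over several stochastic forward passes (ensembling over dropout
        masks)."""
        model.train()  # keep dropout enabled
        return torch.stack([F.softmax(model(x), dim=-1)
                            for _ in range(n_samples)]).mean(0)

    def loss_cbnn_ocm(model, x_id, y_id, x_ood, lam_cal=1.0, lam_ocm=0.5):
        """CBNN-OCM-style objective (hypothetical form): ID log-loss plus a
        calibration penalty plus an OOD confidence-minimization term."""
        probs = mc_probs(model, x_id)
        nll = F.nll_loss(torch.log(probs + 1e-12), y_id)
        # Crude differentiable calibration surrogate: push the batch mean
        # confidence toward the batch accuracy (a stand-in for the paper's
        # calibration regularizer, which is not reproduced here).
        conf, pred = probs.max(dim=-1)
        acc = (pred == y_id).float().mean()
        cal_penalty = (conf.mean() - acc.detach()) ** 2
        # OCM term: drive predictions on OOD inputs toward the uniform
        # distribution, i.e., minimize confidence away from the ID support.
        probs_ood = mc_probs(model, x_ood)
        uniform = torch.full_like(probs_ood, 1.0 / probs_ood.shape[-1])
        ocm = F.kl_div(torch.log(probs_ood + 1e-12), uniform,
                       reduction="batchmean")
        return nll + lam_cal * cal_penalty + lam_ocm * ocm

    def selective_predict(model, x, conf_threshold=0.7):
        """Selective step (simplified): abstain on inputs whose predictive
        confidence falls below a threshold, rather than using the paper's
        learned selective-calibration rule."""
        probs = mc_probs(model, x)
        conf, pred = probs.max(dim=-1)
        accept = conf >= conf_threshold
        return pred, accept

In this sketch, lam_cal and lam_ocm control the balance between ID
calibration and OOD confidence minimization, while conf_threshold controls
how many inputs are rejected, mirroring the accuracy-calibration-rejection
trade-offs discussed in the numerical results.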