Trained DNN models are increasingly adopted as integral parts of software
systems, but they often perform deficiently in the field. A particularly
damaging problem is that DNN models often give false predictions with high
confidence, due to the unavoidable slight divergences between operation data
and training data. To minimize the loss caused by inaccurate confidence,
operational calibration, i.e., calibrating the confidence function of a DNN
classifier against its operation domain, becomes a necessary debugging step in
the engineering of the whole system.
Operational calibration is difficult given the limited budget for labeling operation data and the weak interpretability of DNN models. We propose
a Bayesian approach to operational calibration that gradually corrects the
confidence given by the model under calibration, using a small amount of labeled
operation data deliberately selected from a larger set of unlabeled operation
data. The approach is made effective and efficient by leveraging the locality
of the learned representation of the DNN model and modeling the calibration as
Gaussian Process Regression. Comprehensive experiments with various practical
datasets and DNN models show that it significantly outperformed alternative
methods, and in some difficult tasks it eliminated about 71% to 97% of
high-confidence (>0.9) errors with only about 10% of the minimal amount of
labeled operation data needed for practical learning techniques to barely work.

Comment: Published in the Proceedings of the 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020).
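
To make the core idea concrete, the sketch below illustrates one way to realize GPR-based confidence calibration under stated assumptions: the gap between observed correctness and reported confidence is treated as a function over the DNN's learned representation space and regressed with a Gaussian Process. This is a hypothetical simplification using scikit-learn, not the authors' implementation; the function name gpr_calibrate, the variable names, and the residual formulation are illustrative stand-ins for the paper's algorithm.

    # Minimal sketch of GPR-based operational calibration (assumed
    # simplification; not the authors' code).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def gpr_calibrate(reps_labeled, conf_labeled, correct_labeled,
                      reps_unlabeled, conf_unlabeled):
        """Correct a DNN's confidence on operation data.

        reps_*    : penultimate-layer representations, shape (n, d)
        conf_*    : the model's original confidence in its prediction, shape (n,)
        correct_* : 1.0 if the prediction was right, else 0.0, shape (n,)
        """
        # Regression target: the residual between observed correctness and
        # the model's stated confidence, i.e. the local miscalibration.
        residual = correct_labeled - conf_labeled

        # The RBF kernel encodes the locality assumption: inputs close in
        # the learned representation space share similar miscalibration.
        # WhiteKernel absorbs the noise of the binary correctness labels.
        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
        gpr.fit(reps_labeled, residual)

        # The posterior mean corrects each confidence; clip back to [0, 1].
        correction, std = gpr.predict(reps_unlabeled, return_std=True)
        calibrated = np.clip(conf_unlabeled + correction, 0.0, 1.0)
        return calibrated, std  # std flags where the estimate is least certain

One plausible reading of the abstract's "deliberately selected" labeling step is also visible here: the posterior standard deviation returned by the GP marks the operation inputs where the calibration is least certain, which makes them natural candidates for the next round of labeling.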