Quantifying Epistemic Uncertainty in Deep Learning

Abstract

Uncertainty quantification is at the core of the reliability and robustness of machine learning. In this paper, we provide a theoretical framework to dissect uncertainty in deep learning, especially its epistemic component, into procedural variability (arising from the training procedure) and data variability (arising from the training data), which is, to the best of our knowledge, the first such attempt in the literature. We then propose two approaches to estimate these uncertainties, one based on the influence function and one on batching. We demonstrate how our approaches overcome the computational difficulties of applying classical statistical methods. Experimental evaluations on multiple problem settings corroborate our theory and illustrate how our framework and estimators can provide direct guidance on modeling and data collection efforts to improve deep learning performance.
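To make the decomposition concrete, below is a minimal sketch (not the paper's exact estimators) of how the two variability sources can be probed empirically on a toy regression task: procedural variability by retraining on the same data under different random seeds, and data variability via a simple batching scheme that trains one model per disjoint data batch. The use of sklearn's MLPRegressor, the batch sizes, and names like procedural_variance are illustrative assumptions, not from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(600, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, size=600)
x_test = np.array([[0.5]])  # point at which we assess predictive uncertainty

def train_predict(X_train, y_train, seed):
    """Train a small network and return its prediction at x_test."""
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    model.fit(X_train, y_train)
    return model.predict(x_test)[0]

# Procedural variability: fix the training data, vary only the training
# randomness (initialization, optimization) through the seed.
proc_preds = [train_predict(X, y, seed) for seed in range(10)]
procedural_variance = np.var(proc_preds, ddof=1)

# Data variability via batching: split the data into disjoint batches,
# train one model per batch with a fixed seed, and use the spread of the
# batch predictions; dividing by the number of batches gives the usual
# batching-style variance estimate for the full-data predictor.
n_batches = 5
batch_size = len(X) // n_batches
batch_preds = []
for b in range(n_batches):
    idx = slice(b * batch_size, (b + 1) * batch_size)
    batch_preds.append(train_predict(X[idx], y[idx], seed=0))
data_variance = np.var(batch_preds, ddof=1) / n_batches

print(f"procedural variance: {procedural_variance:.4f}")
print(f"data variance (batching): {data_variance:.4f}")
```

Comparing the two quantities indicates where effort pays off: a dominant procedural term suggests stabilizing training (e.g., ensembling over seeds), while a dominant data term suggests collecting more data.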
