Researchers in explainable artificial intelligence have developed numerous
methods for helping users understand the predictions of complex supervised
learning models. By contrast, explaining the uncertainty of model
outputs has received relatively little attention. We adapt the popular Shapley
value framework to explain various types of predictive uncertainty, quantifying
each feature's contribution to the conditional entropy of individual model
outputs. We consider games with modified characteristic functions and find deep
connections between the resulting Shapley values and fundamental quantities
from information theory and conditional independence testing. We outline
inference procedures for finite-sample error rate control with provable
guarantees, and implement an efficient algorithm that performs well in a range
of experiments on real and simulated data. Our method has applications to
covariate shift detection, active learning, feature selection, and active
feature-value acquisition.
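For concreteness, one plausible form of the resulting attribution is the classical Shapley value applied to an entropy-based game; this is a sketch, and the specific characteristic function $v$ below is an assumption, since the abstract describes only "each feature's contribution to the conditional entropy" without fixing its exact form:
\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}\,\Bigl[\,v\bigl(S \cup \{i\}\bigr) - v(S)\,\Bigr],
\qquad
v(S) \;=\; H\bigl(Y \mid X_S = x_S\bigr),
\]
where $N$ is the full feature set and $H(Y \mid X_S = x_S)$ denotes the conditional entropy of the model output given the observed values of the features in coalition $S$.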