In order to trust the predictions of a machine learning algorithm, it is
necessary to understand the factors that contribute to those predictions. For
probabilistic and uncertainty-aware models, this means understanding not only
the reasons for the predictions themselves, but also the model's level of
confidence in those predictions. In this paper, we show how
existing methods in explainability can be extended to uncertainty-aware models
and how such extensions can be used to understand the sources of uncertainty in
a model's predictive distribution. In particular, by adapting permutation
feature importance, partial dependence plots, and individual conditional
expectation plots, we demonstrate that these methods yield novel insights into
model behaviour and can be used to measure the impact of
features on both the entropy of the predictive distribution and the
log-likelihood of the ground truth labels under that distribution. With
experiments using both synthetic and real-world data, we demonstrate the
utility of these approaches in understanding both the sources of uncertainty
and their impact on model performance.
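
As a minimal sketch of the idea behind one of these adaptations (not the paper's implementation), the snippet below shows how permutation feature importance could be scored against the entropy of the predictive distribution and the log-likelihood of the ground-truth labels rather than against accuracy. The function and metric names are illustrative, and it assumes a scikit-learn-style classifier exposing `predict_proba`.

```python
import numpy as np

def mean_entropy(probs):
    # Average entropy of the predictive distribution over the dataset.
    return float(np.mean(-np.sum(probs * np.log(probs + 1e-12), axis=1)))

def mean_log_likelihood(probs, y):
    # Average log-likelihood of the ground-truth labels under the predictive distribution.
    return float(np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12)))

def uncertainty_permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Permutation importance measured on predictive entropy and log-likelihood.

    For each feature, its values are shuffled across rows and the change in the
    two uncertainty-related metrics, relative to the unpermuted baseline, is
    averaged over n_repeats shuffles. (Illustrative sketch, not the paper's code.)
    """
    rng = np.random.default_rng(seed)
    base_probs = model.predict_proba(X)
    base_entropy = mean_entropy(base_probs)
    base_ll = mean_log_likelihood(base_probs, y)

    results = {}
    for j in range(X.shape[1]):
        d_entropy, d_ll = [], []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            probs = model.predict_proba(X_perm)
            d_entropy.append(mean_entropy(probs) - base_entropy)
            d_ll.append(mean_log_likelihood(probs, y) - base_ll)
        results[j] = {"delta_entropy": float(np.mean(d_entropy)),
                      "delta_log_likelihood": float(np.mean(d_ll))}
    return results
```

A feature whose permutation sharply increases `delta_entropy` or decreases `delta_log_likelihood` is, under this sketch, one that the model relies on to stay confident and well-calibrated, which is the kind of question the adapted explainability methods are intended to answer.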