Intelligence relies on an agent's knowledge of what it does not know. This
capability can be assessed based on the quality of joint predictions of labels
across multiple inputs. Conventional neural networks lack this capability and,
since most research has focused on marginal predictions, this shortcoming has
been largely overlooked. We introduce the epistemic neural network (ENN) as an
interface for models that represent uncertainty as required to generate useful
joint predictions. While prior approaches to uncertainty modeling such as
Bayesian neural networks can be expressed as ENNs, this new interface
facilitates comparison of joint predictions and the design of novel
architectures and algorithms. In particular, we introduce the epinet: an
architecture that can supplement any conventional neural network, including
large pretrained models, and can be trained with modest incremental computation
to estimate uncertainty. With an epinet, conventional neural networks
outperform very large ensembles, consisting of hundreds or more particles, with
orders of magnitude less computation. We demonstrate this efficacy across
synthetic data, ImageNet, and some reinforcement learning tasks. As part of
this effort, we open-source experiment code.
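To make the ENN interface concrete, here is a minimal numpy sketch. It is an illustration under assumed details, not the paper's implementation: all dimensions, parameter names (`W1`, `V1`, etc.), and the choice of a one-hidden-layer base network with an additive epinet head are hypothetical. The key idea it shows is the interface itself: predictions are a function of both the input `x` and an epistemic index `z`, so sampling several indices yields joint predictions whose variation expresses uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D_IN, D_HID, D_OUT, D_INDEX = 4, 16, 3, 8

# Base network parameters: a one-hidden-layer MLP.
W1 = rng.normal(size=(D_IN, D_HID)) / np.sqrt(D_IN)
W2 = rng.normal(size=(D_HID, D_OUT)) / np.sqrt(D_HID)

# Epinet parameters: a small MLP that takes the base network's
# features together with an epistemic index z and outputs an
# index-dependent offset to the base logits.
V1 = rng.normal(size=(D_HID + D_INDEX, D_HID)) / np.sqrt(D_HID + D_INDEX)
V2 = rng.normal(size=(D_HID, D_OUT)) / np.sqrt(D_HID)

def enn_forward(x, z):
    """ENN interface: the output depends on input x AND epistemic index z."""
    h = np.tanh(x @ W1)          # base features
    base_logits = h @ W2         # conventional (marginal) prediction
    e = np.tanh(np.concatenate([h, z]) @ V1)
    return base_logits + e @ V2  # epinet adds an index-dependent offset

x = rng.normal(size=D_IN)

# Joint predictions: sample several indices z and compare the outputs.
# Spread across indices reflects epistemic uncertainty; averaging over z
# recovers a marginal prediction.
zs = rng.normal(size=(5, D_INDEX))
logits = np.array([enn_forward(x, z) for z in zs])
print(logits.std(axis=0))
```

A conventional network corresponds to the special case where the output ignores `z`; an ensemble corresponds to `z` selecting one particle. The epinet instead reuses one set of base features, so drawing many indices costs only extra passes through the small epinet head.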