We are interested in estimating the uncertainties of deep neural networks,
which play an important role in many scientific and engineering problems. In
this paper, we present a striking new finding that an ensemble of neural
networks with the same weight initialization, trained on datasets that are
shifted by a constant bias, gives rise to slightly inconsistent trained models,
where the differences in predictions are a strong indicator of epistemic
uncertainties. Using the neural tangent kernel (NTK), we demonstrate that this
phenomenon occurs in part because the NTK is not shift-invariant. Since this is
achieved via a trivial input transformation, we show that the ensemble can be
approximated using just a single neural network -- using a technique that we
call Δ-UQ -- which estimates uncertainty around a prediction by
marginalizing out the effect of the biases. We show that Δ-UQ's
uncertainty estimates are superior to many of the current methods on a variety
of benchmarks -- outlier rejection, calibration under distribution shift, and
sequential design optimization of black-box functions.
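
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of anchoring and marginalization the abstract describes: a single network consumes a bias-shifted ("anchored") input [x - c, c], and at test time predictions are averaged over several anchors, with their spread used as the uncertainty estimate. The names (AnchoredMLP, predict_with_uncertainty), the exact input transformation, and all hyperparameters are illustrative assumptions, not taken from the paper.

# Hypothetical sketch of bias-anchored training and anchor-marginalized inference.
import torch
import torch.nn as nn

class AnchoredMLP(nn.Module):
    """Single model that consumes the anchored input [x - c, c]."""
    def __init__(self, in_dim: int, hidden: int = 64, out_dim: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # Shift the input by the anchor (constant bias) c and keep c as a feature.
        return self.net(torch.cat([x - c, c], dim=-1))

@torch.no_grad()
def predict_with_uncertainty(model, x, anchors):
    # Marginalize over anchors: mean prediction, with the spread across anchors
    # serving as the epistemic-uncertainty estimate.
    preds = torch.stack([model(x, c.expand_as(x)) for c in anchors])
    return preds.mean(dim=0), preds.std(dim=0)

if __name__ == "__main__":
    # Tiny illustrative regression problem on random data.
    x_train = torch.randn(256, 4)
    y_train = x_train.sum(dim=-1, keepdim=True)
    model = AnchoredMLP(in_dim=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        # Each training step centers the batch with a randomly drawn anchor.
        c = x_train[torch.randint(0, len(x_train), (1,))]
        loss = nn.functional.mse_loss(model(x_train, c.expand_as(x_train)), y_train)
        opt.zero_grad(); loss.backward(); opt.step()
    anchors = x_train[torch.randperm(len(x_train))[:10]]  # a handful of test-time anchors
    mean, std = predict_with_uncertainty(model, torch.randn(5, 4), anchors)
    print(mean.squeeze(), std.squeeze())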