Detecting out-of-distribution (OOD) examples is crucial to guarantee the
reliability and safety of deep neural networks in real-world settings. In this
paper, we offer an innovative perspective on quantifying the disparities
between in-distribution (ID) and OOD data -- analyzing the uncertainty that
arises when models attempt to explain their predictive decisions. This
perspective is motivated by our observation that gradient-based attribution
methods encounter challenges in assigning feature importance to OOD data,
thereby yielding divergent explanation patterns. Consequently, we investigate
how attribution gradients lead to uncertain explanation outcomes and introduce
two forms of abnormalities for OOD detection: the zero-deflation abnormality
and the channel-wise average abnormality. We then propose GAIA, a simple and
effective approach that incorporates Gradient Abnormality Inspection and
Aggregation. The effectiveness of GAIA is validated on both commonly utilized
(CIFAR) and large-scale (ImageNet-1k) benchmarks. Specifically, GAIA reduces
the average FPR95 by 23.10% on CIFAR10 and by 45.41% on CIFAR100 compared to
advanced post-hoc methods.
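
To make the idea of inspecting attribution gradients concrete, below is a minimal sketch of how two such abnormality statistics could be computed for a batch of inputs. The abstract does not specify the exact quantities, so the details here are assumptions: gradients are taken of the predicted class's log-probability with respect to an intermediate feature map, the "zero-deflation" statistic is taken to be the fraction of exactly-zero gradient entries, and the channel-wise statistic is taken to be the spread of per-channel mean gradients. The helper name and layer choice are hypothetical and not from the paper.

```python
import torch
import torch.nn.functional as F


def gradient_abnormality_scores(model, feature_layer, x):
    """Sketch (assumptions, not the paper's exact formulation):
    compute two abnormality statistics from attribution gradients
    of the predicted class's log-probability w.r.t. an intermediate
    feature map of shape (B, C, H, W)."""
    feats = {}

    def hook(module, inputs, output):
        output.retain_grad()          # keep gradients on this non-leaf tensor
        feats["map"] = output

    handle = feature_layer.register_forward_hook(hook)
    logits = model(x)                 # forward pass; hook captures the feature map
    handle.remove()

    # Gradient of the predicted class's log-probability (an assumed attribution target).
    pred = logits.argmax(dim=1)
    log_prob = F.log_softmax(logits, dim=1)[torch.arange(x.size(0)), pred]
    log_prob.sum().backward()

    g = feats["map"].grad             # attribution gradients, (B, C, H, W)

    # Hypothetical "zero-deflation" statistic: share of exactly-zero gradient entries.
    zero_rate = (g == 0).float().mean(dim=(1, 2, 3))

    # Hypothetical channel-wise statistic: dispersion of per-channel mean gradients.
    channel_mean = g.mean(dim=(2, 3))         # (B, C)
    channel_spread = channel_mean.std(dim=1)

    return zero_rate, channel_spread
```

In an OOD-detection pipeline of this kind, such per-sample statistics would typically be aggregated into a single score and thresholded, with ID and OOD data expected to produce systematically different values; the aggregation rule used by GAIA itself is described in the paper, not here.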