Detecting out-of-distribution (OOD) samples is essential for ensuring the
reliability of deep neural networks (DNNs) in real-world scenarios. While
previous research has predominantly investigated the disparity between
in-distribution (ID) and OOD data by analyzing forward-pass information, the
discrepancy in parameter gradients during the backward pass of DNNs has
received far less attention. Existing studies on gradient disparities
mainly exploit gradient norms, neglecting the rich information embedded in
gradient directions. To bridge this gap, in this paper,
we conduct a comprehensive investigation into leveraging the entirety of
gradient information for OOD detection. The primary challenge is the high
dimensionality of the gradient, which matches the number of network parameters.
To address this, we propose linear dimensionality reduction of the gradient
using a designated subspace spanned by principal components. This technique
yields a low-dimensional representation of the gradient with minimal
information loss. Subsequently, by integrating the
reduced gradient with various existing detection score functions, our approach
demonstrates superior performance across a wide range of detection tasks. For
instance, on the ImageNet benchmark, our method achieves an average reduction
of 11.15% in the false positive rate at 95% recall (FPR95) compared to the
current state-of-the-art approach. Our code will be released.
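
To make the pipeline concrete, the following PyTorch sketch illustrates the two steps the abstract describes: extracting a per-sample gradient and projecting it onto a principal-component subspace, after which the reduced vector can be fed to any existing detection score. This is a minimal illustration under stated assumptions, not the released implementation; the label-free surrogate loss, the choice of the classifier head `model.fc`, and all function names are ours.

```python
import torch
import torch.nn.functional as F

def extract_gradient(model, x):
    """Per-sample gradient of a label-free surrogate loss w.r.t. the final
    linear layer, flattened to a vector. Cross-entropy against a uniform
    target (gradient-equivalent to KL(uniform || softmax)) is a common
    choice in gradient-based OOD work; treat it as an assumption here."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))               # x: one unbatched input
    loss = -F.log_softmax(logits, dim=1).mean()  # uniform-target cross-entropy
    loss.backward()
    # Assumes the classifier head is `model.fc` (true for torchvision ResNets).
    return model.fc.weight.grad.flatten().detach().clone()

def fit_subspace(id_grads, k):
    """Top-k principal directions of the ID gradient matrix via low-rank PCA."""
    g = torch.stack(id_grads)            # (N, P): one gradient per ID sample
    _, _, v = torch.pca_lowrank(g, q=k)  # centers internally; v: (P, k) basis
    return v

def reduce_gradient(grad, basis):
    """Project a P-dimensional gradient onto the k-dimensional subspace."""
    return grad @ basis                  # (k,) low-dimensional representation

# Usage sketch (`model` and unbatched tensors `id_samples`, `x_test` assumed):
# id_grads = [extract_gradient(model, x) for x in id_samples]
# basis = fit_subspace(id_grads, k=64)
# z = reduce_gradient(extract_gradient(model, x_test), basis)
# score = z.norm()  # or plug z into any existing detection score function
```

The norm of the reduced gradient is only one possible score; as the abstract notes, the reduced gradient can be combined with various existing detection score functions.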