Out-of-distribution (OOD) detection is critical for preventing deep learning
models from making incorrect predictions, and thus for ensuring the safety of
artificial intelligence systems. Especially in safety-critical applications
such as medical diagnosis and autonomous driving, the cost of an incorrect
decision is usually unbearable. However, neural networks often suffer from the
overconfidence issue, assigning high confidence to OOD data that are never seen
during training and may be unrelated to the training data, namely
in-distribution (ID) data. Determining the reliability of a prediction
therefore remains a challenging task. In this work, we propose
Uncertainty-Estimation with Normalized Logits (UE-NL), a robust learning method
for OOD detection, which offers three main benefits. (1) A network trained with
UE-NL treats every ID sample equally: it predicts an uncertainty score for each
input, and this score is incorporated into the softmax function to adjust the
learning strength of easy and hard samples during training, making the model
learn robustly and accurately. (2) UE-NL enforces a constant vector norm on the
logits, decoupling the optimization process from the growth of the output norm,
which contributes to the overconfidence issue (a minimal sketch of both
mechanisms follows below). (3) UE-NL provides a new metric for detecting OOD
data: the magnitude of the predicted uncertainty score. Experiments demonstrate
that UE-NL achieves top performance on common OOD benchmarks and is more robust
to noisy ID data that may be misjudged as OOD
data by other methods.
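
The abstract does not give the exact formulation, so the following PyTorch
sketch only illustrates the two training-time mechanisms as described above:
logits rescaled to a constant norm, and a predicted per-sample uncertainty used
as a softmax temperature so that hard samples contribute softer gradients. The
names ue_nl_loss, s, and log_u, and the log_u.mean() regularizer (in the spirit
of learned loss attenuation) are our assumptions, not the paper's definitions.

    import torch
    import torch.nn.functional as F

    def ue_nl_loss(features, weight, log_u, labels, s=10.0):
        # features: (B, D) penultimate-layer features; weight: (C, D)
        # classifier weights; log_u: (B,) per-sample log-uncertainty from an
        # auxiliary head; labels: (B,) class indices; s: the constant norm
        # enforced on the logit vectors.
        logits = features @ weight.t()
        # (2) Enforce a constant vector norm on the logits, decoupling
        # optimization from the growth of the output norm.
        logits = s * F.normalize(logits, dim=1)
        # (1) Use the predicted uncertainty as a per-sample softmax
        # temperature: uncertain (hard) samples get flatter outputs and
        # hence weaker gradients.
        u = log_u.exp()  # exponentiate to guarantee positivity
        loss = F.cross_entropy(logits / u.unsqueeze(1), labels)
        # Penalize large uncertainty so the model cannot inflate it for free
        # (our assumption, not from the source).
        return loss + log_u.mean()

    # (3) At test time, the magnitude of the uncertainty score itself serves
    # as the OOD metric: inputs whose score exceeds a threshold are flagged
    # as OOD.
    def ood_score(log_u):
        return log_u.exp()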