The evaluation of out-of-distribution (OOD) detection methods typically assumes
access to test ground truths, i.e., labels indicating whether each test sample
is in-distribution (IND) or OOD. However, in the real world such ground truths
are not always available, so we cannot tell which samples are correctly detected
and cannot compute metrics such as AUROC to compare different OOD detection methods. In
this paper, we are the first to introduce the unsupervised evaluation problem
in OOD detection, which aims to evaluate OOD detection methods in changing
real-world environments without OOD labels. We propose three methods to compute
Gscore as an unsupervised indicator of OOD detection performance. We further
introduce Gbench, a new benchmark comprising 200 real-world OOD datasets with
diverse label spaces, to train and evaluate our methods. Through experiments, we
find a strong quantitative correlation between Gscore and the OOD detection
performance. Extensive experiments demonstrate that Gscore achieves
state-of-the-art unsupervised evaluation performance and generalizes well across different
IND/OOD datasets, OOD detection methods, backbones and dataset sizes. We
further provide interesting analyses of the effects of backbones and IND/OOD
datasets on OOD detection performance. The data and code will be made available.