Deep networks often make confident yet incorrect predictions when tested
on outlier data far removed from their training distributions.
Likelihoods computed by deep generative models (DGMs) are a candidate metric
for outlier detection with unlabeled data. Yet, previous studies have shown
that DGM likelihoods are unreliable and can be easily biased by simple
transformations to input data. Here, we examine outlier detection with
variational autoencoders (VAEs), among the simplest of DGMs. We propose novel
analytical and algorithmic approaches to ameliorate key biases with VAE
likelihoods. Our bias corrections are sample-specific, computationally
inexpensive, and readily computed for various decoder visible distributions.
Next, we show that a well-known image pre-processing technique -- contrast
stretching -- extends the effectiveness of bias correction to further improve
outlier detection. Our approach achieves state-of-the-art accuracies with nine
grayscale and natural image datasets, and demonstrates significant advantages
in both speed and performance over four recent competing approaches.
In summary, lightweight remedies suffice to achieve robust outlier detection
with VAEs.
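As a concrete illustration of the pre-processing step named above, the sketch below shows one standard form of contrast stretching: linearly rescaling each image's intensities so a low and a high percentile map to 0 and 1. The function name `contrast_stretch`, the `lower_pct`/`upper_pct` parameters, and the min-max defaults are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def contrast_stretch(image, lower_pct=0.0, upper_pct=100.0):
    """Linearly rescale a single image's pixel intensities to [0, 1].

    lower_pct / upper_pct choose the intensity percentiles mapped to
    0 and 1; the defaults give plain per-sample min-max stretching.
    (Percentile choices are an assumption, not the paper's setting.)
    """
    lo = np.percentile(image, lower_pct)
    hi = np.percentile(image, upper_pct)
    if hi <= lo:  # constant image: nothing to stretch
        return np.zeros_like(image, dtype=np.float32)
    stretched = (image.astype(np.float32) - lo) / (hi - lo)
    # Clip values outside the chosen percentile range back into [0, 1].
    return np.clip(stretched, 0.0, 1.0)
```

Because the rescaling is computed per sample from that sample's own intensity range, it adds negligible cost on top of a single VAE forward pass, consistent with the lightweight remedies the abstract describes.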