Modern deep artificial neural networks have achieved remarkable success in
computer vision and beyond. However, their application to many real-world
tasks is hindered by limitations such as overconfident uncertainty estimates
on out-of-distribution data and performance deterioration
under data distribution shifts. Several families of deep generative models
used for density estimation have been shown to fail at detecting
out-of-distribution samples, assigning higher likelihoods to anomalous data
than to in-distribution data. We investigate this failure mode in Variational
Autoencoder (VAE) models, which are likewise prone to it, and improve the
model's out-of-distribution detection performance by employing an alternative
training scheme that utilizes negative samples. We present a fully
unsupervised version of the scheme:
when the model is trained adversarially, the generator's own outputs can
serve as negative samples. We demonstrate empirically that the approach
reduces overconfident likelihood estimates on out-of-distribution image data.
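
The abstract does not spell out the training objective, so the sketch below shows one plausible instantiation of the negative-sample scheme for a VAE, written in PyTorch. The `model.encode`/`model.decode` interfaces, the form of the penalty, and the `weight` hyperparameter are illustrative assumptions, not details taken from the text.

```python
import torch
import torch.nn.functional as F

def elbo(model, x):
    """Per-example evidence lower bound for a batch of binary images x.

    Assumes `model.encode(x)` returns the Gaussian posterior mean and
    log-variance and `model.decode(z)` returns Bernoulli logits over
    pixels; these interfaces are illustrative, not from the paper.
    """
    mu, logvar = model.encode(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
    log_px_z = -F.binary_cross_entropy_with_logits(
        model.decode(z), x, reduction="none"
    ).flatten(1).sum(dim=1)                                   # E_q[log p(x|z)]
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).flatten(1).sum(dim=1)
    return log_px_z - kl                                      # per-example ELBO

def negative_sample_loss(model, x, x_neg, weight=1.0):
    # Maximize the ELBO on in-distribution data while penalizing a high
    # ELBO on negative samples. The linear penalty and `weight` are
    # assumptions; in practice a bounded or margin-based penalty may be
    # needed to keep the negative term from dominating the loss.
    return -(elbo(model, x).mean() - weight * elbo(model, x_neg).mean())

def generator_negatives(model, batch_size, latent_dim, device):
    # Fully unsupervised variant: the generator's own outputs serve as
    # negatives. Detaching stops gradients from flowing into the decoder
    # through the negative term (whether the paper does this is an
    # assumption made here for stability).
    z = torch.randn(batch_size, latent_dim, device=device)
    return torch.sigmoid(model.decode(z)).detach()
```

In the fully unsupervised variant, `generator_negatives` would supply `x_neg` during training, so no external outlier dataset is required.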