Do Deep Generative Models Know What They Don't Know?
A neural network deployed in the wild may be asked to make predictions for
inputs that were drawn from a different distribution than that of the training
data. A plethora of work has demonstrated that it is easy to find or synthesize
inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed as robust to such mistaken confidence, since modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find
that the density learned by flow-based models, VAEs, and PixelCNNs cannot
distinguish images of common objects such as dogs, trucks, and horses (i.e.
CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher
likelihood to the latter when the model is trained on the former. Moreover, we
find evidence of this phenomenon when pairing several popular image data sets:
FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN.
To investigate this curious behavior, we focus our analysis on flow-based generative models in particular, since they are trained and evaluated via the exact marginal likelihood. We find that this behavior persists even when we restrict
the flows to constant-volume transformations. These transformations admit some
theoretical analysis, and we show that the difference in likelihoods can be
explained by the location and variances of the data and the model curvature.
Our results caution against using the density estimates from deep generative
models to identify inputs similar to the training distribution until their
behavior for out-of-distribution inputs is better understood.
Comment: ICLR 2019
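The failure mode the abstract describes is easy to reproduce in miniature. Below is a minimal sketch, not from the paper's codebase: a multivariate Gaussian stands in for the deep density model (flow/VAE/PixelCNN), synthetic data stands in for CIFAR-10, and a lower-variance distribution stands in for SVHN. The naive rule "flag inputs whose log-likelihood falls below a training-set threshold" then fails for the reason the paper identifies: the lower-variance data concentrates near the mode and receives *higher* likelihood.

```python
# Toy sketch of density-based OOD scoring, the scheme the paper challenges.
# A multivariate Gaussian is a stand-in for a deep generative model, and the
# data below is synthetic; none of this comes from the paper's experiments.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# "Training" distribution and a lower-variance "OOD" distribution.
train = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))
ood = rng.normal(loc=0.0, scale=0.5, size=(5000, 8))

# Fit the density model to the training data.
model = multivariate_normal(mean=train.mean(axis=0), cov=np.cov(train.T))

# Score held-out in-distribution and OOD inputs by log-likelihood.
test = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))
ll_in = model.logpdf(test)
ll_ood = model.logpdf(ood)
print(f"mean log p(x), in-dist: {ll_in.mean():.2f}")
print(f"mean log p(x), OOD:     {ll_ood.mean():.2f}")  # higher than in-dist!

# Naive OOD rule: flag x as novel when log p(x) falls below a threshold
# set from the training data (here, its 5th percentile). Because the OOD
# set sits near the mode, almost nothing gets flagged.
tau = np.percentile(model.logpdf(train), 5)
print(f"fraction of OOD flagged: {(ll_ood < tau).mean():.2f}")
```

Even in this two-line-of-math setting, the likelihood ordering inverts for the same location-and-variance reasons the abstract cites for CIFAR-10 vs SVHN.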
Detecting the Unexpected via Image Resynthesis
Classical semantic segmentation methods, including the recent deep learning
ones, assume that all classes observed at test time have been seen during
training. In this paper, we tackle the more realistic scenario where unexpected
objects of unknown classes can appear at test time. The main approaches in this area either leverage prediction uncertainty to flag low-confidence regions as unknown, or rely on autoencoders and highlight poorly-decoded regions. Having observed that, in both cases, the detected regions typically do not correspond to unexpected objects, we introduce a drastically different strategy: it relies on the intuition that the network will produce spurious labels in regions depicting unexpected objects.
Therefore, resynthesizing the image from the resulting semantic map will yield
significant appearance differences with respect to the input image. In other
words, we translate the problem of detecting unknown classes to one of
identifying poorly-resynthesized image regions. We show that this approach outperforms both uncertainty- and autoencoder-based methods.
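The pipeline the abstract describes (segment, resynthesize from the predicted semantic map, compare to the input) can be sketched as below. The two single-layer conv nets are hypothetical placeholders for a trained segmentation network and a trained semantic-map-to-image generator, and a plain per-pixel L1 difference stands in for whatever learned discrepancy measure the authors actually use.

```python
# Sketch of resynthesis-based detection of unexpected objects.
# Both networks are untrained placeholders, not the paper's models.
import torch
import torch.nn as nn

NUM_CLASSES = 19  # e.g. a Cityscapes-sized label set (assumption)

seg_net = nn.Conv2d(3, NUM_CLASSES, kernel_size=3, padding=1)    # placeholder
synth_net = nn.Conv2d(NUM_CLASSES, 3, kernel_size=3, padding=1)  # placeholder

def unexpected_object_map(image: torch.Tensor) -> torch.Tensor:
    """Return a per-pixel anomaly score for a (3, H, W) image."""
    with torch.no_grad():
        logits = seg_net(image.unsqueeze(0))           # (1, C, H, W)
        labels = logits.argmax(dim=1)                  # predicted semantic map
        one_hot = nn.functional.one_hot(labels, NUM_CLASSES)
        one_hot = one_hot.permute(0, 3, 1, 2).float()  # back to (1, C, H, W)
        resynth = synth_net(one_hot)                   # image from labels only
        # Large reconstruction error marks regions the semantic map could
        # not explain, i.e. candidate unknown objects.
        return (resynth - image.unsqueeze(0)).abs().mean(dim=1).squeeze(0)

score = unexpected_object_map(torch.rand(3, 64, 64))
print(score.shape)  # torch.Size([64, 64])
```

The key design point carries over from the abstract: the generator sees only the semantic map, so any object the segmenter mislabels cannot be resynthesized faithfully and surfaces in the discrepancy map.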
Trends and oscillations in the Indian summer monsoon rainfall over the last two millennia
Observations show that summer rainfall over large parts of South Asia has declined over the past five to six decades. It remains unclear, however, whether this trend is due to natural variability or to increased anthropogenic aerosol loading over South Asia. Here we use stable oxygen isotopes in speleothems from northern India to reconstruct variations in Indian monsoon rainfall over the last two millennia. We find that, within the long-term context of our record, the current drying trend is not outside the envelope of the monsoon's oscillatory variability, albeit at the lower edge of this variance. Furthermore, the magnitude of multi-decadal oscillatory variability in monsoon rainfall inferred from our proxy record is comparable to model estimates of anthropogenically forced trends in mean monsoon rainfall in the 21st century under various emission scenarios. Our results suggest that anthropogenically forced changes in monsoon rainfall will remain difficult to detect against a backdrop of large natural variability.