Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation
We introduce a new loss function for the weakly-supervised training of
semantic image segmentation models based on three guiding principles: to seed
with weak localization cues, to expand objects based on the information about
which classes can occur in an image, and to constrain the segmentations to
coincide with object boundaries. We show experimentally that training a deep
convolutional neural network using the proposed loss function leads to
substantially better segmentations than previous state-of-the-art methods on
the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the
working mechanism of our method by a detailed experimental study that
illustrates how the segmentation quality is affected by each term of the
proposed loss function as well as their combinations.
Comment: ECCV 2016
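The three guiding principles can be sketched as a composite loss. The sketch below is a simplified stand-in, not the paper's formulation: the seeding term is a cross-entropy over cue pixels, the expansion term uses plain max-pooling aggregation (the paper uses global weighted rank pooling), the constrain term is a KL divergence towards a boundary-aware reference such as a CRF output, and the `weights` parameter is a hypothetical knob.

```python
import numpy as np

def sec_loss(probs, seeds, image_labels, crf_probs, weights=(1.0, 1.0, 1.0)):
    """Simplified sketch of a seed/expand/constrain loss.

    probs:        (H, W, C) per-pixel class probabilities from the network
    seeds:        (H, W) int map, -1 where no cue, else a class index
    image_labels: (C,) binary vector of classes present in the image
    crf_probs:    (H, W, C) boundary-aware reference distribution
    """
    eps = 1e-8
    # Seed: cross-entropy only at pixels covered by weak localization cues.
    mask = seeds >= 0
    seed_l = -np.mean(np.log(probs[mask, seeds[mask]] + eps)) if mask.any() else 0.0
    # Expand: push present classes up and absent classes down, using a
    # simple per-class max over pixels as the image-level aggregation.
    class_scores = probs.reshape(-1, probs.shape[-1]).max(axis=0)
    present = image_labels > 0
    expand_l = (-np.mean(np.log(class_scores[present] + eps))
                - np.mean(np.log(1.0 - class_scores[~present] + eps)))
    # Constrain: KL divergence towards the boundary-aware reference.
    constrain_l = np.mean(np.sum(
        crf_probs * np.log((crf_probs + eps) / (probs + eps)), axis=-1))
    a, b, c = weights
    return a * seed_l + b * expand_l + c * constrain_l
```

The ablation described in the abstract corresponds to zeroing individual entries of `weights`.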
Exploiting saliency for object segmentation from image level labels
There have been remarkable improvements in the semantic labelling task in
recent years. However, state-of-the-art methods rely on large-scale
pixel-level annotations. This paper studies the problem of training a
pixel-wise semantic labeller network from image-level annotations of the
present object classes. Recently, it has been shown that high quality seeds
indicating discriminative object regions can be obtained from image-level
labels. Without additional information, obtaining the full extent of the object
is an inherently ill-posed problem due to co-occurrences. We propose using a
saliency model as additional information and hereby exploit prior knowledge on
the object extent and image statistics. We show how to combine both information
sources in order to recover 80% of the fully supervised performance, which is
the new state of the art in weakly supervised training for pixel-wise semantic
labelling. The code is available at https://goo.gl/KygSeb.
Comment: CVPR 2017
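One simple way to combine the two information sources is to let the saliency map decide object extent and background, keeping seed classes only on salient pixels. The helper below is a hypothetical simplification for illustration (the paper's actual fusion is more involved); `tau` is an assumed saliency threshold.

```python
import numpy as np

def pseudo_labels(seeds, saliency, tau=0.5):
    """Grow image-level seeds to object extent using a saliency map.

    seeds:    (H, W) int map, -1 where no cue, else a class index (>0)
    saliency: (H, W) float map in [0, 1]
    Returns an (H, W) pseudo-label map: 0 = background, -1 = ignore.
    """
    out = -np.ones_like(seeds)
    out[saliency < tau] = 0                 # non-salient pixels -> background
    fg = (saliency >= tau) & (seeds > 0)    # salient pixels with a seed class
    out[fg] = seeds[fg]                     # salient, unseeded pixels stay ignored
    return out
```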
Shortcut Detection with Variational Autoencoders
For real-world applications of machine learning (ML), it is essential that
models make predictions based on well-generalizing features rather than
spurious correlations in the data. The identification of such spurious
correlations, also known as shortcuts, is a challenging problem and has so far
been scarcely addressed. In this work, we present a novel approach to detect
shortcuts in image and audio datasets by leveraging variational autoencoders
(VAEs). The disentanglement of features in the latent space of VAEs allows us
to discover feature-target correlations in datasets and semi-automatically
evaluate them for ML shortcuts. We demonstrate the applicability of our method
on several real-world datasets and identify shortcuts that have not been
discovered before.
Comment: Accepted at the ICML 2023 Workshop on Spurious Correlations, Invariance and Stability
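A minimal version of the feature-target correlation scan could look as follows, assuming the latent codes are already disentangled (e.g. VAE posterior means). The function and its threshold are illustrative assumptions; the paper's pipeline additionally evaluates flagged dimensions semi-automatically before declaring them shortcuts.

```python
import numpy as np

def shortcut_candidates(latents, targets, threshold=0.5):
    """Flag latent dimensions whose codes correlate strongly with the target.

    latents: (N, D) encodings, e.g. VAE posterior means
    targets: (N,) labels
    Returns (flagged dimension indices, |Pearson r| per dimension).
    """
    z = (latents - latents.mean(0)) / (latents.std(0) + 1e-8)
    t = (targets - targets.mean()) / (targets.std() + 1e-8)
    corr = np.abs(z.T @ t) / len(t)     # |Pearson r| for each latent dimension
    return np.where(corr > threshold)[0], corr
```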
Explicit Tradeoffs between Adversarial and Natural Distributional Robustness
Several existing works study either adversarial or natural distributional
robustness of deep neural networks separately. In practice, however, models
need to enjoy both types of robustness to ensure reliability. In this work, we
bridge this gap and show that in fact, explicit tradeoffs exist between
adversarial and natural distributional robustness. We first consider a simple
linear regression setting on Gaussian data with disjoint sets of core and
spurious features. In this setting, through theoretical and empirical analysis,
we show that (i) adversarial training with ℓ1 and ℓ2 norms
increases the model reliance on spurious features; (ii) for ℓ∞
adversarial training, spurious reliance only occurs when the scale of the
spurious features is larger than that of the core features; (iii) adversarial
training can have an unintended consequence in reducing distributional
robustness, specifically when spurious correlations are changed in the new test
domain. Next, we present extensive empirical evidence, using a test suite of
twenty adversarially trained models evaluated on five benchmark datasets
(ObjectNet, RIVAL10, Salient ImageNet-1M, ImageNet-9, Waterbirds), that
adversarially trained classifiers rely on backgrounds more than their
standardly trained counterparts, validating our theoretical results. We also
show that spurious correlations in training data (when preserved in the test
domain) can improve adversarial robustness, revealing that previous claims that
adversarial vulnerability is rooted in spurious correlations are incomplete.
Comment: Accepted to NeurIPS 2022
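The linear regression setting can be imitated with a toy experiment: Gaussian data with disjoint core and spurious features, where the spurious feature co-occurs with the label without being causal. All scales and noise levels below are illustrative assumptions, and the sketch only establishes the baseline against which adversarial training is compared, namely that ordinary least squares already places nonzero weight on the spurious feature.

```python
import numpy as np

def spurious_reliance(scale_spurious=2.0, n=5000, seed=0):
    """Toy core/spurious linear regression on Gaussian data.

    The label drives the core feature (low noise); the spurious feature is
    correlated with the label but non-causal (higher noise, larger scale).
    Returns the least-squares weights (w_core, w_spurious); a nonzero
    spurious weight indicates reliance on the spurious feature.
    """
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n)
    core = y + 0.1 * rng.standard_normal(n)                      # informative
    spur = scale_spurious * (y + 0.5 * rng.standard_normal(n))   # spurious
    X = np.stack([core, spur], axis=1)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```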
IST Austria Thesis
Modern computer vision systems rely heavily on statistical machine learning models, which typically require large amounts of labeled data to be learned reliably. Moreover, computer vision research has recently adopted representation learning techniques on a wide scale, which further increases the demand for labeled data. However, for many important practical problems only a relatively small amount of labeled data is available, so it is difficult to leverage the full potential of representation learning methods. One way to overcome this obstacle is to invest substantial resources into producing large labeled datasets. Unfortunately, this can be prohibitively expensive in practice.

In this thesis we focus on an alternative way of tackling this issue: methods that make use of weakly-labeled or even unlabeled data. Specifically, the first half of the thesis is dedicated to the semantic image segmentation task. We develop a technique that achieves competitive segmentation performance while only requiring annotations in the form of global image-level labels instead of dense segmentation masks. Subsequently, we present a new methodology that further improves segmentation performance by leveraging a small amount of additional feedback from a human annotator. Using our methods, practitioners can greatly reduce the data annotation effort required to learn modern image segmentation models.

In the second half of the thesis we focus on methods for learning from unlabeled visual data. We study a family of autoregressive models for modeling the structure of natural images and discuss potential applications of these models. Moreover, we conduct an in-depth study of one of these applications, in which we develop a state-of-the-art model for the probabilistic image colorization task.
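The autoregressive models mentioned in the thesis abstract factor the image likelihood over a fixed pixel ordering, p(x) = Π_i p(x_i | x_<i). A minimal sketch of evaluating this factorization is shown below; the raster-scan order and the `predict` callable are placeholders standing in for whatever conditional model (e.g. a PixelCNN-style network) is used.

```python
import numpy as np

def autoregressive_loglik(pixels, predict):
    """Log-likelihood of a pixel sequence under a chain-rule factorization.

    pixels:  1-D int array of discretized pixel values in scan order
    predict: callable mapping the prefix x_<i to a probability vector
             over possible values of the next pixel
    """
    total = 0.0
    for i in range(len(pixels)):
        probs = predict(pixels[:i])              # conditional distribution
        total += np.log(probs[pixels[i]] + 1e-12)
    return total
```

A uniform `predict` over K values gives log-likelihood n·log(1/K) for a length-n sequence, which is a quick sanity check for any implementation.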