Budget-aware Semi-Supervised Semantic and Instance Segmentation
Methods that move towards less supervised scenarios are key for image
segmentation, as dense labels demand significant human intervention. Generally,
the annotation burden is mitigated by labeling datasets with weaker forms of
supervision, e.g. image-level labels or bounding boxes. Another option is
semi-supervised settings, which commonly leverage a few strong annotations and a
large amount of unlabeled or weakly-labeled data. In this paper, we revisit
semi-supervised segmentation schemes and narrow down significantly the
annotation budget (in terms of total labeling time of the training set)
compared to previous approaches. With a very simple pipeline, we demonstrate
that at low annotation budgets, semi-supervised methods outperform
weakly-supervised ones by a wide margin for both semantic and instance segmentation. Our
approach also outperforms previous semi-supervised works at a much reduced
labeling cost. We present results for the Pascal VOC benchmark and unify weakly
and semi-supervised approaches by considering the total annotation budget, thus
allowing a fairer comparison between methods. Comment: To appear in CVPR-W 2019 (DeepVision workshop).
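The unifying idea above, comparing methods by total annotation budget rather than by dataset size, can be sketched in a few lines. The per-annotation times and split sizes below are illustrative assumptions, not the paper's measured values.

```python
# Hypothetical per-annotation labeling times in minutes; these numbers
# are assumptions for illustration, not the timings used in the paper.
TIME_PER_LABEL = {
    "full_mask": 4.0,      # dense pixel-level mask
    "bounding_box": 0.7,   # one box per object
    "image_level": 0.03,   # class tag only
}

def total_budget(counts):
    """Total labeling time (minutes) of a training set.

    counts: dict mapping annotation type -> number of annotations.
    """
    return sum(TIME_PER_LABEL[kind] * n for kind, n in counts.items())

# Semi-supervised split: a few strong masks plus many cheap tags.
semi = total_budget({"full_mask": 200, "image_level": 10000})
# Weakly-supervised split: class tags for every training image.
weak = total_budget({"image_level": 10582})
```

Expressing both schemes in the same unit (minutes of annotator time) is what makes the weakly- vs semi-supervised comparison fair.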
Harvesting Information from Captions for Weakly Supervised Semantic Segmentation
Since acquiring pixel-wise annotations for training convolutional neural
networks for semantic image segmentation is time-consuming, weakly supervised
approaches that only require class tags have been proposed. In this work, we
propose another form of supervision, namely image captions as they can be found
on the Internet. These captions have two advantages: they do not require
additional curation, unlike the clean class tags used by current
weakly supervised approaches, and they provide textual context for the classes
present in an image. To leverage such textual context, we deploy a multi-modal
network that learns a joint embedding of the visual representation of the image
and the textual representation of the caption. The network estimates text
activation maps (TAMs) for class names as well as compound concepts, i.e.
combinations of nouns and their attributes. The TAMs of compound concepts
describing classes of interest substantially improve the quality of the
estimated class activation maps which are then used to train a network for
semantic segmentation. We evaluate our method on the COCO dataset where it
achieves state-of-the-art results for weakly supervised image segmentation.
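The text activation maps described above can be sketched as a similarity between a text embedding and the visual feature at each spatial location of the joint embedding space. The tiny feature grid and 2-d embeddings below are illustrative assumptions, not the network's actual representations.

```python
def text_activation_map(feat_map, text_vec):
    """Score each spatial location by similarity to a text embedding.

    feat_map: H x W grid of feature vectors (nested lists), assumed to
              live in the same joint space as the text embedding.
    text_vec: embedding of a class name or compound concept
              (a noun plus its attribute, e.g. "brown dog").
    Returns an H x W map of dot-product activations (the TAM).
    """
    return [[sum(f * t for f, t in zip(cell, text_vec)) for cell in row]
            for row in feat_map]

# A 2x2 grid of 2-d visual features (toy values).
feats = [[[1.0, 0.0], [0.5, 0.5]],
         [[0.0, 1.0], [0.2, 0.8]]]
tam = text_activation_map(feats, [1.0, 0.0])  # a class-name direction
```

Locations whose visual features align with the concept's text embedding light up, which is how TAMs for compound concepts can sharpen the class activation maps used for training.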
Learning Segmentation Masks with the Independence Prior
A composite image built from an instance with a bad mask tends to look
fake. This observation encourages us to learn segmentation by generating realistic
composite images. To achieve this, we propose a novel framework that exploits a
new proposed prior called the independence prior based on Generative
Adversarial Networks (GANs). The generator produces an image with multiple
category-specific instance providers, a layout module and a composition module.
Firstly, each provider independently outputs a category-specific instance image
with a soft mask. Then the provided instances' poses are corrected by the
layout module. Lastly, the composition module combines these instances into a
final image. Trained with an adversarial loss and a penalty on the mask area, each
provider learns a mask that is as small as possible yet large enough to cover a
complete category-specific instance. Weakly supervised semantic segmentation
methods widely use grouping cues modeling the association between image parts,
which are either artificially designed or learned with costly segmentation
labels or only modeled on local pairs. Unlike them, our method automatically
models the dependence between any parts and learns instance segmentation. We
apply our framework in two cases: (1) Foreground segmentation on
category-specific images with box-level annotation. (2) Unsupervised learning
of instance appearances and masks with only one image of homogeneous object
cluster (HOC). We obtain appealing results in both tasks, which shows that the
independence prior is useful for instance segmentation and that it is possible to
learn instance masks without supervision from only one image. Comment: 7+5 pages, 13 figures, Accepted to AAAI 2019.
FickleNet: Weakly and Semi-supervised Semantic Image Segmentation using Stochastic Inference
The main obstacle to weakly supervised semantic image segmentation is the
difficulty of obtaining pixel-level information from coarse image-level
annotations. Most methods based on image-level annotations use localization
maps obtained from the classifier, but these only focus on the small
discriminative parts of objects and do not capture precise boundaries.
FickleNet explores diverse combinations of locations on feature maps created by
generic deep neural networks. It selects hidden units randomly and then uses
them to obtain activation scores for image classification. FickleNet implicitly
learns the coherence of each location in the feature maps, resulting in a
localization map which identifies both discriminative and other parts of
objects. The ensemble effects are obtained from a single network by selecting
random hidden unit pairs, which means that a variety of localization maps are
generated from a single image. Our approach does not require any additional
training steps and only adds a simple layer to a standard convolutional neural
network; nevertheless it outperforms recent comparable techniques on the Pascal
VOC 2012 benchmark in both weakly and semi-supervised settings. Comment: To appear in CVPR 2019.
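The stochastic-inference idea above, random hidden-unit selection yielding a different localization map on every pass through the same image, can be sketched as follows. The keep probability, grid, and function names are illustrative assumptions.

```python
import random

def stochastic_map(activations, keep_prob, rng):
    """Randomly keep hidden units in a feature map, so repeated calls
    on the same input yield different localization maps."""
    return [[v if rng.random() < keep_prob else 0.0 for v in row]
            for row in activations]

def ensemble_localization(activations, keep_prob=0.7, n_draws=50, seed=0):
    """Average many stochastic maps from a single image and network,
    approximating an ensemble without any additional training steps."""
    rng = random.Random(seed)
    h, w = len(activations), len(activations[0])
    acc = [[0.0] * w for _ in range(h)]
    for _ in range(n_draws):
        m = stochastic_map(activations, keep_prob, rng)
        for i in range(h):
            for j in range(w):
                acc[i][j] += m[i][j] / n_draws
    return acc

maps = ensemble_localization([[1.0, 2.0]], keep_prob=1.0, n_draws=3)
```

Because units outside the most discriminative region are sometimes the only ones kept, the averaged map also activates on the less discriminative parts of objects.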
CVFC: Attention-Based Cross-View Feature Consistency for Weakly Supervised Semantic Segmentation of Pathology Images
Histopathology image segmentation is the gold standard for diagnosing cancer,
and can indicate cancer prognosis. However, histopathology image segmentation
requires high-quality masks, so many studies now use image-level labels to
achieve pixel-level segmentation and reduce the need for fine-grained
annotation. To address this problem, we propose an attention-based cross-view
feature consistency end-to-end pseudo-mask generation framework named CVFC.
Specifically, CVFC is a three-branch joint framework composed of two ResNet38
networks and one ResNet50. Each independent branch integrates multi-scale
feature maps to generate a class activation map (CAM), and the size of the CAM
is adjusted in each branch by down-sampling and expansion. The middle branch
projects the feature matrix into query and key feature spaces and generates a
feature-space perception matrix, via a connection layer and an inner product,
to adjust and refine the CAM of each branch. Finally, the parameters of CVFC
are optimized in co-training mode through a feature consistency loss and a
feature cross loss. In extensive experiments, an IoU of 0.7122 and an fwIoU of
0.7018 are obtained on the WSSS4LUAD dataset, outperforming HistoSegNet, SEAM,
C-CAM, WSSS-Tissue, and OEEM. Comment: Submitted to BIBM 2023.
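The feature consistency loss that couples the branches above can be sketched as a mean squared difference between two branches' CAMs. The flat-list CAM representation and the function name are illustrative assumptions, not the paper's exact formulation.

```python
def consistency_loss(cam_a, cam_b):
    """Mean squared difference between two branches' class activation
    maps (H x W nested lists); minimizing it in co-training pushes the
    two views toward consistent activations."""
    n = len(cam_a) * len(cam_a[0])
    return sum((a - b) ** 2
               for row_a, row_b in zip(cam_a, cam_b)
               for a, b in zip(row_a, row_b)) / n

loss = consistency_loss([[1.0, 0.0]], [[0.0, 0.0]])
```

Each branch thus acts as a regularizer on the others: pseudo-masks that only one view supports incur a consistency cost and get suppressed.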