From Image-level to Pixel-level Labeling with Convolutional Networks
We are interested in inferring object segmentation by leveraging only object
class information, and by considering only minimal priors on the object
segmentation task. This problem could be viewed as a kind of weakly supervised
segmentation task, and naturally fits the Multiple Instance Learning (MIL)
framework: every training image is known to have (or not) at least one pixel
corresponding to the image class label, and the segmentation task can be
rewritten as inferring the pixels belonging to the class of the object (given
one image, and its object class). We propose a Convolutional Neural
Network-based model, which is constrained during training to put more weight on
pixels which are important for classifying the image. We show that at test
time, the model has learned to discriminate the right pixels well enough
that it performs very well on an existing segmentation benchmark, with only a
few smoothing priors added. Our system is trained using a subset of the
ImageNet dataset, and the segmentation experiments are performed on the
challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal
VOC). Our model beats the state-of-the-art results on the weakly supervised
object segmentation task by a large margin. We also compare the performance of
our model with state-of-the-art fully supervised segmentation approaches.
Comment: CVPR201
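The MIL constraint above — each training image is known to contain at least one pixel of its class — is commonly realized by pooling per-pixel class scores into a single image-level score, so that image-level supervision can drive pixel-level learning. A minimal, illustrative sketch (not the paper's actual model) using Log-Sum-Exp pooling, a smooth approximation of the max:

```python
import numpy as np

def lse_pool(pixel_scores, r=5.0):
    """Log-Sum-Exp pooling: a smooth maximum over per-pixel class scores.

    pixel_scores: (H, W) array of scores for one class.
    r: sharpness; large r approaches max-pooling, small r approaches mean-pooling.
    """
    s = pixel_scores.ravel()
    return (1.0 / r) * np.log(np.mean(np.exp(r * s)))

# Toy score map where a few pixels strongly indicate the class.
scores = np.zeros((4, 4))
scores[1, 1] = scores[2, 2] = 3.0
pooled = lse_pool(scores, r=5.0)
# The pooled score lies strictly between the mean and the max,
# so a few confident pixels suffice to raise the image-level score.
assert scores.mean() < pooled < scores.max()
```

Training on the pooled score pushes weight onto the pixels most responsible for the image label, which is the behavior the abstract describes.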
Affinity Attention Graph Neural Network for Weakly Supervised Semantic Segmentation
Weakly supervised semantic segmentation is receiving great attention due to
its low human annotation cost. In this paper, we aim to tackle bounding box
supervised semantic segmentation, i.e., training accurate semantic segmentation
models using bounding box annotations as supervision. To this end, we propose
an Affinity Attention Graph Neural Network (GNN). Following previous
practices, we first generate pseudo semantic-aware seeds, which are then formed
into semantic graphs based on our newly proposed affinity Convolutional Neural
Network (CNN). Then the built graphs are input to our GNN, in which an
affinity attention layer is designed to acquire the short- and long-distance
information from soft graph edges to accurately propagate semantic labels from
the confident seeds to the unlabeled pixels. However, to guarantee the
precision of the seeds, we only adopt a limited number of confident pixel seed
labels for the GNN, which may lead to insufficient supervision for training.
To alleviate this issue, we further introduce a new loss function and a
consistency-checking mechanism to leverage the bounding box constraint, so that
more reliable guidance can be included in the model optimization. Experiments
show that our approach achieves new state-of-the-art performance on the Pascal
VOC 2012 dataset (val: 76.5%, test: 75.2%). More importantly, our approach can
be readily applied to the bounding-box-supervised instance segmentation task
and other weakly supervised semantic segmentation tasks, with state-of-the-art
or comparable performance on almost all weakly supervised tasks on the PASCAL
VOC and COCO datasets. Our source code will be available at
https://github.com/zbf1991/A2GNN.
Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI 2021)
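The propagation from confident seeds to unlabeled pixels over soft graph edges can be sketched with classic graph label propagation. This toy NumPy version is only an illustration of the general idea, not the paper's affinity attention layer; the affinity matrix here is hand-written rather than learned:

```python
import numpy as np

def propagate_labels(W, seed_labels, num_classes, n_iters=50, alpha=0.9):
    """Propagate labels from confident seeds over soft graph edges.

    W: (N, N) symmetric non-negative affinity matrix.
    seed_labels: length-N int array; -1 marks unlabeled nodes.
    Returns per-node class scores of shape (N, num_classes).
    """
    N = len(seed_labels)
    # One-hot distribution for seeded nodes; unlabeled nodes start at zero.
    Y = np.zeros((N, num_classes))
    seeded = seed_labels >= 0
    Y[seeded, seed_labels[seeded]] = 1.0
    # Row-normalize affinities into transition probabilities.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    F = Y.copy()
    for _ in range(n_iters):
        # Diffuse along edges, then pull back toward the clamped seeds.
        F = alpha * P @ F + (1 - alpha) * Y
    return F

# Toy chain graph 0-1-2-3: node 0 seeded with class 0, node 3 with class 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
F = propagate_labels(W, np.array([0, -1, -1, 1]), num_classes=2)
# Each unlabeled node inherits the label of its nearer seed.
assert F[1].argmax() == 0 and F[2].argmax() == 1
```

The affinity-attention idea in the abstract replaces the fixed transition matrix with learned edge weights, so the same diffusion can respect both short- and long-distance relations.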
Segment Any Point Cloud Sequences by Distilling Vision Foundation Models
Recent advancements in vision foundation models (VFMs) have opened up new
possibilities for versatile and efficient visual perception. In this work, we
introduce Seal, a novel framework that harnesses VFMs for segmenting diverse
automotive point cloud sequences. Seal exhibits three appealing properties: i)
Scalability: VFMs are directly distilled into point clouds, obviating the need
for annotations in either 2D or 3D during pretraining. ii) Consistency: Spatial
and temporal relationships are enforced at both the camera-to-LiDAR and
point-to-segment regularization stages, facilitating cross-modal representation
learning. iii) Generalizability: Seal enables knowledge transfer in an
off-the-shelf manner to downstream tasks involving diverse point clouds,
including those from real/synthetic, low/high-resolution, large/small-scale,
and clean/corrupted datasets. Extensive experiments conducted on eleven
different point cloud datasets showcase the effectiveness and superiority of
Seal. Notably, Seal achieves a remarkable 45.0% mIoU on nuScenes after linear
probing, surpassing random initialization by 36.9% mIoU and outperforming prior
arts by 6.1% mIoU. Moreover, Seal demonstrates significant performance gains
over existing methods across 20 different few-shot fine-tuning tasks on all
eleven tested point cloud datasets.
Comment: NeurIPS 2023 (Spotlight); 37 pages, 16 figures, 15 tables; Code at
https://github.com/youquanl/Segment-Any-Point-Clou
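Linear probing, the evaluation protocol behind the 45.0% mIoU figure, trains only a linear classifier on top of frozen pretrained features. A dependency-free sketch of the protocol (softmax regression by gradient descent on pre-extracted embeddings; the data below is synthetic, not nuScenes):

```python
import numpy as np

def linear_probe(features, labels, num_classes, lr=0.1, epochs=200):
    """Fit a linear classifier on frozen features (softmax regression).

    features: (N, D) embeddings from a frozen, pretrained backbone.
    labels: (N,) integer class labels.
    """
    N, D = features.shape
    W = np.zeros((D, num_classes))
    b = np.zeros(num_classes)
    Y = np.eye(num_classes)[labels]                  # one-hot targets
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - Y) / N                       # cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Two well-separated synthetic clusters standing in for frozen embeddings.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 3)), rng.normal(2, 0.5, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
W, b = linear_probe(X, y, num_classes=2)
acc = ((X @ W + b).argmax(axis=1) == y).mean()
assert acc == 1.0
```

Because the backbone stays frozen, probe accuracy directly measures how linearly separable the pretrained representation is — which is why it is a standard yardstick for pretraining methods like Seal.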
Learning Semantic Segmentation with Query Points Supervision on Aerial Images
Semantic segmentation is crucial in remote sensing, where high-resolution
satellite images are segmented into meaningful regions. Recent advancements in
deep learning have significantly improved satellite image segmentation.
However, most of these methods are typically trained in fully supervised
settings that require high-quality pixel-level annotations, which are expensive
and time-consuming to obtain. In this work, we present a weakly supervised
learning approach that trains semantic segmentation models relying only on
query point annotations instead of full mask labels. Our proposed approach
performs accurate semantic segmentation and improves efficiency by
significantly reducing the cost and time required for manual annotation.
Specifically, we generate superpixels and extend the query point labels into
those superpixels that group similar meaningful semantics. Then, we train
semantic segmentation models supervised with images partially labeled with the
superpixel pseudo-labels. We benchmark our weakly supervised training approach
on an aerial image dataset and different semantic segmentation architectures,
showing that we can reach competitive performance compared to fully supervised
training while reducing the annotation effort.
Comment: Paper presented at the LXCV workshop at ICCV 202
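The point-to-superpixel label extension can be illustrated with a toy example. Here a regular grid stands in for a real superpixel algorithm such as SLIC, and the helper names are hypothetical, not the authors' code:

```python
import numpy as np

def grid_superpixels(h, w, cell=4):
    """Toy stand-in for a superpixel algorithm: a regular grid of cells.

    Real pipelines would use e.g. SLIC; a grid keeps this sketch
    dependency-free while preserving the labeling logic.
    """
    rows = np.arange(h)[:, None] // cell
    cols = np.arange(w)[None, :] // cell
    return rows * ((w + cell - 1) // cell) + cols    # (h, w) superpixel ids

def extend_point_labels(sp_ids, points):
    """Spread each annotated query point's label over its whole superpixel.

    sp_ids: (h, w) superpixel id map.
    points: list of (row, col, class_label) sparse annotations.
    Returns an (h, w) pseudo-label map with -1 for unlabeled pixels.
    """
    pseudo = np.full(sp_ids.shape, -1)
    for r, c, label in points:
        pseudo[sp_ids == sp_ids[r, c]] = label
    return pseudo

sp = grid_superpixels(8, 8, cell=4)
pl = extend_point_labels(sp, [(1, 1, 0), (6, 6, 1)])
# Each point's label covers its superpixel; everything else stays unlabeled.
assert pl[0, 0] == 0 and pl[7, 7] == 1 and pl[0, 7] == -1
```

Pixels left at -1 are simply ignored by the segmentation loss, which is what "partially labeled" means in the abstract: one click per region supervises many pixels at the cost of one annotation.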
Representative discovery of structure cues for weakly-supervised image segmentation
National Research Foundation (NRF) Singapore under International Research Centre @ Singapore Funding Initiative