182 research outputs found
Human Detection and Segmentation via Multi-View Consensus
Self-supervised detection and segmentation of foreground objects aims for accuracy without annotated training data. However, existing approaches predominantly rely on restrictive assumptions about appearance and motion. For scenes with dynamic activities and camera motion, we propose a multi-camera framework in which geometric constraints are embedded in the form of multi-view consistency during training, via coarse 3D localization in a voxel grid and fine-grained offset regression. In this manner, we learn a joint distribution of proposals over multiple views. At inference time, our method operates on single RGB images. We outperform state-of-the-art techniques both on images that visually depart from those of standard benchmarks and on those of the classical Human3.6M dataset.
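
As a concrete illustration of the consensus idea, the short Python sketch below scores voxels by projecting their centers into each camera's 2D score map and averaging across views; the function names, tensor shapes, and projection convention are illustrative assumptions rather than the authors' implementation, and the fine-grained offset regression is only indicated in a closing comment.

import torch

def project(points_3d, P):
    """Project Nx3 world points with a 3x4 camera matrix P; returns Nx2 pixel coords."""
    homo = torch.cat([points_3d, torch.ones(len(points_3d), 1)], dim=1)  # Nx4 homogeneous
    uvw = homo @ P.T                                                     # Nx3
    return uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)

def voxel_consensus_scores(score_maps, cameras, voxel_centers):
    """Average each voxel's projected 2D score across all views (multi-view consensus)."""
    consensus = torch.zeros(len(voxel_centers))
    for scores, P in zip(score_maps, cameras):           # one HxW score map per view
        uv = project(voxel_centers, P).round().long()
        h, w = scores.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        consensus[ok] += scores[uv[ok, 1], uv[ok, 0]]
    return consensus / len(score_maps)

# Coarse-to-fine: argmax over consensus picks a voxel; a learned head would then
# regress a continuous offset from that voxel's center (omitted here).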
Learning Features by Watching Objects Move
This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce.
Comment: CVPR 2017
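
To make the pseudo-ground-truth idea concrete, here is a minimal Python sketch of the training setup, assuming the motion-based masks have already been computed per frame by some off-the-shelf motion segmentation method; model and loader are placeholders with the shapes indicated in the comments, a sketch of the setup rather than the paper's exact recipe.

import torch
import torch.nn as nn

def train_on_motion_pseudo_labels(model, loader, epochs=1, lr=1e-3):
    """Train a convnet to predict the motion-derived mask from a single frame."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for frame, pseudo_mask in loader:   # frame: Bx3xHxW, pseudo_mask: Bx1xHxW in {0,1}
            logits = model(frame)           # Bx1xHxW foreground logits
            loss = bce(logits, pseudo_mask.float())  # masks treated as ground truth
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model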
Lucid Data Dreaming for Video Object Segmentation
Convolutional networks reach top quality in pixel-level video object
segmentation, but require a large amount of training data (1k–100k annotated frames)
to deliver such results. We propose a new training strategy which achieves
state-of-the-art results across three evaluation datasets while using 20x–1000x
less annotated data than competing methods. Our approach is suitable for both
single and multiple object segmentation. Instead of using large training sets
in the hope of generalizing across domains, we generate in-domain training data using
the provided annotation on the first frame of each video to synthesize ("lucid
dream") plausible future video frames. In-domain per-video training data allows
us to train high-quality appearance- and motion-based models, as well as tune
the post-processing stage. This approach allows us to reach competitive results
even when training from only a single annotated frame, without ImageNet
pre-training. Our results indicate that using a larger training set is not
automatically better, and that for the video object segmentation task a smaller
training set that is closer to the target domain is more effective. This
changes the mindset regarding how many training samples and how much general
"objectness" knowledge are required for the video object segmentation task.
Comment: Accepted in the International Journal of Computer Vision (IJCV)
Co-attention Propagation Network for Zero-Shot Video Object Segmentation
Zero-shot video object segmentation (ZS-VOS) aims to segment foreground
objects in a video sequence without prior knowledge of these objects. However,
existing ZS-VOS methods often struggle to distinguish between foreground and
background or to keep track of the foreground in complex scenarios. The common
practice of introducing motion information, such as optical flow, can lead to
overreliance on optical flow estimation. To address these challenges, we
propose an encoder-decoder-based hierarchical co-attention propagation network
(HCPN) capable of tracking and segmenting objects. Specifically, our model is
built upon multiple collaborative evolutions of the parallel co-attention
module (PCM) and the cross co-attention module (CCM). PCM captures common
foreground regions among adjacent appearance and motion features, while CCM
further exploits and fuses cross-modal motion features returned by PCM. Our
method is progressively trained to achieve hierarchical spatio-temporal feature
propagation across the entire video. Experimental results demonstrate that our
HCPN outperforms all previous methods on public benchmarks, showcasing its
effectiveness for ZS-VOS.
Comment: Accepted by IEEE Transactions on Image Processing
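
As a rough sketch of the parallel co-attention flavor described above, the PyTorch module below builds an affinity matrix between flattened appearance and motion features, normalizes it in each direction, and uses it to exchange information between the two streams; it is a generic co-attention block under assumed BxCxHxW inputs, not the exact PCM/CCM formulation of HCPN.

import torch
import torch.nn as nn

class ParallelCoAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.W = nn.Linear(channels, channels, bias=False)  # learned affinity weight

    def forward(self, app, mot):
        """app, mot: BxCxHxW appearance / motion (e.g., optical-flow) features."""
        b, c, h, w = app.shape
        a = app.flatten(2).transpose(1, 2)                  # BxNxC, N = H*W
        m = mot.flatten(2).transpose(1, 2)                  # BxNxC
        affinity = self.W(a) @ m.transpose(1, 2)            # BxNxN cross-modal affinity
        att_a = torch.softmax(affinity, dim=2) @ m          # motion attended by appearance
        att_m = torch.softmax(affinity, dim=1).transpose(1, 2) @ a
        # Fuse attended features back into each stream's spatial layout.
        app_out = app + att_a.transpose(1, 2).reshape(b, c, h, w)
        mot_out = mot + att_m.transpose(1, 2).reshape(b, c, h, w)
        return app_out, mot_out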