Efficient multi-level scene understanding in videos
Automatic video parsing is a key step towards human-level dynamic
scene understanding, and a fundamental problem in computer
vision.
A core issue in video understanding is to infer multiple scene
properties of a video in an efficient and consistent manner. This
thesis addresses the problem of holistic scene understanding from
monocular videos, jointly reasoning about semantic and geometric
scene properties at multiple levels, including pixelwise
annotation of video frames, object instance segmentation in the
spatio-temporal domain, and scene-level description in terms of
scene categories and layouts.
We focus on four main issues in holistic video understanding:
1) what is the representation for consistent semantic and
geometric parsing of videos? 2) how do we integrate high-level
reasoning (e.g., objects) with pixel-wise video parsing? 3) how
can we do efficient inference for multi-level video
understanding? and 4) what is the representation learning
strategy for efficient/cost-aware scene parsing?
We discuss three multi-level video scene segmentation scenarios
based on different aspects of scene properties and efficiency
requirements. The first case addresses the problem of consistent
geometric and semantic video segmentation for outdoor scenes.
We propose a geometric scene layout representation, or a stage
scene model, to efficiently capture the dependency between the
semantic and geometric labels.
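The coupling between semantic and geometric labels captured by the stage model, and the alternating inference used to minimise the resulting energy, can be illustrated with a toy joint energy over one semantic and one geometric label, minimised by block coordinate descent. All label names and numeric costs below are made-up illustrations, not the thesis's actual model.

```python
# Toy joint energy over a semantic label s and a geometric label g:
#   E(s, g) = unary_sem[s] + unary_geo[g] + coupling[(s, g)]
# Hypothetical costs -- low coupling cost means a compatible pair.
UNARY_SEM = {"road": 0.2, "building": 0.8}
UNARY_GEO = {"ground": 0.3, "vertical": 0.6}
COUPLING = {
    ("road", "ground"): 0.0, ("road", "vertical"): 1.0,
    ("building", "ground"): 1.0, ("building", "vertical"): 0.0,
}

def energy(s, g):
    return UNARY_SEM[s] + UNARY_GEO[g] + COUPLING[(s, g)]

def alternating_minimise(s, g, rounds=5):
    """Block coordinate descent: fix g and pick the best s,
    then fix s and pick the best g, until the labels stop changing."""
    for _ in range(rounds):
        s_new = min(UNARY_SEM, key=lambda c: energy(c, g))
        g_new = min(UNARY_GEO, key=lambda c: energy(s_new, c))
        if (s_new, g_new) == (s, g):
            break
        s, g = s_new, g_new
    return s, g

print(alternating_minimise("building", "ground"))  # → ('road', 'ground')
```

Each block update can only lower the energy, so the alternation converges; like any coordinate descent on a non-convex energy, it reaches a local rather than guaranteed global minimum.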
We build a unified conditional random field for joint modeling of
the semantic class, geometric label and the stage representation,
and design an alternating inference algorithm to minimize the
resulting energy function. The second case focuses on the problem
of simultaneous pixel-level and object-level segmentation in
videos. We propose to incorporate foreground object information
into pixel labeling by jointly reasoning about the semantic labels of
supervoxels, object instance tracks and geometric relations
between objects. In order to model objects, we take an exemplar
approach based on a small set of object annotations to generate
a set of object proposals. We then design a conditional random
field framework that jointly models the supervoxel labels and
object instance segments. To scale up our method, we develop an
active inference strategy to improve the efficiency of
multi-level video parsing, which adaptively selects an
informative subset of object proposals and performs inference on
the resulting compact model.
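The active inference idea — adaptively selecting an informative subset of object proposals before running inference on the compact model — can be sketched as a greedy, uncertainty-driven selection loop. The entropy-based score and the proposal names below are illustrative assumptions, not the thesis's actual selection criterion.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def select_proposals(proposals, marginals, budget):
    """Greedily pick the proposals whose label marginals are most
    uncertain (highest entropy) -- a stand-in for the expected
    information gain that drives active inference."""
    selected = []
    remaining = list(proposals)
    while remaining and len(selected) < budget:
        best = max(remaining, key=lambda p: entropy(marginals[p]))
        selected.append(best)
        remaining.remove(best)
        # In the full method, inference would be re-run here and the
        # marginals updated before the next selection round.
    return selected

# Hypothetical marginals over a binary label for three proposals.
marginals = {
    "prop_a": [0.5, 0.5],   # maximally uncertain
    "prop_b": [0.9, 0.1],   # confident
    "prop_c": [0.6, 0.4],
}
print(select_proposals(list(marginals), marginals, budget=2))
```

Running inference only over the selected subset keeps the model compact while spending the inference budget where the labeling is least certain.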
The last case explores the problem of learning a flexible
representation for efficient scene labeling. We propose a dynamic
hierarchical model that allows us to achieve flexible trade-offs
between efficiency and accuracy. Our approach incorporates the
cost of feature computation and model inference, and optimizes
the model performance for any given test-time budget. We evaluate
all our methods on several publicly available video and image
semantic segmentation datasets, and demonstrate superior
performance in efficiency and accuracy.
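The test-time-budget trade-off in the last case can be illustrated with a toy configuration search: given per-feature computation costs and accuracy gains, choose the feature subset that maximises accuracy within a budget. All feature names, costs, and gains below are invented for illustration; the thesis's dynamic hierarchical model makes this choice jointly with inference rather than by exhaustive search.

```python
from itertools import combinations

# Hypothetical feature pool: (cost in ms, accuracy gain) -- illustrative only.
FEATURES = {
    "colour_hist":  (2.0, 0.05),
    "texture":      (5.0, 0.08),
    "optical_flow": (12.0, 0.15),
    "cnn_deep":     (40.0, 0.25),
}

def best_config(budget_ms, base_acc=0.50):
    """Exhaustively pick the feature subset maximising accuracy
    under the test-time budget (fine for a handful of features)."""
    best = (base_acc, ())
    names = list(FEATURES)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(FEATURES[f][0] for f in subset)
            if cost <= budget_ms:
                acc = base_acc + sum(FEATURES[f][1] for f in subset)
                best = max(best, (acc, subset))
    return best

print(best_config(20.0))   # tight budget: cheap features only
print(best_config(60.0))   # generous budget: all features fit
```

The point of a cost-aware model is that the selected configuration changes with the budget: under 20 ms the expensive deep feature is dropped, while a 60 ms budget admits everything.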
Keywords: Semantic video segmentation, Multi-level scene
understanding, Efficient inference, Cost-aware scene parsing
Segmentation and semantic labelling of RGBD data with convolutional neural networks and surface fitting
We present an approach for segmentation and semantic labelling of RGBD data that exploits geometrical cues together with deep learning techniques. An initial over-segmentation is performed using spectral clustering, and a set of non-uniform rational B-spline (NURBS) surfaces is fitted to the extracted segments. A convolutional neural network (CNN) then receives as input the colour and geometry data together with the surface fitting parameters. The network is made of nine convolutional stages followed by a softmax classifier and produces a vector of descriptors for each sample. In the next step, an iterative merging algorithm recombines the output of the over-segmentation into larger regions matching the various elements of the scene. Pairs of adjacent segments with high similarity according to the CNN features are candidates for merging, and the surface fitting accuracy is used to detect which pairs of segments belong to the same surface. Finally, a set of labelled segments is obtained by combining the segmentation output with the descriptors from the CNN. Experimental results show how the proposed approach outperforms state-of-the-art methods and provides accurate segmentation and labelling.
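The iterative merging stage can be sketched as a greedy loop that repeatedly merges the most similar adjacent pair of segments, judged by cosine similarity of their descriptors, until no pair exceeds a threshold. The threshold value, the descriptor-averaging rule, and the toy vectors are assumptions for illustration; the paper's additional surface-fitting gate on merges is omitted here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_segments(descriptors, adjacency, threshold=0.95):
    """Greedy iterative merging of an over-segmentation.
    descriptors: {segment_id: descriptor vector}
    adjacency: set of frozenset pairs of adjacent segment ids."""
    descriptors = dict(descriptors)
    adjacency = set(adjacency)
    while True:
        candidates = [(cosine(descriptors[a], descriptors[b]), a, b)
                      for a, b in (tuple(p) for p in adjacency)]
        if not candidates:
            break
        sim, a, b = max(candidates)
        if sim < threshold:
            break
        # Merge b into a: average the descriptors (an assumed rule)
        # and rewire b's adjacencies to a.
        descriptors[a] = [(x + y) / 2
                          for x, y in zip(descriptors[a], descriptors[b])]
        del descriptors[b]
        new_adj = set()
        for p in adjacency:
            q = {a if s == b else s for s in p}
            if len(q) == 2:
                new_adj.add(frozenset(q))
        adjacency = new_adj
    return descriptors

# Toy scene: segments 1 and 2 share a surface; segment 3 is distinct.
segs = {1: [1.0, 0.0], 2: [0.99, 0.14], 3: [0.0, 1.0]}
adj = {frozenset({1, 2}), frozenset({2, 3})}
print(sorted(merge_segments(segs, adj)))
```

The nearly parallel descriptors of segments 1 and 2 trigger a merge, while the orthogonal segment 3 survives as a separate region.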
Self-Supervised Relative Depth Learning for Urban Scene Understanding
As an agent moves through the world, the apparent motion of scene elements is
(usually) inversely proportional to their depth. It is natural for a learning
agent to associate image patterns with the magnitude of their displacement over
time: as the agent moves, faraway mountains don't move much; nearby trees move
a lot. This natural relationship between the appearance of objects and their
motion is a rich source of information about the world. In this work, we start
by training a deep network, using fully automatic supervision, to predict
relative scene depth from single images. The relative depth training images are
automatically derived from simple videos of cars moving through a scene, using
recent motion segmentation techniques, and no human-provided labels. This proxy
task of predicting relative depth from a single image induces features in the
network that result in large improvements in a set of downstream tasks
including semantic segmentation, joint road segmentation and car detection, and
monocular (absolute) depth estimation, over a network trained from scratch. The
improvement on the semantic segmentation task is greater than those produced by
any other automatically supervised methods. Moreover, for monocular depth
estimation, our unsupervised pre-training method even outperforms supervised
pre-training with ImageNet. In addition, we demonstrate benefits from learning
to predict (unsupervised) relative depth in the specific videos associated with
various downstream tasks. We adapt to the specific scenes in those tasks in an
unsupervised manner to improve performance. In summary, for semantic
segmentation, we present state-of-the-art results among methods that do not use
supervised pre-training, and we even exceed the performance of supervised
ImageNet pre-trained models for monocular depth estimation, achieving results
that are comparable with state-of-the-art methods.
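The underlying geometric cue — apparent image motion roughly inversely proportional to depth for a translating camera — can be sketched numerically: pixels with larger optical-flow magnitude are ranked nearer. The flow values and the min-max normalisation below are invented for illustration and only the relative ordering is meaningful.

```python
def relative_depth_from_flow(flow_magnitudes, eps=1e-6):
    """Map per-pixel flow magnitudes to a relative depth ordering:
    under pure camera translation, depth ~ 1 / flow magnitude,
    so the fastest-moving pixels are ranked nearest."""
    inv = [1.0 / (m + eps) for m in flow_magnitudes]
    # Normalise to [0, 1]: this is relative, not metric, depth.
    lo, hi = min(inv), max(inv)
    return [(d - lo) / (hi - lo) for d in inv]

# Hypothetical flow magnitudes (pixels/frame): a nearby tree moves a
# lot, a mid-range building less, a faraway mountain barely at all.
flows = [8.0, 2.0, 0.1]
depths = relative_depth_from_flow(flows)
assert depths[0] < depths[1] < depths[2]  # tree nearest, mountain farthest
```

Pairs of (image, relative depth) produced this way from raw video supply the automatic supervision for the proxy task, with no human labels involved.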