Layered Interpretation of Street View Images
We propose a layered street view model that encodes both depth and semantic
information in street view images for autonomous driving. Recently, stixels,
Stixmantics, and tiered scene labeling methods have been proposed to model
street view images. Our 4-layer street view model is a more compact
representation than the recently proposed Stixmantics model. The layers encode
semantic classes such as ground, pedestrians, vehicles, buildings, and sky in
addition to depths. The only input to our algorithm is a pair of stereo
images. We use a deep neural network to extract appearance features for the
semantic classes, and a simple, efficient inference algorithm to jointly
estimate both the semantic classes and the layered depth values. Our method
outperforms competing approaches on the Daimler urban scene segmentation
dataset. The algorithm is massively parallelizable, allowing a GPU
implementation with a processing speed of about 9 fps.
Comment: The paper will be presented at the 2015 Robotics: Science and Systems Conference (RSS).
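For intuition, the following is a minimal Python sketch of the kind of per-column layered inference such a model implies: each image column is split top-to-bottom into sky, building, object, and ground segments so that the summed per-pixel class scores are maximized. This is not the paper's algorithm; the fixed layer order, the reduced class set, the omission of depth estimation, and the random score maps standing in for the CNN appearance features are all assumptions for illustration.

# Minimal sketch of per-column layered labeling (assumptions noted above).
# Row boundaries r1 <= r2 <= r3 split a column into sky=[0,r1),
# building=[r1,r2), object=[r2,r3), ground=[r3,H); with prefix sums P the
# total score is f1[r1] + f2[r2] + f3[r3] + const, so running maxima give
# an O(H) exact solution per column.
import numpy as np

SKY, BUILDING, OBJECT, GROUND = 0, 1, 2, 3  # assumed top-to-bottom order

def infer_column(scores):
    """scores: (H, 4) per-pixel class scores for one image column."""
    P = np.vstack([np.zeros(4), np.cumsum(scores, axis=0)])  # (H+1, 4) prefix sums
    f1 = P[:, SKY] - P[:, BUILDING]       # contribution of boundary r1 at row r
    f2 = P[:, BUILDING] - P[:, OBJECT]    # contribution of boundary r2
    f3 = P[:, OBJECT] - P[:, GROUND]      # contribution of boundary r3
    B1 = np.maximum.accumulate(f1)        # best f1 over r1 <= r
    A2 = B1 + f2
    B2 = np.maximum.accumulate(A2)        # best f1 + f2 over r1 <= r2 <= r
    r3 = int(np.argmax(B2 + f3))          # backtrack the optimal boundaries
    r2 = int(np.argmax(A2[:r3 + 1]))
    r1 = int(np.argmax(f1[:r2 + 1]))
    return r1, r2, r3

rng = np.random.default_rng(0)
scores = rng.normal(size=(96, 4))         # random stand-in for CNN class scores
print(infer_column(scores))

Because each column is solved independently, this style of inference is what makes the model massively parallelizable across columns on a GPU.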
Towards Deeply Unified Depth-aware Panoptic Segmentation with Bi-directional Guidance Learning
Depth-aware panoptic segmentation is an emerging topic in computer vision
that combines semantic and geometric understanding for more robust scene
interpretation. Recent works pursue unified frameworks to tackle this
challenge, but most still treat it as two individual learning tasks, which
limits their potential for exploiting cross-domain information. We propose a
deeply unified framework for depth-aware panoptic segmentation that performs
joint segmentation and depth estimation, both in a per-segment manner with
identical object queries. To narrow the gap between the two tasks, we further
design a geometric query enhancement method that integrates scene geometry
into the object queries using latent representations. In addition, we propose
a bi-directional guidance learning approach that facilitates cross-task
feature learning by taking advantage of the tasks' mutual relations. Our
method sets a new state of the art for depth-aware panoptic segmentation on
both the Cityscapes-DVPS and SemKITTI-DVPS datasets. Moreover, our guidance
learning approach is shown to deliver performance improvements even with
incomplete supervision labels.
Comment: to be published in ICCV 2023
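As a rough illustration of the central idea (segmentation and depth decoded from identical object queries), here is a hypothetical PyTorch module, not the paper's architecture: one shared set of learned queries attends to backbone features and is decoded by separate heads into per-query class logits, mask logits, and depth maps. The module name, dimensions, and class count are assumptions, and the paper's geometric query enhancement and bi-directional guidance learning are omitted.

# Hypothetical sketch: shared object queries predicting segment class,
# mask, and depth per query (assumptions noted above).
import torch
import torch.nn as nn

class JointQueryHead(nn.Module):
    def __init__(self, num_queries=16, dim=64, num_classes=10):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)   # shared object queries
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cls_head = nn.Linear(dim, num_classes)     # per-segment class logits
        self.mask_proj = nn.Linear(dim, dim)            # mask embedding per query
        self.depth_proj = nn.Linear(dim, dim)           # depth embedding per query

    def forward(self, feats):
        # feats: (B, H*W, dim) flattened per-pixel features from any backbone
        B = feats.shape[0]
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)
        q, _ = self.attn(q, feats, feats)               # queries gather scene context
        logits = self.cls_head(q)                       # (B, Q, num_classes)
        masks = torch.einsum('bqd,bpd->bqp', self.mask_proj(q), feats)
        depths = torch.einsum('bqd,bpd->bqp', self.depth_proj(q), feats)
        return logits, masks, depths.relu()             # non-negative per-segment depth

feats = torch.randn(2, 32 * 32, 64)   # random stand-in for backbone features
logits, masks, depths = JointQueryHead()(feats)
print(logits.shape, masks.shape, depths.shape)

Because the mask and depth heads read the same query embeddings, whatever one task learns about a segment is immediately visible to the other, which is the sense in which such a framework is "deeply unified" rather than two parallel task branches.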
Joint Object and Part Segmentation using Deep Learned Potentials
Segmenting semantic objects from images and parsing them into their
respective semantic parts are fundamental steps towards detailed object
understanding in computer vision. In this paper, we propose a joint solution
that tackles semantic object and part segmentation simultaneously, in which
higher object-level context is provided to guide part segmentation, and more
detailed part-level localization is utilized to refine object segmentation.
Specifically, we first introduce the concept of semantic compositional parts
(SCP), in which similar semantic parts are grouped and shared among different
objects. A two-channel fully convolutional network (FCN) is then trained to
provide the SCP and object potentials at each pixel. At the same time, a
compact set of segments can also be obtained from the network's SCP
predictions. Given the potentials and the generated segments, we finally
construct an efficient fully connected conditional random field (FCRF) that
exploits long-range context to jointly predict the final object and part
labels. Extensive evaluation on three different datasets shows that our
approach mutually enhances the performance of object and part segmentation
and outperforms the current state of the art on both tasks.
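The coupling between object-level context and part labels can be shown with a toy example. The sketch below uses hypothetical label sets, random stand-ins for the FCN's object and SCP potentials, and a simple per-pixel argmax in place of the paper's FCRF inference: each pixel picks the highest-scoring (object, part) pair among those allowed by an object-part compatibility table, so object evidence constrains the part label and vice versa.

# Toy sketch of jointly scoring object and part labels (assumptions above).
import numpy as np

objects = ["background", "person", "car"]          # hypothetical label sets
parts = ["none", "head", "torso", "wheel", "body"]
# compat[o, p] is True if part p may occur on object o
compat = np.array([
    [1, 0, 0, 0, 0],   # background -> no parts
    [1, 1, 1, 0, 0],   # person -> head, torso
    [1, 0, 0, 1, 1],   # car -> wheel, body
], dtype=bool)

rng = np.random.default_rng(0)
H, W = 4, 4
obj_pot = rng.normal(size=(H, W, len(objects)))    # stand-in object potentials
part_pot = rng.normal(size=(H, W, len(parts)))     # stand-in SCP/part potentials

# joint score for every (object, part) pair at every pixel; forbidden
# pairs get -inf so they can never be selected
joint = obj_pot[..., :, None] + part_pot[..., None, :]
joint[..., ~compat] = -np.inf
flat = joint.reshape(H, W, -1).argmax(-1)
obj_lab, part_lab = np.unravel_index(flat, compat.shape)
print(obj_lab)
print(part_lab)

The compatibility table plays the role of the mutual guidance described in the abstract: a pixel with strong "wheel" evidence can only resolve to a "car" object, and a pixel labeled "person" can never carry a "wheel" part, whereas the paper's FCRF additionally propagates such constraints over long spatial ranges.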