Multi-Kernel Object Tracking
In this paper, we present an object tracking algorithm for low-frame-rate video in which objects exhibit fast motion. Conventional mean-shift tracking fails when an object's displacement is so large that its regions in consecutive frames do not overlap. We provide a solution to this problem by using multiple kernels centered at high-motion areas. In addition, we improve the convergence properties of mean-shift by integrating two likelihood terms, background and template similarities, into the iterative update mechanism. Our simulations demonstrate the effectiveness of the proposed method.
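A minimal sketch of the multi-kernel idea follows: mean-shift is started from several seeds placed at high-motion areas, on a surface that combines template and background likelihoods, and the best convergence point wins. The similarity maps, the product-style combination, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mean_shift_step(score, pos, bandwidth):
    # One mean-shift update: move pos toward the weighted centroid of the
    # likelihood surface inside a square window of radius `bandwidth`.
    y, x = pos
    h = bandwidth
    ys, xs = np.mgrid[max(0, y - h):min(score.shape[0], y + h + 1),
                      max(0, x - h):min(score.shape[1], x + h + 1)]
    w = score[ys, xs]
    if w.sum() == 0:
        return pos
    return (int(round((w * ys).sum() / w.sum())),
            int(round((w * xs).sum() / w.sum())))

def track_multi_kernel(template_sim, background_dissim, seeds,
                       bandwidth=8, iters=20):
    # Combined likelihood: high template similarity AND high dissimilarity
    # from the background model (the product is one illustrative choice).
    score = template_sim * background_dissim
    best_pos, best_val = seeds[0], -np.inf
    for pos in seeds:                      # one kernel per high-motion area
        for _ in range(iters):
            nxt = mean_shift_step(score, pos, bandwidth)
            if nxt == pos:                 # converged
                break
            pos = nxt
        if score[pos] > best_val:
            best_pos, best_val = pos, score[pos]
    return best_pos

# Toy demo: a blob far from the previous location is still found because
# one of the kernels is seeded near it.
H, W = 120, 160
template_sim = np.zeros((H, W)); template_sim[60:80, 90:110] = 1.0
background_dissim = np.ones((H, W))
print(track_multi_kernel(template_sim, background_dissim, [(10, 10), (70, 100)]))
```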
Deep Hierarchical Parsing for Semantic Segmentation
This paper proposes a learning-based approach to scene parsing inspired by the deep Recursive Context Propagation Network (RCPN). RCPN is a deep feed-forward neural network that utilizes contextual information from the entire image through bottom-up followed by top-down context propagation via random binary parse trees. This improves the feature representation of every super-pixel in the image for better classification into semantic categories. We analyze RCPN and propose two novel contributions to further improve the model. First, we analyze the learning of RCPN parameters and discover the presence of bypass error paths in the computation graph of RCPN that can hinder contextual propagation. We propose to tackle this problem by including the classification loss of the internal nodes of the random parse trees in the original RCPN loss function. Second, we use an MRF on the parse-tree nodes to model the hierarchical dependency present in the output. Both modifications provide performance boosts over the original RCPN, and the new system achieves state-of-the-art performance on the Stanford Background, SIFT-Flow, and Daimler urban datasets.
Comment: IEEE CVPR 2015
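The first contribution amounts to an extra supervised term in the training objective. Below is a minimal sketch assuming softmax classification at every parse-tree node; the weight `alpha` and the rule for labeling internal nodes (e.g., the majority label of the super-pixels they cover) are assumptions for illustration, not details from the paper.

```python
import numpy as np

def softmax_xent(logits, label):
    # Cross-entropy of one node's class logits against its target label.
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def rcpn_loss(leaf_logits, leaf_labels,
              internal_logits, internal_labels, alpha=1.0):
    # Original RCPN term: classification loss over super-pixel (leaf) nodes.
    leaf = np.mean([softmax_xent(l, y)
                    for l, y in zip(leaf_logits, leaf_labels)])
    # Added term: classification loss on the internal nodes of the random
    # parse tree, which forces gradients through the combiner units and
    # counters the bypass error paths identified in the paper.
    internal = np.mean([softmax_xent(l, y)
                        for l, y in zip(internal_logits, internal_labels)])
    return leaf + alpha * internal

# Toy example: 3 leaves and 2 internal nodes over 4 classes.
rng = np.random.default_rng(0)
print(rcpn_loss(rng.normal(size=(3, 4)), [0, 1, 2],
                rng.normal(size=(2, 4)), [1, 2]))
```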
Layered Interpretation of Street View Images
We propose a layered street view model to encode both depth and semantic information in street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation over the recently proposed stix-mantics model. Our layers encode semantic classes such as ground, pedestrians, vehicles, buildings, and sky, in addition to depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract appearance features for the semantic classes, and a simple, efficient inference algorithm to jointly estimate both the semantic classes and the layered depth values. Our method outperforms competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps.
Comment: The paper will be presented at the 2015 Robotics: Science and Systems Conference (RSS).
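To illustrate why such a layered model admits simple and highly parallel inference, here is a sketch of a per-column dynamic program, assuming per-pixel layer scores (e.g., from the network) and a fixed top-to-bottom layer ordering. It illustrates the general stixel-style decomposition, not the authors' exact algorithm.

```python
import numpy as np

def segment_column(scores):
    # scores[r, k]: log-likelihood that row r of one image column belongs to
    # layer k, with layers ordered top-to-bottom (e.g., sky, building,
    # pedestrian/vehicle, ground). The DP finds the best labeling whose
    # layer index never decreases downward, i.e., an ordered 4-way split.
    H, K = scores.shape
    best = np.empty((H, K))             # best[r, k]: best total, row r in layer k
    back = np.zeros((H, K), dtype=int)  # layer chosen for row r-1
    best[0] = scores[0]
    for r in range(1, H):
        for k in range(K):
            j = int(np.argmax(best[r - 1, :k + 1]))  # previous layer <= k
            back[r, k] = j
            best[r, k] = scores[r, k] + best[r - 1, j]
    labels = np.empty(H, dtype=int)
    labels[-1] = int(np.argmax(best[-1]))
    for r in range(H - 1, 0, -1):       # backtrack the best split
        labels[r - 1] = back[r, labels[r]]
    return labels

# Toy column: evidence for each layer concentrated in successive row bands.
rng = np.random.default_rng(0)
scores = rng.normal(0, 0.1, (12, 4))
scores[:3, 0] += 1; scores[3:6, 1] += 1; scores[6:9, 2] += 1; scores[9:, 3] += 1
print(segment_column(scores))  # -> roughly [0 0 0 1 1 1 2 2 2 3 3 3]
```

Each image column is an independent subproblem under this decomposition, so all columns can be solved in parallel, which is consistent with the massively parallel GPU implementation the abstract describes.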