Confidence Propagation through CNNs for Guided Sparse Depth Regression
Generally, convolutional neural networks (CNNs) process data on a regular
grid, e.g. data generated by ordinary cameras. Designing CNNs for sparse and
irregularly spaced input data is still an open research problem with numerous
applications in autonomous driving, robotics, and surveillance. In this paper,
we propose an algebraically-constrained normalized convolution layer for CNNs
with highly sparse input that requires fewer network parameters than related
work. We propose novel strategies for determining the
confidence from the convolution operation and propagating it to consecutive
layers. We also propose an objective function that simultaneously minimizes the
data error while maximizing the output confidence. To integrate structural
information, we also investigate fusion strategies to combine depth and RGB
information in our normalized convolution network framework. In addition, we
introduce the use of output confidence as auxiliary information to improve
the results. The capabilities of our normalized convolution network framework
are demonstrated for the problem of scene depth completion. Comprehensive
experiments are performed on the KITTI-Depth and the NYU-Depth-v2 datasets. The
results clearly demonstrate that the proposed approach achieves superior
performance while requiring only about 1-5% of the number of parameters
compared to the state-of-the-art methods. Comment: 14 pages, 14 Figures
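The core operation the abstract describes, normalized convolution, can be sketched in a few lines of NumPy: the data is weighted by its confidence, convolved with an applicability kernel, and renormalized by the convolved confidence. The propagation rule for the output confidence used here (applicability-weighted confidence mass) is one simple textbook choice, not necessarily the paper's strategy.

```python
import numpy as np

def normalized_conv(signal, conf, kernel):
    """Basic normalized convolution:
    out = conv(signal * conf, kernel) / conv(conf, kernel).
    Pure-NumPy 'valid' correlation, for illustration only."""
    kh, kw = kernel.shape
    H, W = signal.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    out_conf = np.zeros_like(out)
    num = signal * conf
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch_n = num[i:i + kh, j:j + kw]
            patch_c = conf[i:i + kh, j:j + kw]
            denom = np.sum(patch_c * kernel)
            out[i, j] = np.sum(patch_n * kernel) / max(denom, 1e-8)
            # one simple propagation choice: the normalized
            # applicability-weighted confidence mass in the window
            out_conf[i, j] = denom / np.sum(kernel)
    return out, out_conf

# toy example: a sparse depth map with roughly 30% valid pixels
rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 10.0, (8, 8))
conf = (rng.uniform(size=(8, 8)) < 0.3).astype(float)
kernel = np.ones((3, 3))  # uniform applicability function
dense, new_conf = normalized_conv(depth, conf, kernel)
```

Where at least one valid pixel falls inside the window, the output is a confidence-weighted average of the observed depths, and the propagated confidence lies in [0, 1].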
Propagating Confidences through CNNs for Sparse Data Regression
In most computer vision applications, convolutional neural networks (CNNs)
operate on dense image data generated by ordinary cameras. Designing CNNs for
sparse and irregularly spaced input data is still an open problem with numerous
applications in autonomous driving, robotics, and surveillance. To tackle this
challenging problem, we introduce an algebraically-constrained convolution
layer for CNNs with sparse input and demonstrate its capabilities for the scene
depth completion task. We propose novel strategies for determining the
confidence from the convolution operation and propagating it to consecutive
layers. Furthermore, we propose an objective function that simultaneously
minimizes the data error while maximizing the output confidence. Comprehensive
experiments are performed on the KITTI depth benchmark and the results clearly
demonstrate that the proposed approach achieves superior performance while
requiring three times fewer parameters than the state-of-the-art methods.
Moreover, our approach produces a continuous pixel-wise confidence map enabling
information fusion, state inference, and decision support. Comment: To appear in the British Machine Vision Conference (BMVC 2018).
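The combined objective, minimizing the data error while maximizing the output confidence, can be illustrated with a minimal sketch. The additive form and the weight `lam` below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def confidence_aware_loss(pred, target, conf, lam=0.1):
    """Illustrative composite objective: penalize prediction error
    while rewarding (i.e. maximizing) mean output confidence.
    The exact terms and the weight lam are assumptions."""
    data_term = np.mean(np.abs(pred - target))
    conf_term = np.mean(conf)
    return data_term - lam * conf_term

pred = np.array([1.0, 2.0, 3.0])
target = np.array([1.0, 2.5, 3.0])
conf = np.array([0.9, 0.5, 0.8])
loss = confidence_aware_loss(pred, target, conf)
```

Minimizing this quantity drives predictions toward the targets while pushing the network to emit high confidence, so the two goals are traded off through `lam`.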
Deep Motion Features for Visual Tracking
Robust visual tracking is a challenging computer vision problem, with many
real-world applications. Most existing approaches employ hand-crafted
appearance features, such as HOG or Color Names. Recently, deep RGB features
extracted from convolutional neural networks have been successfully applied for
tracking. Despite their success, these features only capture appearance
information. On the other hand, motion cues provide discriminative and
complementary information that can improve tracking performance. Contrary to
visual tracking, deep motion features have been successfully applied for action
recognition and video classification tasks. Typically, the motion features are
learned by training a CNN on optical flow images extracted from large amounts
of labeled videos.
This paper presents an investigation of the impact of deep motion features in
a tracking-by-detection framework. We further show that hand-crafted, deep RGB,
and deep motion features contain complementary information. To the best of our
knowledge, we are the first to propose fusing appearance information with deep
motion features for visual tracking. Comprehensive experiments clearly suggest
that our fusion approach with deep motion features outperforms standard methods
relying on appearance information alone. Comment: ICPR 2016. Best paper award in the "Computer Vision and Robot Vision" track.
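The fusion of hand-crafted, deep RGB, and deep motion features can be as simple as channel-wise concatenation of the per-pixel feature maps before learning the tracker's filter. A hedged sketch, with channel counts chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Hypothetical per-pixel feature maps for one search region (H x W x C).
H, W = 16, 16
hog = np.random.rand(H, W, 31)        # hand-crafted appearance (e.g. HOG)
deep_rgb = np.random.rand(H, W, 64)   # activations from an RGB CNN layer
deep_flow = np.random.rand(H, W, 64)  # activations from an optical-flow CNN

# Late fusion by channel-wise concatenation: a single discriminative
# filter is then learned over the stacked representation.
fused = np.concatenate([hog, deep_rgb, deep_flow], axis=2)
```

The complementary information argument is that errors of the three feature types are weakly correlated, so the stacked representation is more discriminative than any single one.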
Discriminative Scale Space Tracking
Accurate scale estimation of a target is a challenging research problem in
visual object tracking. Most state-of-the-art methods employ an exhaustive
scale search to estimate the target size. The exhaustive search strategy is
computationally expensive and struggles when encountered with large scale
variations. This paper investigates the problem of accurate and robust scale
estimation in a tracking-by-detection framework. We propose a novel scale
adaptive tracking approach by learning separate discriminative correlation
filters for translation and scale estimation. The explicit scale filter is
learned online using the target appearance sampled at a set of different
scales. Contrary to standard approaches, our method directly learns the
appearance change induced by variations in the target scale. Additionally, we
investigate strategies to reduce the computational cost of our approach.
Extensive experiments are performed on the OTB and the VOT2014 datasets.
Compared to the standard exhaustive scale search, our approach achieves a gain
of 2.5% in average overlap precision on the OTB dataset. Additionally, our
method is computationally efficient, operating at a 50% higher frame rate
compared to the exhaustive scale search. Our method obtains the top rank in
performance by outperforming 19 state-of-the-art trackers on OTB and 37
state-of-the-art trackers on VOT2014. Comment: To appear in TPAMI. This is the journal extension of the VOT2014-winning DSST tracking method.
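The separate scale filter can be sketched as a one-dimensional discriminative correlation filter over the scale axis, learned in closed form in the Fourier domain (MOSSE-style). Feature extraction, online updates, and windowing are omitted, so this is a schematic of the idea rather than the DSST implementation.

```python
import numpy as np

def learn_scale_filter(samples, lam=1e-2):
    """1-D discriminative correlation filter over scales.
    samples: (num_scales, feat_dim) target appearance at each scale.
    Returns the filter in the Fourier domain (closed-form solution)."""
    n = samples.shape[0]
    # desired response: Gaussian peaked at the current (middle) scale
    g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 1.0) ** 2)
    G = np.fft.fft(g)
    F = np.fft.fft(samples, axis=0)            # FFT over the scale axis
    num = np.conj(G)[:, None] * F
    den = np.sum(F * np.conj(F), axis=1) + lam  # energy + regularizer
    return num / den[:, None]

def scale_response(H, samples):
    """Correlation response over scales; its argmax is the scale estimate."""
    F = np.fft.fft(samples, axis=0)
    return np.real(np.fft.ifft(np.sum(np.conj(H) * F, axis=1)))

rng = np.random.default_rng(1)
samples = rng.standard_normal((17, 8))  # 17 scales, 8-dim features
H = learn_scale_filter(samples)
resp = scale_response(H, samples)       # peaks at the middle scale
```

Because the filter is learned on appearance sampled at a set of scales, evaluating it costs one small FFT per frame instead of an exhaustive multi-scale detection pass, which is where the speed-up over exhaustive search comes from.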
The Poisson Scale-Space: A Unified Approach to Phase-Based Image Processing in Scale-Space
In this paper we address the topics of scale-space and phase-based signal processing in a common framework. The involved linear scale-space is no longer based on the Gaussian kernel but on the Poisson kernel. The resulting scale-space representation is directly related to the monogenic signal, a 2D generalization of the analytic signal. Hence, the local phase arises as a natural concept in this framework, which results in several advanced relationships that can be used in image processing.
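For reference, the standard forms of the 2D Poisson kernel, its harmonic conjugate, and the resulting monogenic signal are given below. These are the textbook expressions from harmonic analysis, stated here as a sketch rather than quoted from the paper:

```latex
% 2D Poisson kernel at scale s > 0
p(\mathbf{x}; s) = \frac{s}{2\pi\,\left(|\mathbf{x}|^2 + s^2\right)^{3/2}},
\qquad \mathbf{x} \in \mathbb{R}^2 .

% Conjugate Poisson kernel (vector-valued; smoothed Riesz kernel)
\mathbf{q}(\mathbf{x}; s) =
\frac{\mathbf{x}}{2\pi\,\left(|\mathbf{x}|^2 + s^2\right)^{3/2}} .

% Monogenic signal: the Poisson-smoothed image together with its
% Riesz transform; local amplitude and local phase follow from it.
f_M(\mathbf{x}; s) =
\bigl( (p * f)(\mathbf{x}; s),\; (q_1 * f)(\mathbf{x}; s),\;
       (q_2 * f)(\mathbf{x}; s) \bigr) .
```

Convolution with $p$ plays the role of Gaussian smoothing, while the conjugate components supply the quadrature information from which the local phase is defined at every scale.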
Computer System for Registration and Analysis of Bioelectric Signals
We present a new method for matching a region between an input and a query image, based on the P-channel representation of pixel-based image features such as grayscale and color information, local gradient orientation, and local spatial coordinates. We introduce the concept of integral P-channels, which combines the concepts of P-channels and integral images. Using integral images, the P-channel representation of a given region is extracted with a few arithmetic operations. This enables a fast nearest-neighbor search over all possible target regions. We present extensive experimental results and show that our approach compares favorably to existing methods for region matching such as histograms or region covariance.
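The integral P-channel idea rests on the classic integral-image trick: after one pass of cumulative sums, the sum of a channel over any axis-aligned rectangle takes four lookups. A minimal single-channel sketch (the P-channel stacking itself is omitted):

```python
import numpy as np

def integral_image(channel):
    """Cumulative sum over both axes, zero-padded on top and left so
    that any rectangular region sum needs only four lookups."""
    ii = np.zeros((channel.shape[0] + 1, channel.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(channel, axis=0), axis=1)
    return ii

def region_sum(ii, top, left, bottom, right):
    """Sum of channel[top:bottom, left:right] in O(1) via 4 lookups."""
    return (ii[bottom, right] - ii[top, right]
            - ii[bottom, left] + ii[top, left])

chan = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(chan)
s = region_sum(ii, 1, 1, 3, 3)  # sum of the central 2x2 block
```

Applied per P-channel, this is what makes the exhaustive nearest-neighbor search over all candidate regions tractable: each candidate's descriptor costs a constant number of arithmetic operations regardless of region size.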