High-Performance and Tunable Stereo Reconstruction
Traditional stereo algorithms have focused their efforts on reconstruction
quality and have largely avoided prioritizing run-time performance. Robots,
on the other hand, require quick maneuverability and efficient computation to
observe their immediate environment and perform tasks within it. In this work, we
propose a high-performance and tunable stereo disparity estimation method, with
a peak frame-rate of 120Hz (VGA resolution, on a single CPU-thread), that can
potentially enable robots to quickly reconstruct their immediate surroundings
and maneuver at high-speeds. Our key contribution is a disparity estimation
algorithm that iteratively approximates the scene depth via a piece-wise planar
mesh from stereo imagery, with a fast depth validation step for semi-dense
reconstruction. The mesh is initially seeded with sparsely matched keypoints,
and is recursively tessellated and refined as needed (via a resampling stage),
to provide the desired stereo disparity accuracy. The inherent simplicity and
speed of our approach, together with the ability to tune it to a desired
reconstruction quality and runtime performance, make it a compelling solution
for applications in high-speed vehicles.
Comment: Accepted to International Conference on Robotics and Automation (ICRA) 2016; 8 pages, 5 figures
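The core idea of the abstract above, a piecewise-planar disparity mesh that is tessellated where the planar fit is too coarse, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the refinement tolerance are assumptions.

```python
import numpy as np

def barycentric_disparity(p, tri_xy, tri_d):
    """Interpolate disparity at pixel p inside one mesh triangle.

    tri_xy: 3x2 array of vertex pixel coordinates; tri_d: disparities at
    those vertices. A planar patch means disparity varies linearly here."""
    a, b, c = tri_xy
    # Solve p = a + s*(b - a) + t*(c - a) for the barycentric weights.
    M = np.column_stack([b - a, c - a])
    s, t = np.linalg.solve(M, p - a)
    w = np.array([1.0 - s - t, s, t])
    return float(w @ tri_d)

def needs_refinement(measured_d, predicted_d, tol=1.0):
    """Flag a triangle for further tessellation when the planar
    approximation disagrees with a measured disparity by more than tol."""
    return abs(measured_d - predicted_d) > tol
```

Tuning `tol` is one way such a method could trade reconstruction quality against runtime: a looser tolerance means fewer triangles and faster frames.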
Combining Stereo Disparity and Optical Flow for Basic Scene Flow
Scene flow is a description of real-world motion in 3D that contains more
information than optical flow. Because of its complexity, however, no
sufficiently robust and accurate variant exists for real-time scene flow
estimation in an automotive or commercial-vehicle context. Therefore,
many applications estimate the 2D optical flow instead. In this paper, we
examine the combination of top-performing state-of-the-art optical flow and
stereo disparity algorithms in order to achieve a basic scene flow. On the
public KITTI Scene Flow Benchmark we demonstrate the reasonable accuracy of the
combined approach and its computational speed.
Comment: Commercial Vehicle Technology Symposium (CVTS), 201
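The combination the abstract describes reduces to simple geometry per pixel: back-project the pixel using its disparity at time t, follow its optical flow, back-project again with the disparity at t+1, and subtract. A minimal sketch under an assumed pinhole calibration (the values below are illustrative, KITTI-like numbers, not taken from the paper):

```python
import numpy as np

# Hypothetical calibration values, for illustration only.
FX = FY = 721.5        # focal length in pixels
CX, CY = 609.6, 172.9  # principal point
BASELINE = 0.54        # stereo baseline in metres

def backproject(x, y, disparity):
    """Pinhole back-projection: depth from disparity, then a 3D point."""
    z = FX * BASELINE / disparity
    return np.array([(x - CX) * z / FX, (y - CY) * z / FY, z])

def basic_scene_flow(x, y, d0, flow_uv, d1):
    """3D motion of one pixel from two disparities and its optical flow.

    d0: disparity at time t; flow_uv: optical flow (u, v) from t to t+1;
    d1: disparity at the flowed position at time t+1."""
    p0 = backproject(x, y, d0)
    p1 = backproject(x + flow_uv[0], y + flow_uv[1], d1)
    return p1 - p0
```

A static point (unchanged disparity, zero flow) yields a zero scene-flow vector, while an increasing disparity means the point moved toward the camera.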
Fast Multi-frame Stereo Scene Flow with Motion Segmentation
We propose a new multi-frame method for efficiently computing scene flow
(dense depth and optical flow) and camera ego-motion for a dynamic scene
observed from a moving stereo camera rig. Our technique also segments out
moving objects from the rigid scene. In our method, we first estimate the
disparity map and the 6-DOF camera motion using stereo matching and visual
odometry. We then identify regions inconsistent with the estimated camera
motion and compute per-pixel optical flow only at these regions. This flow
proposal is fused with the camera motion-based flow proposal using fusion moves
to obtain the final optical flow and motion segmentation. This unified
framework benefits all four tasks (stereo, optical flow, visual odometry, and
motion segmentation), leading to overall higher accuracy and efficiency. Our
method is currently ranked third on the KITTI 2015 scene flow benchmark.
Furthermore, our CPU implementation runs in 2-3 seconds per frame which is 1-3
orders of magnitude faster than the top six methods. We also report a thorough
evaluation on challenging Sintel sequences with fast camera and object motion,
where our method consistently outperforms OSF [Menze and Geiger, 2015], which
is currently ranked second on the KITTI benchmark.
Comment: 15 pages. To appear at IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to the KITTI 2015 Stereo Scene Flow Benchmark in November 201
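The step of "identifying regions inconsistent with the estimated camera motion" can be sketched in a few lines: predict the flow a static scene would induce under the estimated ego-motion, then threshold the residual against the measured flow. This is a simplified illustration, not the paper's fusion-move formulation; function names and the threshold are assumptions.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced purely by camera motion on a static scene.

    Back-project every pixel with its depth, transform the 3D points by
    the camera motion (R, t), re-project, and take the displacement."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K).T           # normalised viewing rays
    pts = rays * depth[..., None]             # 3D points, camera frame at t
    pts2 = pts @ R.T + t                      # points after camera motion
    proj = pts2 @ K.T
    proj = proj[..., :2] / proj[..., 2:3]     # perspective division
    return proj - np.stack([xs, ys], axis=-1)

def motion_mask(measured_flow, predicted_flow, tau=3.0):
    """Pixels whose measured flow disagrees with the rigid prediction
    by more than tau pixels are labelled as independently moving."""
    resid = np.linalg.norm(measured_flow - predicted_flow, axis=-1)
    return resid > tau
```

Restricting per-pixel optical flow computation to the masked regions is what makes the multi-frame pipeline cheap: everywhere else the ego-motion flow already explains the image.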
SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences
While most scene flow methods use either variational optimization or a strong
rigid motion assumption, we show for the first time that scene flow can also be
estimated by dense interpolation of sparse matches. To this end, we find sparse
matches across two stereo image pairs that are detected without any prior
regularization and perform dense interpolation preserving geometric and motion
boundaries by using edge information. A few iterations of variational energy
minimization are performed to refine our results, which are thoroughly
evaluated on the KITTI benchmark and additionally compared to state-of-the-art
on MPI Sintel. For application in an automotive context, we further show that
an optional ego-motion model helps to boost performance and blends smoothly
into our approach to produce a segmentation of the scene into static and
dynamic parts.
Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 201
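The idea of densifying sparse matches while preserving boundaries via edge information can be sketched with an edge-aware inverse-distance weighting: crossing a strong image edge inflates the effective distance to a seed, so interpolation stops at boundaries. This is a toy stand-in for the paper's interpolation, with all names, the midpoint edge test, and the weight `lam` being assumptions.

```python
import numpy as np

def interpolate_sparse(h, w, seeds_xy, seeds_val, edge_map, lam=10.0):
    """Densify sparse per-seed values over an h x w grid.

    seeds_xy: N x 2 float array of seed (x, y) positions;
    seeds_val: N values at those seeds (e.g. matched disparities);
    edge_map: per-pixel edge strength. The edge strength sampled at the
    pixel-seed midpoint approximates 'does an edge lie between them'."""
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            d = np.hypot(seeds_xy[:, 0] - x, seeds_xy[:, 1] - y)
            mid = ((seeds_xy + [x, y]) / 2).astype(int)
            cost = d + lam * edge_map[mid[:, 1], mid[:, 0]]
            wgt = np.exp(-cost)
            out[y, x] = wgt @ seeds_val / wgt.sum()
    return out
```

With no edges, this degrades to plain distance weighting; with a strong edge between a pixel and a seed, that seed's influence is suppressed, which is the boundary-preserving behaviour the abstract refers to.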
Real-time self-adaptive deep stereo
Deep convolutional neural networks trained end-to-end are the
state-of-the-art methods to regress dense disparity maps from stereo pairs.
These models, however, suffer from a notable decrease in accuracy when exposed
to scenarios significantly different from the training set (e.g., real vs.
synthetic images). We argue that it is extremely unlikely to gather
enough samples to achieve effective training/tuning in any target domain, thus
making this setup impractical for many applications. Instead, we propose to
perform unsupervised and continuous online adaptation of a deep stereo network,
which allows for preserving its accuracy in any environment. However, this
strategy is extremely computationally demanding and thus prevents real-time
inference. We address this issue introducing a new lightweight, yet effective,
deep stereo architecture, Modularly ADaptive Network (MADNet) and developing a
Modular ADaptation (MAD) algorithm, which independently trains sub-portions of
the network. By deploying MADNet together with MAD we introduce the first
real-time self-adaptive deep stereo system enabling competitive performance on
heterogeneous datasets.
Comment: Accepted at CVPR 2019 as oral presentation. Code available at https://github.com/CVLAB-Unibo/Real-time-self-adaptive-deep-stere
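The scheduling idea behind Modular ADaptation, updating only one sub-portion of the network per frame so adaptation fits in the real-time budget, can be illustrated on a toy model. This is not MADNet: the two linear "modules", the squared-error proxy loss, and the learning rate are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a modular network: two independent linear "modules"
# whose outputs are summed. Real MAD picks one sub-network per frame and
# back-propagates only through it; here the per-module gradient is analytic.
modules = [rng.normal(size=3), rng.normal(size=3)]

def predict(x):
    return sum(w @ x for w in modules)

def adapt_step(x, target, lr=0.05):
    """One MAD-style step: update a single randomly chosen module.

    The other module's parameters are left untouched, so each step costs
    only a fraction of a full backward pass."""
    i = rng.integers(len(modules))
    err = predict(x) - target            # proxy for an unsupervised loss
    modules[i] -= lr * 2.0 * err * x     # gradient of (err**2) w.r.t. w_i
```

Even though each step touches only one module, repeated steps still drive the overall error down, which is the trade-off that makes online adaptation compatible with real-time inference.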