High-Performance and Tunable Stereo Reconstruction
Traditional stereo algorithms have focused their efforts on reconstruction
quality and have largely deprioritized run-time performance. Robots,
on the other hand, require quick maneuverability and efficient computation to
observe their immediate environment and perform tasks within it. In this work, we
propose a high-performance and tunable stereo disparity estimation method, with
a peak frame-rate of 120Hz (VGA resolution, on a single CPU-thread), that can
potentially enable robots to quickly reconstruct their immediate surroundings
and maneuver at high speeds. Our key contribution is a disparity estimation
algorithm that iteratively approximates the scene depth via a piece-wise planar
mesh from stereo imagery, with a fast depth validation step for semi-dense
reconstruction. The mesh is initially seeded with sparsely matched keypoints,
and is recursively tessellated and refined as needed (via a resampling stage),
to provide the desired stereo disparity accuracy. The inherent simplicity and
speed of our approach, together with the ability to tune it to a desired
reconstruction quality and runtime performance, make it a compelling solution
for applications
in high-speed vehicles.

Comment: Accepted to International Conference on Robotics and Automation (ICRA) 2016; 8 pages, 5 figures
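The abstract's core idea, planar disparity interpolation over a triangle mesh with a validation-driven refinement test, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function names and the 1-pixel tolerance are our own assumptions.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def interpolate_disparity(p, tri_xy, tri_disp):
    """Planar disparity at pixel p, interpolated from the triangle's
    three vertex disparities (the piece-wise planar mesh model)."""
    u, v, w = barycentric(p, *tri_xy)
    return u * tri_disp[0] + v * tri_disp[1] + w * tri_disp[2]

def needs_refinement(measured, predicted, tol=1.0):
    """Validation step: tessellate further when the planar model's
    disparity error exceeds tol (tolerance value is illustrative)."""
    return abs(measured - predicted) > tol
```

In the full method, triangles failing the validation test would be subdivided and re-seeded with new matches, which is how accuracy is traded against runtime.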
Fast Multi-frame Stereo Scene Flow with Motion Segmentation
We propose a new multi-frame method for efficiently computing scene flow
(dense depth and optical flow) and camera ego-motion for a dynamic scene
observed from a moving stereo camera rig. Our technique also segments out
moving objects from the rigid scene. In our method, we first estimate the
disparity map and the 6-DOF camera motion using stereo matching and visual
odometry. We then identify regions inconsistent with the estimated camera
motion and compute per-pixel optical flow only at these regions. This flow
proposal is fused with the camera motion-based flow proposal using fusion moves
to obtain the final optical flow and motion segmentation. This unified
framework benefits all four tasks (stereo, optical flow, visual odometry and
motion segmentation), leading to overall higher accuracy and efficiency. Our
method is currently ranked third on the KITTI 2015 scene flow benchmark.
Furthermore, our CPU implementation runs in 2-3 seconds per frame which is 1-3
orders of magnitude faster than the top six methods. We also report a thorough
evaluation on challenging Sintel sequences with fast camera and object motion,
where our method consistently outperforms OSF [Menze and Geiger, 2015], which
is currently ranked second on the KITTI benchmark.

Comment: 15 pages. To appear at IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to KITTI 2015 Stereo Scene Flow Benchmark in November 201
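The proposal-fusion step above can be caricatured in a few lines. The paper uses fusion moves, a binary labeling problem with a smoothness term (typically solved with QPBO); the sketch below drops the smoothness term and simply keeps, per pixel, the proposal with the lower matching cost, so it is only a simplified stand-in, not the authors' solver.

```python
import numpy as np

def fuse_flow_proposals(cost_a, cost_b, flow_a, flow_b):
    """Per-pixel fusion of two H x W x 2 flow proposals by data cost.

    Simplified stand-in for a fusion move: each pixel independently
    keeps the proposal with the lower matching cost; real fusion
    moves would also enforce smoothness across neighboring pixels.
    """
    choose_b = cost_b < cost_a                      # H x W boolean mask
    fused = np.where(choose_b[..., None], flow_b, flow_a)
    return fused, choose_b
```

In the paper's setting, one proposal comes from camera motion (rigid scene) and the other from per-pixel optical flow, so the selection mask doubles as a moving-object segmentation.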
3D RECONSTRUCTION FROM STEREO/RANGE IMAGES
3D reconstruction from stereo/range images is one of the most fundamental and extensively researched topics in computer vision. Stereo research has recently entered something of a new era, thanks to publicly available performance benchmarks such as the Middlebury data set, which allow researchers to compare their algorithms against the state of the art. This thesis investigates general stereo problems in both the two-view and multi-view stereo settings. In the two-view setting, we formulate an algorithm for the stereo matching problem with careful handling of disparity, discontinuity and occlusion. The algorithm uses a global matching stereo model based on an energy minimization framework. Experimental results on the Middlebury data set show that our algorithm is the top performer. A GPU implementation of the hierarchical BP algorithm is then proposed, which provides stereo quality similar to CPU hierarchical BP while running at real-time speed. A fast-converging BP is also proposed to address the slow convergence of general BP algorithms. Beyond two-view stereo, efficient multi-view stereo for large-scale urban reconstruction is carefully studied in this thesis. A novel approach is presented for computing depth maps from urban imagery in which large parts of surfaces are often weakly textured. Finally, a new post-processing step is proposed to enhance range images in both spatial resolution and depth precision.
DeMoN: Depth and Motion Network for Learning Monocular Stereo
In this paper we formulate structure from motion as a learning problem. We
train a convolutional network end-to-end to compute depth and camera motion
from successive, unconstrained image pairs. The architecture is composed of
multiple stacked encoder-decoder networks, the core part being an iterative
network that is able to improve its own predictions. The network estimates not
only depth and motion, but additionally surface normals, optical flow between
the images and confidence of the matching. A crucial component of the approach
is a training loss based on spatial relative differences. Compared to
traditional two-frame structure from motion methods, results are more accurate
and more robust. In contrast to the popular depth-from-single-image networks,
DeMoN learns the concept of matching and, thus, better generalizes to
structures not seen during training.

Comment: Camera ready version for CVPR 2017. Supplementary material included. Project page: http://lmb.informatik.uni-freiburg.de/people/ummenhof/depthmotionnet
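The "training loss based on spatial relative differences" can be sketched as a scale-invariant gradient penalty: compare finite differences of predicted and ground-truth depth, each normalized by local magnitude, at several pixel spacings. This is one plausible reading of the abstract, written with NumPy for illustration; the spacings and epsilon are our assumptions, not DeMoN's exact hyperparameters.

```python
import numpy as np

def scale_invariant_gradient(d, h=1):
    """Finite differences of a depth map at spacing h, normalized by
    local magnitude so the result is invariant to a global scale."""
    gx = (d[:, h:] - d[:, :-h]) / (np.abs(d[:, h:]) + np.abs(d[:, :-h]) + 1e-8)
    gy = (d[h:, :] - d[:-h, :]) / (np.abs(d[h:, :]) + np.abs(d[:-h, :]) + 1e-8)
    return gx, gy

def gradient_loss(pred, gt, spacings=(1, 2, 4)):
    """Squared distance between normalized gradients at several
    spacings; penalizes relative depth structure, not absolute scale."""
    loss = 0.0
    for h in spacings:
        pgx, pgy = scale_invariant_gradient(pred, h)
        ggx, ggy = scale_invariant_gradient(gt, h)
        loss += np.mean((pgx - ggx) ** 2) + np.mean((pgy - ggy) ** 2)
    return loss
```

Because the gradients are normalized, rescaling the predicted depth map leaves the loss essentially unchanged, which matches the scale ambiguity inherent to two-frame structure from motion.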
Disparity refinement process based on RANSAC plane fitting for machine vision applications
This paper presents a new disparity map refinement process for stereo matching algorithms, in which the refinement stage partitions the plane (mask) image and re-projects it onto the preliminary disparity images. The process refines the noisy and sparse initial disparity maps produced in weakly textured regions. The plane fitting algorithm uses Random Sample Consensus (RANSAC). Two well-known stereo matching algorithms have been tested in this framework with different filtering techniques applied at the disparity refinement stage. The framework is evaluated on three Middlebury datasets. The experimental results show that the proposed framework produces better-quality and more accurate disparity maps than state-of-the-art stereo matching algorithms. The performance evaluations are based on standard image quality metrics, i.e. structural similarity index measure, peak signal-to-noise ratio and mean square error.

Keywords: computer vision; disparity refinement; image segmentation; RANSAC; stereo.
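The RANSAC plane-fitting step at the heart of the refinement stage can be sketched as follows: fit d = a*x + b*y + c to (x, y, disparity) samples from a segment, rejecting noisy disparities as outliers. This is a generic RANSAC sketch under assumed parameter values (iteration count, inlier tolerance), not the paper's exact procedure.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=1.0, rng=None):
    """Robustly fit d = a*x + b*y + c to an N x 3 array of
    (x, y, disparity) samples; returns (a, b, c) and an inlier mask.
    iters and tol are illustrative defaults."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            model = np.linalg.solve(A, sample[:, 2])   # exact plane through 3 points
        except np.linalg.LinAlgError:
            continue                                   # degenerate (collinear) sample
        resid = np.abs(points[:, :2] @ model[:2] + model[2] - points[:, 2])
        inliers = resid < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, model
    # refine: least-squares fit over all inliers of the best hypothesis
    A = np.c_[points[best_inliers, :2], np.ones(best_inliers.sum())]
    best_model, *_ = np.linalg.lstsq(A, points[best_inliers, 2], rcond=None)
    return best_model, best_inliers
```

Evaluating the fitted plane at every pixel of the segment then fills in the sparse or noisy disparities that the initial matching left in weakly textured regions.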