Motion Cooperation: Smooth Piece-Wise Rigid Scene Flow from RGB-D Images
We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between
the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation) of the estimated rigid motions, which provides better results
than traditional sharp piecewise segmentations. Within a variational framework, the smooth segments of the scene and their corresponding rigid velocities are alternately refined
until convergence. A K-means-based segmentation is employed as an initialization, and the number of regions is subsequently adapted during the optimization process to capture any arbitrary number of independently moving objects.
We evaluate our approach with both synthetic and
real RGB-D images that contain varied and large motions. The experiments show that our method estimates the scene flow more accurately than the most recent works in the field, and at the same time provides a meaningful segmentation of the scene based on 3D motion.
Work supported by the Universidad de Málaga (Campus de Excelencia Internacional Andalucía Tech), by the Spanish Government under the grant programs FPI-MICINN 2012 and DPI2014-55826-R (co-funded by the European Regional Development Fund), and by the EU ERC grant Convex Vision (grant agreement no. 240168).
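To make the interpolation idea concrete, here is a minimal sketch, assuming per-point soft labels over K rigid motions; the function name, array shapes, and example values are illustrative and not the paper's implementation. The velocity of each point is the label-weighted combination of the velocities induced by the individual rigid motions.

```python
# Minimal sketch (assumed interface): soft piecewise-rigid velocity model.
# Each 3D point carries a non-binary label vector over K rigid motions, and its
# velocity is the label-weighted combination of the per-segment rigid velocities.
import numpy as np

def blended_velocity(points, rotations, translations, labels):
    """points: (N, 3) 3D points in the reference frame.
    rotations: (K, 3, 3) rotation matrices; translations: (K, 3).
    labels: (N, K) non-negative soft weights, each row summing to 1.
    Returns (N, 3) per-point velocities (displacement to the next frame)."""
    # Velocity induced by rigid motion k at every point: R_k p + t_k - p
    per_motion = (np.einsum('kij,nj->nki', rotations, points)
                  + translations[None] - points[:, None])
    # Soft labels interpolate between rigid motions at segment transitions
    return np.einsum('nk,nki->ni', labels, per_motion)

# Toy example: two motions (identity vs. a small translation), one point
# labeled halfway between the two segments.
pts = np.array([[0.5, 0.0, 1.0]])
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
print(blended_velocity(pts, R, t, w))  # -> [[0.05, 0.0, 0.0]]
```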
Multiframe Scene Flow with Piecewise Rigid Motion
We introduce a novel multiframe scene flow approach that jointly optimizes
the consistency of the patch appearances and their local rigid motions from
RGB-D image sequences. In contrast to the competing methods, we take advantage
of an oversegmentation of the reference frame and robust optimization
techniques. We formulate scene flow recovery as a global non-linear least
squares problem which is iteratively solved by a damped Gauss-Newton approach.
As a result, we obtain a qualitatively new level of accuracy in RGB-D based
scene flow estimation which can potentially run in real-time. Our method can
handle challenging cases with rigid, piecewise rigid, articulated and moderate
non-rigid motion, and does not rely on prior knowledge about the types of
motions and deformations. Extensive experiments on synthetic and real data show
that our method outperforms the state of the art.
Comment: International Conference on 3D Vision (3DV), Qingdao, China, October 2017
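The solver the abstract refers to is a standard damped Gauss-Newton iteration. The sketch below shows the generic scheme on a toy least-squares problem, assuming a constant damping term; the paper's scene-flow residuals and Jacobians are not reproduced here.

```python
# Minimal sketch of a damped Gauss-Newton solver for a generic
# non-linear least-squares problem (assumed, simplified interface).
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, damping=1e-3, iters=50, tol=1e-10):
    """Minimize 0.5 * ||residual(x)||^2 using regularized normal equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)                              # (M,) residual vector
        J = jacobian(x)                              # (M, N) Jacobian
        H = J.T @ J + damping * np.eye(x.size)       # damped normal equations
        step = np.linalg.solve(H, -J.T @ r)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy stand-in problem: fit y = a * exp(b * t) to samples.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(damped_gauss_newton(res, jac, x0=[1.0, 0.0]))  # approx. [2.0, -1.5]
```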
Cascaded Scene Flow Prediction using Semantic Segmentation
Given two consecutive frames from a pair of stereo cameras, 3D scene flow
methods simultaneously estimate the 3D geometry and motion of the observed
scene. Many existing approaches use superpixels for regularization, but may
predict inconsistent shapes and motions inside rigidly moving objects. We
instead assume that scenes consist of foreground objects rigidly moving in
front of a static background, and use semantic cues to produce pixel-accurate
scene flow estimates. Our cascaded classification framework accurately models
3D scenes by iteratively refining semantic segmentation masks, stereo
correspondences, 3D rigid motion estimates, and optical flow fields. We
evaluate our method on the challenging KITTI autonomous driving benchmark, and
show that accounting for the motion of segmented vehicles leads to
state-of-the-art performance.
Comment: International Conference on 3D Vision (3DV), 2017 (oral presentation)
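One link in such a cascade can be sketched generically: given stereo depth, camera intrinsics, and a rigid motion estimated for a segmented object, the optical flow induced inside the object's mask is the projected 3D displacement. The function below is an illustrative assumption, not the paper's pipeline.

```python
# Sketch (assumed interface): optical flow induced by a rigid 3D motion,
# given per-pixel depth from stereo and camera intrinsics.
import numpy as np

def flow_from_rigid_motion(depth, K, R, t, mask):
    """depth: (H, W) metric depth; K: (3, 3) intrinsics;
    R: (3, 3), t: (3,) rigid motion of the object; mask: (H, W) bool.
    Returns (H, W, 2) flow, zero outside the mask."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # homogeneous pixels
    X = depth[..., None] * (pix @ np.linalg.inv(K).T)               # back-project to 3D
    X2 = X @ R.T + t                                                # apply rigid motion
    proj = X2 @ K.T                                                 # re-project
    z = np.where(np.abs(proj[..., 2:3]) < 1e-9, 1e-9, proj[..., 2:3])
    uv2 = proj[..., :2] / z
    # Pixels with invalid depth should be excluded via the mask.
    return np.where(mask[..., None], uv2 - pix[..., :2], 0.0)
```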
Joint Optical Flow and Temporally Consistent Semantic Segmentation
The importance and demands of visual scene understanding have been steadily
increasing along with the active development of autonomous systems.
Consequently, there has been a large amount of research dedicated to semantic
segmentation and dense motion estimation. In this paper, we propose a method
for jointly estimating optical flow and temporally consistent semantic
segmentation, which closely connects these two problem domains and lets each
leverage the other. Semantic segmentation provides information on plausible physical
motion to its associated pixels, and accurate pixel-level temporal
correspondences enhance the accuracy of semantic segmentation in the temporal
domain. We demonstrate the benefits of our approach on the KITTI benchmark,
where we observe performance gains for flow and segmentation. We achieve
state-of-the-art optical flow results, and outperform all published algorithms
by a large margin on the challenging but crucial dynamic objects.
Comment: 14 pages, accepted for the CVRSUAD workshop at ECCV 2016
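A minimal sketch of the temporal-consistency idea, using a simple flow-based label propagation rather than the paper's joint optimization: class probabilities from the previous frame are warped along the estimated flow and fused with the current frame's prediction. The fusion weight alpha is an assumed heuristic.

```python
# Sketch (assumed interface): flow-based propagation of per-pixel class
# probabilities to encourage temporally consistent segmentation.
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_segmentation(prev_probs, flow, cur_probs, alpha=0.5):
    """prev_probs, cur_probs: (C, H, W) class probabilities.
    flow: (H, W, 2) backward flow (u, v) from the current to the previous frame.
    alpha: weight on the warped previous-frame evidence (assumed fusion rule)."""
    C, H, W = cur_probs.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Sample the previous frame's probabilities at the flow-displaced positions
    coords = [v + flow[..., 1], u + flow[..., 0]]   # (row, col) sampling grid
    warped = np.stack([map_coordinates(prev_probs[c], coords, order=1, mode='nearest')
                       for c in range(C)])
    fused = alpha * warped + (1.0 - alpha) * cur_probs
    return fused.argmax(axis=0)                     # per-pixel labels
```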
Simultaneous Stereo Video Deblurring and Scene Flow Estimation
Videos of outdoor scenes often show unpleasant blur effects due to the large
relative motion between the camera and the dynamic objects and to large depth
variations. Existing works typically focus on monocular video deblurring. In this
paper, we propose a novel approach to deblurring from stereo videos. In
particular, we exploit the piece-wise planar assumption about the scene and
leverage the scene flow information to deblur the image. Unlike the existing
approach [31] which used a pre-computed scene flow, we propose a single
framework to jointly estimate the scene flow and deblur the image, where the
motion cues from scene flow estimation and the blur information can reinforce
each other and produce results superior to conventional scene flow
estimation or stereo deblurring methods. We evaluate our method extensively on
two available datasets and achieve significant improvements over state-of-the-art
methods in both flow estimation and blur removal.
Comment: Accepted to IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2017
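The coupling between flow and blur can be illustrated with the usual forward blur model: the blurred intensity at a pixel is approximately the average of the sharp image sampled along that pixel's motion trajectory over the exposure. The sketch below is a generic illustration of this model, not the paper's formulation.

```python
# Sketch (assumed interface): synthesize motion blur from per-pixel flow by
# averaging the sharp image along each pixel's motion trajectory.
import numpy as np
from scipy.ndimage import map_coordinates

def blur_from_flow(sharp, flow, samples=9):
    """sharp: (H, W) grayscale image; flow: (H, W, 2) per-pixel motion (u, v)
    over the exposure. Returns the synthesized blurred image."""
    H, W = sharp.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    acc = np.zeros_like(sharp, dtype=float)
    # Sample the sharp image at fractional positions centered on each pixel
    for s in np.linspace(-0.5, 0.5, samples):
        rows = v + s * flow[..., 1]
        cols = u + s * flow[..., 0]
        acc += map_coordinates(sharp.astype(float), [rows, cols], order=1, mode='nearest')
    return acc / samples
```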