
    Fast Multi-frame Stereo Scene Flow with Motion Segmentation

    We propose a new multi-frame method for efficiently computing scene flow (dense depth and optical flow) and camera ego-motion for a dynamic scene observed from a moving stereo camera rig. Our technique also segments out moving objects from the rigid scene. In our method, we first estimate the disparity map and the 6-DOF camera motion using stereo matching and visual odometry. We then identify regions inconsistent with the estimated camera motion and compute per-pixel optical flow only in these regions. This flow proposal is fused with the camera-motion-based flow proposal using fusion moves to obtain the final optical flow and motion segmentation. This unified framework benefits all four tasks (stereo, optical flow, visual odometry, and motion segmentation), leading to overall higher accuracy and efficiency. Our method is currently ranked third on the KITTI 2015 scene flow benchmark. Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3 orders of magnitude faster than the top six methods. We also report a thorough evaluation on challenging Sintel sequences with fast camera and object motion, where our method consistently outperforms OSF [Menze and Geiger, 2015], which is currently ranked second on the KITTI benchmark.
    Comment: 15 pages. To appear at IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to the KITTI 2015 Stereo Scene Flow Benchmark in November 201
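
    As a rough illustration of the consistency check described above, the sketch below flags pixels whose measured optical flow disagrees with the flow a static scene would produce under the estimated camera motion. It assumes a pinhole intrinsic matrix K, a per-pixel depth map recovered from disparity, and an ego-motion estimate (R, t); the threshold tau and all function names are illustrative stand-ins, not the paper's components.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced purely by camera ego-motion (R, t)
    for a static scene with per-pixel depth."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Back-project pixels to 3D using the current depth map.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Apply the estimated camera motion and re-project.
    pts2 = R @ pts + t.reshape(3, 1)
    proj = K @ pts2
    proj = proj[:2] / proj[2:3]
    return (proj - pix[:2]).T.reshape(h, w, 2)

def moving_object_mask(measured_flow, depth, K, R, t, tau=3.0):
    """Pixels whose measured flow deviates from the ego-motion flow
    by more than tau pixels are flagged as (candidate) moving objects;
    per-pixel flow would then be recomputed only inside this mask."""
    residual = np.linalg.norm(
        measured_flow - rigid_flow(depth, K, R, t), axis=-1)
    return residual > tau
```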

    SO(3)-invariant asymptotic observers for dense depth field estimation based on visual data and known camera motion

    In this paper, we use the known camera motion associated with a video sequence of a static scene to estimate and incrementally refine the surrounding depth field. We exploit the SO(3)-invariance of the brightness and depth field dynamics to customize standard image processing techniques. Inspired by the Horn-Schunck method, we propose an SO(3)-invariant cost to estimate the depth field. At each time step, this provides a diffusion equation on the unit Riemannian sphere that is numerically solved to obtain a real-time depth field estimation over the entire field of view. Two asymptotic observers are derived from the governing equations of the dynamics, based respectively on optical flow and depth estimations: implemented on noisy sequences of synthetic images as well as on real data, they perform a more robust and accurate depth estimation. This approach is complementary to most methods employing state observers for range estimation, which concern only single or isolated feature points.
    Comment: Submitted
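
    The sketch below is a loose, discrete-time analogue of such an observer, assuming brightness constancy, a flat image-plane warp in place of the paper's formulation on the unit Riemannian sphere, and a known decomposition of the motion-induced flow into a depth-independent rotational field w_rot and a translational field b scaled by inverse depth. The gain lam and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def observer_step(rho, I0, I1, b, w_rot, lam=0.05):
    """One correction step of a Luenberger-style inverse-depth observer.

    rho   : current inverse-depth estimate (H, W)
    I0,I1 : consecutive grayscale frames (H, W)
    b     : translational flow per unit inverse depth (H, W, 2),
            determined by the known camera translation
    w_rot : depth-independent rotational flow (H, W, 2)
    """
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Flow predicted by the current depth estimate and the known motion.
    u = rho[..., None] * b + w_rot
    xw, yw = xs + u[..., 0], ys + u[..., 1]
    # Photometric innovation: warped next frame vs. current frame.
    I1w = map_coordinates(I1, [yw, xw], order=1, mode='nearest')
    r = I1w - I0
    # Sensitivity of the residual to rho via the image gradient.
    gx = sobel(I1, axis=1) / 8.0
    gy = sobel(I1, axis=0) / 8.0
    gxw = map_coordinates(gx, [yw, xw], order=1, mode='nearest')
    gyw = map_coordinates(gy, [yw, xw], order=1, mode='nearest')
    drdrho = gxw * b[..., 0] + gyw * b[..., 1]
    # Gradient-style correction; smoothing rho between steps would
    # play the role of the paper's diffusion term.
    return rho - lam * r * drdrho
```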

    Multi-Scale 3D Scene Flow from Binocular Stereo Sequences

    Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras, by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization – two problems commonly associated with basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
    Funding: National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108)
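
    As a point-estimate caricature of what the fused stereo-plus-flow estimate recovers (without the paper's probability distributions or multi-scale regularization), the sketch below triangulates per-pixel 3D motion from two disparity maps and the optical flow of a rectified stereo rig. The focal length fx, principal point (cx, cy), and baseline B are assumed calibration values, and square pixels (fx = fy) are assumed.

```python
import numpy as np

def scene_flow(d0, d1, flow, fx, cx, cy, B):
    """3D motion field from disparities at t (d0) and t+1 (d1)
    plus optical flow between the two left images.

    d0, d1 : disparity maps (H, W), in pixels
    flow   : optical flow (H, W, 2) from frame t to t+1
    Returns per-pixel 3D displacement (H, W, 3) in camera coordinates.
    """
    h, w = d0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    def backproject(x, y, d):
        z = fx * B / np.maximum(d, 1e-6)  # depth from disparity
        # Assumes square pixels, i.e. fx == fy.
        return np.stack([(x - cx) * z / fx, (y - cy) * z / fx, z], axis=-1)

    p0 = backproject(xs, ys, d0)
    # Follow each pixel along its flow vector and sample the disparity
    # there at t+1 (nearest-neighbor sampling for brevity).
    xt = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    p1 = backproject(xs + flow[..., 0], ys + flow[..., 1], d1[yt, xt])
    return p1 - p0
```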

    Estimation of vector fields in unconstrained and inequality constrained variational problems for segmentation and registration

    Vector fields arise in many problems of computer vision, particularly in non-rigid registration. In this paper, we develop coupled partial differential equations (PDEs) to estimate vector fields that define the deformation between objects, as well as the contour or surface that defines the segmentation of the objects. We also explore the utility of inequality constraints applied to variational problems in vision, such as the estimation of deformation fields in non-rigid registration and tracking. To solve inequality-constrained vector field estimation problems, we apply tools from the Kuhn-Tucker theorem in optimization theory. Our technique differs from recently popular joint segmentation and registration algorithms, particularly in its coupled set of PDEs derived from the same set of energy terms for registration and segmentation. We present both the theory and results that demonstrate our approach.
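
    The toy sketch below illustrates the flavor of inequality-constrained vector field estimation on a simple quadratic energy, using a pointwise projection onto the feasible set {|v| <= c} as a crude stand-in for the Kuhn-Tucker multiplier conditions the paper derives. The energy, the bound c, and all parameters are invented for illustration.

```python
import numpy as np
from scipy.ndimage import laplace

def estimate_field(v_obs, c=1.0, alpha=0.1, step=0.2, iters=200):
    """Projected gradient descent on
        E(v) = 0.5*||v - v_obs||^2 + 0.5*alpha*||grad v||^2
    subject to |v(x)| <= c at every pixel (v_obs has shape (H, W, 2))."""
    v = np.zeros_like(v_obs)
    for _ in range(iters):
        # Data term gradient plus smoothness term; the per-channel
        # Laplacian is the negative gradient of the Dirichlet energy.
        smooth = np.stack(
            [laplace(v[..., k]) for k in range(v.shape[-1])], axis=-1)
        v = v - step * ((v - v_obs) - alpha * smooth)
        # Project back onto {|v| <= c}: pixels where the constraint is
        # active correspond to nonzero Kuhn-Tucker multipliers in the
        # variational formulation.
        mag = np.linalg.norm(v, axis=-1, keepdims=True)
        v = np.where(mag > c, v * (c / np.maximum(mag, 1e-12)), v)
    return v
```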

    High-speed Video from Asynchronous Camera Array

    This paper presents a method for capturing high-speed video using an asynchronous camera array. Our method sequentially fires each sensor in a camera array with a small time offset and assembles the captured frames into a high-speed video according to their time stamps. The resulting video, however, suffers from parallax jitter caused by the viewpoint differences among the sensors in the camera array. To address this problem, we develop a dedicated novel-view synthesis algorithm that transforms the video frames as if they were captured by a single reference sensor. Specifically, for any frame from a non-reference sensor, we find the two temporally neighboring frames captured by the reference sensor. Using these three frames, we render a new frame with the same time stamp as the non-reference frame but from the viewpoint of the reference sensor: we segment the frames into superpixels, apply local content-preserving warping to map them to the new frame, and blend the warped frames with a multi-label Markov Random Field method. Our experiments show that our method can produce high-quality, high-speed video for a wide variety of scenes with large parallax, scene dynamics, and camera motion, and that it outperforms several baseline and state-of-the-art approaches.
    Comment: 10 pages, 82 figures, Published at IEEE WACV 201
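
    The sketch below illustrates only the staggered-trigger schedule and the timestamp-ordered assembly, assuming n sensors each running at fps frames per second; the parallax-compensating view synthesis (superpixel warping and MRF blending) is the paper's actual contribution and is not reproduced here. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    sensor: int       # which camera in the array captured it
    timestamp: float  # capture time in seconds

def capture_schedule(n_sensors, fps, duration):
    """Fire each sensor with an offset of 1/(n*fps), so the array as a
    whole samples time at n times the per-sensor frame rate."""
    frames = []
    period = 1.0 / fps
    offset = period / n_sensors
    for s in range(n_sensors):
        t = s * offset
        while t < duration:
            frames.append(Frame(sensor=s, timestamp=t))
            t += period
    # Sorting by timestamp yields the high-speed sequence; frames from
    # non-reference sensors still need view synthesis toward the
    # reference viewpoint to remove parallax jitter.
    return sorted(frames, key=lambda f: f.timestamp)

# Example: a 4-camera array at 30 fps yields an effective 120 fps.
video = capture_schedule(n_sensors=4, fps=30.0, duration=1.0)
```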