786 research outputs found

    A Projective Framework for Structure and Motion Recovery from Two Views of a Piecewise Planar Scene

    In this paper, we consider the problem of finding an optimal reconstruction from two views of a piecewise planar scene. We consider the general case of uncalibrated cameras, which places us in a projective framework. In this case, there is no meaningful metric information about the object space that could be used to define optimization criteria. Since the images are then the only spaces where an optimization process makes sense, each step of the reconstruction process, from the detection of planar structures to motion estimation and actual 3D reconstruction, needs a consistent image-level representation of geometric 3D structures. In our case, we need to represent camera motion and 3D points that are subject to coplanarity constraints. It is well known that camera motion between two views can be represented at the image level via the epipolar geometry (fundamental matrix). Coplanarity constraints can be expressed via a collection of 2D homographies. Unfortunately, these algebraic entities are over-parameterized in the sense that the 2D homographies must in addition obey constraints imposed by the epipolar geometry. We are thus looking for a minimal and consistent representation of motion (epipolar geometry) and structure (points + homographies) that should also be easy to use for minimizing reprojection error in a bundle adjustment manner. In this paper, we propose such a representation and use it to devise fast and accurate estimation methods for each step of the reconstruction process, including image point matching, plane detection, and optimal triangulation of planes and points on planes. We make extensive use of the quasi-linear optimization principle. A large number of experimental results show that the new methods give superior results compared to approaches that do not estimate motion and multi-planar structure simultaneously and consistently, even when the observed scene is not perfectly coplanar.
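The over-parameterization described above has a standard concrete form (the classical compatibility result from multiple-view geometry, not necessarily the paper's exact parameterization): given the fundamental matrix F with left epipole e', every homography consistent with the epipolar geometry lies in the three-parameter family H = [e']×F + e'vᵀ, and compatibility is equivalent to HᵀF being skew-symmetric. A minimal numpy sketch of this check:

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(v):
    """3x3 skew-symmetric matrix [v]_x such that [v]_x w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Build a random rank-2 fundamental matrix F.
A = rng.standard_normal((3, 3))
U, s, Vt = np.linalg.svd(A)
F = U @ np.diag([s[0], s[1], 0.0]) @ Vt

# Left epipole e': the left null vector of F (e'^T F = 0).
e = U[:, 2]

# A compatible homography from the 3-parameter family H = [e']_x F + e' v^T.
v = rng.standard_normal(3)
H = skew(e) @ F + np.outer(e, v)

# Compatibility check: H^T F must be skew-symmetric.
M = H.T @ F
print(np.allclose(M, -M.T))  # True
```

Any choice of the vector v yields a homography consistent with the same epipolar geometry, which is why points and homographies can be parameterized jointly without redundancy.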

    Multiframe Scene Flow with Piecewise Rigid Motion

    We introduce a novel multiframe scene flow approach that jointly optimizes the consistency of the patch appearances and their local rigid motions from RGB-D image sequences. In contrast to competing methods, we take advantage of an oversegmentation of the reference frame and robust optimization techniques. We formulate scene flow recovery as a global non-linear least squares problem which is iteratively solved by a damped Gauss-Newton approach. As a result, we obtain a qualitatively new level of accuracy in RGB-D based scene flow estimation which can potentially run in real time. Our method can handle challenging cases with rigid, piecewise rigid, articulated and moderately non-rigid motion, and does not rely on prior knowledge about the types of motions and deformations. Extensive experiments on synthetic and real data show that our method outperforms the state of the art. Comment: International Conference on 3D Vision (3DV), Qingdao, China, October 201
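A damped Gauss-Newton loop of the kind mentioned above can be sketched on a toy non-linear least squares problem (illustrative only; the paper's actual residuals couple patch-appearance and local rigid-motion terms):

```python
import numpy as np

def damped_gauss_newton(r, J, x0, lam=1e-3, iters=50):
    """Minimize ||r(x)||^2 by Gauss-Newton with a fixed damping term lam."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        # Damping regularizes the normal equations (Levenberg-Marquardt style),
        # which keeps steps bounded when J^T J is ill-conditioned.
        dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ rx)
        x = x + dx
    return x

# Toy problem: fit y = a * exp(b * t) to noiseless data with true a=2, b=-1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)], axis=1)

est = damped_gauss_newton(r, J, [1.0, 0.0])
print(est)  # close to [2, -1]
```

In the scene-flow setting the unknowns would be per-segment rigid transforms rather than two scalars, but the solver structure is the same.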


    Pose Invariant Gait Analysis And Reconstruction

    One of the unique advantages of human gait is that it can be perceived from a distance. A varied range of research has been undertaken within the field of gait recognition. However, in almost all circumstances subjects have been constrained to walk fronto-parallel to the camera at a single walking speed. In this thesis we show that gait has sufficient properties that allow us to exploit the structure of articulated leg motion within single-view sequences, in order to remove the unknown subject pose and reconstruct the underlying gait signature, with no prior knowledge of the camera calibration. Articulated leg motion is approximately planar, since almost all of the perceived motion is contained within a single limb-swing plane. The variation of motion out of this plane is subtle and negligible in comparison to this major plane of motion. Consequently, we can model human motion by employing a cardboard-person assumption. A subject's body and leg segments may be represented by repeating spatio-temporal motion patterns within a set of bilaterally symmetric limb planes. The static features of gait are defined as quantities that remain invariant over the full range of walking motions. In total, we have identified nine static features of articulated leg motion, corresponding to the fronto-parallel view of gait, that remain invariant to differences in the mode of subject motion. These features are hypothetically unique to each individual and can thus be used as suitable parameters for biometric identification. We develop a stratified approach to linear-trajectory gait reconstruction that uses the rigid bone lengths of planar articulated leg motion in order to reconstruct the fronto-parallel view of gait. Furthermore, subject motion commonly occurs within a fixed ground plane and is imaged by a static camera. In general, people tend to walk in straight lines with constant velocity. Imaged gait can then be split piecewise into natural segments of linear motion. If two or more sufficiently different imaged trajectories are available, the calibration of the camera can be determined. Subsequently, the total pattern of gait motion can be globally parameterised for all subjects within an image sequence. We present the details of a sparse method that computes the maximum likelihood estimate of this set of parameters, then conclude with a reconstruction error analysis corresponding to an example image sequence of subject motion.
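The piecewise splitting of an imaged trajectory into segments of linear motion can be illustrated with a simple split-on-residual scheme (a hypothetical simplification for illustration, not the thesis's maximum likelihood estimator):

```python
import numpy as np

def split_linear_segments(points, tol=1e-6):
    """Greedily split a 2-D trajectory into runs that a straight line fits within tol."""
    points = np.asarray(points, dtype=float)
    segments, start = [], 0
    for end in range(2, len(points) + 1):
        seg = points[start:end]
        # Total-least-squares line fit: the smallest singular value of the
        # centered points equals the root of the summed perpendicular residuals.
        c = seg - seg.mean(axis=0)
        resid = np.linalg.svd(c, compute_uv=False)[-1]
        if resid > tol:               # the line no longer explains the run
            segments.append((start, end - 1))
            start = end - 1           # restart at the offending point
    segments.append((start, len(points)))
    return segments

# A walk along the x-axis followed by a turn: two linear runs.
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
print(split_linear_segments(pts))  # [(0, 4), (4, 7)]
```

Each recovered segment would then supply one constant-velocity trajectory for the calibration step described above.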

    Rate-Distortion Efficient Piecewise Planar 3D Scene Representation from 2-D Images

    In any practical application of 2-D-to-3-D conversion that involves storage and transmission, representation efficiency is of undisputed importance, yet the topic has received comparatively little attention. In order to address this problem, a novel algorithm, which yields efficient 3-D representations in the rate-distortion sense, is proposed. The algorithm utilizes two views of a scene to build a mesh-based representation incrementally, via adding new vertices, while minimizing a distortion measure. The experimental results indicate that, in scenes that can be approximated by planes, the proposed algorithm is superior to the dense depth map and, in some practical situations, to block motion vector-based representations in the rate-distortion sense.
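The incremental vertex-insertion idea can be illustrated on a 1-D depth profile, a toy analogue of the paper's mesh refinement (function and variable names are hypothetical):

```python
import numpy as np

def greedy_refine(depth, max_vertices):
    """Piecewise-linear approximation: repeatedly insert the worst-approximated sample."""
    n = len(depth)
    knots = [0, n - 1]                      # start from the two endpoints
    while len(knots) < max_vertices:
        approx = np.interp(np.arange(n), knots, [depth[k] for k in knots])
        err = np.abs(depth - approx)
        worst = int(err.argmax())
        if err[worst] < 1e-12:              # distortion already (near) zero: stop
            break
        knots = sorted(set(knots + [worst]))
    return knots

# A scene of two planes in depth: flat at 1.0, then a linear ramp to 3.0.
depth = np.concatenate([np.full(50, 1.0), np.linspace(1.0, 3.0, 50)])
print(greedy_refine(depth, 8))  # [0, 50, 99]
```

On this piecewise-planar profile a single inserted vertex at the plane boundary drives the distortion to zero, which mirrors why plane-like scenes compress so well under such a representation.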

    Observability/Identifiability of Rigid Motion under Perspective Projection

    The "visual motion" problem consists of estimating the motion of an object viewed under projection. In this paper we address the feasibility of such a problem. We show that the model which defines the visual motion problem for feature points in Euclidean 3D space lacks both linear and local (weak) observability. The locally observable manifold is covered with three levels of Lie differentiation. Indeed, by imposing metric constraints on the state space, it is possible to reduce the set of indistinguishable states. We then analyze a model for visual motion estimation in terms of identification of an Exterior Differential System, with the parameters living on a topological manifold, called the "essential manifold", which includes explicitly in its definition the aforementioned metric constraints. We show that rigid motion is globally observable/identifiable under perspective projection with zero levels of Lie differentiation under some general position conditions. Such conditions hold when the viewer does not move on a quadric surface containing all the visible points.
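The metric constraint underlying the essential manifold can be checked numerically: for a calibrated rigid motion X2 = R X1 + t, corresponding perspective projections satisfy the epipolar constraint y2ᵀ E y1 = 0 with E = [t]×R. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

def skew(v):
    """3x3 skew-symmetric matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Random rigid motion: a proper rotation R (via QR) and a translation t.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q * np.sign(np.linalg.det(Q))
t = rng.standard_normal(3)

E = skew(t) @ R   # essential matrix

# Points in general position in front of the first camera.
X = rng.standard_normal((10, 3)) + np.array([0.0, 0.0, 5.0])
for Xi in X:
    y1 = Xi / Xi[2]                    # perspective projection, first view
    X2 = R @ Xi + t
    y2 = X2 / X2[2]                    # perspective projection, second view
    assert abs(y2 @ E @ y1) < 1e-9     # epipolar constraint y2^T E y1 = 0
print("epipolar constraint holds for all points")
```

The essential matrices obtained this way, normalized for scale, are exactly the points of the essential manifold on which the paper's identification problem is posed.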

    Linearized Motion Estimation for Articulated Planes


    GSLAM: Initialization-robust Monocular Visual SLAM via Global Structure-from-Motion

    Many monocular visual SLAM algorithms are derived from incremental structure-from-motion (SfM) methods. This work proposes a novel monocular SLAM method which integrates recent advances made in global SfM. In particular, we present two main contributions to visual SLAM. First, we solve the visual odometry problem by a novel rank-1 matrix factorization technique which is more robust to errors in map initialization. Second, we adopt a recent global SfM method for pose-graph optimization, which leads to a multi-stage linear formulation and enables L1 optimization for better robustness to false loops. The combination of these two approaches produces more robust reconstructions and is significantly faster (4x) than recent state-of-the-art SLAM systems. We also present a new dataset recorded with ground-truth camera motion in a Vicon motion capture room, and compare our method to prior systems on it and on established benchmark datasets. Comment: 3DV 2017. Project Page: https://frobelbest.github.io/gsla
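The paper's rank-1 factorization for visual odometry is not detailed in the abstract; as a generic illustration of fitting a rank-1 model to noisy measurements (via the Eckart-Young theorem, with hypothetical variable names):

```python
import numpy as np

rng = np.random.default_rng(2)

# A measurement matrix that is rank-1 up to noise: the outer product of two
# factor vectors, the algebraic structure a rank-1 factorization exploits.
u_true = rng.standard_normal(30)
v_true = rng.standard_normal(8)
M = np.outer(u_true, v_true) + 0.01 * rng.standard_normal((30, 8))

# Best rank-1 factorization in the least-squares sense: the top SVD pair.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
u, v = U[:, 0] * s[0], Vt[0]

rel_err = np.linalg.norm(M - np.outer(u, v)) / np.linalg.norm(M)
print(rel_err)  # small: the rank-1 model explains almost all of the data
```

The robustness claim in the paper concerns how such a factorization degrades under initialization error, which a plain SVD sketch like this does not capture.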