    Coordinate-Free Carlsson-Weinshall Duality and Relative Multi-View Geometry

    We present a coordinate-free description of Carlsson-Weinshall duality between scene points and camera pinholes and use it to derive a new characterization of primal/dual multi-view geometry. In the case of three views, a particular set of reduced trilinearities provides a novel parameterization of camera geometry that, unlike existing ones, is subject only to very simple internal constraints. These trilinearities lead to new "quasi-linear" algorithms for primal and dual structure from motion. We include some preliminary experiments with real and synthetic data.
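    In coordinates, the duality can be made concrete: after aligning a projective basis with a set of reference points, each camera reduces to a matrix in which the camera parameters and the point coordinates enter the projection symmetrically, so scene points and pinholes can be exchanged. A minimal numerical sketch of this classical coordinate-based form (an illustration using the standard reduced-camera convention, not the paper's coordinate-free formulation):

```python
# Carlsson-Weinshall duality in the classical reduced-camera coordinates:
# the image of a scene point is symmetric under swapping the point's
# coordinates with the camera's parameters.
import numpy as np

def reduced_camera(a, b, c, d):
    """Reduced camera with parameters (a, b, c, d); its centre is (1/a, 1/b, 1/c, 1/d)."""
    return np.array([[a, 0, 0, -d],
                     [0, b, 0, -d],
                     [0, 0, c, -d]], dtype=float)

rng = np.random.default_rng(0)
cam = rng.uniform(0.5, 2.0, size=4)   # camera parameters (a, b, c, d)
pt  = rng.uniform(0.5, 2.0, size=4)   # scene point       (x, y, z, w)

img1 = reduced_camera(*cam) @ pt      # point `pt` seen through pinhole `cam`
img2 = reduced_camera(*pt) @ cam      # roles of point and pinhole swapped
print(np.allclose(img1, img2))        # True: the projection is symmetric in the two
```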

    The joint image handbook

    Given multiple perspective photographs, point correspondences form the "joint image", effectively a replica of three-dimensional space distributed across its two-dimensional projections. This set can be characterized by multilinear equations over image coordinates, such as epipolar and trifocal constraints. We revisit in this paper the geometric and algebraic properties of the joint image, and address fundamental questions such as how many and which multilinearities are necessary and/or sufficient to determine camera geometry and/or image correspondences. The new theoretical results in this paper answer these questions in a very general setting and, in turn, are intended to serve as a "handbook" reference about multilinearities for practitioners.
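    The simplest of these multilinearities is the two-view epipolar constraint; the trifocal constraints generalize it to triplets of views. A minimal sketch, with random stand-in cameras and points, of how this constraint ties corresponding image coordinates together:

```python
# Two-view epipolar constraint x2^T F x1 = 0, the simplest multilinearity
# over the joint image. Cameras and points here are random stand-ins.
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

rng = np.random.default_rng(1)
P1 = rng.standard_normal((3, 4))                 # two generic projective cameras
P2 = rng.standard_normal((3, 4))

C1 = np.linalg.svd(P1)[2][-1]                    # centre of camera 1 (null vector of P1)
e2 = P2 @ C1                                     # epipole of camera 1 in image 2
F = skew(e2) @ P2 @ np.linalg.pinv(P1)           # fundamental matrix F = [e2]_x P2 P1^+

X = rng.standard_normal((4, 10))                 # random 3D points (homogeneous)
x1, x2 = P1 @ X, P2 @ X                          # their two projections
residual = np.einsum('in,ij,jn->n', x2, F, x1)   # x2^T F x1, one value per point
print(np.allclose(residual, 0))                  # True up to numerical precision
```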

    Lens Distortion Calibration Using Point Correspondences

    This paper describes a new method for lens distortion calibration using only point correspondences in multiple views, without the need to know either the 3D location of the points or the camera locations. The standard lens distortion model describes the deviations of a real camera from the ideal pinhole (projective) camera model. Given multiple views of a set of corresponding points taken by ideal pinhole cameras, there exist epipolar and trilinear constraints among pairs and triplets of these views. In practice, due to noise in feature detection and due to lens distortion, these constraints do not hold exactly, leaving a residual error. Calibration is then a search for the lens distortion parameters that minimize this error. Using simulations and experiments with real images, we explore the properties of this method. We describe its use with the standard lens distortion model (radial and decentering terms), but it could equally be used with any other parametric distortion model. Finally, we demonstrate that lens distortion calibration improves the accuracy of 3D reconstruction.
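    As a rough illustration, the radial-plus-decentering model mentioned above is commonly written in the Brown-Conrady form below (the paper's exact parameterization may differ); calibration then searches the parameters k1, k2, p1, p2 so that the corrected correspondences best satisfy the epipolar and trilinear constraints:

```python
# Standard radial (k1, k2) plus decentering (p1, p2) distortion model,
# applied to normalised image coordinates. This is an illustrative sketch,
# not necessarily the paper's exact parameterization.
import numpy as np

def distort(points, k1, k2, p1, p2):
    """Map ideal (pinhole) normalised coordinates to distorted coordinates."""
    x, y = points[:, 0], points[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([xd, yd], axis=1)

pts = np.array([[0.1, -0.2], [0.5, 0.4], [-0.3, 0.3]])
print(distort(pts, k1=-0.25, k2=0.05, p1=1e-3, p2=-5e-4))
```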

    Image-Based View Synthesis

    We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is its ability to synthesize images whose viewing position lies significantly outside the viewing cone of the example images ("view extrapolation"), without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, demonstrating that an object can be represented by a relatively small number of model images for the purpose of cheap and fast viewers that run on standard hardware.
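    The building block of such a warping chain is point transfer with a trilinear (trifocal) tensor: given a point in two example views, the tensor predicts where it falls in a third view. A minimal sketch of that transfer step with synthetic cameras (the paper's tensor chaining and non-rigid interpolation are not shown):

```python
# Trifocal-tensor point transfer. With canonical cameras P1 = [I | 0],
# P2 = [A | a4], P3 = [B | b4], the tensor slices are T_i = a_i b4^T - a4 b_i^T,
# and a point in view 3 is recovered from its projections in views 1 and 2.
import numpy as np

rng = np.random.default_rng(2)
A, a4 = rng.standard_normal((3, 3)), rng.standard_normal(3)
B, b4 = rng.standard_normal((3, 3)), rng.standard_normal(3)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([A, a4[:, None]])
P3 = np.hstack([B, b4[:, None]])

# Trifocal tensor, T[i, j, k] = A[j, i] * b4[k] - a4[j] * B[k, i]
T = np.einsum('ji,k->ijk', A, b4) - np.einsum('j,ki->ijk', a4, B)

X = rng.standard_normal(4)                    # a random scene point
x1, x2, x3 = P1 @ X, P2 @ X, P3 @ X           # its three projections

line2 = np.cross(x2, rng.standard_normal(3))  # any line through x2 (not the epipolar line)
x3_transferred = np.einsum('i,j,ijk->k', x1, line2, T)

# The transferred point matches the true projection up to scale.
print(np.allclose(np.cross(x3_transferred, x3), 0))
```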

    Direct Estimation of Motion and Extended Scene Structure from a Moving Stereo Rig

    We describe a new method for motion estimation and 3D reconstruction from stereo image sequences obtained by a stereo rig moving through a rigid world. We show that, given two stereo pairs, one can compute the motion of the stereo rig directly from the spatial and temporal image derivatives; correspondences are not required. The images from both pairs can then be combined to compute a dense depth map. The motion estimates between stereo pairs allow us to combine the depth maps from all the pairs in the sequence into an extended scene reconstruction, and we show results from a real image sequence. The motion computation is a linear least-squares problem using all the pixels in the image. Areas with little or no contrast are implicitly weighted less, so no explicit confidence measure is needed.
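    A sketch of the generic direct formulation such methods build on (the paper's exact equations and sign conventions may differ): with per-pixel depth available from the stereo pair, the brightness change constraint combined with the rigid motion field is linear in the six motion parameters, so every pixel contributes one row to a least-squares system, and low-contrast pixels are implicitly down-weighted through their small gradients:

```python
# Direct motion estimation: Ix*u + Iy*v + It = 0 per pixel, with the rigid
# motion field and known depth Z, gives one linear equation per pixel in the
# six motion parameters (tx, ty, tz, wx, wy, wz). Illustrative sketch only.
import numpy as np

def motion_rows(x, y, Z, Ix, Iy, It):
    """One least-squares row per pixel: A @ [tx ty tz wx wy wz] = b."""
    u_coef = np.stack([-1.0 / Z, np.zeros_like(x), x / Z,     # translational part of u
                       x * y, -(1 + x * x), y], axis=1)       # rotational part of u
    v_coef = np.stack([np.zeros_like(x), -1.0 / Z, y / Z,     # translational part of v
                       1 + y * y, -x * y, -x], axis=1)        # rotational part of v
    A = Ix[:, None] * u_coef + Iy[:, None] * v_coef
    b = -It
    return A, b

# Synthetic check: image derivatives consistent with a known motion are
# generated, and least squares over all pixels recovers that motion.
rng = np.random.default_rng(3)
N = 2000
x, y = rng.uniform(-0.5, 0.5, (2, N))          # normalised pixel coordinates
Z = rng.uniform(2.0, 6.0, N)                   # depth from the stereo pair
Ix, Iy = rng.standard_normal((2, N))           # spatial image gradients
true = np.array([0.1, -0.05, 0.2, 0.01, -0.02, 0.005])
u = (-true[0] + x * true[2]) / Z + x * y * true[3] - (1 + x * x) * true[4] + y * true[5]
v = (-true[1] + y * true[2]) / Z + (1 + y * y) * true[3] - x * y * true[4] - x * true[5]
It = -(Ix * u + Iy * v)                        # temporal derivative implied by the motion

A, b = motion_rows(x, y, Z, Ix, Iy, It)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(est, true))
```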

    Windowed Factorization and Merging

    In this work, an online 3D reconstruction algorithm is proposed that addresses the structure-from-motion problem for occluded and degenerate data. To deal with occlusion, the temporal consistency of the data within a limited window is used to compute local reconstructions. These local reconstructions are then transformed and merged to obtain an estimate of the 3D object shape. The algorithm is shown to accurately reconstruct a rotating and translating artificial sphere, as well as a rotating toy dinosaur filmed on video. The proposed algorithm (WIFAME) provides a versatile framework for dealing with missing data in the structure-from-motion problem.
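    The core step inside each window is a rank-3 factorization of the tracked image measurements in the style of Tomasi-Kanade (a sketch under that assumption; WIFAME's occlusion handling and the merging of local reconstructions are not shown):

```python
# Rank-3 factorization of the feature tracks within one window. `tracks` is a
# hypothetical (F, P, 2) array of P points tracked over F frames, fully observed.
import numpy as np

def factorize_window(tracks):
    """Affine factorization: returns motion M (2F x 3) and shape S (3 x P), up to an affine ambiguity."""
    F, P, _ = tracks.shape
    W = tracks.transpose(0, 2, 1).reshape(2 * F, P)   # stack x- and y-rows per frame
    W = W - W.mean(axis=1, keepdims=True)             # register to the per-frame centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S

# Tiny synthetic check: noiseless affine projections factor back exactly.
rng = np.random.default_rng(5)
S_true = rng.standard_normal((3, 40))
M_true = rng.standard_normal((16, 3))
W_true = M_true @ S_true + rng.standard_normal((16, 1))
tracks = W_true.reshape(8, 2, 40).transpose(0, 2, 1)
M, S = factorize_window(tracks)
print(np.allclose(M @ S, W_true - W_true.mean(axis=1, keepdims=True)))

# Local reconstructions from overlapping windows would then be aligned over their
# shared points (e.g. with a 3D affine fit) and merged into one model.
```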

    Direct Methods for Estimation of Structure and Motion from Three Views

    We describe a new direct method for estimating structure and motion from the image intensities of multiple views. We extend the direct methods of Horn and Weldon to three views. Adding the third view enables us to solve for the motion and compute a dense depth map of the scene directly from the image spatio-temporal derivatives, in a linear manner, without first having to find point correspondences or compute optical flow. We describe the advantages and limitations of this method, which are then verified through simulations and experiments with real images.

    On Degeneracy of Linear Reconstruction from Three Views: Linear Line Complex and Applications

    This paper investigates the linear degeneracies of projective structure estimation from point and line features across three views. We show that the rank of the linear system of equations for recovering the trilinear tensor of three views drops to 23 (instead of 26) when the scene is a Linear Line Complex (a set of lines in space that all intersect a common line) and to 21 when the scene is planar. The LLC situation is only linearly degenerate, and we show that a unique solution can be obtained once the admissibility constraints of the tensor are taken into account. The line configuration described by an LLC, far from being an obscure case, is in fact quite typical: it includes, as a particular example, a camera moving down a hallway in an office environment or down an urban street. Furthermore, an LLC situation may arise as an artifact, for instance in direct estimation from spatio-temporal derivatives of image brightness. An investigation into these degeneracies and their remedies is therefore important in practice as well.
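    The linear system in question stacks, for each point correspondence across the three views, the trilinear constraints that are linear in the 27 tensor entries. A sketch that builds this system for a generic synthetic scene and checks its rank (26 here; the paper shows it drops to 23 for an LLC and 21 for a planar scene):

```python
# Linear system for the 27 trifocal-tensor entries: each point triplet
# (x1, x2, x3) contributes the 9 equations [x2]_x (sum_i x1_i T_i) [x3]_x = 0,
# of which 4 are independent. Generic data gives rank 26.
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def triplet_rows(x1, x2, x3):
    """9 x 27 coefficient matrix of the trilinear constraints for one correspondence."""
    # coefficient of T[i, j, k] in the (m, n) entry of [x2]_x (x1_i T_i) [x3]_x
    C = np.einsum('mj,i,kn->mnijk', skew(x2), x1, skew(x3))
    return C.reshape(9, 27)

rng = np.random.default_rng(4)
Ps = [rng.standard_normal((3, 4)) for _ in range(3)]              # three generic cameras
X = np.vstack([rng.standard_normal((3, 30)), np.ones((1, 30))])   # generic scene points

rows = np.vstack([triplet_rows(Ps[0] @ Xi, Ps[1] @ Xi, Ps[2] @ Xi) for Xi in X.T])
print(np.linalg.matrix_rank(rows))   # 26: the tensor is determined up to scale
```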