
    A structure from motion inequality

    We state an elementary inequality for the structure from motion problem with m cameras and n points. This structure from motion inequality relates the dimension of space, the dimension of the camera parameter space, the number of cameras, the number of points, and global symmetry properties, and it provides a rigorous criterion for when reconstruction is impossible with probability 1. Mathematically, the inequality is based on the Frobenius theorem, a geometric incarnation of the fundamental theorem of linear algebra. The paper also provides a general mathematical formalism for the structure from motion problem, including the situation in which the points move while the camera takes the pictures. (15 pages, 22 figures)
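
    The paper's precise inequality is not reproduced in the abstract; as a hedged illustration, the classical dimension-counting condition that such inequalities refine can be written down for m calibrated cameras (six parameters each) observing n points in 3D, where the 7-dimensional group of global similarity transformations acts as a gauge symmetry:

        % Necessary counting condition for generic reconstructability:
        % each point in each image contributes 2 scalar measurements,
        % which must cover the unknowns up to the similarity gauge.
        \[
          \underbrace{2mn}_{\text{measurements}} \;\ge\;
          \underbrace{6m}_{\text{camera parameters}}
          + \underbrace{3n}_{\text{point coordinates}}
          - \underbrace{7}_{\text{similarity gauge}}
        \]

    For example, two calibrated views (m = 2) give 4n measurements against 3n + 5 unknowns, so n >= 5 points are needed, matching the classical five-point result.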

    Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition

    This paper addresses the problem of recovering relative structure, in the form of an invariant referred to as projective structure, from two views of a 3D scene. The invariant structure is computed without any prior knowledge of camera geometry or internal calibration, and with the property that perspective and orthographic projections are treated alike; that is, the system makes no assumption regarding the existence of perspective distortions in the input images.
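
    As a hedged sketch of the standard machinery such two-view methods build on (not the paper's specific invariant), the following recovers scene structure up to a 3D projective transformation from two uncalibrated views, using the textbook canonical camera pair P1 = [I | 0], P2 = [[e']_x F | e']; OpenCV and NumPy are assumed:

        # Projective reconstruction from two uncalibrated views.
        import numpy as np
        import cv2

        def projective_reconstruction(pts1, pts2):
            """pts1, pts2: (N, 2) float arrays of matched image points."""
            F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

            # Epipole e' in the second view: left null vector of F (F^T e' = 0).
            _, _, Vt = np.linalg.svd(F.T)
            e2 = Vt[-1]

            # Canonical cameras: P1 = [I | 0], P2 = [[e']_x F | e'].
            P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
            e2x = np.array([[0.0, -e2[2], e2[1]],
                            [e2[2], 0.0, -e2[0]],
                            [-e2[1], e2[0], 0.0]])
            P2 = np.hstack([e2x @ F, e2.reshape(3, 1)])

            # Triangulate; the result is defined only up to a 3D homography.
            X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4 x N homogeneous
            return X / X[3]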

    Quantitative 3D reconstruction from scanning electron microscope images based on affine camera models

    Scanning electron microscopes (SEMs) are versatile imaging devices for the micro- and nanoscale that find application in various disciplines, such as the characterization of biological, mineral, or mechanical specimens. Even though the specimen's two-dimensional (2D) properties are provided by the acquired images, detailed morphological characterization requires knowledge of the three-dimensional (3D) surface structure. To overcome this limitation, a reconstruction routine is presented that allows quantitative depth reconstruction from SEM image sequences. Based on the SEM's imaging properties, which are well described by an affine camera, the proposed algorithms rely on affine epipolar geometry, self-calibration via factorization, and triangulation from dense correspondences. To achieve the highest robustness and accuracy, different sub-models of the affine camera are applied to the SEM images, and the obtained results are compared directly with confocal laser scanning microscope (CLSM) measurements to identify the ideal parametrization and underlying algorithms. To solve the rectification problem for stereo-pair images of an affine camera, so that dense matching algorithms can be applied, existing approaches are adapted and extended to further enhance the results. The evaluations of this study establish the applicability of the affine camera models to SEM images and the accuracies that can be expected from reconstruction routines based on self-calibration and dense matching algorithms.
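
    In its simplest form, the factorization step mentioned above is the classical Tomasi-Kanade rank-3 factorization; a minimal sketch under an affine camera (the paper's specific sub-models and metric upgrade are not reproduced), NumPy assumed:

        # Affine structure from motion via rank-3 factorization.
        import numpy as np

        def affine_factorization(W):
            """W: (2m, n) matrix stacking the x- and y-coordinates of
            n points tracked through m views under an affine camera."""
            # Centering each row removes the per-view translation.
            W0 = W - W.mean(axis=1, keepdims=True)

            # An affine camera makes W0 (at most) rank 3: W0 ~ M @ S.
            U, s, Vt = np.linalg.svd(W0, full_matrices=False)
            M = U[:, :3] * np.sqrt(s[:3])          # (2m, 3) camera rows
            S = np.sqrt(s[:3])[:, None] * Vt[:3]   # (3, n) shape, up to a 3x3 affine
            return M, S

    The remaining 3x3 ambiguity is then resolved by the self-calibration constraints of whichever affine sub-model is assumed.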

    Robust Fundamental Matrix Determination without Correspondences

    Estimation of the fundamental matrix is key to many problems in computer vision, as it allows recovery of the epipolar geometry between camera images of the same scene. Estimation from feature correspondences has been widely addressed in the literature, particularly in the presence of outliers. In this paper, we propose a new robust method to estimate the fundamental matrix from two sets of features without any correspondence information. The method operates in the frequency domain, and the underlying estimation process considers all features simultaneously, yielding high robustness with respect to noise and outliers. In addition, we show that the method is well suited to widely separated viewpoints.
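
    The frequency-domain estimator itself is the paper's contribution and is not reproduced here; for contrast, a minimal sketch of the standard correspondence-based robust baseline (RANSAC, via OpenCV) that correspondence-free methods are measured against:

        # Robust fundamental matrix estimation from correspondences.
        import numpy as np
        import cv2

        def fundamental_ransac(pts1, pts2, thresh=1.0):
            """pts1, pts2: (N, 2) matched points, possibly with outliers."""
            F, inlier_mask = cv2.findFundamentalMat(
                pts1, pts2, cv2.FM_RANSAC,
                ransacReprojThreshold=thresh, confidence=0.99)
            return F, inlier_mask.ravel().astype(bool)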

    Algebraic Functions For Recognition

    In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct method that cuts through the computations of camera transformation, scene structure, and epipolar geometry. The proof of the central result may be of further interest, as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with the epipolar intersection and linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in re-projection tasks.
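
    The trilinear functions themselves are derived in the paper; as a hedged sketch of the "linear combination" baseline the experiments compare against (the Ullman-Basri result that, under orthographic projection, a point's coordinate in a novel view is a linear combination of its coordinates in two reference views):

        # Fit the view-combination coefficients by least squares.
        import numpy as np

        def fit_view_combination(x1, y1, x2, x_novel):
            """x1, y1: point coordinates in view 1; x2: x-coordinates in
            view 2; x_novel: x-coordinates in the novel view (length N)."""
            A = np.column_stack([np.ones_like(x1), x1, y1, x2])
            coeffs, *_ = np.linalg.lstsq(A, x_novel, rcond=None)
            return coeffs  # reused to re-project the model for recognition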

    Estimation of Epipolar Geometry via the Radon Transform

    One of the key problems in computer vision is the recovery of epipolar geometry constraints between different camera views. The majority of existing techniques rely on point correspondences, which are typically perturbed by mismatches and noise, limiting the accuracy of these techniques. To overcome these limitations, we propose a novel approach that estimates epipolar geometry constraints based on a statistical model in the Radon domain. The method requires no correspondences, explicit constraints on the data, or assumptions regarding the scene structure. Results are presented on both synthetic and real data that show the method's robustness to noise and outliers.
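
    The statistical model is the paper's own; as a hedged sketch of the underlying building block, the Radon transform of an image can be computed with scikit-image (assumed here):

        # Radon transform: project the image onto lines at many angles.
        import numpy as np
        from skimage.transform import radon

        def radon_signature(image, n_angles=180):
            """Returns a sinogram of shape (n_bins, n_angles)."""
            theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            return radon(image, theta=theta)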

    Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views

    We investigate the differences --- conceptual and algorithmic --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as alignment schemes for visual recognition, projective invariants are not really necessary. We then use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone; no structure or camera geometry needs to be involved.
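
    A hedged sketch of the basic affine-coordinate construction that such invariants rest on (the paper's invariant between perspective views is more subtle and is not reproduced): a point expressed in the affine frame of four non-coplanar basis points keeps the same coordinates under any affine map.

        # Affine coordinates of p relative to the basis (b0, b1, b2, b3).
        import numpy as np

        def affine_coords(p, b0, b1, b2, b3):
            """All arguments are 3-vectors; returns (alpha, beta, gamma)
            with p = b0 + alpha*(b1-b0) + beta*(b2-b0) + gamma*(b3-b0)."""
            B = np.column_stack([b1 - b0, b2 - b0, b3 - b0])
            return np.linalg.solve(B, p - b0)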

    A Bayesian approach to simultaneously recover camera pose and non-rigid shape from monocular images

    In this paper we bring the tools of the Simultaneous Localization and Map Building (SLAM) problem from a rigid to a deformable domain and use them to simultaneously recover the 3D shape of non-rigid surfaces and the sequence of poses of a moving camera. Under the assumption that the surface shape may be represented as a weighted sum of deformation modes, we show that the problem of estimating the modal weights along with the camera poses can be probabilistically formulated as a maximum a posteriori estimate and solved using an iterative least squares optimization. In addition, the probabilistic formulation we propose is very general and allows introducing different constraints without requiring any extra complexity. As a proof of concept, we show that local inextensibility constraints that prevent the surface from stretching can be easily integrated. An extensive evaluation on synthetic and real data demonstrates that our method has several advantages over current non-rigid shape from motion approaches. In particular, we show that our solution is robust to large amounts of noise and outliers, and that it does not need to track points over the whole sequence nor to use an initialization close to the ground truth.
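
    A hedged sketch of the low-rank shape model described above: with a known orthographic camera, the projection is linear in the modal weights, so one inner step of the iterative least-squares optimization reduces to a linear solve (the full MAP formulation with pose updates and inextensibility priors is not reproduced; all names below are illustrative):

        # Solve for modal weights given a fixed camera, by least squares.
        import numpy as np

        def solve_modal_weights(obs, R, mean_shape, modes):
            """obs: (2, n) observed image points; R: (2, 3) orthographic
            camera rows; mean_shape: (3, n); modes: (k, 3, n) modes."""
            k = modes.shape[0]
            residual = (obs - R @ mean_shape).ravel()            # (2n,)
            A = np.stack([(R @ modes[i]).ravel() for i in range(k)], axis=1)
            w, *_ = np.linalg.lstsq(A, residual, rcond=None)
            return w  # weights of the deformation modes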