
    Camera calibration from a translation + planar motion

    This paper addresses the problem of camera calibration by exploiting image invariants under camera/object rotation. A novel translation + planar motion is studied here. The 3 × 3 homography mapping corresponding points before and after the motion is exploited to obtain image invariants under perspective projection. The homography is found to trace out a "rotation conic" as the rotation angle varies. Apart from the imaged circular points, this conic can also be exploited to find the vanishing point of the rotation axis, which provides extra constraints for camera calibration. A square calibration pattern, which is invariant under rotation about its center by multiples of π/2 radians, is introduced as a special instantiation of the translation + planar motion. Experiments on synthetic and real data show good precision in the calibration results.
    Postprint. The 8th IASTED International Conference on Signal and Image Processing (SIP 2006), Honolulu, HI, 14-16 August 2006. In Proceedings of the 8th IASTED International Conference on Signal and Image Processing, 2006, p. 195-20
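    The calibration step this abstract alludes to, turning imaged circular points into intrinsic parameters, follows the standard image-of-the-absolute-conic route. The sketch below is a minimal illustration rather than the authors' implementation: it assumes the imaged circular points have already been extracted (e.g. from the rotation conic), stacks the resulting linear constraints on the image of the absolute conic, and recovers K by a Cholesky factorization.

```python
import numpy as np

def calibrate_from_circular_points(circular_points):
    """Recover the intrinsic matrix K from imaged circular points.

    Each imaged circular point I (a complex 3-vector) lies on the image of
    the absolute conic w = K^{-T} K^{-1}, i.e. I^T w I = 0.  Splitting that
    complex equation into real and imaginary parts gives two linear
    constraints on the six entries of the symmetric matrix w.
    """
    rows = []
    for I in circular_points:
        x, y, z = I
        # coefficients of [w11, w12, w22, w13, w23, w33] in I^T w I = 0
        v = np.array([x * x, 2 * x * y, y * y, 2 * x * z, 2 * y * z, z * z])
        rows.append(v.real)
        rows.append(v.imag)
    A = np.asarray(rows)
    # the smallest right singular vector of A gives w up to scale
    _, _, Vt = np.linalg.svd(A)
    w11, w12, w22, w13, w23, w33 = Vt[-1]
    W = np.array([[w11, w12, w13],
                  [w12, w22, w23],
                  [w13, w23, w33]])
    if np.trace(W) < 0:      # fix the overall sign so W is positive definite
        W = -W
    # w = A^T A with A = K^{-1} upper triangular, so Cholesky (W = L L^T)
    # gives A = L^T and hence K = (L^T)^{-1}
    L = np.linalg.cholesky(W)
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```

    Each imaged circular point contributes two real equations, so circular points from several independent motions are needed before the five degrees of freedom of the conic (and hence K) are fully constrained.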

    Trifocal Relative Pose from Lines at Points and its Efficient Solution

    We present a new minimal problem for relative pose estimation that mixes point features with lines incident at points observed in three views, together with an efficient homotopy continuation solver. We demonstrate the generality of the approach by analyzing and solving an additional problem with mixed point and line correspondences in three views. The minimal problems include correspondences of (i) three points and one line and (ii) three points and two lines through two of the points, which is reported and analyzed here for the first time. These problems are difficult to solve, as they have 216 and, as shown here, 312 solutions, but they cover important practical situations where line and point features appear together, e.g., in urban scenes or when observing curves. We demonstrate that even such difficult problems can be solved robustly using a suitable homotopy continuation technique, and we provide an implementation optimized for minimal problems that can be integrated into engineering applications. Our simulated and real experiments demonstrate our solvers in the camera geometry computation task in structure from motion. We show that the new solvers allow reconstructing challenging scenes where the standard two-view initialization of structure from motion fails.
    Comment: This material is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while most authors were in residence at Brown University's Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, R
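    Homotopy continuation, the numerical machinery behind the solver described above, tracks the known roots of an easy start system to the roots of the target polynomial system. The toy tracker below is only a sketch of that idea under simplifying assumptions (fixed step size, plain Newton correction, user-supplied Jacobians); production solvers for 216- and 312-solution problems add random complex scaling, adaptive step control, and parameter homotopies.

```python
import numpy as np

def track_path(f, g, Jf, Jg, x0, steps=200, newton_iters=5):
    """Toy predictor-corrector tracker for H(x, t) = (1 - t) g(x) + t f(x).

    f, g   : callables returning the target / start system residual vectors
    Jf, Jg : callables returning their Jacobian matrices
    x0     : a known (possibly complex) root of the start system g
    Returns an approximate root of the target system f.
    """
    x = np.asarray(x0, dtype=complex)
    dt = 1.0 / steps
    t = 0.0
    for _ in range(steps):
        # Euler predictor: dx/dt = -(dH/dx)^{-1} dH/dt, with dH/dt = f(x) - g(x)
        H_x = (1 - t) * Jg(x) + t * Jf(x)
        x = x + dt * np.linalg.solve(H_x, -(f(x) - g(x)))
        t += dt
        # Newton corrector at the new value of t
        for _ in range(newton_iters):
            H = (1 - t) * g(x) + t * f(x)
            H_x = (1 - t) * Jg(x) + t * Jf(x)
            x = x - np.linalg.solve(H_x, H)
    return x

# Example: continue the root x = 1 of x^2 - 1 = 0 to a root of x^2 - 2 = 0.
f = lambda x: np.array([x[0] ** 2 - 2])
Jf = lambda x: np.array([[2 * x[0]]])
g = lambda x: np.array([x[0] ** 2 - 1])
Jg = lambda x: np.array([[2 * x[0]]])
print(track_path(f, g, Jf, Jg, [1.0]))   # approximately [1.41421356]
```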

    QUARCH: A New Quasi-Affine Reconstruction Stratum From Vague Relative Camera Orientation Knowledge

    We present a new quasi-affine reconstruction of a scene and its application to camera self-calibration. We refer to this reconstruction as QUARCH (QUasi-Affine Reconstruction with respect to Camera centers and the Hodographs of horopters). A QUARCH can be obtained by solving a semidefinite programming problem when (i) the images have been captured by a moving camera with constant intrinsic parameters, and (ii) a vague knowledge of the relative orientation (under or over 120°) between camera pairs is available. The resulting reconstruction comes close enough to an affine one, allowing an easy upgrade of the QUARCH to its affine and metric counterparts. We also present a constrained Levenberg-Marquardt method for nonlinear optimization subject to Linear Matrix Inequality (LMI) constraints, so as to ensure that the QUARCH LMIs remain satisfied during optimization. Experiments with synthetic and real data show the benefits of QUARCH in reliably obtaining a metric reconstruction.
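    The computational pattern described here, an optimization over matrices subject to LMI constraints, is exactly what off-the-shelf SDP solvers handle. The snippet below is a generic sketch using cvxpy with random placeholder data; the actual QUARCH LMIs, built from the camera pairs and the coarse relative-orientation knowledge, are those derived in the paper and are not reproduced here.

```python
import cvxpy as cp
import numpy as np

# Generic LMI-constrained SDP in the pattern the abstract describes: find a
# positive semidefinite matrix satisfying linear constraints.  The data below
# are random placeholders standing in for the real QUARCH constraints.
n = 4
rng = np.random.default_rng(0)
A = [rng.standard_normal((n, n)) for _ in range(3)]
A = [(M + M.T) / 2 for M in A]                 # symmetric constraint data
X0 = rng.standard_normal((n, n))
X0 = X0 @ X0.T                                 # a PSD point, used only to make
b = np.array([np.trace(M @ X0) for M in A])    # the toy constraints feasible

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0]                         # LMI: X is positive semidefinite
constraints += [cp.trace(A[i] @ X) == b[i] for i in range(3)]
prob = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
prob.solve()                                   # hands the SDP to a conic solver
print("optimal value:", prob.value)
```

    The constrained Levenberg-Marquardt refinement mentioned in the abstract is a separate, nonlinear step that keeps iterates inside the feasible set defined by these LMIs.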

    Model-free Consensus Maximization for Non-Rigid Shapes

    Many computer vision methods use consensus maximization to relate measurements containing outliers to the correct transformation model. In the context of rigid shapes, this is typically done with Random Sample Consensus (RANSAC), by estimating an analytical model that agrees with the largest number of measurements (inliers). However, small-parameter models may not always be available. In this paper, we formulate model-free consensus maximization as an Integer Program on a graph using 'rules' on measurements. We then provide a method to solve it optimally using the Branch and Bound (BnB) paradigm. We focus its application on non-rigid shapes, where we apply the method to remove outlier 3D correspondences and achieve performance superior to the state of the art. Our method works with outlier ratios as high as 80%. We further derive a similar formulation for 3D template-to-image matching, achieving similar or better performance compared to the state of the art.
    Comment: ECCV1
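    The integer-programming formulation can be illustrated with a tiny stand-in model: one binary "inlier" variable per measurement, with each violated consistency rule forbidding the offending measurements from being inliers simultaneously. This is only a hypothetical pairwise-rule reduction solved by an off-the-shelf branch-and-bound MILP solver (CBC via PuLP), not the paper's exact formulation or its tailored BnB scheme.

```python
import pulp

def max_consensus(n, conflicting_pairs):
    """Model-free consensus maximization as a small integer program.

    n                : number of measurements (e.g. 3D correspondences)
    conflicting_pairs: pairs (i, j) that violate a consistency 'rule'
                       (e.g. inconsistent pairwise distances), so they
                       cannot both be inliers.
    Returns the indices of the largest mutually consistent subset found.
    """
    prob = pulp.LpProblem("consensus_maximization", pulp.LpMaximize)
    z = [pulp.LpVariable(f"z_{i}", cat="Binary") for i in range(n)]
    prob += pulp.lpSum(z)                      # maximize the number of inliers
    for i, j in conflicting_pairs:
        prob += z[i] + z[j] <= 1               # a violated rule excludes the pair
    prob.solve(pulp.PULP_CBC_CMD(msg=False))   # CBC runs branch and bound
    return [i for i in range(n) if z[i].value() == 1]

# Tiny example: measurements 0-4, where 3 and 4 conflict with all the others.
print(max_consensus(5, [(0, 3), (1, 3), (2, 3), (0, 4), (1, 4), (2, 4)]))
```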

    Estimating intrinsic camera parameters from the fundamental matrix using an evolutionary approach

    Calibration is the process of computing the intrinsic (internal) camera parameters from a series of images. Normally, calibration is done by placing predefined targets in the scene or by using special camera motions, such as rotations. If neither of these restrictions holds, the calibration process is called autocalibration, because it is done automatically, without user intervention. Using autocalibration, it is possible to create 3D reconstructions from a sequence of uncalibrated images without relying on a formal camera calibration process. The fundamental matrix describes the epipolar geometry between a pair of images, and it can be calculated directly from 2D image correspondences. We show that autocalibration from a set of fundamental matrices can be cast as the global minimization of a cost function. We use a stochastic optimization approach taken from the field of evolutionary computing to solve this problem. A number of experiments are performed on published and standardized data sets that show the effectiveness of the approach. The basic assumption of this method is that the internal (intrinsic) camera parameters remain constant throughout the image sequence, that is, the images are taken by the same camera without varying quantities such as the focal length. We show that for the autocalibration of the focal length and aspect ratio, the evolutionary method achieves results comparable to published methods, but is simpler to implement and efficient enough to handle larger image sequences.
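    As an illustration of the cost-function-plus-stochastic-search pattern the abstract describes, the sketch below scores candidate intrinsics by how closely the derived essential matrices E = K^T F K satisfy the equal-singular-value property (a Mendonça-Cipolla style criterion, used here as a stand-in for the paper's cost function) and searches over focal length and aspect ratio with SciPy's differential evolution. The principal point (cx, cy) is assumed known, and the exact cost and evolutionary operators of the paper may differ.

```python
import numpy as np
from scipy.optimize import differential_evolution

def self_calibration_cost(params, fundamentals, cx, cy):
    """For the correct K, the essential matrix E = K^T F K has two equal
    non-zero singular values; penalize their relative difference."""
    f, aspect = params
    K = np.array([[f,   0.0,        cx],
                  [0.0, aspect * f, cy],
                  [0.0, 0.0,        1.0]])
    cost = 0.0
    for F in fundamentals:
        E = K.T @ F @ K
        s = np.linalg.svd(E, compute_uv=False)   # singular values, descending
        cost += (s[0] - s[1]) / s[1]
    return cost

def autocalibrate(fundamentals, cx, cy):
    # global stochastic search over focal length and aspect ratio
    result = differential_evolution(
        self_calibration_cost,
        bounds=[(300.0, 3000.0), (0.5, 2.0)],    # assumed search ranges (pixels, ratio)
        args=(fundamentals, cx, cy),
        seed=0,
    )
    return result.x   # estimated (focal length, aspect ratio)
```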

    Methods for Structure from Motion


    Robust ego-localization using monocular visual odometry
