21 research outputs found

    Metric 3D-reconstruction from Unordered and Uncalibrated Image Collections

    Get PDF
    In this thesis the problem of Structure from Motion (SfM) for uncalibrated and unordered image collections is considered. The proposed framework is an adaptation of the framework for calibrated SfM proposed by Olsson-Enqvist (2011) to the uncalibrated case. Olsson-Enqvist's framework consists of three main steps: pairwise relative rotation estimation, rotation averaging, and geometry estimation with known rotations. For this to work with uncalibrated images we also perform auto-calibration during the first step. There is a well-known degeneracy for pairwise auto-calibration which occurs when the two principal axes meet in a point. This is unfortunately common for real images. To mitigate this, the rotation estimation is instead performed on image triplets, for which the degenerate configurations are less likely to occur in practice. This is followed by estimation of the pairs which did not obtain a successful relative rotation in the previous step. The framework is successfully applied to an uncalibrated and unordered collection of images of the cathedral in Lund. It is also applied to the well-known Oxford dinosaur sequence, which consists of turntable motion. Image pairs from the turntable motion are in a degenerate configuration for auto-calibration since they both view the same point on the rotation axis.
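    The rotation-averaging step in the pipeline above can be illustrated with a minimal sketch. This is the standard chordal/spectral relaxation, not the thesis's own implementation, and the function names are illustrative: given relative rotations R_ij satisfying R_j ≈ R_ij R_i, the stacked global rotations span the top-3 eigenspace of the block matrix of pairwise constraints.

```python
import numpy as np

def project_to_so3(M):
    """Nearest rotation matrix in Frobenius norm, via SVD."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def rotation_averaging(n, rel):
    """Chordal rotation averaging by spectral relaxation (sketch).

    rel maps (i, j) -> relative rotation R_ij with R_j ~= R_ij @ R_i.
    Returns global rotations with the gauge fixed so the first one is I.
    """
    A = np.zeros((3 * n, 3 * n))
    for i in range(n):
        A[3*i:3*i+3, 3*i:3*i+3] = np.eye(3)
    for (i, j), Rij in rel.items():
        A[3*j:3*j+3, 3*i:3*i+3] = Rij      # block (j, i) = R_j R_i^T
        A[3*i:3*i+3, 3*j:3*j+3] = Rij.T
    # Top-3 eigenvectors of A span the stacked global rotations.
    _, V = np.linalg.eigh(A)
    basis = V[:, -3:]
    # Negate, if needed, so every 3x3 block has positive determinant.
    if np.linalg.det(basis[:3, :]) < 0:
        basis = -basis
    Rs = [project_to_so3(basis[3*i:3*i+3, :]) for i in range(n)]
    # Remove the common gauge so the first rotation is the identity.
    return [R @ Rs[0].T for R in Rs]
```

    In a noiseless, fully connected view graph this recovers the global rotations exactly (up to the gauge); with noise, each block of the eigenspace is re-projected onto SO(3).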

    Is Dual Linear Self-Calibration Artificially Ambiguous?

    Get PDF
    This purely theoretical work investigates the problem of artificial singularities in camera self-calibration. Self-calibration allows one to upgrade a projective reconstruction to metric and has a concise and well-understood formulation based on the Dual Absolute Quadric (DAQ), a rank-3 quadric envelope satisfying (nonlinear) 'spectral constraints': it must be positive of rank 3. The practical scenario we consider is that of square pixels, known principal point, and varying unknown focal length, for which generic Critical Motion Sequences (CMS) have been thoroughly derived. The standard linear self-calibration algorithm uses the DAQ paradigm but ignores the spectral constraints. It thus has artificial CMSs, which have barely been studied so far. We propose an algebraic model of singularities based on confocal quadric theory, which allows one to easily derive all types of CMSs. We first review the already known generic CMSs, for which any self-calibration algorithm fails. We then describe all CMSs for the standard linear self-calibration algorithm; among these are artificial CMSs caused by neglecting the above spectral constraints. We then show how to detect CMSs; when one occurs, it is actually possible to uniquely identify the correct self-calibration solution, based on a notion of signature of quadrics. The main conclusion of this paper is that a posteriori enforcing the spectral constraints in linear self-calibration is discriminant enough to resolve all artificial CMSs.
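    The paper's conclusion, that enforcing the spectral constraints (positive, rank 3) a posteriori disambiguates the linear solution, can be sketched as a spectral projection of a linearly estimated DAQ. This is a generic eigenvalue-clamping sketch under that assumption, not the authors' exact procedure:

```python
import numpy as np

def enforce_daq_constraints(Q):
    """Project an estimated DAQ onto the set of positive, rank-3 quadrics.

    Q: 4x4 symmetric matrix, defined only up to sign and scale.
    """
    Q = 0.5 * (Q + Q.T)                       # enforce exact symmetry
    w, V = np.linalg.eigh(Q)
    # Resolve the overall sign using the eigenvalue of largest magnitude.
    s = np.sign(w[np.argmax(np.abs(w))])
    w = s * w
    # Spectral constraints: positive semidefinite and rank <= 3.
    w_proj = np.clip(w, 0.0, None)
    w_proj[np.argmin(np.abs(w))] = 0.0        # zero out the smallest eigenvalue
    return V @ np.diag(w_proj) @ V.T
```

    For an ideal DAQ the projection is a no-op (up to the sign fix); for a noisy linear estimate it returns the nearest quadric, in a simple spectral sense, satisfying the constraints.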

    A case against Kruppa's equations for camera self-calibration

    Full text link

    The Extraction and Use of Image Planes for Three-dimensional Metric Reconstruction

    Get PDF
    The three-dimensional (3D) metric reconstruction of a scene from two-dimensional images is a fundamental problem in Computer Vision. The major bottleneck in the process of retrieving such structure lies in the task of recovering the camera parameters. These parameters can be calculated either through a pattern-based calibration procedure, which requires an accurate knowledge of the scene, or using a more flexible approach, known as camera autocalibration, which exploits point correspondences across images. While pattern-based calibration requires the presence of a calibration object, autocalibration constraints are often cast into nonlinear optimization problems that are sensitive to both image noise and initialization. In addition, autocalibration fails for some particular motions of the camera. To overcome these problems, we propose to combine scene and autocalibration constraints and address in this thesis (a) the problem of extracting geometric information of the scene from uncalibrated images, (b) the problem of obtaining a robust estimate of the affine calibration of the camera, and (c) the problem of upgrading and refining the affine calibration into a metric one. In particular, we propose a method for identifying the major planar structures in a scene from images and another method to recognize parallel pairs of planes whenever these are available. The identified parallel planes are then used to obtain a robust estimate of both the affine and metric 3D structure of the scene without resorting to the traditional error-prone calculation of vanishing points. We also propose a refinement method which, unlike existing ones, is capable of simultaneously incorporating plane parallelism and perpendicularity constraints in the autocalibration process. Our experiments demonstrate that the proposed methods are robust to image noise and provide satisfactory results.

    Motion estimation from spheres

    Get PDF
    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, v. 1, p. 1238-1243
    This paper addresses the problem of recovering epipolar geometry from spheres. Previous works have exploited epipolar tangencies induced by frontier points on the spheres for motion recovery. It will be shown in this paper that, besides epipolar tangencies, N² point features can be extracted from the apparent contours of the N spheres when N > 2. An algorithm for recovering the fundamental matrices from such point features and the epipolar tangencies from 3 or more spheres is developed, with the point features providing a homography over the view pairs and the epipolar tangencies determining the epipoles. In general, there will be two solutions for the locations of the epipoles. One of the solutions corresponds to the true camera configuration, while the other corresponds to a mirrored configuration. Several methods are proposed to select the right solution. Experiments using 3 and 4 spheres demonstrate that the algorithm can be carried out easily and achieves high precision. © 2006 IEEE.

    Euclidean Structure from N>=2 Parallel Circles: Theory and Algorithms

    Get PDF
    Our problem is that of recovering, in one view, the 2D Euclidean structure induced by the projections of N parallel circles. This structure is a prerequisite for camera calibration and pose computation. Until now, no general method has been described for N > 2. The main contribution of this work is to state the problem in terms of a system of linear equations to solve. We give a closed-form solution as well as bundle adjustment-like refinements, increasing the technical applicability and numerical stability. Our theoretical approach generalizes and extends all those described in existing works for N = 2 in several respects, as we can treat simultaneously pairs of orthogonal lines and pairs of circles within a unified framework. The proposed algorithm may be easily implemented using well-known numerical algorithms. Its performance is illustrated by simulations and experiments with real images.

    New Results on Triangulation, Polynomial Equation Solving and Their Application in Global Localization

    Get PDF
    This thesis addresses the problem of global localization from images. The overall goal is to find the location and the direction of a camera given an image taken with the camera, relative to a 3D world model. In order to solve the problem several subproblems have to be handled. The two main steps for constructing a system for global localization consist of model building and localization. For the model construction phase we give a new method for triangulation that guarantees that the globally optimal position is attained under the assumption of Gaussian noise in the image measurements. A common framework for the triangulation of points, lines and conics is presented. The second contribution of the thesis is in the field of solving systems of polynomial equations. Many problems in geometrical computer vision lead to computing the real roots of a system of polynomial equations, and several such geometry problems appear in the localization problem. The method presented in the thesis gives a significant improvement in the numerics when Gröbner basis methods are applied. Such methods are often plagued by numerical problems, but by using the fact that the complete Gröbner basis is not needed, the numerics can be improved. In the final part of the thesis we present several new minimal, geometric problems that have not been solved previously. These minimal cases make use of both two- and three-dimensional correspondences at the same time. The solutions to these minimal problems form the basis of a localization system which aims at improving robustness compared to the state of the art.
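    For contrast with the thesis's globally optimal triangulation, the standard linear (DLT) baseline, which minimizes only an algebraic rather than a reprojection error, can be sketched as follows; the function name and conventions are illustrative:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera matrices; x1, x2: 2D image points.
    Each view contributes two rows of the homogeneous system A X = 0,
    obtained from the cross product x × (P X) = 0.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

    With exact correspondences this recovers the point exactly; under Gaussian image noise its estimate is generally not the optimum that the thesis's method guarantees.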

    Auto-Calibration and Three-Dimensional Reconstruction for Zooming Cameras

    Get PDF
    This dissertation proposes new algorithms to recover the calibration parameters and 3D structure of a scene, using 2D images taken by uncalibrated stationary zooming cameras. This is a common configuration, usually encountered in surveillance camera networks, stereo camera systems, and event monitoring vision systems. This problem is known as camera auto-calibration (also called self-calibration) and the motivation behind this work is to obtain the Euclidean three-dimensional reconstruction and metric measurements of the scene, using only the captured images. Under this configuration, the problem of auto-calibrating zooming cameras differs from the classical auto-calibration problem of a moving camera in two major aspects. First, the camera intrinsic parameters are changing due to zooming. Second, because cameras are stationary in our case, using classical motion constraints, such as a pure translation for example, is not possible. In order to simplify the non-linear complexity of this problem, i.e., auto-calibration of zooming cameras, we have followed a geometric stratification approach. In particular, we have taken advantage of the movement of the camera center that results from the zooming process to locate the plane at infinity and, consequently, to obtain an affine reconstruction. Then, using the assumption that typical cameras have rectangular or square pixels, the calculation of the camera intrinsic parameters becomes possible, leading to the recovery of the Euclidean 3D structure. Being linear, the proposed algorithms were easily extended to the case of an arbitrary number of images and cameras. Furthermore, we have devised a sufficient constraint for detecting scene parallel planes, useful information for solving other computer vision problems.
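    The stratification described above (projective, then affine via the plane at infinity, then metric via the intrinsics) is conventionally expressed by a single upgrade homography. Below is a sketch of that final step only, assuming the plane at infinity (p, 1) and the intrinsics K of the first camera have already been recovered; it is the textbook upgrade, not the dissertation's estimation algorithms:

```python
import numpy as np

def metric_upgrade(P_list, X_list, K, p):
    """Upgrade a projective reconstruction to metric.

    Given the plane at infinity pi_inf = (p, 1) in the projective frame
    and the intrinsics K of the first camera, the standard stratified
    upgrade homography is H = [[K, 0], [-p^T K, 1]]. Cameras transform
    as P -> P H and points as X -> H^{-1} X, leaving projections intact.
    """
    H = np.eye(4)
    H[:3, :3] = K
    H[3, :3] = -p @ K
    H_inv = np.linalg.inv(H)
    P_up = [P @ H for P in P_list]
    X_up = [H_inv @ X for X in X_list]
    return P_up, X_up
```

    Because cameras and points are transformed by inverse homographies, every image projection P X is unchanged by the upgrade; only the reconstruction's frame becomes metric.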

    Rank classification of linear line structure in determining trifocal tensor.

    Get PDF
    Zhao, Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (p. 111-117) and index. Abstracts in English and Chinese.

    Contents:
    1 Introduction
      1.1 Motivation
      1.2 Objective of the study
      1.3 Challenges and our approach
      1.4 Original contributions
      1.5 Organization of this dissertation
    2 Related Work
      2.1 Critical configuration for motion estimation and projective reconstruction
        2.1.1 Point feature
        2.1.2 Line feature
      2.2 Camera motion estimation
        2.2.1 Line tracking
        2.2.2 Determining camera motion
    3 Preliminaries on Three-View Geometry and Trifocal Tensor
      3.1 Projective spaces P3 and transformations
      3.2 The trifocal tensor
      3.3 Computation of the trifocal tensor: normalized linear algorithm
    4 Linear Line Structures
      4.1 Models of line space
      4.2 Line structures
        4.2.1 Linear line space
        4.2.2 Ruled surface
        4.2.3 Line congruence
        4.2.4 Line complex
    5 Critical Configurations of Three Views Revealed by Line Correspondences
      5.1 Two-view degeneracy
      5.2 Three-view degeneracy
        5.2.1 Introduction
        5.2.2 Linear line space
        5.2.3 Linear ruled surface
        5.2.4 Linear line congruence
        5.2.5 Linear line complex
      5.3 Retrieving tensor in critical configurations
      5.4 Rank classification of non-linear line structures
    6 Camera Motion Estimation Framework
      6.1 Line extraction
      6.2 Line tracking
        6.2.1 Preliminary geometric tracking
        6.2.2 Experimental results
      6.3 Camera motion estimation framework using EKF
    7 Experimental Results
      7.1 Simulated data experiments
      7.2 Real data experiments
        7.2.1 Linear line space
        7.2.2 Linear ruled surface
        7.2.3 Linear line congruence
        7.2.4 Linear line complex
      7.3 Empirical observation: ruled plane for line transfer
      7.4 Simulation for non-linear line structures
    8 Conclusions and Future Work
      8.1 Summary
      8.2 Future work
    A Notations
    B Tensor
    C Matrix Decomposition and Estimation Techniques
    D MATLAB Files
      D.1 Estimation matrix
      D.2 Line transfer
      D.3 Simulation

    Camera self-calibration and analysis of singular cases

    Get PDF
    Master's thesis (Master of Engineering).