8 research outputs found

    Autocalibration with the Minimum Number of Cameras with Known Pixel Shape

    In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount, since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without a calibration device, by instead enforcing simple constraints on the camera parameters. In the absence of information about the internal camera parameters, such as the focal length and the principal point, knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of the autocalibration of a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that requires only 5 cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. For this purpose, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting of the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.
    Comment: 19 pages, 14 figures, 7 tables, J. Math. Imaging Vi
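    As a concrete illustration of the known-pixel-shape constraint used here, a minimal sketch (assuming square pixels, with illustrative focal length and principal point not taken from the paper) checks the two linear conditions each such camera imposes on the image of the absolute conic:

```python
import numpy as np

# Sketch of the known-pixel-shape constraint, assuming square pixels
# (zero skew, unit aspect ratio). Focal length f and principal point
# (u0, v0) are arbitrary illustrative values.
f, u0, v0 = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, u0],
              [0.0, f, v0],
              [0.0, 0.0, 1.0]])

# Image of the absolute conic: omega = (K K^T)^{-1}, up to scale.
omega = np.linalg.inv(K @ K.T)
omega /= omega[2, 2]  # fix the projective scale

# Square pixels impose two linear conditions on omega per camera:
zero_skew = abs(omega[0, 1]) < 1e-9              # omega_12 = 0
unit_aspect = abs(omega[0, 0] - omega[1, 1]) < 1e-9  # omega_11 = omega_22
print(zero_skew, unit_aspect)
```

    Each camera with known pixel shape thus contributes linear constraints on omega, which is why a minimal number of such cameras suffices for the Euclidean upgrade.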

    Circular motion geometry using minimal data


    What geometric information can be obtained from one or more images taken under perspective projection?

    The work presented in this article was carried out within the Movi project of the Lifia laboratory in Grenoble, by Boubakeur Boufama, Pascal Brand, Patrick Gros, Luce Morin, Long Quan and Francoise Veillon, with the participation and under the direction of Roger Mohr. The contributions of each author are indicated throughout the text by the bibliographic references, to which the reader is invited to turn for the technical details that are not all given here. The whole of the work was carried out within the framework of the Esprit - Bra Viva project. In computer vision, one considers a camera that takes images. Assuming only that this image-formation operation is of a certain geometric type, and more precisely that it is a perspective projection, one can compute from one or more images geometric quantities that characterize the observed scene. After a study of several geometric camera models, the geometric information that can be extracted from one, two, three or more images is examined in turn.
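    The perspective-projection camera model assumed throughout can be sketched as follows; all numeric values (intrinsics, pose, scene point) are illustrative, not from the article:

```python
import numpy as np

# Minimal pinhole-camera sketch: a homogeneous 3D point X projects to
# x ~ K [R | t] X. Intrinsics, pose and the scene point are made up.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # camera 5 units back
P = K @ np.hstack([R, t[:, None]])            # 3x4 projection matrix

X = np.array([1.0, 2.0, 10.0, 1.0])           # homogeneous scene point
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]               # pixel coordinates
print(u, v)
```

    Quantities that survive this projection (cross-ratios, projective invariants) are exactly the geometric information the article studies.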

    Automatic visual recognition using parallel machines

    Invariant features and quick matching algorithms are two major concerns in automatic visual recognition. The former reduce the size of an established model database, and the latter shorten the computation time. This dissertation discusses both line invariants under perspective projection and a parallel implementation of a dynamic programming technique for shape recognition. The feasibility of using parallel machines is demonstrated through the dramatically reduced time complexity. The algorithms are implemented on the AP1000 MIMD parallel machine. For processing an object with n features, the time complexity of the proposed parallel algorithm is O(n), while that of a uniprocessor is O(n²). Two applications, one for shape matching and the other for chain-code extraction, are used to demonstrate the usefulness of the methods. Invariants of four general lines under perspective projection are also discussed. In contrast to approaches that use epipolar geometry, we investigate invariants under isotropy subgroups. Theoretically, two independent invariants can be found for four general lines in 3D space. In practice, we show how to obtain these two invariants from the projective images of four general lines without the need for camera calibration. Finally, a projective invariant recognition system based on a hypothesis-generation-testing scheme is implemented on a hypercube parallel architecture: object recognition is achieved by matching the scene projective invariants to the model projective invariants, a step called transfer.
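    As an illustrative stand-in for the dissertation's dynamic-programming shape matcher, the sketch below computes an edit distance between two chain codes; the table's anti-diagonals are mutually independent, which is what a parallel machine exploits to reduce the O(n²) sequential time to O(n) with n processors. The recurrence and the sample chain codes are hypothetical, not taken from the dissertation.

```python
# Chain-code shape matching by dynamic programming (edit distance).
# Sequentially this fills an (n+1) x (m+1) table in O(n*m); entries on
# the same anti-diagonal depend only on earlier anti-diagonals, so a
# parallel machine can evaluate each anti-diagonal concurrently.
def chain_code_distance(a, b):
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[n][m]

# Two 8-connected chain codes of similar contours (made-up data).
print(chain_code_distance("00112233", "0112233"))
```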

    Rank classification of linear line structure in determining trifocal tensor.

    Zhao, Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (p. 111-117) and index. Abstracts in English and Chinese.
    Chapter 1 --- Introduction
        1.1 Motivation
        1.2 Objective of the study
        1.3 Challenges and our approach
        1.4 Original contributions
        1.5 Organization of this dissertation
    Chapter 2 --- Related Work
        2.1 Critical configuration for motion estimation and projective reconstruction
            2.1.1 Point feature
            2.1.2 Line feature
        2.2 Camera motion estimation
            2.2.1 Line tracking
            2.2.2 Determining camera motion
    Chapter 3 --- Preliminaries on Three-View Geometry and Trifocal Tensor
        3.1 Projective spaces P3 and transformations
        3.2 The trifocal tensor
        3.3 Computation of the trifocal tensor - Normalized linear algorithm
    Chapter 4 --- Linear Line Structures
        4.1 Models of line space
        4.2 Line structures
            4.2.1 Linear line space
            4.2.2 Ruled surface
            4.2.3 Line congruence
            4.2.4 Line complex
    Chapter 5 --- Critical Configurations of Three Views Revealed by Line Correspondences
        5.1 Two-view degeneracy
        5.2 Three-view degeneracy
            5.2.1 Introduction
            5.2.2 Linear line space
            5.2.3 Linear ruled surface
            5.2.4 Linear line congruence
            5.2.5 Linear line complex
        5.3 Retrieving tensor in critical configurations
        5.4 Rank classification of non-linear line structures
    Chapter 6 --- Camera Motion Estimation Framework
        6.1 Line extraction
        6.2 Line tracking
            6.2.1 Preliminary geometric tracking
            6.2.2 Experimental results
        6.3 Camera motion estimation framework using EKF
    Chapter 7 --- Experimental Results
        7.1 Simulated data experiments
        7.2 Real data experiments
            7.2.1 Linear line space
            7.2.2 Linear ruled surface
            7.2.3 Linear line congruence
            7.2.4 Linear line complex
        7.3 Empirical observation: ruled plane for line transfer
        7.4 Simulation for non-linear line structures
    Chapter 8 --- Conclusions and Future Work
        8.1 Summary
        8.2 Future work
    Appendix A --- Notations
    Appendix B --- Tensor
    Appendix C --- Matrix Decomposition and Estimation Techniques
    Appendix D --- MATLAB Files
        D.1 Estimation matrix
        D.2 Line transfer
        D.3 Simulation
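    The line-space models studied in the thesis are conventionally written in Plücker coordinates; a minimal sketch (with illustrative points, not taken from the thesis) builds the Plücker 6-vector of a 3D line and checks the Klein quadric constraint that every such vector must satisfy. Linear families of these 6-vectors are the ruled surfaces, congruences and complexes whose ranks the thesis classifies.

```python
import numpy as np

# Plücker coordinates of the 3D line through points P and Q:
# L = (d, m) with direction d = Q - P and moment m = P x Q.
# Every valid line satisfies the Klein quadric constraint d . m = 0.
def plucker(P, Q):
    d = Q - P
    m = np.cross(P, Q)
    return np.concatenate([d, m])

P = np.array([1.0, 2.0, 3.0])   # illustrative points
Q = np.array([4.0, 0.0, 1.0])
L = plucker(P, Q)
d, m = L[:3], L[3:]
print(np.isclose(d @ m, 0.0))   # Klein quadric constraint
```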

    Dense real-time 3D reconstruction from multiple images

    The rapid advance of computer graphics and acquisition technologies has led to the widespread use of 3D models. Techniques for 3D reconstruction from multiple views aim to recover the structure of a scene and the position and orientation (motion) of the camera using only the geometric constraints in 2D images. This problem, known as Structure from Motion (SfM), has been the focus of a great deal of research effort in recent years; however, the automatic, dense, real-time and accurate reconstruction of a scene is still a major research challenge. This thesis presents work targeting the development of efficient algorithms to produce high-quality, accurate reconstructions, introducing new computer vision techniques for camera motion calibration, dense SfM reconstruction and dense real-time 3D reconstruction. In SfM, a second challenge is to build an effective reconstruction framework that provides dense and high-quality surface modelling. This thesis develops a complete, automatic and flexible system with a simple user interface going from raw images to a 3D surface representation. As part of the proposed image reconstruction approach, the thesis introduces an accurate and reliable region-growing algorithm to propagate dense matching points from the sparse key points among all stereo pairs. This dense 3D reconstruction proposal addresses the deficiencies of existing SfM systems built on sparsely distributed 3D point clouds, which are insufficient for reconstructing a complete 3D model of a scene. Existing SfM reconstruction methods perform a bundle adjustment optimization of the global geometry in order to obtain an accurate model; such an optimization is computationally expensive and cannot be run in a real-time application.
    Extended Kalman Filter (EKF) Simultaneous Localization and Mapping (SLAM) addresses the problem of estimating, in real time, the structure of the surrounding world perceived by moving sensors (cameras) while simultaneously localizing those sensors within it. However, standard EKF-SLAM techniques are susceptible to errors introduced during the linearization of the state and measurement predictions.
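    The linearization step that introduces these errors can be seen in a toy one-dimensional extended Kalman filter, where a nonlinear measurement model is replaced by its first-order Taylor expansion via the Jacobian. All models and noise values below are illustrative, not from any particular SLAM system.

```python
# Toy 1-D EKF illustrating the linearization of a nonlinear
# measurement model h(x) = x**2 via its Jacobian H = 2x. The motion
# model, noise levels and measurement are all made-up values.
def ekf_step(x, P, z, q=0.01, r=0.1):
    # Predict (identity motion model, process noise q).
    x_pred, P_pred = x, P + q
    # Linearize the measurement around the predicted state.
    h = x_pred ** 2          # predicted measurement
    H = 2.0 * x_pred         # Jacobian of h at x_pred
    # Update with the Kalman gain of the linearized system.
    S = H * P_pred * H + r
    K = P_pred * H / S
    x_new = x_pred + K * (z - h)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x0, P0 = 1.0, 1.0
x1, P1 = ekf_step(x0, P0, z=4.2)   # true state near 2 (z ~ 2**2)
print(x1 > x0, P1 < P0)
```

    Because the update uses H evaluated at the predicted state, a poor prediction yields a poor linearization, which is the error source the text refers to.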

    Invariants of 6 Points from 3 Uncalibrated Images

    There are three projective invariants of a set of six points in general position in space. It is well known that these invariants cannot be recovered from one image; however, an invariant relationship does exist between space invariants and image invariants. This invariant relationship is first derived for a single image, and is then used to derive the space invariants when multiple images are available. This paper establishes that the minimum number of images for computing these invariants is three, and that invariants from three images can have as many as three solutions. Algorithms are presented for computing these invariants in closed form. The accuracy and stability with respect to image noise, the selection of the triplets of images and the distance between viewing positions are studied through both real and simulated images. An application of these invariants is also presented, extending the projective reconstruction results of Faugeras [6] and Hartley et al. [10] and the epipolar geometry determination method of Sturm [18] from two uncalibrated images to the case of three uncalibrated images.
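    The elementary projective invariant underlying such constructions is the cross-ratio; a minimal sketch (using an arbitrary 1-D homography and sample points, not taken from the paper) verifies its invariance numerically:

```python
# Cross-ratio of four collinear points: invariant under any projective
# transformation of the line. Points and the homography are made up.
def cross_ratio(a, b, c, d):
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

pts = [0.0, 1.0, 3.0, 7.0]
cr = cross_ratio(*pts)

# Apply a 1-D projective map x -> (2x + 1) / (x + 3) to each point.
mapped = [(2 * x + 1) / (x + 3) for x in pts]
cr_mapped = cross_ratio(*mapped)
print(abs(cr - cr_mapped) < 1e-9)   # invariance holds
```

    The six-point invariants of the paper are built from the same principle, applied to configurations of points across three views.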