
    Astrometry with the Wide-Field InfraRed Space Telescope

    The Wide-Field InfraRed Space Telescope (WFIRST) will be capable of delivering precise astrometry for faint sources over the enormous field of view of its main camera, the Wide-Field Imager (WFI). This unprecedented combination will be transformative for the many scientific questions that require precise positions, distances, and velocities of stars. We describe the expectations for the astrometric precision of the WFIRST WFI in different scenarios, illustrate how a broad range of science cases will see significant advances with such data, and identify aspects of WFIRST's design where small adjustments could greatly improve its power as an astrometric instrument.
    Comment: version accepted to JATI
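
    The abstract concerns expected astrometric precision rather than a specific algorithm, but a common back-of-the-envelope rule relates single-exposure centroiding precision to the PSF width and the signal-to-noise ratio. The sketch below uses that generic rule with purely illustrative numbers; neither the function name nor the values come from the paper.

```python
def centroid_precision_mas(fwhm_mas: float, snr: float) -> float:
    """Back-of-the-envelope single-exposure centroiding precision for a
    well-sampled point source: sigma ~ (FWHM / 2.355) / SNR.
    Rule of thumb only; the paper's WFI error budget is far more detailed."""
    sigma_psf = fwhm_mas / 2.355   # Gaussian sigma corresponding to the FWHM
    return sigma_psf / snr

# Hypothetical example values (not from the paper): a ~110 mas near-IR PSF
# and a star measured at SNR = 100.
print(f"~{centroid_precision_mas(110.0, 100.0):.2f} mas per exposure")
```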

    Hierarchical structure-and-motion recovery from uncalibrated images

    This paper addresses the structure-and-motion problem, which requires recovering camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.
    Comment: Accepted for publication in CVI
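
    As a rough illustration of the hierarchical paradigm (not the Samantha code itself), the sketch below merges partial reconstructions pairwise along a balanced tree, so the merge depth grows as O(log n) rather than O(n) as in a sequential pipeline; merge_pair is a hypothetical placeholder for aligning two partial models and running a local bundle adjustment.

```python
def merge_pair(a: set, b: set) -> set:
    # Hypothetical placeholder: a real pipeline would estimate a similarity
    # transform between the two partial models and run local bundle adjustment.
    return a | b

def hierarchical_merge(clusters: list[set]) -> set:
    """Merge partial reconstructions pairwise, level by level (balanced tree)."""
    while len(clusters) > 1:
        nxt = [merge_pair(clusters[i], clusters[i + 1])
               for i in range(0, len(clusters) - 1, 2)]
        if len(clusters) % 2:        # carry an odd leftover up to the next level
            nxt.append(clusters[-1])
        clusters = nxt
    return clusters[0]

# Toy example: each set stands for the images covered by one partial model.
print(hierarchical_merge([{0, 1}, {2, 3}, {4, 5}, {6, 7}]))
```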

    Self-Calibration of Cameras with Euclidean Image Plane in Case of Two Views and Known Relative Rotation Angle

    The internal calibration of a pinhole camera is given by five parameters that are combined into an upper-triangular 3×3 calibration matrix. If the skew parameter is zero and the aspect ratio is equal to one, then the camera is said to have a Euclidean image plane. In this paper, we propose a non-iterative self-calibration algorithm for a camera with a Euclidean image plane in the case where the remaining three internal parameters (the focal length and the principal point coordinates) are fixed but unknown. The algorithm requires a set of N ≥ 7 point correspondences in two views and also the measured relative rotation angle between the views. We show that the problem generically has six solutions (including complex ones). The algorithm has been implemented and tested both on synthetic data and on a publicly available real dataset. The experiments demonstrate that the method is correct, numerically stable, and robust.
    Comment: 13 pages, 7 eps-figure
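
    For reference, the "Euclidean image plane" assumption stated above (zero skew, unit aspect ratio) leaves exactly three unknowns, which can be written as the calibration matrix below; this is just the standard parameterization implied by the abstract, not the paper's solver.

```python
import numpy as np

def euclidean_image_plane_K(f: float, u0: float, v0: float) -> np.ndarray:
    """Upper-triangular calibration matrix with zero skew and unit aspect
    ratio: the three remaining unknowns are the focal length f and the
    principal point (u0, v0)."""
    return np.array([[f,   0.0, u0],
                     [0.0, f,   v0],
                     [0.0, 0.0, 1.0]])

# Example with arbitrary illustrative values.
print(euclidean_image_plane_K(1200.0, 640.0, 360.0))
```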

    Is Dual Linear Self-Calibration Artificially Ambiguous?

    This purely theoretical work investigates the problem of artificial singularities in camera self-calibration. Self-calibration allows one to upgrade a projective reconstruction to metric and has a concise and well-understood formulation based on the Dual Absolute Quadric (DAQ), a rank-3 quadric envelope satisfying (nonlinear) 'spectral constraints': it must be positive semidefinite and of rank 3. The practical scenario we consider is that of square pixels, a known principal point, and a varying unknown focal length, for which the generic Critical Motion Sequences (CMS) have been thoroughly derived. The standard linear self-calibration algorithm uses the DAQ paradigm but ignores the spectral constraints. It thus has artificial CMSs, which have barely been studied so far. We propose an algebraic model of singularities based on the theory of confocal quadrics, which allows all types of CMSs to be derived easily. We first review the already known generic CMSs, for which any self-calibration algorithm fails. We then describe all CMSs for the standard linear self-calibration algorithm; among these are artificial CMSs caused by neglecting the above spectral constraints. We then show how to detect CMSs; when one occurs, it is actually possible to uniquely identify the correct self-calibration solution, based on a notion of signature of quadrics. The main conclusion of this paper is that a posteriori enforcing the spectral constraints in linear self-calibration is discriminating enough to resolve all artificial CMSs.
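
    A minimal sketch of what enforcing the spectral constraints could look like in practice: clamp a symmetric 4×4 DAQ estimate to be positive semidefinite of rank 3 via its eigendecomposition. This is a generic projection offered only to illustrate the constraints the paper discusses, not the authors' method.

```python
import numpy as np

def clamp_daq_spectrum(Q: np.ndarray) -> np.ndarray:
    """Project a symmetric 4x4 DAQ estimate onto the rank-3 positive
    semidefinite cone by clamping its eigenvalues (generic sketch)."""
    Q = 0.5 * (Q + Q.T)                    # symmetrize
    if np.trace(Q) < 0:                    # resolve the overall sign ambiguity
        Q = -Q
    w, V = np.linalg.eigh(Q)               # eigenvalues in ascending order
    w = np.clip(w, 0.0, None)              # positivity
    w[0] = 0.0                             # force rank <= 3
    return V @ np.diag(w) @ V.T
```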

    An Enhanced Structure-from-Motion Paradigm based on the Absolute Dual Quadric and Images of Circular Points

    This work introduces a new unified Structure-from-Motion (SfM) paradigm in which images of circular point-pairs can be combined with images of natural points. An imaged circular point-pair encodes the 2D Euclidean structure of a world plane and can easily be derived from the image of a planar shape, especially one that includes circles. A classical SfM method generally runs in two steps: first, a projective factorization of all matched image points (into projective cameras and points); second, a camera self-calibration that upgrades the obtained reconstruction from projective to Euclidean. This work shows how to introduce images of circular points in these two SfM steps, and its key contribution is to provide the theoretical foundations for combining “classical” linear self-calibration constraints with additional ones derived from such images. We show that the two proposed SfM steps clearly contribute to better results than the classical approach. We validate our contributions on synthetic and real images.
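
    The linear constraints alluded to above rest on a standard projective-geometry fact: the image of a circular point lies on the image of the absolute conic (IAC) ω, so each imaged circular point i gives two real linear equations from iᵀωi = 0 on the six entries of ω. The sketch below builds those two rows; how the paper actually combines them with DAQ constraints is more involved.

```python
import numpy as np

def iac_rows_from_circular_point(i: np.ndarray) -> np.ndarray:
    """Two linear constraints on the IAC omega from one imaged circular point
    i (a complex 3-vector): Re(i^T omega i) = 0 and Im(i^T omega i) = 0.
    omega is parameterized by (w11, w12, w13, w22, w23, w33)."""
    x, y, z = i
    m = np.array([x * x, 2 * x * y, 2 * x * z, y * y, 2 * y * z, z * z])
    return np.vstack([m.real, m.imag])

# Illustrative complex image point (hypothetical values).
rows = iac_rows_from_circular_point(np.array([1.0 + 0.2j, 0.5 - 1.0j, 1.0 + 0.0j]))
print(rows.shape)  # (2, 6)
```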

    Projector Self-Calibration using the Dual Absolute Quadric

    The applications for projectors have increased dramatically since their origins in cinema. These include augmented reality, information displays, 3D scanning, and even archiving and surgical intervention. One common thread between all of these applications is the necessary step of projector calibration. Projector calibration can be a challenging task, and requires significant effort and preparation to ensure accuracy and fidelity. This is especially true in large-scale, multi-projector installations used for projection mapping. Generally, the cameras for projector-camera systems are calibrated off-site, and then used in-field under the assumption that the intrinsics have remained constant. However, the assumption of off-site calibration imposes several hard restrictions. Among these is that the intrinsics remain invariant between the off-site calibration process and the projector calibration site. This assumption is easily invalidated upon physical impact, or changing of lenses. To address this, camera self-calibration has been proposed for the projector calibration problem. However, currently proposed methods suffer from degenerate conditions that are easily encountered in practical projector calibration setups, resulting in undesirable variability and a distinct lack of robustness. In particular, the condition of near-intersecting optical axes of the camera positions used to capture the scene resulted in high variability and significant error in the recovered camera focal lengths. As such, a more robust method was required. To address this issue, an alternative camera self-calibration method is proposed. In this thesis we demonstrate our method of projector calibration with unknown and uncalibrated cameras via autocalibration using the Dual Absolute Quadric (DAQ). This method results in a significantly more robust projector calibration process, especially in the presence of correspondence noise, when compared with previous methods. We use the DAQ method to calibrate the cameras using projector-generated correspondences, by upgrading an initial projective calibration to metric, and subsequently calibrating the projector using the recovered metric structure of the scene. Our experiments provide strong evidence of the brittle behaviour of existing methods of projector self-calibration by evaluating them in near-degenerate conditions using both synthetic and real data. Further, they also show that the DAQ can be used successfully to calibrate a projector-camera system and reconstruct the surface used for projection mapping robustly, where previous methods fail.
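
    The metric-upgrade step mentioned above rests on the standard DAQ relation ω*_i = P_i Q∞* P_iᵀ = K_i K_iᵀ. The sketch below recovers K for one camera from an estimated DAQ; it assumes a well-conditioned, positive-definite estimate and only illustrates that relation, not the thesis' calibration pipeline.

```python
import numpy as np

def intrinsics_from_daq(P: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Recover K (3x3, upper triangular) for a projective camera P (3x4) from
    an estimated Dual Absolute Quadric Q (4x4), using omega* = P Q P^T = K K^T.
    Assumes the recovered dual image of the absolute conic is positive definite."""
    diac = P @ Q @ P.T
    diac = 0.5 * (diac + diac.T) / diac[2, 2]      # symmetrize, fix the scale
    # omega = inv(omega*) = K^{-T} K^{-1}; a Cholesky factor of omega gives K.
    L = np.linalg.cholesky(np.linalg.inv(diac))    # lower triangular
    K = np.linalg.inv(L).T                         # upper triangular, K K^T = omega*
    return K / K[2, 2]
```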

    The Extraction and Use of Image Planes for Three-dimensional Metric Reconstruction

    The three-dimensional (3D) metric reconstruction of a scene from two-dimensional images is a fundamental problem in Computer Vision. The major bottleneck in the process of retrieving such structure lies in the task of recovering the camera parameters. These parameters can be calculated either through a pattern-based calibration procedure, which requires accurate knowledge of the scene, or using a more flexible approach, known as camera autocalibration, which exploits point correspondences across images. While pattern-based calibration requires the presence of a calibration object, autocalibration constraints are often cast as nonlinear optimization problems that are sensitive to both image noise and initialization. In addition, autocalibration fails for some particular motions of the camera. To overcome these problems, we propose to combine scene and autocalibration constraints and address in this thesis (a) the problem of extracting geometric information about the scene from uncalibrated images, (b) the problem of obtaining a robust estimate of the affine calibration of the camera, and (c) the problem of upgrading and refining the affine calibration into a metric one. In particular, we propose a method for identifying the major planar structures in a scene from images and another method to recognize parallel pairs of planes whenever these are available. The identified parallel planes are then used to obtain a robust estimate of both the affine and metric 3D structure of the scene without resorting to the traditional error-prone calculation of vanishing points. We also propose a refinement method which, unlike existing ones, is capable of simultaneously incorporating plane parallelism and perpendicularity constraints in the autocalibration process. Our experiments demonstrate that the proposed methods are robust to image noise and provide satisfactory results.
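
    One textbook way identified parallel planes can feed an affine upgrade, sketched here only to illustrate the kind of constraint the thesis exploits (not its actual estimator): world-parallel planes span a pencil that contains the plane at infinity, so two such pairs pin it down as the common member of two pencils.

```python
import numpy as np

def plane_at_infinity(pair1, pair2):
    """Estimate the plane at infinity in a projective frame from two pairs of
    world-parallel planes (each plane a homogeneous 4-vector). Each pair spans
    a pencil containing pi_infinity; the common member of the two pencils is
    read off the (near-)null space of [p1 p2 -p3 -p4]. Sketch only."""
    p1, p2 = (np.asarray(p, dtype=float) for p in pair1)
    p3, p4 = (np.asarray(p, dtype=float) for p in pair2)
    A = np.column_stack([p1, p2, -p3, -p4])       # 4x4 system
    _, _, Vt = np.linalg.svd(A)
    a, b, _, _ = Vt[-1]                           # coefficients in the first pencil
    pinf = a * p1 + b * p2
    return pinf / np.linalg.norm(pinf)
```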

    Camera Self-Calibration Using the Kruppa Equations and the SVD of the Fundamental Matrix: The Case of Varying Intrinsic Parameters

    Estimation of the camera intrinsic calibration parameters is a prerequisite to a wide variety of vision tasks related to motion and stereo analysis. A major breakthrough related to the intrinsic calibration problem was the introduction in the early nineties of the autocalibration paradigm, according to which calibration is achieved not with the aid of a calibration pattern but by observing a number of image features in a set of successive images. Until recently, however, most research efforts have been focused on applying the autocalibration paradigm to estimating constant intrinsic calibration parameters. Therefore, such approaches are inapplicable to cases where the intrinsic parameters undergo continuous changes due to focusing and/or zooming. In this paper, our previous work for autocalibration in the case of constant camera intrinsic parameters is extended and a novel autocalibration method capable of handling variable intrinsic parameters is proposed. The method relies upon the Singular Value Decomposition of the fundamental matrix, which leads to a particularly simple form of the Kruppa equations. In contrast to the classical formulation that yields an over-determined system of constraints, a purely algebraic derivation is proposed here which provides a straightforward answer to the problem of determining which constraints to employ among the set of available ones. Additionally, the new formulation does not employ the epipoles, which are known to be difficult to estimate accurately. The intrinsic calibration parameters are recovered from the developed constraints through a nonlinear minimization scheme that explicitly takes into consideration the uncertainty associated with the estimates of the employed fundamental matrices. Detailed experimental results using both simulated and real image sequences demonstrate the feasibility of the approach
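
    For context, the classical (epipole-based) Kruppa constraint that the paper improves upon can be written as F w1 Fᵀ ≃ [e']× w2 [e']×ᵀ with w_i = K_i K_iᵀ. The residual check below is a sketch of that classical form only; the paper's own contribution is an epipole-free variant derived from the SVD of F.

```python
import numpy as np

def classical_kruppa_residual(F: np.ndarray, K1: np.ndarray, K2: np.ndarray) -> float:
    """Residual of the classical Kruppa equations F w1 F^T ~ [e']_x w2 [e']_x^T,
    where w_i = K_i K_i^T and e' is the epipole in the second image (F^T e' = 0).
    Sketch of the classical, epipole-based form; the paper instead derives an
    epipole-free version from the SVD of F."""
    w1, w2 = K1 @ K1.T, K2 @ K2.T
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, -1]                                  # left null vector of F
    ex = np.array([[0.0, -e2[2], e2[1]],
                   [e2[2], 0.0, -e2[0]],
                   [-e2[1], e2[0], 0.0]])          # [e']_x skew-symmetric matrix
    lhs = F @ w1 @ F.T
    rhs = ex @ w2 @ ex.T
    lhs, rhs = lhs / np.linalg.norm(lhs), rhs / np.linalg.norm(rhs)
    return float(min(np.linalg.norm(lhs - rhs), np.linalg.norm(lhs + rhs)))
```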