
    Calibration routine for a telecentric stereo vision system considering affine mirror ambiguity

    A robust calibration approach for a telecentric stereo camera system for three-dimensional (3-D) surface measurements is presented, considering the effect of affine mirror ambiguity. By optimizing the parameters of a rigid body transformation between two marker planes and transforming the two-dimensional (2-D) data into one coordinate frame, a 3-D calibration object is obtained, avoiding high manufacturing costs. Based on recent contributions in the literature, the calibration routine consists of an initial parameter estimation by affine reconstruction, which provides good starting values for a subsequent nonlinear stereo refinement based on a Levenberg–Marquardt optimization. To this end, the coordinates of the calibration target are reconstructed in 3-D using the Tomasi–Kanade factorization algorithm for affine cameras with Euclidean upgrade. The reconstructed result is not properly scaled and not unique due to affine ambiguity. To correct the erroneous scaling, the similarity transformation between the points of one 2-D calibration plane and the corresponding 3-D points is estimated. The resulting scaling factor is used to rescale the 3-D point data, which, in combination with the 2-D calibration plane data, then allows determination of the starting values for the subsequent nonlinear stereo refinement. As the rigid body transformation between the 2-D calibration planes is also obtained, a possible affine mirror ambiguity in the affine reconstruction result can be robustly corrected. The calibration routine is validated by an experimental calibration and various plausibility tests. Because a calibration object with metric information is used, the determined camera projection matrices allow for a triangulation of correctly scaled metric 3-D points without the need for an individual determination of each camera's magnification.
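The scale correction and mirror-ambiguity handling described above can be illustrated with a generic closed-form similarity estimate (Umeyama's method). The sketch below is our own illustration, not the authors' implementation; the function name and the reflection test are assumptions:

```python
import numpy as np

def similarity_align(A, B):
    """Estimate scale s, rotation R, translation t with B ~ s * R @ A + t
    (Umeyama's closed-form method). A, B: 3xN corresponding point sets.

    Returns (s, R, t, mirrored); mirrored=True means the best alignment
    requires a reflection, i.e. A is a mirror image of B -- the situation
    the abstract calls an affine mirror ambiguity."""
    n = A.shape[1]
    mu_a = A.mean(axis=1, keepdims=True)
    mu_b = B.mean(axis=1, keepdims=True)
    Ac, Bc = A - mu_a, B - mu_b
    Sigma = Bc @ Ac.T / n                      # cross-covariance
    U, D, Vt = np.linalg.svd(Sigma)
    mirrored = bool(np.linalg.det(U) * np.linalg.det(Vt) < 0)
    S = np.diag([1.0, 1.0, -1.0 if mirrored else 1.0])
    R = U @ S @ Vt                             # always a proper rotation
    var_a = np.mean(np.sum(Ac ** 2, axis=0))   # spread of the source set
    s = np.trace(np.diag(D) @ S) / var_a       # least-squares scale
    t = mu_b - s * R @ mu_a
    return s, R, t, mirrored
```

In the calibration routine's terms, `s` rescales the affine reconstruction to metric units, and a `mirrored` result signals that one axis of the reconstruction must be flipped before the nonlinear refinement.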

    Projector Self-Calibration using the Dual Absolute Quadric

    The applications for projectors have increased dramatically since their origins in cinema. These include augmented reality, information displays, 3D scanning, and even archiving and surgical intervention. One common thread between all of these applications is the necessary step of projector calibration. Projector calibration can be a challenging task, and requires significant effort and preparation to ensure accuracy and fidelity. This is especially true in large-scale, multi-projector installations used for projection mapping. Generally, the cameras for projector-camera systems are calibrated off-site, and then used in-field under the assumption that the intrinsics have remained constant. However, the assumption of off-site calibration imposes several hard restrictions. Among these is that the intrinsics remain invariant between the off-site calibration process and the projector calibration site. This assumption is easily invalidated upon physical impact or a change of lenses. To address this, camera self-calibration has been proposed for the projector calibration problem. However, currently proposed methods suffer from degenerate conditions that are easily encountered in practical projector calibration setups, resulting in undesirable variability and a distinct lack of robustness. In particular, the condition of near-intersecting optical axes of the camera positions used to capture the scene results in high variability and significant error in the recovered camera focal lengths. As such, a more robust method was required. To address this issue, an alternative camera self-calibration method is proposed. In this thesis we demonstrate our method of projector calibration with unknown and uncalibrated cameras via autocalibration using the Dual Absolute Quadric (DAQ). This method results in a significantly more robust projector calibration process, especially in the presence of correspondence noise, when compared with previous methods.
    We use the DAQ method to calibrate the cameras using projector-generated correspondences, by upgrading an initial projective calibration to metric, and subsequently calibrating the projector using the recovered metric structure of the scene. Our experiments provide strong evidence of the brittle behaviour of existing methods of projector self-calibration by evaluating them in near-degenerate conditions using both synthetic and real data. Further, they also show that the DAQ can be used successfully to calibrate a projector-camera system and to robustly reconstruct the surface used for projection mapping, where previous methods fail.
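The linear heart of DAQ autocalibration can be sketched as follows. For each projective camera P, the dual image of the absolute conic is omega = P Q P^T, where Q is the 4x4 symmetric dual absolute quadric; assumptions on the intrinsics (here: zero skew, unit aspect ratio, principal point at the origin — common simplifications, not necessarily the thesis's exact constraints) make each view contribute four linear equations in the ten entries of Q:

```python
import numpy as np

def solve_daq(cams):
    """Linear estimate of the dual absolute quadric (DAQ) from a list of
    3x4 projective cameras, assuming zero skew, unit aspect ratio, and a
    principal point at the image origin.  Under these assumptions the
    DIAC omega = P @ Q @ P.T must satisfy omega[0,1] = omega[0,2] =
    omega[1,2] = 0 and omega[0,0] = omega[1,1] for every view.
    Returns the symmetric 4x4 quadric Q up to scale."""
    idx = [(i, j) for i in range(4) for j in range(i, 4)]  # 10 unknowns

    def row(P, a, b):
        # linear coefficients of omega[a, b] over the entries of Q
        r = np.empty(10)
        for k, (i, j) in enumerate(idx):
            c = P[a, i] * P[b, j]
            if i != j:
                c += P[a, j] * P[b, i]
            r[k] = c
        return r

    rows = []
    for P in cams:
        rows.append(row(P, 0, 1))                 # zero skew
        rows.append(row(P, 0, 2))                 # principal point x = 0
        rows.append(row(P, 1, 2))                 # principal point y = 0
        rows.append(row(P, 0, 0) - row(P, 1, 1))  # unit aspect ratio
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    q = Vt[-1]                                    # null-space solution
    Q = np.zeros((4, 4))
    for k, (i, j) in enumerate(idx):
        Q[i, j] = Q[j, i] = q[k]
    if (cams[0] @ Q @ cams[0].T)[2, 2] < 0:       # fix the overall sign
        Q = -Q
    return Q
```

Once Q is known, each camera's focal length follows from omega = P Q P^T as f = sqrt(omega[0,0] / omega[2,2]), and the eigendecomposition of Q yields the homography that upgrades the projective calibration to metric. The near-intersecting-optical-axes degeneracy discussed above shows up here as a rank-deficient equation system.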

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and greater possibilities for communication among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, and a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.

    Astrometry with the Wide-Field InfraRed Space Telescope

    The Wide-Field InfraRed Space Telescope (WFIRST) will be capable of delivering precise astrometry for faint sources over the enormous field of view of its main camera, the Wide-Field Imager (WFI). This unprecedented combination will be transformative for the many scientific questions that require precise positions, distances, and velocities of stars. We describe the expectations for the astrometric precision of the WFIRST WFI in different scenarios, illustrate how a broad range of science cases will see significant advances with such data, and identify aspects of WFIRST's design where small adjustments could greatly improve its power as an astrometric instrument.
    Comment: version accepted to JATI

    A factorization approach to inertial affine structure from motion

    We consider the problem of reconstructing a 3-D scene from a moving camera with a high frame rate using the affine projection model. This problem is traditionally known as Affine Structure from Motion (Affine SfM), and can be solved using an elegant low-rank factorization formulation. In this paper, we assume that an accelerometer and gyroscope are rigidly mounted to the camera, so that synchronized linear acceleration and angular velocity measurements are available together with the image measurements. We extend the standard Affine SfM algorithm to integrate these measurements through the use of image derivatives.
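The low-rank factorization underlying Affine SfM (the Tomasi–Kanade step the paper builds on) can be sketched as follows; this shows only the standard image-only factorization, not the paper's inertial extension, and all names are our own:

```python
import numpy as np

def affine_sfm(W):
    """Tomasi-Kanade factorization step of Affine SfM.

    W: 2m x n measurement matrix stacking the x- and y-coordinates of n
    points tracked over m frames.  Returns motion M (2m x 3), shape
    S (3 x n) and per-row translations t with W ~ M @ S + t, defined up
    to a 3x3 affine ambiguity (removed later by the Euclidean upgrade)."""
    t = W.mean(axis=1, keepdims=True)          # per-frame centroids
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    sqrt_s = np.sqrt(s[:3])
    M = U[:, :3] * sqrt_s                      # stacked affine cameras
    S = sqrt_s[:, None] * Vt[:3]               # affine structure
    return M, S, t
```

The rank-3 truncation is what makes the formulation "elegant": with noise-free affine projections the centered measurement matrix has rank at most three, so the SVD recovers motion and structure jointly.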

    3D facial merging for virtual human reconstruction

    There is an increasing need for easy and affordable technologies to automatically generate virtual 3D models from their real counterparts. In particular, 3D human reconstruction has driven the creation of many clever techniques, most of them based on the visual hull (VH) concept. Such techniques do not require expensive hardware; however, they tend to yield 3D humanoids with realistic bodies but mediocre faces, since the VH cannot handle concavities. On the other hand, structured light projectors make it possible to capture very accurate depth data, and thus to reconstruct realistic faces, but they are too expensive to deploy several of them. We have developed a technique to merge a VH-based 3D mesh of a reconstructed humanoid with the depth data of its face, captured by a single structured light projector. By combining the advantages of both systems in a simple setting, we are able to reconstruct realistic 3D human models with believable faces.