
    Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras

    Color-depth cameras (RGB-D cameras) have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras ship with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, high-precision object recognition and localization, ...). In this paper, we propose a human-friendly, reliable and accurate calibration framework that makes it easy to estimate both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model that unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method offers several important advantages over other state-of-the-art systems: it is general (i.e., well suited to different types of sensors), follows a simple and stable calibration protocol, provides greater calibration accuracy, and has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support these claims.
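
To make the calibration targets concrete, here is a minimal sketch (not the paper's actual error model) of the pinhole projection that the intrinsic matrix K and the extrinsics (R, t) describe; the numeric values are illustrative Kinect-like intrinsics, not figures from the paper.

```python
import numpy as np

def project_point(K, R, t, X):
    """Project a 3D world point X into pixel coordinates using the
    pinhole model: intrinsics K, extrinsic rotation R and translation t."""
    Xc = R @ X + t            # world -> camera frame
    x = Xc[:2] / Xc[2]        # perspective division
    u = K @ np.array([x[0], x[1], 1.0])
    return u[:2]

# Illustrative intrinsics: focal lengths fx, fy and principal point (cx, cy)
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # identity extrinsics for the sketch
t = np.zeros(3)
print(project_point(K, R, t, np.array([0.1, -0.05, 1.0])))  # ≈ [372.0, 213.25]
```

A calibration framework such as the one described estimates K, R and t (plus depth-error terms) so that projections of known calibration features best match their observed image positions.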

    Realization Of A Spatial Augmented Reality System - A Digital Whiteboard Using a Kinect Sensor and a PC Projector

    The recent rapid development of cost-effective, accurate digital imaging sensors, high-speed computational hardware, and tractable design software has given rise to the growing field of augmented reality in computer vision. The system design of a 'Digital Whiteboard' is presented with the intention of realizing a practical, cost-effective and publicly available spatial augmented reality system. A Microsoft Kinect sensor and a PC projector coupled with a desktop computer form a spatial augmented reality system that creates a projection-based graphical user interface able to turn any wall or planar surface into a 'Digital Whiteboard'. The system supports two kinds of user input, based on depth and infra-red information. An infra-red collimated light source, such as a laser pointer pen, serves as a stylus for user input. The user points the infra-red stylus at the selected planar region, and the reflection of the infra-red light is registered by the system through the Kinect's infra-red camera. Using the geometric transformation between the Kinect and the projector, obtained through system calibration, the projector displays contours corresponding to the movement of the stylus on the 'Digital Whiteboard' region, according to a smooth curve-fitting algorithm. The described projector-based spatial augmented reality system opens new possibilities for user interaction with digital content.
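
For a planar whiteboard surface, the geometric transformation between the Kinect's camera image and the projector image can be modelled as a homography. The sketch below estimates one with the standard direct linear transform (DLT); the point correspondences are invented for illustration and are not from the paper.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (both Nx2, N >= 4)
    with the direct linear transform, illustrating the camera-to-projector
    mapping obtained during system calibration."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return H / H[2, 2]

# Hypothetical corner correspondences: camera pixels -> projector pixels
src = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
dst = np.array([[10, 20], [790, 30], [780, 580], [5, 590]], float)
H = homography_dlt(src, dst)

# Map a stylus point seen by the camera into projector coordinates
p = H @ np.array([320.0, 240.0, 1.0])
print(p[:2] / p[2])
```

Once H is known, every registered stylus reflection can be warped into projector space so the drawn contour lands under the user's pen.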

    Non-contact free-form shape measurement for coordinate measuring machines

    Precision measurement of manufactured parts commonly uses contact measurement methods. A Coordinate Measuring Machine (CMM) mounted probe touches the surface of the part, recording the probe’s tip position at each contact. Recently, devices have been developed that continuously scan the probe tip across the surface, allowing points to be measured more quickly. Contact measurement is accurate and fast for shapes that are easily parameterized such as a sphere or a plane, but is slow and requires considerable user input for more general objects such as those with free-form surfaces. Phase stepping fringe projection and photogrammetry are common non-contact shape measurement methods. Photogrammetry builds a 3D model of feature points from images of an object taken from multiple perspectives. In phase stepping fringe projection a series of sinusoidal patterns, with a phase shift between each, is projected towards an object. A camera records a corresponding series of images. The phase of the pattern at each imaged point is calculated and converted to a 3D representation of the object’s surface. Techniques combining phase stepping fringe projection and photogrammetry were developed and are described here. The eventual aim is to develop an optical probe for a CMM to enable non-contact measurement of objects in an industrial setting. For the CMM to accurately report its position the probe must be small, light, and robust. The methods currently used to provide a phase shift require either an accurately calibrated translation stage to move an internal component, or a programmable projector. Neither of these implementations can be practically mounted on a CMM due to size and weight limits or the delicate parts required. A CMM probe consisting of a single camera and a fringe projector was developed. The fringe projector projects a fixed fringe pattern. Phase steps are created by moving the CMM mounted probe, taking advantage of the geometry of the fringe projection system. 
New techniques to calculate phase from phase-stepped images created by relative motion of probe and object are proposed, mathematically modelled, and tested experimentally. Novel techniques for absolute measurement of surfaces by viewing an object from different perspectives are developed. A prototype probe is used to demonstrate measurements of a variety of objects.
Engineering and Physical Sciences Research Council (EPSRC) Grant No. GR/T11289/0
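
The phase calculation at the heart of phase-stepping fringe projection can be illustrated with the textbook four-step algorithm (the thesis develops new techniques for motion-generated phase steps; this is only the standard baseline it builds on):

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images with pi/2 phase steps:
    I_k = A + B*cos(phi + (k-1)*pi/2).  Standard four-step formula."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic fringes with a known phase ramp, to illustrate the recovery
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
A, B = 0.5, 0.4                       # background and modulation amplitude
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
print(np.max(np.abs(recovered - phi)))  # near zero
```

The recovered phase is wrapped to (-pi, pi]; converting it to surface height additionally requires phase unwrapping and the calibrated projector-camera geometry.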

    Frequency-based image analysis of random patterns: an alternative way to classical stereocorrelation

    The paper presents an alternative to classical stereocorrelation. First, 2D image processing of random patterns is described; sub-pixel displacements are determined using phase analysis. Then distortion evaluation is presented: the distortion is identified without any assumption on the lens model, thanks to a grid-technique approach. Last, shape measurement and shape variation are captured by fringe projection. The analysis is based on pin-hole assumptions for both the video-projector and the camera. Fringe projection is then coupled with in-plane displacement measurement to yield a 3D measurement set-up. Metrological characterization shows a resolution comparable to classical (stereo)correlation techniques (1/100th of a pixel). Spatial resolution appears to be an advantage of the method, owing to the use of temporal phase stepping (shape measurement, 1 pixel) and a windowed Fourier transform (in-plane displacement measurement, 9 pixels). Two examples are given: the first is a study of skin properties; the second, of leather fabric. In both cases the results are convincing and have been exploited to give a mechanical interpretation.
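
Phase-based sub-pixel displacement estimation can be sketched with the Fourier shift theorem: a spatial shift shows up as a linear phase ramp in the cross-spectrum of the two images. This toy 1D version is illustrative only and is not the paper's windowed-Fourier-transform implementation:

```python
import numpy as np

def subpixel_shift_1d(f, g):
    """Estimate the sub-pixel shift d such that g(x) = f(x - d), from the
    cross-spectrum phase at the lowest non-zero frequency bin (Fourier
    shift theorem).  A sketch of phase analysis, not the paper's method."""
    F, G = np.fft.fft(f), np.fft.fft(g)
    cross = F * np.conj(G)          # phase of cross-spectrum encodes the shift
    n, k = len(f), 1                # lowest non-zero bin, robust for smooth signals
    return np.angle(cross[k]) * n / (2 * np.pi * k)

# Band-limited test signal shifted by a non-integer amount
n = 256
x = np.arange(n)
f = np.cos(2 * np.pi * x / n)
g = np.cos(2 * np.pi * (x - 2.5) / n)   # f shifted right by 2.5 samples
print(subpixel_shift_1d(f, g))  # close to 2.5
```

Real random-pattern analysis does this locally (e.g., in 9-pixel windows) to build a full in-plane displacement field rather than a single global shift.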

    Automated calibration of multi-sensor optical shape measurement system

    A multi-sensor optical shape measurement system (SMS) based on the fringe projection method and temporal phase unwrapping has recently been commercialised as a result of its easy implementation, computer control using a spatial light modulator, and fast full-field measurement. The main advantage of a multi-sensor SMS is the ability to make measurements with 360° coverage without needing to mount the measured component on translation and/or rotation stages. However, for greater acceptance in industry, issues relating to user-friendly calibration of the multi-sensor SMS in an industrial environment, so that measured data can be presented in a single coordinate system, need to be addressed. Calibration of multi-sensor SMSs typically requires a calibration artefact and significant user input for processing the calibration data, in order to obtain each sensor's optimal imaging-geometry parameters. These parameters provide a mapping from the acquired shape data to real-world Cartesian coordinates. However, obtaining optimal sensor imaging-geometry parameters (which involves a nonlinear numerical optimisation known as bundle adjustment) requires labelling regions within each point cloud as belonging to known features of the calibration artefact. This thesis describes an automated calibration procedure in which calibration data is processed through automated feature detection of the calibration artefact, artefact pose estimation, automated control-point selection, and finally bundle adjustment itself. [Continues.]
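
Bundle adjustment, as mentioned above, boils down to a nonlinear least-squares fit of imaging-geometry parameters to observed image points. A toy sketch with synthetic data follows (a hypothetical pinhole model with a single focal length and a translation; not the thesis's actual parameterisation):

```python
import numpy as np
from scipy.optimize import least_squares

# Known 3D control points on a hypothetical calibration artefact
X = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0], [0.0, 0.1, 2.0],
              [0.1, 0.1, 2.2], [-0.1, 0.05, 1.8]])

def project(params, X):
    """Simple pinhole projection: focal length f, camera translation t."""
    f, tx, ty, tz = params
    Xc = X + np.array([tx, ty, tz])
    return f * Xc[:, :2] / Xc[:, 2:3]

true = np.array([500.0, 0.02, -0.01, 0.05])
obs = project(true, X)                      # synthetic "measured" pixels

def residuals(params):
    """Reprojection error: predicted minus observed image points."""
    return (project(params, X) - obs).ravel()

fit = least_squares(residuals, x0=[450.0, 0.0, 0.0, 0.0])
print(fit.x)  # recovers the true parameters
```

A real multi-sensor calibration stacks such residuals over every sensor and every labelled artefact feature, which is exactly why automating the labelling step matters.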

    Merging of Fingerprint Scans Obtained from Multiple Cameras in 3D Fingerprint Scanner System

    Fingerprints are the most accurate and widely used biometric for human identification due to their uniqueness and the rapid, easy means of acquisition. Contact-based techniques of fingerprint acquisition, such as traditional ink and live-scan methods, are not user friendly, reduce the capture area, and cause deformation of fingerprint features. Improper skin conditions and worn friction ridges also lead to poor-quality fingerprints. A non-contact, high-resolution, high-speed scanning system has been developed to acquire a 3D scan of a finger using a structured-light illumination technique. The 3D scanner system consists of three cameras and a projector, with each camera producing a 3D scan of the finger. By merging the 3D scans obtained from the three cameras, a nail-to-nail fingerprint scan is obtained. However, the scans from the cameras do not merge perfectly. The main objective of this thesis is to calibrate the system so that the 3D scans obtained from the three cameras merge, or align, automatically. The error in merging is reduced by compensating for the radial distortion present in the projector of the scanner system. The residual merging error after radial distortion correction is then measured using the projector coordinates of the scanner system.
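
Radial distortion compensation of the kind described can be sketched with the common even-polynomial model; the coefficients and the fixed-point inversion below are illustrative, not the scanner's actual calibration:

```python
import numpy as np

def undistort_radial(xy, k1, k2):
    """Remove radial distortion from normalized coordinates xy (Nx2) under
    the model x_d = x_u * (1 + k1*r^2 + k2*r^4), inverted by fixed-point
    iteration.  k1, k2 are illustrative coefficients."""
    xu = xy.copy()
    for _ in range(20):                     # iterate x_u = x_d / (1 + k1 r^2 + k2 r^4)
        r2 = np.sum(xu ** 2, axis=1, keepdims=True)
        xu = xy / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xu

# Round-trip check: distort a known point, then undistort it
k1, k2 = -0.2, 0.05
x_true = np.array([[0.3, -0.4]])
r2 = np.sum(x_true ** 2)
x_dist = x_true * (1 + k1 * r2 + k2 * r2 ** 2)
print(undistort_radial(x_dist, k1, k2))  # close to x_true
```

For a projector the same model applies in reverse: since the projector "sees" through its optics like an inverted camera, its distortion is estimated from calibration data and removed from projector coordinates before the three per-camera scans are merged.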