152 research outputs found

    A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexed Gray-code strategy.
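The time-multiplexed Gray-code baseline used for comparison can be sketched generically: each projector column is assigned a binary-reflected Gray code (adjacent stripes differ in exactly one bit, which limits decoding errors at stripe boundaries), one stripe pattern is projected per bit, and the thresholded image stack is decoded back to column indices. The following is an illustrative NumPy sketch, not the authors' implementation:

```python
import numpy as np

def graycode_patterns(width, n_bits):
    """One black/white stripe row per bit of each column's Gray code."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary-reflected Gray code per projector column
    shifts = np.arange(n_bits - 1, -1, -1)
    return ((gray[None, :] >> shifts[:, None]) & 1).astype(np.uint8)

def decode_graycode(bit_planes):
    """Recover each pixel's projector column index from thresholded bit images."""
    n_bits = bit_planes.shape[0]
    weights = 1 << np.arange(n_bits - 1, -1, -1)
    gray = (bit_planes.astype(np.int64) * weights[:, None]).sum(axis=0)
    binary, shift = gray.copy(), 1
    while shift < n_bits:  # Gray -> binary by cumulative XOR of shifted copies
        binary ^= binary >> shift
        shift *= 2
    return binary
```

Decoding the generated patterns directly recovers the original column indices; in practice the bit planes come from thresholded camera images.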

    Structured-light based sensing using a single fixed fringe grating: Fringe boundary detection and 3-D reconstruction

    Advanced electronic manufacturing requires the 3-D inspection of very small surfaces, such as the solder bumps on wafers for direct die-to-die bonding. Yet the microscopic size and the highly specular, textureless nature of the surfaces make the task difficult. The entire inspection system must also be small, so as to minimize restraint on the various moving parts involved in the manufacturing process. In this paper, we describe a new 3-D reconstruction mechanism for the task. The mechanism is based upon the well-known concept of structured-light projection, but adapted to a new configuration that has a particularly small system size and operates in a different manner. Unlike traditional mechanisms, which involve an array of light sources occupying a rather extended physical space, the proposed mechanism consists of only a single light source plus a binary grating for projecting binary patterns. To allow the projection at each position of the inspected surface to vary and form a distinct binary code, the binary grating is shifted in space. At every shift, a separate image of the illuminated surface is taken. With the use of pattern projection, and of discrete rather than analog coding in the projection, issues like texture absence, image saturation, and image noise of the inspected surfaces are much lessened. Experimental results on a variety of objects are presented to illustrate the effectiveness of this mechanism. © 2008 IEEE.
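The discrete coding described above amounts to thresholding each captured image into a bit plane and packing the bit planes into one codeword per pixel. The sketch below is a generic NumPy illustration, not the paper's code; the per-pixel midpoint threshold is one common way to cope with varying surface albedo on textureless, specular surfaces:

```python
import numpy as np

def binarize_stack(images):
    """Per-pixel adaptive binarization of a stack of captured grating images.

    The midpoint of each pixel's min/max intensity over the shift sequence
    serves as its threshold, making the bits robust to albedo variation.
    """
    images = np.asarray(images, dtype=np.float64)
    thresh = 0.5 * (images.min(axis=0) + images.max(axis=0))
    return (images > thresh).astype(np.uint8)

def codewords(bits):
    """Pack the per-shift bit planes into one integer code per pixel."""
    n = bits.shape[0]
    weights = (1 << np.arange(n - 1, -1, -1)).reshape(-1, *([1] * (bits.ndim - 1)))
    return (bits.astype(np.int64) * weights).sum(axis=0)
```

Pixels whose intensity never changes over the shifts (e.g. unlit background) degenerate to code 0 and would be masked out in a real system.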

    System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
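The modulation/demodulation idea can be illustrated with a 1-D frequency-multiplexing toy model: each pattern is amplitude-modulated onto its own spatial carrier, the modulated patterns are summed into a composite, and each pattern is recovered by mixing with its carrier and low-pass filtering. This sketch only assumes well-separated carriers and band-limited patterns; it is not the patented implementation:

```python
import numpy as np

def composite(patterns, carriers, x):
    """Sum each 1-D pattern after amplitude-modulating it on its own carrier.

    patterns: list of 1-D intensity profiles; carriers: distinct spatial
    frequencies (cycles per sample), chosen so the modulated bands do not
    overlap (the "uncorrelated waveforms" condition in miniature).
    """
    return sum(p * np.cos(2 * np.pi * f * x) for p, f in zip(patterns, carriers))

def demodulate(signal, carrier_f, x, bandwidth=0.02):
    """Recover one pattern: mix down by its carrier, then low-pass filter."""
    mixed = signal * np.cos(2 * np.pi * carrier_f * x)
    spec = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    spec[freqs > bandwidth] = 0.0  # crude ideal low-pass in the DFT domain
    # mixing halves the baseband amplitude, so scale by 2 to restore it
    return 2.0 * np.fft.irfft(spec, n=len(x))
```

With carriers and pattern frequencies placed on exact DFT bins, each pattern is recovered from the composite essentially exactly; a real 2-D system would face crosstalk, noise, and surface-reflectance effects.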

    Digital Image Correlation Based on Projected Pattern for High Frequency Vibration Measurements

    The dynamic characterization of mechanical components is a crucial issue in industry, especially in the field of rotating machinery. High-frequency loads are typical in this field, and experimental tools must meet severe specifications to be able to analyze these high-speed phenomena. In this work, an experimental setup based on a Digital Image Correlation (DIC) technique with a projected speckle pattern is presented. The proposed approach allows the measurement of a vibrational response characterized by a single sinusoidal component with a frequency up to 500 Hz and an amplitude lower than 10 μm.
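Subset-based DIC, the core of the technique above, locates each speckle subset in the current image by maximizing a correlation criterion over candidate displacements. A minimal integer-pixel sketch using zero-normalized cross-correlation (ZNCC), offered as a generic illustration rather than the authors' setup:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def track_subset(ref, cur, top, left, size, search=5):
    """Integer-pixel displacement of a speckle subset by maximizing ZNCC
    over a small search window (the core of subset-based DIC)."""
    template = ref[top:top + size, left:left + size]
    best, best_dv, best_du = -2.0, 0, 0
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            if top + dv < 0 or left + du < 0:
                continue  # skip windows that fall off the image
            cand = cur[top + dv:top + dv + size, left + du:left + du + size]
            if cand.shape != template.shape:
                continue
            score = zncc(template, cand)
            if score > best:
                best, best_dv, best_du = score, dv, du
    return best_du, best_dv
```

Production DIC codes refine this integer estimate to sub-pixel accuracy with interpolation and shape functions; only the coarse search is shown here.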

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One has been released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be applied to any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    Sensor Architectures and Technologies for Upper Limb 3D Surface Reconstruction: A Review

    3D digital models of the upper limb anatomy represent the starting point for the design of bespoke devices, such as orthoses and prostheses, which can be modeled on the actual patient’s anatomy by using CAD (Computer Aided Design) tools. Ongoing research on optical scanning methodologies has produced technologies that allow the surface reconstruction of the upper limb anatomy through procedures characterized by minimal discomfort for the patient. However, the 3D optical scanning of upper limbs is a complex task that requires solving problematic aspects, such as the difficulty of keeping the hand in a stable position and the presence of artefacts due to involuntary movements. The scientific literature has investigated different approaches in this regard, either by integrating commercial devices to create customized sensor architectures or by developing innovative 3D acquisition techniques. The present work presents an overview of the state of the art of optical technologies and sensor architectures for the surface acquisition of upper limb anatomies. The review analyzes the working principles underlying existing devices and proposes a categorization of the approaches based on handling, pre/post-processing effort, and potential for real-time scanning. An in-depth analysis of the strengths and weaknesses of the approaches proposed by the research community is also provided to support the selection of the most appropriate solution for the specific application to be addressed.

    Vision Sensors and Edge Detection

    This book reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections. The first presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques, such as image measurements, image transformations, filtering, and parallel computing.
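As a representative of the edge-detection methods the book surveys, a gradient-magnitude edge map can be computed with 3x3 Sobel filters; this is a generic example, not code from the book:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient-magnitude edge map via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):  # accumulate the correlation one kernel tap at a time
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)
```

A vertical step edge produces a strong horizontal gradient in the two output columns straddling the step and zero response in flat regions.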

    Snapshot Three-Dimensional Surface Imaging With Multispectral Fringe Projection Profilometry

    Fringe Projection Profilometry (FPP) is a popular method for non-contact optical surface measurement, including motion tracking. The technique derives 3D surface maps from phase maps estimated from the distortions of fringe patterns projected onto the surface of an object. Estimation of phase maps is commonly performed with spatial phase retrieval algorithms that use a series of complex data processing stages, and a lack of simple research-oriented software tools means researchers must have advanced data analysis skills to process FPP data. Chapter 2 describes a comprehensive FPP software tool called PhaseWare™ that allows novice to experienced users to perform pre-processing of fringe patterns, phase retrieval, phase unwrapping, and post-processing. Accurate estimation of surface profiles requires the sequential acquisition of several fringe patterns to sample the surface densely enough; sequential projection and acquisition, however, perform poorly if the object moves between exposures. To overcome this limitation, we developed a novel method named multispectral fringe projection profilometry (MFPP), in which a multispectral filter array (MFA) composites multiple fringe patterns into a single multispectral illumination pattern and a single multispectral camera captures them in one frame. One camera acquisition thus provides multiple fringe patterns, increasing imaging speed by a factor equal to the number of fringe patterns in the composite. Chapter 3 introduces this technique and shows how it can be used to perform 3D profilometry at video frame rates. This first implementation of MFPP improved acquisition speed by a factor of eight, providing eight fringe patterns in four different directions, which permits the system to detect more morphological detail. However, its phase retrieval was based on a spatial phase stepping process with several limitations: it is highly sensitive to the quality of the fringe patterns and, being a global process, it spreads the effect of noisy pixels across the entire result; nor does it satisfy the requirements of temporal phase retrieval, which needs at least three phase-shifted fringe patterns to characterize a surface. To overcome these limitations, Chapter 4 introduces an enhanced version of MFPP that uses a specially designed multispectral illuminator to simultaneously project four π/2 phase-shifted fringe patterns onto an object. Combined with a spectrally matched multispectral camera, the refined MFPP method provides the complete data set for temporal phase retrieval from a single camera exposure, delivering accurate, pixel-wise measurements (thanks to the temporal phase stepping algorithms) while maintaining a high sampling rate for profilometry of moving objects. In conclusion, MFPP overcomes the limitations of sequential sampling imposed by FPP with temporal phase extraction without sacrificing data quality or accuracy of the reconstructed surface profiles. Since MFPP has no moving parts and is based on MEMS technology, it lends itself to miniaturization for mobile devices and may be useful in space-constrained applications such as robotic surgery.
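The temporal phase retrieval enabled by four phase-shifted fringe patterns follows the standard four-step phase-shifting formula: with I_k = A + B*cos(phi + k*pi/2), the wrapped phase is phi = atan2(I3 - I1, I0 - I2), independent of the background A and fringe contrast B. A minimal sketch (generic, not the thesis code):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four pi/2 phase-shifted fringe images.

    Assumes I_k = A + B*cos(phi + k*pi/2); the per-pixel arctangent yields
    phi wrapped to (-pi, pi], cancelling background A and contrast B.
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

The wrapped phase then goes through phase unwrapping and phase-to-height conversion to produce the 3D surface map.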