
    COLOR MULTIPLEXED SINGLE PATTERN SLI

    Structured light pattern projection techniques are well-known methods of accurately capturing 3-dimensional information of a target surface. Traditional structured light methods require several different patterns to recover depth without ambiguity or albedo sensitivity, and are corrupted by object movement during the projection/capture process. This thesis presents and discusses a color multiplexed structured light technique for recovering object shape from a single image, making it insensitive to object motion. The method uses a single pattern whose RGB channels are each encoded with a unique subpattern. The pattern is projected onto the target and the reflected image is captured with a high-resolution color digital camera. The image is then separated into its individual color channels and analyzed for 3-D depth reconstruction through phase decoding and unwrapping algorithms, thereby establishing the viability of the color multiplexed single pattern technique. Compared to traditional methods (such as PMP or laser scanning), only one image/one-shot measurement is required to obtain the 3-D depth information of the object; the technique also requires less expensive hardware and normalizes albedo sensitivity and surface color reflectance variations. A cosine manifold and a flat surface are measured with sufficient accuracy, demonstrating the feasibility of a real-time system.
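The channel-multiplexing idea can be sketched in a few lines of numpy. This is a minimal illustration that assumes, purely for demonstration, that the three sub-patterns are 120°-shifted sinusoids of a common frequency (the abstract does not fix the sub-pattern design): one RGB pattern is built, and the wrapped phase is recovered from its separated channels with the standard three-step relation.

```python
import numpy as np

def make_rgb_pattern(width=640, height=480, freq=8):
    """One projector frame whose R, G, B channels carry three
    120-degree phase-shifted sinusoids (illustrative sub-pattern choice)."""
    x = np.arange(width) / width
    pattern = np.empty((height, width, 3))
    for c, shift in enumerate([-2 * np.pi / 3, 0.0, 2 * np.pi / 3]):
        pattern[:, :, c] = 0.5 + 0.5 * np.cos(2 * np.pi * freq * x + shift)
    return pattern

def decode_wrapped_phase(image):
    """Split a captured color image into channels and recover the wrapped
    phase with the standard three-step arctangent relation."""
    i1, i2, i3 = image[:, :, 0], image[:, :, 1], image[:, :, 2]
    return np.arctan2(np.sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3)

pattern = make_rgb_pattern()
phase = decode_wrapped_phase(pattern)  # on a flat target this is the carrier phase
```

In a real capture, channel crosstalk and surface color would have to be compensated before decoding, which is part of what the albedo normalization in the thesis addresses.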

    Real-time 3-D Reconstruction by Means of Structured Light Illumination

    Structured light illumination (SLI) is the process of projecting a series of striped light patterns such that, when viewed at an angle, a digital camera can reconstruct a 3-D model of a target object's surface. But because it relies on a series of time-multiplexed patterns, SLI is not typically associated with video applications. To acquire 3-D video, a common SLI technique is to drive the projector/camera pair at very high frame rates such that any object motion is small over the pattern set. At these high frame rates, however, the speed at which the incoming video can be processed becomes an issue, so much so that many video-based SLI systems record camera frames to memory and then apply off-line processing. In order to overcome this processing bottleneck and produce 3-D point clouds in real time, we present a lookup-table (LUT) based solution that, in our experiments on a 640 × 480 video stream, can generate intermediate phase data at 1063.8 frames per second and full 3-D coordinate point clouds at 228.3 frames per second; these rates are 25 and 10 times faster than previously reported studies. At the same time, a novel dual-frequency pattern is developed that combines a high-frequency sinusoid component with a unit-frequency sinusoid component, where the high-frequency component is used to generate robust phase information and the unit-frequency component is used to reduce phase unwrapping ambiguities. Finally, we developed a gamma model for SLI that can correct the non-linear distortion caused by the optical devices. For three-step phase measuring profilometry (PMP), analysis of the root mean squared error of the corrected phase showed a 60× reduction in phase error when gamma calibration is performed, versus a 33× reduction without calibration.
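A minimal sketch of the LUT idea for three-step PMP, assuming 8-bit images (the published system's exact table layout is not reproduced here): the three-step phase depends only on the two integer differences I1−I3 and 2·I2−I1−I3, so the arctangent can be tabulated once and each pixel then costs two subtractions and a lookup.

```python
import numpy as np

# Precompute arctan2 for every possible pair of 8-bit difference values
# (511 x 1021 entries), trading memory for per-pixel speed.
D1 = np.arange(-255, 256)        # possible values of I1 - I3
D2 = np.arange(-510, 511)        # possible values of 2*I2 - I1 - I3
LUT = np.arctan2(np.sqrt(3) * D1[:, None], D2[None, :])

def lut_phase(i1, i2, i3):
    """Wrapped phase for three 8-bit phase-shifted images via table lookup."""
    i1 = i1.astype(np.int32)
    i2 = i2.astype(np.int32)
    i3 = i3.astype(np.int32)
    return LUT[(i1 - i3) + 255, (2 * i2 - i1 - i3) + 510]

# Quick synthetic check: three 120-degree shifted fringe images
phi = np.linspace(0, 2 * np.pi, 64, endpoint=False)
i = [np.round(128 + 100 * np.cos(phi + s)).astype(np.uint8)
     for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
wrapped = lut_phase(*i)
```

The lookup result is bit-identical to computing `arctan2` directly on the quantized differences, which is what makes the table a pure speed optimization.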

    Development of a calibration pipeline for a monocular-view structured illumination 3D sensor utilizing an array projector

    Commercial off-the-shelf digital projection systems are commonly used in active structured illumination photogrammetry of macro-scale surfaces due to their relatively low cost, accessibility, and ease of use, and they can be described by an inverse pinhole model. The calibration pipeline of a 3D sensor utilizing pinhole devices in a projector-camera configuration is already well established. Recently, there have been advances in creating projection systems offering projection speeds greater than those available from conventional off-the-shelf digital projectors. However, these systems are chip-less and have no projection lens, so they cannot be calibrated using the well-established techniques based on the pinhole assumption. This work utilizes such unconventional projection systems, known as array projectors, which contain not one but multiple projection channels that project a temporal sequence of illumination patterns; none of the channels implements a digital projection chip or a projection lens. To work around the calibration problem, previous realizations of a 3D sensor based on an array projector required a stereo-camera setup, with triangulation taking place between the two pinhole-modelled cameras instead. However, a monocular setup is desired, as a single-camera configuration reduces cost, weight, and form factor. This study presents a novel calibration pipeline that realizes a single-camera setup. A generalized intrinsic calibration process without model assumptions was developed that directly samples the illumination frustum of each array projection channel. An extrinsic calibration process was then created that determines the pose of the single camera through a downhill simplex optimization initialized by particle swarm. Lastly, a method to store the intrinsic calibration with the aid of an easily realizable calibration jig was developed for re-use in arbitrary measurement camera positions, so that the intrinsic calibration does not have to be repeated.
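The extrinsic step, a downhill simplex refinement seeded by a particle swarm, can be sketched as below. The objective here is a toy stand-in for the actual camera-pose error (the real pipeline's objective, pose parameterization, and target values are not reproduced); the structure of the two-stage search is the point.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(p):
    """Stand-in for the camera-pose error: a quadratic bowl with small
    ripples, mimicking a multimodal reprojection-style cost surface.
    The target vector below is purely illustrative."""
    return np.sum((p - np.array([0.5, -1.2, 2.0])) ** 2) \
        + 0.1 * np.sum(np.sin(5 * p) ** 2)

# --- Stage 1: particle swarm does a coarse global search of pose space ---
n_particles, dim, iters = 30, 3, 100
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    val = np.array([objective(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

# --- Stage 2: downhill simplex (Nelder-Mead) refines the swarm's best ---
result = minimize(objective, gbest, method="Nelder-Mead")
```

The swarm's role is to land the simplex in the right basin of attraction; Nelder-Mead then converges without needing gradients, which suits a sampled, model-free frustum calibration.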

    Real Time Structured Light and Applications


    Spatial Augmented Reality Using Structured Light Illumination

    Spatial augmented reality is a kind of augmented reality technique that uses a projector to blend real objects with virtual content. Coincidentally, as a means of 3D shape measurement, structured light illumination also makes use of a projector, which generates the clues needed to establish the correspondence between the 2D image coordinate system and the 3D world coordinate system. It is therefore appealing to build a system that can carry out the functionalities of both spatial augmented reality and structured light illumination. In this dissertation, we present the hardware platforms we developed and their related applications in spatial augmented reality and structured light illumination. The first is a dual-projector structured light 3D scanning system with two synchronized projectors operating simultaneously; it outperforms the traditional single-projector structured light 3D scanning system in terms of the quality of the 3D reconstructions. Secondly, we introduce a modified dual-projector structured light 3D scanning system aimed at detecting and resolving multi-path interference. Thirdly, we propose an augmented reality face-paint system that detects a human face in a scene and paints the face with any desired colors by projection; the system also incorporates a second camera to realize 3D position tracking by exploiting the principle of structured light illumination. Finally, a structured light 3D scanning system with its own built-in machine vision camera is presented as future work. So far, the standalone camera has been completed from a bare CMOS sensor; with this customized camera, we can achieve high dynamic range imaging and better synchronization between the camera and the projector. The full system, which includes an HDMI transmitter, a structured light pattern generator, and synchronization logic, has yet to be completed due to the lack of a well-designed high-speed PCB.
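Once structured-light decoding has matched a camera pixel to a projector pixel, recovering the 3D point reduces to intersecting the two rays. The following is a generic least-squares (midpoint) triangulation sketch, not the dissertation's exact solver:

```python
import numpy as np

def triangulate(o_cam, d_cam, o_proj, d_proj):
    """Closest-point (midpoint) intersection of a camera ray and the
    projector ray identified by structured-light decoding.

    Minimizes |o_cam + s*d_cam - (o_proj + t*d_proj)|^2 over s, t and
    returns the midpoint of the two closest points."""
    d_cam = d_cam / np.linalg.norm(d_cam)
    d_proj = d_proj / np.linalg.norm(d_proj)
    w = o_cam - o_proj
    a, b, c = d_cam @ d_cam, d_cam @ d_proj, d_proj @ d_proj
    d, e = d_cam @ w, d_proj @ w
    denom = a * c - b * b            # zero only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o_cam + s * d_cam) + (o_proj + t * d_proj))

# Two rays that meet exactly at (1, 2, 5):
point = triangulate(np.zeros(3), np.array([1.0, 2.0, 5.0]),
                    np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 5.0]))
```

With noisy correspondences the rays are skew rather than intersecting, which is why the midpoint form, rather than an exact intersection, is the usual choice.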

    Acquisition of 3D shapes of moving objects using fringe projection profilometry

    Three-dimensional (3D) shape measurement for object surface reconstruction has potential applications in many areas, such as security, manufacturing, and entertainment. As an effective non-contact technique for 3D shape measurement, fringe projection profilometry (FPP) has attracted significant research interest because of its high measurement speed, high accuracy, and ease of implementation. Conventional FPP analysis approaches are applicable to the calculation of phase differences for static objects. However, 3D shape measurement of dynamic objects remains a challenging task, although it is in high demand in many applications. This thesis aims to enhance the measurement accuracy of FPP techniques for the 3D shape of objects moving in 3D space. The 3D movement of an object changes not only its position but also its height information with respect to the measurement system, resulting in motion-induced errors with existing FPP technology. The thesis presents the work conducted toward solutions of this challenging problem.

    Snapshot Three-Dimensional Surface Imaging With Multispectral Fringe Projection Profilometry

    Fringe Projection Profilometry (FPP) is a popular method for non-contact optical surface measurements, including motion tracking. The technique derives 3D surface maps from phase maps estimated from the distortions of fringe patterns projected onto the surface of an object. Estimation of phase maps is commonly performed with spatial phase retrieval algorithms that use a series of complex data-processing stages, and researchers must have advanced data-analysis skills to process FPP data due to a lack of simple research-oriented software tools. Chapter 2 describes a comprehensive FPP software tool called PhaseWareTM that allows novice to experienced users to perform pre-processing of fringe patterns, phase retrieval, phase unwrapping, and post-processing. Sequential acquisition of fringe patterns is required to sample the surface densely enough to estimate surface profiles accurately; however, sequential fringe acquisition performs poorly if the object moves between fringe projections. To overcome this limitation, we developed a novel method of FPP named multispectral fringe projection profilometry (MFPP), in which a multispectral filter array (MFA) composites multiple fringe patterns into a single multispectral illumination pattern and a single multispectral camera captures the composite frame. A single camera acquisition thus provides multiple fringe patterns, directly increasing imaging speed by a factor equal to the number of fringe patterns in the composite. Chapter 3 introduces this new technique and shows how it can be used to perform 3D profilometry at video frame rates. The first implementation of MFPP improved acquisition speed by a factor of eight by providing eight fringe patterns in four different directions, permitting the system to detect more morphological detail. However, its phase retrieval algorithm was based on a spatial phase stepping process with several limitations, including high sensitivity to fringe-pattern quality and, being a global process, spreading the effect of noisy pixels across the entire result; it also did not satisfy the condition for temporal phase retrieval, which requires at least three phase-shifted fringe patterns to characterize a surface. To overcome these limitations, Chapter 4 introduces an enhanced version of MFPP that uses a specially designed multispectral illuminator to simultaneously project four π/2 phase-shifted fringe patterns onto an object. Combined with a spectrally matched multispectral camera, the refined MFPP method provides the complete data needed for temporal phase retrieval from a single camera exposure, delivering high-accuracy, pixel-wise measurement (thanks to the temporal phase stepping algorithms) while maintaining a high sampling rate for profilometry of moving objects. In conclusion, MFPP overcomes the sequential-sampling limitation of FPP with temporal phase extraction without sacrificing data quality or the accuracy of the reconstructed surface profiles. Since MFPP utilizes no moving parts and is based on MEMS technology, it is amenable to miniaturization for use in mobile devices and may be useful for space-constrained applications such as robotic surgery.
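With four π/2 phase-shifted fringe images available in a single multispectral exposure, temporal phase retrieval reduces to the standard four-step relation. A small synthetic sketch (the band intensities below are illustrative, not measured data):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Temporal phase retrieval from four pi/2 phase-shifted fringe images,
    I_k = A + B*cos(phi + k*pi/2), k = 0..3:
        I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic stand-in for the four spectral bands of one exposure
phi = np.linspace(-np.pi, np.pi, 256, endpoint=False)
imgs = [128 + 100 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*imgs)
```

Because the background term A cancels in both differences, the relation is insensitive to uniform illumination offsets, which is one reason temporal phase stepping is more robust than the earlier spatial approach.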

    Real-time 3D surface-shape measurement using fringe projection and system-geometry constraints

    Optical three-dimensional (3D) surface-shape measurement has diverse applications in engineering, computer vision, and medical science. Fringe projection profilometry (FPP) uses a camera-projector system to permit high-accuracy, full-field 3D surface-shape measurement by projecting fringe patterns onto an object surface, capturing images of the deformed patterns, and computing the 3D surface geometry. A wrapped phase map can be computed from the camera images by phase analysis techniques. Phase unwrapping resolves the phase ambiguity of the wrapped phase map and permits determination of camera-projector correspondences; the object surface geometry can then be reconstructed by stereovision techniques after system calibration. For real-time 3D measurement, geometry-constraint based methods may be preferred over other phase-unwrapping methods, since they can handle surface discontinuities, which are problematic for spatial phase unwrapping, and they do not require the additional patterns needed in temporal phase unwrapping. However, the fringe patterns used in geometry-constraint based methods are usually designed with a low frequency in order to maximize the reliability of correspondence determination. Although high-frequency fringe patterns have proven effective in increasing measurement accuracy by suppressing phase error, they may reduce this reliability and thus are not commonly used. To address the limitations of current geometry-constraint based methods, a new fringe projection method for surface-shape measurement was developed that modulates the background and amplitude intensities of the fringe patterns to permit identification of the fringe order, and thus phase unwrapping, for high-frequency fringe patterns. Another method was developed with background modulation only, using four high-frequency phase-shifted fringe patterns. The pattern frequency is determined using a new fringe-wavelength geometry-constraint model that allows only two point candidates in the measurement volume. The correct corresponding point is selected with high reliability using a binary pattern computed from the background intensity. Equations for the geometry-constraint parameters permit parameter calculation prior to measurement, reducing computational cost during measurement. In a further development, a new real-time 3D measurement method was devised using new background-modulated, modified Fourier transform profilometry (FTP) fringe patterns and geometry constraints; this method reduces the number of fringe patterns required for 3D surface reconstruction to two. A short camera-projector baseline allows reliable corresponding-point selection, even with high-frequency fringe patterns, and a new calibration approach reduces the error induced by the short baseline. Experiments demonstrated the ability of the methods to perform real-time 3D measurement for a surface with geometric discontinuity and for spatially isolated objects. Although multi-image FPP techniques can achieve higher accuracy than single-image methods, they suffer from motion artifacts when measuring dynamic object surfaces that are moving or deforming. To reduce motion-induced measurement error for multi-image FPP techniques, a new method was developed that first estimates the motion-induced phase-shift errors by computing the differences between phase maps over a multiple-measurement sequence; a phase map with reduced motion-induced error is then computed using the estimated phase-shift errors. This motion-induced error compensation is computed pixel-wise to handle non-homogeneous surface motion. Experiments demonstrated the ability of the method to reduce motion-induced error in real time, for real-time shape measurement of surfaces with high depth variation and of moving and deforming surfaces.
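The background-modulation idea can be sketched as follows: the binary code rides on the DC term of the phase-shifted patterns, so averaging four π/2-shifted images cancels the cosine and exposes the code, while the same images yield the wrapped phase. This is a simplified illustration with a one-bit code on a 1-D pattern; the thesis' actual codes, amplitudes, and geometry-constraint model are not reproduced.

```python
import numpy as np

width, periods = 640, 8
x = np.arange(width)
phi = 2 * np.pi * periods * x / width      # high-frequency carrier phase
order = periods * x // width               # fringe-order index, 0..7
bit = order % 2                            # illustrative binary code
A = 90 + 60 * bit                          # background modulated by the code
B = 80                                     # fringe amplitude

# Four pi/2 phase-shifted patterns sharing the coded background
patterns = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]

# Decoding: the mean of the four shifted patterns is exactly the
# background (the cosine terms cancel), and the phase comes from the
# usual four-step relation.
background = sum(patterns) / 4
decoded_bit = (background > 120).astype(int)
wrapped = np.arctan2(patterns[3] - patterns[1], patterns[0] - patterns[2])
```

In the actual method the geometry constraint already narrows each pixel to two candidate correspondences, so even this single recovered bit is enough to pick the correct one.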

    Novel Approaches in Structured Light Illumination

    Among the various approaches to 3-D imaging, structured light illumination (SLI) is widespread. SLI employs a digital projector and digital camera pair such that correspondences can be found based upon the projection and capture of a group of designed light patterns. As an active sensing method, SLI is known for its robustness and high accuracy. In this dissertation, I study the phase shifting method (PSM), one of the most widely employed strategies in SLI, and propose three novel approaches. First, by regarding pattern design as placing points in an N-dimensional space, I take phase measuring profilometry (PMP) as an example and propose the edge-pattern strategy, which achieves the maximum signal-to-noise ratio (SNR) for the projected patterns. Second, I develop a novel period-information-embedded pattern strategy for fast, reliable 3-D data acquisition and reconstruction. The proposed period-coded phase shifting strategy removes the depth ambiguity associated with traditional phase shifting patterns without reducing phase accuracy or increasing the number of projected patterns; thus, it can be employed in high-accuracy real-time 3-D systems. Third, I propose a hybrid approach for high-quality 3-D reconstruction with only a small number of illumination patterns by maximizing the use of correspondence information from the phase, texture, and modulation data derived from multi-view, PMP-based SLI images, without rigorously synchronizing the cameras and projectors or calibrating the device gammas. Experimental results demonstrate the advantages of the proposed novel strategies for 3-D SLI systems.
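The phase-shifting strategies discussed all build on the N-step PMP relation, which has a compact complex-sum form. The following is a generic sketch of that baseline (not the proposed edge-pattern or period-coded designs themselves): for I_k = A + B·cos(φ + 2πk/N), the weighted sum Σ I_k·e^(i2πk/N) equals (NB/2)·e^(−iφ), so the wrapped phase is the negated angle of the sum.

```python
import numpy as np

def pmp_phase(images):
    """Wrapped phase from N equally phase-shifted PMP images via the
    complex-sum form of the standard N-step algorithm (N >= 3)."""
    n = len(images)
    weights = np.exp(1j * 2 * np.pi * np.arange(n) / n)
    s = sum(img * w for img, w in zip(images, weights))
    return -np.angle(s)

# Synthetic five-step example
phi = np.linspace(0, 2 * np.pi, 200, endpoint=False) - np.pi
imgs = [120 + 90 * np.cos(phi + 2 * np.pi * k / 5) for k in range(5)]
wrapped = pmp_phase(imgs)
```

Both the background A and amplitude B cancel out of the result, which is the property the period-coded strategy exploits: extra information can be embedded in the patterns without disturbing the recovered phase.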