
    System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    A technique, associated system, and program code for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping of the surface. Each signal waveform used to modulate a respective structured light pattern is distinct from every other signal waveform used to modulate the other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual-reality user-interaction interface with a computerized device; recognition and comparison of faces, other animal features, or inanimate objects for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
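    To make the composite-pattern idea concrete, the following is a minimal illustrative sketch (Python/NumPy, not the patented implementation) of building one composite image by amplitude-modulating several phase-shifted stripe patterns onto distinct spatial carrier frequencies; the resolution, pattern count, stripe frequency, and carrier frequencies below are assumed values chosen only for illustration.

        import numpy as np

        # Illustrative composite-pattern construction (all parameters assumed).
        H, W = 480, 640                 # projector resolution (assumed)
        N = 4                           # number of structured light patterns
        f_stripe = 8                    # stripe frequency of each base pattern (assumed)
        carriers = [20, 40, 60, 80]     # distinct, uncorrelated carrier frequencies (assumed)

        y = np.arange(H)[:, None] / H   # phase (stripe) direction
        x = np.arange(W)[None, :] / W   # carrier (modulation) direction

        composite = np.zeros((H, W))
        for n, fc in enumerate(carriers):
            base = 0.5 + 0.5 * np.cos(2 * np.pi * f_stripe * y + 2 * np.pi * n / N)
            carrier = np.cos(2 * np.pi * fc * x)   # carrier unique to this pattern
            composite += base * carrier            # amplitude-modulate and sum

        # Normalize to the projector's [0, 1] intensity range before projection.
        composite = (composite - composite.min()) / (composite.max() - composite.min())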

    System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    A technique, associated system, and program code for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping of the surface. Each signal waveform used to modulate a respective structured light pattern is distinct from every other signal waveform used to modulate the other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual-reality user-interaction interface with a computerized device; recognition and comparison of faces, other animal features, or inanimate objects for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
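    The companion recovery step can be sketched the same way: because each pattern rides on its own uncorrelated carrier, a simple synchronous (lock-in) demodulation of the captured reflection, multiplying by the known carrier and low-pass filtering along the carrier axis, separates the patterns again. The boxcar filter and window width below are assumptions for illustration, not the patent's prescribed processing.

        import numpy as np

        def recover_pattern(captured, fc, lp_width=31):
            """Synchronous demodulation: multiply the captured image by the known
            carrier for one pattern, then low-pass filter each row (boxcar filter,
            width assumed) to reject the other carriers."""
            W = captured.shape[1]
            x = np.arange(W)[None, :] / W
            mixed = captured * np.cos(2 * np.pi * fc * x)   # shift this pattern's band to DC
            kernel = np.ones(lp_width) / lp_width
            return np.stack([np.convolve(row, kernel, mode="same") for row in mixed])

        # captured  = camera image of the composite reflected from the surface
        # recovered = [recover_pattern(captured, fc) for fc in carriers]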

    System and Method for 3D Imaging using Structured Light Illumination

    A biometrics system captures and processes a handprint image using structured light illumination to create a 2D representation equivalent to a rolled inked handprint. The biometrics system includes an enclosure with a scan volume for placement of the hand. A reference plane with a backdrop pattern forms one side of the scan volume. The backdrop pattern is preferably a random noise pattern, and the coordinates of the backdrop pattern are predetermined at system provisioning. The biometrics system further includes at least one projection unit for projecting a structured light pattern onto a hand positioned in the scan volume on or in front of the backdrop pattern, and at least two cameras for capturing a plurality of images of the hand, wherein each of the plurality of images includes at least a portion of the hand and the backdrop pattern. A processing unit calculates 3D coordinates of the hand from the plurality of images, using the predetermined coordinates of the backdrop pattern to align the images, and maps the 3D coordinates to a flat 2D surface to create a 2D representation equivalent to a rolled inked handprint. The processing unit can also adjust calibration parameters for each hand scan by calculating coordinates of the portion of the backdrop pattern visible in at least one image and comparing them with the predetermined coordinates of the backdrop pattern.
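    As an illustration of how predetermined backdrop coordinates might be used, the sketch below estimates a least-squares affine alignment from backdrop points detected in a scan to their provisioned reference coordinates. The actual system's calibration model (affine, projective, or full stereo calibration) is not specified in this abstract, so treat this purely as a hypothetical stand-in.

        import numpy as np

        def estimate_alignment(observed_pts, reference_pts):
            """Least-squares affine transform taking backdrop points observed in one
            camera image (n x 2 array) to their predetermined reference coordinates."""
            n = len(observed_pts)
            A = np.hstack([observed_pts, np.ones((n, 1))])        # rows of [x, y, 1]
            M, *_ = np.linalg.lstsq(A, reference_pts, rcond=None)
            return M.T                                            # 2 x 3 affine matrix

        # observed  = backdrop features located in this scan's image (n x 2)
        # reference = their coordinates fixed at system provisioning (n x 2)
        # M = estimate_alignment(observed, reference)   # then use M to align the images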

    System and Method for 3D Imaging using Structured Light Illumination

    A biometrics system captures and processes a handprint image using structured light illumination to create a 2D representation equivalent to a rolled inked handprint. A processing unit calculates 3D coordinates of the hand from a plurality of captured images and maps the 3D coordinates to a flat 2D surface to create a 2D representation equivalent to a rolled inked handprint.
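    The 3D-to-2D flattening step can be illustrated with a toy cylindrical unrolling, which maps surface points around a roughly tubular feature (for example, a finger) to (arc length, height) coordinates. The real mapping to a rolled-equivalent print is more general; this sketch only shows the kind of transformation involved, with all names assumed.

        import numpy as np

        def unroll_cylinder(points_3d, radius=None):
            """Toy flattening: treat surface points (n x 3 array) as lying near a
            cylinder about the z axis and unwrap them to (arc length, height)."""
            x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
            theta = np.arctan2(y, x)                        # angle around the axis
            r = radius if radius is not None else np.median(np.hypot(x, y))
            return np.column_stack([r * theta, z])          # flat 2D coordinates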

    Lock and Hold Structured Light Illumination

    A method, system, and associated program code for 3-dimensional image acquisition, using structured light illumination, of a surface-of-interest under observation by at least one camera. One aspect includes: illuminating the surface-of-interest, while static/at rest, with structured light to obtain initial depth map data for it; while projecting a hold pattern comprised of a plurality of snake-stripes at the static surface-of-interest, assigning an identity and an initial lock position to each of the snake-stripes of the hold pattern; and, while projecting the hold pattern, tracking each of the snake-stripes from frame to frame. Another aspect includes: projecting a hold pattern comprised of a plurality of snake-stripes; as the surface-of-interest moves into a region under observation by at least one camera that also contains the projected hold pattern, assigning an identity and an initial lock position to each snake-stripe as it sequentially illuminates the surface-of-interest; and, while projecting the hold pattern, tracking each snake-stripe from frame to frame while it passes through the region. Yet another aspect includes: projecting, in sequence at the surface-of-interest positioned within a region under observation by at least one camera, a plurality of snake-stripes of a hold pattern by opening/moving a shutter cover; as each of the snake-stripes sequentially illuminates the surface-of-interest, assigning an identity and an initial lock position to that snake-stripe; and, while projecting the hold pattern, tracking each of the snake-stripes from frame to frame once it has illuminated the surface-of-interest and entered the region.
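    A minimal sketch of the frame-to-frame tracking idea: once each snake-stripe has an identity and a lock position, a nearest-neighbor association between the previous and current stripe detections carries the identities forward. The data layout, distance threshold, and hold-last-position fallback below are assumptions for illustration, not the patented tracker.

        import numpy as np

        def track_stripes(prev_centers, new_centers, max_shift=5.0):
            """prev_centers: {stripe_id: row position in the previous frame}.
            new_centers: detected stripe-center rows in the current frame (1-D array).
            Returns updated positions; an unmatched stripe holds its last lock position."""
            updated, taken = {}, set()
            for sid, prev_row in prev_centers.items():
                d = np.abs(new_centers - prev_row)
                j = int(np.argmin(d))
                if d[j] <= max_shift and j not in taken:
                    updated[sid] = float(new_centers[j])    # identity carried forward
                    taken.add(j)
                else:
                    updated[sid] = prev_row                 # hold the last locked position
            return updated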

    Dual-Frequency Phase Multiplexing (DFPM) and Period Coded Phase Measuring (PCPM) Pattern Strategies in 3-D Structured Light Systems, and Lookup Table (LUT) Based Data Processing

    A computer-implemented process, system, and computer-readable storage medium having stored thereon program code and instructions for 3-D triangulation-based image acquisition of a contoured surface/object-of-interest under observation by at least one camera ... For the remainder of this abstract, please download this patent.
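    Although the abstract is truncated, the title points to lookup-table (LUT) based processing of phase-measuring patterns. A common generic LUT formulation precomputes the arctangent over quantized sine/cosine sums of N phase-shifted images; the sketch below follows that generic approach, with the pattern count and table size assumed for illustration rather than taken from the patent.

        import numpy as np

        N = 4                               # phase-shifted frames (assumed)
        Q = 256                             # LUT quantization levels (assumed)
        s_axis = np.linspace(-1.0, 1.0, Q)
        c_axis = np.linspace(-1.0, 1.0, Q)
        phase_lut = np.arctan2(s_axis[:, None], c_axis[None, :])   # precomputed arctangent table

        def wrapped_phase(images):
            """images: N camera frames captured under patterns shifted by 2*pi*k/N."""
            s = sum(img * np.sin(2 * np.pi * k / N) for k, img in enumerate(images))
            c = sum(img * np.cos(2 * np.pi * k / N) for k, img in enumerate(images))
            norm = np.maximum(np.hypot(s, c), 1e-9)
            si = np.clip(((s / norm + 1) / 2 * (Q - 1)).astype(int), 0, Q - 1)
            ci = np.clip(((c / norm + 1) / 2 * (Q - 1)).astype(int), 0, Q - 1)
            return phase_lut[si, ci]        # wrapped phase in (-pi, pi], looked up rather than recomputed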

    Channel Capacity Model of Binary Encoded Structured Light-Stripe Illumination

    In this paper we have presented a theoretical basis for modeling and optimizing light-stripe techniques. This optimization was demonstrated through analysis of the entropy regions in stripe boundaries. Through minimization of the sampling period between high-entropy regions it is possible to attain an optimal spatial frequency while minimizing sampling error. This theory was demonstrated with a numerical light-stripe model. To validate the theoretical results, experimental data were presented to demonstrate maximum frequencies and maximum lateral resolutions for our example. Further enhancements to the lateral resolution can be obtained by interlacing valid regions in light structures through multiplexing. Inasmuch as the deviation of the stripe-center location is typically less than the lateral spacing, interlacing achieves equivalent range resolution yet surpasses lateral sampling resolution. Therefore interlacing should permit much higher lateral resolution in most applications. Although interlacing increases the number of frames required for encoding, our research demonstrates that the system encoding rate is still significantly increased. This result has a profound effect on light-stripe methodologies, especially successive striping techniques, by permitting optimization of lateral and range measurements. Partial funding for this research was provided by NASA cooperative agreement NCCW-60 through Western Kentucky University and the Center for Manufacturing Systems, University of Kentucky. The authors thank William Chimitt for assistance in the imaging process.
    [Fig. 9: combined encoding for determining interlace boundaries. Fig. 10: noninterlaced (top) and interlaced (bottom) range images.]
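    For context on the encoding rate being optimized: with conventional binary (here Gray-coded) stripe patterns, F projected frames address 2**F lateral stripe positions, and the paper's analysis concerns how finely such stripe boundaries can be packed and how interlacing can push lateral resolution beyond that baseline. The sketch below generates such a baseline pattern set; the paper's specific encoding may differ.

        import numpy as np

        def gray_code_stripe_patterns(num_frames, width):
            """num_frames binary frames address 2**num_frames stripe columns; Gray coding
            keeps only one stripe boundary changing between successive codes."""
            cols = np.arange(width)
            stripe = cols * (2 ** num_frames) // width        # stripe index per projector column
            gray = stripe ^ (stripe >> 1)                     # binary index -> Gray code
            return [((gray >> b) & 1).astype(np.uint8) for b in range(num_frames)]

        # Example: 6 projected frames resolve 2**6 = 64 lateral stripe positions.
        patterns = gray_code_stripe_patterns(6, width=640)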