    Illumination-Based Synchronization of High-Speed Vision Sensors

    To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition time of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
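    The abstract does not give the loop details, but the core PLL idea can be sketched: sample the modulated light each frame, derive a phase error against a local oscillator, and steer the frame clock with a PI loop filter. The Python below is a minimal illustration under assumed frequencies and gains (fs, f_ref, kp, ki are ours), not the authors' implementation.

```python
import numpy as np

# Minimal PLL sketch (assumed parameters, not the paper's implementation):
# the sensor samples an intensity-modulated illumination, mixes it with a
# local oscillator, and nudges its frame clock until the two are locked.

fs = 1000.0          # frame rate of the vision sensor [Hz] (assumed)
f_ref = 50.0         # nominal modulation frequency of the light [Hz] (assumed)
kp, ki = 0.2, 0.01   # PI loop-filter gains (assumed)

phase = 0.0                       # phase of the local oscillator [rad]
freq = 2 * np.pi * f_ref / fs     # initial frequency estimate [rad/frame]
integrator = 0.0

rng = np.random.default_rng(0)
n = np.arange(5000)
# Reference: modulated light with a small frequency offset plus noise.
ref = np.sin(2 * np.pi * (f_ref + 0.5) * n / fs) + 0.05 * rng.standard_normal(n.size)

for x in ref:
    # Phase detector: multiply the reference by the quadrature local oscillator.
    err = x * np.cos(phase)
    # PI loop filter: the integrator absorbs the frequency offset.
    integrator += ki * err
    control = kp * err + integrator
    # NCO update: frame timing advances by the corrected frequency.
    phase += freq + control

print(f"locked frequency estimate: {(freq + integrator) * fs / (2 * np.pi):.2f} Hz")
```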

    Photon Counting and Direct ToF Camera Prototype Based on CMOS SPADs

    This paper presents a camera prototype for 2D/3D image capture in low-illumination conditions based on a single-photon avalanche-diode (SPAD) image sensor for direct time-of-flight (d-ToF). The imager is a 64×64 array with in-pixel TDC for high frame rate acquisition. Circuit design techniques are combined to ensure successful 3D image capture under low-sensitivity conditions and high levels of uncorrelated noise such as dark counts and background illumination. Among them, the imager incorporates an innovative time-gated front end for the SPAD detector, a reverse start-stop scheme, and real-time image reconstruction at 1 kfps. To the best of our knowledge, this is the first ToF camera based on a SPAD sensor fabricated and proven for 3D image reconstruction in a standard CMOS process without any opto flavor or high-voltage option. It has a depth resolution of 1 cm at illumination powers from less than 6 nW/mm² down to 0.1 nW/mm². Funding: Office of Naval Research (USA) N000141410355; Ministerio de Economía y Competitividad TEC2015-66878-C3-1-R; Junta de Andalucía P12-TIC 233.
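    In a reverse start-stop scheme, the TDC is started by the photon detection and stopped by the next laser sync pulse, so TDC activity is spent only on frames where a photon actually arrives. A hypothetical sketch of the resulting code-to-depth conversion, with repetition period and TDC resolution assumed by us rather than taken from the paper:

```python
# Hypothetical reverse start-stop conversion for a d-ToF SPAD imager.
# T_REP and LSB are illustrative assumptions, not the paper's values.

C = 299_792_458.0            # speed of light [m/s]
T_REP = 100e-9               # laser pulse repetition period [s] (assumed)
LSB = 50e-12                 # TDC resolution [s/code] (assumed)

def depth_from_tdc(code: int) -> float:
    """Convert a reverse start-stop TDC code to depth in metres."""
    t_measured = code * LSB          # time from photon arrival to next sync
    tof = T_REP - t_measured         # actual round-trip time of flight
    return C * tof / 2.0             # one-way distance

# Example: a code of 1600 (80 ns measured) implies a 20 ns round trip, ~3 m.
print(f"{depth_from_tdc(1600):.2f} m")
```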

    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a flying dart, which were previously difficult or even impossible to capture with conventional approaches.
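    μFTP builds on classic Fourier Transform Profilometry, in which the object phase is extracted from a single fringe image by isolating the carrier lobe in the Fourier domain. A minimal sketch of that core step (carrier frequency, filter width, and the synthetic test pattern are illustrative choices of ours, not the paper's):

```python
import numpy as np

# Classic FTP phase retrieval: FFT each row, keep only the +f0 carrier
# lobe, inverse-FFT, and read the wrapped phase off the analytic signal.

def ftp_wrapped_phase(fringe: np.ndarray, f0: int, half_width: int) -> np.ndarray:
    """Extract the wrapped phase of a fringe image, row by row."""
    spectrum = np.fft.fft(fringe, axis=1)
    # Band-pass: keep only the +f0 carrier lobe, zero everything else.
    mask = np.zeros_like(spectrum)
    mask[:, f0 - half_width : f0 + half_width + 1] = 1.0
    analytic = np.fft.ifft(spectrum * mask, axis=1)
    # The angle of the analytic signal is carrier phase + object phase.
    return np.angle(analytic)

# Synthetic test: a fringe pattern deformed by a smooth "object" phase.
h, w, f0 = 64, 512, 40
x = np.arange(w)
obj = 2.0 * np.exp(-((x - w / 2) ** 2) / (2 * 60.0**2))   # object phase bump
img = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x / w + obj)    # fringe image row
phase = ftp_wrapped_phase(np.tile(img, (h, 1)), f0, half_width=20)
```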

    Improved Contrast Sensitivity DVS and its Application to Event-Driven Stereo Vision

    This paper presents a new DVS sensor with a contrast sensitivity improved by one order of magnitude over previously reported DVSs. This sensor has been applied to a bio-inspired event-based binocular system that performs 3D event-driven reconstruction of a scene. Events from two DVS sensors are matched by using precise timing information of their occurrence. To improve matching reliability, satisfaction of the epipolar geometry constraint is required, and simultaneously available information on the orientation is used as an additional matching constraint. Funding: Ministerio de Economía y Competitividad PRI-PIMCHI-2011-0768; Ministerio de Economía y Competitividad TEC2009-10639-C04-01; Junta de Andalucía TIC-609.
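    As a rough sketch of the matching rule described (timing coincidence plus the epipolar constraint), the snippet below assumes rectified cameras so the epipolar constraint reduces to comparing rows; the event layout, thresholds, and greedy strategy are our assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float      # timestamp [s]
    x: int        # column
    y: int        # row
    pol: int      # polarity (+1 / -1)

DT_MAX = 100e-6   # max timestamp difference for a match [s] (assumed)
DY_MAX = 1        # epipolar tolerance in rows after rectification (assumed)

def match_events(left: list[Event], right: list[Event]) -> list[tuple[Event, Event]]:
    """Greedy matching by time coincidence, polarity, and epipolar row."""
    pairs = []
    used = set()
    for el in left:
        best, best_dt = None, DT_MAX
        for j, er in enumerate(right):
            if j in used or er.pol != el.pol:
                continue
            if abs(er.y - el.y) > DY_MAX:          # epipolar constraint
                continue
            dt = abs(er.t - el.t)                  # timing constraint
            if dt <= best_dt:
                best, best_dt = j, dt
        if best is not None:
            used.add(best)
            pairs.append((el, right[best]))
    return pairs
```

    The disparity of each matched pair then yields depth by standard stereo triangulation.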

    A Signal Normalization Technique for Illumination-Based Synchronization of 1,000-fps Real-Time Vision Sensors in Dynamic Scenes

    To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition time of vision sensors should be synchronized. In this paper, an illumination-based synchronization method derived from the phase-locked loop (PLL) mechanism and built on a signal normalization technique is proposed and evaluated. The amplitude of the reference illumination fluctuates with moving objects or with changes in the relative distance between the light source and the observed objects, introducing a scene dependency. To eliminate it, the fluctuating amplitude of the reference signal is normalized frame by frame using the maximum amplitude estimated from the reference signal and its quadrature counterpart, yielding stable synchronization in highly dynamic scenes. Both simulated and real-world experimental results demonstrate that 1,000-Hz frame rate vision sensors can be successfully synchronized to an LED illumination, or to its reflected light, with satisfactory stability and a jitter of only 28 μs.
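    Our reading of the normalization step is that in-phase and quadrature correlations I and Q give an amplitude estimate sqrt(I² + Q²), and dividing the phase-error signal by it keeps the PLL loop gain constant as the illumination strength varies. A sketch under that assumption (function and variable names are ours):

```python
import numpy as np

# Amplitude-normalized phase detector: correlate one frame's light samples
# against a local sine/cosine, estimate the amplitude from the I/Q pair,
# and return an error in [-1, 1] that no longer scales with brightness.

def normalized_phase_error(samples: np.ndarray, lo_phase: np.ndarray) -> float:
    """Amplitude-invariant phase error from one frame of light samples."""
    i = np.mean(samples * np.cos(lo_phase))   # in-phase correlation
    q = np.mean(samples * np.sin(lo_phase))   # quadrature correlation
    amp = np.hypot(i, q)                      # estimated signal amplitude
    return q / amp if amp > 1e-12 else 0.0    # normalized error

# Check: a phase offset of dphi yields -sin(dphi) regardless of amplitude A.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
for A in (0.1, 1.0, 10.0):
    print(A, normalized_phase_error(A * np.cos(theta + 0.3), theta))
```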

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size, and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than 13 mm of error for plant size, leaf size, and internode distance.
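    The registration algorithm itself is not spelled out in the abstract; as an illustration of the core alignment math any such multi-view pipeline relies on, the Kabsch/SVD method below recovers the rigid transform between corresponding point sets from two viewing angles. This is a generic textbook step, not the paper's method.

```python
import numpy as np

# Kabsch/SVD rigid registration: given corresponding (N, 3) point sets,
# recover the rotation R and translation t that best align src onto dst.

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| over all points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)         # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t
```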

    Toward 1 mm depth precision with a solid-state full-field range imaging system

    Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. This system is comprised of modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine phase, and hence distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision on the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
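    The range computation implied by the abstract can be sketched as follows: each pixel samples the 1 Hz beat between the LED and intensifier modulations, the phase of that beat is the ranging phase, and distance follows from the 10 MHz modulation wavelength. The frame count and the assumption that the stack spans exactly one beat period are ours:

```python
import numpy as np

# Heterodyne range sketch: extract each pixel's beat phase via the first
# DFT bin of its time series, then convert phase to distance using the
# 10 MHz modulation (ambiguity range c / (2 * F_MOD) = 15 m).

C = 299_792_458.0      # speed of light [m/s]
F_MOD = 10e6           # LED / intensifier modulation frequency [Hz]
N_FRAMES = 30          # frames over one 1 Hz beat period (assumed)

def range_image(frames: np.ndarray) -> np.ndarray:
    """frames: (N_FRAMES, H, W) stack sampling exactly one beat period."""
    n = np.arange(N_FRAMES)
    # Correlate each pixel's time series against the beat (DFT bin 1).
    lo = np.exp(-2j * np.pi * n / N_FRAMES)
    phase = np.angle(np.tensordot(lo, frames, axes=(0, 0)))
    phase = np.mod(phase, 2 * np.pi)          # wrap to [0, 2π)
    return C * phase / (4 * np.pi * F_MOD)    # metres, within ambiguity range
```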