
    Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light

    One of the solutions for depth imaging of a moving scene is to project a static pattern onto the object and use just a single image for reconstruction. However, if the motion of the object is too fast with respect to the exposure time of the image sensor, the patterns in the captured image are blurred and reconstruction fails. In this paper, we impose multiple projection patterns onto each single captured image to realize temporal super-resolution of the depth image sequences. With our method, multiple patterns are projected onto the object at a higher fps than is possible with a camera. In this case, the observed pattern varies depending on the depth and motion of the object, so we can extract temporal information about the scene from each single image. The decoding process is realized using a learning-based approach in which no geometric calibration is needed. Experiments confirm the effectiveness of our method, where sequential shapes are reconstructed from a single image. Both quantitative evaluations and comparisons with recent techniques were also conducted.
    Comment: 9 pages. Published at the International Conference on Computer Vision (ICCV 2017).

    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and the explosion of a balloon triggered by a flying dart, all of which were previously difficult or even impossible to capture with conventional approaches.
    Comment: This manuscript was originally submitted on 30th January 1
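The core of Fourier Transform Profilometry, isolating the carrier lobe in the Fourier domain and taking the angle of the inverse transform, can be sketched as follows. This is a minimal 1-D illustration of the classic FTP technique, not the authors' μFTP pipeline; the carrier frequency and filter width are arbitrary test values.

```python
import numpy as np

def ftp_phase(fringe, carrier_bin=8, halfwidth=4):
    """Wrapped-phase recovery from a single fringe image via
    Fourier Transform Profilometry (1-D, row by row): isolate the
    positive carrier lobe in the FFT and take the angle of the
    inverse transform."""
    rows, cols = fringe.shape
    spectrum = np.fft.fft(fringe, axis=1)
    mask = np.zeros(cols)
    mask[carrier_bin - halfwidth:carrier_bin + halfwidth + 1] = 1.0
    analytic = np.fft.ifft(spectrum * mask, axis=1)
    return np.angle(analytic)  # carrier phase + object phase, wrapped

# Synthetic check: an 8-cycle carrier modulated by a smooth phase bump.
x = np.arange(256)
phase_true = 0.5 * np.exp(-(((x - 128) / 40.0) ** 2))
fringe = 0.5 + 0.4 * np.cos(2 * np.pi * 8 * x / 256 + phase_true)
phase = ftp_phase(np.tile(fringe, (4, 1)))
```

Subtracting the known carrier from the recovered phase leaves the object phase, wrapped to (-π, π].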

    CMOS Architectures and circuits for high-speed decision-making from image flows

    We present architectures, CMOS circuits, and CMOS chips to process image flows at very high speed. This is achieved by exploiting bio-inspiration and performing processing tasks in a parallel manner, concurrently with image acquisition. A vision system is presented which makes decisions within the sub-msec range. This is well suited for defense and security applications requiring segmentation and tracking of rapidly moving objects.

    The focal plane instrumentation for the DUNE mission

    DUNE (Dark Universe Explorer) is a proposed mission to measure parameters of dark energy using weak gravitational lensing. The particular challenges of both the optical and infrared focal planes and the DUNE baseline solution are discussed. The DUNE visible Focal Plane Array (VFP) consists of 36 large-format red-sensitive CCDs, arranged in a 9x4 array together with the associated mechanical support structure and electronics processing chains. Four additional CCDs dedicated to attitude control measurements are located at the edge of the array. All CCDs are 4096 x 4096 pixel red-enhanced e2v CCD203-82 devices with square 12 μm pixels, operating from 550-920 nm. Combining four rows of CCDs provides a total exposure time of 1500 s. The VFP will be used in a closed-loop system by the spacecraft, which operates in a drift scan mode, in order to synchronize the scan and readout rates. The Near Infrared (NIR) FPA consists of a 5 x 12 mosaic of 60 Hawaii 2RG detector arrays from Teledyne, NIR bandpass filters for the wavelength bands Y, J, and H, the mechanical support structure, and the detector readout and signal processing electronics. The FPA is operated at a maximum temperature of 140 K for a low dark current of 0.02 e-/s. Each sensor chip assembly has 2048 x 2048 square pixels of 18 μm size (0.15 arcsec), sensitive in the 0.8 to 1.7 μm wavelength range. As the spacecraft is scanning the sky, the image motion on the NIR FPA is stabilized by a de-scanning mirror during the integration time of 300 s per detector. The total integration time of 1500 seconds is split among the three NIR wavelength bands. DUNE has been proposed to ESA's Cosmic Vision program and has been jointly selected with SPACE for an ESA Assessment Phase which has led to the joint Euclid mission concept.
    Comment: 9 pages. To appear in Proc. of SPIE Astronomical Telescopes and Instrumentation (23-28 June 2008, Marseille, France).

    Validation of a Confocal Light Sheet Microscope using Push Broom Translation for Biomedical Applications

    There exists a need for research into optical methods capable of image cytometry suitable for point-of-care technology. To simplify the mechanical components and advance the technology toward the point of care, an optical approach with no moving parts was proposed: a linear sensor combined with a push-broom translation method. Push-broom translation moves objects past the sensor for an extended field of view. A polydimethylsiloxane (PDMS) microfluidic chamber with a syringe pump was used to deliver objects past the sensor. The volumetric rate of the pump was matched to the integration time of the sensor to ensure images were formed with a realistic aspect ratio. An electro-chemical microfluidic system, redox-magnetohydrodynamics (R-MHD), was then investigated to eliminate the mechanical syringe pump, which showed deviations in linear speed at the specimen plane. To image with an adequate signal-to-background ratio within the deep chamber of the R-MHD device, an epitaxial light sheet confocal microscope (e-LSCM) was used to improve axial resolution. The linear sensor, having small pixels, blocked out-of-plane light while eliminating the need for the mechanical aperture used in traditional point-scanning confocal microscopy. The particular linear sensor used has binning modes that were used to vary the axial resolution by increasing the sensor aperture. This approach was validated by translating a mirror in the axial direction and measuring the remitted light intensity. The resulting curve estimated the real axial resolution of the microscope, which compared favorably to theoretical values. The R-MHD device and the e-LSCM were then synchronized to perform continuous imaging of fluorescent microspheres and cells in suspension. This study combines epitaxial light sheet confocal microscopy and electro-chemical microfluidics into a robust approach that could be used in future point-of-care image cytometry applications.
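The matching of pump flow to sensor integration time can be sketched as a simple rate calculation: the line period is chosen so the specimen advances one object-plane pixel per line readout, giving square object-space pixels. All numbers below are hypothetical illustrations, not values from the study.

```python
# All numbers are hypothetical, for illustration only.
pixel_pitch_um = 7.0    # linear-sensor pixel pitch
magnification  = 10.0   # optical magnification onto the sensor
chamber_w_mm   = 2.0    # microfluidic channel width
chamber_h_mm   = 0.1    # microfluidic channel depth
flow_ul_min    = 12.0   # syringe-pump volumetric rate

pixel_obj_mm = pixel_pitch_um / magnification / 1000.0  # pixel footprint at the specimen
v_mm_s = (flow_ul_min / 60.0) / (chamber_w_mm * chamber_h_mm)  # 1 uL = 1 mm^3
line_period_s = pixel_obj_mm / v_mm_s  # one pixel of travel per line readout
```

With these example values the mean flow speed is 1 mm/s and the required line period is 0.7 ms, i.e. a line rate of about 1.4 kHz.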

    Advanced photon counting techniques for long-range depth imaging

    The Time-Correlated Single-Photon Counting (TCSPC) technique has emerged as a candidate approach for Light Detection and Ranging (LiDAR) and active depth imaging applications. The work of this Thesis concentrates on the development and investigation of functional TCSPC-based long-range scanning time-of-flight (TOF) depth imaging systems. Although these systems have several different configurations and functions, all can facilitate depth profiling of remote targets at low light levels and with good surface-to-surface depth resolution. Firstly, a Superconducting Nanowire Single-Photon Detector (SNSPD) and an InGaAs/InP Single-Photon Avalanche Diode (SPAD) module were employed to develop kilometre-range TOF depth imaging systems at wavelengths of ~1550 nm. Secondly, a TOF depth imaging system at a wavelength of 817 nm incorporating a Complementary Metal-Oxide-Semiconductor (CMOS) 32×32 Si-SPAD detector array was developed. This system was used with structured illumination to examine the potential for covert, eye-safe and high-speed depth imaging. In order to improve the light coupling efficiency onto the detectors, the arrayed CMOS Si-SPAD detector chips were integrated with microlens arrays using flip-chip bonding technology, improving the fill factor by up to a factor of 15. Thirdly, a multispectral TCSPC-based full-waveform LiDAR system was developed using a tunable broadband pulsed supercontinuum laser source which can provide simultaneous multispectral illumination at wavelengths of 531, 570, 670 and ~780 nm. The multispectral reflectance data measured on a tree were used to determine physiological parameters relating to biomass and foliage photosynthetic efficiency as a function of the tree depth profile. Fourthly, depth images were estimated using spatial correlation techniques in order to reduce the aggregate number of photons required for depth reconstruction with low error. A depth imaging system was characterised and re-configured to reduce the effects of scintillation due to atmospheric turbulence. In addition, depth images were analysed in terms of spatial and depth resolution.
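The basic depth estimate behind such TOF systems, locating the return in the photon-timing histogram by cross-correlation with the instrumental response, can be sketched as follows. The histogram length, bin width, and Gaussian IRF here are illustrative assumptions, not parameters from the Thesis.

```python
import numpy as np

def tof_depth(hist, irf, bin_ps):
    """Depth from a TCSPC timing histogram: cross-correlate with the
    instrumental response function (IRF) and convert the best-match
    lag to a one-way range."""
    xcorr = np.correlate(hist, irf, mode="full")
    lag = int(np.argmax(xcorr)) - (len(irf) - 1)  # start offset in bins
    c = 299_792_458.0                             # speed of light, m/s
    return c * (lag * bin_ps * 1e-12) / 2.0       # round trip -> one way

# Synthetic return: a Gaussian IRF placed 200 bins in, over Poisson noise.
rng = np.random.default_rng(0)
irf = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)
hist = rng.poisson(1.0, 1024).astype(float)
hist[200:264] += 50 * irf
depth_m = tof_depth(hist, irf, bin_ps=2.0)  # 400 ps round trip -> ~6 cm
```

Matched filtering against the IRF is what lets the peak be located reliably even when the per-pixel photon count is low.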

    MOSFET Modulated Dual Conversion Gain CMOS Image Sensors

    In recent years, vision systems based on CMOS image sensors have gained significant ground over those based on charge-coupled devices (CCD). The main advantages of CMOS image sensors are their high level of integration, random accessibility, and low-voltage, low-power operation. Previously proposed high-dynamic-range enhancement schemes focused mainly on extending the sensor dynamic range at the high-illumination end; extension at the low-illumination end has not been addressed. Since most applications require low-noise, high-sensitivity characteristics for imaging of dark regions as well as dynamic range expansion toward bright regions, the availability of a low-noise, high-sensitivity pixel device is particularly important. In this dissertation, a dual-conversion-gain (DCG) pixel architecture is proposed; this architecture increases the signal-to-noise ratio (SNR) and the dynamic range of CMOS image sensors at both the low- and high-illumination ends. The dual-conversion-gain pixel improves the dynamic range by changing the conversion gain according to the illumination level, without increasing artifacts or raising the imaging readout noise floor. A MOSFET is used to modulate the capacitance of the charge sensing node. Under high illumination, a low conversion gain is used to achieve higher full-well capacity and wider dynamic range. Under low light, a high conversion gain is enabled to lower the readout noise and achieve excellent low-light performance. A prototype sensor using the new pixel architecture with a 5.6 μm pixel pitch was designed and fabricated in Micron Technology's 130 nm 3-metal, 2-poly silicon process. The peripheral circuits were designed to read out the pixel and support pixel characterization. The pixel design, readout timing, and operating voltages were optimized.
    A detailed sensor characterization was performed; a conversion gain of 127 μV/e- was achieved for the high conversion gain mode and 30.8 μV/e- for the low conversion gain mode. Characterization results confirm that a 42 ke- linear full well was achieved for the low conversion gain mode and 10.5 ke- for the high conversion gain mode. An average readout noise of 2.1 e- was measured for the high conversion gain mode and 8.6 e- for the low conversion gain mode. The total sensor dynamic range was extended to 86 dB by combining the two modes of operation, with a 46.2 dB maximum SNR. Several images were taken with the prototype sensor under different illumination levels; the simply processed color images show the clear advantage of the high conversion gain mode for low-light imaging.
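The reported figures are internally consistent, and the way the two modes combine can be checked with a few lines of arithmetic using only the numbers quoted above:

```python
import math

# Figures reported in the characterization above.
full_well_low_cg   = 42_000  # e-, low conversion gain mode
full_well_high_cg  = 10_500  # e-, high conversion gain mode
read_noise_high_cg = 2.1     # e- rms, high conversion gain mode
read_noise_low_cg  = 8.6     # e- rms, low conversion gain mode

# Combined dynamic range: largest signal (low-gain full well) over the
# smallest detectable signal (high-gain read noise).
dr_db = 20 * math.log10(full_well_low_cg / read_noise_high_cg)

# Peak SNR is shot-noise limited at full well: SNR = sqrt(N).
snr_db = 20 * math.log10(math.sqrt(full_well_low_cg))
```

This gives 20·log10(42000/2.1) ≈ 86 dB and 20·log10(√42000) ≈ 46.2 dB, matching the quoted dynamic range and maximum SNR.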

    Speckle pattern interferometry : vibration measurement based on a novel CMOS camera

    A digital speckle pattern interferometer based on a novel custom complementary metal-oxide-semiconductor (CMOS) array detector is described. The temporal evolution of the dynamic deformation of a test object is measured using inter-frame phase stepping. The flexibility of the CMOS detector is used to identify regions of interest with full-field time-averaged measurements and then to interrogate those regions with time-resolved measurements sampled at up to 7 kHz. The maximum surface velocity that can be measured and the number of measurement points are limited by the frame rate and the data transfer rate of the detector. The custom sensor used in this work is a modulated light camera (MLC), whose pixel design is still based on the standard four-transistor active pixel sensor (APS), but each pixel has four large, independently shuttered capacitors that drastically boost the well capacity beyond that of the diode alone. Each capacitor represents a channel with its own shutter switch and can be operated either independently or in tandem with the others. This particular APS enables a novel approach to how the data are acquired and then processed. In this Thesis we demonstrate that, at a given frame rate and number of measurement points, the data transfer rate of our system is increased compared to that of a system using a standard approach. Moreover, under some assumptions, this gain in system bandwidth does not entail any reduction in the maximum surface velocity that can be reliably measured with inter-frame phase stepping.
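Inter-frame phase stepping in its standard four-step form recovers the wrapped interference phase from four frames stepped by π/2. A minimal sketch (the synthetic fringe values are illustrative, not data from the Thesis):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four interferograms stepped by pi/2:
    I_k = A + B*cos(phi + k*pi/2), k = 0..3, so that
    phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check with a known phase ramp inside (-pi, pi).
phi = np.linspace(-3.0, 3.0, 100)
frames = [1.0 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
est = four_step_phase(*frames)
```

The formula cancels both the background intensity A and the modulation depth B, which is why phase stepping is robust to non-uniform illumination across the speckle field.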

    High-speed holographic imaging using compressed sensing and phase retrieval

    Digital in-line holography serves as a useful encoder of spatial information, allowing three-dimensional reconstruction from a two-dimensional image. This is applicable to tasks such as fast motion capture and particle tracking. Sampling high-resolution holograms imposes a spatiotemporal tradeoff; we spatially subsample holograms to increase temporal resolution. We demonstrate this idea with two subsampling techniques: periodic and uniformly random sampling. The implementation includes an on-chip setup for periodic subsampling and a DMD (Digital Micromirror Device)-based setup for pixel-wise random subsampling. The on-chip setup enables a direct increase of up to 20x in camera frame rate. Alternatively, the DMD-based setup encodes temporal information as high-speed mask patterns and projects these masks within a single exposure (coded exposure). This way, the frame rate is improved to the level of the DMD, with a temporal gain of 10x. The reconstruction of subsampled data from the aforementioned setups is achieved in two ways: we examine and compare two iterative reconstruction methods, one an error-reduction phase retrieval and the other a sparsity-based compressed sensing algorithm. Both methods show a strong capability of reconstructing complex object fields. We present both simulations and real experiments. In the lab, we image and reconstruct the structure and movement of static polystyrene microspheres, microscopic moving Peranema, and macroscopic fast-moving fur and glitter.
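A minimal sketch of the first of the two methods, error-reduction phase retrieval on a randomly subsampled in-line hologram: angular-spectrum propagation between planes, an absorption-only object constraint, and measured amplitudes enforced at the sampled pixels. The geometry, wavelength, 50% sampling fraction, and choice of object constraint are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def error_reduction(meas_amp, mask, wavelength, dx, z, iters=50):
    """Recover the object field from a subsampled in-line hologram:
    keep measured amplitudes where mask==1, and enforce a pure-absorption
    object constraint (|t| <= 1) in the object plane."""
    field = meas_amp * np.exp(1j * 0.0)  # zero-phase start
    for _ in range(iters):
        obj = angular_spectrum(field, wavelength, dx, -z)  # back-propagate
        amp = np.minimum(np.abs(obj), 1.0)                 # absorption-only
        obj = amp * np.exp(1j * np.angle(obj))
        field = angular_spectrum(obj, wavelength, dx, z)   # forward
        field = np.where(mask, meas_amp * np.exp(1j * np.angle(field)), field)
    return obj

# Tiny demo: a 64x64 absorbing disk, hologram at z, 50% random subsampling.
rng = np.random.default_rng(1)
n, wl, dx, z = 64, 0.5e-6, 2e-6, 200e-6
yy, xx = np.mgrid[:n, :n]
obj_true = np.where((xx - 32) ** 2 + (yy - 32) ** 2 < 25, 0.3, 1.0)
holo = np.abs(angular_spectrum(obj_true.astype(complex), wl, dx, z))
mask = rng.random((n, n)) < 0.5
recon = error_reduction(holo * mask, mask, wl, dx, z, iters=30)
```

The compressed-sensing alternative named in the abstract would replace the absorption constraint with a sparsity prior; the propagation operator stays the same.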