
    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon explosion triggered by a flying dart, all of which were previously difficult or even impossible to capture with conventional approaches.
    Comment: This manuscript was originally submitted on 30th January 1
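
    The core of single-frame Fourier transform profilometry is extracting the wrapped phase from one high-frequency fringe image by isolating its +1 spectral order. Below is a minimal Python sketch of that step, assuming a Gaussian band-pass window; the function name, the carrier frequency f0, and the bandwidth value are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def extract_wrapped_phase(fringe_image, f0, bandwidth):
        """Recover the wrapped phase from a single high-frequency fringe image."""
        h, w = fringe_image.shape
        spectrum = np.fft.fftshift(np.fft.fft2(fringe_image))

        # Isolate the +1 spectral order around the carrier frequency f0
        # (in cycles per image) with a Gaussian band-pass window.
        u = np.arange(w) - w // 2
        v = np.arange(h) - h // 2
        U, V = np.meshgrid(u, v)
        window = np.exp(-((U - f0) ** 2 + V ** 2) / (2.0 * bandwidth ** 2))

        analytic = np.fft.ifft2(np.fft.ifftshift(spectrum * window))
        return np.angle(analytic)  # wrapped phase in (-pi, pi]

    # Example on synthetic fringes carrying a smooth phase bump.
    x = np.linspace(0, 1, 512)
    X, Y = np.meshgrid(x, x)
    bump = 2.0 * np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / 0.02)
    fringes = 0.5 + 0.5 * np.cos(2 * np.pi * 60 * X + bump)
    wrapped = extract_wrapped_phase(fringes, f0=60, bandwidth=15)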

    Computational temporal ghost imaging

    Ghost imaging is a fascinating process in which light interacting with an object is recorded without resolution, yet the shape of the object is nevertheless retrieved, thanks to quantum or classical correlations of this interacting light with either a computed or detected random signal. Recently, ghost imaging has been extended to a temporal object by using several thousand copies of this periodic object. Here, we present a very simple device, inspired by computational ghost imaging, that allows the retrieval of a single non-reproducible temporal signal, whether periodic or non-periodic. The reconstruction is performed by a single-shot, spatially multiplexed measurement of the spatial intensity correlations between computer-generated random images and the images, modulated by the temporal signal, that are recorded and summed on a CMOS camera chip used with no temporal resolution. Our device allows the reconstruction of either a single temporal signal with monochrome images or wavelength-multiplexed signals with color images.
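
    As a rough illustration of the correlation step described above, the following Python sketch reconstructs an unknown temporal signal from a single time-integrated camera image and the known computer-generated random patterns. The pattern statistics, array sizes, and normalization are assumptions for illustration, not the authors' experimental parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_pixels = 200, 4096  # temporal samples, camera pixels

    patterns = rng.random((n_samples, n_pixels))  # computer-generated random images
    signal = np.sin(np.linspace(0, 4 * np.pi, n_samples)) ** 2  # "unknown" temporal object

    # The camera integrates with no temporal resolution: one summed image.
    integrated_image = signal @ patterns  # shape (n_pixels,)

    # Reconstruct each temporal sample by correlating the summed image with the
    # corresponding mean-subtracted pattern (standard ghost-imaging estimator).
    centered = patterns - patterns.mean(axis=0)
    reconstruction = centered @ integrated_image / n_pixels

    # Ghost imaging recovers the signal only up to an offset and gain, so
    # rescale before comparing with the ground-truth signal.
    reconstruction -= reconstruction.min()
    reconstruction /= reconstruction.max()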

    Encoding of arbitrary micrometric complex illumination patterns with reduced speckle

    In nonlinear microscopy, phase-only spatial light modulators (SLMs) allow simultaneous two-photon excitation and fluorescence emission from specific regions of interest (ROIs). However, as iterative Fourier transform algorithms (IFTAs) can only approximate the illumination of the selected ROIs, both image formation and signal acquisition can be strongly affected by the spatial irregularities of the illumination patterns and by speckle noise. To overcome these limitations, we propose an alternative complex illumination method (CIM) able to generate simultaneous excitation of large-area ROIs with full control over the amplitude and phase of the light and with reduced speckle. As a proof of concept, we experimentally demonstrate single-photon excitation and second harmonic generation (SHG) with structured illumination over large-area ROIs.
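
    For context, a standard IFTA in the Gerchberg-Saxton style computes a phase-only SLM hologram whose far field approximates a target ROI intensity; this is the baseline whose speckle and spatial irregularities the proposed CIM aims to reduce, not the CIM itself. The Python sketch below uses illustrative names, a generic target, and an arbitrary iteration count.

    import numpy as np

    def ifta_phase_hologram(target_amplitude, n_iter=50, seed=0):
        """Return an SLM phase mask whose Fourier plane approximates the target."""
        rng = np.random.default_rng(seed)
        # Start from the target amplitude with a random phase in the Fourier plane.
        field = target_amplitude * np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
        for _ in range(n_iter):
            slm_phase = np.angle(np.fft.ifft2(field))  # phase-only constraint at the SLM
            far_field = np.fft.fft2(np.exp(1j * slm_phase))
            # Keep the evolving far-field phase, re-impose the target amplitude.
            field = target_amplitude * np.exp(1j * np.angle(far_field))
        return slm_phase

    # Example: a rectangular region of interest as the target illumination.
    target = np.zeros((256, 256))
    target[100:156, 80:176] = 1.0
    phase_mask = ifta_phase_hologram(target)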

    Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation

    The photo-response non-uniformity (PRNU) is a distinctive image sensor characteristic, and an imaging device inadvertently introduces its sensor's PRNU into all media it captures. The PRNU can therefore be regarded as a camera fingerprint and used for source attribution. The imaging pipeline in a camera, however, involves various processing steps that are detrimental to PRNU estimation. In the context of photographic images, these challenges have been successfully addressed and the method for estimating a sensor's PRNU pattern is well established. However, various additional challenges related to the generation of videos remain largely untackled. With this perspective, this work introduces methods to mitigate the disruptive effects of the widely deployed H.264 and H.265 video compression standards on PRNU estimation. Our approach involves an intervention in the decoding process to eliminate a filtering procedure applied at the decoder to reduce blockiness. It also utilizes decoding parameters to develop a weighting scheme that adjusts the contribution of video frames to the PRNU estimation process at the macroblock level. Results obtained on videos captured by 28 cameras show that our approach increases the PRNU matching metric by up to more than a factor of five over the conventional estimation method tailored for photos.
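
    For reference, the conventional PRNU workflow estimates the fingerprint from noise residuals of many frames and correlates a query residual against it. The Python sketch below illustrates this baseline under stated assumptions: a Gaussian filter stands in for the wavelet denoiser used in practice, and the paper's macroblock-level weighting driven by decoding parameters is not shown; all names and parameters are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(frame, sigma=1.5):
        """Residual W = I - denoise(I); the PRNU term is multiplicative in I."""
        return frame - gaussian_filter(frame, sigma)

    def estimate_prnu(frames):
        """Maximum-likelihood style estimate: K = sum(W_i * I_i) / sum(I_i**2)."""
        num = np.zeros_like(frames[0], dtype=np.float64)
        den = np.zeros_like(frames[0], dtype=np.float64)
        for frame in frames:
            frame = frame.astype(np.float64)
            num += noise_residual(frame) * frame
            den += frame ** 2
        return num / np.maximum(den, 1e-8)

    def ncc(a, b):
        """Normalized cross-correlation used as a simple matching metric."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Matching a query frame against a fingerprint built from reference frames:
    # score = ncc(noise_residual(query), estimate_prnu(reference_frames) * query)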