6 research outputs found

    Towards photography through realistic fog

    No full text
    © 2018 IEEE. Imaging through fog has important applications in industries such as self-driving cars, augmented driving, airplanes, helicopters, drones, and trains. Here we show that time profiles of light reflected from fog follow a distribution (Gamma) that is different from that of light reflected from objects occluded by fog (Gaussian). This helps to distinguish between background photons reflected from the fog and signal photons reflected from the occluded object. Based on this observation, we recover the reflectance and depth of a scene obstructed by dense, dynamic, and heterogeneous fog. For practical use cases, the imaging system is designed in optical reflection mode with a minimal footprint and is based on LIDAR hardware. Specifically, we use a single photon avalanche diode (SPAD) camera that time-tags individual detected photons. A probabilistic computational framework is developed to estimate the fog properties from the measurement itself, without prior knowledge. Other solutions are based on radar, which suffers from poor resolution (due to the long wavelength), or on time gating, which suffers from a low signal-to-noise ratio. The suggested technique is experimentally evaluated in a wide range of fog densities created in a fog chamber. It demonstrates recovering objects 57 cm away from the camera when the visibility is 37 cm. In that case it recovers depth with a resolution of 5 cm and scene reflectance with an improvement of 4 dB in PSNR and 3.4x in SSIM over time-gating techniques.
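
    The fog/signal separation idea above can be sketched per pixel: fit a Gamma model to the photon time tags (dominated by backscatter in dense fog) and attribute the largest residual histogram peak to the occluded object. This is only an illustrative reconstruction using generic SciPy fitting, not the authors' probabilistic framework; the distribution parameters, bin count, and the helper name separate_fog_and_signal are assumptions made for the example.

    # Minimal per-pixel sketch of separating Gamma-distributed fog backscatter
    # from a Gaussian object return in SPAD time tags (illustrative only).
    import numpy as np
    from scipy import stats

    C = 3e8  # speed of light, m/s

    def separate_fog_and_signal(arrival_times_s, hist_bins=256):
        """Estimate a Gamma background from the raw time tags, then take the
        largest residual peak as the object return (depth and reflectance)."""
        t = np.asarray(arrival_times_s)

        # 1. Fit a Gamma distribution to all time tags; in dense fog the
        #    backscatter dominates, so this approximates the background model.
        shape, loc, scale = stats.gamma.fit(t, floc=0.0)

        # 2. Histogram the measurement and compute the expected fog counts.
        counts, edges = np.histogram(t, bins=hist_bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        bin_w = edges[1] - edges[0]
        fog_counts = len(t) * bin_w * stats.gamma.pdf(centers, shape, loc=0.0, scale=scale)

        # 3. The residual peak is attributed to the occluded object: its time of
        #    flight gives depth, its integrated counts a reflectance estimate.
        residual = np.clip(counts - fog_counts, 0.0, None)
        peak = np.argmax(residual)
        depth_m = 0.5 * C * centers[peak]                         # round trip -> distance
        reflectance = residual[max(0, peak - 2):peak + 3].sum()   # signal photons near peak
        return depth_m, reflectance

    # Synthetic example: fog backscatter plus an object at roughly 0.57 m.
    rng = np.random.default_rng(0)
    fog = rng.gamma(shape=2.0, scale=1.2e-9, size=20000)          # backscatter time tags
    obj = rng.normal(loc=2 * 0.57 / C, scale=0.1e-9, size=1500)   # object return
    print(separate_fog_and_signal(np.concatenate([fog, obj])))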

    Lensless Imaging with Compressive Ultrafast Sensing

    No full text

    Imaging Through Volumetric Scattering with a Single Photon Sensitive Camera

    No full text
    © 2018 The Author(s). Imaging through highly scattering media holds many opportunities in underwater and biomedical imaging. Here we leverage a single photon avalanche diode (SPAD) camera and experimentally demonstrate an imaging pipeline to see through turbid water in optical reflection mode.

    Object classification through scattering media with deep learning on time resolved measurement

    No full text
    © 2017 Optical Society of America. We demonstrate an imaging technique that allows identification and classification of objects hidden behind scattering media and is invariant to changes in calibration parameters within a training range. Traditional techniques to image through scattering solve an inverse problem and are limited by the need to tune a forward model with multiple calibration parameters (such as camera field of view, illumination position, etc.). Instead of tuning a forward model and directly inverting the optical scattering, we use a data-driven approach and leverage convolutional neural networks (CNN) to learn a model that is invariant to calibration parameter variations within the training range and nearly invariant beyond it. This effectively allows robust imaging through scattering conditions without sensitivity to calibration. The CNN is trained with a large synthetic dataset generated with a Monte Carlo (MC) model that contains random realizations of the major calibration parameters. The method is evaluated with a time-resolved camera, and multiple experimental results are provided, including pose estimation of a mannequin hidden behind a paper sheet, with 23 correct classifications out of 30 tests across three poses (76.6% accuracy on real-world measurements). This approach paves the way towards real-time practical non-line-of-sight (NLOS) imaging applications.
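
    As a rough illustration of the data-driven approach, the sketch below shows a small CNN that classifies time-resolved measurements, with the time bins treated as input channels. The layer sizes, input resolution, number of time bins, and three-class output are assumptions made for the example, not the architecture used in the paper; the training data would come from Monte Carlo renderings with randomized calibration parameters.

    # Illustrative PyTorch sketch (not the authors' network): a small CNN that
    # classifies time-resolved frames, treating time bins as input channels.
    import torch
    import torch.nn as nn

    class TimeResolvedClassifier(nn.Module):
        def __init__(self, time_bins=32, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(time_bins, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):              # x: (batch, time_bins, H, W)
            f = self.features(x).flatten(1)
            return self.classifier(f)

    # Minimal training-step sketch with random stand-in data; a real dataset
    # would be Monte Carlo renderings with randomized calibration parameters
    # (field of view, illumination position, ...).
    model = TimeResolvedClassifier()
    x = torch.randn(8, 32, 64, 64)                          # batch of time-resolved frames
    logits = model(x)
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
    loss.backward()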