
    HDR Denoising and Deblurring by Learning Spatio-temporal Distortion Model

    We seek to reconstruct sharp and noise-free high-dynamic range (HDR) video from a dual-exposure sensor that records different low-dynamic range (LDR) information in different pixel columns: odd columns provide low-exposure, sharp, but noisy information; even columns complement this with less noisy, high-exposure, but motion-blurred data. Previous LDR work learns to deblur and denoise (DISTORTED->CLEAN), supervised by pairs of CLEAN and DISTORTED images. Regrettably, capturing DISTORTED sensor readings is time-consuming; moreover, CLEAN HDR videos are scarce. We suggest a method to overcome those two limitations. First, we learn the reverse mapping instead: CLEAN->DISTORTED, which generates samples containing correlated pixel noise, row and column noise, and motion blur from a low number of CLEAN sensor readings. Second, as there is not enough CLEAN HDR video available, we devise a method to learn from LDR video instead. Our approach compares favorably to several strong baselines, and can boost existing methods when they are re-trained on our data. Combined with spatial and temporal super-resolution, it enables applications such as re-lighting with low noise or blur.
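
    The learned CLEAN->DISTORTED model itself is not reproduced here, but the forward process it approximates can be illustrated. Below is a minimal hand-crafted sketch (not the authors' network) of a dual-exposure distortion simulator; all gain and noise parameters are illustrative assumptions.

```python
import numpy as np

def distort(clean_frames, short_ratio=8.0, full_well=1000.0,
            read_noise=0.02, row_noise=0.01, col_noise=0.01, rng=None):
    """clean_frames: (T, H, W) stack of linear HDR frames in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    t, h, w = clean_frames.shape

    def expose(mean_signal, exposure):
        # Photon shot noise: fewer photons at short exposure -> noisier;
        # digital gain then brings the result back to a common scale.
        photons = rng.poisson(mean_signal * exposure * full_well)
        return photons / (exposure * full_well)

    # Long exposure integrates the whole stack -> motion blur, low noise.
    long_col = expose(clean_frames.mean(axis=0), 1.0)
    # Short exposure samples one frame -> sharp, but photon-starved and noisy.
    short_col = expose(clean_frames[t // 2], 1.0 / short_ratio)
    # Odd columns carry the short exposure, even columns the long one.
    out = np.where(np.arange(w)[None, :] % 2 == 1, short_col, long_col)
    out = out + rng.normal(0.0, read_noise, out.shape)  # sensor read noise
    out = out + rng.normal(0.0, row_noise, (h, 1))      # correlated row noise
    out = out + rng.normal(0.0, col_noise, (1, w))      # correlated col noise
    return out.astype(np.float32)

frames = np.random.rand(5, 64, 64)  # stand-in for CLEAN sensor readings
print(distort(frames).shape)        # (64, 64) simulated DISTORTED readout
```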

    MantissaCam: Learning Snapshot High-dynamic-range Imaging with Perceptually-based In-pixel Irradiance Encoding

    The ability to image high-dynamic-range (HDR) scenes is crucial in many computer vision applications. The dynamic range of conventional sensors, however, is fundamentally limited by their well capacity, resulting in saturation of bright scene parts. To overcome this limitation, emerging sensors offer in-pixel processing capabilities to encode the incident irradiance. Among the most promising encoding schemes is modulo wrapping, which results in a computational photography problem where the HDR scene is computed by an irradiance unwrapping algorithm from the wrapped low-dynamic-range (LDR) sensor image. Here, we design a neural network-based algorithm that outperforms previous irradiance unwrapping methods and, more importantly, we design a perceptually inspired "mantissa" encoding scheme that more efficiently wraps an HDR scene into an LDR sensor. Combined with our reconstruction framework, MantissaCam achieves state-of-the-art results among modulo-type snapshot HDR imaging approaches. We demonstrate the efficacy of our method in simulation and show preliminary results of a prototype MantissaCam implemented with a programmable sensor.
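
    The modulo forward model at the core of such sensors is simple to state: each pixel stores irradiance modulo its well capacity, and reconstruction must unwrap it. A toy 1-D unwrapper, assuming neighboring pixels differ by less than half a well capacity, is sketched below; the paper's learned unwrapper and "mantissa" encoding are not reproduced.

```python
import numpy as np

WELL = 1.0  # normalized well capacity (assumed)

def wrap(irradiance):
    return np.mod(irradiance, WELL)  # in-pixel modulo encoding

def unwrap_1d(wrapped):
    # Classic phase-unwrapping heuristic along one scanline: whenever a
    # jump larger than WELL/2 appears, assume a wrap event and undo it.
    steps = np.diff(wrapped)
    corrections = -WELL * np.cumsum(np.round(steps / WELL))
    return wrapped + np.concatenate([[0.0], corrections])

scene = np.linspace(0.0, 3.7, 256)           # smooth HDR ramp spanning ~4 wells
recon = unwrap_1d(wrap(scene))
print(np.allclose(recon, scene, atol=1e-6))  # True for smooth signals
```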

    Direct Visualization of Laser-Driven Focusing Shock Waves

    Cylindrically or spherically focusing shock waves have been of keen interest for the past several decades. In addition to fundamental study of materials under extreme conditions, cavitation, and sonoluminescence, focusing shock waves enable myriad applications including hypervelocity launchers, synthesis of new materials, production of high-temperature and high-density plasma fields, and a variety of medical therapies. Applications in controlled thermonuclear fusion and in the study of the conditions reached in laser fusion are also of current interest. Here we report on a method for direct real-time visualization and measurement of laser-driven shock generation, propagation, and 2D focusing in a sample. The 2D focusing of the shock front is the consequence of spatial shaping of the laser shock generation pulse into a ring pattern. A substantial increase of the pressure at the convergence of the acoustic shock front is observed experimentally and simulated numerically. Single-shot acquisitions using a streak camera reveal that at the convergence of the shock wave in liquid water the shock speed reaches Mach 6, corresponding to pressures in the multi-gigapascal range (~30 GPa).
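
    As a sanity check on the reported figures (not a computation from the paper), the common linear shock Hugoniot for water, Us = c0 + s*up, combined with the Rankine-Hugoniot momentum jump p = rho0*Us*up, puts Mach 6 at roughly 33 GPa; the values of c0 and s below are approximate literature numbers.

```python
# Back-of-the-envelope estimate of the shock pressure at Mach 6 in water.
rho0 = 998.0    # kg/m^3, ambient water density
c0   = 1483.0   # m/s, ambient sound speed in water
s    = 1.97     # dimensionless Hugoniot slope (approximate)

Us = 6.0 * c0                    # shock speed at Mach 6
up = (Us - c0) / s               # particle speed from Us = c0 + s*up
p  = rho0 * Us * up              # momentum jump condition (Pa)
print(f"p ~ {p / 1e9:.0f} GPa")  # ~33 GPa, consistent with the reported range
```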

    Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors

    Video frame interpolation (VFI) enables many important applications that might involve the temporal domain, such as slow motion playback, or the spatial domain, such as stop motion sequences. We focus on the former task, where one of the key challenges is handling high dynamic range (HDR) scenes in the presence of complex motion. To this end, we explore possible advantages of dual-exposure sensors that readily provide sharp short and blurry long exposures that are spatially registered and whose ends are temporally aligned. This way, motion blur registers temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates more complex motion reconstruction in the VFI task, as well as HDR frame reconstruction, which so far has been considered only for the originally captured frames, not for in-between interpolated frames. We design a neural network trained on these tasks that clearly outperforms existing solutions. We also propose a metric for scene motion complexity that provides important insights into the performance of VFI methods at test time.
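
    A toy example (not the paper's network) makes the core intuition concrete: because the long exposure integrates the latent frames while the short exposures pin down its endpoints, motion blur carries recoverable information about in-between frames.

```python
import numpy as np

rng = np.random.default_rng(0)
f0, f1, f2 = (rng.random((4, 4)) for _ in range(3))  # latent frames
long_exp = (f0 + f1 + f2) / 3.0   # blurry long exposure integrates motion
mid = 3.0 * long_exp - f0 - f2    # recover the unseen middle frame
print(np.allclose(mid, f1))       # True: blur encodes temporal information
```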