
    Dual-Camera Joint Deblurring-Denoising

    Recent image enhancement methods have shown the advantages of using a pair of long- and short-exposure images for low-light photography. These image modalities offer complementary strengths and weaknesses. The former yields an image that is clean but blurry due to camera or object motion, whereas the latter is sharp but noisy due to low photon count. Motivated by the fact that modern smartphones come equipped with multiple rear-facing camera sensors, we propose a novel dual-camera method for obtaining a high-quality image. Our method uses a synchronized burst of short-exposure images captured by one camera and a long-exposure image simultaneously captured by another. Having a synchronized short-exposure burst alongside the long-exposure image enables us to (i) obtain better denoising by using a burst instead of a single image, (ii) recover motion from the burst and use it for motion-aware deblurring of the long-exposure image, and (iii) fuse the two results to further enhance quality. Our method achieves state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters than the next best method. We also show that our method qualitatively outperforms competing approaches on real synchronized dual-camera captures. Project webpage: http://shekshaa.github.io/Joint-Deblurring-Denoising
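
    A minimal NumPy sketch of the three stages (not the authors' networks; the function names and the `deblurred_long` input are illustrative): plain burst averaging stands in for the learned burst denoiser, inter-frame differencing for motion recovery, and the motion-aware deblurring network is assumed to have already produced `deblurred_long`.

```python
import numpy as np

def merge_burst(short_burst):
    """Step (i): denoise by merging the short-exposure burst.
    Plain averaging stands in for the learned burst denoiser."""
    return short_burst.mean(axis=0)          # burst shape: (N, H, W)

def motion_map(short_burst):
    """Step (ii): crude per-pixel motion proxy (mean absolute
    inter-frame difference), standing in for the motion recovered
    from the burst that guides deblurring of the long exposure."""
    return np.abs(np.diff(short_burst, axis=0)).mean(axis=0)

def fuse(denoised_short, deblurred_long, motion):
    """Step (iii): blend the two restorations, trusting the
    short-exposure result more where motion (and hence residual
    blur in the long exposure) is high."""
    w = motion / (motion.max() + 1e-8)       # normalize to [0, 1]
    return w * denoised_short + (1.0 - w) * deblurred_long
```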

    Active Short-Long Exposure Deblurring

    Mobile phones can capture image bursts to produce high-quality still photographs. The simplest form of a burst is a two-frame short-long (S-L) exposure pair. S-L exposure is particularly suitable in low-light conditions, where short-exposure frames are sharp but noisy and dark, and long-exposure frames are affected by motion blur but have better scene chromaticity and luminance. In this work, we take a step further and define active short-long exposure deblurring, where the viewfinder frames before the burst are used to optimize the S-L exposure parameters. We introduce deep architectures and data generation for active S-L exposure deblurring. The approach is experimentally validated with realistic data and shows clear improvements. For the most difficult scenes (worst 5%), the PSNR is improved by +1.39 dB.
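
    To make the "active" idea concrete, here is a hypothetical NumPy heuristic, not the paper's learned architecture: viewfinder brightness sets the long exposure, estimated inter-frame motion shortens it, and the short exposure is an assumed fixed fraction of the long one (all constants are illustrative).

```python
import numpy as np

def choose_sl_exposures(viewfinder, base_long=1 / 30):
    """Pick S-L exposure times from (N, H, W) viewfinder frames in
    [0, 1]. Dim scenes lengthen the long exposure; motion shortens
    it to limit blur; the S:L ratio is a fixed assumption."""
    brightness = float(viewfinder[-1].mean())   # latest preview frame
    motion = float(np.abs(np.diff(viewfinder, axis=0)).mean())
    long_exp = base_long * 0.5 / max(brightness, 1e-3)
    long_exp /= 1.0 + 10.0 * motion             # more motion -> shorter
    short_exp = long_exp / 8.0                  # assumed S:L ratio
    return short_exp, long_exp
```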

    HDR Denoising and Deblurring by Learning Spatio-temporal Distortion Model

    We seek to reconstruct sharp and noise-free high-dynamic range (HDR) video from a dual-exposure sensor that records different low-dynamic range (LDR) information in different pixel columns: odd columns provide low-exposure, sharp, but noisy information; even columns complement this with less noisy, high-exposure, but motion-blurred data. Previous LDR work learns to deblur and denoise (DISTORTED->CLEAN) supervised by pairs of CLEAN and DISTORTED images. Regrettably, capturing DISTORTED sensor readings is time-consuming; moreover, CLEAN HDR videos are scarce. We suggest a method to overcome these two limitations. First, we learn a different function instead: CLEAN->DISTORTED, which generates samples containing correlated pixel noise, row and column noise, and motion blur from a small number of CLEAN sensor readings. Second, as there is not enough CLEAN HDR video available, we devise a method to learn from LDR video instead. Our approach compares favorably to several strong baselines, and can boost existing methods when they are re-trained on our data. Combined with spatial and temporal super-resolution, it enables applications such as relighting with low noise or blur.
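
    A minimal sketch of the CLEAN->DISTORTED direction for such a dual-exposure sensor might look as follows; the column layout follows the abstract, but the box-blur kernel and all noise magnitudes are illustrative assumptions, not the learned distortion model.

```python
import numpy as np

def clean_to_distorted(clean, rng, blur_len=9):
    """Synthesize a distorted dual-exposure reading from a clean
    (H, W) frame in [0, 1]: odd (short-exposure) columns stay sharp
    but get strong photon noise; even (long-exposure) columns get
    horizontal motion blur plus mild noise; row/column fixed-pattern
    noise is added everywhere."""
    h, w = clean.shape
    kernel = np.ones(blur_len) / blur_len
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, clean)
    out = blurred.copy()
    out[:, ::2] = clean[:, ::2]                               # sharp short-exposure columns
    out[:, ::2] += rng.normal(0.0, 0.05, out[:, ::2].shape)   # heavy noise (low photon count)
    out[:, 1::2] += rng.normal(0.0, 0.01, out[:, 1::2].shape) # mild noise on blurred columns
    out += rng.normal(0.0, 0.01, (h, 1))                      # row noise
    out += rng.normal(0.0, 0.01, (1, w))                      # column noise
    return np.clip(out, 0.0, 1.0)
```

    Here `rng` would be, e.g., `np.random.default_rng(0)`.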

    Techniques for Deblurring Faces in Images by Utilizing Multi-Camera Fusion

    This publication describes techniques for deblurring faces in images by utilizing multi-camera (e.g., dual-camera) fusion processes. In these techniques, multiple cameras of a computing device (e.g., a wide-angle camera and an ultrawide-angle camera) concurrently capture a scene. A multi-camera fusion technique fuses the captured images together to generate an image with increased sharpness while preserving the brightness of the scene and other details in the presence of motion. The images are processed by a Deblur Module, which includes an optical-flow machine-learned model for generating a warped ultrawide-angle image, a subject mask from a model trained to identify and mask faces detected in the wide-angle image, and an occlusion map for handling occlusion artifacts. The warped ultrawide-angle image, the raw wide-angle image (with blurred faces), the sharp ultrawide-angle image, the subject mask, and the occlusion map are then stacked and merged (fused) using a machine-learning model to output a sharp image without motion blur, as sketched below. This publication further describes techniques utilizing adaptive multi-streaming to optimize power consumption and dual-camera usage on computing devices.
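
    A minimal sketch of the final stack-and-merge step, assuming (H, W, 3) images, (H, W) masks, and a placeholder `fusion_net` for the learned fusion model (which is not public):

```python
import numpy as np

def fuse_face_deblur(warped_uw, raw_wide, sharp_uw,
                     subject_mask, occlusion_map, fusion_net):
    """Stack the five Deblur Module inputs channel-wise and hand
    them to the learned fusion model; `fusion_net` is a stand-in."""
    stack = np.concatenate(
        [warped_uw, raw_wide, sharp_uw,
         subject_mask[..., None].astype(np.float32),
         occlusion_map[..., None].astype(np.float32)],
        axis=-1)                       # (H, W, 3 + 3 + 3 + 1 + 1)
    return fusion_net(stack)
```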

    Depth and IMU aided image deblurring based on deep learning

    With the widespread use of camera phones, it has become necessary to tackle the problem of image blur. Embedding a camera in these small devices necessarily means a small sensor size compared to the sensors in professional cameras such as full-frame Digital Single-Lens Reflex (DSLR) cameras. As a result, the amount of light collected on the image sensor is dramatically reduced. To compensate, a long exposure time is needed, but with the slight motions that often occur in handheld devices, image blur is inevitable. Our interest in this thesis is motion blur, which can be caused by camera motion, scene (object) motion, or, more generally, the relative motion between the camera and the scene. We use deep neural network (DNN) models, in contrast to conventional (non-DNN) methods, which are computationally expensive and time-consuming. The deblurring process is guided by the scene depth and the camera's inertial measurement unit (IMU) records. One challenge of adopting DNN solutions is that a relatively large amount of data is needed to train the network; moreover, several hyperparameters must be tuned, including the network architecture itself. To train our network, we propose a novel method for synthesizing spatially-variant motion blur that accounts for depth variation in the scene, and which improved results over other methods. In addition to the synthetic dataset generation algorithm, we designed a setup for collecting a real blurry-and-sharp image dataset. This setup can provide thousands of real blurry and sharp images, which are of great benefit for DNN training and fine-tuning.
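
    The depth-aware blur synthesis can be sketched as follows, assuming the IMU readings have already been integrated into a list of 2D translation samples `imu_shifts`; the inverse-depth parallax model here is a simplification of the full camera model in the thesis.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def synthesize_depth_blur(sharp, depth, imu_shifts):
    """Spatially-variant blur from a sharp (H, W) frame and per-pixel
    depth: each integrated camera-translation sample displaces a
    pixel inversely proportionally to its depth (nearer objects move
    more), and the blur is the average of the resampled frames."""
    h, w = sharp.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    acc = np.zeros((h, w), dtype=np.float64)
    for dx, dy in imu_shifts:                      # camera motion samples
        px = xx + dx / np.maximum(depth, 1e-3)     # parallax shift
        py = yy + dy / np.maximum(depth, 1e-3)
        acc += map_coordinates(sharp, [py, px], order=1, mode="nearest")
    return acc / len(imu_shifts)
```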