Dual-Camera Joint Deblurring-Denoising
Recent image enhancement methods have shown the advantages of using a pair of
long and short-exposure images for low-light photography. These image
modalities offer complementary strengths and weaknesses. The former yields an
image that is clean but blurry due to camera or object motion, whereas the
latter is sharp but noisy due to low photon count. Motivated by the fact that
modern smartphones come equipped with multiple rear-facing camera sensors, we
propose a novel dual-camera method for obtaining a high-quality image. Our
method uses a synchronized burst of short exposure images captured by one
camera and a long exposure image simultaneously captured by another. Having a
synchronized short exposure burst alongside the long exposure image enables us
to (i) obtain better denoising by using a burst instead of a single image, (ii)
recover motion from the burst and use it for motion-aware deblurring of the
long exposure image, and (iii) fuse the two results to further enhance quality.
Our method is able to achieve state-of-the-art results on synthetic dual-camera
images from the GoPro dataset with five times fewer training parameters
compared to the next best method. We also show that our method qualitatively
outperforms competing approaches on real synchronized dual-camera captures.
Comment: Project webpage: http://shekshaa.github.io/Joint-Deblurring-Denoising
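The denoising benefit in step (i) comes from averaging independent noise across the burst: with N aligned frames, noise variance drops by roughly 1/N. A minimal sketch of that intuition, using a synthetic scene and a hypothetical Gaussian noise level (not the paper's actual noise model or alignment pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(32, 32))  # synthetic "clean" scene

# Simulate a synchronized short-exposure burst: each frame is the clean
# scene plus independent noise (hypothetical sigma, stands in for shot/read noise).
burst = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(8)]

# A single short-exposure frame vs. the average of the aligned burst:
# averaging reduces noise variance by ~1/N, so the burst-mean MSE
# against the clean scene is much lower than the single-frame MSE.
single_mse = np.mean((burst[0] - clean) ** 2)
burst_mse = np.mean((np.mean(burst, axis=0) - clean) ** 2)
```

In practice the frames must first be registered (the method recovers motion from the burst for this), but the variance-reduction argument above is why a burst beats a single short exposure for denoising.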
Image processing and synthesis: From hand-crafted to data-driven modeling
This work investigates image and video restoration problems using effective optimization algorithms. First, we study the problem of single-image dehazing while suppressing artifacts in compressed or noisy images and videos. Our method is based on the linear haze model and minimizes the gradient residual between the input and output images. This successfully suppresses new artifacts that are not obvious in the input images. Second, we propose a new method for image inpainting using deep neural networks. Given a set of training data, deep generative models can generate high-quality natural images following the same distribution. We search for the nearest neighbor in the latent space of the deep generative model using a weighted context loss and a prior loss. This latent code is then decoded into a clean, uncorrupted version of the input image. Third, we study the problem of recovering high-quality images from very noisy raw data captured in low-light conditions with short exposures. We build deep neural networks to learn the camera processing pipeline specifically for low-light raw data with an extremely low signal-to-noise ratio (SNR). To train the networks, we capture a new dataset of more than five thousand images with short-exposure and long-exposure pairs. Promising results are obtained compared with the traditional image processing pipeline. Finally, we propose a new method for extreme low-light video processing. The raw video frames are pre-processed using spatial-temporal denoising. A neural network is then trained to remove the residual error in the pre-processed data, learning to perform the image processing pipeline while encouraging temporal smoothness of the output. Both quantitative and qualitative results demonstrate that the proposed method significantly outperforms existing methods. It also paves the way for future research in this area.
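The gradient-residual term in the dehazing contribution penalizes image gradients that appear in the output but not the input, which is how new artifacts are suppressed. A minimal sketch of such a residual using finite differences (the function name and the L1 penalty are illustrative assumptions, not the thesis's exact formulation):

```python
import numpy as np

def gradient_residual(inp, out):
    """L1 penalty on the difference between the spatial gradients of the
    input and output images -- gradients present in the output but absent
    from the input (i.e. newly introduced edge artifacts) are penalized."""
    gx = np.diff(inp, axis=1) - np.diff(out, axis=1)  # horizontal gradients
    gy = np.diff(inp, axis=0) - np.diff(out, axis=0)  # vertical gradients
    return np.abs(gx).sum() + np.abs(gy).sum()
```

An output with exactly the input's edge structure incurs zero residual, while one that sharpens or invents edges does not; the actual method minimizes a term like this jointly with the linear-haze-model fit.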