Burst Denoising with Kernel Prediction Networks
We present a technique for jointly denoising bursts of images taken from a
handheld camera. In particular, we propose a convolutional neural network
architecture for predicting spatially varying kernels that can both align and
denoise frames, a synthetic data generation approach based on a realistic noise
formation model, and an optimization guided by an annealed loss function to
avoid undesirable local minima. Our model matches or outperforms the
state-of-the-art across a wide range of noise levels on both real and synthetic
data.
Comment: To appear in CVPR 2018 (spotlight). Project page:
http://people.eecs.berkeley.edu/~bmild/kpn
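The core mechanic of kernel prediction — one small filter predicted per output pixel per frame, applied to a neighborhood and summed across the burst — can be sketched independently of the network that predicts the kernels. The following is a minimal NumPy illustration of the kernel-application step only (the function name, shapes, and toy uniform kernels are our own assumptions, not the paper's implementation):

```python
import numpy as np

def apply_predicted_kernels(frames, kernels):
    """Merge a burst by applying spatially varying kernels.

    frames:  (N, H, W) stack of noisy frames.
    kernels: (N, H, W, K, K) per-pixel kernel for each frame; each output
             pixel is a weighted sum over a KxK neighborhood in every frame.
    Returns an (H, W) merged image.
    """
    n, h, w = frames.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(frames, ((0, 0), (r, r), (r, r)), mode="edge")
    out = np.zeros((h, w))
    for i in range(n):
        for dy in range(k):
            for dx in range(k):
                out += kernels[i, :, :, dy, dx] * padded[i, dy:dy + h, dx:dx + w]
    return out

# Toy usage: uniform averaging kernels (a stand-in for network predictions)
# reduce i.i.d. noise on a constant image.
rng = np.random.default_rng(0)
clean = np.ones((8, 8))
frames = clean + 0.5 * rng.standard_normal((4, 8, 8))
kernels = np.full((4, 8, 8, 3, 3), 1.0 / (4 * 9))  # normalized over frames and taps
denoised = apply_predicted_kernels(frames, kernels)
```

Because the kernels vary per pixel and per frame, the same operation can express both alignment (shifted kernels) and denoising (averaging kernels), which is the point of predicting them jointly.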
Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections
In this paper, we propose a very deep fully convolutional encoding-decoding
framework for image restoration such as denoising and super-resolution. The
network is composed of multiple layers of convolution and de-convolution
operators, learning end-to-end mappings from corrupted images to the original
ones. The convolutional layers act as a feature extractor, capturing the
abstraction of the image content while eliminating noise and corruption.
De-convolutional layers are then used to recover the image details. We propose
to symmetrically link convolutional and de-convolutional layers with skip-layer
connections, with which the training converges much faster and attains a
higher-quality local optimum. First, the skip connections allow the signal to
be back-propagated directly to bottom layers, which tackles the vanishing
gradient problem, makes deep networks easier to train, and consequently
yields gains in restoration performance. Second, these skip connections pass
image details from convolutional layers to de-convolutional layers, which is
beneficial in recovering the original image. Notably, owing to its large
capacity, the network can handle different noise levels with a single model.
Experimental results show that our network achieves better performance than all
previously reported state-of-the-art methods.
Comment: Accepted to Proc. Advances in Neural Information Processing Systems
(NIPS'16). Content of the final version may be slightly different. Extended
version is available at http://arxiv.org/abs/1606.0892
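The symmetric-skip idea — decoder layer i reuses the feature map of its mirror encoder layer — can be shown with a toy conv/deconv stack. This is an illustrative sketch, not the paper's architecture: `conv3` is a single-channel 3x3 filter standing in for a full convolutional layer, and all names are our own:

```python
import numpy as np

def conv3(x, w):
    """3x3 'same' convolution with edge padding (stand-in for a conv layer)."""
    h, wd = x.shape
    p = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += w[dy, dx] * p[dy:dy + h, dx:dx + wd]
    return out

def encoder_decoder(x, enc_ws, dec_ws):
    """Symmetric conv/deconv stack: before applying its own filter, decoder
    layer i adds the feature map of its mirror encoder layer (the skip)."""
    feats = [x]
    for w in enc_ws:
        feats.append(np.maximum(conv3(feats[-1], w), 0))  # conv + ReLU
    y = feats[-1]
    for i, w in enumerate(dec_ws):
        y = y + feats[len(enc_ws) - 1 - i]  # symmetric skip connection
        y = conv3(y, w)
    return y

# With zero encoder weights the skip connection alone carries the input to
# the decoder: under an identity decoder filter, the output equals the input.
x = np.arange(16.0).reshape(4, 4)
w_id = np.zeros((3, 3)); w_id[1, 1] = 1.0
restored = encoder_decoder(x, [np.zeros((3, 3))], [w_id])
```

The demo makes the abstract's two claims concrete: the skip gives gradients (and here, the signal itself) a direct path past the encoder, and it hands image detail straight to the decoder.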
Discriminative Transfer Learning for General Image Restoration
Recently, several discriminative learning approaches have been proposed for
effective image restoration, achieving convincing trade-off between image
quality and computational efficiency. However, these methods require separate
training for each restoration task (e.g., denoising, deblurring, demosaicing)
and problem condition (e.g., noise level of input images). This makes it
time-consuming and difficult to encompass all tasks and conditions during
training. In this paper, we propose a discriminative transfer learning method
that incorporates formal proximal optimization and discriminative learning for
general image restoration. The method requires a single-pass training and
allows for reuse across various problems and conditions while achieving an
efficiency comparable to previous discriminative approaches. Furthermore, after
being trained, our model can be easily transferred to new likelihood terms to
solve untrained tasks, or be combined with existing priors to further improve
image restoration quality.
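The reuse argument rests on a standard structure: proximal splitting alternates a gradient step on the task-specific likelihood term with a prior step that can be shared across tasks and conditions. A minimal sketch of that loop for the denoising case (A = I), with a hand-crafted smoother standing in for the learned prior — all function names are our own assumptions:

```python
import numpy as np

def prox_grad_restore(y, prior_step, step=0.5, iters=20):
    """Proximal-gradient restoration for denoising (data term ||x - y||^2 / 2):
    gradient step on the likelihood, then the prior's proximal operator.
    Swapping the likelihood gradient retargets the loop to other tasks while
    `prior_step` (the learned, reusable part) stays fixed."""
    x = y.copy()
    for _ in range(iters):
        x = x - step * (x - y)   # gradient of the likelihood term
        x = prior_step(x)        # prox of the prior
    return x

def box_smooth(x):
    """Stand-in prior: 3x3 box smoothing. A trained model would slot in here."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

rng = np.random.default_rng(1)
clean = np.ones((16, 16))
noisy = clean + 0.3 * rng.standard_normal((16, 16))
restored = prox_grad_restore(noisy, box_smooth)
```

The design point is the separation of concerns: only the likelihood gradient encodes the task (denoising, deblurring, demosaicing), so the prior trained once can be reused, which is the transfer the abstract describes.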
Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds
Image denoising can be described as the problem of mapping from a noisy image
to a noise-free image. The best currently available denoising methods
approximate this mapping with cleverly engineered algorithms. In this work we
attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs)
applied to image patches. We will show that by training on large image
databases we are able to outperform the current state-of-the-art image
denoising methods. In addition, our method achieves results that are superior
to one type of theoretical bound and goes a long way toward closing the gap
with a second type of theoretical bound. Our approach is easily adapted to less
extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG
artifacts, salt-and-pepper noise and noise resembling stripes, for which we
achieve excellent results as well. We will show that combining a block-matching
procedure with MLPs can further improve the results on certain images. In a
second paper, we detail the training trade-offs and the inner mechanisms of our
MLPs.
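The patch-based pipeline the abstract describes — run every overlapping patch through a learned mapping, then average the overlapping predictions — can be sketched as follows. The per-patch mapping here is a placeholder (patch mean), not a trained MLP, and all names and parameters are our own assumptions:

```python
import numpy as np

def denoise_patches(img, patch=5, mlp=None):
    """Patchwise denoising: map every overlapping patch through `mlp`
    (flattened patch in, flattened patch out), then average the overlapping
    predictions. A trained multi-layer perceptron would replace the default."""
    if mlp is None:
        mlp = lambda v: np.full_like(v, v.mean())  # placeholder, not a trained MLP
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    cnt = np.zeros_like(img, dtype=float)
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            v = img[i:i + patch, j:j + patch].reshape(-1)
            out[i:i + patch, j:j + patch] += mlp(v).reshape(patch, patch)
            cnt[i:i + patch, j:j + patch] += 1.0
    return out / cnt

# Toy usage on a constant image with Gaussian noise.
rng = np.random.default_rng(2)
clean = np.ones((12, 12))
noisy = clean + 0.4 * rng.standard_normal((12, 12))
den = denoise_patches(noisy)
```

Because the mapping is learned per patch, retargeting to other noise types (Poisson-Gaussian, JPEG artifacts, salt-and-pepper) only changes the training data, not this pipeline — which is the adaptability the abstract claims.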