Dilated Deep Residual Network for Image Denoising
Variations of deep neural networks such as convolutional neural network (CNN)
have been successfully applied to image denoising. The goal is to automatically
learn a mapping from a noisy image to a clean image given training data
consisting of pairs of noisy and clean images. Most existing CNN models for
image denoising have many layers. In such cases, the models involve a large
number of parameters and are computationally expensive to train. In this paper,
we develop a dilated residual CNN for Gaussian image denoising. Compared with
the recently proposed residual denoiser, our method can achieve comparable
performance with less computational cost. Specifically, we enlarge the receptive
field by adopting dilated convolution in the residual network, with the dilation
factor set to a fixed value. We use appropriate zero padding so that the output
has the same dimensions as the input. It has been shown that expanding the
receptive field can boost CNN performance in image classification, and we
further demonstrate that it also leads to competitive performance on the
denoising problem. Moreover, we present a formula to calculate the receptive
field size when dilated convolution is incorporated, so the change in receptive
field can be interpreted mathematically. To validate the efficacy of our
approach, we conduct extensive experiments on both gray and color image
denoising with specific or randomized noise levels. Both the quantitative
measurements and the visual denoising results are promising compared with
state-of-the-art baselines. Comment: camera ready, 8 pages, accepted to IEEE ICTAI 201
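The abstract does not reproduce the paper's receptive-field formula, but for stride-1 convolutions the standard calculation is simple: a dilated convolution with kernel size k and dilation d acts like a sparse kernel of effective size d*(k-1)+1, so each layer grows the receptive field by d*(k-1). A minimal sketch under that assumption:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 (possibly dilated) convolutions.

    Each layer with kernel size k and dilation d enlarges the receptive
    field by d * (k - 1), since its effective kernel spans d*(k-1)+1 pixels.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf

# Seven plain 3x3 layers (dilation 1 everywhere) give a 15x15 field.
print(receptive_field([3] * 7, [1] * 7))                 # 15
# Dilating the five middle layers widens the field at the same depth.
print(receptive_field([3] * 7, [1, 2, 2, 2, 2, 2, 1]))   # 25
```

This illustrates the trade-off the abstract describes: dilation enlarges the receptive field without adding layers or parameters.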
An ELU Network with Total Variation for Image Denoising
In this paper, we propose a novel convolutional neural network (CNN) for
image denoising, which uses exponential linear unit (ELU) as the activation
function. We investigate its suitability by analyzing ELU's connection with the
trainable nonlinear reaction diffusion (TNRD) model and residual denoising. On
the other hand, batch normalization (BN) is indispensable for residual
denoising and for convergence. However, directly stacking BN and ELU degrades
the performance of the CNN. To mitigate this issue, we design an innovative
combination of activation and normalization layers to fully exploit the ELU
network, and discuss the corresponding rationale.
Moreover, inspired by the fact that minimizing total variation (TV) can be
applied to image denoising, we propose a TV-regularized L2 loss to evaluate the
training effect during the iterations. Finally, we conduct extensive
experiments showing that our model outperforms some recent and popular
approaches on Gaussian denoising with specific or randomized noise levels for
both gray and color images. Comment: 10 pages, accepted by the 24th
International Conference on Neural Information Processing (2017)
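The paper's exact loss and regularization weight are not given in the abstract; a minimal sketch of a TV-regularized L2 loss, assuming an anisotropic TV term (sum of absolute differences between adjacent pixels) and a hypothetical weight of 0.01:

```python
import numpy as np

def tv_l2_loss(pred, target, weight=0.01):
    """L2 data-fidelity term plus an anisotropic total-variation penalty.

    The TV term sums absolute differences between vertically and
    horizontally adjacent pixels of the prediction, penalizing residual
    noise while tolerating sharp edges.
    """
    l2 = np.sum((pred - target) ** 2)
    tv = (np.abs(np.diff(pred, axis=0)).sum()
          + np.abs(np.diff(pred, axis=1)).sum())
    return l2 + weight * tv

# A perfectly flat, correct prediction incurs zero loss; a noisy one does not.
flat = np.ones((4, 4))
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((4, 4))
print(tv_l2_loss(flat, flat))                # 0.0
print(tv_l2_loss(noisy, flat) > 0.0)         # True
```

In training, the weight balances fidelity to the clean target against smoothness of the output; too large a weight over-smooths fine texture.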
Fully Convolutional Network with Multi-Step Reinforcement Learning for Image Processing
This paper tackles a new problem setting: reinforcement learning with
pixel-wise rewards (pixelRL) for image processing. After the introduction of
the deep Q-network, deep RL has been achieving great success. However, the
applications of deep RL for image processing are still limited. Therefore, we
extend deep RL to pixelRL for various image processing applications. In
pixelRL, each pixel has an agent, and the agent changes the pixel value by
taking an action. We also propose an effective learning method for pixelRL that
significantly improves performance by considering not only the future states of
a pixel's own value but also those of its neighboring pixels. The proposed
method can be applied to image processing tasks that require pixel-wise
manipulations, to which deep RL has not previously been applied. We apply the
proposed method to three tasks: image denoising, image restoration, and local
color enhancement. Our experimental results demonstrate that the proposed
method achieves comparable or better performance than state-of-the-art
supervised-learning methods. Comment: Accepted to AAAI 201
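The core pixelRL idea, one agent per pixel choosing an action for that pixel, can be sketched in a few lines. The action set below (keep / increment / decrement) is a hypothetical toy; the paper's actual actions and the fully convolutional Q-network that selects them are not specified in the abstract:

```python
import numpy as np

# Hypothetical discrete action set: each pixel's agent nudges its own value.
ACTIONS = {0: lambda v: v,          # keep the pixel unchanged
           1: lambda v: v + 1.0,    # increment by one intensity step
           2: lambda v: v - 1.0}    # decrement by one intensity step

def apply_pixel_actions(image, action_map):
    """Apply one pixelRL step: every pixel executes its chosen action.

    `action_map` holds one action index per pixel, e.g. the per-pixel
    argmax of a fully convolutional Q-network's output.
    """
    out = image.copy()
    for idx, act in ACTIONS.items():
        mask = action_map == idx
        out[mask] = act(image[mask])
    return out

image = np.array([[10.0, 20.0], [30.0, 40.0]])
actions = np.array([[1, 0], [2, 1]])   # one choice per pixel
result = apply_pixel_actions(image, actions)
# pixel (0,0) incremented to 11, (0,1) kept at 20,
# (1,0) decremented to 29, (1,1) incremented to 41
print(result)
```

Because every action is applied by a boolean mask over the whole image, the step cost is independent of how many agents there are, which is what makes the fully convolutional formulation practical.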