Scale-wise Convolution for Image Restoration
While scale-invariant modeling has substantially boosted the performance of
visual recognition tasks, it remains largely under-explored in
deep-network-based image restoration. Naively applying such scale-invariant
techniques (e.g., multi-scale testing, random-scale data augmentation) to image
restoration tasks usually leads to inferior performance. In this paper, we show that
properly modeling scale-invariance into neural networks can bring significant
benefits to image restoration performance. Inspired by spatial-wise
convolution for shift-invariance, "scale-wise convolution" is proposed to
convolve across multiple scales for scale-invariance. In our scale-wise
convolutional network (SCN), we first map the input image to the feature space
and then build a feature pyramid representation via bi-linear down-scaling
progressively. The feature pyramid is then passed to a residual network with
scale-wise convolutions. The proposed scale-wise convolution learns to
dynamically activate and aggregate features from different input scales in each
residual building block, in order to exploit contextual information on multiple
scales. In experiments, we compare the restoration accuracy and parameter
efficiency among our model and many different variants of multi-scale neural
networks. The proposed network with scale-wise convolution achieves superior
performance in multiple image restoration tasks including image
super-resolution, image denoising and image compression artifacts removal. Code
and models are available at: https://github.com/ychfan/scn_sr
Comment: AAAI 202
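The pipeline described above (map to feature space, build a bi-linearly down-scaled pyramid, then aggregate features across scales) can be sketched as follows. This is a toy illustration, not the authors' implementation: the down-scaling uses block averaging as a stand-in for bi-linear interpolation, and the equal-weight cross-scale aggregation is a hypothetical simplification of the learned scale-wise convolution.

```python
import numpy as np

def downscale(feat, factor=2):
    # Block-average down-scaling of a (C, H, W) feature map
    # (a stand-in for the paper's bi-linear down-scaling).
    c, h, w = feat.shape
    return feat.reshape(c, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

def upscale(feat, factor=2):
    # Nearest-neighbour up-scaling back to the finer grid.
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def build_pyramid(feat, levels=3):
    # Progressively down-scale the feature map into a pyramid.
    pyramid = [feat]
    for _ in range(levels - 1):
        pyramid.append(downscale(pyramid[-1]))
    return pyramid

def scale_wise_aggregate(pyramid):
    # Toy "scale-wise convolution": each level aggregates features from
    # itself and its coarser neighbour (hypothetical equal weights; the
    # real model learns to activate and weight scales dynamically).
    out = []
    for i, feat in enumerate(pyramid):
        if i + 1 < len(pyramid):
            out.append((feat + upscale(pyramid[i + 1])) / 2.0)
        else:
            out.append(feat.copy())
    return out

feat = np.random.rand(8, 32, 32)   # (channels, H, W) feature map
pyr = build_pyramid(feat, levels=3)
agg = scale_wise_aggregate(pyr)
print([f.shape for f in pyr])      # [(8, 32, 32), (8, 16, 16), (8, 8, 8)]
```

Each aggregated level keeps its own resolution, so the residual blocks can be stacked on the pyramid repeatedly, exchanging context between scales at every block.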
Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections
In this paper, we propose a very deep fully convolutional encoding-decoding
framework for image restoration such as denoising and super-resolution. The
network is composed of multiple layers of convolution and de-convolution
operators, learning end-to-end mappings from corrupted images to the original
ones. The convolutional layers act as a feature extractor, capturing the
abstraction of image content while eliminating noise and corruption.
De-convolutional layers are then used to recover the image details. We propose
to symmetrically link convolutional and de-convolutional layers with skip-layer
connections, with which the training converges much faster and attains a
higher-quality local optimum. First, the skip connections allow the signal to
be back-propagated to bottom layers directly, which tackles the problem of
vanishing gradients, making deep networks easier to train and consequently
improving restoration performance. Second, these skip connections pass
image details from convolutional layers to de-convolutional layers, which is
beneficial in recovering the original image. Notably, thanks to the large
model capacity, different levels of noise can be handled with a single model.
Experimental results show that our network achieves better performance than all
previously reported state-of-the-art methods.
Comment: Accepted to Proc. Advances in Neural Information Processing Systems
(NIPS'16). Content of the final version may be slightly different. Extended
version is available at http://arxiv.org/abs/1606.0892
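The symmetric skip scheme described above can be sketched as a simple data-flow diagram in code. This is a minimal sketch, not the paper's network: the conv/deconv blocks are placeholder identity-plus-ReLU functions, chosen only to show how every few encoder outputs are saved and added back into the mirrored decoder layers.

```python
import numpy as np

def conv_block(x):
    # Placeholder for a convolutional layer (ReLU only, to illustrate flow).
    return np.maximum(x, 0.0)

def deconv_block(x):
    # Placeholder for a de-convolutional (up-sampling) layer.
    return np.maximum(x, 0.0)

def encoder_decoder(x, depth=4, skip_every=2):
    # Symmetric encoder-decoder: the output of every `skip_every`-th conv
    # layer is forwarded to its mirrored deconv layer and added in, so
    # image details and gradients can bypass the bottleneck.
    skips = []
    for i in range(depth):
        x = conv_block(x)
        if (i + 1) % skip_every == 0:
            skips.append(x)            # save features for the mirror layer
    for i in range(depth):
        if (depth - i) % skip_every == 0:
            x = x + skips.pop()        # symmetric skip connection
        x = deconv_block(x)
    return x

out = encoder_decoder(np.random.rand(1, 16, 16))
print(out.shape)                       # (1, 16, 16)
```

Because the skips are popped in reverse order, the last encoder output feeds the first decoder layer, matching the symmetric linking the abstract describes.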
Fully Convolutional Network with Multi-Step Reinforcement Learning for Image Processing
This paper tackles a new problem setting: reinforcement learning with
pixel-wise rewards (pixelRL) for image processing. After the introduction of
the deep Q-network, deep RL has achieved great success. However, the
applications of deep RL for image processing are still limited. Therefore, we
extend deep RL to pixelRL for various image processing applications. In
pixelRL, each pixel has an agent, and the agent changes the pixel value by
taking an action. We also propose an effective learning method for pixelRL that
significantly improves the performance by considering not only the future
states of its own pixel but also those of the neighboring pixels. The proposed
method can be applied to image processing tasks that require pixel-wise
manipulations, to which deep RL has not previously been applied. We apply the proposed
method to three image processing tasks: image denoising, image restoration, and
local color enhancement. Our experimental results demonstrate that the proposed
method achieves comparable or better performance, compared with the
state-of-the-art methods based on supervised learning.
Comment: Accepted to AAAI 201
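The core pixelRL idea (one agent per pixel, each greedily picking an action that edits its pixel value) can be sketched as below. This is a hypothetical toy, not the paper's method: the three-action set and the hand-filled Q-values are made up for illustration, whereas the real agents share a fully convolutional network trained with RL.

```python
import numpy as np

# Hypothetical discrete actions each pixel agent can take.
ACTIONS = np.array([-1.0, 0.0, +1.0])    # decrement / keep / increment

def pixelrl_step(image, q_values):
    # One pixelRL step: every pixel has its own agent that greedily
    # picks the action with the highest Q-value at that location.
    # `q_values` has shape (num_actions, H, W).
    best = q_values.argmax(axis=0)       # (H, W) action index per pixel
    return image + ACTIONS[best]

img = np.zeros((4, 4))
q = np.zeros((3, 4, 4))
q[1] = 0.5                               # "keep" is the default best action
q[2, 0, 0] = 1.0                         # agent at (0, 0) prefers +1
out = pixelrl_step(img, q)
print(out[0, 0], out[1, 1])              # 1.0 0.0
```

In the actual method the Q-values would come from a fully convolutional network, and the learning rule also propagates value from neighboring pixels' future states, which the abstract credits for the performance gain.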
Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers
In image restoration tasks, such as denoising and super-resolution, continual
modulation of restoration levels is of great importance for real-world
applications, but it is beyond the reach of most existing deep-learning-based
image restoration methods. Trained on discrete and fixed restoration levels,
deep models cannot easily generalize to data of continuous and unseen levels.
This topic is rarely touched in the literature, due to the difficulty of
modulating well-trained models with certain hyper-parameters. We make a step
forward by proposing a unified CNN framework that adds only a few parameters
to a single-level model yet can handle arbitrary restoration levels between
a start and an end level. The additional module, namely the AdaFM layer,
performs channel-wise feature modification and can adapt a model to another
restoration level with high accuracy. By simply tweaking an interpolation
coefficient, the intermediate model, AdaFM-Net, can generate smooth and continuous restoration
effects without artifacts. Extensive experiments on three image restoration
tasks demonstrate the effectiveness of both model training and modulation
testing. Besides, we carefully investigate the properties of AdaFM layers,
providing detailed guidance on the usage of the proposed method.
Comment: Accepted by CVPR 2019 (oral); code is available:
https://github.com/hejingwenhejingwen/AdaF
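The channel-wise modification plus interpolation described above can be sketched as follows. This is a minimal sketch under simplifying assumptions, not the released AdaFM code: the layer is reduced to a per-channel scale and shift (the paper also studies larger depth-wise filters), and the parameter values are invented for the example.

```python
import numpy as np

class AdaFM:
    # Toy channel-wise feature modification layer: a per-channel
    # scale and shift applied on top of a frozen single-level model.
    def __init__(self, channels):
        self.scale = np.ones(channels)   # learned for the end level
        self.shift = np.zeros(channels)

    def __call__(self, feat, coeff=1.0):
        # `coeff` interpolates between identity (0.0, the start level)
        # and the fully adapted modification (1.0, the end level),
        # yielding continuous intermediate restoration levels.
        scale = 1.0 + coeff * (self.scale - 1.0)
        shift = coeff * self.shift
        return feat * scale[:, None, None] + shift[:, None, None]

layer = AdaFM(channels=2)
layer.scale[:] = [2.0, 0.5]              # hypothetical learned scales
feat = np.ones((2, 3, 3))                # (channels, H, W) feature map
mid = layer(feat, coeff=0.5)             # halfway between the two levels
print(mid[0, 0, 0], mid[1, 0, 0])        # 1.5 0.75
```

Sweeping `coeff` from 0 to 1 moves every channel's modification smoothly between the two trained endpoints, which is what lets AdaFM-Net produce continuous restoration effects without retraining.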