Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers
In image restoration tasks such as denoising and super-resolution, continual
modulation of restoration levels is of great importance for real-world
applications, yet it is beyond the reach of most existing deep-learning-based
image restoration methods. Because they learn from discrete and fixed
restoration levels, deep models do not generalize easily to data of continuous
and unseen levels. This topic is rarely touched in the literature, owing to the
difficulty of modulating well-trained models with certain hyper-parameters. We
take a step forward by proposing a unified CNN framework that adds only a few
parameters to a single-level model yet can handle arbitrary restoration levels
between a start level and an end level. The additional module, the AdaFM layer,
performs channel-wise feature modification and can adapt a model to another
restoration level with high accuracy. By simply tweaking an interpolation
coefficient, the intermediate model, AdaFM-Net, generates smooth and continuous
restoration effects without artifacts. Extensive experiments on three image
restoration tasks demonstrate the effectiveness of both model training and
modulation testing. Besides, we carefully investigate the properties of AdaFM
layers, providing detailed guidance on the usage of the proposed method.

Comment: Accepted by CVPR 2019 (oral); code is available:
https://github.com/hejingwenhejingwen/AdaF
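The channel-wise modification and the interpolation coefficient described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the function name, the per-channel affine form (scale and shift), and the linear interpolation between the identity and the learned modification are assumptions made for the sketch.

```python
import numpy as np

def adafm_modulate(features, gamma, beta, coeff):
    """Channel-wise feature modification in the spirit of an AdaFM layer.

    features : (C, H, W) feature map from a convolutional layer
    gamma, beta : per-channel scale and shift learned for the end level
    coeff : interpolation coefficient in [0, 1]; 0 recovers the start-level
            model (identity modification), 1 applies the full learned one.
    """
    # Interpolate between the identity modification (gamma = 1, beta = 0)
    # and the learned modification, then apply it per channel via broadcasting.
    g = 1.0 + coeff * (gamma - 1.0)
    b = coeff * beta
    return features * g[:, None, None] + b[:, None, None]
```

Sweeping `coeff` continuously between 0 and 1 is what yields intermediate restoration levels between the start and end models.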
Semantic Image Synthesis via Adversarial Learning
In this paper, we propose a way of synthesizing realistic images directly
with natural language description, which has many useful applications, e.g.
intelligent image manipulation. We attempt to accomplish such synthesis: given
a source image and a target text description, our model synthesizes images to
meet two requirements: 1) being realistic while matching the target text
description; 2) maintaining other image features that are irrelevant to the
text description. The model should be able to disentangle the semantic
information from the two modalities (image and text), and generate new images
from the combined semantics. To achieve this, we propose an end-to-end neural
architecture that leverages adversarial learning to automatically learn
implicit loss functions, which are optimized to fulfill the aforementioned two
requirements. We have evaluated our model by conducting experiments on
the Caltech-200 bird dataset and the Oxford-102 flower dataset, and have
demonstrated that our model is capable of synthesizing realistic images that
match the given descriptions while still maintaining other features of the
original images.

Comment: Accepted to ICCV 201
Bounded perturbation resilience of extragradient-type methods and their applications
In this paper we study the bounded perturbation resilience of the
extragradient and the subgradient extragradient methods for solving variational
inequality (VI) problems in real Hilbert spaces. This is an important property
of algorithms which guarantees the convergence of the scheme under summable
errors, meaning that an inexact version of the methods can also be considered.
Moreover, once an algorithm is proved to be bounded perturbation resilient,
superiorization can be used; this allows flexibility in choosing the bounded
perturbations in order to obtain a superior solution, as explained in the
paper. We also discuss some inertial extragradient methods. Under mild and
standard assumptions of monotonicity and Lipschitz continuity of the VI's
associated mapping, convergence of the perturbed extragradient and subgradient
extragradient methods is proved. In addition, we establish the convergence
rate of the perturbed algorithms. Numerical illustrations are given to
demonstrate the performance of the algorithms.

Comment: Accepted for publication in The Journal of Inequalities and
Applications. arXiv admin note: text overlap with arXiv:1711.01936 and text
overlap with arXiv:1507.07302 by other author
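The extragradient iteration with bounded perturbations can be sketched as below: a predictor step and a corrector step, each projected onto the feasible set, with an optional perturbation added before each iteration. This is a finite-dimensional illustration only; the step size, the example mapping in the test, and the geometric (hence summable) perturbation schedule are assumptions, not the paper's exact setting.

```python
import numpy as np

def extragradient(F, project, x0, tau, n_iters, perturb=None):
    """Extragradient method for a VI with optional bounded perturbations.

    F       : monotone, Lipschitz-continuous mapping of the VI
    project : projection onto the feasible (closed convex) set C
    tau     : step size (classically tau < 1/L for L-Lipschitz F)
    perturb : optional callable k -> perturbation vector; for bounded
              perturbation resilience, the perturbation norms should be
              summable over k (e.g. a geometric sequence).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        if perturb is not None:
            x = x + perturb(k)           # superiorization-style perturbation
        y = project(x - tau * F(x))      # predictor step
        x = project(x - tau * F(y))      # corrector step
    return x
```

With summable perturbations the iterates still converge to a solution of the VI, which is exactly the resilience property the paper exploits for superiorization.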