Memory augment is All You Need for image restoration
Image restoration is a low-level vision task. Most CNN-based methods are designed
as black boxes, lacking transparency and interpretability. Although some
methods combining traditional optimization algorithms with DNNs have been
proposed, they all have limitations. In this paper, we propose MemoryNet, a
network with a three-granularity memory layer and contrastive learning.
Specifically, samples are divided into three types, positive, negative, and
actual, for contrastive learning; the memory layer preserves the deep features
of the image, and the contrastive learning drives the learned features toward
balance. Experiments on deraining, deshadowing, and deblurring tasks
demonstrate that these methods are effective in improving restoration
performance. In addition, this paper's model obtains significant PSNR and SSIM
gains on three datasets with different degradation types, which is strong
evidence that the recovered images are perceptually realistic. The source code of
MemoryNet can be obtained from https://github.com/zhangbaijin/MemoryNe
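The contrastive scheme described above, with positive, negative, and actual (anchor) samples, can be sketched as a triplet-style loss: the restored image's features are pulled toward the clean (positive) sample and pushed away from the degraded (negative) sample. This is a minimal NumPy illustration of the general idea, not the paper's actual loss; the function name, margin value, and L1 feature distance are assumptions for the sketch.

```python
import numpy as np

def triplet_contrastive_loss(anchor, positive, negative, margin=0.3):
    """Triplet-style contrastive loss (illustrative sketch).

    Pulls the anchor (e.g. restored-image features) toward the positive
    (clean image) and pushes it away from the negative (degraded image),
    using mean L1 distance in feature space and a hinge at `margin`.
    """
    d_pos = np.mean(np.abs(anchor - positive))  # distance to clean sample
    d_neg = np.mean(np.abs(anchor - negative))  # distance to degraded sample
    return max(d_pos - d_neg + margin, 0.0)

# Toy feature vectors: the anchor is near the positive, far from the negative,
# so the hinge is inactive and the loss is zero.
anchor   = np.array([0.10, 0.20, 0.30])
positive = np.array([0.10, 0.20, 0.35])
negative = np.array([0.90, 0.80, 0.70])
loss = triplet_contrastive_loss(anchor, positive, negative)  # → 0.0
```

If the anchor drifts toward the negative instead, the hinge activates and the loss grows, which is what drives the learned features toward the "balance" the abstract mentions.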
Unsupervised Single Image Deraining with Self-supervised Constraints
Most existing single-image deraining methods require learning supervised
models from a large set of paired synthetic training data, which limits their
generality, scalability, and practicality in real-world multimedia applications.
Moreover, lacking labeled supervised constraints, existing unsupervised
frameworks applied directly to the image deraining task suffer from
low-quality recovery. Therefore, we propose an Unsupervised Deraining
Generative Adversarial Network (UD-GAN) to tackle these problems by introducing
self-supervised constraints from the intrinsic statistics of unpaired rainy and
clean images. Specifically, we first design two collaboratively optimized
modules, namely Rain Guidance Module (RGM) and Background Guidance Module
(BGM), to take full advantage of rainy image characteristics: The RGM is
designed to discriminate real rainy images from fake rainy images which are
created based on outputs of the generator with BGM. Simultaneously, the BGM
exploits a hierarchical Gaussian-Blur gradient error to ensure background
consistency between rainy input and de-rained output. Secondly, a novel
luminance-adjusting adversarial loss is integrated into the clean image
discriminator considering the built-in luminance difference between real clean
images and derained images. Comprehensive experimental results on various
benchmark datasets and different training settings show that UD-GAN
outperforms existing image deraining methods in both quantitative and
qualitative comparisons.
Comment: 10 pages, 8 figures
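The hierarchical Gaussian-blur gradient error used by the BGM can be illustrated as follows: at progressively larger blur scales, thin rain streaks are smoothed away, so the image gradients of the blurred rainy input and the blurred derained output should agree on the underlying background structure. This is a rough NumPy sketch of that idea, not UD-GAN's implementation; the function names, the sigma schedule, and the L1 gradient penalty are assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding (pure NumPy)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    # Convolve rows, then columns, with the 1-D kernel.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def hierarchical_blur_gradient_loss(rainy, derained, sigmas=(1.0, 2.0, 4.0)):
    """Sketch of a hierarchical Gaussian-blur gradient consistency error.

    At each blur scale, rain streaks are largely smoothed out, so the
    gradients of the blurred rainy input and blurred derained output
    should match on the background; their mean L1 difference is penalized.
    """
    loss = 0.0
    for s in sigmas:
        br, bd = gaussian_blur(rainy, s), gaussian_blur(derained, s)
        gy_r, gx_r = np.gradient(br)
        gy_d, gx_d = np.gradient(bd)
        loss += np.mean(np.abs(gx_r - gx_d)) + np.mean(np.abs(gy_r - gy_d))
    return loss / len(sigmas)

# Toy check: identical images incur zero error at every scale.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
zero_err = hierarchical_blur_gradient_loss(img, img)
```

Averaging over several sigmas is what makes the constraint "hierarchical": small sigmas preserve fine background edges, while large sigmas enforce consistency of coarse structure.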
- …