Semi-Supervised Image-to-Image Translation
Image-to-image translation is a long-established and difficult problem in
computer vision. In this paper we propose an adversarial model for
image-to-image translation. Conventional deep neural-network methods perform
the task by comparing Gram matrices and relying on image segmentation, which
requires human intervention. Our generative adversarial network model works
on a conditional-probability approach, which makes the translation independent
of any local, global, content, or style features. We use a bidirectional
reconstruction model appended with an affine transform factor that helps
conserve content and photorealism compared to other models. The advantage of
this approach is that the image-to-image translation is semi-supervised,
independent of image segmentation, and inherits the properties of generative
adversarial networks, tending to produce realistic images. This method has
proven to produce better results than Multimodal Unsupervised Image-to-Image
Translation.
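A minimal sketch of the bidirectional (cycle) reconstruction idea above, with hypothetical affine maps standing in for the paper's learned generators; the names `G`, `F`, `W_g`, and `b_g` are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two translation directions G: X -> Y and F: Y -> X.
# Each "generator" is a hypothetical affine map (Wx + b); in the paper these
# would be deep networks, so this is only an illustrative sketch.
W_g = np.array([[0.9, 0.1], [0.0, 1.1]])
b_g = np.array([0.05, -0.05])
W_f = np.linalg.inv(W_g)          # inverse-direction weights
b_f = -W_f @ b_g                  # inverse-direction bias

def G(x):  # translate domain X -> Y
    return x @ W_g.T + b_g

def F(y):  # translate domain Y -> X
    return y @ W_f.T + b_f

x = rng.normal(size=(4, 2))       # a batch of toy "images"

# Bidirectional reconstruction: F(G(x)) should recover x. This term
# conserves content, while an adversarial term (omitted here) would push
# the translated outputs toward photorealism.
cycle_loss = np.mean(np.abs(F(G(x)) - x))   # ~0, since F inverts G exactly
```

In a real model the cycle loss is added to the adversarial losses of both directions, so content is preserved even though the generators are trained without paired supervision.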
RainDiffusion: When Unsupervised Learning Meets Diffusion Models for Real-world Image Deraining
What happens when unsupervised learning meets diffusion models for real-world
image deraining? To answer this, we propose RainDiffusion, the first
unsupervised image-deraining paradigm based on diffusion models. Beyond the
traditional unsupervised wisdom of image deraining, RainDiffusion introduces
stable training on unpaired real-world data instead of weakly adversarial
training. RainDiffusion consists of two cooperative branches: a Non-diffusive
Translation Branch (NTB) and a Diffusive Translation Branch (DTB). NTB
exploits a cycle-consistent architecture to bypass the difficulty of unpaired
training in standard diffusion models by generating initial clean/rainy image
pairs. DTB leverages two conditional diffusion modules to progressively
refine the desired output using the initial image pairs and a diffusive
generative prior, obtaining better generalization for both deraining and rain
generation. RainDiffusion is a non-adversarial training paradigm that sets a
new standard for real-world image deraining. Extensive experiments confirm
the superiority of RainDiffusion over un/semi-supervised methods and show its
competitive advantage over fully supervised ones.
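A toy sketch of the two-branch flow described above, on 1-D signals. The median-filter "derainer" and the simple iterative refinement are hypothetical stand-ins for the paper's trained NTB translators and DTB conditional diffusion modules:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "images": a clean signal plus an additive sparse "rain" pattern.
clean = np.sin(np.linspace(0, np.pi, 32))
rain = 0.5 * (rng.random(32) > 0.8)           # sparse rain streaks
rainy = clean + rain

# NTB stand-in: a cycle-consistent translator would produce an *initial*
# clean estimate from unpaired rainy input; here a median filter fakes it.
def ntb_derain(x, k=3):
    pad = np.pad(x, k // 2, mode="edge")
    return np.array([np.median(pad[i:i + k]) for i in range(len(x))])

init_clean = ntb_derain(rainy)                # initial pseudo clean/rainy pair

# DTB stand-in: progressively refine a noisy sample toward the conditioning
# signal, mimicking conditional denoising steps (not a trained diffusion model).
def dtb_refine(x0, cond, steps=10):
    x = x0 + rng.normal(scale=0.1, size=x0.shape)   # start from a noisy estimate
    for _ in range(steps):
        x = x + 0.5 * (cond - x)              # each step halves the residual
    return x

refined = dtb_refine(init_clean, cond=init_clean)
```

The point of the two-stage design is that the NTB output gives the diffusion branch paired supervision it could not get from unpaired data alone; the refinement loop here converges to that initial estimate.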
UGC: Unified GAN Compression for Efficient Image-to-Image Translation
Recent years have witnessed prevailing progress of Generative Adversarial
Networks (GANs) in image-to-image translation. However, the success of these
GAN models hinges on ponderous computational costs and labor-intensive
training data. Current efficient GAN learning techniques fall into two
orthogonal aspects: i) model slimming via reduced calculation costs;
ii) data/label-efficient learning with fewer training data/labels. To combine
the best of both worlds, we propose a new learning paradigm, Unified GAN
Compression (UGC), with a unified optimization objective that seamlessly
prompts the synergy of model-efficient and label-efficient learning. UGC
sequentially sets up a semi-supervised-driven network architecture search
stage and an adaptive online semi-supervised distillation stage, which
together formulate a heterogeneous mutual learning scheme to obtain an
architecture-flexible, label-efficient, high-performance model.
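A minimal sketch of the semi-supervised distillation idea behind a unified objective like UGC's: a small student fits the few available labels while matching a larger teacher on plentiful unlabeled inputs. The scalar linear "models" and the loss weighting are hypothetical simplifications, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy: teacher and student are scalar linear maps; the teacher
# is the large accurate model, the student is the compressed one.
teacher_w = 2.0
student_w = 0.5

x_labeled = rng.normal(size=8)
y_labeled = 2.0 * x_labeled                    # few ground-truth pairs
x_unlabeled = rng.normal(size=64)              # plentiful unlabeled inputs

# Unified objective: supervised loss on labels + distillation loss that
# matches the teacher's outputs on unlabeled data.
for _ in range(200):
    grad = (2 * np.mean((student_w * x_labeled - y_labeled) * x_labeled)
            + 2 * np.mean((student_w - teacher_w) * x_unlabeled ** 2))
    student_w -= 0.05 * grad                   # plain gradient descent
```

Because both loss terms pull the student toward the same target, the unlabeled distillation term lets the compressed model train well with very few labels, which is the label-efficiency the abstract describes.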