Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation
We propose a novel model named Multi-Channel Attention Selection Generative
Adversarial Network (SelectionGAN) for guided image-to-image translation, where
we translate an input image into another while respecting an external semantic
guidance. The proposed SelectionGAN explicitly utilizes the semantic guidance
information and consists of two stages. In the first stage, the input image and
the conditional semantic guidance are fed into a cycled semantic-guided
generation network to produce initial coarse results. In the second stage, we
refine the initial results by using the proposed multi-scale spatial pooling &
channel selection module and the multi-channel attention selection module.
Moreover, uncertainty maps automatically learned from attention maps are used
to guide the pixel loss for better network optimization. Extensive experiments
on four challenging guided image-to-image translation tasks (face, hand, body
and street view) demonstrate that our SelectionGAN is able to generate
significantly better results than the state-of-the-art methods. Meanwhile, the
proposed framework and modules are unified solutions and can be applied to
solve other generation tasks, such as semantic image synthesis. The code is
available at https://github.com/Ha0Tang/SelectionGAN.

Comment: An extended version of a paper published in CVPR 2019. arXiv admin
note: substantial text overlap with arXiv:1904.0680
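The fusion step the abstract describes can be illustrated with a minimal NumPy sketch: N candidate intermediate images are combined into one output by per-pixel attention weights that are softmax-normalized over the candidate dimension, and a pixel loss is down-weighted where an uncertainty map is high. The shapes, the `attention_select` / `uncertainty_weighted_l1` names, and the `1/(1+u)` weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_select(candidates, attention_logits):
    """Fuse N candidate images via per-pixel multi-channel attention.

    candidates:       (N, H, W, C) intermediate generation results
    attention_logits: (N, H, W, 1) unnormalized per-pixel selection scores
    Returns the fused (H, W, C) image and the normalized attention maps.
    """
    attn = softmax(attention_logits, axis=0)      # weights sum to 1 over N
    return (attn * candidates).sum(axis=0), attn  # convex combination per pixel

def uncertainty_weighted_l1(pred, target, uncertainty):
    # Illustrative uncertainty-guided pixel loss: errors count less
    # where the (assumed non-negative) uncertainty map is high.
    return float(np.mean(np.abs(pred - target) / (1.0 + uncertainty)))

# Toy example: N=4 candidates for an 8x8 RGB image.
rng = np.random.default_rng(0)
cands = rng.random((4, 8, 8, 3))
logits = rng.random((4, 8, 8, 1))
fused, attn = attention_select(cands, logits)
```

Because the attention weights are a softmax, the fused output at each pixel is a convex combination of the candidates, which is what lets the attention maps double as a confidence signal for the uncertainty-guided loss.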