Segmentation Guided Image-to-Image Translation with Adversarial Networks
Image-to-image translation, which aims to map images from one domain to
another, has recently received increasing attention. Existing methods mainly
solve this task with deep generative models and focus on modeling the
relationship between the two domains. However, these methods neglect
higher-level, instance-specific information that could guide the training
process, and consequently produce many unrealistic, low-quality images. They
also lack spatial controllability during translation. To address these
challenges, we propose a novel Segmentation Guided Generative Adversarial
Network (SGGAN), which leverages semantic segmentation to further boost
generation performance and to provide spatial mapping. In particular, a
segmentor network is designed to impose semantic information on the generated
images. Experimental results on a multi-domain face image translation task
empirically demonstrate the spatial controllability of our approach and its
superiority in image quality over several state-of-the-art
methods.

Comment: Accepted for publication in the 2019 14th IEEE International
Conference on Automatic Face & Gesture Recognition (FG 2019).
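The abstract describes a segmentor network that imposes semantic information on the generated images, i.e. the generator is trained against both an adversarial term and a segmentation-consistency term. A minimal numpy sketch of such a combined generator objective might look like the following; the function names, the `lam` weight, and the non-saturating adversarial form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy(pred_probs, target_labels):
    """Mean pixel-wise negative log-likelihood of the target class.

    pred_probs: (H, W, C) per-pixel class probabilities from the segmentor.
    target_labels: (H, W) integer class map (the desired semantic layout).
    """
    n = target_labels.size
    flat = pred_probs.reshape(n, -1)
    picked = flat[np.arange(n), target_labels.ravel()]
    return -np.mean(np.log(picked + 1e-12))

def sggan_generator_loss(d_fake, seg_probs, target_seg, lam=1.0):
    """Sketch of a segmentation-guided generator objective.

    d_fake: discriminator outputs on generated images (probabilities).
    The adversarial term pushes D(fake) toward 1; the segmentation term
    pushes the segmentor's prediction on the fake image toward the
    target semantic layout. `lam` balances the two (assumed hyperparameter).
    """
    adv = -np.mean(np.log(d_fake + 1e-12))      # non-saturating GAN loss
    seg = cross_entropy(seg_probs, target_seg)  # semantic consistency
    return adv + lam * seg
```

Under this sketch, the segmentation term is what gives spatial control: supplying a different `target_seg` at test time requests a different semantic layout in the output.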