XingGAN for Person Image Generation
We propose a novel Generative Adversarial Network (XingGAN or CrossingGAN)
for person image generation tasks, i.e., translating the pose of a given person
to a desired one. The proposed Xing generator consists of two generation
branches that model the person's appearance and shape information,
respectively. Moreover, we propose two novel blocks that transfer and
update the person's shape and appearance embeddings in a crossing way,
so that each branch guides and improves the other, an interaction not
considered by existing GAN-based image generation work. Extensive experiments on two
challenging datasets, i.e., Market-1501 and DeepFashion, demonstrate that the
proposed XingGAN advances the state of the art in terms of both
objective quantitative scores and subjective visual realism. The source code
and trained models are available at https://github.com/Ha0Tang/XingGAN.

Comment: Accepted to ECCV 2020, camera ready (16 pages) + supplementary (6 pages)
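The crossing idea above, where the shape branch and appearance branch each produce a signal that re-weights the other's embedding, can be illustrated with a minimal NumPy sketch. This is not the paper's actual block design (the real blocks are convolutional and operate on image feature maps inside a GAN generator); the function name and the softmax-based attention here are illustrative assumptions only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def crossing_update(app, shape):
    """One simplified 'crossing' step (hypothetical sketch, not XingGAN's
    exact blocks): each branch's embedding is re-weighted by a spatial
    attention map derived from the *other* branch, with a residual add.

    app, shape: (C, N) matrices, C channels over N spatial positions.
    """
    attn_from_shape = softmax(shape, axis=1)  # shape-derived attention
    attn_from_app = softmax(app, axis=1)      # appearance-derived attention
    new_app = app + app * attn_from_shape     # appearance guided by shape
    new_shape = shape + shape * attn_from_app # shape guided by appearance
    return new_app, new_shape

# Toy example: 8 channels over a flattened 4x4 feature map.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))
S = rng.standard_normal((8, 16))
A2, S2 = crossing_update(A, S)
print(A2.shape, S2.shape)  # embeddings keep their shape across the update
```

Stacking several such blocks would let the two branches refine each other iteratively, which is the intuition behind the crossing generator described in the abstract.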