This paper tackles text-guided control of StyleGAN for editing garments in
full-body human images. Existing StyleGAN-based methods struggle to handle
the rich diversity of garments, body shapes, and poses. We propose a
framework for text-guided full-body human image synthesis via an
attention-based latent code mapper, which enables more disentangled control of
StyleGAN than existing mappers. Our latent code mapper adopts an attention
mechanism that adaptively manipulates individual latent codes on different
StyleGAN layers under text guidance. In addition, we introduce feature-space
masking at inference time to avoid unwanted changes caused by text inputs. Our
quantitative and qualitative evaluations show that our method edits
generated images more faithfully to the given text than existing methods.