Online clothing catalogs lack diversity in body shape and garment size.
Brands commonly display their garments on models of one or two sizes, rarely
including plus-size models. In this work, we propose a new method, SizeGAN, for
generating images of garments on different-sized models. To change the garment
and model size while maintaining a photorealistic image, we incorporate image
alignment ideas from the medical imaging literature into the StyleGAN2-ADA
architecture. Our method learns deformation fields at multiple resolutions and
uses a spatial transformer to modify the garment and model size (a code sketch
of this warping step follows the abstract). We evaluate
our approach along three dimensions: realism, garment faithfulness, and size.
To our knowledge, SizeGAN is the first method to address this size
under-representation problem in clothing modeling. We provide an analysis
comparing SizeGAN to other plausible approaches and additionally contribute the
first clothing dataset with size labels. In a user study comparing SizeGAN and
two recent virtual try-on methods, our method ranked first in each dimension
and was strongly preferred for realism and garment faithfulness. Whereas most
previous work has focused on generating photorealistic images of garments, we
show that it is possible to generate images that are both photorealistic and
representative of diverse garment sizes.
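To make the warping step concrete, below is a minimal PyTorch sketch of a spatial transformer that applies a learned dense deformation field, with coarser fields upsampled and combined additively. The function names (`warp_image`, `warp_multiresolution`) and the additive composition of per-resolution fields are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F


def warp_image(image, deformation):
    """Warp `image` with a dense deformation field via a spatial transformer.

    image:       (N, C, H, W) tensor.
    deformation: (N, H, W, 2) per-pixel (x, y) offsets in the normalized
                 [-1, 1] coordinates that grid_sample expects.
    """
    n, _, h, w = image.shape
    # Identity sampling grid in normalized coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=image.device),
        torch.linspace(-1.0, 1.0, w, device=image.device),
        indexing="ij",
    )
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Displace the grid and resample; grid_sample is differentiable, so the
    # deformation field can be learned end to end inside a generator.
    return F.grid_sample(image, identity + deformation, align_corners=True)


def warp_multiresolution(image, fields):
    """Apply deformation fields predicted at several resolutions.

    fields: list of (N, h_i, w_i, 2) tensors, coarse to fine. Each field is
    upsampled to full resolution and the offsets are summed -- a simplifying
    assumption; true composition would resample each field in turn.
    """
    n, _, h, w = image.shape
    total = torch.zeros(n, h, w, 2, device=image.device)
    for f in fields:
        up = F.interpolate(f.permute(0, 3, 1, 2), size=(h, w),
                           mode="bilinear", align_corners=True)
        total = total + up.permute(0, 2, 3, 1)
    return warp_image(image, total)
```

Because the resampling is fully differentiable, a sketch like this can sit between generator layers and be trained with the usual GAN losses; the multi-resolution fields let coarse layers move the overall body outline while fine layers adjust garment details.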