Generative Adversarial Networks (GANs) can synthesize realistic images, and their learned latent spaces have been shown to encode rich semantic information along various
interpretable directions. However, due to the unstructured nature of the
learned latent space, it inherits biases from the training data, in which
groups of visual attributes that are not causally related tend to appear
together, a phenomenon known as spurious correlation, e.g., age and
eyeglasses or women and lipstick. Consequently, the learned distribution
often fails to properly model these rare attribute combinations, and
interpolating along the editing direction for one attribute can result in
entangled changes to other attributes. To address this problem, previous works
typically adjust the learned directions to minimize the changes in other
attributes, yet they still fail on strongly correlated features. In this work,
we study the entanglement issue in both the training data and the learned
latent space of the StyleGAN2-FFHQ model. We propose a novel framework,
SC2GAN, that achieves disentanglement by re-projecting low-density latent
code samples into the original latent space and correcting the editing directions
based on both the high-density and low-density regions. By leveraging the
original meaningful directions and semantic region-specific layers, our
framework interpolates the original latent codes to generate images with
attribute combinations that appear infrequently, then inverts these samples
back to the original latent space.
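To make the editing step concrete, the sketch below illustrates layer-restricted interpolation of a StyleGAN2 W+ latent code along a learned attribute direction; it is a minimal illustration, not the released implementation, and the layer indices, step size, and random stand-ins for the latent code and direction are assumptions made for the example:

```python
import numpy as np

# Assumed StyleGAN2-FFHQ W+ layout: 18 style layers x 512 dimensions per layer.
NUM_LAYERS, LATENT_DIM = 18, 512

def edit_latent(w_plus, direction, step, layers):
    """Shift a W+ latent code along a learned editing direction, restricted
    to the style layers assumed to control the semantic region being edited."""
    direction = direction / (np.linalg.norm(direction) + 1e-8)  # unit-norm direction
    w_edited = w_plus.copy()
    w_edited[list(layers)] += step * direction  # shift only the selected layers
    return w_edited

# Toy stand-ins for an inverted latent code and a learned attribute direction.
rng = np.random.default_rng(0)
w = rng.normal(size=(NUM_LAYERS, LATENT_DIM))   # would come from GAN inversion in practice
eyeglasses_dir = rng.normal(size=LATENT_DIM)    # would come from a direction-learning method

# Hypothetical choice: edit only coarse-to-middle layers, pushing the sample
# toward a low-density attribute combination before re-projection.
w_rare = edit_latent(w, eyeglasses_dir, step=3.0, layers=range(3, 8))
print(w_rare.shape)  # (18, 512): ready to be decoded by a StyleGAN2 synthesis network
```
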
We apply our framework to pre-existing methods that learn meaningful latent
directions and showcase its strong capability to disentangle attributes given
only small amounts of added low-density region samples.

Comment: Accepted to the Out Of Distribution Generalization in Computer Vision workshop at ICCV 2023