Few-shot image generation aims to generate images of high quality and great
diversity with limited data. However, it is difficult for modern GANs to avoid
overfitting when trained on only a few images. The discriminator can easily
memorize all the training samples and guide the generator to replicate them,
leading to severe diversity degradation. Several methods have been proposed to
alleviate overfitting by adapting GANs pre-trained on large source domains to
target domains with limited real samples. In this work, we present a novel
approach to realize few-shot GAN adaptation via masked discrimination. Random
masks are applied to features extracted by the discriminator from input images.
We aim to encourage the discriminator to judge as realistic more diverse
images that share partially common features with the training samples.
Correspondingly, the generator is guided to generate more diverse images
instead of replicating the training samples.
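
A minimal PyTorch sketch of the masking step is given below. It is illustrative only: the names `feature_extractor`, `classifier_head`, and `mask_ratio` are placeholder assumptions, not the paper's exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class MaskedDiscriminator(nn.Module):
    """Sketch of a discriminator whose extracted features are randomly masked.

    `feature_extractor` and `classifier_head` stand in for the convolutional
    body and the final real/fake head of a GAN discriminator; both are
    illustrative assumptions, not the paper's exact design.
    """

    def __init__(self, feature_extractor: nn.Module,
                 classifier_head: nn.Module, mask_ratio: float = 0.3):
        super().__init__()
        self.feature_extractor = feature_extractor
        self.classifier_head = classifier_head
        self.mask_ratio = mask_ratio  # fraction of feature entries to zero out

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.feature_extractor(images)  # e.g. (B, C, H, W) features
        if self.training:
            # Zero out a random subset of feature entries so the real/fake
            # decision is made from partial features only; images that share
            # those partial features with the training samples can then be
            # judged realistic.
            keep = (torch.rand_like(feats) > self.mask_ratio).float()
            feats = feats * keep
        return self.classifier_head(feats)  # real/fake logits
```

Whether masks are applied per feature entry, per channel, or per spatial patch is a design choice; the sketch uses the simplest per-entry variant.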
In addition, we employ a cross-domain consistency loss for the discriminator
to preserve the relative distances between samples in its feature space. This
consistency loss serves as an additional optimization target alongside the
adversarial loss and guides the adapted GAN to retain more of the information
learned from the source domain, yielding higher image quality.
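
One plausible form of such a consistency term, sketched below under stated assumptions, matches the distributions of pairwise feature similarities between the adapted discriminator and a frozen source-domain copy. The function name and the KL-based formulation are assumptions, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def discriminator_consistency_loss(feats_adapted: torch.Tensor,
                                   feats_source: torch.Tensor) -> torch.Tensor:
    """Sketch of a cross-domain consistency loss on discriminator features.

    `feats_adapted` and `feats_source` are assumed to be (B, D) feature
    vectors for the same batch of images, taken from the adapted
    discriminator and a frozen source-domain discriminator. Matching the
    pairwise-similarity distributions keeps the relative distances between
    samples close to those learned on the source domain.
    """
    def pairwise_sims(feats: torch.Tensor) -> torch.Tensor:
        feats = F.normalize(feats, dim=1)   # unit-norm feature vectors
        sims = feats @ feats.t()            # (B, B) cosine similarities
        b = feats.size(0)
        off_diag = ~torch.eye(b, dtype=torch.bool, device=feats.device)
        return sims[off_diag].view(b, b - 1)  # drop self-similarities

    p_source = F.softmax(pairwise_sims(feats_source), dim=1)
    log_p_adapted = F.log_softmax(pairwise_sims(feats_adapted), dim=1)
    # KL(source || adapted): penalize drift of relative sample distances.
    return F.kl_div(log_p_adapted, p_source, reduction="batchmean")
```

In training, this term would be weighted and added to the adversarial loss as the second optimization target described above.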
The effectiveness of our approach is demonstrated both qualitatively and
quantitatively: it achieves higher image quality and greater diversity than
prior methods on a series of few-shot image generation tasks.