The discriminator plays a vital role in training generative adversarial networks
(GANs) by distinguishing real samples from synthesized ones. While the real data
distribution remains fixed, the synthesis distribution keeps varying as the
generator evolves, which in turn alters the binary classification task posed to
the discriminator. We argue that a discriminator whose capacity is adjusted
on the fly can better accommodate such a time-varying task. A comprehensive
empirical study confirms that the proposed training strategy, termed DynamicD,
improves synthesis performance without introducing additional computation cost
or training objectives. Two capacity-adjusting schemes are developed for
training GANs under different data
regimes: i) given a sufficient amount of training data, the discriminator
benefits from a progressively increased learning capacity, and ii) when the
training data is limited, gradually decreasing the layer width mitigates the
over-fitting issue of the discriminator. Experiments on both 2D and 3D-aware
image synthesis tasks conducted on a range of datasets substantiate the
generalizability of our DynamicD as well as its substantial improvement over
the baselines. Furthermore, DynamicD is synergistic with other
discriminator-improving approaches (including data augmentation, regularizers,
and pre-training), and yields further performance gains when combined with them
for learning GANs.