Generative adversarial networks (GANs) are known to benefit from
regularization or normalization of their critic (discriminator) network during
training. In this paper, we analyze the popular spectral normalization scheme,
find a significant drawback and introduce sparsity aware normalization (SAN), a
new alternative approach for stabilizing GAN training. As opposed to other
normalization methods, our approach explicitly accounts for the sparse nature
of the feature maps in convolutional networks with ReLU activations. We
illustrate the effectiveness of our method through extensive experiments with a
variety of network architectures. As we show, sparsity is particularly dominant
in critics used for image-to-image translation settings. In these cases, our
approach improves upon existing methods, using fewer training epochs and
smaller-capacity networks, while requiring practically no computational
overhead.

Comment: AAAI Conference on Artificial Intelligence (AAAI-21)