Self Sparse Generative Adversarial Networks
Generative Adversarial Networks (GANs) are unsupervised generative models that learn a data distribution through adversarial training. However, recent experiments have indicated that GANs are difficult to train, owing to the need to optimize in a high-dimensional parameter space and to the zero-gradient problem. In this work, we propose a Self-Sparse Generative Adversarial Network (Self-Sparse GAN) that reduces the parameter space and alleviates the zero-gradient problem. In Self-Sparse GAN, we design a Self-Adaptive Sparse Transform Module (SASTM), comprising sparsity decomposition and feature-map recombination, which can be applied to multi-channel feature maps to obtain sparse feature maps. The key idea of Self-Sparse GAN is to add an SASTM after every deconvolution layer in the generator, which adaptively reduces the parameter space by exploiting the sparsity of multi-channel feature
maps. We theoretically prove that SASTM not only reduces the search space of the generator's convolution kernel weights but also alleviates the zero-gradient problem, by maintaining meaningful features in the Batch Normalization layer and driving the deconvolution weights away from negative values. Experimental results show that our method achieves better FID scores than WGAN-GP for image generation on MNIST, Fashion-MNIST, CIFAR-10, STL-10, mini-ImageNet, CELEBA-HQ, and LSUN bedrooms, with a relative FID decrease of 4.76% to 21.84%.
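The abstract does not specify how SASTM's sparsity decomposition and recombination are computed, so the following is only a minimal NumPy sketch of the general idea it describes: sparsifying a multi-channel feature map channel-wise and then recombining the surviving channels. The function name `sastm_sketch`, the activity score, the `keep_ratio` parameter, and the energy-preserving rescale are all illustrative assumptions, not the paper's method (which is learned end-to-end).

```python
import numpy as np

def sastm_sketch(feature_maps, keep_ratio=0.5):
    """Illustrative channel-wise sparsification of a (C, H, W) feature map.

    Zeroes the least-active channels (a stand-in for "sparsity
    decomposition") and rescales the survivors so total activation
    energy is roughly preserved (a stand-in for "recombination").
    """
    c = feature_maps.shape[0]
    # Per-channel activity score: mean absolute activation.
    scores = np.abs(feature_maps).mean(axis=(1, 2))
    # Keep the k most active channels.
    k = max(1, int(round(keep_ratio * c)))
    keep = np.argsort(scores)[-k:]
    mask = np.zeros(c, dtype=feature_maps.dtype)
    mask[keep] = 1.0
    sparse = feature_maps * mask[:, None, None]
    # Crude recombination: rescale so total |activation| is preserved.
    scale = np.abs(feature_maps).sum() / (np.abs(sparse).sum() + 1e-8)
    return sparse * scale

# Example: an 8-channel 4x4 feature map, as might follow a deconvolution.
rng = np.random.default_rng(0)
fm = rng.standard_normal((8, 4, 4))
out = sastm_sketch(fm, keep_ratio=0.5)
print(int((np.abs(out).sum(axis=(1, 2)) > 0).sum()))  # prints 4
```

In the paper's architecture such a module would sit after each deconvolution layer of the generator; in this toy version the "adaptive" part is reduced to a fixed `keep_ratio`, whereas the real SASTM learns which channels to suppress.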