
    IntersectGAN: Learning Domain Intersection for Generating Images with Multiple Attributes

    Generative adversarial networks (GANs) have demonstrated great success in generating diverse visual content. However, images generated by existing GANs typically exhibit attributes (e.g., a smiling expression) learned from a single image domain. Consequently, generating images with multiple attributes requires many real samples that simultaneously possess those attributes, which are very expensive to collect. In this paper, we propose a novel GAN, namely IntersectGAN, that learns multiple attributes from different image domains through an intersecting architecture. For example, given two image domains X_1 and X_2, each with a certain attribute, the intersection X_1 ∩ X_2 denotes a new domain whose images possess the attributes of both the X_1 and X_2 domains. The proposed IntersectGAN consists of two discriminators, D_1 and D_2, which distinguish between generated and real samples of their respective domains, and three generators, of which the intersection generator is trained against both discriminators; an overall adversarial loss function is defined over the three generators. As a result, IntersectGAN can be trained on multiple domains, each presenting one specific attribute, and eliminates the need for real sample images that simultaneously possess multiple attributes. Using the CelebFaces Attributes dataset, IntersectGAN produces high-quality face images possessing multiple attributes (e.g., a face with black hair and a smiling expression). Both qualitative and quantitative evaluations compare the proposed IntersectGAN with baseline methods, and several further applications of IntersectGAN are explored with promising results.
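The abstract describes the key training signal: the intersection generator must fool both domain discriminators at once, while each domain generator answers only to its own discriminator. The paper's exact loss is not given here, so the following is a minimal sketch of that idea under the common non-saturating GAN loss; the function names are illustrative, not taken from the paper.

```python
import math

def gen_loss(d_score_on_fake):
    """Non-saturating generator loss -log D(G(z)) for one discriminator,
    where d_score_on_fake is D's estimated probability that the
    generated sample is real (in (0, 1])."""
    return -math.log(d_score_on_fake)

def intersection_generator_loss(d1_score, d2_score):
    """Sketch of the intersection generator's objective: it is trained
    against BOTH discriminators D_1 and D_2, so its loss sums the
    per-discriminator generator losses. The per-domain generators would
    each use gen_loss against their single discriminator only."""
    return gen_loss(d1_score) + gen_loss(d2_score)

# Example: if both discriminators are maximally uncertain (score 0.5),
# the intersection generator's loss is -log(0.5) from each, i.e. 2*log(2).
loss = intersection_generator_loss(0.5, 0.5)
```

The sum structure captures the stated intent: the only way for the intersection generator to drive its loss down is to produce images that look real to both domain discriminators simultaneously, i.e. images carrying both attributes.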

    Bayesian Generative Adversarial Nets with Dropout Inference

    Generative adversarial networks are among the most popular approaches for generating new data from complex high-dimensional data distributions. They have revolutionized generative modeling by producing high-quality samples that closely resemble the true data distribution. However, these samples often cover only a few high-density areas of the true data distribution; because some modes are missing from the generated data, this issue is referred to as mode collapse. Bayesian GANs (BGANs) can address this to a great extent by applying Bayesian learning principles: instead of learning point estimates of the network parameters, BGANs learn a probability distribution over these parameters and use the posterior distribution over parameters to make predictions. As these models are huge neural networks, analytical inference is not feasible due to the intractable likelihood and evidence terms. Hence, BGANs perform approximate inference based on stochastic gradient Hamiltonian Monte Carlo (SGHMC) sampling, which is computationally expensive and exhibits convergence problems. We propose a simple and effective Bayesian GAN model based on Monte Carlo dropout inference (BDGAN), and establish a theoretical connection between variational inference in Bayesian GANs and Monte Carlo dropout in GANs. The effectiveness of the proposed model in overcoming mode collapse is demonstrated on various synthetic and real-world data sets. Additionally, we analyse training time and memory usage to showcase the proposed method's advantages over the Bayesian GAN. © 2021 ACM
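The Monte Carlo dropout inference the abstract contrasts with SGHMC works by keeping dropout active at prediction time and averaging over several stochastic forward passes, so each pass samples a different thinned network. The sketch below illustrates that mechanism on a toy one-weight "network"; the function names and the toy network are illustrative assumptions, not the paper's implementation.

```python
import random

def mc_dropout_predict(forward, x, n_samples=50, seed=0):
    """Monte Carlo dropout inference: run n_samples stochastic forward
    passes with dropout still active, then report the mean prediction
    and the sample variance as an approximate predictive uncertainty."""
    rng = random.Random(seed)
    preds = [forward(x, rng) for _ in range(n_samples)]
    mean = sum(preds) / n_samples
    var = sum((p - mean) ** 2 for p in preds) / n_samples
    return mean, var

def toy_forward(x, rng, w=2.0, p=0.5):
    """Toy one-weight network with inverted dropout: the weight is
    zeroed with probability p, otherwise scaled by 1/(1-p) so the
    expected output stays w * x."""
    keep = 0.0 if rng.random() < p else 1.0 / (1.0 - p)
    return w * keep * x

# Averaging many dropout passes recovers roughly w * x = 2.0, while the
# variance across passes reflects the model's parameter uncertainty.
mean, var = mc_dropout_predict(toy_forward, 1.0, n_samples=2000)
```

Compared with SGHMC, which must store and iterate over many sampled parameter sets, this needs only the single trained network plus repeated forward passes, which is the training-time and memory advantage the abstract highlights.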