121 research outputs found

    A Systematic Survey of Regularization and Normalization in GANs

    Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks. The original GAN was proposed under the non-parametric assumption that networks have infinite capacity, yet it is still unknown whether GANs can generate realistic samples without any prior information. Because of this overly strong assumption, many issues in GAN training remain unaddressed, such as non-convergence, mode collapse, and vanishing gradients. Regularization and normalization are common ways of introducing prior information to stabilize training and improve discrimination. Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge there exists no comprehensive survey that focuses primarily on the objectives and development of these methods, apart from a few limited-scope studies. In this work, we conduct a comprehensive survey of regularization and normalization techniques from different perspectives of GAN training. First, we systematically describe these perspectives and thereby obtain the different objectives of regularization and normalization. Based on these objectives, we propose a new taxonomy. Furthermore, we compare the performance of the mainstream methods on different datasets and investigate the regularization and normalization techniques that are frequently employed in state-of-the-art (SOTA) GANs. Finally, we highlight potential future directions of research in this domain.
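    To make the surveyed class of methods concrete, the following is a minimal sketch of one widely used gradient-based regularizer, the R1 penalty (Mescheder et al., 2018), which penalizes the discriminator's gradient norm on real samples. It assumes PyTorch; discriminator and real_images are illustrative placeholders rather than names from the survey.

        import torch

        def r1_penalty(discriminator, real_images, gamma=10.0):
            # Penalize the squared gradient norm of the discriminator on real
            # data; this term is added to the discriminator loss.
            real_images = real_images.detach().requires_grad_(True)
            scores = discriminator(real_images)
            # Gradient of the summed scores with respect to the input images.
            (grad,) = torch.autograd.grad(scores.sum(), real_images, create_graph=True)
            # Mean squared L2 norm over the batch, scaled by gamma / 2.
            return (gamma / 2.0) * grad.flatten(start_dim=1).pow(2).sum(dim=1).mean()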

    On gradient regularizers for MMD GANs

    We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on 160 × 160 CelebA and 64 × 64 unconditional ImageNet.
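    For contrast with the exact constraints proposed here, the following is a minimal sketch of the approximate baseline the abstract mentions: an MMD critic loss with an additive gradient penalty in the style of WGAN-GP (Gulrajani et al., 2017). It assumes PyTorch; critic, real, and fake are illustrative placeholders, and the paper's exact analytic constraint is not reproduced here.

        import torch

        def gaussian_mmd2(x, y, sigma=1.0):
            # Biased estimate of squared MMD between feature batches x and y
            # under a Gaussian kernel.
            xy = torch.cat([x, y], dim=0)
            k = torch.exp(-torch.cdist(xy, xy).pow(2) / (2 * sigma ** 2))
            n = x.size(0)
            return k[:n, :n].mean() + k[n:, n:].mean() - 2 * k[:n, n:].mean()

        def critic_loss(critic, real, fake, lam=10.0):
            # Maximize the MMD between critic features of real and generated
            # samples (fake is assumed detached from the generator graph).
            mmd2 = gaussian_mmd2(critic(real), critic(fake))
            # Additive penalty: push the critic's gradient norm toward 1 on
            # random interpolates between real and fake samples.
            eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
            interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
            (grad,) = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
            gp = (grad.flatten(start_dim=1).norm(dim=1) - 1).pow(2).mean()
            return -mmd2 + lam * gp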

    Optical lattice experiments at unobserved conditions and scales through generative adversarial deep learning

    Machine learning provides a novel avenue for the study of experimental realizations of many-body systems, and has recently been proven successful in analyzing properties of experimental data of ultracold quantum gases. We here show that deep learning succeeds in the more challenging task of modelling such an experimental data distribution. Our generative model (RUGAN) is able to produce snapshots of a doped two-dimensional Fermi-Hubbard model that are indistinguishable from previously reported experimental realizations. Importantly, it is capable of accurately generating snapshots at conditions for which it did not observe any experimental data, such as at higher doping values. On top of that, our generative model extracts relevant patterns from small-scale examples and can use these to construct new configurations at a larger size that serve as a precursor to observations at scales that are currently experimentally inaccessible. The snapshots created by our model, which come at effectively no cost, are extremely useful: they can be employed to quantitatively test new theoretical developments under conditions that have not been explored experimentally, to parameterize phenomenological models, or to train other, more data-intensive machine learning methods. We provide predictions for experimental observables at unobserved conditions and benchmark these against modern theoretical frameworks. The deep learning method we develop here is broadly applicable and can be used for the efficient large-scale simulation of equilibrium and nonequilibrium physical systems.
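    The scale extrapolation described above is possible when a generator is fully convolutional, so that the output size tracks the spatial size of the input noise. Below is a minimal toy sketch of that mechanism, assuming PyTorch; the architecture is purely illustrative and is not the paper's RUGAN.

        import torch
        from torch import nn

        # A fully convolutional generator: no fixed-size layers, so the same
        # weights can emit lattices of any spatial extent.
        generator = nn.Sequential(
            nn.Conv2d(8, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

        small = generator(torch.randn(16, 8, 10, 10))  # train-scale 10x10 snapshots
        large = generator(torch.randn(16, 8, 40, 40))  # same weights, 40x40 lattice
        print(small.shape, large.shape)  # (16, 1, 10, 10) and (16, 1, 40, 40)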