Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning, and cross-domain learning. Comment: International Conference on Machine Learning (ICML
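The Gromov-Wasserstein discrepancy compares distributions through their intra-space distance matrices, so the two spaces never need a shared metric. A minimal NumPy sketch of the GW objective evaluated at a fixed coupling (the function name `gw_cost` and the toy point clouds are illustrative, not from the paper):

```python
import numpy as np

def gw_cost(C1, C2, T):
    """Gromov-Wasserstein objective for a fixed coupling T.

    C1 (n x n) and C2 (m x m) are intra-space distance matrices;
    T (n x m) is a coupling between the two distributions.
    Computes sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l],
    i.e. distances are compared relationally, never across spaces.
    """
    diff = C1[:, None, :, None] - C2[None, :, None, :]
    return np.einsum("ijkl,ij,kl->", diff ** 2, T, T)

# Two "incomparable" point clouds: same cluster structure, different dimension.
x = np.array([[0.0], [0.0], [10.0], [10.0]])             # 1-D points
y = np.array([[0.0, 0], [0, 0], [10, 10], [10, 10]])     # 2-D points
C1 = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
C2 = np.linalg.norm(y[:, None] - y[None, :], axis=-1) / np.sqrt(2)
T = np.eye(4) / 4  # identity coupling with uniform marginals

print(gw_cost(C1, C2, T))  # relationally identical spaces -> cost 0.0
```

Minimizing this cost over couplings T (rather than evaluating it at a fixed one, as here) yields the GW distance used as the discrepancy in the paper's model.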
Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions
Generative Adversarial Networks (GANs) are a novel class of deep generative
models which has recently gained significant attention. GANs learn complex,
high-dimensional distributions implicitly over images, audio, and other data.
However, there exist major challenges in the training of GANs, namely mode
collapse, non-convergence, and instability, arising from inappropriate design
of the network architecture, choice of objective function, and selection of the
optimization algorithm. Recently, to address these challenges, several solutions for better
design and optimization of GANs have been investigated based on techniques of
re-engineered network architectures, new objective functions and alternative
optimization algorithms. To the best of our knowledge, no existing survey has
specifically focused on the broad and systematic development of these
solutions. In this study, we perform a comprehensive survey of the
advancements in GANs design and optimization solutions proposed to handle GANs
challenges. We first identify key research issues within each design and
optimization technique and then propose a new taxonomy to structure solutions
by key research issues. In accordance with the taxonomy, we provide a detailed
discussion on different GANs variants proposed within each solution and their
relationships. Finally, based on the insights gained, we present the promising
research directions in this rapidly growing field. Comment: 42 pages, Figure 13, Table
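The minimax objective underlying these training difficulties can be made concrete. A minimal NumPy sketch of the standard discriminator loss and the non-saturating generator loss (function names are illustrative; real training would backpropagate through network parameters rather than fixed arrays):

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))]."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss_nonsaturating(d_fake):
    """Non-saturating generator loss: -E[log D(G(z))].

    Preferred in practice over the saturating form log(1 - D(G(z))),
    whose gradient vanishes early in training when the discriminator
    easily rejects generated samples.
    """
    return -np.mean(np.log(d_fake))

# At the theoretical equilibrium the discriminator outputs 0.5 everywhere:
d_real = np.full(8, 0.5)
d_fake = np.full(8, 0.5)
print(d_loss(d_real, d_fake))        # 2 * log 2, about 1.386
print(g_loss_nonsaturating(d_fake))  # log 2, about 0.693
```

Mode collapse, non-convergence, and instability all manifest in how these two losses evolve against each other, which is why the surveyed solutions target the architecture, the objective, and the optimizer.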
Revisiting Precision and Recall Definition for Generative Model Evaluation
In this article we revisit the definition of Precision-Recall (PR) curves for
generative models proposed by Sajjadi et al. (arXiv:1806.00035). Rather than
providing a scalar for generative quality, PR curves distinguish mode-collapse
(poor recall) and bad quality (poor precision). We first generalize their
formulation to arbitrary measures, hence removing any restriction to finite
support. We also expose a bridge between PR curves and type I and type II error
rates of likelihood ratio classifiers on the task of discriminating between
samples of the two distributions. Building upon this new perspective, we
propose a novel algorithm to approximate precision-recall curves, which shares
some interesting methodological properties with the hypothesis testing
technique of Lopez-Paz et al. (arXiv:1610.06545). We demonstrate the value
of the proposed formulation over the original approach on controlled
multi-modal datasets. Comment: ICML 201
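For intuition, the original discrete formulation of Sajjadi et al. (arXiv:1806.00035), which this article generalizes to arbitrary measures, can be sketched directly: for each trade-off parameter lambda, precision is alpha(lambda) = sum_w min(lambda*P(w), Q(w)) and recall is beta(lambda) = alpha(lambda)/lambda. The function name and the two-mode toy example below are ours:

```python
import numpy as np

def prd_curve(p, q, num_angles=201):
    """PRD curve for discrete reference P and model Q (Sajjadi et al.).

    Sweeps lambda = tan(angle) over (0, pi/2) so the full curve is
    covered, and returns arrays of (precision, recall) values.
    """
    angles = np.linspace(1e-6, np.pi / 2 - 1e-6, num_angles)
    lambdas = np.tan(angles)
    alpha = np.array([np.minimum(l * p, q).sum() for l in lambdas])
    beta = alpha / lambdas
    return alpha, beta

# Mode collapse: the model puts all its mass on one of two equal modes.
p = np.array([0.5, 0.5])   # reference distribution
q = np.array([1.0, 0.0])   # model distribution
alpha, beta = prd_curve(p, q)
print(alpha.max())  # best precision: 1.0 (all model mass on real modes)
print(beta.max())   # best recall:    0.5 (half the modes are missing)
```

This is exactly the separation the abstract highlights: mode collapse appears as poor recall while precision stays high, a distinction a single scalar score cannot express.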