Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning and cross-domain learning.
Comment: International Conference on Machine Learning (ICML 2019)
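For intuition, here is a minimal numpy sketch of the Gromov-Wasserstein objective the abstract builds on: it compares the pairwise distance matrices of two (possibly incomparable) spaces relationally, under a candidate coupling. This is an illustration under my own naming (`gw_objective`, `Dx`, `Dy`, `gamma`), not the paper's code; the matrix-product form is a standard algebraic expansion of the four-way sum.

```python
import numpy as np

def gw_objective(Dx, Dy, gamma):
    """Gromov-Wasserstein objective (illustrative sketch):
    sum_{i,j,k,l} (Dx[i,k] - Dy[j,l])**2 * gamma[i,j] * gamma[k,l],
    i.e. how well the coupling gamma preserves pairwise distances."""
    p = gamma.sum(axis=1)            # marginal of gamma on the first space
    q = gamma.sum(axis=0)            # marginal of gamma on the second space
    # Expand the square so the 4-way sum reduces to matrix products.
    term_x = p @ (Dx ** 2) @ p       # E[Dx^2] under p (x) p
    term_y = q @ (Dy ** 2) @ q       # E[Dy^2] under q (x) q
    cross = np.sum((Dx @ gamma @ Dy.T) * gamma)
    return term_x + term_y - 2.0 * cross

# Toy usage: two 3-point clouds of different dimension, independent coupling.
X = np.random.randn(3, 2)
Y = np.random.randn(3, 5)
Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
gamma = np.full((3, 3), 1.0 / 9.0)   # product of uniform marginals
print(gw_objective(Dx, Dy, gamma))
```

Because only intra-space distances enter the objective, the two spaces never need a shared metric, which is what lets the framework operate "across incomparable spaces".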
Manifold-valued Image Generation with Wasserstein Generative Adversarial Nets
Generative modeling over natural images is one of the most fundamental
machine learning problems. However, few modern generative models, including
Wasserstein Generative Adversarial Nets (WGANs), are studied on manifold-valued
images that are frequently encountered in real-world applications. To fill the
gap, this paper first formulates the problem of generating manifold-valued
images and exploits three typical instances: hue-saturation-value (HSV) color
image generation, chromaticity-brightness (CB) color image generation, and
diffusion-tensor (DT) image generation. For the proposed generative modeling
problem, we then introduce a theorem of optimal transport to derive a new
Wasserstein distance of data distributions on complete manifolds, enabling us
to achieve a tractable objective under the WGAN framework. In addition, we
recommend three benchmark datasets: CIFAR-10 HSV/CB color images, ImageNet
HSV/CB color images, and the UCL DT image dataset. On these three datasets, we
experimentally demonstrate that the proposed manifold-aware WGAN model can
generate more plausible manifold-valued images than its competitors.
Comment: Accepted by AAAI 2019
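To see why "manifold-valued" matters for images like HSV, consider the hue channel: it lives on a circle, so a plain Euclidean comparison misjudges nearby colors. A short Python sketch of this point, using only the standard library (`hue_distance` is a hypothetical helper of mine, not from the paper):

```python
import colorsys

# Hue in HSV is an angle on the unit circle: hues 0.99 and 0.01 are
# nearly the same color, although their Euclidean gap is large.
def hue_distance(h1, h2):
    """Geodesic distance between two hues in [0, 1) on the circle."""
    d = abs(h1 - h2) % 1.0
    return min(d, 1.0 - d)

h1, _, _ = colorsys.rgb_to_hsv(1.0, 0.01, 0.0)   # red, hue ~ 0.00
h2, _, _ = colorsys.rgb_to_hsv(1.0, 0.0, 0.05)   # red, hue ~ 0.99
print(abs(h1 - h2))          # large Euclidean difference
print(hue_distance(h1, h2))  # tiny geodesic difference
```

Transporting mass along such geodesics, rather than straight lines in pixel space, is the gap a Wasserstein distance on complete manifolds is meant to close.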
Sliced Wasserstein Generative Models
In generative modeling, the Wasserstein distance (WD) has emerged as a useful
metric to measure the discrepancy between generated and real data
distributions. Unfortunately, it is challenging to approximate the WD of
high-dimensional distributions. In contrast, the sliced Wasserstein distance
(SWD) factorizes high-dimensional distributions into their multiple
one-dimensional marginal distributions and is thus easier to approximate. In
this paper, we introduce novel approximations of the primal and dual SWD.
Instead of using a large number of random projections, as is done by
conventional SWD approximation methods, we propose to approximate SWDs with a
small number of parameterized orthogonal projections in an end-to-end deep
learning fashion. As concrete applications of our SWD approximations, we design
two types of differentiable SWD blocks to equip modern generative
frameworks---Auto-Encoders (AE) and Generative Adversarial Networks (GAN). In
the experiments, we not only show the superiority of the proposed generative
models on standard image synthesis benchmarks, but also demonstrate the
state-of-the-art performance on challenging high resolution image and video
generation in an unsupervised manner.
Comment: This paper is accepted by CVPR 2019; it was accidentally uploaded as
a new submission (arXiv:1904.05408, which has been withdrawn). The code is
available at https://github.com/musikisomorphie/swd.git
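As a reference point, the conventional random-projection SWD that the abstract contrasts with can be sketched in a few lines of numpy; the paper's contribution replaces these random directions with a small set of learned, parameterized orthogonal projections. Function name and defaults below are my choices, not the authors':

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=128, p=2, seed=0):
    """Monte-Carlo sliced Wasserstein distance between equal-size
    samples X, Y of shape (n, d): project onto random unit directions,
    then use the closed-form 1-D distance (match sorted projections)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    xp = np.sort(X @ theta.T, axis=0)  # (n, n_proj) sorted 1-D projections
    yp = np.sort(Y @ theta.T, axis=0)
    return np.mean(np.abs(xp - yp) ** p) ** (1.0 / p)

# Toy usage: the distance shrinks as the distributions align.
X = np.random.default_rng(1).normal(size=(256, 10))
Y = np.random.default_rng(2).normal(loc=2.0, size=(256, 10))
print(sliced_wasserstein(X, Y))  # clearly separated samples
print(sliced_wasserstein(X, X))  # ~0
```

Sorting solves each one-dimensional transport problem exactly, which is why slicing sidesteps the difficulty of approximating the Wasserstein distance directly in high dimensions.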