Scalable Unbalanced Optimal Transport using Generative Adversarial Networks
Generative adversarial networks (GANs) are an expressive class of neural
generative models with tremendous success in modeling high-dimensional
continuous measures. In this paper, we present a scalable method for unbalanced
optimal transport (OT) based on the generative-adversarial framework. We
formulate unbalanced OT as a problem of simultaneously learning a transport map
and a scaling factor that push a source measure to a target measure in a
cost-optimal manner. In addition, we propose an algorithm for solving this
problem based on stochastic alternating gradient updates, similar in practice
to GANs. We also provide theoretical justification for this formulation,
showing that it is closely related to an existing static formulation by Liero
et al. (2018), and perform numerical experiments demonstrating how this
methodology can be applied to population modeling.
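As a rough sketch of the kind of objective the abstract describes (the notation below is illustrative, not taken from the paper): given a source measure \mu, a target measure \nu, and a ground cost c, one jointly learns a transport map T and a nonnegative scaling factor \xi, e.g.

\[
\min_{T,\ \xi \ge 0}\ \mathbb{E}_{x\sim\mu}\!\left[\xi(x)\,c(x,T(x))\right]
\;+\; \lambda\, D\!\left(T_{\#}(\xi\mu),\ \nu\right)
\;+\; \gamma\, \mathbb{E}_{x\sim\mu}\!\left[\Psi(\xi(x))\right],
\]

where T_{\#}(\xi\mu) is the pushforward of the reweighted source, the divergence D is enforced adversarially by a discriminator (the GAN component), and \Psi penalizes creating or destroying mass; the penalties \lambda D and \gamma\Psi here stand in for whatever divergences the paper actually uses. Training then alternates stochastic gradient updates between (T, \xi) and the discriminator, as in standard GAN training.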
Quadratically-Regularized Optimal Transport on Graphs
Optimal transportation provides a means of lifting distances between points
on a geometric domain to distances between signals over the domain, expressed
as probability distributions. On a graph, transportation problems can be used
to express challenging tasks involving matching supply to demand with minimal
shipment expense; in discrete language, these become minimum-cost network flow
problems. Regularization is typically needed to ensure uniqueness in the
linear ground-distance case and to improve optimization convergence;
state-of-the-art techniques employ entropic regularization on the
transportation matrix. In this paper, we explore a quadratic alternative to
entropic regularization for transport over a graph. We theoretically analyze
the behavior of quadratically-regularized graph transport, characterizing how
regularization affects the structure of flows in the regime of small but
nonzero regularization. We further exploit elegant second-order structure in
the dual of this problem to derive an easily implemented Newton-type
optimization algorithm.
Comment: 27 pages
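To make the contrast concrete, here is the generic dense-coupling form of the two regularizers (a sketch in our own notation; the paper itself works with flows on graph edges, so its formulation differs in detail):

\[
\min_{\Pi \in U(\mu,\nu)} \langle C, \Pi\rangle + \varepsilon \sum_{ij} \Pi_{ij}\left(\log \Pi_{ij} - 1\right)
\qquad\text{vs.}\qquad
\min_{\Pi \in U(\mu,\nu)} \langle C, \Pi\rangle + \frac{\varepsilon}{2}\, \|\Pi\|_F^2,
\]

where U(\mu,\nu) is the set of couplings with marginals \mu and \nu. Unlike the entropic penalty, whose solutions are always dense, the quadratic penalty admits exactly sparse optima, which is consistent with the abstract's focus on how small regularization affects the structure of flows.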
Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning and cross-domain learning.
Comment: International Conference on Machine Learning (ICML)
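In its standard discrete form, the Gromov-Wasserstein discrepancy referenced above compares distributions through their internal distance matrices rather than through any cross-space cost:

\[
\mathrm{GW}(\mu,\nu) \;=\; \min_{\Pi \in U(\mu,\nu)} \sum_{i,j,k,l} \left| d_X(x_i,x_k) - d_Y(y_j,y_l) \right|^2 \Pi_{ij}\,\Pi_{kl},
\]

so the two spaces need not share a metric or even a dimension. Below is a minimal sketch of computing it, assuming the POT (Python Optimal Transport) library; it illustrates the distance itself, not the paper's generative training loop.

    # Minimal Gromov-Wasserstein sketch using POT (pip install pot).
    import numpy as np
    import ot

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 2))   # 30 points in R^2
    Y = rng.normal(size=(30, 5))   # 30 points in R^5: an "incomparable" space

    # GW only ever sees within-space pairwise distances.
    C1 = ot.dist(X, X); C1 /= C1.max()
    C2 = ot.dist(Y, Y); C2 /= C2.max()

    p, q = ot.unif(30), ot.unif(30)  # uniform weights on each point cloud

    # Coupling minimizing the squared mismatch between distance matrices.
    coupling, log = ot.gromov.gromov_wasserstein(
        C1, C2, p, q, loss_fun='square_loss', log=True)
    print('GW discrepancy:', log['gw_dist'])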