    Dynamical Optimal Transport on Discrete Surfaces

    We propose a technique for interpolating between probability distributions on discrete surfaces, based on the theory of optimal transport. Unlike previous attempts that use linear programming, our method is based on a dynamical formulation of quadratic optimal transport proposed for flat domains by Benamou and Brenier [2000], adapted to discrete surfaces. Our structure-preserving construction yields a Riemannian metric on the (finite-dimensional) space of probability distributions on a discrete surface, which translates the so-called Otto calculus into discrete language. From a practical perspective, our technique provides a smooth interpolation between distributions on discrete surfaces with less diffusion than state-of-the-art algorithms involving entropic regularization. Beyond interpolation, we show how our discrete notion of optimal transport extends to other tasks, such as distribution-valued Dirichlet problems and time integration of gradient flows.
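    For reference, the flat-domain formulation of Benamou and Brenier [2000] that the paper adapts reads as follows (a standard statement of the dynamical problem, not quoted from the paper itself):

        \[
          W_2^2(\mu_0, \mu_1) \;=\; \min_{(\rho, v)} \int_0^1 \!\! \int |v_t|^2 \, d\rho_t \, dt
          \quad \text{subject to} \quad
          \partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0, \;\;
          \rho_0 = \mu_0, \;\; \rho_1 = \mu_1.
        \]

    Restricting this variational problem to a triangulated surface is what produces the finite-dimensional Riemannian structure on the space of distributions mentioned above.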

    Quadratically-Regularized Optimal Transport on Graphs

    Optimal transportation provides a means of lifting distances between points on a geometric domain to distances between signals over the domain, expressed as probability distributions. On a graph, transportation problems can be used to express challenging tasks involving matching supply to demand with minimal shipment expense; in discrete language, these become minimum-cost network flow problems. Regularization is typically needed to ensure uniqueness for the linear ground distance case and to improve optimization convergence; state-of-the-art techniques employ entropic regularization on the transportation matrix. In this paper, we explore a quadratic alternative to entropic regularization for transport over a graph. We theoretically analyze the behavior of quadratically-regularized graph transport, characterizing how regularization affects the structure of flows in the regime of small but nonzero regularization. We further exploit elegant second-order structure in the dual of this problem to derive an easily implemented Newton-type optimization algorithm.
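    As a concrete illustration (ours, not the authors' code), the quadratically-regularized flow problem min_J c.J + (alpha/2)||J||^2 subject to B J = b, J >= 0, with B the node-edge incidence matrix and b the net demand, has a smooth concave dual that even plain gradient ascent can maximize; the paper's Newton-type method exploits second-order structure of this same dual. A minimal NumPy sketch, with all names and the toy data chosen by us:

        import numpy as np

        def quad_reg_flow(B, c, b, alpha=0.1, iters=5000):
            """Dual gradient ascent for min_J c.J + (alpha/2)||J||^2, BJ = b, J >= 0.

            Dual: g(phi) = b.phi - (1/(2 alpha)) ||max(0, B^T phi - c)||^2,
            with gradient b - B J(phi), where J(phi) = max(0, B^T phi - c)/alpha.
            """
            m, n = B.shape                              # m nodes, n edges
            phi = np.zeros(m)                           # one dual potential per node
            step = alpha / np.linalg.norm(B, 2) ** 2    # 1/L, with L = ||B||^2 / alpha
            for _ in range(iters):
                J = np.maximum(0.0, B.T @ phi - c) / alpha  # primal flow from duals
                phi += step * (b - B @ J)                   # ascend the dual
            return J, phi

        # Toy example: path graph 0 -> 1 -> 2, one unit moved from node 0 to node 2.
        B = np.array([[-1.0,  0.0],    # column e has -1 at the tail of edge e
                      [ 1.0, -1.0],    # and +1 at its head
                      [ 0.0,  1.0]])
        c = np.array([1.0, 1.0])       # unit edge costs
        b = np.array([-1.0, 0.0, 1.0]) # net demand rho_1 - rho_0
        J, _ = quad_reg_flow(B, c, b)
        print(J)                       # approximately [1, 1]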

    A JKO splitting scheme for Kantorovich-Fisher-Rao gradient flows

    In this article we set up a splitting variant of the JKO scheme in order to handle gradient flows with respect to the Kantorovich-Fisher-Rao metric, recently introduced and defined on the space of positive Radon measures with varying masses. We successively perform a time step for the quadratic Wasserstein/Monge-Kantorovich distance, and then one for the Hellinger/Fisher-Rao distance. Exploiting the inf-convolution structure of the metric, we show convergence of the whole process for the standard class of energy functionals under suitable compactness assumptions, and investigate in detail the case of internal energies. The interest is twofold: on the one hand we prove existence of weak solutions for a certain class of reaction-advection-diffusion equations, and on the other hand the process is constructive and well adapted to available numerical solvers.
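    Schematically, the splitting alternates two minimizing-movement half-steps of size tau, with F the driving energy functional (our paraphrase of the construction described above):

        \[
          \rho^{n+1/2} \in \operatorname*{argmin}_{\rho} \; \frac{1}{2\tau} W_2^2(\rho, \rho^n) + F(\rho),
          \qquad
          \rho^{n+1} \in \operatorname*{argmin}_{\rho} \; \frac{1}{2\tau} \mathrm{Hell}^2(\rho, \rho^{n+1/2}) + F(\rho),
        \]

    so that Wasserstein transport and Hellinger/Fisher-Rao reaction are handled in separate substeps, mirroring the inf-convolution structure of the Kantorovich-Fisher-Rao metric.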

    Interior-point algorithms for convex optimization based on primal-dual metrics

    We propose and analyse primal-dual interior-point algorithms for convex optimization problems in conic form. The families of algorithms we analyse are so-called short-step algorithms, and they match the current best iteration complexity bounds, established by the primal-dual symmetric interior-point algorithms of Nesterov and Todd for symmetric cone programming problems with given self-scaled barriers. Our results apply to any self-concordant barrier for any convex cone. We also prove that certain specializations of our algorithms to hyperbolic cone programming problems (which lie strictly between symmetric cone programming and general convex optimization problems in terms of generality) can take advantage of the favourable special structure of hyperbolic barriers. We make new connections to Riemannian geometry, integrals over operator spaces, and Gaussian quadrature, and we strengthen the connection of our algorithms to quasi-Newton updates, and hence to first-order methods in general.
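    For context, the bound matched here is the standard short-step complexity for path-following with a self-concordant barrier of parameter nu (a textbook statement, not specific to this paper): an epsilon-optimal solution is reached within

        \[
          O\!\left( \sqrt{\nu} \, \log \tfrac{1}{\varepsilon} \right)
        \]

    iterations, each consisting of a single Newton step that tracks the central path from one neighborhood point to the next.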

    Trends in Mathematical Imaging and Surface Processing

    Motivated both by industrial applications and by the challenge of new problems, interest in the field of image and surface processing has grown considerably over recent years. It has become clear that even though the application areas differ significantly, the methodological overlap is enormous. Although contributions to the field come from almost every discipline in mathematics, a major role is played by partial differential equations, in particular by geometric and variational modeling and by their numerical counterparts. The aim of the workshop was to gather a group of leading experts from mathematics, engineering, and computer graphics to cover the main developments.

    A geodesic interior-point method for linear optimization over symmetric cones

    We develop a new interior-point method for symmetric-cone optimization, a common generalization of linear, second-order-cone, and semidefinite programming. Our key idea is to update iterates along a geodesic of the cone rather than within the kernel of the linear constraints. This approach yields a primal-dual-symmetric, scale-invariant, and line-search-free algorithm that uses just half the variables of a standard primal-dual method. With elementary arguments, we establish polynomial-time convergence matching the standard square-root-of-n bound. Finally, we prove global convergence of a long-step variant and compare the approaches computationally. For linear programming, our algorithms reduce to central-path tracking in the log domain.
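    To make the geodesic update concrete in the simplest case: for linear programming the cone is the positive orthant, whose geodesics under the invariant (Hessian-of-the-log-barrier) metric are componentwise exponentials (an illustrative standard fact, not a quote from the paper):

        \[
          x(t) = x \circ e^{t d}, \qquad \text{equivalently} \qquad \log x(t) = \log x + t\, d,
        \]

    which is why the method reduces to central-path tracking in the log domain; for the semidefinite cone the analogous geodesic is \( \gamma(t) = X^{1/2} \bigl( X^{-1/2} Y X^{-1/2} \bigr)^{t} X^{1/2} \).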

    Non-Convex and Geometric Methods for Tomography and Label Learning

    Data labeling is a fundamental problem of mathematical data analysis in which each data point is assigned exactly one label (prototype) from a finite predefined set. In this thesis we study two challenging extensions, where either the input data cannot be observed directly or prototypes are not available beforehand. The main application of the first setting is discrete tomography. We propose several non-convex variational as well as smooth geometric approaches to joint image label assignment and reconstruction from indirect measurements with known prototypes. In particular, we consider spatial regularization of assignments, based on the KL-divergence, which takes into account the smooth geometry of discrete probability distributions endowed with the Fisher-Rao (information) metric, i.e., the assignment manifold. Finally, the geometric point of view leads to a smooth flow evolving on a Riemannian submanifold that incorporates the tomographic projection constraints directly into the geometry of assignments. Furthermore, we investigate corresponding implicit numerical schemes, which amount to solving a sequence of convex problems. Likewise, for the second setting, when the prototypes are absent, we introduce and study a smooth dynamical system for unsupervised data labeling which evolves by geometric integration on the assignment manifold. Rigorously abstracting from "data-label" to "data-data" decisions leads to interpretable low-rank data representations, which themselves are parameterized by label assignments. The resulting self-assignment flow learns latent prototypes within the very same framework in which they are used for inference. Moreover, a single parameter, the scale of regularization in terms of spatial context, drives the entire process. By smooth geodesic interpolation between different normalizations of self-assignment matrices on the positive definite matrix manifold, a one-parameter family of self-assignment flows is defined. Accordingly, the proposed approach can be characterized from different viewpoints, such as discrete optimal transport, normalized spectral cuts, and combinatorial optimization by completely positive factorizations, each with additional built-in spatial regularization.
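    For orientation, the assignment manifold referred to above is a product of relative interiors of probability simplices, each carrying the Fisher-Rao metric (a standard definition, not quoted from the thesis):

        \[
          g_p(u, v) = \sum_{i=1}^{n} \frac{u_i v_i}{p_i},
          \qquad p \in \mathring{\Delta}_n, \;\; u, v \in T_p \mathring{\Delta}_n,
        \]

    under which Riemannian gradient flows take the replicator form \( \dot p = R_p\, \partial f(p) \) with \( R_p = \operatorname{Diag}(p) - p p^{\top} \); the assignment and self-assignment flows above evolve by geometric integration of dynamics of this type.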