
    On the Efficiency of the Sinkhorn and Greenkhorn Algorithms and Their Acceleration for Optimal Transport

    We present new complexity results for several algorithms that approximately solve the regularized optimal transport (OT) problem between two discrete probability measures with at most $n$ atoms. First, we show that a greedy variant of the classical Sinkhorn algorithm, known as the \textit{Greenkhorn} algorithm, achieves the complexity bound of $\widetilde{\mathcal{O}}(n^2\varepsilon^{-2})$, which improves the best known bound of $\widetilde{\mathcal{O}}(n^2\varepsilon^{-3})$. Notably, this matches the best known complexity bound for the Sinkhorn algorithm and explains the superior performance of the Greenkhorn algorithm in practice. Furthermore, we generalize an adaptive primal-dual accelerated gradient descent (APDAGD) algorithm with mirror mapping $\phi$ and show that the resulting \textit{adaptive primal-dual accelerated mirror descent} (APDAMD) algorithm achieves the complexity bound of $\widetilde{\mathcal{O}}(n^2\sqrt{\delta}\varepsilon^{-1})$, where $\delta > 0$ depends on $\phi$. Using a simple counterexample, we point out that an existing complexity bound for the APDAGD algorithm is not valid in general, and we then establish the complexity bound of $\widetilde{\mathcal{O}}(n^{5/2}\varepsilon^{-1})$ by exploiting the connection between the APDAMD and APDAGD algorithms. Moreover, we introduce accelerated Sinkhorn and Greenkhorn algorithms that achieve the complexity bound of $\widetilde{\mathcal{O}}(n^{7/3}\varepsilon^{-1})$, which improves on the $\widetilde{\mathcal{O}}(n^2\varepsilon^{-2})$ bounds of the Sinkhorn and Greenkhorn algorithms in terms of $\varepsilon$. Experimental results on synthetic and real datasets demonstrate the favorable performance of the new algorithms in practice.
    Comment: A preliminary version [arXiv:1901.06482] of this paper, with a subset of the results presented here, was presented at ICML 2019.
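
    To make the contrast concrete, here is a minimal NumPy sketch (our own, not the paper's implementation) of the two baseline methods the abstract compares: Sinkhorn rescales all rows and all columns at every iteration, while Greenkhorn greedily rescales only the single row or column whose marginal is furthest from its target. The greedy rule below uses the absolute marginal gap for simplicity; the analyzed algorithm measures violations with a Bregman-divergence-based gap.

```python
import numpy as np

def sinkhorn(C, r, c, eta=0.1, n_iter=500):
    """Sinkhorn: alternately rescale every row and every column of K = exp(-C/eta).
    C: (n, n) cost matrix; r, c: positive marginals as float arrays."""
    K = np.exp(-C / eta)
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(n_iter):
        u = r / (K @ v)        # match all row marginals at once
        v = c / (K.T @ u)      # match all column marginals at once
    return u[:, None] * K * v[None, :]

def greenkhorn(C, r, c, eta=0.1, n_iter=5000):
    """Greenkhorn: per iteration, fix only the worst-violating row or column."""
    K = np.exp(-C / eta)
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(n_iter):
        P = u[:, None] * K * v[None, :]
        row_gap = np.abs(P.sum(axis=1) - r)   # row-marginal violations
        col_gap = np.abs(P.sum(axis=0) - c)   # column-marginal violations
        i, j = row_gap.argmax(), col_gap.argmax()
        if row_gap[i] >= col_gap[j]:
            u[i] = r[i] / (K[i] @ v)          # rescale the single worst row
        else:
            v[j] = c[j] / (K[:, j] @ u)       # rescale the single worst column
    return u[:, None] * K * v[None, :]
```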

    Batch Greenkhorn Algorithm for Entropic-Regularized Multimarginal Optimal Transport: Linear Rate of Convergence and Iteration Complexity

    In this work we propose a batch multimarginal version of the Greenkhorn algorithm for the entropic-regularized optimal transport problem. This framework is general enough to cover, as particular cases, the existing Sinkhorn and Greenkhorn algorithms for the bi-marginal setting and the greedy MultiSinkhorn algorithm for the general multimarginal case. We provide a comprehensive convergence analysis based on the properties of the iterative Bregman projections method with greedy control. A linear rate of convergence as well as explicit bounds on the iteration complexity are obtained. When specialized to the above-mentioned algorithms, our results give new convergence rates or provide key improvements over the state-of-the-art rates. We present numerical experiments showing that the flexibility of the batch size can be exploited to improve the performance of the Sinkhorn algorithm in both bi-marginal and multimarginal settings.
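
    A hypothetical sketch of the batch idea in the bi-marginal special case (the batch-size parameter `tau` and all names are ours, not the authors' code): instead of updating the single worst row or column per iteration as Greenkhorn does, rescale the `tau` rows or columns with the largest marginal violations. Setting `tau = 1` recovers Greenkhorn, while `tau = n` recovers a full Sinkhorn half-step.

```python
import numpy as np

def batch_greenkhorn(C, r, c, eta=0.1, tau=8, n_iter=1000):
    """Bi-marginal sketch: greedily rescale a batch of tau rows or columns."""
    K = np.exp(-C / eta)
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(n_iter):
        P = u[:, None] * K * v[None, :]
        row_gap = np.abs(P.sum(axis=1) - r)
        col_gap = np.abs(P.sum(axis=0) - c)
        if row_gap.max() >= col_gap.max():
            idx = np.argsort(row_gap)[-tau:]        # tau worst-violating rows
            u[idx] = r[idx] / (K[idx] @ v)
        else:
            idx = np.argsort(col_gap)[-tau:]        # tau worst-violating columns
            v[idx] = c[idx] / (K[:, idx].T @ u)
    return u[:, None] * K * v[None, :]
```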

    Fixed-Support Wasserstein Barycenters: Computational Hardness and Fast Algorithm

    We study the fixed-support Wasserstein barycenter problem (FS-WBP), which consists in computing the Wasserstein barycenter of $m$ discrete probability measures supported on a finite metric space of size $n$. We first show that the constraint matrix arising from the standard linear programming (LP) representation of the FS-WBP is \textit{not totally unimodular} when $m \geq 3$ and $n \geq 3$. This result resolves an open question pertaining to the relationship between the FS-WBP and the minimum-cost flow (MCF) problem, since it proves that the FS-WBP in the standard LP form is not an MCF problem when $m \geq 3$ and $n \geq 3$. We also develop a provably fast \textit{deterministic} variant of the celebrated iterative Bregman projection (IBP) algorithm, named \textsc{FastIBP}, with a complexity bound of $\tilde{O}(mn^{7/3}\varepsilon^{-4/3})$, where $\varepsilon \in (0, 1)$ is the desired tolerance. This complexity bound is better than the best known bound of $\tilde{O}(mn^2\varepsilon^{-2})$ for the IBP algorithm in terms of $\varepsilon$, and better than the bound of $\tilde{O}(mn^{5/2}\varepsilon^{-1})$ from the accelerated alternating minimization algorithm or the accelerated primal-dual adaptive gradient algorithm in terms of $n$. Finally, we conduct extensive experiments with both synthetic data and real images and demonstrate the favorable performance of the \textsc{FastIBP} algorithm in practice.
    Comment: Accepted by NeurIPS 2020; fix some confusing parts in the proof and improve the empirical evaluation.
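
    For reference, here is a minimal sketch of the classical IBP iteration that \textsc{FastIBP} accelerates (our own simplification in scaling form, with variable names ours): each input measure gets its own pair of scalings of the Gibbs kernel, and the barycenter is the weighted geometric mean of the current column marginals.

```python
import numpy as np

def ibp_barycenter(C, P, w, eta=0.1, n_iter=500):
    """Classical IBP sketch. C: (n, n) cost on the shared support;
    P: (m, n) stacked input measures; w: (m,) weights summing to one."""
    K = np.exp(-C / eta)                     # Gibbs kernel
    m, n = P.shape
    U, V = np.ones((m, n)), np.ones((m, n))
    for _ in range(n_iter):
        U = P / (V @ K.T)                    # project onto the row constraints P_k 1 = p_k
        KtU = U @ K                          # row k holds K^T u_k
        logq = w @ np.log(V * KtU)           # barycenter: weighted geometric mean of column marginals
        V = np.exp(logq)[None, :] / KtU      # project onto the common-column-marginal constraint
    return np.exp(logq)
```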

    On accelerated alternating minimization

    Alternating minimization (AM) optimization algorithms have been known for a long time and are of importance in machine learning problems, among which we are mostly motivated by approximating optimal transport distances. AM algorithms assume that the decision variable is divided into several blocks and that minimization over each block can be done explicitly or cheaply with high accuracy. The ubiquitous Sinkhorn's algorithm can be seen as an alternating minimization algorithm for the dual of the entropy-regularized optimal transport problem. We introduce an accelerated alternating minimization method with a $1/k^2$ convergence rate, where $k$ is the iteration counter. This improves over the known $1/k$ bound for general AM methods and for Sinkhorn's algorithm. Moreover, our algorithm converges faster than gradient-type methods in practice, as it is free of any step-size choice and is adaptive to the local smoothness of the problem. We show that the proposed method is primal-dual, meaning that if we apply it to a dual problem, we can reconstruct the solution of the primal problem with the same convergence rate. We apply our method to the entropy-regularized optimal transport problem and show experimentally that it outperforms Sinkhorn's algorithm.
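
    The AM view mentioned here is easy to see in the dual: with the dual potentials split into two blocks $(f, g)$, minimizing over either block with the other fixed has a closed-form softmin solution, and each such exact block minimization is precisely a log-domain Sinkhorn half-iteration. Below is a minimal sketch of that non-accelerated baseline (the paper's method adds Nesterov-type extrapolation and a greedy block choice on top, which this sketch omits).

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_as_dual_am(C, r, c, eta=0.1, n_iter=500):
    """Log-domain Sinkhorn = exact alternating minimization over the dual blocks."""
    f, g = np.zeros(len(r)), np.zeros(len(c))
    for _ in range(n_iter):
        # exact minimization over the f-block (g fixed): a softmin per row
        f = eta * (np.log(r) - logsumexp((g[None, :] - C) / eta, axis=1))
        # exact minimization over the g-block (f fixed): a softmin per column
        g = eta * (np.log(c) - logsumexp((f[:, None] - C) / eta, axis=0))
    return np.exp((f[:, None] + g[None, :] - C) / eta)   # primal transport plan
```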

    Inexact Model: A Framework for Optimization and Variational Inequalities

    In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems, and variational inequalities. This framework allows one to obtain many known methods as special cases, including the accelerated gradient method, composite optimization methods, level-set methods, and proximal methods. The idea of the framework is to construct an inexact model of the main problem component, i.e., the objective function in optimization or the operator in variational inequalities. Besides reproducing known results, our framework allows us to construct new methods, which we illustrate by constructing a universal method for variational inequalities with composite structure. This method works for smooth and non-smooth problems with optimal complexity without a priori knowledge of the problem's smoothness. We also generalize our framework to strongly convex objectives and strongly monotone variational inequalities.
    Comment: 41 pages.
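
    A toy sketch of the framework's core loop (our own illustration; the function names and test objective are ours, not the paper's): the method touches the objective only through a model $\psi(z, y)$ of its change around the current point, and each step approximately minimizes that model. With the linearization-plus-quadratic model below, the loop reduces to gradient descent, one of the known methods the framework reproduces as a special case.

```python
import numpy as np
from scipy.optimize import minimize

def model_method(psi, x0, n_iter=50):
    """Repeatedly (and possibly inexactly) minimize the model around the iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        # inexact model minimization: any inner solver with enough accuracy will do
        x = minimize(lambda z: psi(z, x), x, method="L-BFGS-B").x
    return x

# Model for f(x) = ||x||^2 + sum(sin(x)): its gradient is 2x + cos(x) and
# L = 4 upper-bounds the smoothness constant, so each model step is
# (up to inner-solver accuracy) the gradient step x - grad(x)/L.
grad = lambda x: 2 * x + np.cos(x)
psi = lambda z, y: grad(y) @ (z - y) + 0.5 * 4.0 * np.sum((z - y) ** 2)
x_min = model_method(psi, np.ones(3))
```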