22 research outputs found

    Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes

    Optimal transport is an important tool in machine learning, allowing one to capture geometric properties of the data through a linear program on transport polytopes. We present a single-loop optimization algorithm for minimizing general convex objectives on these domains, utilizing the principles of Sinkhorn matrix scaling and mirror descent. The proposed algorithm is robust to noise and can be used in an online setting. We provide theoretical guarantees for convex objectives and experimental results showcasing its effectiveness on both synthetic and real-world data. Comment: ICML 202
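
    As a rough illustration of the single-loop idea sketched in the abstract, the snippet below combines an entropic mirror-descent step (a multiplicative update with the gradient) with one Sinkhorn-style row/column rescaling pass toward the target marginals. This is a hedged sketch rather than the paper's exact algorithm: the function name, the fixed step size eta, and the toy linear objective are assumptions made only for illustration.

        import numpy as np

        def mirror_sinkhorn_step(X, grad, r, c, eta=0.5):
            """One illustrative mirror-descent step on the transport polytope.

            X    : current transport plan (n x m), strictly positive
            grad : gradient of the convex objective at X (n x m)
            r, c : target row / column marginals (same total mass)
            eta  : step size (an assumed constant, not the paper's schedule)
            """
            # Entropic mirror step: multiplicative update with the gradient.
            Y = X * np.exp(-eta * grad)
            # One Sinkhorn-style pass: rescale rows, then columns, toward the marginals.
            Y *= (r / Y.sum(axis=1))[:, None]
            Y *= (c / Y.sum(axis=0))[None, :]
            return Y

        # Toy usage: minimize <C, X> (plain optimal transport) with uniform marginals.
        rng = np.random.default_rng(0)
        C = rng.random((3, 3))
        r = c = np.full(3, 1 / 3)
        X = np.outer(r, c)                      # feasible, strictly positive start
        for _ in range(200):
            X = mirror_sinkhorn_step(X, C, r, c)
        print((C * X).sum())                    # transport cost of the final plan

    In an online or noisy setting the gradient passed to each step would change from iteration to iteration, which is the regime the abstract targets.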

    Universal Algorithms: Beyond the Simplex

    The bulk of universal algorithms in the online convex optimisation literature are variants of the Hedge (exponential weights) algorithm on the simplex. While these algorithms extend to polytope domains by assigning weights to the vertices, this process is computationally infeasible for many important classes of polytopes where the number $V$ of vertices depends exponentially on the dimension $d$. In this paper we show the Subgradient algorithm is universal, meaning it has $O(\sqrt{N})$ regret in the antagonistic setting and $O(1)$ pseudo-regret in the i.i.d. setting, with two main advantages over Hedge: (1) the update step is more efficient, as the action vectors have length only $d$ rather than $V$; and (2) Subgradient gives better performance if the cost vectors satisfy Euclidean rather than sup-norm bounds. This paper extends the authors' recent results for Subgradient on the simplex. We also prove the same $O(\sqrt{N})$ and $O(1)$ bounds when the domain is the unit ball. To the authors' knowledge this is the first instance of these bounds on a domain other than a polytope. Comment: 1 figure, 40 pages
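
    To make the comparison with Hedge concrete, here is a minimal sketch of the online projected Subgradient method on the probability simplex (the domain covered by the authors' earlier results), using the standard Euclidean projection and a 1/sqrt(t) step size. The constant eta0 and the toy i.i.d. cost stream are assumptions; a general polytope domain would need its own projection oracle.

        import numpy as np

        def project_simplex(v):
            """Euclidean projection of v onto the probability simplex (standard routine)."""
            u = np.sort(v)[::-1]
            css = np.cumsum(u) - 1.0
            rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
            return np.maximum(v - css[rho] / (rho + 1), 0.0)

        def online_subgradient(costs, eta0=1.0):
            """Play x_t on the simplex against a stream of linear cost vectors g_t."""
            d = costs.shape[1]
            x = np.full(d, 1.0 / d)
            total = 0.0
            for t, g in enumerate(costs, start=1):
                total += g @ x                                    # incur cost <g_t, x_t>
                x = project_simplex(x - eta0 / np.sqrt(t) * g)    # subgradient step + projection
            return total

        # Toy usage: i.i.d. costs; compare against the best fixed vertex in hindsight.
        rng = np.random.default_rng(1)
        costs = rng.random((500, 5))
        regret = online_subgradient(costs) - costs.sum(axis=0).min()
        print(regret)

    Note that the state kept per round is a length-$d$ vector, which is the efficiency advantage over vertex-weighting schemes highlighted in the abstract.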

    An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming

    Powerful commercial solvers based on interior-point methods (IPM), such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and advanced matrix factorization techniques. For a large-scale LP problem with a data matrix $A$ that is dense (possibly structured), or whose corresponding normal matrix $AA^T$ has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., the use of IPM solvers based on iterative methods, although it can avoid the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data or requiring expensive factorization of the normal equation, we propose a semismooth Newton based inexact proximal augmented Lagrangian ({\sc Snipal}) method. Different from classical IPMs, in each iteration of {\sc Snipal}, iterative methods can be used efficiently to solve simpler yet better conditioned semismooth Newton linear systems. Moreover, {\sc Snipal} not only enjoys fast asymptotic superlinear convergence but is also proven to enjoy a finite termination property. Numerical comparisons with Gurobi have demonstrated the encouraging potential of {\sc Snipal} for handling large-scale LP problems where the constraint matrix $A$ has a dense representation or $AA^T$ has a dense factorization even with an appropriate re-ordering. Comment: Due to the limitation "The abstract field cannot be longer than 1,920 characters", the abstract appearing here is slightly shorter than that in the PDF file.
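
    One way to see why such Newton systems can be simpler than the IPM normal equation: in the hedged sketch below, the generalized Hessian of each augmented Lagrangian subproblem has the form $\sigma A D A^T + \tau^{-1} I$, where $D$ is a 0/1 diagonal matrix selecting only the currently active columns of $A$. This is not the authors' {\sc Snipal} implementation; it is a dense toy sketch in which the penalty sigma, proximal parameter tau, iteration caps, and direct solve of the Newton system are all placeholder assumptions.

        import numpy as np

        def proximal_alm_lp(A, b, c, sigma=10.0, tau=1.0, outer=50, inner=20, tol=1e-8):
            """Hedged sketch of a proximal ALM for  min c^T x  s.t.  Ax = b, x >= 0,
            applied to the dual  max b^T y  s.t.  A^T y <= c, with x as the multiplier.
            Each subproblem in y is solved by a semismooth Newton method."""
            m, n = A.shape
            x, y = np.zeros(n), np.zeros(m)
            for _ in range(outer):
                y_prev = y.copy()
                for _ in range(inner):                        # semismooth Newton on the subproblem
                    z = x + sigma * (A.T @ y - c)
                    grad = A @ np.maximum(z, 0.0) - b + (y - y_prev) / tau
                    if np.linalg.norm(grad) < tol:
                        break
                    active = (z > 0).astype(float)            # generalized Jacobian of max(., 0)
                    H = sigma * (A * active) @ A.T + np.eye(m) / tau
                    y -= np.linalg.solve(H, grad)
                x = np.maximum(x + sigma * (A.T @ y - c), 0.0)    # multiplier (= primal) update
                if np.linalg.norm(A @ x - b) < tol:
                    break
            return x, y

        # Toy usage on a small feasible LP (illustrative only).
        rng = np.random.default_rng(2)
        A = rng.random((3, 6))
        b = A @ rng.random(6)                 # ensures primal feasibility
        c = rng.random(6)
        x, y = proximal_alm_lp(A, b, c)
        print(c @ x, np.abs(A @ x - b).max())

    In the large sparse setting the abstract targets, the Newton system would of course be solved by an iterative method rather than a dense factorization.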

    A primal-dual flow for affine constrained convex optimization

    We introduce a novel primal-dual flow for affine constrained convex optimization problems. As a modification of the standard saddle-point system, our primal-dual flow is proved to possess an exponential decay property in terms of a tailored Lyapunov function. A class of primal-dual methods for the original optimization problem is then obtained from numerical discretizations of the continuous flow, and nonergodic convergence rates are established with a unified discrete Lyapunov function. Among those algorithms, we can recover the (linearized) augmented Lagrangian method and the quadratic penalty method with a continuation technique. New methods whose inner problem is either a symmetric positive definite linear system or a nonlinear equation that can be solved efficiently via the semismooth Newton method are proposed as well. In particular, numerical tests on linearly constrained $\ell_1$-$\ell_2$ minimization show that our method outperforms the accelerated linearized Bregman method.
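
    The abstract does not spell out the modified flow itself, so the sketch below only discretizes a standard saddle-point flow with an added augmented (penalty) term by explicit forward Euler, to show how such a continuous flow turns into a primal-dual iteration. The step size dt, the penalty sigma, and the toy projection problem are assumptions; the paper's tailored flow and its Lyapunov analysis are not reproduced here.

        import numpy as np

        def primal_dual_euler(grad_f, A, b, x0, lam0, dt=0.01, sigma=1.0, steps=20000):
            """Forward-Euler discretization of the saddle-point-type flow
                 x'   = -(grad f(x) + A^T (lam + sigma (A x - b)))
                 lam' =  A x - b
               for  min f(x)  s.t.  Ax = b  (a standard flow, not the paper's modification)."""
            x, lam = x0.copy(), lam0.copy()
            for _ in range(steps):
                r = A @ x - b                                        # primal residual
                x = x - dt * (grad_f(x) + A.T @ (lam + sigma * r))
                lam = lam + dt * r
            return x, lam

        # Toy usage: min 0.5 * ||x - z||^2  s.t.  Ax = b  (projection onto an affine set).
        rng = np.random.default_rng(3)
        A, b, z = rng.random((2, 5)), rng.random(2), rng.random(5)
        x, lam = primal_dual_euler(lambda v: v - z, A, b, np.zeros(5), np.zeros(2))
        print(np.abs(A @ x - b).max())                               # constraint residual

    Different time discretizations of such a flow, together with implicit or linearized treatment of individual terms, give rise to the family of primal-dual methods the abstract describes.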