Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes
Optimal transport is an important tool in machine learning, allowing one to
capture geometric properties of the data through a linear program on transport
polytopes. We present a single-loop optimization algorithm for minimizing
general convex objectives on these domains, utilizing the principles of
Sinkhorn matrix scaling and mirror descent. The proposed algorithm is robust to
noise and can be used in an online setting. We provide theoretical guarantees
for convex objectives and experimental results showcasing its effectiveness on
both synthetic and real-world data.
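A minimal NumPy sketch of the alternating structure described above: an
entropic (exponentiated) mirror descent step followed by a single Sinkhorn
row or column normalization. The function name, step size, and the
linear-objective example are illustrative assumptions, not the authors'
implementation.

```python
import numpy as np

def mirror_sinkhorn(grad, r, c, eta=0.1, n_steps=500):
    """Sketch: minimize a convex objective over the transport polytope U(r, c).

    grad: callable returning the gradient of the objective at X.
    r, c: target row and column marginals (positive, summing to 1).
    Alternates a multiplicative (entropic mirror descent) step with one
    Sinkhorn row or column rescaling per iteration.
    """
    X = np.outer(r, c)  # feasible starting point
    for t in range(n_steps):
        X = X * np.exp(-eta * grad(X))               # mirror (exponentiated) step
        if t % 2 == 0:
            X = X * (r / X.sum(axis=1))[:, None]     # match row marginals
        else:
            X = X * (c / X.sum(axis=0))[None, :]     # match column marginals
    return X

# Illustrative use: a linear objective <C, X> (classical optimal transport)
rng = np.random.default_rng(0)
C = rng.random((5, 5))
r = np.full(5, 1 / 5)
c = np.full(5, 1 / 5)
X = mirror_sinkhorn(lambda X: C, r, c)
print(X.sum(axis=1), X.sum(axis=0))  # approximately r and c
```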
Universal Algorithms: Beyond the Simplex
The bulk of universal algorithms in the online convex optimisation literature
are variants of the Hedge (exponential weights) algorithm on the simplex. While
these algorithms extend to polytope domains by assigning weights to the
vertices, this process is computationally infeasible for many important classes
of polytopes whose number of vertices grows exponentially with the dimension.
In this paper we show the Subgradient algorithm is universal, meaning it has
$O(\sqrt{N})$ regret in the antagonistic setting and $O(1)$ pseudo-regret in
the i.i.d. setting, with two main advantages over Hedge: (1) the update step is
more efficient, as the action vectors have length equal to the dimension rather
than the number of vertices; and (2) Subgradient gives better performance if
the cost vectors satisfy Euclidean rather than sup-norm bounds. This paper
extends the authors' recent results for Subgradient on the simplex. We also
prove the same $O(\sqrt{N})$ and $O(1)$ bounds when the domain is the unit
ball. To the authors' knowledge this is the first instance of these bounds on a
domain other than a polytope.
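As an illustration of the efficiency claim, here is a hedged sketch of online
projected subgradient descent on the Euclidean unit ball; the 1/sqrt(t) step
size and all names are assumptions for the example, not the paper's exact
tuning or analysis.

```python
import numpy as np

def online_subgradient_ball(loss_grads, d, scale=1.0):
    """Projected online subgradient descent on the unit ball {x : ||x|| <= 1}.

    loss_grads: sequence of subgradient vectors g_1, ..., g_N revealed online.
    Uses step size eta_t = scale / sqrt(t); the projection is a rescaling,
    so each update costs O(d) -- no vertex enumeration is needed.
    """
    x = np.zeros(d)
    actions = []
    for t, g in enumerate(loss_grads, start=1):
        actions.append(x.copy())
        x = x - (scale / np.sqrt(t)) * g
        norm = np.linalg.norm(x)
        if norm > 1.0:                 # Euclidean projection onto the ball
            x = x / norm
    return actions

# Illustrative use with random cost subgradients
rng = np.random.default_rng(1)
grads = [rng.standard_normal(10) for _ in range(100)]
played = online_subgradient_ball(grads, d=10)
```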
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming
Powerful interior-point methods (IPM) based commercial solvers, such as
Gurobi and Mosek, have been hugely successful in solving large-scale linear
programming (LP) problems. The high efficiency of these solvers depends
critically on the sparsity of the problem data and advanced matrix
factorization techniques. For a large scale LP problem with data matrix
that is dense (possibly structured) or whose corresponding normal matrix
has a dense Cholesky factor (even with re-ordering), these solvers may require
excessive computational cost and/or extremely heavy memory usage in each
interior-point iteration. Unfortunately, the natural remedy, i.e., the use of
iterative methods based IPM solvers, although can avoid the explicit
computation of the coefficient matrix and its factorization, is not practically
viable due to the inherent extreme ill-conditioning of the large scale normal
equation arising in each interior-point iteration. To provide a better
alternative choice for solving large scale LPs with dense data or requiring
expensive factorization of its normal equation, we propose a semismooth Newton
based inexact proximal augmented Lagrangian ({\sc Snipal}) method. Different
from classical IPMs, in each iteration of {\sc Snipal}, iterative methods can
efficiently be used to solve simpler yet better conditioned semismooth Newton
linear systems. Moreover, {\sc Snipal} not only enjoys a fast asymptotic
superlinear convergence but is also proven to enjoy a finite termination
property. Numerical comparisons with Gurobi have demonstrated encouraging
potential of {\sc Snipal} for handling large-scale LP problems where the
constraint matrix has a dense representation or has a dense
factorization even with an appropriate re-ordering.Comment: Due to the limitation "The abstract field cannot be longer than 1,920
characters", the abstract appearing here is slightly shorter than that in the
PDF fil
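To make the outer structure concrete, below is a toy classical augmented
Lagrangian loop for a standard-form LP, with a projected-gradient inner solve
standing in for Snipal's inexact semismooth Newton solve; the parameter
choices and names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def alm_lp(c, A, b, sigma=10.0, outer=50, inner=200, lr=None):
    """Toy augmented Lagrangian method for min c'x s.t. Ax = b, x >= 0.

    Inner subproblems are solved by projected gradient descent here; Snipal
    instead solves them inexactly via a semismooth Newton method, which is
    what makes large, dense instances tractable.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    if lr is None:
        lr = 1.0 / (sigma * np.linalg.norm(A, 2) ** 2 + 1.0)
    for _ in range(outer):
        for _ in range(inner):                    # inexact inner solve
            g = c - A.T @ y + sigma * A.T @ (A @ x - b)
            x = np.maximum(x - lr * g, 0.0)       # project onto x >= 0
        y = y - sigma * (A @ x - b)               # multiplier update
    return x, y

# Tiny example: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0 (optimum x = (1, 0))
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = alm_lp(c, A, b)
print(x)  # approximately [1, 0]
```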
A primal-dual flow for affine constrained convex optimization
We introduce a novel primal-dual flow for affine constrained convex
optimization problem. As a modification of the standard saddle-point system,
our primal-dual flow is proved to possesses the exponential decay property, in
terms of a tailored Lyapunov function. Then a class of primal-dual methods for
the original optimization problem are obtained from numerical discretizations
of the continuous flow, and with a unified discrete Lyapunov function,
nonergodic convergence rates are established. Among those algorithms, we can
recover the (linearized) augmented Lagrangian method and the quadratic penalty
method with continuation technique. Also, new methods with a special inner
problem, that is a linear symmetric positive definite system or a nonlinear
equation which may be solved efficiently via the semi-smooth Newton method,
have been proposed as well. Especially, numerical tests on the linearly
constrained - minimization show that our method outperforms the
accelerated linearized Bregman method
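A hedged sketch of the discretization idea: an explicit Euler scheme for a
generic augmented saddle-point flow for min f(x) subject to Ax = b. This
generic system and its step size are assumptions for illustration, not the
paper's tailored flow or its Lyapunov analysis.

```python
import numpy as np

def primal_dual_flow(grad_f, A, b, x0, lam0, dt=0.01, n_steps=5000):
    """Explicit Euler discretization of a primal-dual flow for
    min f(x) s.t. Ax = b.

    Dynamics (a generic augmented saddle-point system):
        x'   = -(grad_f(x) + A.T @ lam + A.T @ (A @ x - b))
        lam' = A @ x - b
    """
    x, lam = x0.copy(), lam0.copy()
    for _ in range(n_steps):
        r = A @ x - b                              # constraint residual
        dx = -(grad_f(x) + A.T @ lam + A.T @ r)    # primal descent direction
        dlam = r                                   # dual ascent direction
        x, lam = x + dt * dx, lam + dt * dlam
    return x, lam

# Example: min 0.5*||x||^2 s.t. sum(x) = 1 (optimum: x = ones/n)
n = 4
A = np.ones((1, n))
b = np.array([1.0])
x, lam = primal_dual_flow(lambda x: x, A, b, np.zeros(n), np.zeros(1))
print(x)  # approximately 0.25 in each coordinate
```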