On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
The proximal point algorithm (PPA) has been well studied in the literature.
In particular, its linear convergence rate was established by Rockafellar in
1976 under a certain condition. We consider a generalized PPA in the generic
setting of finding a zero point of a maximal monotone operator, and show that
the condition proposed by Rockafellar is also sufficient to ensure the linear
convergence rate for this generalized PPA. Indeed, we show that these linear
convergence rates are optimal. Both the exact and inexact versions of this
generalized PPA are discussed. The motivation to consider this generalized PPA
is that it includes as special cases the relaxed versions of some splitting
methods that originate from the PPA. Thus, the linear convergence results of this
generalized PPA can be used to better understand the convergence of some widely
used algorithms in the literature. We focus on the particular convex
minimization context and specialize Rockafellar's condition to see how to ensure
the linear convergence rate for some efficient numerical schemes, including the
classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and
its relaxed version, the original alternating direction method of multipliers
(ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the
generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions
weaker than existing ones are proposed in these particular contexts.

Comment: 22 pages, 1 figure
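The relaxed PPA iteration described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it takes the maximal monotone operator to be a symmetric positive definite linear map T(x) = Ax, so the resolvent (I + cT)^{-1} reduces to a linear solve; the matrix A, proximal parameter c, and relaxation factor rho are all illustrative choices.

```python
import numpy as np

# Illustrative relaxed (generalized) PPA for finding a zero of a maximal
# monotone operator. Here T(x) = A x with A symmetric positive definite,
# so the unique zero is x* = 0 and the resolvent is a linear solve.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # SPD, hence maximal monotone as a linear map
c, rho = 1.0, 1.5            # proximal parameter; relaxation factor in (0, 2)

x = np.array([1.0, -1.0])    # arbitrary starting point
I = np.eye(2)
for _ in range(100):
    resolvent = np.linalg.solve(I + c * A, x)  # proximal step: (I + cT)^{-1} x
    x = (1 - rho) * x + rho * resolvent        # relaxation step

print(np.linalg.norm(x))  # linear convergence drives this toward 0
```

With rho = 1 this reduces to the classical exact PPA; the over-relaxed choice rho in (1, 2) is the kind of relaxed variant the abstract refers to, and in this strongly monotone toy case the error contracts by a fixed factor per iteration.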
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming
Powerful commercial solvers based on interior-point methods (IPMs), such as
Gurobi and Mosek, have been hugely successful in solving large-scale linear
programming (LP) problems. The high efficiency of these solvers depends
critically on the sparsity of the problem data and advanced matrix
factorization techniques. For a large-scale LP problem whose data matrix
is dense (possibly structured) or whose corresponding normal matrix
has a dense Cholesky factor (even with re-ordering), these solvers may require
excessive computational cost and/or extremely heavy memory usage in each
interior-point iteration. Unfortunately, the natural remedy, i.e., IPM solvers
based on iterative methods, is not practically viable, even though it avoids
the explicit computation of the coefficient matrix and its factorization,
because the large-scale normal equation arising in each interior-point
iteration is inherently extremely ill-conditioned. To provide a better
alternative for solving large-scale LPs with dense data or normal equations
that are expensive to factorize, we propose a semismooth-Newton-based inexact
proximal augmented Lagrangian ({\sc Snipal}) method. Different
from classical IPMs, in each iteration of {\sc Snipal}, iterative methods can
be used efficiently to solve simpler yet better-conditioned semismooth Newton
linear systems. Moreover, {\sc Snipal} not only enjoys fast asymptotic
superlinear convergence but is also proven to possess a finite-termination
property. Numerical comparisons with Gurobi have demonstrated the encouraging
potential of {\sc Snipal} for handling large-scale LP problems where the
constraint matrix has a dense representation or has a dense
factorization even with an appropriate re-ordering.

Comment: Due to the limitation "The abstract field cannot be longer than 1,920
characters", the abstract appearing here is slightly shorter than that in the
PDF file.
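The semismooth Newton iteration at the heart of the method above can be illustrated on a toy problem. This is a hedged sketch, not the paper's algorithm: it applies a semismooth Newton step, using an element of the Clarke generalized Jacobian, to the piecewise-linear nonsmooth equation F(x) = x + max(x, 0) - b = 0, whereas {\sc Snipal} solves far richer semismooth systems inside a proximal augmented Lagrangian loop. The function and variable names are illustrative.

```python
import numpy as np

def semismooth_newton(b, x0, tol=1e-10, max_iter=50):
    """Semismooth Newton for F(x) = x + max(x, 0) - b = 0 (toy example).

    max(x, 0) is nonsmooth at 0; at each iterate we pick an element of
    the Clarke generalized Jacobian, here diag(1 + 1[x_i > 0]).
    """
    x = x0.copy()
    for _ in range(max_iter):
        F = x + np.maximum(x, 0.0) - b
        if np.linalg.norm(F) < tol:
            break
        J = np.diag(1.0 + (x > 0.0).astype(float))  # generalized Jacobian element
        x = x - np.linalg.solve(J, F)               # Newton step
    return x

b = np.array([3.0, -1.0])
x = semismooth_newton(b, np.zeros(2))
# Exact solution componentwise: b_i / 2 if b_i > 0, else b_i  ->  [1.5, -1.0]
print(x)
```

Because F here is piecewise linear, the iteration lands on the exact solution after finitely many steps, which gives a small-scale flavor of the finite-termination behavior the abstract mentions for the full method.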