193 research outputs found
An Extragradient-Based Alternating Direction Method for Convex Minimization
In this paper, we consider the problem of minimizing the sum of two convex
functions subject to linear linking constraints. The classical alternating
direction type methods usually assume that the two convex functions have
relatively easy proximal mappings. However, many problems arising from
statistics, image processing and other fields have the structure that while one
of the two functions has easy proximal mapping, the other function is smoothly
convex but does not have an easy proximal mapping. Therefore, the classical
alternating direction methods cannot be applied. To deal with the difficulty,
we propose in this paper an alternating direction method based on
extragradients. Under the assumption that the smooth function has a Lipschitz
continuous gradient, we prove that the proposed method returns an
ε-optimal solution within O(1/ε) iterations. We apply the
proposed method to solve a new statistical model called fused logistic
regression. Our numerical experiments show that the proposed method performs
very well when solving the test problems. We also test the performance of the
proposed method by solving the lasso problem arising from statistics and
compare the results with several existing efficient solvers for this problem;
the results are very encouraging.
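The setting described above can be sketched concretely. Below is a minimal, illustrative implementation of the idea on a lasso-type instance: the ℓ1 term has an easy proximal mapping (soft-thresholding), while the smooth quadratic term is handled with an extragradient (prediction-correction) step instead of an exact subproblem solve. The problem instance, step sizes, and function names are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t*||.||_1 (the "easy prox" term)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def extragradient_adm(A, b, lam, rho, tau, iters=2000):
    # Sketch: min_x,y lam*||x||_1 + 0.5*||A y - b||^2  s.t.  x - y = 0
    # x-step uses the exact prox; y-step replaces the exact augmented-
    # Lagrangian minimization with an extragradient predict/correct pair.
    n = A.shape[1]
    x, y, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u: scaled dual variable
    for _ in range(iters):
        x = soft_threshold(y - u, lam / rho)
        # Gradient of the smooth y-subproblem of the augmented Lagrangian
        grad = lambda z: A.T @ (A @ z - b) - rho * (x - z + u)
        y_half = y - tau * grad(y)       # prediction step
        y = y - tau * grad(y_half)       # correction step at the predicted point
        u = u + x - y                    # dual ascent
    return x

# Tiny instance with A = I, where the lasso solution is soft_threshold(b, lam)
A = np.eye(5)
b = np.array([3.0, -2.0, 0.5, 0.0, 1.0])
x = extragradient_adm(A, b, lam=1.0, rho=1.0, tau=0.4)
```

The step size tau is kept below 1/(L + rho), where L bounds the Lipschitz constant of the smooth gradient, matching the assumption under which the abstract's rate is stated.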
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration
We show that standard extragradient methods (i.e. mirror prox [Arkadi Nemirovski, 2004] and dual extrapolation [Yurii Nesterov, 2007]) recover optimal accelerated rates for first-order minimization of smooth convex functions. To obtain this result we provide a fine-grained characterization of the convergence rates of extragradient methods for solving monotone variational inequalities in terms of a natural condition we call relative Lipschitzness. We further generalize this framework to handle local and randomized notions of relative Lipschitzness and thereby recover rates for box-constrained ℓ∞ regression based on area convexity [Jonah Sherman, 2017] and complexity bounds achieved by accelerated (randomized) coordinate descent [Zeyuan Allen-Zhu et al., 2016; Yurii Nesterov and Sebastian U. Stich, 2017] for smooth convex function minimization.
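In the Euclidean setup, mirror prox reduces to the classical extragradient step for a monotone operator F: predict with F at the current point, then correct with F at the predicted point. A minimal sketch on a bilinear saddle point follows; the operator, matrix, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def extragradient(F, z0, step, iters):
    # Extragradient / mirror prox with Euclidean distances:
    #   z_half = z - step * F(z)       (prediction)
    #   z_new  = z - step * F(z_half)  (correction)
    # Convergence requires step <= 1/L for L-Lipschitz F; the theoretical
    # O(1/k) guarantee is usually stated for the ergodic average of z_half.
    z = z0.copy()
    for _ in range(iters):
        z_half = z - step * F(z)
        z = z - step * F(z_half)
    return z

# Bilinear saddle point min_x max_y x^T M y; its VI operator is
# F(x, y) = (M y, -M^T x), which is monotone with unique zero (0, 0).
M = np.array([[0.0, 1.0], [-1.0, 2.0]])

def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([M @ y, -M.T @ x])

# ||M|| ~ 2.41, so step = 0.3 satisfies step < 1/L
z = extragradient(F, np.array([1.0, -1.0, 0.5, 2.0]), step=0.3, iters=2000)
```

Plain gradient descent-ascent diverges on this bilinear example; the correction step evaluated at the predicted point is exactly what restores convergence, which is the behavior the relative-Lipschitzness analysis quantifies.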
On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
The proximal point algorithm (PPA) has been well studied in the literature.
In particular, its linear convergence rate has been studied by Rockafellar in
1976 under a certain condition. We consider a generalized PPA in the generic
setting of finding a zero point of a maximal monotone operator, and show that
the condition proposed by Rockafellar can also sufficiently ensure the linear
convergence rate for this generalized PPA. Indeed we show that these linear
convergence rates are optimal. Both the exact and inexact versions of this
generalized PPA are discussed. The motivation to consider this generalized PPA
is that it includes as special cases the relaxed versions of some splitting
methods that originate from PPA. Thus, linear convergence results of this
generalized PPA can be used to better understand the convergence of some widely
used algorithms in the literature. We focus on the particular convex
minimization context and specify Rockafellar's condition to see how to ensure
the linear convergence rate for some efficient numerical schemes, including the
classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and
its relaxed version, the original alternating direction method of multipliers
(ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the
generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions
weaker than existing ones are proposed in these particular contexts.
Comment: 22 pages, 1 figure
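The generalized (relaxed) PPA iteration discussed in this abstract can be sketched in a few lines: apply the resolvent of the maximal monotone operator, then take a relaxed combination of the result with the current point. The quadratic example, relaxation factor, and parameter values below are illustrative assumptions used only to show the scheme converging linearly to a zero of the operator.

```python
import numpy as np

def relaxed_ppa(resolvent, z0, gamma=1.5, iters=200):
    # Generalized (relaxed) proximal point iteration:
    #   z_new = (1 - gamma) * z + gamma * J(z),
    # where J = (I + lam*T)^{-1} is the resolvent of a maximal monotone
    # operator T and gamma in (0, 2) is the relaxation factor
    # (gamma = 1 recovers the classical PPA).
    z = z0.copy()
    for _ in range(iters):
        z = (1 - gamma) * z + gamma * resolvent(z)
    return z

# Illustrative operator: T = grad of f(z) = 0.5 z^T Q z - q^T z, with Q
# positive definite, so T is strongly monotone and PPA converges linearly.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([1.0, -1.0])
lam = 1.0

# Resolvent of T with parameter lam: solve (I + lam*Q) w = z + lam*q
J = lambda z: np.linalg.solve(np.eye(2) + lam * Q, z + lam * q)

z = relaxed_ppa(J, np.zeros(2), gamma=1.5)
# The zero of T satisfies Q z* = q
```

Since the resolvent of a maximal monotone operator is firmly nonexpansive, any relaxation factor in (0, 2) preserves convergence, and strong monotonicity (Rockafellar-type conditions) is what upgrades this to a linear rate.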