A General Analysis of the Convergence of ADMM
We provide a new proof of the linear convergence of the alternating direction
method of multipliers (ADMM) when one of the objective terms is strongly
convex. Our proof is based on a framework for analyzing optimization algorithms
introduced in Lessard et al. (2014), reducing algorithm convergence to
verifying the stability of a dynamical system. This approach generalizes a
number of existing results and obviates any assumptions about specific choices
of algorithm parameters. On a numerical example, we demonstrate that minimizing
the derived bound on the convergence rate provides a practical approach to
selecting algorithm parameters for particular ADMM instances. We complement our
upper bound by constructing a nearly-matching lower bound on the worst-case
rate of convergence.
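As a concrete illustration of the setting, the following is a minimal sketch of scaled-form ADMM applied to the lasso, where the smooth term is strongly convex whenever A has full column rank; the problem instance, the fixed penalty parameter rho, and the function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Scaled-form ADMM sketch for min 0.5*||Ax - b||^2 + lam*||x||_1.

    The quadratic term is strongly convex when A has full column rank,
    matching the abstract's setting; rho is the tunable penalty parameter
    whose choice a convergence-rate bound can guide.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u: scaled dual variable
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)   # prox step on the l1 term
        u = u + x - z                          # dual update on x = z constraint
    return z
```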
Delay-agnostic Asynchronous Distributed Optimization
Existing asynchronous distributed optimization algorithms often use
diminishing step-sizes that cause slow practical convergence, or fixed
step-sizes that depend on an assumed upper bound on the delays. Not only is such a
delay bound hard to obtain in advance, but it also tends to be large and therefore
results in unnecessarily slow convergence. This paper develops asynchronous
versions of two distributed algorithms, DGD and DGD-ATC, for solving consensus
optimization problems over undirected networks. In contrast to alternatives,
our algorithms can converge to the fixed-point set of their synchronous
counterparts using step-sizes that are independent of the delays. We establish
convergence guarantees under both partial and total asynchrony. The practical
performance of our algorithms is demonstrated by numerical experiments.
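To make the setting concrete, below is a toy simulation, not the paper's algorithm statement, of DGD run with a fixed step-size while agents act on stale neighbor information; the mixing weights, activation probabilities, and message-delivery model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy consensus problem: agent i holds f_i(x) = 0.5 * (x - c[i])**2,
# so the network-wide optimum is the average of the c[i].
n_agents = 5
c = rng.normal(size=n_agents)

# Symmetric, doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3
    W[i, i] = 1 / 3

alpha = 0.3                       # fixed step-size chosen with no delay bound
x = np.zeros(n_agents)            # each agent's current scalar iterate
view = np.tile(x, (n_agents, 1))  # view[i]: agent i's stale copy of all iterates

for _ in range(2000):
    for i in np.where(rng.random(n_agents) < 0.5)[0]:   # random agents wake up
        grad_i = x[i] - c[i]
        x[i] = W[i] @ view[i] - alpha * grad_i  # DGD step on stale information
    for i in range(n_agents):
        if rng.random() < 0.7:    # delayed, unreliable message delivery
            view[i] = x.copy()

# With a fixed step-size, DGD converges to a fixed point near (not exactly at)
# the consensus optimum c.mean(); the asynchronous variant targets the same set.
print(x, c.mean())
```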
Semistochastic Quadratic Bound Methods
Partition functions arise in a variety of settings, including conditional
random fields, logistic regression, and latent Gaussian models. In this paper,
we consider semistochastic quadratic bound (SQB) methods for maximum likelihood
inference based on partition function optimization. Batch methods based on the
quadratic bound were recently proposed for this class of problems, and
performed favorably in comparison to state-of-the-art techniques.
Semistochastic methods fall in between batch algorithms, which use all the
data, and stochastic gradient-type methods, which use a small random subset of the data
at each iteration. We build semistochastic quadratic bound-based methods, and
prove both global convergence (to a stationary point) under very weak
assumptions, and a linear convergence rate under stronger assumptions on the
objective. To make the proposed methods faster and more stable, we consider
inexact subproblem minimization and batch-size selection schemes. The efficacy
of SQB methods is demonstrated via comparison with several state-of-the-art
techniques on commonly used datasets.
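To illustrate the semistochastic quadratic-bound idea, here is a sketch that instantiates it for binary logistic regression using Böhning's fixed curvature bound, under which 0.25·XᵀX/n majorizes the Hessian of the averaged negative log-likelihood; the batch-growth schedule and all names are illustrative assumptions, and the paper's method targets general partition functions with a different bound construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def sqb_logistic(X, y, n_iter=50, batch0=32, growth=1.5, eps=1e-6):
    """Semistochastic quadratic-bound sketch for binary logistic regression.

    Boehning's bound B = 0.25 * X^T X / n + eps*I majorizes the Hessian of
    the averaged negative log-likelihood, so each step exactly minimizes a
    quadratic upper bound built at the current iterate. Gradients use
    mini-batches whose size grows over iterations -- the semistochastic
    regime between pure stochastic and full batch methods.
    """
    n, d = X.shape
    theta = np.zeros(d)
    B = 0.25 * (X.T @ X) / n + eps * np.eye(d)   # fixed curvature bound
    B_inv = np.linalg.inv(B)
    batch = float(batch0)
    for _ in range(n_iter):
        idx = rng.choice(n, size=min(int(batch), n), replace=False)
        p = 1.0 / (1.0 + np.exp(-X[idx] @ theta))   # predicted probabilities
        g = X[idx].T @ (p - y[idx]) / len(idx)      # mini-batch gradient
        theta -= B_inv @ g                          # bound-minimization step
        batch *= growth                             # grow batch to cut noise
    return theta

# Toy usage on synthetic data.
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.5 * rng.normal(size=500) > 0).astype(float)
print(np.round(sqb_logistic(X, y), 2))
```

Growing the batch geometrically shrinks gradient noise as the iterates approach a stationary point, which is the usual rationale for expecting semistochastic schedules to retain fast convergence while starting cheaply.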