On the Global Linear Convergence of the ADMM with Multi-Block Variables
The alternating direction method of multipliers (ADMM) has been widely used
for solving structured convex optimization problems. In particular, the ADMM
can solve convex programs that minimize the sum of $N$ convex functions with
$N$-block variables linked by some linear constraints. While the convergence of
the ADMM for $N = 2$ was well established in the literature, it remained an open
problem for a long time whether or not the ADMM for $N \ge 3$ is still
convergent. Recently, it was shown in [3] that without further conditions the
ADMM for $N \ge 3$ may actually fail to converge. In this paper, we show that
under some easily verifiable and reasonable conditions the global linear
convergence of the ADMM when $N \ge 3$ can still be assured, which is important
since the ADMM is a popular method for solving large-scale multi-block
optimization models and is known to perform very well in practice even when
$N \ge 3$. Our study aims to offer an explanation for this phenomenon.
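As a concrete reference point, here is a minimal sketch, in standard notation, of the multi-block (Gauss-Seidel) ADMM that this line of work analyzes, for $\min_{x_1,\dots,x_N} \sum_{i=1}^{N} f_i(x_i)$ subject to $\sum_{i=1}^{N} A_i x_i = b$; the penalty parameter $\gamma > 0$ and the sign convention for the multiplier $\lambda$ are assumptions here, as conventions vary across papers:
\[
\begin{aligned}
x_i^{k+1} &\in \arg\min_{x_i}\ f_i(x_i) + \frac{\gamma}{2}\Big\|\sum_{j<i} A_j x_j^{k+1} + A_i x_i + \sum_{j>i} A_j x_j^{k} - b - \lambda^k/\gamma\Big\|^2, \quad i = 1,\dots,N, \\
\lambda^{k+1} &= \lambda^k - \gamma\Big(\sum_{i=1}^{N} A_i x_i^{k+1} - b\Big).
\end{aligned}
\]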
Iteration Complexity Analysis of Multi-Block ADMM for a Family of Convex Minimization without Strong Convexity
The alternating direction method of multipliers (ADMM) is widely used in
solving structured convex optimization problems due to its superior practical
performance. On the theoretical side however, a counterexample was shown in [7]
indicating that the multi-block ADMM for minimizing the sum of $N$
($N \ge 3$) convex functions with $N$-block variables linked by linear constraints may
diverge. It is therefore of great interest to investigate further sufficient
conditions on the input side which can guarantee convergence for the
multi-block ADMM. The existing results typically require the strong convexity
on parts of the objective. In this paper, we present convergence and
convergence rate results for the multi-block ADMM applied to solve certain
$N$-block ($N \ge 3$) convex minimization problems without requiring strong
convexity. Specifically, we prove the following two results: (1) the
multi-block ADMM returns an $\epsilon$-optimal solution within
$O(1/\epsilon^2)$ iterations by solving an associated perturbation to the
original problem; (2) the multi-block ADMM returns an $\epsilon$-optimal
solution within $O(1/\epsilon)$ iterations when it is applied to solve a
certain sharing problem, under the condition that the augmented Lagrangian
function satisfies the Kurdyka-Lojasiewicz property, which essentially covers
most convex optimization models except for some pathological cases.
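For context, the sharing problem in result (2) is commonly written in the form below; this splitting into separable terms $f_i$ and a coupling function $g$ follows the standard presentation and is an assumption about the paper's exact setup:
\[
\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} f_i(x_i) + g\Big(\sum_{i=1}^{N} A_i x_i\Big),
\]
which fits the multi-block template once the shared sum is introduced as an extra block variable.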
L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework
Despite the importance of sparsity in many large-scale applications, there
are few methods for distributed optimization of sparsity-inducing objectives.
In this paper, we present a communication-efficient framework for
L1-regularized optimization in the distributed environment. By viewing
classical objectives in a more general primal-dual setting, we develop a new
class of methods that can be efficiently distributed and applied to common
sparsity-inducing models, such as Lasso, sparse logistic regression, and
elastic net-regularized problems. We provide theoretical convergence guarantees
for our framework, and demonstrate its efficiency and flexibility with a
thorough experimental comparison on Amazon EC2. Our proposed framework yields
speedups of up to 50x as compared to current state-of-the-art methods for
distributed L1-regularized optimization.
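The building block any L1 framework like this relies on is the proximal operator of the L1 norm, i.e., elementwise soft-thresholding. The sketch below is not the paper's primal-dual method; it is a minimal single-machine proximal-gradient (ISTA) baseline for the lasso, with illustrative names (soft_threshold, lasso_prox_grad):

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_prox_grad(A, b, lam, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of grad
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x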
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient
distributed optimization methods for machine learning. We present a
general-purpose framework for distributed computing environments, CoCoA, that
has an efficient communication scheme and is applicable to a wide variety of
problems in machine learning and signal processing. We extend the framework to
cover general non-strongly-convex regularizers, including L1-regularized
problems like lasso, sparse logistic regression, and elastic net
regularization, and show how earlier work can be derived as a special case. We
provide convergence guarantees for the class of convex regularized loss
minimization objectives, leveraging a novel approach in handling
non-strongly-convex regularizers and non-smooth loss functions. The resulting
framework has markedly improved performance over state-of-the-art methods, as
we illustrate with an extensive set of experiments on real distributed
datasets.
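To make the communication pattern concrete, here is a single-process sketch of a CoCoA-style round on the lasso: features are partitioned across workers, each worker runs local coordinate descent against a stale residual, and only the summed residual changes are exchanged once per round (the "adding" aggregation of CoCoA+). Function and parameter names are illustrative, not the framework's API:

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * max(abs(v) - t, 0.0)

def cocoa_style_lasso(A, b, lam, n_workers=4, rounds=50, local_passes=5):
    # Sketch of the communication pattern on 0.5*||Ax - b||^2 + lam*||x||_1.
    n = A.shape[1]
    x = np.zeros(n)
    r = b.astype(float).copy()                        # residual b - Ax (x = 0)
    blocks = np.array_split(np.arange(n), n_workers)  # disjoint feature blocks
    col_sq = (A ** 2).sum(axis=0)                     # ||a_j||^2 per column
    for _ in range(rounds):
        updates = []
        for blk in blocks:                            # in parallel in practice
            r_loc = r.copy()                          # worker's stale residual
            for _ in range(local_passes):
                for j in blk:                         # local coordinate descent
                    z = x[j] + A[:, j] @ r_loc / col_sq[j]
                    x_new = soft_threshold(z, lam / col_sq[j])
                    r_loc -= A[:, j] * (x_new - x[j])
                    x[j] = x_new
            updates.append(r_loc - r)                 # block's residual change
        r += sum(updates)                             # one aggregation per round
    return x

Because the feature blocks are disjoint and only residual deltas are summed, the aggregated residual stays exactly consistent with the updated x after each round.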
Nonconvex Generalization of ADMM for Nonlinear Equality Constrained Problems
The ever-increasing demand for efficient and distributed optimization
algorithms for large-scale data has led to the growing popularity of the
Alternating Direction Method of Multipliers (ADMM). However, although the use
of ADMM to solve linear equality constrained problems is well understood, we
lack a generic framework for solving problems with nonlinear equality
constraints, which are common in practical applications (e.g., spherical
constraints). To address this problem, we propose a new generic ADMM
framework for handling nonlinear equality constraints, neADMM. After
introducing the generalized problem formulation and the neADMM algorithm, the
convergence properties of neADMM are discussed, along with its sublinear
convergence rate $O(1/k)$, where $k$ is the number of iterations. Next, two
important applications of neADMM are considered and the paper concludes by
describing extensive experiments on several synthetic and real-world datasets
to demonstrate the convergence and effectiveness of neADMM compared to existing
state-of-the-art methods.
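A minimal sketch of the generalized setup, assuming standard ADMM notation (the symbols $h_1$, $h_2$, $\rho$ are assumptions, not necessarily the paper's): the problem is $\min_{x,z} f(x) + g(z)$ subject to a nonlinear coupling $h_1(x) + h_2(z) = 0$, with augmented Lagrangian
\[
L_\rho(x, z, y) = f(x) + g(z) + y^\top\big(h_1(x) + h_2(z)\big) + \frac{\rho}{2}\big\|h_1(x) + h_2(z)\big\|^2,
\]
minimized alternately over $x$ and $z$, followed by the dual update $y \leftarrow y + \rho\,(h_1(x) + h_2(z))$. The linear case $h_1(x) = Ax$, $h_2(z) = Bz - c$ recovers classical two-block ADMM.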