Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization
In this work, we revisit a classical incremental implementation of the
primal-descent dual-ascent gradient method used to solve equality-constrained
optimization problems. We provide a short proof that establishes
the linear (exponential) convergence of the algorithm for smooth
strongly-convex cost functions and study its relation to the non-incremental
implementation. We also study the effect of the augmented Lagrangian penalty
term on the performance of distributed optimization algorithms for the
minimization of aggregate cost functions over multi-agent networks.
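As a concrete illustration of the iteration studied above, here is a minimal sketch of a primal-descent dual-ascent step on an augmented Lagrangian for a small, strongly convex, equality-constrained quadratic program; the problem data, penalty weight, and step size are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative problem: min 0.5 x'Qx - c'x  s.t.  Ax = b, with Q > 0.
Q = np.diag([2.0, 3.0, 5.0])          # smooth, strongly convex cost
c = np.array([1.0, -2.0, 0.5])
A = np.array([[1.0, 1.0, 1.0]])       # single equality constraint
b = np.array([1.0])

rho, mu = 1.0, 0.05                   # augmented-Lagrangian penalty, step size
x, lam = np.zeros(3), np.zeros(1)

for _ in range(2000):
    # Gradient of the augmented Lagrangian in x:
    #   Qx - c + A'lam + rho * A'(Ax - b)
    grad_x = Q @ x - c + A.T @ lam + rho * A.T @ (A @ x - b)
    x = x - mu * grad_x               # primal descent step
    lam = lam + mu * (A @ x - b)      # dual ascent step on the residual

print("constraint residual:", np.linalg.norm(A @ x - b))
```

Under smoothness and strong convexity of the cost, iterations of this form contract linearly, which is the convergence behavior the abstract establishes.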
Distributed Model Predictive Consensus via the Alternating Direction Method of Multipliers
We propose a distributed optimization method for solving a distributed model
predictive consensus problem. The goal is to design a distributed controller
for a network of dynamical systems to optimize a coupled objective function
while respecting state and input constraints. The distributed optimization
method is an augmented Lagrangian method called the Alternating Direction
Method of Multipliers (ADMM), which was introduced in the 1970s but has seen a
recent resurgence in the context of dramatic increases in computing power and
the development of widely available distributed computing platforms. The method
is applied to position and velocity consensus in a network of double
integrators. We find that a few tens of ADMM iterations yield closed-loop
performance near what is achieved by solving the optimization problem
centrally. Furthermore, the use of recent code generation techniques for
solving local subproblems yields fast overall computation times.
Comment: 7 pages, 5 figures, 50th Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 2012
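For reference, the sketch below shows the generic consensus-ADMM loop such a controller builds on: parallel local minimizations with an augmented-Lagrangian term, an averaging step for the shared variable, and a dual update that tracks disagreement. The scalar quadratic costs stand in for the paper's double-integrator MPC subproblems.

```python
import numpy as np

# Consensus ADMM: min sum_i 0.5*a_i*(x_i - t_i)^2  s.t.  x_i = z for all i.
a = np.array([1.0, 2.0, 4.0])         # local curvatures (stand-ins for MPC costs)
t = np.array([0.0, 1.0, 3.0])         # local targets
rho = 1.0
x, z, u = np.zeros(3), 0.0, np.zeros(3)   # u: scaled dual variables

for k in range(30):                   # "a few tens" of iterations
    # Local step (solved in parallel by each agent):
    #   argmin_x 0.5*a_i*(x - t_i)^2 + rho/2*(x - z + u_i)^2
    x = (a * t + rho * (z - u)) / (a + rho)
    z = np.mean(x + u)                # consensus step: network-wide average
    u = u + x - z                     # dual update on the disagreement

print("consensus value:", z, "centralized optimum:", np.sum(a * t) / np.sum(a))
```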
Consensus ALADIN: A Framework for Distributed Optimization and Its Application in Federated Learning
This paper investigates algorithms for solving non-convex distributed
consensus optimization problems. Since Typical ALADIN (Typical Augmented
Lagrangian based Alternating Direction Inexact Newton Method, T-ALADIN for
short) [1] performs well on non-convex distributed optimization problems,
directly adopting T-ALADIN for consensus problems is a natural approach.
However, T-ALADIN typically incurs high communication and computation
overhead, which makes such an approach far from efficient. In this paper, we
propose a new variant of the ALADIN family, coined
consensus ALADIN (C-ALADIN for short). C-ALADIN inherits all the good
properties of T-ALADIN, such as the local linear or super-linear convergence
rate and the local convergence guarantees for non-convex optimization problems;
in addition, C-ALADIN offers unique improvements in communication and
computational efficiency. Moreover, C-ALADIN admits a reduced version that,
compared with Consensus ADMM (Alternating Direction Method of Multipliers)
[3], shows strong convergence performance even without the help of
second-order information. We also propose a practical version of C-ALADIN,
named FedALADIN, that seamlessly serves emerging federated learning
applications, expanding the reach of C-ALADIN. We
provide numerical experiments to demonstrate the effectiveness of C-ALADIN. The
results show that C-ALADIN delivers significant improvements in convergence
performance.
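The sketch below follows the generic ALADIN template that C-ALADIN specializes: parallel local augmented-Lagrangian solves, exchange of local gradients and Hessian information, and a coordination step that enforces consensus. It uses scalar quadratic costs for readability and is a structural illustration under simplifying assumptions, not the exact C-ALADIN updates of the paper.

```python
import numpy as np

# Consensus problem: min sum_i f_i(z), f_i(x) = 0.5*a_i*(x - t_i)^2, posed as
# min sum_i f_i(x_i)  s.t.  x_i = z (consensus constraints, multipliers lam_i).
a = np.array([1.0, 2.0, 4.0])
t = np.array([0.0, 1.0, 3.0])
rho = 1.0
lam, z = np.zeros(3), 0.0

for k in range(10):
    # 1) Parallel local solves: y_i = argmin f_i(y) + lam_i*y + rho/2*(y - z)^2.
    y = (a * t - lam + rho * z) / (a + rho)
    # 2) Each agent reports gradient and (here exact) Hessian of f_i at y_i.
    g, H = a * (y - t), a
    # 3) Coordination: min sum_i 0.5*H_i*dy_i^2 + g_i*dy_i  s.t.  y_i + dy_i = z.
    z = (H @ y - g.sum()) / H.sum()
    lam = -(g + H * (z - y))          # multipliers recovered from the QP duals

print("consensus value:", z, "centralized optimum:", np.sum(a * t) / np.sum(a))
```

Because the quadratic model in the coordination step is exact for these costs, the sketch converges in one iteration; on genuinely non-convex local costs, the same structure is what yields the local linear or super-linear rates the abstract refers to.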
Two-Stage Consensus-Based Distributed MPC for Interconnected Microgrids
In this paper, we propose a model predictive control based two-stage energy
management system that aims at increasing the renewable infeed in
interconnected microgrids (MGs). In particular, the proposed approach ensures
that each MG in the network benefits from power exchange. In the first stage,
the optimal islanded operational cost of each MG is obtained. In the second
stage, the power exchange is determined such that the operational cost of each
MG is below the optimal islanded cost from the first stage. In this stage, a
distributed augmented Lagrangian method is used to solve the optimisation
problem and determine the power flow of the network without requiring a central
entity. The algorithm converges faster than dual decomposition while requiring
the same information exchange per iteration. The properties of the algorithm
are illustrated in a numerical case study.
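As a point of reference for that claim, below is a minimal dual-decomposition sketch of the second-stage coupling: a price on the network power balance is updated by dual ascent while each MG solves its own problem in parallel, so the only information exchanged per iteration is the local power decisions and the price. The quadratic costs and step size are illustrative, not the paper's MG model.

```python
import numpy as np

# Toy second stage: MG i chooses an exchange power p_i with local cost
# c_i(p) = 0.5*q_i*(p - d_i)^2, subject to network balance sum_i p_i = 0.
q = np.array([1.0, 2.0, 4.0])         # illustrative cost curvatures
d = np.array([-1.0, 0.5, 1.0])        # illustrative preferred exchanges
lam, step = 0.0, 0.5                  # price on the balance constraint, step size

for _ in range(200):
    # Each MG minimizes its local cost plus the price term (in parallel):
    #   p_i = argmin_p c_i(p) + lam * p
    p = d - lam / q
    lam += step * p.sum()             # dual ascent on the balance residual

print("exchanges:", p, "imbalance:", p.sum())
```

The distributed augmented Lagrangian method of the paper exchanges the same quantities per iteration but, per the abstract, converges faster.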
On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems
Nonconvex and structured optimization problems arise in many engineering
applications that demand scalable and distributed solution methods. The study
of the convergence properties of these methods is in general difficult due to
the nonconvexity of the problem. In this paper, two distributed solution
methods that combine the fast convergence properties of augmented
Lagrangian-based methods with the separability properties of alternating
optimization are investigated. The first method is adapted from the classic
quadratic penalty function method and is called the Alternating Direction
Penalty Method (ADPM). Unlike the original quadratic penalty function method,
in which single-step optimizations are adopted, ADPM uses an alternating
optimization, which in turn makes it scalable. The second method is the
well-known Alternating Direction Method of Multipliers (ADMM). It is shown
that ADPM for nonconvex problems asymptotically converges to a primal feasible
point under mild conditions, and an additional condition ensuring that it
asymptotically reaches the standard first-order necessary conditions for local
optimality is introduced. In the case of ADMM, novel sufficient conditions
under which the algorithm asymptotically reaches the standard first-order
necessary conditions are established. Based on this, the complete convergence
of ADMM for a class of low-dimensional problems is characterized. Finally, the
results are illustrated by applying ADPM and ADMM to a nonconvex localization
problem in wireless sensor networks.
Comment: 13 pages, 6 figures
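To make the ADPM mechanics concrete, here is a minimal sketch on a simple splitting (a convex toy problem chosen for clarity, rather than the paper's nonconvex sensor-localization problem): the coupling constraint is enforced by a quadratic penalty whose weight grows across iterations, and each sweep alternates exact minimization over the two blocks instead of a joint single-step solve.

```python
import numpy as np

# ADPM-style iteration on: min 0.5*(x - 3)^2 + |z|  s.t.  x = z.
# The constraint enters through the penalty rho/2*(x - z)^2; rho increases
# over the outer iterations, driving the iterates toward feasibility.
x, z, rho = 0.0, 0.0, 1.0

for k in range(60):
    x = (3.0 + rho * z) / (1.0 + rho)               # x-block: quadratic minimization
    z = np.sign(x) * max(abs(x) - 1.0 / rho, 0.0)   # z-block: soft-thresholding
    rho *= 1.2                                      # grow the penalty parameter

print("x:", x, "z:", z, "feasibility gap:", abs(x - z))   # optimum is x = z = 2
```

The alternating block updates are what make the method scalable relative to the original single-step quadratic penalty method, while the growing penalty drives asymptotic primal feasibility, matching the convergence statement above.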