A Distributed Asynchronous Method of Multipliers for Constrained Nonconvex Optimization
This paper presents a fully asynchronous and distributed approach for
tackling optimization problems in which both the objective function and the
constraints may be nonconvex. In the considered network setting, each node
becomes active when a local timer triggers and has access only to a portion of the
objective function and to a subset of the constraints. In the proposed
technique, based on the method of multipliers, each node performs, when it
wakes up, either a descent step on a local augmented Lagrangian or an ascent
step on the local multiplier vector. Nodes determine when to switch from the
descent step to the ascent step through an asynchronous distributed logic-AND,
which detects when all the nodes have reached a predefined tolerance in the
minimization of the augmented Lagrangian. It is shown that the resulting
distributed algorithm is equivalent to a block coordinate descent for the
minimization of the global augmented Lagrangian. This allows one to extend the
properties of the centralized method of multipliers to the considered
distributed framework. Two application examples are presented to validate the
proposed approach: a distributed source localization problem and the parameter
estimation of a neural network.
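The descent/ascent alternation described in this abstract can be illustrated with a minimal centralized sketch of the method of multipliers, which the paper distributes across nodes. The problem instance, step sizes, and tolerances below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the method of multipliers for a single equality
# constraint h(x) = 0, mirroring the descent/ascent switching described
# above. All functions and parameters here are illustrative assumptions.

def method_of_multipliers(grad_f, h, grad_h, x0, rho=10.0, tol=1e-6,
                          inner_tol=1e-8, step=1e-2, outer_iters=50):
    """Alternate descent on the augmented Lagrangian
    L(x, lam) = f(x) + lam*h(x) + (rho/2)*h(x)**2
    with ascent on the multiplier lam."""
    x, lam = x0, 0.0
    for _ in range(outer_iters):
        # Descent phase: drive the gradient of L(., lam) below a tolerance
        # (the role of the distributed logic-AND in the paper is to detect
        # that every node has reached this point).
        while True:
            g = grad_f(x) + (lam + rho * h(x)) * grad_h(x)
            if abs(g) < inner_tol:
                break
            x -= step * g
        # Ascent phase: multiplier update.
        lam += rho * h(x)
        if abs(h(x)) < tol:  # constraint (approximately) satisfied
            break
    return x, lam

# Toy instance: minimize x^2 subject to x - 1 = 0 (solution x = 1, lam = -2).
x_star, lam_star = method_of_multipliers(
    grad_f=lambda x: 2 * x,
    h=lambda x: x - 1.0,
    grad_h=lambda x: 1.0,
    x0=0.0,
)
```

In the paper's distributed version, each node runs its descent steps asynchronously on a local augmented Lagrangian, and the multiplier ascent is only triggered once all nodes report that their local tolerance has been met.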
Nonlinear Programming Methods for Distributed Optimization
In this paper we investigate how standard nonlinear programming algorithms can be used to solve constrained
optimization problems in a distributed manner. The optimization setup consists of a set of agents, interacting through
a communication graph, whose common goal is the minimization of a function expressed as a sum of (possibly
non-convex) differentiable functions. Each function in the sum corresponds to an agent, and each agent is associated
with an equality constraint. By recasting the distributed optimization problem into an equivalent, augmented centralized
problem, we show that distributed algorithms result naturally from applying standard nonlinear programming
techniques. Due to the distributed formulation, the standard assumptions and convergence results no longer hold. We
emphasize what changes are necessary for convergence to still be achieved for three algorithms: two algorithms
based on Lagrangian methods, and one based on the method of multipliers. The changes in the convergence
results are necessary mainly because the (local) minimizers of the lifted optimization problem are not
regular, as a result of the distributed formulation. Unlike for the standard algorithm based on the method of multipliers,
for the distributed version we cannot show that the theoretical super-linear convergence rate can be achieved.
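The loss of regularity mentioned above can be made concrete with a small sketch of the lifted consensus reformulation: each agent keeps a local copy of the decision variable, and the edges of the communication graph impose agreement constraints. The three-agent ring graph below is an illustrative assumption, not an example from the paper.

```python
# Hypothetical sketch: lift a 3-agent problem by giving each agent a local
# copy x_i and imposing the consensus constraints x_i - x_j = 0 over the
# edges of a ring graph. The graph and sizes are illustrative assumptions.
import numpy as np

n_agents = 3
edges = [(0, 1), (1, 2), (2, 0)]  # ring: contains a cycle

# Jacobian A of the stacked equality constraints h(x) = (x_i - x_j) over edges.
A = np.zeros((len(edges), n_agents))
for row, (i, j) in enumerate(edges):
    A[row, i], A[row, j] = 1.0, -1.0

# Regularity (linear independence of the constraint gradients) would require
# A to have full row rank; on any graph with a cycle the edge constraints are
# linearly dependent, so minimizers of the lifted problem are not regular.
rank = int(np.linalg.matrix_rank(A))
print(rank)  # 2, i.e. fewer than the 3 stacked constraints
```

This rank deficiency is structural, not numerical: summing the three rows of `A` gives the zero vector, so any connected graph with a cycle yields redundant consensus constraints, which is why the standard regularity-based convergence arguments must be adapted.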