Towards time-varying proximal dynamics in Multi-Agent Network Games
Distributed decision making in multi-agent networks has recently attracted
significant research attention thanks to its wide applicability, e.g., in the
management and optimization of computer networks, power systems, robotic teams,
sensor networks and consumer markets. Distributed decision-making problems can
be modeled as inter-dependent optimization problems, i.e., multi-agent
game-equilibrium seeking problems, where noncooperative agents seek an
equilibrium by communicating over a network. To achieve a network equilibrium,
the agents may decide to update their decision variables via proximal dynamics,
driven by the decision variables of the neighboring agents. In this paper, we
provide an operator-theoretic characterization of convergence with a
time-invariant communication network. For the time-varying case, we consider
adjacency matrices that may switch subject to a dwell time. We illustrate our
investigations using a distributed robotic exploration example.
Comment: 6 pages, 3 figures
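The proximal dynamics mentioned in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's formulation: it assumes a simple box-constraint proximal operator, a doubly stochastic time-invariant adjacency matrix, and synchronous updates; all function and variable names are hypothetical.

```python
import numpy as np

def prox_box(v, lo=-1.0, hi=1.0):
    # Proximal operator of the indicator of a box constraint:
    # simply the Euclidean projection onto [lo, hi].
    return np.clip(v, lo, hi)

def proximal_dynamics(A, x0, steps=200):
    # Each agent repeatedly resets its decision variable to the prox of
    # the weighted average of its neighbors' decisions, the averaging
    # weights coming from the (row-stochastic) adjacency matrix A.
    x = x0.copy()
    for _ in range(steps):
        x = prox_box(A @ x)
    return x

# Three agents on a complete graph with a doubly stochastic adjacency matrix.
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = proximal_dynamics(A, np.array([2.0, -3.0, 0.5]))
```

Under these assumptions the iteration contracts toward a network equilibrium: all agents end up with the same feasible decision.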
A Distributed Asynchronous Method of Multipliers for Constrained Nonconvex Optimization
This paper presents a fully asynchronous and distributed approach for
tackling optimization problems in which both the objective function and the
constraints may be nonconvex. In the considered network setting each node is
active upon triggering of a local timer and has access only to a portion of the
objective function and to a subset of the constraints. In the proposed
technique, based on the method of multipliers, each node performs, when it
wakes up, either a descent step on a local augmented Lagrangian or an ascent
step on the local multiplier vector. Nodes determine when to switch from the
descent step to the ascent one through an asynchronous distributed logic-AND,
which detects when all the nodes have reached a predefined tolerance in the
minimization of the augmented Lagrangian. It is shown that the resulting
distributed algorithm is equivalent to a block coordinate descent for the
minimization of the global augmented Lagrangian. This allows one to extend the
properties of the centralized method of multipliers to the considered
distributed framework. Two application examples are presented to validate the
proposed approach: a distributed source localization problem and the parameter
estimation of a neural network.
Comment: arXiv admin note: substantial text overlap with arXiv:1803.0648
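The descent/ascent alternation described above can be sketched on a toy problem. This is a heavily simplified stand-in for the paper's method: two nodes, a single equality constraint, asynchronous activation simulated by a random node index, and the distributed logic-AND replaced by a centralized check; all names and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def method_of_multipliers(rho=1.0, alpha=0.2, tol=1e-8,
                          outer=60, inner_max=10000):
    # Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 = 1,
    # whose optimum is x1 = x2 = 0.5 with multiplier mu = -1.
    x = np.zeros(2)
    mu = 0.0
    for _ in range(outer):
        # Descent phase: a randomly activated node takes a gradient step
        # on its own block of the augmented Lagrangian.  The loop exits
        # when the (here centralized) logic-AND reports that every node
        # has driven its local gradient below the tolerance.
        for _ in range(inner_max):
            g = 2 * x + mu + rho * (x.sum() - 1)   # per-node gradients
            if np.all(np.abs(g) < tol):            # simulated logic-AND
                break
            i = rng.integers(2)                    # a random node wakes up
            x[i] -= alpha * g[i]
        # Ascent phase: the multiplier is updated (by every node, in the
        # distributed setting) using the constraint violation.
        mu += rho * (x.sum() - 1)
    return x, mu

x, mu = method_of_multipliers()
```

Because the inner loop is exactly a (randomized) block coordinate descent on the global augmented Lagrangian, the sketch mirrors the equivalence the paper exploits to inherit centralized convergence guarantees.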
Randomized dual proximal gradient for large-scale distributed optimization
In this paper we consider distributed optimization problems in which the cost
function is separable (i.e., a sum of possibly non-smooth functions all
sharing a common variable) and can be split into a strongly convex term and a
convex one. The second term is typically used to encode constraints or to
regularize the solution. We propose an asynchronous, distributed optimization
algorithm over an undirected topology, based on a proximal gradient update on
the dual problem. We show that by means of a proper choice of primal
variables, the dual problem is separable and the dual variables can be stacked
into separate blocks. This allows us to show that a distributed
gossip update can be obtained by means of a randomized block-coordinate
proximal gradient on the dual function.
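The dual mechanism sketched in the abstract can be illustrated on a small composite problem. This is an assumption-laden sketch, not the paper's algorithm: blocks are single coordinates, the strongly convex term is a quadratic, the convex term is an l1 regularizer (whose conjugate is a box indicator, so the dual prox is a projection), and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_dual_prox_grad(a, gamma=1.0, step=1.0, iters=500):
    # Primal problem: minimize 0.5*||x - a||^2 + gamma*||x||_1.
    # The strongly convex quadratic yields a smooth dual term with
    # gradient a - lam; the l1 term's conjugate is the indicator of the
    # box [-gamma, gamma], whose prox is a projection.  Each iteration
    # updates one randomly chosen dual block (here a single coordinate),
    # mimicking a gossip-style randomized block-coordinate update.
    lam = np.zeros_like(a)
    for _ in range(iters):
        i = rng.integers(a.size)                 # pick a random block
        v = lam[i] + step * (a[i] - lam[i])      # dual gradient step
        lam[i] = np.clip(v, -gamma, gamma)       # prox = box projection
    return a - lam                               # primal recovery x(lam)

a = np.array([3.0, -0.4, 1.5])
x = rand_dual_prox_grad(a)
```

Under these assumptions the recovered primal solution is the soft-thresholding of `a` at level `gamma`, the known closed-form minimizer of this composite problem.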