Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with non-convexity
of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism that
locally estimates gradient averages; and ii) a novel block-wise
consensus-based protocol that performs local block-averaging and
gradient-tracking operations. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
affects the communication overhead and the practical convergence speed.
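
As a rough illustration of the scheme, the following Python sketch runs one block-wise SCA/averaging loop on a complete graph with uniform weights. All problem data (quadratic local costs, an l1 regularizer, the block partition, and the parameters lam and tau) are illustrative stand-ins rather than the paper's setup, and the surrogate used in the SCA step is a simple linearization-plus-proximal choice.

import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 12                                    # agents, total variables
blocks = np.split(np.arange(d), 3)              # three blocks of four variables
W = np.full((N, N), 1.0 / N)                    # doubly stochastic weights (complete graph)
A = [rng.standard_normal((d, d)) for _ in range(N)]
b = [rng.standard_normal(d) for _ in range(N)]
lam, tau = 0.1, 50.0                            # l1 weight, proximal parameter (hypothetical)

def grad(i, xi):                                # gradient of the local smooth cost
    return A[i].T @ (A[i] @ xi - b[i])

def soft(v, t):                                 # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((N, d))                            # local copies of the decision vector
y = np.array([grad(i, x[i]) for i in range(N)]) # gradient trackers

for k in range(300):
    blk = blocks[rng.integers(len(blocks))]     # block processed this round
    x_loc = x.copy()                            # SCA step on the chosen block only
    for i in range(N):
        x_loc[i, blk] = soft(x[i, blk] - y[i, blk] / tau, lam / tau)
    x_new = x.copy()
    x_new[:, blk] = W @ x_loc[:, blk]           # block-wise consensus averaging
    g_old = np.array([grad(i, x[i]) for i in range(N)])
    g_new = np.array([grad(i, x_new[i]) for i in range(N)])
    y[:, blk] = (W @ y + g_new - g_old)[:, blk] # gradient tracking on the same block
    x = x_new
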
A Distributed Asynchronous Method of Multipliers for Constrained Nonconvex Optimization
This paper presents a fully asynchronous and distributed approach for
tackling optimization problems in which both the objective function and the
constraints may be nonconvex. In the considered network setting each node is
active upon triggering of a local timer and has access only to a portion of the
objective function and to a subset of the constraints. In the proposed
technique, based on the method of multipliers, each node performs, when it
wakes up, either a descent step on a local augmented Lagrangian or an ascent
step on the local multiplier vector. Nodes realize when to switch from the
descent step to the ascent one through an asynchronous distributed logic-AND,
which detects when all the nodes have reached a predefined tolerance in the
minimization of the augmented Lagrangian. It is shown that the resulting
distributed algorithm is equivalent to a block coordinate descent for the
minimization of the global augmented Lagrangian. This allows one to extend the
properties of the centralized method of multipliers to the considered
distributed framework. Two application examples are presented to validate the
proposed approach: a distributed source localization problem and the parameter
estimation of a neural network.
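
A minimal Python simulation of the wake-up logic described above: each node, when its (randomly simulated) timer fires, takes a gradient descent step on a local augmented Lagrangian until the gradient falls below a tolerance, and the multiplier ascent step happens only once a logic-AND over all nodes succeeds. The quadratic costs, the linear local constraints a_i.x_i = 1, and all parameters are hypothetical, and the logic-AND is checked centrally here purely for brevity.

import numpy as np

rng = np.random.default_rng(1)
N, d = 4, 3
Q = [np.diag(rng.uniform(1.0, 3.0, d)) for _ in range(N)]   # local quadratic costs
c = [rng.standard_normal(d) for _ in range(N)]
a = [rng.standard_normal(d) for _ in range(N)]              # local constraint a_i.x_i = 1
rho, alpha, tol = 2.0, 0.02, 1e-5

x = np.zeros((N, d))
mu = np.zeros(N)                                # local multipliers
done = np.zeros(N, dtype=bool)                  # per-node flags for the logic-AND

def h(i, xi):                                   # local equality constraint value
    return a[i] @ xi - 1.0

for _ in range(200000):
    i = rng.integers(N)                         # a local timer triggers node i
    g = Q[i] @ x[i] + c[i] + (mu[i] + rho * h(i, x[i])) * a[i]
    if np.linalg.norm(g) > tol:
        x[i] -= alpha * g                       # descent on the local augmented Lagrangian
        done[i] = False
    else:
        done[i] = True                          # node reports it reached the tolerance
    if done.all():                              # logic-AND: switch to the ascent step
        mu += rho * np.array([h(j, x[j]) for j in range(N)])
        done[:] = False
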
Convergence rate analysis of a subgradient averaging algorithm for distributed optimisation with different constraint sets
We consider a multi-agent setting with agents exchanging information over a network to solve a convex constrained optimisation problem in a distributed manner. We analyse a new algorithm based on local subgradient exchange under undirected time-varying communication. First, we prove asymptotic convergence of the iterates to a minimum of the given optimisation problem for time-varying step-sizes of the form $c(k) = \frac{\eta}{k+1}$, for some $\eta > 0$. We then restrict attention to step-size choices $c(k) = \frac{\eta}{\sqrt{k+1}}$, $\eta > 0$, and establish a convergence rate of $\mathcal{O}\left(\frac{\ln(k)}{\sqrt{k}}\right)$ in objective value. Our algorithm extends currently available distributed subgradient/proximal methods by: (i) accounting for different constraint sets at each node, and (ii) enhancing the convergence speed thanks to a subgradient averaging step performed by the agents. A numerical example demonstrates the efficacy of the proposed algorithm.
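
The averaging step can be sketched in a few lines of Python. Below, each agent mixes its neighbours' iterates and subgradients with uniform weights, then takes a projected step with the $c(k) = \eta/\sqrt{k+1}$ schedule from the rate result; the l1 costs and the per-agent boxes are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)
N, d, eta = 4, 2, 1.0
W = np.full((N, N), 1.0 / N)                    # doubly stochastic mixing matrix
t = rng.standard_normal((N, d))                 # local costs ||x - t_i||_1 (hypothetical)
ub = np.array([1.0 - 0.1 * i for i in range(N)])  # a different box per agent

def subgrad(i, xi):                             # a subgradient of the local cost
    return np.sign(xi - t[i])

def proj(i, v):                                 # projection onto agent i's constraint set
    return np.clip(v, -1.0, ub[i])

x = np.zeros((N, d))
for k in range(5000):
    g_avg = W @ np.array([subgrad(i, x[i]) for i in range(N)])  # subgradient averaging
    x_mix = W @ x                               # consensus on the iterates
    c = eta / np.sqrt(k + 1)                    # diminishing step size
    x = np.array([proj(i, x_mix[i] - c * g_avg[i]) for i in range(N)])
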
Uncertain Multi-Agent Systems with Distributed Constrained Optimization Missions and Event-Triggered Communications: Application to Resource Allocation
This paper deals with solving distributed optimization problems with equality
constraints by a class of uncertain nonlinear heterogeneous dynamic multi-agent
systems. Each agent is assumed to have an uncertain dynamic model, limited
information about the overall problem, and limited access to the state
variables of the other agents. A distributed algorithm
that guarantees cooperative solving of the constrained optimization problem by
the agents is proposed. Under this algorithm, the agents do not need to
broadcast their data continuously. It is shown that the proposed algorithm can
be used to solve resource allocation problems.
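
A toy Python sketch of the event-triggered idea in a resource-allocation context: agents hold a budget-feasible allocation, update it using marginal costs evaluated at the last broadcast states (which preserves the equality constraint exactly), and broadcast only when their state drifts past a threshold. The quadratic costs, the trigger rule, and the complete-network averaging are illustrative simplifications of the paper's uncertain nonlinear agents.

import numpy as np

rng = np.random.default_rng(3)
N, total = 5, 10.0
a = rng.uniform(1.0, 3.0, N)                    # f_i(x) = a_i*x**2/2 + b_i*x (hypothetical)
b = rng.standard_normal(N)

x = np.full(N, total / N)                       # feasible start: sum(x) == total
xhat = x.copy()                                 # last broadcast states
eps, gamma = 1e-3, 0.05                         # trigger threshold, step size
broadcasts = 0

for k in range(5000):
    lam_hat = a * xhat + b                      # marginal costs at broadcast states
    x = x - gamma * (lam_hat - lam_hat.mean())  # update preserves sum(x) == total
    trigger = np.abs(x - xhat) > eps            # local event condition
    xhat[trigger] = x[trigger]                  # broadcast only when triggered
    broadcasts += int(trigger.sum())

print(broadcasts, "broadcasts instead of", N * 5000)
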
Randomized Constraints Consensus for Distributed Robust Mixed-Integer Programming
In this paper, we consider a network of processors aiming at cooperatively
solving mixed-integer convex programs subject to uncertainty. Each node only
knows a common cost function and its local uncertain constraint set. We propose
a randomized, distributed algorithm working under asynchronous, unreliable and
directed communication. The algorithm is based on a local computation and
communication paradigm. At each communication round, nodes perform two updates:
(i) a verification in which they check---in a randomized fashion---the robust
feasibility of a candidate optimal point, and (ii) an optimization step in
which they exchange their candidate basis (the minimal set of constraints
defining a solution) with neighbors and locally solve an optimization problem.
As the main result, we show that processors can stop the algorithm after a finite
number of communication rounds (either because verification has been successful
for a sufficient number of rounds or because a given threshold has been
reached), so that candidate optimal solutions are consensual. The common
solution is proven to be---with high confidence---feasible and hence optimal
for the entire set of uncertainty except a subset having an arbitrarily small
probability measure. We show the effectiveness of the proposed distributed
algorithm using two examples: a random, uncertain mixed-integer linear program
and a distributed localization in wireless sensor networks. The distributed
algorithm is implemented on a multi-core platform in which the nodes
communicate asynchronously.
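
The verify-then-optimize pattern can be mimicked in a few lines of Python. In this toy version, a node repeatedly samples realizations of its uncertain constraint to test the current candidate, re-solves a small integer program (by enumeration) whenever a violated sample is found, and stops after enough consecutive successful verification rounds. The basis exchange among neighbours is compressed into a single shared constraint pool, and all problem data are invented.

import itertools
import numpy as np

rng = np.random.default_rng(4)
cost = np.array([1.0, 2.0])                     # common objective: minimize cost.x
grid = list(itertools.product(range(6), repeat=2))  # integer candidates (toy scale)

def sample_a():                                 # uncertain constraint a.x >= 3
    return rng.uniform(0.5, 1.5, 2)

def solve(pool):                                # brute-force the small integer program
    feas = [x for x in grid if all(a @ x >= 3.0 for a in pool)]
    return min(feas, key=lambda x: cost @ np.array(x))

pool = [sample_a() for _ in range(3)]           # initial constraint scenarios
cand = solve(pool)
M, ok_rounds, needed = 25, 0, 5
while ok_rounds < needed:                       # stop after enough verified rounds
    violated = [a for a in (sample_a() for _ in range(M)) if a @ cand < 3.0]
    if violated:
        pool += violated                        # keep the violating scenarios
        cand = solve(pool)                      # re-solve with the enlarged pool
        ok_rounds = 0
    else:
        ok_rounds += 1                          # verification step succeeded
print(cand)
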
Tracking-ADMM for Distributed Constraint-Coupled Optimization
We consider constraint-coupled optimization problems in which agents of a
network aim to cooperatively minimize the sum of local objective functions
subject to individual constraints and a common linear coupling constraint. We
propose a novel optimization algorithm that embeds a dynamic average consensus
protocol in the parallel Alternating Direction Method of Multipliers (ADMM) to
design a fully distributed scheme for the considered set-up. The dynamic
average mechanism allows agents to track the time-varying coupling constraint
violation (at the current solution estimates). The tracked version of the
constraint violation is then used to update local dual variables in a
consensus-based scheme mimicking a parallel ADMM step. Under convexity, we
prove that all limit points of the agents' primal solution estimates form an
optimal solution of the constraint-coupled (primal) problem. The result is
proved by means of a Lyapunov-based analysis simultaneously showing consensus
of the dual estimates to a dual optimal solution, convergence of the tracking
scheme and asymptotic optimality of primal iterates. A numerical study on
optimal charging schedule of plug-in electric vehicles corroborates the
theoretical results.
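
The tracking ingredient can be illustrated in Python as follows. Each agent keeps a dynamic-average-consensus estimate of the coupling-constraint violation and uses it in its dual update; for brevity, the primal step below is a plain dual-decomposition minimization rather than the paper's ADMM update, and the quadratic costs, coupling data, and step size are made up.

import numpy as np

rng = np.random.default_rng(5)
N, d, m = 4, 2, 1
Q = [np.diag(rng.uniform(1.0, 2.0, d)) for _ in range(N)]   # strongly convex local costs
A = [rng.standard_normal((m, d)) for _ in range(N)]
btot = np.ones(m)                               # coupling: sum_i A_i x_i = btot
W = np.full((N, N), 1.0 / N)                    # doubly stochastic weights
gamma = 0.1                                     # dual step size (hypothetical)

def primal(i, li):                              # argmin 0.5*x'Q_i x + li'A_i x
    return -np.linalg.solve(Q[i], A[i].T @ li)

lam = np.zeros((N, m))                          # local dual estimates
x = np.array([primal(i, lam[i]) for i in range(N)])
res = np.array([A[i] @ x[i] - btot / N for i in range(N)])
y = res.copy()                                  # trackers of the average violation

for k in range(1000):
    lam = W @ lam + gamma * y                   # dual update uses the tracked violation
    x_new = np.array([primal(i, lam[i]) for i in range(N)])
    res_new = np.array([A[i] @ x_new[i] - btot / N for i in range(N)])
    y = W @ y + res_new - res                   # dynamic average consensus step
    x, res = x_new, res_new
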