A Distributed Asynchronous Method of Multipliers for Constrained Nonconvex Optimization
This paper presents a fully asynchronous and distributed approach for
tackling optimization problems in which both the objective function and the
constraints may be nonconvex. In the considered network setting each node is
active upon triggering of a local timer and has access only to a portion of the
objective function and to a subset of the constraints. In the proposed
technique, based on the method of multipliers, each node performs, when it
wakes up, either a descent step on a local augmented Lagrangian or an ascent
step on the local multiplier vector. Nodes realize when to switch from the
descent step to the ascent one through an asynchronous distributed logic-AND,
which detects when all the nodes have reached a predefined tolerance in the
minimization of the augmented Lagrangian. It is shown that the resulting
distributed algorithm is equivalent to a block coordinate descent for the
minimization of the global augmented Lagrangian. This allows one to extend the
properties of the centralized method of multipliers to the considered
distributed framework. Two application examples are presented to validate the
proposed approach: a distributed source localization problem and the parameter
estimation of a neural network.
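As a rough single-node illustration of the descent/ascent alternation described above, the sketch below minimizes a toy augmented Lagrangian by gradient descent until a local tolerance is met, then takes an ascent step on the multiplier vector. The objective, constraint, penalty parameter, and step size are all illustrative assumptions, and the asynchronous distributed logic-AND that coordinates the switch across nodes is reduced to a local tolerance check.

```python
import numpy as np

# Toy local problem for one node: minimize f(x) subject to g(x) = 0,
# via the augmented Lagrangian
#   L_rho(x, mu) = f(x) + mu^T g(x) + (rho/2) ||g(x)||^2.
def f(x):                      # local objective (assumed smooth)
    return 0.5 * np.sum(x**2)

def grad_f(x):
    return x

def g(x):                      # local equality constraint, g(x) = 0
    return np.array([x.sum() - 1.0])

def jac_g(x):                  # Jacobian of g
    return np.ones((1, x.size))

def aug_lagrangian_grad(x, mu, rho):
    return grad_f(x) + jac_g(x).T @ (mu + rho * g(x))

x, mu, rho = np.zeros(3), np.zeros(1), 10.0
tol, step = 1e-6, 0.05
for _ in range(200):
    # Descent phase: gradient steps on the local augmented Lagrangian
    # until the tolerance is met (in the paper, an asynchronous
    # distributed logic-AND detects when *all* nodes have reached it).
    while np.linalg.norm(aug_lagrangian_grad(x, mu, rho)) > tol:
        x -= step * aug_lagrangian_grad(x, mu, rho)
    # Ascent phase: multiplier update on the local constraint violation.
    mu += rho * g(x)
    if np.linalg.norm(g(x)) < tol:
        break

print("x =", x, "constraint violation =", g(x))
```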
Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization
Consider the problem of minimizing the sum of a smooth (possibly non-convex)
and a convex (possibly nonsmooth) function involving a large number of
variables. A popular approach to solve this problem is the block coordinate
descent (BCD) method whereby at each iteration only one variable block is
updated while the remaining variables are held fixed. With recent advances in multi-core parallel processing technology, it is desirable to parallelize the BCD method by allowing multiple blocks to be
updated simultaneously at each iteration of the algorithm. In this work, we
propose an inexact parallel BCD approach where at each iteration, a subset of
the variables is updated in parallel by minimizing convex approximations of the
original objective function. We investigate the convergence of this parallel
BCD method for both randomized and cyclic variable selection rules. We analyze
the asymptotic and non-asymptotic convergence behavior of the algorithm for
both convex and non-convex objective functions. The numerical experiments
suggest that, for the special case of the Lasso problem, the cyclic block selection rule can outperform the randomized one.
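To make the update concrete, here is a minimal sketch of parallel BCD on the Lasso problem min_x 0.5||Ax - b||^2 + lambda ||x||_1, where each chosen block is updated by soft-thresholding a quadratic (convex) approximation of the smooth part, and the randomized rule picks a subset of blocks per iteration. The data, block partition, and step size are illustrative assumptions.

```python
import numpy as np

# Parallel block coordinate descent on the Lasso:
#   min_x 0.5 ||A x - b||^2 + lam ||x||_1
# A random subset of blocks is updated per iteration via the proximal
# (soft-thresholding) solution of a quadratic approximation.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
b = rng.standard_normal(100)
lam = 0.5
blocks = np.split(np.arange(40), 8)          # 8 blocks of 5 coordinates
L = np.linalg.norm(A, 2) ** 2                # conservative Lipschitz constant

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(40)
for it in range(500):
    grad = A.T @ (A @ x - b)
    # Randomized rule: choose a subset of blocks; each is updated from
    # the same iterate, emulating a simultaneous parallel update.
    chosen = rng.choice(len(blocks), size=4, replace=False)
    x_new = x.copy()
    for k in chosen:
        idx = blocks[k]
        x_new[idx] = soft_threshold(x[idx] - grad[idx] / L, lam / L)
    x = x_new

obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
print("objective:", obj)
```

Swapping the random choice for a fixed cyclic schedule over the blocks gives the cyclic selection rule the abstract compares against.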
A Low-Cost Robust Distributed Linearly Constrained Beamformer for Wireless Acoustic Sensor Networks with Arbitrary Topology
We propose a new robust distributed linearly constrained beamformer which
utilizes a set of linear equality constraints to reduce the cross power
spectral density matrix to a block-diagonal form. The proposed beamformer has a
convenient objective function for use in arbitrary distributed network
topologies while having identical performance to a centralized implementation.
Moreover, the new optimization problem is robust to relative acoustic transfer
function (RATF) estimation errors and to target activity detection (TAD)
errors. Two variants of the proposed beamformer are presented and evaluated in
the context of multi-microphone speech enhancement in a wireless acoustic
sensor network, and are compared with other state-of-the-art distributed
beamformers in terms of communication costs and robustness to RATF estimation
errors and TAD errors.
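For context, the sketch below computes the centralized linearly constrained minimum-variance (LCMV) solution w = R^{-1} C (C^H R^{-1} C)^{-1} f that the distributed variants aim to match in performance. The microphone count, constraint set, and covariance estimate are illustrative assumptions, and the block-diagonalizing constraints of the paper are not modeled.

```python
import numpy as np

# Centralized LCMV beamformer: minimize w^H R w subject to C^H w = f,
# with the closed-form solution w = R^{-1} C (C^H R^{-1} C)^{-1} f.
rng = np.random.default_rng(1)
M = 6                                        # number of microphones
X = rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))
R = X @ X.conj().T / 200                     # estimated cross-PSD matrix
C = rng.standard_normal((M, 2)) + 1j * rng.standard_normal((M, 2))
f = np.array([1.0, 0.0])                     # e.g. distortionless + null

Rinv_C = np.linalg.solve(R, C)               # R^{-1} C without explicit inverse
w = Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)
print("constraint residual:", np.abs(C.conj().T @ w - f).max())
```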
Evolutionary Approaches to Minimizing Network Coding Resources
We wish to minimize the resources used for network coding while achieving the
desired throughput in a multicast scenario. We employ evolutionary approaches,
based on a genetic algorithm, that sidestep the computational complexity of this NP-hard problem. Our experiments show substantial improvements over the sub-optimal solutions of prior methods. Our new algorithms improve over our
previously proposed algorithm in three ways. First, whereas the previous
algorithm can be applied only to acyclic networks, our new method also works on networks with cycles. Second, we enrich the set of components used in the
genetic algorithm, which improves the performance. Third, we develop a novel
distributed framework. Combining distributed random network coding with our
distributed optimization yields a network coding protocol where the resources
used for coding are optimized in the setup phase by running our evolutionary
algorithm at each node of the network. We demonstrate the effectiveness of our
approach by carrying out simulations on a number of different sets of network
topologies.
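As a rough sketch of the evolutionary machinery involved, the genetic-algorithm skeleton below evolves a binary chromosome marking which links perform coding, penalizing the number of coding links. The feasibility test is a toy stand-in for the real multicast-throughput check on a network, and all parameters are illustrative assumptions.

```python
import random

# Genetic-algorithm skeleton: each chromosome is a bit vector marking
# which links perform coding; fitness favors fewer coding links while
# rejecting infeasible configurations.
random.seed(0)
GENES, POP, GENS = 20, 30, 100

def feasible(ch):
    # Toy stand-in constraint: at least 5 coding links must stay active.
    return sum(ch) >= 5

def fitness(ch):
    # Fewer coding links is better; infeasible chromosomes are penalized.
    return -sum(ch) if feasible(ch) else -GENES * 10

def crossover(a, b):
    p = random.randrange(1, GENES)           # single-point crossover
    return a[:p] + b[p:]

def mutate(ch, rate=0.05):
    return [g ^ (random.random() < rate) for g in ch]  # bit flips

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                  # truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("coding links used:", sum(best), "feasible:", feasible(best))
```

In the distributed framework the abstract describes, each node would run such an algorithm locally during the setup phase.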
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly nonconvex) function, i.e., the agents' sum-utility, plus a convex (possibly nonsmooth) regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with the nonconvexity
of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism
instrumental to locally estimate gradient averages; and ii) a novel block-wise
consensus-based protocol to perform local block-averaging operations and
gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
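As a simplified illustration of the block-iterative mechanism, the sketch below runs consensus-based gradient tracking over a small network where, at each iteration, the agents mix and update only one randomly chosen block of the decision vector. The SCA surrogate and the nonsmooth regularizer are omitted (plain gradient steps on quadratic local costs stand in for the convexified subproblems), and the network, data, and step size are assumptions.

```python
import numpy as np

# N agents minimize sum_i f_i(x) with f_i = 0.5 ||A_i x - b_i||^2.
# Per iteration, all agents mix and update one common random block of x,
# while a tracker y estimates the network-average gradient.
rng = np.random.default_rng(2)
N, d, nblk = 4, 6, 3                         # agents, dimension, blocks
blocks = np.split(np.arange(d), nblk)
A = [rng.standard_normal((8, d)) for _ in range(N)]
b = [rng.standard_normal(8) for _ in range(N)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring of 4 agents.
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])

x = np.zeros((N, d))
g_old = np.stack([grad(i, x[i]) for i in range(N)])
y = g_old.copy()                             # tracker init: y_i = grad_i
alpha = 0.02
for k in range(3000):
    blk = blocks[rng.integers(nblk)]         # common random block
    x_new = x.copy()
    x_new[:, blk] = W @ x[:, blk] - alpha * y[:, blk]   # block update
    g_new = np.stack([grad(i, x_new[i]) for i in range(N)])
    y_new = y.copy()
    y_new[:, blk] = W @ y[:, blk]            # mix tracker on the block only
    y_new += g_new - g_old                   # track local gradient changes
    x, y, g_old = x_new, y_new, g_new

avg = x.mean(axis=0)
print("consensus error:", np.abs(x - avg).max())
print("avg-gradient norm:",
      np.linalg.norm(sum(grad(i, avg) for i in range(N))))
```

Communicating only the chosen block per iteration is what trades per-iteration communication cost against convergence speed, the trade-off the numerical results in the abstract examine.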