D-ADMM: A Communication-Efficient Distributed Algorithm For Separable Optimization
We propose a distributed algorithm, named Distributed Alternating Direction
Method of Multipliers (D-ADMM), for solving separable optimization problems in
networks of interconnected nodes or agents. In a separable optimization problem
there is a private cost function and a private constraint set at each node. The
goal is to minimize the sum of all the cost functions, constraining the
solution to be in the intersection of all the constraint sets. D-ADMM is proven
to converge when the network is bipartite or when all the functions are
strongly convex, although in practice, convergence is observed even when these
conditions are not met. We use D-ADMM to solve the following problems from
signal processing and control: average consensus, compressed sensing, and
support vector machines. Our simulations show that D-ADMM requires less
communications than state-of-the-art algorithms to achieve a given accuracy
level. Algorithms with low communication requirements are important, for
example, in sensor networks, where sensors are typically battery-operated and
communicating is the most energy-consuming operation. (To appear in IEEE Transactions on Signal Processing.)
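The average-consensus problem mentioned in the abstract can be sketched with a toy global-variable consensus ADMM. This is a simplified stand-in, not D-ADMM itself (which exchanges messages only between graph neighbors); the quadratic costs, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=200):
    """Global-variable consensus ADMM for the toy problem
    min sum_i (x_i - a_i)^2 / 2  s.t.  x_i = z for all i,
    whose solution z is the average of the a_i.
    (Simplified stand-in for D-ADMM, which runs over a graph.)"""
    a = np.asarray(a, dtype=float)
    x = np.zeros(a.size)
    u = np.zeros(a.size)
    z = 0.0
    for _ in range(iters):
        # local x-updates: closed form for the quadratic costs
        x = (a + rho * (z - u)) / (1.0 + rho)
        # global averaging step
        z = np.mean(x + u)
        # dual updates
        u += x - z
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # converges to 3.0, the average
```

At the fixed point the dual residual forces mean(u) = 0, which pins z to the average of the local data, matching the consensus objective.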
Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition
In this paper we consider a novel partitioned framework for distributed
optimization in peer-to-peer networks. In several important applications the
agents of a network have to solve an optimization problem with two key
features: (i) the dimension of the decision variable depends on the network
size, and (ii) cost function and constraints have a sparsity structure related
to the communication graph. For this class of problems a straightforward
application of existing consensus methods would show two inefficiencies: poor
scalability and redundancy of shared information. We propose an asynchronous
distributed algorithm, based on dual decomposition and coordinate methods, to
solve partitioned optimization problems. We show that, by exploiting the
problem structure, the solution can be partitioned among the nodes, so that
each node just stores a local copy of a portion of the decision variable
(rather than a copy of the entire decision vector) and solves a small-scale
local problem.
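The partitioned idea, where each node stores only its own block and updates only the multipliers on its incident constraints, can be sketched on a path graph. This is a minimal illustration under assumed quadratic costs and an assumed step size, not the paper's algorithm; one randomly chosen edge multiplier is updated per iteration to mimic asynchrony.

```python
import random
import numpy as np

def async_dual_decomposition(b, step=0.2, iters=2000, seed=0):
    """Asynchronous dual-decomposition sketch on a path graph:
    min sum_i (x_i - b_i)^2 / 2  s.t.  x_i = x_{i+1}.
    Each node i stores only its own block x_i (partitioned storage);
    one edge multiplier is updated per wake-up."""
    rng = random.Random(seed)
    b = np.asarray(b, dtype=float)
    n = b.size
    lam = np.zeros(n - 1)          # one multiplier per coupling edge

    def x_local(i):
        # closed-form local minimizer of (x - b_i)^2/2 + (lam_i - lam_{i-1}) x
        left = lam[i - 1] if i > 0 else 0.0
        right = lam[i] if i < n - 1 else 0.0
        return b[i] - (right - left)

    for _ in range(iters):
        e = rng.randrange(n - 1)   # wake one edge at random (asynchrony)
        lam[e] += step * (x_local(e) - x_local(e + 1))
    return np.array([x_local(i) for i in range(n)])

print(async_dual_decomposition([1.0, 2.0, 6.0]))  # all blocks near 3.0
```

Because the coupling is sparse (each constraint touches two neighbors), each dual update needs only the two adjacent blocks, which is the redundancy-avoiding property the abstract highlights.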
Multi-Path Alpha-Fair Resource Allocation at Scale in Distributed Software Defined Networks
The performance of computer networks relies on how bandwidth is shared among
different flows. Fair resource allocation is a challenging problem particularly
when the flows evolve over time. To address this issue, bandwidth sharing
techniques that quickly react to the traffic fluctuations are of interest,
especially in large scale settings with hundreds of nodes and thousands of
flows. In this context, we propose a distributed algorithm based on the
Alternating Direction Method of Multipliers (ADMM) that tackles the multi-path
fair resource allocation problem in a distributed SDN control architecture. Our
ADMM-based algorithm continuously generates a sequence of resource allocation
solutions converging to the fair allocation while always remaining feasible, a
property that standard primal-dual decomposition methods often lack. Thanks to
the distribution of all compute-intensive operations, we demonstrate that we
can handle large instances at scale.
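The feasibility-preserving behavior described above can be sketched on a single-link proportional-fair (alpha = 1) allocation: splitting the objective between the utility and the capacity indicator makes the projected iterate feasible at every step. This toy single-resource case is an assumption for illustration, not the paper's multi-path SDN algorithm.

```python
import numpy as np

def project_capped_simplex(w, C):
    """Euclidean projection onto {z >= 0, sum(z) <= C}."""
    w_pos = np.maximum(w, 0.0)
    if w_pos.sum() <= C:
        return w_pos
    # otherwise project onto the simplex {z >= 0, sum(z) = C}
    s = np.sort(w)[::-1]
    css = np.cumsum(s) - C
    idx = np.arange(1, w.size + 1)
    k = idx[s - css / idx > 0][-1]
    tau = css[k - 1] / k
    return np.maximum(w - tau, 0.0)

def alpha_fair_admm(C, n, rho=1.0, iters=500):
    """Single-link proportional-fair allocation via ADMM:
    max sum_i log(x_i)  s.t.  sum_i x_i <= C, x_i >= 0.
    The z-iterate is feasible at every iteration, mirroring the
    feasibility property highlighted in the abstract."""
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        v = z - u
        # x-update: closed form for -log(x) + (rho/2)(x - v)^2
        x = (v + np.sqrt(v ** 2 + 4.0 / rho)) / 2.0
        # z-update: projection keeps every iterate feasible
        z = project_capped_simplex(x + u, C)
        u += x - z
    return z

print(alpha_fair_admm(3.0, 3))  # near [1, 1, 1]: the fair equal split
```

With identical flows the proportionally fair point is the equal split C/n; the projection step is what lets the allocation be deployed at any iteration without violating capacity.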
Distributed Maximum Likelihood Sensor Network Localization
We propose a class of convex relaxations to solve the sensor network
localization problem, based on a maximum likelihood (ML) formulation. This
class, as well as the tightness of the relaxations, depends on the noise
probability density function (PDF) of the collected measurements. We derive a
computationally efficient edge-based version of this ML convex relaxation class
and we design a distributed algorithm that enables the sensor nodes to solve
these edge-based convex programs locally by communicating only with their close
neighbors. This algorithm relies on the alternating direction method of
multipliers (ADMM), it converges to the centralized solution, it can run
asynchronously, and it is computation error-resilient. Finally, we compare our
proposed distributed scheme with other available methods, both analytically and
numerically, and we argue the added value of ADMM, especially for large-scale
networks.
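As a minimal centralized illustration of the range-based localization objective, the noiseless Gaussian-ML case reduces to a linear least-squares problem after differencing the squared-distance equations. The anchors, position, and `trilaterate` helper below are assumed toy data, not the paper's distributed edge-based relaxation.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares localization from range measurements: subtracting
    ||x - a_k||^2 = d_k^2 pairwise linearizes the problem in x.
    (Noiseless special case of the ML objective; a centralized toy,
    not the distributed edge-based convex relaxation.)"""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         - dists[1:] ** 2 + dists[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.0, 1.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, dists))  # recovers [1. 1.]
```

With noisy ranges this least-squares system is no longer the exact ML estimate, which is what motivates the convex relaxations and the edge-based ADMM solver in the abstract.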
Network Inference via the Time-Varying Graphical Lasso
Many important problems can be modeled as a system of interconnected
entities, where each entity is recording time-dependent observations or
measurements. In order to spot trends, detect anomalies, and interpret the
temporal dynamics of such data, it is essential to understand the relationships
between the different entities and how these relationships evolve over time. In
this paper, we introduce the time-varying graphical lasso (TVGL), a method of
inferring time-varying networks from raw time series data. We cast the problem
in terms of estimating a sparse time-varying inverse covariance matrix, which
reveals a dynamic network of interdependencies between the entities. Since
dynamic network inference is a computationally expensive task, we derive a
scalable message-passing algorithm based on the Alternating Direction Method of
Multipliers (ADMM) to solve this problem in an efficient way. We also discuss
several extensions, including a streaming algorithm to update the model and
incorporate new observations in real time. Finally, we evaluate our TVGL
algorithm on both real and synthetic datasets, obtaining interpretable results
and outperforming state-of-the-art baselines in terms of both accuracy and
scalability.
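The per-timestep building block of TVGL is the static graphical lasso, which itself has a classic ADMM solver with a closed-form eigendecomposition step. The sketch below shows that static building block only (the temporal coupling between snapshots is omitted), and for brevity the l1 penalty also hits the diagonal; both are simplifying assumptions.

```python
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, iters=200):
    """Static graphical-lasso ADMM: estimates a sparse inverse
    covariance by solving
    min -logdet(Theta) + tr(S @ Theta) + lam * ||Theta||_1.
    (The per-snapshot subproblem that TVGL chains together with
    temporal penalties; here the l1 term also penalizes the diagonal.)"""
    n = S.shape[0]
    Z = np.eye(n)
    U = np.zeros((n, n))
    for _ in range(iters):
        # Theta-update: closed form via eigendecomposition of rho(Z-U)-S
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eigs = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = (Q * theta_eigs) @ Q.T
        # Z-update: elementwise soft-thresholding enforces sparsity
        A = Theta + U
        Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        # dual update
        U += Theta - Z
    return Z

S = np.eye(3)                       # toy empirical covariance
Z = graphical_lasso_admm(S, lam=0.5)
print(np.round(Z, 3))               # diagonal estimate, off-diagonals pruned
```

With an identity covariance the penalized solution is diagonal with entries 1/(1 + lam); TVGL extends this loop by coupling consecutive Z matrices so the inferred network evolves smoothly over time.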