1,589 research outputs found
Distributed Model Predictive Consensus via the Alternating Direction Method of Multipliers
We propose a distributed optimization method for solving a distributed model
predictive consensus problem. The goal is to design a distributed controller
for a network of dynamical systems to optimize a coupled objective function
while respecting state and input constraints. The distributed optimization
method is an augmented Lagrangian method called the Alternating Direction
Method of Multipliers (ADMM), which was introduced in the 1970s but has seen a
recent resurgence in the context of dramatic increases in computing power and
the development of widely available distributed computing platforms. The method
is applied to position and velocity consensus in a network of double
integrators. We find that a few tens of ADMM iterations yield closed-loop
performance near what is achieved by solving the optimization problem
centrally. Furthermore, the use of recent code generation techniques for
solving local subproblems yields fast overall computation times.

Comment: 7 pages, 5 figures, 50th Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 2012
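The controller in the abstract above is built on the standard consensus form of ADMM. As an illustration only (the node objectives, the scalar variable, and all parameter values below are hypothetical and far simpler than the paper's double-integrator MPC setup), a minimal consensus-ADMM loop on scalar quadratics already exhibits the "few tens of iterations" behavior:

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=30):
    """Consensus ADMM for min_x sum_i (x - a[i])^2 over n nodes.

    Node i keeps a private copy x[i] of the shared scalar; agreement is
    enforced through the average z and the scaled dual variables u.
    """
    a = np.asarray(a, dtype=float)
    x = np.zeros(a.size)
    u = np.zeros(a.size)
    z = 0.0
    for _ in range(iters):
        # Local step: closed-form argmin of (xi - a_i)^2 + (rho/2)(xi - z + u_i)^2
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # Coordination step: the only exchange needed across the network
        z = np.mean(x + u)
        # Dual step, performed locally by each node
        u = u + x - z
    return z

a = [1.0, 4.0, -2.0, 3.0]
z = consensus_admm(a, rho=1.0, iters=30)
print(z)  # close to the centralized optimum, mean(a) = 1.5
```

In the paper's setting each local step is a small constrained MPC subproblem rather than a closed-form quadratic, which is where the code generation techniques mentioned in the abstract pay off.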
Distributed Optimization With Local Domains: Applications in MPC and Network Flows
In this paper we consider a network of nodes, where each node has
exclusive access to a local cost function. Our contribution is a
communication-efficient distributed algorithm that finds a vector
minimizing the sum of all the functions. We make the additional assumption that
the functions have intersecting local domains, i.e., each function depends only
on some components of the variable. Consequently, each node is interested in
knowing only some components of the variable, not the entire vector. This
allows for improved communication efficiency. We apply our algorithm to model
predictive control (MPC) and to network flow problems and show, through
experiments on large networks, that our proposed algorithm requires fewer
communications to converge than prior algorithms.

Comment: Submitted to IEEE Trans. Aut. Control
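To make the "local domains" idea concrete, the sketch below is a hypothetical two-node toy, not the paper's algorithm: the global variable has three components, each node's cost touches only two of them, and the averaging step runs per component over only the nodes that use it, so no node ever stores or communicates the full vector.

```python
import numpy as np

# Global variable has 3 components; node 0 depends on components {0, 1},
# node 1 on {1, 2}. Node i minimizes sum over its components j of
# (x_j - t_ij)^2, and only the shared component 1 is ever averaged.
nodes = [
    {"comps": [0, 1], "targets": {0: 1.0, 1: 2.0}},
    {"comps": [1, 2], "targets": {1: 4.0, 2: 3.0}},
]
dim, rho, iters = 3, 1.0, 50
x = {i: {j: 0.0 for j in nd["comps"]} for i, nd in enumerate(nodes)}
u = {i: {j: 0.0 for j in nd["comps"]} for i, nd in enumerate(nodes)}
z = np.zeros(dim)

for _ in range(iters):
    # Local updates: closed-form argmin of (xj - t)^2 + (rho/2)(xj - zj + uij)^2
    for i, nd in enumerate(nodes):
        for j in nd["comps"]:
            t = nd["targets"][j]
            x[i][j] = (2.0 * t + rho * (z[j] - u[i][j])) / (2.0 + rho)
    # Per-component averaging over only the nodes whose cost uses component j
    for j in range(dim):
        owners = [i for i, nd in enumerate(nodes) if j in nd["comps"]]
        z[j] = np.mean([x[i][j] + u[i][j] for i in owners])
    # Dual updates, local to each node
    for i, nd in enumerate(nodes):
        for j in nd["comps"]:
            u[i][j] += x[i][j] - z[j]

print(np.round(z, 3))  # converges to [1., 3., 3.]
```

The centralized optimum of (x0-1)^2 + (x1-2)^2 + (x1-4)^2 + (x2-3)^2 is (1, 3, 3), which the per-component scheme recovers while exchanging only the shared component.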
Multi-Path Alpha-Fair Resource Allocation at Scale in Distributed Software Defined Networks
The performance of computer networks relies on how bandwidth is shared among
different flows. Fair resource allocation is a challenging problem particularly
when the flows evolve over time. To address this issue, bandwidth sharing
techniques that quickly react to the traffic fluctuations are of interest,
especially in large scale settings with hundreds of nodes and thousands of
flows. In this context, we propose a distributed algorithm based on the
Alternating Direction Method of Multipliers (ADMM) that tackles the multi-path
fair resource allocation problem in a distributed SDN control architecture. Our
ADMM-based algorithm continuously generates a sequence of resource allocation
solutions converging to the fair allocation while always remaining feasible, a
property that standard primal-dual decomposition methods often lack. Thanks to
the distribution of all computationally intensive operations, we demonstrate
that we can handle large instances at scale.
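For context on the objective being solved: alpha-fairness uses the utility U_alpha(x) = x^(1-alpha)/(1-alpha) for alpha != 1, and log(x) for alpha = 1. On a single bottleneck link the fair allocation has a textbook closed form via the KKT conditions, sketched below (illustration only; the paper's contribution is a distributed multi-path ADMM solver, not this closed form):

```python
import numpy as np

def alpha_fair_single_link(weights, capacity, alpha):
    """Alpha-fair split of one link's capacity among weighted flows.

    Maximizes sum_i w_i * U_alpha(x_i) s.t. sum_i x_i = capacity, where
    U_alpha(x) = x**(1 - alpha) / (1 - alpha)  (log(x) when alpha == 1).
    The KKT conditions give x_i proportional to w_i**(1 / alpha).
    """
    w = np.asarray(weights, dtype=float)
    share = w ** (1.0 / alpha)
    return capacity * share / share.sum()

w, C = [1.0, 2.0, 4.0], 7.0
pf = alpha_fair_single_link(w, C, alpha=1.0)    # proportional fairness
mm = alpha_fair_single_link(w, C, alpha=100.0)  # approaches max-min fairness
print(pf)  # [1. 2. 4.]  -- proportional to the weights
print(mm)  # all shares close to 7/3: large alpha flattens the allocation
```

Raising alpha interpolates from throughput-proportional sharing toward max-min fairness, which is why the choice of alpha is a design knob in bandwidth sharing.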
Optimal scaling of the ADMM algorithm for distributed quadratic programming
This paper presents optimal scaling of the alternating direction method of
multipliers (ADMM) algorithm for a class of distributed quadratic programming
problems. The scaling corresponds to the ADMM step-size and relaxation
parameter, as well as the edge-weights of the underlying communication graph.
We optimize these parameters to yield the smallest convergence factor of the
algorithm. Explicit expressions are derived for the step-size and relaxation
parameter, as well as for the corresponding convergence factor. Numerical
simulations justify our results and highlight the benefits of optimally scaling
the ADMM algorithm.

Comment: Submitted to the IEEE Transactions on Signal Processing. Prior work was presented at the 52nd IEEE Conference on Decision and Control, 2013
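As a purely numerical illustration of why step-size scaling matters (this is not the paper's analysis or its closed-form expressions), the sketch below runs ADMM on a toy nonnegative-constrained QP for a few values of the step-size rho and counts iterations to a fixed tolerance; the splitting and all problem data are randomly generated assumptions.

```python
import numpy as np

def admm_qp_iters(Q, q, rho, tol=1e-6, max_iter=5000):
    """Iterations ADMM needs to solve min 0.5 x'Qx + q'x s.t. x >= 0.

    Splitting: f(x) = 0.5 x'Qx + q'x, g(z) = indicator(z >= 0), x = z.
    """
    n = q.size
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    M = np.linalg.inv(Q + rho * np.eye(n))  # factor once, reuse every iteration
    for k in range(max_iter):
        x = M @ (rho * (z - u) - q)         # x-update: linear solve
        z_old = z
        z = np.maximum(0.0, x + u)          # z-update: projection onto x >= 0
        u = u + x - z                       # scaled dual update
        if np.linalg.norm(x - z) < tol and np.linalg.norm(z - z_old) < tol:
            return k + 1                    # primal and dual residuals small
    return max_iter

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A @ A.T + np.eye(5)                     # positive definite Hessian
q = rng.standard_normal(5)
iters = {rho: admm_qp_iters(Q, q, rho) for rho in (0.1, 1.0, 10.0)}
print(iters)  # iteration count to tolerance for each rho
```

Comparing the counts for different rho on the same problem shows how strongly the convergence factor depends on the scaling, which is what the paper optimizes in closed form.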