Quantization Design for Distributed Optimization
We consider the problem of solving a distributed optimization problem using a
distributed computing platform, where the communication in the network is
limited: each node can only communicate with its neighbors and the channel has
a limited data-rate. A common technique to address the latter limitation is to
apply quantization to the exchanged information. We propose two distributed
optimization algorithms with an iteratively refining quantization design based
on the inexact proximal gradient method and its accelerated variant. We show
that if the parameters of the quantizers, i.e., the number of bits and the
initial quantization intervals, satisfy certain conditions, then the
quantization error is bounded by a linearly decreasing function and the
convergence of the distributed algorithms is guaranteed. Furthermore, we prove
that after imposing the quantization scheme, the distributed algorithms still
exhibit a linear convergence rate, and show complexity upper-bounds on the
number of iterations to achieve a given accuracy. Finally, we demonstrate the
performance of the proposed algorithms and the theoretical findings for solving
a distributed optimal control problem.
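The core idea of an iteratively refining quantizer can be sketched in a few lines. The following Python snippet is a minimal illustration (not the paper's exact scheme): a uniform quantizer whose interval is re-centered at the last transmitted value and shrunk by a factor rho each round, so the quantization error is bounded by a linearly (geometrically) decreasing function. The decay factor, bit budget, and the toy iterate 0.85**k are illustrative assumptions.

```python
import numpy as np

def refine_quantize(x, center, interval, n_bits):
    """Uniform quantizer with 2**n_bits levels covering
    [center - interval/2, center + interval/2]."""
    lo = center - interval / 2
    step = interval / 2 ** n_bits
    idx = np.floor((np.clip(x, lo, lo + interval - step) - lo) / step)
    return lo + (idx + 0.5) * step

# Iteratively refining design: shrink the interval by rho < 1 each round
# and re-center it at the last quantized value. As long as the iterates
# converge at least as fast as the interval shrinks, the quantization
# error stays below interval_0 * rho**k / 2**(n_bits + 1).
rho, n_bits = 0.9, 3
interval, center = 4.0, 0.0
errors = []
for k in range(30):
    x_k = 0.85 ** k            # stand-in for a linearly converging iterate
    q_k = refine_quantize(x_k, center, interval, n_bits)
    errors.append(abs(q_k - x_k))
    center, interval = q_k, rho * interval
```

With only 3 bits per message, every recorded error stays below the decreasing bound 4.0 * 0.9**k / 16, which is the mechanism behind the linear-rate guarantees in the abstract.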
Quantization Design for Unconstrained Distributed Optimization
We consider an unconstrained distributed optimization problem and assume that the bit rate of the communication in the network is limited. We propose a distributed optimization algorithm with an iteratively refining quantization design, which bounds the quantization errors and ensures convergence to the global optimum. We present conditions on the bit rate and the initial quantization intervals for convergence, and show that as the bit rate increases, the corresponding minimum initial quantization intervals decrease. We prove that after imposing the quantization scheme, the algorithm still provides a linear convergence rate, and furthermore derive an upper bound on the number of iterations to achieve a given accuracy. Finally, we demonstrate the performance of the proposed algorithm and the theoretical findings for solving a randomly generated example of a distributed least squares problem.
A penalty ADMM with quantized communication for distributed optimization over multi-agent systems
In this paper, we design a distributed penalty ADMM algorithm with quantized communication to solve distributed convex optimization problems over multi-agent systems. First, we introduce a quantization scheme that mitigates the bandwidth limitation of multi-agent systems without requiring an encoder or decoder, unlike existing quantized algorithms; this scheme also minimizes the computation burden. Moreover, with the aid of the quantization design, we propose a quantized penalty ADMM to obtain the suboptimal solution. Furthermore, the proposed algorithm converges to the suboptimal solution for general convex objective functions, and at an R-linear rate for strongly convex objective functions.
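To illustrate ADMM with quantized communication, here is a generic consensus ADMM sketch (not the paper's penalty formulation) on a toy two-agent quadratic problem. The quantizer is a static rounding to a grid, so agents need no shared encoder/decoder state; the persistent quantization error means the iterates converge to a suboptimal solution in an O(step)-neighborhood of the optimum. All problem data and the step size are illustrative assumptions.

```python
import numpy as np

# toy consensus problem: minimize sum_i (1/2)||A_i x - b_i||^2
A = [np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([[1.0, 0.0], [0.0, 2.0]])]
b = [np.array([1.0, 1.0]), np.array([2.0, 0.0])]
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(Ai.T @ bi for Ai, bi in zip(A, b)))

def quantize(v, step=1e-4):
    # static grid quantizer: plain rounding to multiples of `step`,
    # so no encoder/decoder state is shared between agents
    return np.round(v / step) * step

rho = 1.0                       # ADMM penalty parameter
x = [np.zeros(2), np.zeros(2)]  # local primal variables
u = [np.zeros(2), np.zeros(2)]  # scaled dual variables
z = np.zeros(2)                 # consensus variable
for _ in range(200):
    # local x-update (closed form for the quadratic f_i)
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(2),
                         Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]
    # agents broadcast quantized messages; their average forms z
    z = np.mean([quantize(xi + ui) for xi, ui in zip(x, u)], axis=0)
    # dual update
    u = [ui + xi - z for ui, xi in zip(u, x)]
```

With a grid step of 1e-4, the limit point sits within a few grid steps of x_star, consistent with the "suboptimal solution" guarantee in the abstract.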
Convergence-Optimal Quantizer Design of Distributed Contraction-based Iterative Algorithms with Quantized Message Passing
In this paper, we study the convergence behavior of distributed iterative
algorithms with quantized message passing. We first introduce general iterative
function evaluation algorithms for solving fixed point problems distributively.
We then analyze the convergence of the distributed algorithms, e.g. Jacobi
scheme and Gauss-Seidel scheme, under the quantized message passing. Based on
the closed-form convergence performance derived, we propose two quantizer
designs, namely the time invariant convergence-optimal quantizer (TICOQ) and
the time varying convergence-optimal quantizer (TVCOQ), to minimize the effect
of the quantization error on the convergence. We also study the tradeoff
between the convergence error and message passing overhead for both TICOQ and
TVCOQ. As an example, we apply the TICOQ and TVCOQ designs to the iterative
waterfilling algorithm of the MIMO interference game.
Comment: 17 pages, 9 figures; accepted to IEEE Transactions on Signal Processing.
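The TICOQ/TVCOQ distinction can be sketched on a scalar-style contraction mapping. The snippet below is an illustrative Python toy (the mapping, bit schedule, and range are assumptions, not the paper's designs): with a time-invariant quantizer the fixed-point iteration stalls at an error floor set by the resolution, while a time-varying quantizer that adds bits over time keeps the quantization error shrinking with the iterates.

```python
import numpy as np

def quantize(v, n_bits, half=4.0):
    # uniform quantizer with 2**n_bits levels on [-half, half]
    step = 2 * half / 2 ** n_bits
    return np.clip(np.round(v / step) * step, -half, half)

# contraction mapping T(x) = 0.5*x + c has fixed point x* = 2c
c = np.array([0.3, -0.7])
x_fixed = 2 * c

# time-invariant quantizer (TICOQ-style): the iteration converges,
# but only to within an error floor set by the fixed resolution
x = np.zeros(2)
for k in range(60):
    x = quantize(0.5 * x + c, n_bits=6)
floor_err = np.linalg.norm(x - x_fixed)

# time-varying quantizer (TVCOQ-style): resolution grows over time,
# trading message-passing overhead for vanishing convergence error
x = np.zeros(2)
for k in range(60):
    x = quantize(0.5 * x + c, n_bits=min(4 + k // 4, 24))
tv_err = np.linalg.norm(x - x_fixed)
```

Running this shows the tradeoff from the abstract directly: the TVCOQ-style schedule spends more bits per message late in the run but drives the error orders of magnitude below the TICOQ-style floor.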