Quantized Consensus by the Alternating Direction Method of Multipliers: Algorithms and Applications
Collaborative in-network processing is a major tenet in the fields of control, signal processing, information theory, and computer science. Agents operating in a coordinated fashion can gain greater efficiency and operational capability than those performing solo missions. In many such applications the central task is to compute the global average of agents' data in a distributed manner. Much recent attention has been devoted to quantized consensus, where, due to practical constraints, only quantized communications are allowed between neighboring nodes in order to achieve the average consensus. This dissertation aims to develop efficient quantized consensus algorithms based on the alternating direction method of multipliers (ADMM) for networked applications, and in particular, consensus-based detection in large-scale sensor networks.
We study the effects of two commonly used uniform quantization schemes, dithered and deterministic quantization, on an ADMM-based distributed averaging algorithm. With dithered quantization, this algorithm yields linear convergence to the desired average in the mean sense with a bounded variance. When deterministic quantization is employed, the distributed ADMM either converges to a consensus or cycles with a finite period after a finite number of iterations. In the cyclic case, the local quantized variables have the same sample mean over one period, and hence each node can still reach a consensus. We then obtain an upper bound on the consensus error which depends only on the quantization resolution and the average degree of the network. This is preferable in large-scale networks, where the range of agents' data and the size of the network may be large.
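As a concrete illustration of the two quantization schemes compared above, here is a minimal Python sketch (illustrative only; the resolution DELTA and the test value are assumptions, not values from the dissertation). It shows why deterministic rounding is biased for values lying between quantization levels, while adding uniform dither over one quantization bin makes the output unbiased in expectation.

```python
import numpy as np

DELTA = 0.5  # quantization resolution (illustrative choice)
rng = np.random.default_rng(0)

def deterministic_quantize(x, delta=DELTA):
    """Deterministic uniform quantizer: round to the nearest level."""
    return delta * np.round(x / delta)

def dithered_quantize(x, delta=DELTA):
    """Dithered uniform quantizer: add uniform dither over one quantization
    bin before rounding, which makes the output unbiased: E[Q(x + u)] = x."""
    u = rng.uniform(-delta / 2, delta / 2)
    return delta * np.round((x + u) / delta)

x = 0.2  # a value strictly between two levels (0.0 and 0.5)
print(deterministic_quantize(x))                               # 0.0, a biased output
print(np.mean([dithered_quantize(x) for _ in range(100000)]))  # approximately 0.2
```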
Noticing that existing quantized consensus algorithms, including the above two, adopt infinite-bit quantizers unless a bound on the agents' data is known a priori, we further develop an ADMM-based quantized consensus algorithm using finite-bit bounded quantizers for possibly unbounded agents' data. By picking a small enough ADMM step size, this algorithm obtains the same consensus result as the unbounded deterministic quantizer. We then apply this algorithm to distributed detection in connected sensor networks where each node can only exchange information with its direct neighbors. We establish that, with each node employing an identical one-bit quantizer for local information exchange, our approach achieves the optimal asymptotic performance of centralized detection. This holds under three different detection frameworks: the Bayesian criterion, where the maximum a posteriori detector is optimal; the Neyman-Pearson criterion with a constant type-I error constraint; and the Neyman-Pearson criterion with an exponential type-I error constraint. The key to achieving the optimal asymptotic performance is the use of a one-bit deterministic quantizer with a controllable threshold that yields the desired consensus error bounds.
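The finite-bit bounded quantizer and its one-bit special case can be sketched as follows (a hedged illustration with assumed parameter names delta, n_bits, and tau; the dissertation's exact construction and threshold design are more involved).

```python
import numpy as np

def bounded_quantize(x, delta=0.5, n_bits=3, tau=0.0):
    """Finite-bit bounded quantizer: clamp x to a window of 2**n_bits uniform
    levels around the threshold tau, then round to the nearest level, so the
    transmitted index always fits in n_bits even for unbounded inputs."""
    half = delta * (2 ** n_bits) / 2
    clamped = np.clip(x - tau, -half, half - delta)
    return tau + delta * np.round(clamped / delta)

def one_bit_quantize(x, delta=0.5, tau=0.0):
    """One-bit deterministic quantizer with a controllable threshold tau:
    only the sign of x - tau needs to be communicated."""
    return tau + delta / 2 if x >= tau else tau - delta / 2

print(bounded_quantize(100.0))                        # 1.5: clamped to the top level
print(one_bit_quantize(0.3), one_bit_quantize(-0.3))  # 0.25 -0.25
```

Clamping is what allows a finite bit budget; the price is a saturation error, which the abstract indicates is controlled by choosing the ADMM step size small enough.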
Distributed Detection and Estimation in Wireless Sensor Networks
In this article we consider the problems of distributed detection and
estimation in wireless sensor networks. In the first part, we provide a general
framework aimed at showing how the efficient design of a sensor network requires a
joint organization of in-network processing and communication. Then, we recall
the basic features of consensus algorithms, a basic tool for reaching
globally optimal decisions through a distributed approach. The main part of the
paper starts by addressing the distributed estimation problem. We first show an
entirely decentralized approach, where observation and estimation are
performed without the intervention of a fusion center. Then, we consider the
case where the estimation is performed at a fusion center, showing how to
allocate quantization bits and transmit powers in the links between the nodes
and the fusion center, in order to accommodate the requirement on the maximum
estimation variance, under a constraint on the global transmit power. We extend
the approach to the detection problem. In this case as well, we consider both the
distributed approach, where every node can achieve a globally optimal decision,
and the case where the decision is taken at a central node. In the latter case,
we show how to allocate coding bits and transmit power in order to maximize the
detection probability, under constraints on the false alarm rate and the global
transmit power. Then, we generalize consensus algorithms, illustrating a
distributed procedure that converges to the projection of the observation
vector onto a signal subspace. We then address the issue of energy consumption
in sensor networks, showing how to optimize the network topology in order
to minimize the energy necessary to achieve a global consensus. Finally, we
address the problem of matching the topology of the network to the graph
describing the statistical dependencies among the observed variables.
Comment: 92 pages, 24 figures. To appear in E-Reference Signal Processing, R. Chellappa and S. Theodoridis, Eds., Elsevier, 201
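For readers unfamiliar with the consensus iteration recalled above, the following minimal sketch shows the standard linear average consensus update on a small graph (the weight choice W = I - eps*L is one common construction, assumed here; the article treats many generalizations of this basic scheme).

```python
import numpy as np

# Adjacency matrix of a small connected graph (illustrative 4-node path).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
eps = 0.4                        # step size < 1/max_degree ensures convergence
W = np.eye(4) - eps * L          # symmetric, doubly stochastic consensus weights

x = np.array([1.0, 3.0, 5.0, 7.0])  # local observations; true average is 4.0
for _ in range(100):
    x = W @ x                       # one round of neighbor-only exchanges
print(x)                            # every entry approaches 4.0
```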
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.
Comment: Revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory
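To make the error-feedback structure tangible, here is a scalar, first-order sketch of dithered Delta-Sigma quantization in Python (an assumption-laden toy: the paper works with lattice vector quantizers and noise shaping filters of arbitrary order, not this simple loop, and subtractive dither assumes the decoder shares the dither sequence).

```python
import numpy as np

def dithered_delta_sigma(x, delta=0.25, seed=1):
    """First-order error-feedback loop: the previous quantization error is
    subtracted from the input, so the quantization noise is shaped by
    (1 - z^{-1}) and pushed toward high frequencies."""
    rng = np.random.default_rng(seed)
    y = np.empty_like(x)
    e = 0.0                                     # previous quantization error
    for n, xn in enumerate(x):
        v = xn - e                              # error feedback
        u = rng.uniform(-delta / 2, delta / 2)  # subtractive dither
        y[n] = delta * np.round((v + u) / delta) - u
        e = y[n] - v                            # error to feed back next step
    return y

t = np.arange(256) / 256.0
x = 0.5 * np.sin(2 * np.pi * 4 * t)             # slowly varying test input
y = dithered_delta_sigma(x)
print(np.max(np.abs(y - x)))                    # pointwise error bounded by delta
```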
Quantized Consensus ADMM for Multi-Agent Distributed Optimization
Multi-agent distributed optimization over a network minimizes a global
objective formed by a sum of local convex functions using only local
computation and communication. We develop and analyze a quantized distributed
algorithm based on the alternating direction method of multipliers (ADMM) when
inter-agent communications are subject to finite capacity and other practical
constraints. While existing quantized ADMM approaches only work for quadratic
local objectives, the proposed algorithm can deal with more general objective
functions (possibly non-smooth) including the LASSO. Under certain convexity
assumptions, our algorithm converges to a consensus within $\log_{1+\eta}\Omega$
iterations, where $\eta > 0$ depends on the local
objectives and the network topology, and $\Omega$ is a polynomial determined by
the quantization resolution, the distance between initial and optimal variable
values, the local objective functions, and the network topology. A tight upper
bound on the consensus error is also obtained, which does not depend on the size
of the network.
Comment: 30 pages, 4 figures; to be submitted to IEEE Trans. Signal Processing. arXiv admin note: text overlap with arXiv:1307.5561 by other authors
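A heavily simplified sketch of a quantized decentralized consensus ADMM iteration is given below (assumptions: scalar variables, quadratic local objectives f_i(x) = (x - a_i)^2 / 2, a standard decentralized consensus-ADMM update, and a deterministic uniform quantizer applied to all transmitted values; the paper's algorithm handles general, possibly non-smooth objectives and provides the formal convergence and error guarantees).

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # 4-node path graph
deg = A.sum(axis=1)
a = np.array([1.0, 3.0, 5.0, 7.0])         # local data; the optimum is mean(a) = 4.0
rho, delta = 1.0, 0.05                     # ADMM step size, quantizer resolution

def quantize(x):
    """Deterministic uniform quantizer applied to every transmitted value."""
    return delta * np.round(x / delta)

x = np.zeros(4)      # primal variables
alpha = np.zeros(4)  # aggregated dual variables (one per node)
for _ in range(200):
    q = quantize(x)  # neighbors only ever see quantized values
    # Primal update: closed form of argmin f_i(x) + dual and edge penalty terms.
    x = (a - alpha + rho * (deg * q + A @ q) / 2) / (1 + rho * deg)
    # Dual update driven by quantized disagreements with neighbors.
    q_new = quantize(x)
    alpha = alpha + (rho / 2) * (deg * q_new - A @ q_new)
print(x)  # entries settle near 4.0, within an error governed by delta
```

In the unquantized case this iteration is an exact decentralized consensus ADMM and drives every x_i to mean(a); with quantization the iterates settle near the optimum, consistent with the bounded consensus error described in the abstract.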