Integer-Forcing Source Coding
Integer-Forcing (IF) is a new framework, based on compute-and-forward, for
decoding multiple integer linear combinations from the output of a Gaussian
multiple-input multiple-output channel. This work applies the IF approach to
arrive at a new low-complexity scheme, IF source coding, for distributed lossy
compression of correlated Gaussian sources under a minimum mean squared error
distortion measure. All encoders use the same nested lattice codebook. Each
encoder quantizes its observation using the fine lattice as a quantizer and
reduces the result modulo the coarse lattice, which plays the role of binning.
Rather than directly recovering the individual quantized signals, the decoder
first recovers a full-rank set of judiciously chosen integer linear
combinations of the quantized signals, and then inverts it. In general, the
linear combinations have smaller average powers than the original signals. This
makes it possible to increase the density of the coarse lattice, which in turn
translates to smaller compression rates. We also propose and analyze a one-shot
version of IF source coding, which is simple enough to potentially lead to a new design
principle for analog-to-digital converters that can exploit spatial
correlations between the sampled signals.
Comment: Submitted to IEEE Transactions on Information Theory
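The quantize-then-bin encoder described above admits a compact illustration. Below is a minimal one-dimensional sketch (with hypothetical parameters delta and q; the actual scheme uses high-dimensional nested lattices with dithering) of quantizing with a fine lattice and reducing modulo a coarse lattice:

```python
import numpy as np

# Minimal 1-D sketch of the IF source coding encoder (hypothetical
# parameters; the paper uses high-dimensional nested lattices with dither).
# The fine lattice delta*Z acts as the quantizer; the coarser lattice
# (q*delta)*Z plays the role of binning.
delta, q = 0.1, 16

def encode(x):
    # Quantize to the nearest fine-lattice point, then reduce the index
    # modulo the nesting ratio q (i.e., modulo the coarse lattice).
    fine_index = np.round(x / delta).astype(int)
    return fine_index % q  # each sample costs log2(q) = 4 bits

rng = np.random.default_rng(0)
x = rng.normal(size=5)
print(encode(x))
```

Reducing modulo the coarse lattice is the binning step: many fine-lattice points share one index, and the decoder's integer-combination stage resolves the resulting ambiguity.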
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers within the
proposed general framework is also constructed and illustrated geometrically;
its performance, analyzed in the high-resolution regime, exhibits a noticeable
improvement over existing MD scalar quantization schemes.
Comment: 48 pages; submitted to IEEE Transactions on Information Theory
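For reference, here is a minimal sketch of classical Gram-Schmidt orthogonalization, the linear-algebraic operation the scheme connects to (illustrative only; in the paper the orthogonalization acts on random variables under the inner product E[XY] rather than on vectors in R^n):

```python
import numpy as np

def gram_schmidt(rows):
    # Classical Gram-Schmidt: orthonormalize the rows in order by
    # subtracting, from each vector, its projections onto the previous ones.
    basis = []
    for v in rows:
        for u in basis:
            v = v - (v @ u) * u
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U = gram_schmidt(V)
print(np.round(U @ U.T, 6))  # identity matrix: rows are orthonormal
```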
Gossip Algorithms for Distributed Signal Processing
Gossip algorithms are attractive for in-network processing in sensor networks
because they do not require any specialized routing, there is no bottleneck or
single point of failure, and they are robust to unreliable wireless network
conditions. Recently, there has been a surge of activity in the computer
science, control, signal processing, and information theory communities,
developing faster and more robust gossip algorithms and deriving theoretical
performance guarantees. This article presents an overview of recent work in the
area. We describe convergence rate results, which are related to the number of
transmitted messages and thus the amount of energy consumed in the network for
gossiping. We discuss issues related to gossiping over wireless links,
including the effects of quantization and noise, and we illustrate the use of
gossip algorithms for canonical signal processing tasks including distributed
estimation, source localization, and compression.
Comment: Submitted to Proceedings of the IEEE, 29 pages
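As a concrete illustration of the basic primitive surveyed here, the following is a minimal sketch of randomized pairwise gossip for distributed averaging (assuming, for simplicity, that any two nodes can communicate; the article's convergence results cover general topologies and wireless effects such as quantization and noise):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
x = rng.normal(size=n)   # each node holds one scalar measurement
target = x.mean()

for _ in range(2000):
    # Activate a random pair of nodes; they exchange values and average.
    i, j = rng.choice(n, size=2, replace=False)
    x[i] = x[j] = (x[i] + x[j]) / 2

print(np.max(np.abs(x - target)))  # every node is now near the global average
```

Each pairwise averaging step preserves the network sum, so all nodes converge to the global average; the number of such transmissions needed is exactly what the surveyed convergence-rate results bound.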
Neural Distributed Compressor Discovers Binning
We consider lossy compression of an information source when the decoder has
lossless access to a correlated one. This setup, also known as the Wyner-Ziv
problem, is a special case of distributed source coding. To this day, practical
approaches for the Wyner-Ziv problem have neither been fully developed nor
heavily investigated. We propose a data-driven method based on machine learning
that leverages the universal function approximation capability of artificial
neural networks. We find that, for exemplary sources, our neural compression
scheme based on variational vector quantization recovers some principles of the
optimal theoretical solution to the Wyner-Ziv setup, such as binning in the
source space and optimal combination of the quantization index with the side
information. These behaviors emerge although no
structure exploiting knowledge of the source distributions was imposed. Binning
is a widely used tool in information theoretic proofs and methods, and to our
knowledge, this is the first time it has been explicitly observed to emerge
from data-driven learning.
Comment: Draft of a journal version of our previous ISIT 2023 paper (available
at arXiv:2305.04380). arXiv admin note: substantial text overlap with
arXiv:2305.04380
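To make the binning idea concrete, here is a toy scalar sketch (a hypothetical hand-designed scheme with assumed parameters delta, n_bins, and noise level, not the paper's learned compressor): the encoder sends only the fine quantization index modulo a number of bins, and the decoder resolves the remaining ambiguity using the correlated side information:

```python
import numpy as np

rng = np.random.default_rng(2)
delta, n_bins = 0.25, 8               # fine step and bin count (assumed)
x = rng.normal(size=1000)             # source seen by the encoder
y = x + 0.05 * rng.normal(size=1000)  # side information at the decoder only

# Encoder: fine quantization index, transmitted only modulo n_bins.
fine = np.round(x / delta).astype(int)
bins = fine % n_bins                  # 3 bits/sample instead of the full index

# Decoder: among indices congruent to the bin, pick the one nearest y.
k = np.round((y / delta - bins) / n_bins)
x_hat = (bins + n_bins * k) * delta
print(np.mean((x - x_hat) ** 2))      # near the fine quantizer's MSE, delta**2/12
```

Because the side information pins down which member of the bin was sent, the decoder attains nearly the fine quantizer's distortion while transmitting far fewer bits; this is the structure the paper observes emerging from data-driven learning.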
Distributed Reception in the Presence of Gaussian Interference
An analysis is presented of a network of distributed receivers encumbered by strong in-band interference. The structure of the information present across such receivers, and how they might collaborate to recover a signal of interest, is studied. Unstructured (random coding) and structured (lattice coding) strategies are studied for this purpose under a certain adaptable system model. The asymptotic performance of these strategies is characterized, and algorithms to compute it are developed. A jointly compressed lattice code with a proper configuration performs best of all strategies investigated.
Doctoral Dissertation, Electrical Engineering, 201