Network vector quantization
We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, we here present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
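As a point of reference for the "locally optimal" design the abstract describes, the classical single-link baseline is the Lloyd iteration, which alternates nearest-neighbor partitioning and centroid updates. The sketch below is that classical baseline only, not the paper's network algorithm; the function name and parameters are our own.

```python
import numpy as np

def lloyd_vq(samples, num_cells, iters=50, seed=0):
    """Classical Lloyd iteration for a single-link vector quantizer.

    Alternates a nearest-neighbor partition step with a centroid update
    step; converges to a locally optimal codebook under squared error.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = samples[rng.choice(len(samples), num_cells, replace=False)]
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(iters):
        # Partition step: assign each sample to its nearest codeword.
        dists = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Centroid step: move each codeword to the mean of its cell.
        for k in range(num_cells):
            cell = samples[labels == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook, labels
```

The network variants studied in the paper replace the single encoder/decoder pair with network-dependent partition and centroid conditions, but the alternating structure is the same.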
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers within the
proposed general framework is also constructed and illustrated geometrically;
its performance, analyzed in the high-resolution regime, exhibits a noticeable
improvement over existing MD scalar quantization schemes.

Comment: 48 pages; submitted to IEEE Transactions on Information Theory
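The successive-residual structure that connects the scheme to Gram-Schmidt orthogonalization can be illustrated by the orthogonalization procedure itself: each vector is replaced by its residual after projecting out the previously processed vectors. This is plain Gram-Schmidt, shown for intuition only; it is not the paper's quantization scheme.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a set of vectors (rows) by successive projection.

    Each vector is replaced by its residual after subtracting its
    projections onto the already-orthogonalized vectors.
    """
    ortho = []
    for v in vectors:
        r = v.astype(float).copy()
        for u in ortho:
            if u @ u > 1e-12:            # skip (near-)zero directions
                r -= (r @ u) / (u @ u) * u  # remove the component along u
        ortho.append(r)
    return np.array(ortho)
```

In the paper's quadratic Gaussian analysis, an analogous successive decomposition of the descriptions plays the role of these projection residuals.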
Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function for Gaussian Stationary Sources
We improve the existing achievable rate regions for causal and for zero-delay
source coding of stationary Gaussian sources under an average mean squared
error (MSE) distortion measure. To begin with, we find a closed-form expression
for the information-theoretic causal rate-distortion function (RDF) under such
distortion measure, denoted by Rc^{it}(D), for first-order Gauss-Markov
processes. Rc^{it}(D) is a lower bound to the optimal performance theoretically
attainable (OPTA) by any causal source code, namely Rc^{op}(D). We show that,
for Gaussian sources, the latter can also be upper bounded as Rc^{op}(D)\leq
Rc^{it}(D) + 0.5 log_{2}(2\pi e) bits/sample. In order to analyze Rc^{it}(D)
for arbitrary zero-mean Gaussian stationary sources, we
introduce \bar{Rc^{it}}(D), the information-theoretic causal RDF when the
reconstruction error is jointly stationary with the source. Based upon
\bar{Rc^{it}}(D), we derive three closed-form upper bounds to the additive rate
loss defined as \bar{Rc^{it}}(D) - R(D), where R(D) denotes Shannon's RDF. Two
of these bounds are strictly smaller than 0.5 bits/sample at all rates. These
bounds differ from one another in their tightness and ease of evaluation; the
tighter the bound, the more involved its evaluation. We then show that, for any
source spectral density and any positive distortion D\leq \sigma_{x}^{2},
\bar{Rc^{it}}(D) can be realized by an AWGN channel surrounded by a unique set
of causal pre-, post-, and feedback filters. We show that finding such filters
constitutes a convex optimization problem. In order to solve the latter, we
propose an iterative optimization procedure that yields the optimal filters and
is guaranteed to converge to \bar{Rc^{it}}(D). Finally, by establishing a
connection to feedback quantization we design a causal and a zero-delay coding
scheme which, for Gaussian sources, achieves...

Comment: 47 pages; revised version submitted to IEEE Trans. Information Theory
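The benchmark R(D) against which the abstract measures its rate loss is Shannon's RDF, which for a stationary Gaussian source admits the classical reverse water-filling parametrization over the spectral density. The sketch below evaluates that standard parametrization numerically on a sampled spectrum; the function name, sampling grid, and bisection tolerance are our own choices.

```python
import numpy as np

def shannon_rdf_gaussian(spectrum, target_d, tol=1e-9):
    """Shannon RDF R(D), in bits/sample, for a stationary Gaussian source,
    via reverse water-filling over a uniformly sampled spectral density.

    The water level theta is bisected until the parametric distortion
    D(theta) = mean(min(theta, S)) matches target_d.
    """
    lo, hi = 0.0, float(spectrum.max())
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        d = np.minimum(theta, spectrum).mean()  # parametric distortion
        if d < target_d:
            lo = theta  # distortion too small: raise the water level
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    # Rate: average of 0.5*log2(S/theta) over frequencies where S > theta.
    return 0.5 * np.log2(np.maximum(spectrum / theta, 1.0)).mean()
```

For a white spectrum S = sigma^2 this reduces to the familiar 0.5*log2(sigma^2/D), which gives a quick sanity check of the routine.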
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.

Comment: Revised, restructured, significantly shortened, and minor typos have
been fixed. Accepted for publication in the IEEE Transactions on Information
Theory
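The two ingredients the abstract combines, error feedback through a noise shaping filter and subtractive dithered quantization, can be seen in a minimal first-order loop. This sketch uses a scalar quantizer and the simplest shaping filter H(z) = 1 - z^{-1}; the paper's scheme uses high-dimensional lattices and high-order filters, so this is an illustration of the loop structure only.

```python
import numpy as np

def dithered_delta_sigma(x, step, seed=0):
    """First-order Delta-Sigma quantizer with subtractive dither.

    The quantization error is fed back one sample later (first-order
    noise shaping), pushing the error power toward high frequencies;
    subtractive dither makes the error statistically signal-independent.
    """
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-step / 2, step / 2, size=len(x))
    out = np.empty_like(x, dtype=float)
    fb = 0.0  # previous quantization error, fed back into the loop
    for n, xn in enumerate(x):
        u = xn - fb                                       # error feedback
        q = step * np.round((u + dither[n]) / step) - dither[n]
        fb = q - u                                        # new error to shape
        out[n] = q
    return out
```

Because the output error is the first difference of the raw quantization error, its spectrum is shaped away from DC, which is the mechanism the paper exploits to trade central against side distortion.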
Tracking and Control of Gauss-Markov Processes over Packet-Drop Channels with Acknowledgments
We consider the problem of tracking the state of Gauss–Markov processes over rate-limited erasure-prone links. We concentrate first on the scenario in which several independent processes are seen by a single observer. The observer maps the processes into finite-rate packets that are sent over the erasure-prone links to a state estimator, and are acknowledged upon packet arrivals. The aim of the state estimator is to track the processes with zero delay and with minimum mean square error (MMSE). We show that, in the limit of many processes, greedy quantization with respect to the squared error distortion is optimal. That is, there is no tension between optimizing the MMSE of the process in the current time instant and that of future times. For the case of packet erasures with delayed acknowledgments, we connect the problem to that of compression with side information that is known at the observer and may be known at the state estimator, where the most recent packets serve as side information that may have been erased, and we demonstrate that the loss due to a delay by one time unit is rather small. For the scenario where only one process is tracked by the observer–state estimator system, we further show that variable-length coding techniques are within a small gap of the many-process outer bound. We demonstrate the usefulness of the proposed approach for the simple setting of discrete-time scalar linear quadratic Gaussian control with a limited data-rate feedback that is susceptible to packet erasures.
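The greedy, acknowledgment-aided structure described above can be sketched for a single scalar process: the observer quantizes the estimator's current prediction error, and since acknowledgments keep both sides synchronized, the estimator simply predicts forward on an erasure. The uniform quantizer, its step-size rule, and all parameter names below are our own illustrative choices, not the paper's code design.

```python
import numpy as np

def track_gauss_markov(a, sigma_w, erasures, rate_bits, T, seed=0):
    """Greedy innovation quantization for tracking a scalar Gauss-Markov
    process x[t+1] = a*x[t] + w[t] over an erasure channel with ACKs.

    Each step the observer quantizes the estimator's prediction error
    (the innovation); on an erasure the estimator falls back to the
    open-loop prediction a*xhat.
    """
    rng = np.random.default_rng(seed)
    levels = 2 ** rate_bits
    # Assumed step-size rule: spread the levels over ~6 sigma_w.
    step = 6 * sigma_w / levels
    x, xhat = 0.0, 0.0
    err2 = []
    for t in range(T):
        x = a * x + sigma_w * rng.normal()      # source dynamics
        innov = x - a * xhat                    # prediction error
        q = np.clip(np.round(innov / step), -(levels // 2), levels // 2) * step
        if erasures[t]:
            xhat = a * xhat                     # packet lost: predict only
        else:
            xhat = a * xhat + q                 # packet received: correct
        err2.append((x - xhat) ** 2)
    return float(np.mean(err2))
```

With no erasures and a moderate rate, the tracking MSE sits far below the open-loop steady-state variance sigma_w^2/(1 - a^2), which is the qualitative behavior the abstract's optimality result formalizes in the many-process limit.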