Zero-Delay Rate Distortion via Filtering for Vector-Valued Gaussian Sources
We deal with zero-delay source coding of a vector-valued Gauss-Markov source
subject to a mean-squared error (MSE) fidelity criterion characterized by the
operational zero-delay vector-valued Gaussian rate distortion function (RDF).
We address this problem by considering the nonanticipative RDF (NRDF) which is
a lower bound to the causal optimal performance theoretically attainable (OPTA)
function and operational zero-delay RDF. We recall the realization that
corresponds to the optimal "test-channel" of the Gaussian NRDF, when
considering a vector Gauss-Markov source subject to an MSE distortion in the
finite time horizon. Then, we introduce sufficient conditions for the existence
of a solution to this problem in the infinite time horizon. For the asymptotic
regime, we use the asymptotic characterization of the Gaussian NRDF to provide
a new equivalent realization scheme with feedback which is characterized by a
resource allocation (reverse-waterfilling) problem across the dimension of the
vector source. We leverage the new realization to derive a predictive coding
scheme via lattice quantization with subtractive dither and joint memoryless
entropy coding. This coding scheme offers an upper bound to the operational
zero-delay vector-valued Gaussian RDF. When scalar quantization is used, for
"r" active dimensions of the vector Gauss-Markov source the gap between the
obtained lower and upper bounds is at most 0.254r + 1 bits/vector. We further
show that with vector quantization, and for infinite-dimensional Gauss-Markov
sources, this gap becomes negligible, i.e., the Gaussian NRDF approximates the
operational zero-delay Gaussian RDF. We also extend our results to vector-valued Gaussian
sources of any finite memory under mild conditions. Our theoretical framework
is demonstrated with illustrative numerical experiments.
Comment: 32 pages, 9 figures, published in IEEE Journal of Selected Topics in Signal Processing.
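The reverse-waterfilling allocation mentioned above can be sketched for the static parallel-Gaussian case. This is a minimal illustration of the classical rule (allocate distortion up to a common "water level" theta across dimensions), not the paper's asymptotic NRDF characterization; the function name and bisection approach are illustrative choices.

```python
import numpy as np

def reverse_waterfill(variances, D, tol=1e-9):
    """Classical reverse-waterfilling for a parallel Gaussian source.

    Finds the water level theta such that sum_i min(theta, sigma_i^2) = D,
    then sets per-dimension distortions d_i = min(theta, sigma_i^2) and
    rates r_i = 0.5 * log2(sigma_i^2 / d_i).  Dimensions with
    sigma_i^2 > theta are the "active" ones (r_i > 0).
    """
    variances = np.asarray(variances, dtype=float)
    lo, hi = 0.0, max(variances.max(), D)
    # Bisect on the water level: the allocated distortion is
    # nondecreasing in theta.
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, variances).sum() < D:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    d = np.minimum(theta, variances)
    rates = 0.5 * np.log2(variances / d)
    return theta, d, rates
```

For example, with per-dimension variances (4, 1) and total distortion budget D = 2, the water level settles at theta = 1: the weaker dimension is given up entirely (d = 1, zero rate) while the stronger one gets d = 1 and rate 0.5 * log2(4) = 1 bit.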
An Upper Bound to Zero-Delay Rate Distortion via Kalman Filtering for Vector Gaussian Sources
We deal with zero-delay source coding of a vector Gaussian autoregressive
(AR) source subject to an average mean squared error (MSE) fidelity criterion.
Toward this end, we consider the nonanticipative rate distortion function
(NRDF) which is a lower bound to the causal and zero-delay rate distortion
function (RDF). We use the realization scheme with feedback proposed in [1] to
model the corresponding optimal "test-channel" of the NRDF, when considering
vector Gaussian AR(1) sources subject to an average MSE distortion. We give
conditions on the vector Gaussian AR(1) source to ensure asymptotic
stationarity of the realization scheme (bounded performance). Then, we encode
the vector innovations due to Kalman filtering via lattice quantization with
subtractive dither and memoryless entropy coding. This coding scheme provides a
tight upper bound to the zero-delay Gaussian RDF. We extend this result to
vector Gaussian AR sources of any finite order. Further, we show that for
infinite dimensional vector Gaussian AR sources of any finite order, the NRDF
coincides with the zero-delay RDF. Our theoretical framework is corroborated
with a simulation example.
Comment: 7 pages, 6 figures, accepted for publication in the IEEE Information Theory Workshop (ITW).
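The dithered quantization step used in both coding schemes above admits a short sketch. This is a minimal scalar illustration (not the lattice case), assuming a uniform quantizer with step Delta and a dither sequence shared between encoder and decoder; the Gaussian input stands in for the Kalman-filter innovations.

```python
import numpy as np

rng = np.random.default_rng(0)
Delta = 0.5                                 # quantizer step size
x = rng.normal(size=200_000)                # stand-in for the innovations
u = rng.uniform(-Delta / 2, Delta / 2, size=x.shape)  # shared dither

# Encoder: quantize the dithered input.  Decoder: subtract the dither.
q = Delta * np.round((x + u) / Delta)
x_hat = q - u
err = x_hat - x

# With subtractive dither the reconstruction error is uniform on
# [-Delta/2, Delta/2] and statistically independent of the source,
# so its variance is Delta^2 / 12 regardless of the input statistics.
print(err.var(), Delta**2 / 12)
```

The additive, source-independent error is exactly the "simple linear-additive noise model" that makes these quantizers easy to analyze and to combine with memoryless entropy coding.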
Data Processing Bounds for Scalar Lossy Source Codes with Side Information at the Decoder
In this paper, we introduce new lower bounds on the distortion of scalar
fixed-rate codes for lossy compression with side information available at the
receiver. These bounds are derived by presenting the relevant random variables
as a Markov chain and applying generalized data processing inequalities a la
Ziv and Zakai. We show that by replacing the logarithmic function with other
functions, in the data processing theorem we formulate, we obtain new lower
bounds on the distortion of scalar coding with side information at the decoder.
The usefulness of these results is demonstrated for uniform sources and a
particular convex function. The bounds in this case are shown to be better
than those obtainable from the Wyner-Ziv rate-distortion function.
Comment: 35 pages, 9 figures.
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.
Comment: Revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory.
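The central-versus-side distortion trade-off driven by the noise shaping filter can be illustrated with a first-order error-feedback loop. This is a minimal scalar sketch of noise shaping, assuming a dithered uniform quantizer; it is far from the optimal infinite-order minimum-phase filter of the abstract, but it already pushes the quantization-error energy toward high frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
Delta = 0.25
n = 1 << 14
x = 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(n))  # slowly varying input
u_d = rng.uniform(-Delta / 2, Delta / 2, size=n)   # subtractive dither

# First-order error feedback: subtract the previous quantization error
# before quantizing, so the overall error becomes e[k] - e[k-1],
# i.e. the white quantization noise is shaped by (1 - z^{-1}).
y = np.empty(n)
e_prev = 0.0
for k in range(n):
    v = x[k] - e_prev
    q = Delta * np.round((v + u_d[k]) / Delta) - u_d[k]
    e_prev = q - v                 # this step's quantization error
    y[k] = q

err = y - x
E = np.abs(np.fft.rfft(err)) ** 2
half = len(E) // 2
low, high = E[:half].sum(), E[half:].sum()
print(low, high)                   # most error energy sits at high frequencies
```

Because the shaped error spectrum is proportional to |1 - e^{-j w}|^2 = 2 - 2 cos(w), the upper half-band carries several times the energy of the lower half-band; a higher-order filter sharpens this split further.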
On optimum parameter modulation-estimation from a large deviations perspective
We consider the problem of jointly optimum modulation and estimation of a
real-valued random parameter, conveyed over an additive white Gaussian noise
(AWGN) channel, where the performance metric is the large deviations behavior
of the estimator, namely, the exponential decay rate (as a function of the
observation time) of the probability that the estimation error would exceed a
certain threshold. Our basic result is in providing an exact characterization
of the fastest achievable exponential decay rate, among all possible
modulator-estimator (transmitter-receiver) pairs, where the modulator is
limited only in the signal power, but not in bandwidth. This exponential rate
turns out to be given by the reliability function of the AWGN channel. We also
discuss several ways to achieve this optimum performance, and one of them is
based on quantization of the parameter, followed by optimum channel coding and
modulation, which gives rise to a separation-based transmitter, if one views
this setting from the perspective of joint source-channel coding. This is in
spite of the fact that, in general, when error exponents are considered, the
source-channel separation theorem does not hold true. We also discuss several
observations, modifications and extensions of this result in several
directions, including other channels, and the case of multidimensional
parameter vectors. One of our findings concerning the latter, is that there is
an abrupt threshold effect in the dimensionality of the parameter vector: below
a certain critical dimension, the probability of excess estimation error may
still decay exponentially, but beyond this value, it must converge to unity.
Comment: 26 pages; submitted to the IEEE Transactions on Information Theory.
A Tight Bound on the Performance of a Minimal-Delay Joint Source-Channel Coding Scheme
An analog source is to be transmitted across a Gaussian channel in more than
one channel use per source symbol. This paper derives a lower bound on the
asymptotic mean squared error for a strategy that consists of repeatedly
quantizing the source, transmitting the quantizer outputs in the first channel
uses, and sending the remaining quantization error uncoded in the last channel
use. The bound coincides with the performance achieved by a suboptimal decoder
studied by the authors in a previous paper, thereby establishing that the bound
is tight.
Comment: 5 pages, submitted to the IEEE International Symposium on Information Theory (ISIT) 201
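The quantize-then-send-the-residual strategy described above can be sketched under simplifying assumptions: the quantizer index is taken to be conveyed error-free (standing in for the channel-coded first channel uses), the residual is sent uncoded with unit power over one AWGN use, and the decoder applies a linear MMSE estimate. The SNR value and variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
snr = 100.0                        # channel SNR per use (20 dB), illustrative
x = rng.normal(size=n)             # unit-variance Gaussian source

# Stage 1: quantize the source; assume the index arrives error-free.
Delta = 0.5
q = Delta * np.round(x / Delta)
r = x - q                          # quantization residual

# Stage 2: send the residual uncoded, scaled to unit transmit power,
# and estimate it with the linear MMSE coefficient at the receiver.
g = 1.0 / np.sqrt(r.var())
y = g * r + rng.normal(size=n) / np.sqrt(snr)
r_hat = (g * r.var() / (g**2 * r.var() + 1.0 / snr)) * y

x_hat = q + r_hat
mse_scheme = np.mean((x_hat - x) ** 2)
mse_uncoded = 1.0 / (1.0 + snr)    # one uncoded use of the source itself
print(mse_scheme, mse_uncoded)
```

The point of the hybrid scheme is visible even in this toy version: since only the small residual rides on the noisy analog use, its estimation error is scaled down by the residual variance, giving an MSE far below a single uncoded transmission of the source.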