Zero-Delay Rate Distortion via Filtering for Vector-Valued Gaussian Sources
We deal with zero-delay source coding of a vector-valued Gauss-Markov source
subject to a mean-squared error (MSE) fidelity criterion characterized by the
operational zero-delay vector-valued Gaussian rate distortion function (RDF).
We address this problem by considering the nonanticipative RDF (NRDF) which is
a lower bound to the causal optimal performance theoretically attainable (OPTA)
function and operational zero-delay RDF. We recall the realization that
corresponds to the optimal "test-channel" of the Gaussian NRDF, when
considering a vector Gauss-Markov source subject to an MSE distortion in the
finite time horizon. Then, we introduce sufficient conditions for the existence
of a solution to this problem in the infinite time horizon. For the asymptotic
regime, we use the asymptotic characterization of the Gaussian NRDF to provide
a new equivalent realization scheme with feedback which is characterized by a
resource allocation (reverse-waterfilling) problem across the dimension of the
vector source. We leverage the new realization to derive a predictive coding
scheme via lattice quantization with subtractive dither and joint memoryless
entropy coding. This coding scheme offers an upper bound to the operational
zero-delay vector-valued Gaussian RDF. When scalar quantization is used, for r
active dimensions of the vector Gauss-Markov source the gap between the
obtained lower and theoretical upper bounds is at most 0.254r + 1 bits/vector.
We further show that with vector quantization and infinite-dimensional
Gauss-Markov sources this gap becomes negligible, i.e., the Gaussian NRDF
approximates the operational zero-delay Gaussian RDF. We also extend our
results to vector-valued Gaussian
sources of any finite memory under mild conditions. Our theoretical framework
is demonstrated with illustrative numerical experiments.
Comment: 32 pages, 9 figures, published in IEEE Journal of Selected Topics in Signal Processing.
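The resource allocation mentioned in the abstract follows the classical reverse-waterfilling pattern: for a Gaussian vector source with per-dimension variances sigma_i^2, set d_i = min(theta, sigma_i^2) with the water level theta chosen so the distortions sum to the budget D; only dimensions with d_i < sigma_i^2 are "active" and charged positive rate. The solver below is a minimal illustrative sketch of that pattern, not the paper's exact asymptotic characterization.

```python
import numpy as np

def reverse_waterfilling(variances, D, tol=1e-10):
    """Illustrative reverse-waterfilling: pick a water level theta so that
    the per-dimension distortions d_i = min(theta, sigma_i^2) sum to the
    budget D, then charge rate 0.5*log2(sigma_i^2/d_i) on active dims."""
    variances = np.asarray(variances, dtype=float)
    assert 0.0 < D <= variances.sum(), "infeasible distortion budget"
    lo, hi = 0.0, variances.max()
    while hi - lo > tol:                      # bisection on the water level
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, variances).sum() > D:
            hi = theta
        else:
            lo = theta
    theta = 0.5 * (lo + hi)
    d = np.minimum(theta, variances)
    active = d < variances                    # dimensions with positive rate
    rate = 0.5 * np.sum(np.log2(variances[active] / d[active]))
    return d, float(rate)

# Example: three dimensions, total distortion budget D = 1.0;
# the smallest-variance dimension ends up inactive (d_i = sigma_i^2).
d, R = reverse_waterfilling([4.0, 1.0, 0.25], D=1.0)
```

Dimensions whose variance falls below the water level are reproduced at zero rate, which is exactly why only the r active dimensions contribute to the quantization gap quoted above.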
An integrative perspective to LQ and ℓ∞ control for delayed and quantized systems
Deterministic and stochastic approaches to handling uncertainties may incur very different complexities in computation time and memory usage, in addition to using different uncertainty models. For linear systems with delay and rate-constrained communications between the observer and the controller, previous work shows that a deterministic approach, ℓ∞ control, has low complexity but can only handle bounded disturbances. In this article, we take a stochastic approach and propose a linear-quadratic (LQ) controller that can handle arbitrarily large disturbances but has large complexity in time and space. The differences in robustness and complexity of the ℓ∞ and LQ controllers motivate the design of a hybrid controller that interpolates between the two: the ℓ∞ controller is applied when the disturbance is not too large (normal mode), and the LQ controller is resorted to otherwise (acute mode). We characterize the switching behavior between the normal and acute modes. Using our theoretical bounds, supplemented by numerical experiments, we show that the hybrid controller can achieve a sweet spot in the robustness-complexity tradeoff, i.e., reject occasional large disturbances while operating with low complexity most of the time.
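The normal/acute switching idea can be illustrated with a toy simulation. The plant, gains, threshold, and spike model below are invented for the sketch and are not the paper's construction: a scalar plant x_{k+1} = x_k + u_k + w_k runs an aggressive cancellation while the state stays in a "normal" region, and falls back to a cautious LQ-like gain after a rare large disturbance pushes it out.

```python
import random

def simulate_hybrid(T=2000, bound=5.0, spike_prob=0.01, seed=0):
    """Toy normal/acute switching on the scalar plant
    x_{k+1} = x_k + u_k + w_k.  All gains, thresholds, and the
    disturbance model are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    x, acute_steps = 0.0, 0
    for _ in range(T):
        if abs(x) > bound:        # state left the normal region: acute mode
            acute_steps += 1
            u = -0.9 * x          # cautious LQ-like stabilizing gain
        else:                     # normal mode: aggressive cancellation
            u = -x
        w = rng.gauss(0.0, 1.0)
        if rng.random() < spike_prob:
            w += 50.0             # occasional large disturbance
        x = x + u + w
    return acute_steps / T

frac = simulate_hybrid()          # fraction of time spent in acute mode
```

In this toy setting the controller spends only a few steps per spike in acute mode, mirroring the claim that the expensive mode is invoked rarely while the cheap mode runs most of the time.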
Tracking an Auto-Regressive Process with Limited Communication per Unit Time
Samples from a high-dimensional AR[1] process are observed by a sender which
can communicate only finitely many bits per unit time to a receiver. The
receiver seeks to form an estimate of the process value at every time instant
in real-time. We consider a time-slotted communication model in a slow-sampling
regime where multiple communication slots occur between two sampling instants.
We propose a successive update scheme which uses communication between sampling
instants to refine estimates of the latest sample and study the following
question: Is it better to accumulate communication over multiple slots to send
better-refined estimates, making the receiver wait longer for every refinement,
or to be fast but loose and send new information at every communication
opportunity? We show that the fast but loose successive update scheme with
ideal spherical codes is universally optimal asymptotically for a large
dimension. However, most practical quantization codes for fixed dimensions do
not meet the ideal performance required for this optimality, and they typically
will have a bias in the form of a fixed additive error. Interestingly, our
analysis shows that the fast but loose scheme is not an optimal choice in the
presence of such errors, and a judiciously chosen frequency of updates
outperforms it.
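The "fast but loose" successive update can be sketched for a scalar AR(1) source with a uniform quantizer (the ranges, rates, and parameter values below are illustrative; the paper's optimality analysis uses ideal spherical codes in high dimension): in each slot the sender quantizes the receiver's current residual about the newest sample, so the residual range shrinks geometrically between sampling instants.

```python
import numpy as np

def track_ar1(a=0.9, T=200, slots=3, bits=2, x_range=8.0, seed=1):
    """Track x_{k+1} = a*x_k + w_k with a 'fast but loose' successive
    update: each of the `slots` slots between samples spends `bits` bits
    quantizing the receiver's residual about the newest sample.
    Parameter values are illustrative."""
    g = np.random.default_rng(seed)
    x, est = 0.0, 0.0
    sq_errors = []
    for _ in range(T):
        x = a * x + g.normal()        # new source sample at the sender
        est = a * est                 # receiver predicts forward
        half = x_range                # assumed half-width of the residual
        for _ in range(slots):
            levels = 2 ** bits
            step = 2.0 * half / levels
            idx = int(np.floor((x - est) / step))
            idx = max(-levels // 2, min(levels // 2 - 1, idx))
            est += (idx + 0.5) * step  # send the quantized correction
            half = step / 2.0          # residual range shrinks every slot
        sq_errors.append((x - est) ** 2)
    return float(np.mean(sq_errors))
```

With an ideal (unbiased) quantizer each extra slot shrinks the residual by a factor of 2^bits; a fixed additive bias per transmission, as discussed above, is exactly what breaks this geometric gain and makes less frequent, better-refined updates preferable.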
Zero-Delay Multiple Descriptions of Stationary Scalar Gauss-Markov Sources
In this paper, we introduce the zero-delay multiple-description problem, where an encoder constructs two descriptions and the decoders receive a subset of these descriptions. The encoder and decoders are causal and operate under the restriction of zero delay, which implies that at each time instant, the encoder must generate codewords that can be decoded by the decoders using only the current and past codewords. For the case of discrete-time stationary scalar Gauss-Markov sources and quadratic distortion constraints, we present information-theoretic lower bounds on the average sum-rate in terms of the directed and mutual information rate between the source and the decoder reproductions. Furthermore, we show that the optimal test channel is in this case Gaussian, and that it can be realized by a feedback coding scheme that utilizes prediction and correlated Gaussian noises. Operational achievability results are considered in the high-rate scenario using a simple differential pulse code modulation scheme with staggered quantizers. Using this scheme, we achieve operational rates within 0.415 bits/sample/description of the theoretical lower bounds for varying description rates.
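The staggered-quantizer ingredient can be illustrated in isolation (the step size below is illustrative, and the full scheme embeds these quantizers in a DPCM loop with prediction): two uniform quantizers offset by half a step each form one side description, and averaging them at the central decoder halves the worst-case error.

```python
import numpy as np

def staggered_quantize(x, step=0.5):
    """Two uniform quantizers staggered by half a step, the basic
    ingredient of the multiple-description scheme (values illustrative).
    Each q_i alone serves a side decoder; the central decoder, which
    receives both descriptions, averages them."""
    q1 = np.round(x / step) * step                  # grid n*step
    q2 = (np.round(x / step - 0.5) + 0.5) * step    # grid (n + 1/2)*step
    central = 0.5 * (q1 + q2)                       # central reconstruction
    return q1, q2, central
```

For step size Δ, each side decoder sees error at most Δ/2 while the central decoder's error is at most Δ/4, i.e., receiving both descriptions buys one extra bit of resolution without retransmitting either one.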