Zero-Delay Rate Distortion via Filtering for Vector-Valued Gaussian Sources
We deal with zero-delay source coding of a vector-valued Gauss-Markov source
subject to a mean-squared error (MSE) fidelity criterion characterized by the
operational zero-delay vector-valued Gaussian rate distortion function (RDF).
We address this problem by considering the nonanticipative RDF (NRDF) which is
a lower bound to the causal optimal performance theoretically attainable (OPTA)
function and operational zero-delay RDF. We recall the realization that
corresponds to the optimal "test-channel" of the Gaussian NRDF, when
considering a vector Gauss-Markov source subject to a MSE distortion in the
finite time horizon. Then, we introduce sufficient conditions for the existence
of a solution to this problem in the infinite time horizon. For the asymptotic
regime, we use the asymptotic characterization of the Gaussian NRDF to provide
a new equivalent realization scheme with feedback which is characterized by a
resource allocation (reverse-waterfilling) problem across the dimension of the
vector source. We leverage the new realization to derive a predictive coding
scheme via lattice quantization with subtractive dither and joint memoryless
entropy coding. This coding scheme offers an upper bound to the operational
zero-delay vector-valued Gaussian RDF. With scalar quantization, for "r" active
dimensions of the vector Gauss-Markov source, the gap between the obtained
lower bound and the theoretical upper bound is at most 0.254r + 1 bits/vector.
We further show that with vector quantization and infinite-dimensional
Gauss-Markov sources this gap becomes negligible, i.e., the Gaussian NRDF
approximates the operational zero-delay Gaussian RDF. We also extend our
results to vector-valued Gaussian
sources of any finite memory under mild conditions. Our theoretical framework
is demonstrated with illustrative numerical experiments.
Comment: 32 pages, 9 figures, published in IEEE Journal of Selected Topics in Signal Processing
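The resource allocation across the source dimensions can be illustrated with the classical reverse-waterfilling solution for a parallel Gaussian source, which is the mechanical core of the allocation the abstract refers to (the NRDF version operates on per-dimension prediction-error variances, which is a detail omitted here). The variance values, distortion budget, and bisection tolerance below are arbitrary illustrative choices:

```python
import numpy as np

def reverse_waterfill(variances, D, tol=1e-10):
    """Classical reverse-waterfilling for a parallel Gaussian source:
    pick per-dimension distortions d_i = min(theta, var_i) so that
    sum(d_i) = D; the rate is then 0.5 * sum(log(var_i / d_i)) nats.
    Dimensions with var_i <= theta are inactive (zero rate)."""
    variances = np.asarray(variances, dtype=float)
    lo, hi = 0.0, variances.max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)  # candidate water level
        if np.minimum(theta, variances).sum() > D:
            hi = theta           # too much distortion: lower the level
        else:
            lo = theta
    d = np.minimum(theta, variances)
    rate = 0.5 * np.sum(np.log(variances / d))
    return d, rate

# Example: three dimensions, one of which ends up inactive.
d, R = reverse_waterfill([4.0, 1.0, 0.25], D=1.0)
```

Here the third dimension's variance falls below the water level, so it receives its full variance as distortion and contributes zero rate; only the remaining "active" dimensions are coded.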
Optimal Estimation via Nonanticipative Rate Distortion Function and Applications to Time-Varying Gauss-Markov Processes
In this paper, we develop finite-time horizon causal filters using the
nonanticipative rate distortion theory. We apply the developed theory to
design optimal filters for time-varying multidimensional Gauss-Markov
processes, subject to a mean square error fidelity constraint. We show that
such filters are equivalent to the design of an optimal {encoder, channel,
decoder} triple, which ensures that the error satisfies a fidelity
constraint. Moreover, we derive a universal lower bound on the mean square
error of any estimator of time-varying multidimensional Gauss-Markov processes
in terms of conditional mutual information. Unlike classical Kalman filters,
the filter developed is characterized by a reverse-waterfilling algorithm,
which ensures that the fidelity constraint is satisfied. The theoretical
results are demonstrated via illustrative examples.
Comment: 35 pages, 6 figures, submitted for publication in SIAM Journal on Control and Optimization (SICON)
Minimum Bitrate Neuromorphic Encoding for Continuous-Time Gauss-Markov Processes
In this work, we study minimum data rate tracking of a dynamical system under
a neuromorphic event-based sensing paradigm. We begin by bridging the gap
between continuous-time (CT) system dynamics and information theory's causal
rate distortion theory. We motivate the use of non-singular source codes to
quantify bitrates in event-based sampling schemes. This permits an analysis of
minimum bitrate event-based tracking using tools already established in the
control and information theory literature. We derive novel, nontrivial lower
bounds for event-based sensing and compare them with the performance of
well-known schemes in the established literature.
Active Classification for POMDPs: a Kalman-like State Estimator
The problem of state tracking with active observation control is considered
for a system modeled by a discrete-time, finite-state Markov chain observed
through conditionally Gaussian measurement vectors. The measurement model
statistics are shaped by the underlying state and an exogenous control input,
which influence the observations' quality. Exploiting an innovations approach,
an approximate minimum mean-squared error (MMSE) filter is derived to estimate
the Markov chain system state. To optimize the control strategy, the associated
mean-squared error is used as an optimization criterion in a partially
observable Markov decision process formulation. A stochastic dynamic
programming algorithm is proposed to solve for the optimal solution. To enhance
the quality of system state estimates, approximate MMSE smoothing estimators
are also derived. Finally, the performance of the proposed framework is
illustrated on the problem of physical activity detection in wireless body
sensing networks. The power of the proposed framework lies in its ability
to accommodate a broad spectrum of active classification applications,
including sensor management for object classification and tracking, estimation
of sparse signals, and radar scheduling.
Comment: 38 pages, 6 figures
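The filtering recursion underlying the estimator above is the standard forward (prediction/update) recursion for a finite-state Markov chain observed through Gaussian measurements; the posterior it returns is what an MMSE state estimate is built from. This sketch omits the paper's observation-control input, and the transition matrix, means, and noise variance are made-up illustrative values:

```python
import numpy as np

def hmm_filter(y, P, means, var, pi0):
    """Forward filtering for a finite-state Markov chain with
    conditionally Gaussian scalar observations: returns the posterior
    p(x_t | y_1..y_t) at every step."""
    post = np.asarray(pi0, dtype=float)
    means = np.asarray(means, dtype=float)
    history = []
    for obs in y:
        pred = post @ P                                  # predict step
        lik = np.exp(-0.5 * (obs - means) ** 2 / var)    # Gaussian likelihoods
        post = pred * lik
        post /= post.sum()                               # normalize (update step)
        history.append(post.copy())
    return np.array(history)

# Two hidden states with observation means 0 and 2.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
posts = hmm_filter([0.1, 0.0, 2.1], P, means=[0.0, 2.0], var=0.5,
                   pi0=[0.5, 0.5])
```

An MMSE estimate of any function of the state is then a posterior-weighted average, e.g. `posts[-1] @ means` for the conditional mean of the observation level.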
Source Coding When the Side Information May Be Delayed
For memoryless sources, delayed side information at the decoder does not
improve the rate-distortion function. However, this is not the case for more
general sources with memory, as demonstrated by a number of works focusing on
the special case of (delayed) feedforward. In this paper, a setting is studied
in which the encoder is potentially uncertain about the delay with which
measurements of the side information are acquired at the decoder. Assuming a
hidden Markov model for the sources, a single-letter characterization is first
given for the setup where the side information delay is arbitrary and known
at the encoder, and the reconstruction at the destination is required to be
(near) lossless. Then, with delay equal to zero or one source symbol, a
single-letter characterization is given of the rate-distortion region for the
case where side information may be delayed or not, unbeknownst to the encoder.
The characterization is further extended to allow for additional information to
be sent when the side information is not delayed. Finally, examples for binary
and Gaussian sources are provided.
Comment: revised July 201
Tracking an Auto-Regressive Process with Limited Communication per Unit Time
Samples from a high-dimensional AR[1] process are observed by a sender which
can communicate only finitely many bits per unit time to a receiver. The
receiver seeks to form an estimate of the process value at every time instant
in real-time. We consider a time-slotted communication model in a slow-sampling
regime where multiple communication slots occur between two sampling instants.
We propose a successive update scheme which uses communication between sampling
instants to refine estimates of the latest sample and study the following
question: is it better to pool the bits of multiple slots to send
better-refined estimates, making the receiver wait longer for every
refinement, or to be fast but loose and send new information at every communication
opportunity? We show that the fast but loose successive update scheme with
ideal spherical codes is universally optimal asymptotically for a large
dimension. However, most practical quantization codes for fixed dimensions do
not meet the ideal performance required for this optimality, and they typically
will have a bias in the form of a fixed additive error. Interestingly, our
analysis shows that the fast but loose scheme is not an optimal choice in the
presence of such errors, and a judiciously chosen frequency of updates
outperforms it.
Optimal Causal Rate-Constrained Sampling of the Wiener Process
We consider the following communication scenario. An encoder causally observes the Wiener process and decides when and what to transmit about it. A decoder makes real-time estimation of the process using causally received codewords. We determine the causal encoding and decoding policies that jointly minimize the mean-square estimation error, under the long-term communication rate constraint of R bits per second. We show that an optimal encoding policy can be implemented as a causal sampling policy followed by a causal compressing policy. We prove that the optimal encoding policy samples the Wiener process once the innovation passes either √(1/R) or −√(1/R), and compresses the sign of the innovation (SOI) using a 1-bit codeword. The SOI coding scheme achieves the operational distortion-rate function, which is equal to D^op(R) = 1/(6R). Surprisingly, this is significantly better than the distortion-rate tradeoff achieved in the limit of infinite delay by the best non-causal code. This is because the SOI coding scheme leverages the free timing information supplied by the zero-delay channel between the encoder and the decoder. The key to unlock that gain is the event-triggered nature of the SOI sampling policy. In contrast, the distortion-rate tradeoffs achieved with deterministic sampling policies are much worse: we prove that the causal informational distortion-rate function in that scenario is as high as D_DET(R) = 5/(6R). It is achieved by the uniform sampling policy with the sampling interval 1/R. In either case, the optimal strategy is to sample the process as fast as possible and to transmit 1-bit codewords to the decoder without delay.
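The distortion level of the SOI policy is easy to check in a minimal discrete-time simulation: when the innovation crosses ±√(1/R), the decoder learns its sign and, since the threshold is known, jumps its estimate by the signed threshold. The step size, horizon, and seed below are arbitrary, and the discretization ignores the small overshoot at crossing times:

```python
import numpy as np

def soi_mse(R=1.0, dt=1e-3, T=200.0, seed=0):
    """Event-triggered sign-of-innovation (SOI) sampling of a Wiener
    process on a discrete-time grid. One bit fires on average every
    thr**2 = 1/R seconds, i.e. R bits per second."""
    rng = np.random.default_rng(seed)
    thr = np.sqrt(1.0 / R)
    n = int(T / dt)
    incs = np.sqrt(dt) * rng.standard_normal(n)  # Wiener increments
    w = w_hat = 0.0
    acc = 0.0
    for dw in incs:
        w += dw
        if abs(w - w_hat) >= thr:
            w_hat += thr * np.sign(w - w_hat)    # 1-bit update
        acc += (w - w_hat) ** 2
    return acc / n

mse = soi_mse(R=1.0)   # theory predicts D(R) = 1/(6R), about 0.167 here
```

The empirical mean-square error should hover near 1/(6R), the operational distortion-rate function stated in the abstract.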