A Nonstochastic Information Theory for Communication and State Estimation
In communications, unknown variables are usually modelled as random
variables, and concepts such as independence, entropy and information are
defined in terms of the underlying probability distributions. In contrast,
control theory often treats uncertainties and disturbances as bounded unknowns
having no statistical structure. The area of networked control combines both
fields, raising the question of whether it is possible to construct meaningful
analogues of stochastic concepts such as independence, Markovness, entropy and
information without assuming a probability space. This paper introduces a
framework for doing so, leading to the construction of a maximin information
functional for nonstochastic variables. It is shown that the largest maximin
information rate through a memoryless, error-prone channel in this framework
coincides with the block-coding zero-error capacity of the channel. Maximin
information is then used to derive tight conditions for uniformly estimating
the state of a linear time-invariant system over such a channel, paralleling
recent results of Matveev and Savkin.
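The connection to zero-error capacity can be made concrete on a toy alphabet: the one-shot zero-error rate of a channel is log2 of the independence number of its confusability graph, where two inputs are adjacent iff they can produce a common output. A minimal sketch, assuming Shannon's pentagon channel as the example (the brute-force search is illustrative, not from the paper); note that block coding over many channel uses can exceed this one-shot rate (the pentagon famously achieves log2 sqrt(5) bits per use):

```python
import itertools
import math

def confusability_graph(channel):
    """Edges between input symbols that share a reachable output.

    channel[x] is the set of outputs reachable from input x.
    """
    inputs = list(channel)
    edges = set()
    for a, b in itertools.combinations(inputs, 2):
        if channel[a] & channel[b]:
            edges.add(frozenset((a, b)))
    return inputs, edges

def max_independent_set_size(nodes, edges):
    """Brute-force independence number (fine for toy alphabets)."""
    for r in range(len(nodes), 0, -1):
        for subset in itertools.combinations(nodes, r):
            if all(frozenset(pair) not in edges
                   for pair in itertools.combinations(subset, 2)):
                return r
    return 0

# Pentagon channel: input x can be received as x or x+1 (mod 5),
# so the confusability graph is the 5-cycle.
pentagon = {x: {x, (x + 1) % 5} for x in range(5)}
nodes, edges = confusability_graph(pentagon)
alpha = max_independent_set_size(nodes, edges)
print(alpha, math.log2(alpha))  # independence number 2 -> one-shot rate 1 bit
```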
On Continuous-Time Gaussian Channels
A continuous-time white Gaussian channel can be formulated using white
Gaussian noise, and the conventional way of examining such a channel is the
sampling approach based on the Shannon-Nyquist sampling theorem, where the
original continuous-time channel is converted into an equivalent discrete-time
channel, to which a great variety of established tools and methodologies can be
applied. A key shortcoming of this scheme, however, is that continuous-time
feedback and memory cannot be incorporated into the channel model. It turns out
that this issue can be circumvented by considering the Brownian motion
formulation of a continuous-time white Gaussian channel. Nevertheless, as
opposed to the white Gaussian noise formulation, a link that establishes the
information-theoretic connection between a continuous-time channel under the
Brownian motion formulation and its discrete-time counterparts has long been
missing. This paper fills that gap by establishing causality-preserving
connections between continuous-time Gaussian feedback/memory channels and their
associated discrete-time versions, in the form of sampling and approximation
theorems, which we believe will play an important role in the further
development of continuous-time information theory.
As an immediate application of the approximation theorem, we propose the
so-called approximation approach for examining continuous-time white Gaussian
channels in the point-to-point or multi-user setting. It turns out that the
approximation approach, complemented by relevant tools from stochastic
calculus, can enhance our understanding of continuous-time Gaussian channels:
it gives alternative and strengthened interpretations of some long-held
folklore, recovers "long known" results from new perspectives, and rigorously
establishes new results predicted by the intuition that the approximation
approach carries.
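For the memoryless, feedback-free case where the sampling approach does apply, the reduction is concrete: a channel of bandwidth W, power P, and noise power spectral density N0/2 becomes 2W independent discrete AWGN uses per second, giving the classical capacity W log2(1 + P/(N0 W)) bits/s. A minimal sketch (the numerical values are illustrative):

```python
import math

def awgn_capacity(P, N0, W):
    """Capacity (bits/s) of a bandlimited white Gaussian channel via the
    Shannon-Nyquist sampling reduction: 2W samples/s, each a discrete-time
    AWGN use with SNR = P / (N0 * W)."""
    return W * math.log2(1.0 + P / (N0 * W))

P, N0 = 1.0, 1.0
for W in (1.0, 10.0, 1000.0):
    print(W, awgn_capacity(P, N0, W))
# As W grows, capacity increases toward the infinite-bandwidth
# limit P / (N0 * ln 2) bits/s.
print(P / (N0 * math.log(2)))
```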
The Capacity of Channels with Feedback
We introduce a general framework for treating channels with memory and
feedback. First, we generalize Massey's concept of directed information and use
it to characterize the feedback capacity of general channels. Second, we
present coding results for Markov channels. This requires determining
appropriate sufficient statistics at the encoder and decoder. Third, a dynamic
programming framework for computing the capacity of Markov channels is
presented. Fourth, it is shown that the average cost optimality equation (ACOE)
can be viewed as an implicit single-letter characterization of the capacity.
Fifth, scenarios with simple sufficient statistics are described.
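Massey's directed information, I(X^n -> Y^n) = sum_{i=1}^n I(X^i; Y_i | Y^{i-1}), can be evaluated exactly from a small joint pmf. A hedged sketch (the brute-force marginalization and the binary symmetric channel example are illustrative, not from the paper); for a memoryless channel used without feedback, directed information coincides with the ordinary mutual information:

```python
import itertools
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in pmf.values() if q > 0)

def marginal(p, keep):
    """Marginalize a joint pmf over tuples, keeping the given indices."""
    out = {}
    for seq, prob in p.items():
        key = tuple(seq[i] for i in keep)
        out[key] = out.get(key, 0.0) + prob
    return out

def directed_information(p, n):
    """I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}) (Massey).

    p maps interleaved tuples (x1, y1, ..., xn, yn) to probabilities.
    Each conditional mutual information is expanded as
    H(X^i, Y^{i-1}) + H(Y^i) - H(X^i, Y^i) - H(Y^{i-1}).
    """
    total = 0.0
    for i in range(1, n + 1):
        xs = [2 * k for k in range(i)]            # indices of x_1..x_i
        y_past = [2 * k + 1 for k in range(i - 1)]  # indices of y_1..y_{i-1}
        y_i = [2 * (i - 1) + 1]                   # index of y_i
        total += (entropy(marginal(p, xs + y_past))
                  + entropy(marginal(p, y_past + y_i))
                  - entropy(marginal(p, xs + y_past + y_i))
                  - entropy(marginal(p, y_past)))
    return total

# Two uses of a BSC(eps) with i.i.d. uniform inputs and no feedback:
# directed information equals 2 * (1 - h(eps)).
eps, n = 0.1, 2
p = {}
for bits in itertools.product((0, 1), repeat=2 * n):
    prob = 1.0
    for k in range(n):
        prob *= 0.5                                        # uniform input x_k
        prob *= (1 - eps) if bits[2 * k + 1] == bits[2 * k] else eps
    p[bits] = prob
print(directed_information(p, n))  # approx 1.062 bits
```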
Lecture Notes on Network Information Theory
These lecture notes have been converted to a book titled Network Information
Theory published recently by Cambridge University Press. This book provides a
significantly expanded exposition of the material in the lecture notes as well
as problems and bibliographic notes at the end of each chapter. The authors are
currently preparing a set of slides based on the book that will be posted in
the second half of 2012. More information about the book can be found at
http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of
the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/
Information Nonanticipative Rate Distortion Function and Its Applications
This paper investigates applications of the nonanticipative Rate Distortion
Function (RDF) in a) zero-delay Joint Source-Channel Coding (JSCC) design based
on average and excess distortion probability, b) bounding the Optimal
Performance Theoretically Attainable (OPTA) by noncausal and causal codes, and
c) computing the Rate Loss (RL) of zero-delay and causal codes with respect to
noncausal codes. These applications are described using two running examples,
the Binary Symmetric Markov Source with parameter p, (BSMS(p)) and the
multidimensional partially observed Gaussian-Markov source. For the
multidimensional Gaussian-Markov source with square error distortion, the
solution of the nonanticipative RDF is derived, its operational meaning using
JSCC design via a noisy coding theorem is shown by providing the optimal
encoding-decoding scheme over a vector Gaussian channel, and the RL of causal
and zero-delay codes with respect to noncausal codes is computed.
For the BSMS(p) with Hamming distortion, the solution of the nonanticipative
RDF is derived, the RL of causal codes with respect to noncausal codes is
computed, and an uncoded noisy coding theorem based on excess distortion
probability is shown. The information nonanticipative RDF is shown to be
equivalent to the nonanticipatory epsilon-entropy, which corresponds to the
classical RDF with an additional causality or nonanticipative condition imposed
on the optimal reproduction conditional distribution.
Comment: 34 pages, 12 figures; part of this paper was accepted for publication
in the IEEE International Symposium on Information Theory (ISIT), 2014 and in
the book Coordination Control of Distributed Systems, in the series Lecture
Notes in Control and Information Sciences, 201
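The classical (noncausal) RDF that serves as the baseline for the rate-loss comparisons above can be computed numerically with the Blahut-Arimoto algorithm. A minimal sketch for a memoryless Bernoulli source with Hamming distortion, where the known closed form R(D) = h(p) - h(D) provides a check (the slope parameter and source bias are illustrative; this is the memoryless baseline, not the nonanticipative RDF of the paper):

```python
import math

def blahut_arimoto_rd(p_x, dist, s, iters=500):
    """Blahut-Arimoto iteration for the classical rate-distortion function
    of a memoryless source. For a slope parameter s < 0, returns one point
    (D, R) on the R(D) curve.

    p_x: source pmf (list); dist[x][y]: distortion matrix.
    """
    nx, ny = len(p_x), len(dist[0])
    A = [[math.exp(s * dist[x][y]) for y in range(ny)] for x in range(nx)]

    def test_channel(q):
        # Optimal test channel Q(y|x) for the current reproduction marginal q.
        rows = [[q[y] * A[x][y] for y in range(ny)] for x in range(nx)]
        return [[v / sum(row) for v in row] for row in rows]

    q = [1.0 / ny] * ny
    for _ in range(iters):
        Q = test_channel(q)
        q = [sum(p_x[x] * Q[x][y] for x in range(nx)) for y in range(ny)]
    Q = test_channel(q)
    D = sum(p_x[x] * Q[x][y] * dist[x][y]
            for x in range(nx) for y in range(ny))
    R = sum(p_x[x] * Q[x][y] * math.log2(Q[x][y] / q[y])
            for x in range(nx) for y in range(ny) if Q[x][y] > 0)
    return D, R

def h2(u):
    """Binary entropy in bits."""
    if u in (0.0, 1.0):
        return 0.0
    return -u * math.log2(u) - (1 - u) * math.log2(1 - u)

# Bernoulli(0.3) source, Hamming distortion: R(D) = h(p) - h(D) for D <= p.
p = 0.3
D, R = blahut_arimoto_rd([1 - p, p], [[0, 1], [1, 0]], s=-2.0)
print(D, R, h2(p) - h2(D))  # computed R matches the closed form
```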