On Continuous-Time Gaussian Channels
A continuous-time white Gaussian channel can be formulated in terms of white
Gaussian noise, and a conventional way of examining such a channel is the
sampling approach based on the Shannon-Nyquist sampling theorem, where the
original continuous-time channel is converted to an equivalent discrete-time
channel, to which a great variety of established tools and methods can be
applied. A key limitation of this scheme, however, is that continuous-time
feedback and memory cannot be incorporated into the channel model. It turns out
that this issue can be circumvented by considering the Brownian motion
formulation of a continuous-time white Gaussian channel. Nevertheless, as
opposed to the white Gaussian noise formulation, a link that establishes the
information-theoretic connection between a continuous-time channel under the
Brownian motion formulation and its discrete-time counterparts has long been
missing. This paper fills this gap by establishing causality-preserving
connections between continuous-time Gaussian feedback/memory channels and their
associated discrete-time versions in the form of sampling and approximation
theorems, which we believe will play an important role in the further
development of continuous-time information theory.
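For concreteness, the two formulations contrasted above can be sketched as
follows (the notation here is ours and standard, not necessarily the paper's):

```latex
% White Gaussian noise formulation: the output is the input plus white noise
Y(t) = X(t) + N(t), \qquad t \in [0, T],
% where N(t) is a white Gaussian noise process.

% Brownian motion formulation: the same channel in integrated form
\tilde{Y}(t) = \int_0^t X(s)\, \mathrm{d}s + B(t), \qquad t \in [0, T],
% where B(t) is a standard Brownian motion, so that formally
% \mathrm{d}\tilde{Y}(t) = X(t)\,\mathrm{d}t + \mathrm{d}B(t).
```

The Brownian motion formulation is mathematically well-defined pathwise, which
is what allows continuous-time feedback (the input at time t depending on the
output up to time t) to be stated rigorously.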
As an immediate application of the approximation theorem, we propose the
so-called approximation approach to examine continuous-time white Gaussian
channels in the point-to-point or multi-user setting. It turns out that the
approximation approach, complemented by relevant tools from stochastic
calculus, can enhance our understanding of continuous-time Gaussian channels:
it gives alternative and strengthened interpretations of some long-held
folklore, recovers "long known" results from new perspectives, and rigorously
establishes new results predicted by the intuition that the approximation
approach carries.
Pointwise Relations between Information and Estimation in Gaussian Noise
Many of the classical and recent relations between information and estimation
in the presence of Gaussian noise can be viewed as identities between
expectations of random quantities. These include the I-MMSE relationship of Guo
et al.; the relative entropy and mismatched estimation relationship of
Verd\'{u}; the relationship between causal estimation and mutual information of
Duncan, and its extension to the presence of feedback by Kadota et al.; the
relationship between causal and non-causal estimation of Guo et al., and its
mismatched version of Weissman. We dispense with the expectations and explore
the nature of the pointwise relations between the respective random quantities.
The pointwise relations that we find are as succinctly stated as the original
expectation identities, and they give considerable insight into them.
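As a reference point for the expectation identities listed above, the I-MMSE
relationship of Guo et al. and Duncan's causal-estimation identity can be
written as follows (in standard notation of ours, not necessarily the paper's):

```latex
% I-MMSE (Guo, Shamai, Verdu): for the scalar channel Y = sqrt(snr) X + N,
% with N ~ N(0, 1) independent of X,
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\, I\big(X;\, \sqrt{\mathrm{snr}}\, X + N\big)
  = \frac{1}{2}\, \mathrm{mmse}(\mathrm{snr}),
\qquad
\mathrm{mmse}(\mathrm{snr}) = \mathbb{E}\!\left[\big(X - \mathbb{E}[X \mid Y]\big)^2\right].

% Duncan (1970): in the continuous-time AWGN channel dY_t = X_t dt + dB_t,
I\big(X_0^T;\, Y_0^T\big)
  = \frac{1}{2} \int_0^T \mathbb{E}\!\left[\big(X_t - \mathbb{E}[X_t \mid Y_0^t]\big)^2\right] \mathrm{d}t .
```

Both identities equate an information quantity to an expectation of an
estimation-error quantity; the pointwise program described here asks what the
relation is before taking those expectations.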
As an illustration of our results, consider Duncan's 1970 discovery that the
mutual information is equal to the causal MMSE in the AWGN channel, which can
equivalently be expressed by saying that the difference between the input-output
information density and half the causal estimation error is a zero mean random
variable (regardless of the distribution of the channel input). We characterize
this random variable explicitly, rather than merely its expectation. Classical
estimation and information theoretic quantities emerge with new and surprising
roles. For example, the variance of this random variable turns out to be given
by the causal MMSE (which, in turn, is equal to the mutual information by
Duncan's result).Comment: 31 pages, 2 figures, submitted to IEEE Transactions on Information
Theor
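The illustration above can be summarized symbolically; this is a sketch in our
own notation, following the abstract's description rather than the paper's
exact statement:

```latex
% Let i(X_0^T; Y_0^T) denote the input-output information density and let
% (X_t - E[X_t | Y_0^t])^2 be the squared causal estimation error in the
% AWGN channel dY_t = X_t dt + dB_t. Define the tracking difference
E_T \;=\; i\big(X_0^T;\, Y_0^T\big)
  \;-\; \frac{1}{2} \int_0^T \big(X_t - \mathbb{E}[X_t \mid Y_0^t]\big)^2\, \mathrm{d}t .
% The claim: E_T has zero mean regardless of the input distribution,
%   E[E_T] = 0,
% and its variance is given by the causal MMSE, which by Duncan's result
% equals the mutual information:
\mathrm{Var}(E_T)
  = \frac{1}{2} \int_0^T \mathbb{E}\!\left[\big(X_t - \mathbb{E}[X_t \mid Y_0^t]\big)^2\right] \mathrm{d}t
  = I\big(X_0^T;\, Y_0^T\big).
```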