On Continuous-Time Gaussian Channels
A continuous-time white Gaussian channel can be formulated using white
Gaussian noise, and a conventional way of examining such a channel is the
sampling approach based on the Shannon-Nyquist sampling theorem, where the
original continuous-time channel is converted to an equivalent discrete-time
channel, to which a great variety of established tools and methodology can be
applied. However, one of the key issues of this scheme is that continuous-time
feedback and memory cannot be incorporated into the channel model. It turns out
that this issue can be circumvented by considering the Brownian motion
formulation of a continuous-time white Gaussian channel. Nevertheless, as
opposed to the white Gaussian noise formulation, a link that establishes the
information-theoretic connection between a continuous-time channel under the
Brownian motion formulation and its discrete-time counterparts has long been
missing. This paper fills this gap by establishing causality-preserving
connections between continuous-time Gaussian feedback/memory channels and their
associated discrete-time versions in the forms of sampling and approximation
theorems, which we believe will play important roles in the long run for
further developing continuous-time information theory.
As an immediate application of the approximation theorem, we propose the
so-called approximation approach to examine continuous-time white Gaussian
channels in the point-to-point or multi-user setting. It turns out that the
approximation approach, complemented by relevant tools from stochastic
calculus, can enhance our understanding of continuous-time Gaussian channels in
terms of giving alternative and strengthened interpretations of some long-held
folklore, recovering "long known" results from new perspectives, and rigorously
establishing new results predicted by the intuition that the approximation
approach carries.
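The sampling approach mentioned above can be illustrated numerically: Nyquist-rate sampling converts a bandlimited continuous-time AWGN channel into a discrete-time one with the same capacity. A minimal sketch, with purely illustrative parameter values (bandwidth W, power P, and noise PSD N0 are assumptions, not from the paper):

```python
import math

# Illustrative parameters: bandwidth W (Hz), power P (W), noise PSD N0 (W/Hz)
W, P, N0 = 1e6, 1e-3, 1e-9

# Continuous-time bandlimited AWGN capacity (Shannon), in bits/second
C_cont = W * math.log2(1 + P / (N0 * W))

# Sampling approach: sampling at the Nyquist rate 2W yields a discrete-time
# AWGN channel with per-sample SNR = P / (N0 * W), whose per-sample capacity
# is (1/2) * log2(1 + SNR) bits.
samples_per_sec = 2 * W
C_disc = samples_per_sec * 0.5 * math.log2(1 + P / (N0 * W))
```

The two expressions agree term by term, which is exactly why the sampling approach lets discrete-time tools be applied to the continuous-time channel (absent feedback and memory, which is the limitation the abstract addresses).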
The Capacity of Channels with Feedback
We introduce a general framework for treating channels with memory and
feedback. First, we generalize Massey's concept of directed information and use
it to characterize the feedback capacity of general channels. Second, we
present coding results for Markov channels. This requires determining
appropriate sufficient statistics at the encoder and decoder. Third, a dynamic
programming framework for computing the capacity of Markov channels is
presented. Fourth, it is shown that the average cost optimality equation (ACOE)
can be viewed as an implicit single-letter characterization of the capacity.
Fifth, scenarios with simple sufficient statistics are described.
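Massey's directed information, which the abstract uses to characterize feedback capacity, is I(X^n → Y^n) = Σ_i I(X^i; Y_i | Y^{i-1}). A small sketch, assuming a binary symmetric channel with i.i.d. uniform inputs and no feedback (all parameters are illustrative); in this memoryless no-feedback case directed information coincides with ordinary mutual information:

```python
import itertools
import math

p = 0.1  # BSC crossover probability (illustrative)
n = 2    # block length

def bsc(y, x):
    """Transition probability P(y|x) of a binary symmetric channel."""
    return p if y != x else 1 - p

# Joint pmf over (x1..xn, y1..yn): i.i.d. uniform inputs, memoryless channel
pmf = {}
for xs in itertools.product([0, 1], repeat=n):
    for ys in itertools.product([0, 1], repeat=n):
        pmf[(xs, ys)] = (0.5 ** n) * math.prod(bsc(y, x) for x, y in zip(xs, ys))

def H(key):
    """Entropy of the marginal obtained by grouping outcomes via key(xs, ys)."""
    marg = {}
    for (xs, ys), pr in pmf.items():
        k = key(xs, ys)
        marg[k] = marg.get(k, 0.0) + pr
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

# Mutual information I(X^n; Y^n) = H(Y^n) - [H(X^n, Y^n) - H(X^n)]
HY = H(lambda xs, ys: ys)
mutual = HY - (H(lambda xs, ys: (xs, ys)) - H(lambda xs, ys: xs))

# Directed information I(X^n -> Y^n) = H(Y^n) - sum_i H(Y_i | Y^{i-1}, X^i)
directed = HY
for i in range(n):
    joint = H(lambda xs, ys, i=i: (xs[:i + 1], ys[:i + 1]))  # H(X^i, Y^i)
    cond = H(lambda xs, ys, i=i: (xs[:i + 1], ys[:i]))       # H(X^i, Y^{i-1})
    directed -= joint - cond
```

Without feedback the two quantities agree and equal n(1 − H_b(p)); the gap between them opens up precisely when feedback correlates inputs with past outputs, which is why directed information is the right object for feedback capacity.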
Effective Capacity in Broadcast Channels with Arbitrary Inputs
We consider a broadcast scenario where one transmitter communicates with two
receivers under quality-of-service constraints. The transmitter initially
employs superposition coding strategies with arbitrarily distributed signals
and sends data to both receivers. Depending on the channel state conditions, the
receivers perform successive interference cancellation to decode their own
data. We express the effective capacity region that provides the maximum
allowable sustainable data arrival rate region at the transmitter buffer or
buffers. Given an average transmission power limit, we provide a two-step
approach to obtain the optimal power allocation policies that maximize the
effective capacity region. Then, we characterize the optimal decoding regions
at the receivers in the space spanned by the channel fading power values. We
finally substantiate our results with numerical presentations.
Comment: This paper will appear in the 14th International Conference on
Wired & Wireless Internet Communications (WWIC).
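The effective capacity used above is the maximum constant arrival rate a fading service process can sustain under a QoS exponent θ: EC(θ) = −(1/θ) ln E[e^{−θR}]. A minimal sketch with a hypothetical two-state fading model (the SNR and fading values are assumptions for illustration):

```python
import math

snr = 10.0            # assumed average SNR (illustrative)
states = [0.2, 2.0]   # two equiprobable fading power values (illustrative)
rates = [math.log2(1 + h * snr) for h in states]  # service rate per block

def effective_capacity(theta):
    """EC(theta) = -(1/theta) * ln E[exp(-theta * R)] for equiprobable states."""
    return -math.log(sum(0.5 * math.exp(-theta * r) for r in rates)) / theta

loose = effective_capacity(1e-6)   # loose QoS: approaches the mean service rate
strict = effective_capacity(50.0)  # strict QoS: approaches the worst-case rate
```

The two limits show the trade-off the abstract's power-allocation problem navigates: as the delay constraint tightens (θ grows), the sustainable arrival rate collapses toward the worst fading state's rate.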