Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities
This monograph presents a unified treatment of single- and multi-user
problems in Shannon's information theory where we depart from the requirement
that the error probability decays asymptotically in the blocklength. Instead,
the error probabilities for various problems are bounded above by a
non-vanishing constant and the spotlight is shone on achievable coding rates as
functions of the growing blocklengths. This represents the study of asymptotic
estimates with non-vanishing error probabilities.
In Part I, after reviewing the fundamentals of information theory, we discuss
Strassen's seminal result for binary hypothesis testing where the type-I error
probability is non-vanishing and the rate of decay of the type-II error
probability with a growing number of independent observations is characterized.
In Part II, we use this basic hypothesis testing result to develop second- and, in some cases, third-order asymptotic expansions for point-to-point communication. In Part III, we consider network information theory problems for which the second-order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem, and special cases of the Gaussian interference and multiple-access channels. Finally, we discuss avenues for further research.
Comment: Further comments welcome
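To make the flavor of these results concrete, the following sketch records the standard second-order (normal approximation) expansions this literature builds on; the notation below is the usual one and is not quoted from the monograph itself.

```latex
% Well-known second-order expansions (standard notation, not a quotation).
% Hypothesis testing (Strassen): minimum type-II error \beta_\epsilon under
% type-I error at most \epsilon, for n i.i.d. observations:
\[
  -\log \beta_\epsilon\!\left(P^{\times n}, Q^{\times n}\right)
    = n\,D(P\|Q) + \sqrt{n\,V(P\|Q)}\;\Phi^{-1}(\epsilon) + O(\log n).
\]
% Channel coding: maximum code size M^*(n,\epsilon) at blocklength n and
% error probability \epsilon, for a DMC with capacity C and dispersion V
% (Q^{-1} is the inverse of the standard Gaussian complementary cdf):
\[
  \log M^*(n,\epsilon) = n\,C - \sqrt{n\,V}\;Q^{-1}(\epsilon) + O(\log n).
\]
```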
Lossy joint source-channel coding in the finite blocklength regime
This paper finds new tight finite-blocklength bounds for the best achievable
lossy joint source-channel code rate, and demonstrates that joint
source-channel code design brings considerable performance advantage over a
separate one in the non-asymptotic regime. A joint source-channel code maps a block of $k$ source symbols onto a length-$n$ channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability $\epsilon$ that the distortion exceeds a given threshold $d$. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy $nC - kR(d) \approx \sqrt{nV + k\mathcal{V}(d)}\,Q^{-1}(\epsilon)$, where $C$ and $V$ are the channel capacity and channel dispersion, respectively; $R(d)$ and $\mathcal{V}(d)$ are the source rate-distortion and rate-dispersion functions; and $Q$ is the standard Gaussian complementary cdf. Symbol-by-symbol (uncoded) transmission is known to achieve
the Shannon limit when the source and channel satisfy a certain probabilistic
matching condition. In this paper we show that even when this condition is not
satisfied, symbol-by-symbol transmission is, in some cases, the best known
strategy in the non-asymptotic regime.
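As a quick illustration of how the displayed approximation can be used numerically, the sketch below solves it for the largest source blocklength $k$ supported at a given channel blocklength $n$. The function names and all constants are illustrative assumptions, not values or code from the paper.

```python
# Minimal numerical sketch (our illustration, not the authors' code):
# solve  n*C - k*R(d) ~ sqrt(n*V + k*Vd) * Qinv(eps)  for the largest
# source blocklength k at channel blocklength n.
from statistics import NormalDist

def q_inv(eps: float) -> float:
    """Inverse of the standard Gaussian complementary cdf Q."""
    return NormalDist().inv_cdf(1.0 - eps)

def max_source_blocklength(n: int, eps: float, C: float, V: float,
                           Rd: float, Vd: float) -> float:
    """Largest k with n*C - k*Rd >= sqrt(n*V + k*Vd) * Qinv(eps).

    Assumes eps < 1/2 (so Qinv(eps) > 0), which makes the slack below
    strictly decreasing in k; bisection then finds the boundary.
    """
    qi = q_inv(eps)
    def slack(k: float) -> float:
        return n * C - k * Rd - (n * V + k * Vd) ** 0.5 * qi
    lo, hi = 0.0, n * C / Rd  # k can never exceed the first-order limit
    for _ in range(100):      # bisection to high precision
        mid = (lo + hi) / 2.0
        if slack(mid) >= 0.0:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative values only (C, V for some channel; Rd, Vd for some source).
print(max_source_blocklength(n=1000, eps=1e-3, C=0.5, V=0.25, Rd=0.4, Vd=0.1))
```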
Information Nonanticipative Rate Distortion Function and Its Applications
This paper investigates applications of the nonanticipative Rate Distortion Function (RDF) in (a) zero-delay Joint Source-Channel Coding (JSCC) design based on average and excess distortion probability, (b) bounding the Optimal Performance Theoretically Attainable (OPTA) by noncausal and causal codes, and (c) computing the Rate Loss (RL) of zero-delay and causal codes with respect to noncausal codes. These applications are described using two running examples: the Binary Symmetric Markov Source with parameter p (BSMS(p)) and the
multidimensional partially observed Gaussian-Markov source. For the
multidimensional Gaussian-Markov source with squared-error distortion,
solution of the nonanticipative RDF is derived, its operational meaning using
JSCC design via a noisy coding theorem is shown by providing the optimal
encoding-decoding scheme over a vector Gaussian channel, and the RL of causal
and zero-delay codes with respect to noncausal codes is computed.
For the BSMS(p) with Hamming distortion, the solution of the nonanticipative
RDF is derived, the RL of causal codes with respect to noncausal codes is
computed, and an uncoded noisy coding theorem based on excess distortion
probability is shown. The information nonanticipative RDF is shown to be
equivalent to the nonanticipatory epsilon-entropy, which corresponds to the
classical RDF with an additional causality or nonanticipative condition imposed
on the optimal reproduction conditional distribution.
Comment: 34 pages, 12 figures; part of this paper was accepted for publication at the IEEE International Symposium on Information Theory (ISIT) 2014 and in the book Coordination Control of Distributed Systems, Lecture Notes in Control and Information Sciences, 201
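One compact way to express the causality condition the abstract describes is as the classical RDF with a Markov-chain constraint on the reproduction kernel. The formulation below is a sketch of that condition under our own notation and normalization; the paper's exact definition may differ in details.

```latex
% Sketch (assumption, not quoted from the paper): the classical RDF with
% the nonanticipative (causality) condition imposed as a Markov chain.
\[
  R^{na}(D) \;=\; \inf_{\substack{P_{Y^n \mid X^n}\,:\;
      \frac{1}{n}\mathbf{E}\,[d(X^n,Y^n)] \le D \\
      X_{t+1}^{n} \,\leftrightarrow\, (X^{t},\, Y^{t-1}) \,\leftrightarrow\, Y_{t},
      \quad t = 1,\dots,n}}
      \frac{1}{n}\, I(X^n; Y^n)
\]
% The Markov chain forces each reproduction symbol Y_t to depend only on
% past and present source symbols X^t and past reproductions Y^{t-1}.
```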
On optimum parameter modulation-estimation from a large deviations perspective
We consider the problem of jointly optimum modulation and estimation of a
real-valued random parameter, conveyed over an additive white Gaussian noise
(AWGN) channel, where the performance metric is the large deviations behavior
of the estimator, namely, the exponential decay rate (as a function of the
observation time) of the probability that the estimation error would exceed a
certain threshold. Our basic result is an exact characterization
of the fastest achievable exponential decay rate, among all possible
modulator-estimator (transmitter-receiver) pairs, where the modulator is
limited only in the signal power, but not in bandwidth. This exponential rate
turns out to be given by the reliability function of the AWGN channel. We also
discuss several ways to achieve this optimum performance, and one of them is
based on quantization of the parameter, followed by optimum channel coding and
modulation, which gives rise to a separation-based transmitter, if one views
this setting from the perspective of joint source-channel coding. This is in
spite of the fact that, in general, when error exponents are considered, the
source-channel separation theorem does not hold true. We also discuss observations, modifications, and extensions of this result in several directions, including other channels and the case of multidimensional parameter vectors. One of our findings concerning the latter is that there is
an abrupt threshold effect in the dimensionality of the parameter vector: below
a certain critical dimension, the probability of excess estimation error may
still decay exponentially, but beyond this value, it must converge to unity.
Comment: 26 pages; submitted to the IEEE Transactions on Information Theory
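For concreteness, the large deviations performance metric described in the opening sentences can be written as follows; the symbols are our own shorthand for the quantities named in the abstract, not the paper's notation.

```latex
% The figure of merit, formalized from the abstract's description: the
% exponential decay rate, in the observation time T, of the probability
% that the estimation error exceeds a threshold \Delta:
\[
  E(\Delta) \;=\; \lim_{T \to \infty} \, -\frac{1}{T}\,
      \log \Pr\bigl\{\, |\hat{\theta}(y_0^T) - \theta| > \Delta \,\bigr\},
\]
% to be maximized jointly over the power-limited modulator and the
% estimator \hat{\theta}; the paper shows the best E(\Delta) is given by
% the reliability function of the AWGN channel.
```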
Source-Channel Diversity for Parallel Channels
We consider transmitting a source across a pair of independent, non-ergodic
channels with random states (e.g., slow fading channels) so as to minimize the
average distortion. The general problem is unsolved. Hence, we focus on
comparing two commonly used source and channel encoding systems which
correspond to exploiting diversity either at the physical layer through
parallel channel coding or at the application layer through multiple
description source coding.
For on-off channel models, source coding diversity offers better performance.
For channels with a continuous range of reception quality, we show the reverse
is true. Specifically, we introduce a new figure of merit called the distortion exponent, which measures how fast the average distortion decays with SNR. For
continuous-state models such as additive white Gaussian noise channels with
multiplicative Rayleigh fading, optimal channel coding diversity at the
physical layer is more efficient than source coding diversity at the
application layer in that the former achieves a better distortion exponent.
Finally, we consider a third architecture: multiple description encoding with joint source-channel decoding. We show that this architecture achieves the same distortion exponent as systems with optimal channel coding diversity for continuous-state channels, and maintains the advantages of
multiple description systems for on-off channels. Thus, the multiple
description system with joint decoding achieves the best performance, from
among the three architectures considered, on both continuous-state and on-off
channels.
Comment: 48 pages, 14 figures
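The distortion exponent introduced above has a standard definition in this line of work, reproduced here as a sketch (the paper's normalization may differ):

```latex
% Distortion exponent: the rate at which the expected end-to-end
% distortion decays with SNR on a log-log scale:
\[
  \Delta \;=\; -\lim_{\mathrm{SNR} \to \infty}
      \frac{\log \mathbf{E}[D]}{\log \mathrm{SNR}}
\]
% A larger \Delta means the average distortion decays polynomially
% faster as the channel SNR grows.
```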