A strong converse for a collection of network source coding problems
We prove a strong converse for particular source coding problems: the Ahlswede-Korner (coded side information) problem, lossless source coding for multicast networks with side-information at the end nodes, and the Gray-Wyner problem. Source and side-information sequences are drawn i.i.d. according to a given distribution on a finite alphabet. The strong converse discussed here states that when a given rate vector R is not D-achievable, the probability of observing distortion D for any sequence of block codes at rate R must decrease exponentially to 0 as the block length grows without bound. This strong converse implies the prior strong converses for the point-to-point network, the Slepian-Wolf problem, and the Ahlswede-Korner (coded side information) problem.
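A minimal numerical illustration of the point-to-point special case mentioned above (a sketch only, not the paper's proof technique; the Bernoulli(0.11) source and rate 0.3 are hypothetical choices): when an i.i.d. source is coded below its entropy, even the best fixed-length codebook succeeds with probability that shrinks as the blocklength grows.

```python
import math

def best_codebook_success(n, rate, p):
    """Exact success probability of the best fixed-length code of rate
    `rate`: a codebook holding the 2**(n*rate) most probable length-n
    Bernoulli(p) sequences (lowest weight first, since p < 1/2)."""
    budget = 2 ** int(n * rate)              # number of codewords
    success = 0.0
    for w in range(n + 1):
        take = min(math.comb(n, w), budget)  # weight-w sequences we can cover
        success += take * p**w * (1 - p) ** (n - w)
        budget -= take
        if budget == 0:
            break
    return success

# Rate 0.3 is below the entropy H(0.11) ~ 0.50 bits, so the success
# probability of even the best code must vanish with the blocklength.
for n in (20, 60, 100):
    print(n, best_codebook_success(n, 0.3, 0.11))
```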
Exponential Strong Converse for Successive Refinement with Causal Decoder Side Information
We consider the multi-user successive refinement problem with causal decoder
side information and derive an exponential strong converse theorem. The
rate-distortion region for the problem can be derived as a straightforward
extension of the two-user case by Maor and Merhav (2008). We show that for any
rate-distortion tuple outside the rate-distortion region of the multi-user
successive refinement problem with causal decoder side information, the joint
excess-distortion probability approaches one exponentially fast. Our proof
follows by judiciously adapting the recently proposed strong converse technique
by Oohama using the information spectrum method, the variational form of the
rate-distortion region and H\"older's inequality. The lossy source coding
problem with causal decoder side information considered by El Gamal and
Weissman is a special case of the current problem. Therefore, the
exponential strong converse theorem for the El Gamal and Weissman problem
follows as a corollary of our result.
Empirical processes, typical sequences and coordinated actions in standard Borel spaces
This paper proposes a new notion of typical sequences on a wide class of
abstract alphabets (so-called standard Borel spaces), which is based on
approximations of memoryless sources by empirical distributions uniformly over
a class of measurable "test functions." In the finite-alphabet case, we can
take all uniformly bounded functions and recover the usual notion of strong
typicality (or typicality under the total variation distance). For a general
alphabet, however, this function class turns out to be too large, and must be
restricted. With this in mind, we define typicality with respect to any
Glivenko-Cantelli function class (i.e., a function class that admits a Uniform
Law of Large Numbers) and demonstrate its power by giving simple derivations of
the fundamental limits on the achievable rates in several source coding
scenarios, in which the relevant operational criteria pertain to reproducing
empirical averages of a general-alphabet stationary memoryless source with
respect to a suitable function class.
Comment: 14 pages, 3 pdf figures; accepted to IEEE Transactions on Information Theory.
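The finite-alphabet remark above can be checked numerically: over all test functions bounded by 1, the worst-case gap between empirical and true averages is exactly the l1 distance between the empirical and true distributions, and it shrinks as the sample grows. A sketch with a hypothetical four-letter distribution:

```python
import random

def l1_gap(probs, n, rng):
    """l1 distance between the empirical distribution of n i.i.d. draws
    and the true distribution `probs`; for test functions f with
    |f| <= 1 this equals sup_f |empirical mean of f - E[f]|."""
    counts = [0] * len(probs)
    for x in rng.choices(range(len(probs)), weights=probs, k=n):
        counts[x] += 1
    return sum(abs(c / n - q) for c, q in zip(counts, probs))

def avg_gap(probs, n, rng, trials=20):
    return sum(l1_gap(probs, n, rng) for _ in range(trials)) / trials

rng = random.Random(0)
probs = [0.5, 0.25, 0.15, 0.1]     # hypothetical source distribution
print(avg_gap(probs, 100, rng))    # typically around 0.13
print(avg_gap(probs, 10000, rng))  # typically around 0.013: ~sqrt(n) gain
```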
Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities
This monograph presents a unified treatment of single- and multi-user
problems in Shannon's information theory where we depart from the requirement
that the error probability decays asymptotically in the blocklength. Instead,
the error probabilities for various problems are bounded above by a
non-vanishing constant and the spotlight is shone on achievable coding rates as
functions of the growing blocklengths. This represents the study of asymptotic
estimates with non-vanishing error probabilities.
In Part I, after reviewing the fundamentals of information theory, we discuss
Strassen's seminal result for binary hypothesis testing where the type-I error
probability is non-vanishing and the rate of decay of the type-II error
probability with growing number of independent observations is characterized.
In Part II, we use this basic hypothesis testing result to develop second- and
sometimes, even third-order asymptotic expansions for point-to-point
communication. Finally in Part III, we consider network information theory
problems for which the second-order asymptotics are known. These problems
include some classes of channels with random state, the multiple-encoder
distributed lossless source coding (Slepian-Wolf) problem and special cases of
the Gaussian interference and multiple-access channels. Finally, we discuss
avenues for further research.
Comment: Further comments welcome.
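Strassen's hypothesis-testing result described above can be probed numerically (a sketch; the Bernoulli pair and eps = 0.2 are illustrative choices, and the code uses exact binomial tails rather than the monograph's expansions): with the type-I error pinned at a non-vanishing eps, the optimal type-II error still decays exponentially, with exponent approaching the relative entropy D(P||Q) from below by a dispersion term of order 1/sqrt(n).

```python
import math

def log_binom_pmf(n, k, p):
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def type2_exponent(n, p, q, eps):
    """Optimal Neyman-Pearson test of Bernoulli(p) vs Bernoulli(q), p > q.
    The likelihood ratio is monotone in the number of ones, so the best
    test declares P when the count is at least some threshold k; choose
    the largest k with type-I error <= eps, return -(1/n) ln(type-II)."""
    cum, k = 0.0, 0
    while cum + math.exp(log_binom_pmf(n, k, p)) <= eps:
        cum += math.exp(log_binom_pmf(n, k, p))
        k += 1                      # invariant: P_P(count < k) <= eps
    logs = [log_binom_pmf(n, w, q) for w in range(k, n + 1)]
    m = max(logs)                   # log-sum-exp for P_Q(count >= k)
    return -(m + math.log(sum(math.exp(x - m) for x in logs))) / n

p, q, eps = 0.6, 0.4, 0.2
D = p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
print(type2_exponent(2000, p, q, eps), D)  # exponent just below D ~ 0.081 nats
```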
Capacity of wireless erasure networks
In this paper, a special class of wireless networks, called wireless erasure networks, is considered. In these networks, each node is connected to a set of nodes by possibly correlated erasure channels. The network model incorporates the broadcast nature of the wireless environment by requiring each node to send the same signal on all outgoing channels. However, we assume there is no interference in reception. Such models are therefore appropriate for wireless networks where all information transmission is packetized and where some mechanism for interference avoidance is already built in. This paper looks at multicast problems over these networks. The capacity is obtained under the assumption that the erasure locations on all links of the network are provided to the destinations. It turns out that the capacity region has a nice max-flow min-cut interpretation. The definition of cut-capacity in these networks incorporates the broadcast property of the wireless medium. It is further shown that linear coding at nodes in the network suffices to achieve the capacity region. Finally, the performance of different coding schemes in these networks when no side information is available to the destinations is analyzed.
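The cut-capacity just described can be computed by brute force on a toy unicast instance (a sketch assuming independent erasures; the four-node diamond and its erasure probabilities are hypothetical). Because each node broadcasts one signal, a node on the source side of a cut gets a packet across iff not all of its crossing edges erase, so it contributes 1 minus the product of their erasure probabilities:

```python
from itertools import combinations

def cut_value(S, eps):
    """Capacity of the cut S (source side): each node i in S contributes
    1 - prod of erasure probabilities on its edges leaving S, reflecting
    the broadcast constraint that one transmission feeds all its edges."""
    total = 0.0
    for i in S:
        prod = 1.0
        for (u, v), e in eps.items():
            if u == i and v not in S:
                prod *= e
        total += 1.0 - prod          # 0 if i has no edge leaving S
    return total

def min_cut_capacity(eps, src, dst):
    nodes = {u for u, _ in eps} | {v for _, v in eps}
    middle = sorted(nodes - {src, dst})
    return min(
        cut_value({src, *extra}, eps)
        for r in range(len(middle) + 1)
        for extra in combinations(middle, r)
    )

# Hypothetical diamond: s broadcasts to relays a and b, which forward to t.
eps = {("s", "a"): 0.2, ("s", "b"): 0.5, ("a", "t"): 0.1, ("b", "t"): 0.3}
print(min_cut_capacity(eps, "s", "t"))  # 0.9: the cut {s}, 1 - 0.2*0.5
```

The minimizing cut here is {s} itself: the source's single broadcast fails only when both outgoing channels erase at once.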