Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities
This monograph presents a unified treatment of single- and multi-user
problems in Shannon's information theory in which we depart from the requirement
that the error probability vanishes asymptotically with the blocklength. Instead,
the error probabilities for the various problems are bounded above by a
non-vanishing constant, and the focus is on achievable coding rates as
functions of the growing blocklength. This represents the study of asymptotic
estimates with non-vanishing error probabilities.
In Part I, after reviewing the fundamentals of information theory, we discuss
Strassen's seminal result for binary hypothesis testing, in which the type-I error
probability is non-vanishing and the rate of decay of the type-II error
probability with a growing number of independent observations is characterized.
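For orientation, Strassen's expansion (stated here in standard notation, with $D(P\|Q)$ the relative entropy and $V(P\|Q)$ the relative entropy variance) reads
\[
  -\log \beta_{1-\epsilon}(P^n, Q^n) = n D(P\|Q) + \sqrt{n V(P\|Q)}\,\Phi^{-1}(\epsilon) + O(\log n),
\]
where $\beta_{1-\epsilon}(P^n, Q^n)$ is the smallest attainable type-II error probability over tests whose type-I error probability does not exceed $\epsilon$, and $\Phi^{-1}$ is the inverse of the standard Gaussian distribution function.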
In Part II, we use this basic hypothesis testing result to develop second- and,
in some cases, third-order asymptotic expansions for point-to-point
communication. In Part III, we consider network information theory
problems for which the second-order asymptotics are known. These problems
include some classes of channels with random state, the multiple-encoder
distributed lossless source coding (Slepian-Wolf) problem, and special cases of
the Gaussian interference and multiple-access channels. Finally, we discuss
avenues for further research.
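As a concrete instance of the Part II expansions, for a discrete memoryless channel with capacity $C$ and positive channel dispersion $V$, the maximum code size $M^*(n,\epsilon)$ attainable with error probability $\epsilon$ satisfies, in standard notation,
\[
  \log M^*(n,\epsilon) = n C - \sqrt{n V}\, Q^{-1}(\epsilon) + O(\log n),
\]
where $Q^{-1}$ is the inverse of the Gaussian complementary distribution function; this normal approximation is representative of the second-order expansions developed in Part II.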
Energy Requirements for Quantum Data Compression and 1-1 Coding
By looking at quantum data compression in second quantisation, we present
a new model for the efficient generation and use of variable-length codes. In
this picture, lossless data compression can be seen as the {\em minimum energy}
required to faithfully represent or transmit the classical information contained
within a quantum state.
To represent information, we create quanta in some predefined modes
(i.e. frequencies) prepared in one of two possible internal states (the
information-carrying degrees of freedom). Data compression is then seen as the
selective annihilation of these quanta, whose energy is effectively
dissipated into the environment. Since any increase in the energy of the
environment is intimately linked to information loss and is subject to
Landauer's erasure principle, we use this principle to distinguish between lossless and
lossy schemes and to suggest bounds on the efficiency of our lossless
compression protocol.
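The form of Landauer's erasure principle invoked here is the standard one: irreversibly erasing one bit of information dissipates at least
\[
  \Delta E \ge k_B T \ln 2
\]
of energy into an environment at temperature $T$, where $k_B$ is Boltzmann's constant.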
In line with the work of Boström and Felbinger \cite{bostroem}, we also
show that when using variable-length codes, the classical notions of prefix-free or
uniquely decipherable codes are unnecessarily restrictive given the structure
of quantum mechanics, and that a 1-1 mapping is sufficient. In the absence of
this restriction, we translate existing classical results on 1-1 coding to the
quantum domain to derive a new upper bound on the compression of quantum
information. Finally, we present a simple quantum circuit to implement our
scheme.
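A classical fact underlying this translation, stated here for orientation: once unique decipherability is dropped, the optimal one-to-one binary code assigns the strings $\emptyset, 0, 1, 00, 01, 10, 11, \ldots$ (in order of increasing length) to source symbols sorted by decreasing probability $p_1 \ge p_2 \ge \cdots$, giving expected length
\[
  L_{1\text{-}1} = \sum_{i \ge 1} p_i \lfloor \log_2 i \rfloor,
\]
which can fall strictly below the Shannon entropy $H(X)$ that lower-bounds every uniquely decipherable code.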
Network vector quantization
We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, here we present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
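For context, and not as this paper's specific formulation: the single-quantizer design that network vector quantization generalizes alternates between the two classical Lloyd optimality conditions for an encoder $\alpha$, decoder $\beta$, source $X$, and distortion measure $d$,
\[
  \alpha(x) = \arg\min_i\, d\big(x, \beta(i)\big), \qquad
  \beta(i) = \arg\min_{\hat{x}}\, \mathbb{E}\big[ d(X, \hat{x}) \mid \alpha(X) = i \big],
\]
and iterating these updates yields a locally (not necessarily globally) optimal quantizer; the network setting couples several such encoder/decoder pairs.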