7 research outputs found
Concentration of Measure Inequalities in Information Theory, Communications and Coding (Second Edition)
During the last two decades, concentration inequalities have been the subject
of exciting developments in various areas, including convex geometry,
functional analysis, statistical physics, high-dimensional statistics, pure and
applied probability theory, information theory, theoretical computer science,
and learning theory. This monograph focuses on some of the key modern
mathematical tools that are used for the derivation of concentration
inequalities, on their links to information theory, and on their various
applications to communications and coding. In addition to being a survey, this
monograph also includes various recent results derived by the authors. The
first part of the monograph introduces classical concentration inequalities for
martingales, as well as some recent refinements and extensions. The power and
versatility of the martingale approach are exemplified in the context of codes
defined on graphs and iterative decoding algorithms, as well as codes for
wireless communication. The second part of the monograph introduces the entropy
method, an information-theoretic technique for deriving concentration
inequalities. The basic ingredients of the entropy method are discussed first
in the context of logarithmic Sobolev inequalities, which underlie the
so-called functional approach to concentration of measure, and then from a
complementary information-theoretic viewpoint based on transportation-cost
inequalities and probability in metric spaces. Some representative results on
concentration for dependent random variables are briefly summarized, with
emphasis on their connections to the entropy method. Finally, we discuss
several applications of the entropy method to problems in communications and
coding, including strong converses, empirical distributions of good channel
codes, and an information-theoretic converse for concentration of measure.
Comment: Foundations and Trends in Communications and Information Theory, vol. 10, no. 1-2, pp. 1-248, 2013. The second edition was published in October 2014. ISBN of printed book: 978-1-60198-906-
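As a toy illustration of the martingale concentration inequalities surveyed above (not taken from the monograph; the function name and parameters are illustrative), the following sketch compares the empirical tail of a sum of Rademacher steps against the Azuma-Hoeffding bound P(|S_n| >= t) <= 2 exp(-t^2 / (2n)):

```python
import random
import math

def empirical_tail(n, t, trials=20000, seed=0):
    """Estimate P(|S_n| >= t) for S_n a sum of n iid Rademacher (+/-1) steps.

    S_n is a martingale with bounded differences |X_i| <= 1, so the
    Azuma-Hoeffding inequality gives P(|S_n| >= t) <= 2*exp(-t^2 / (2n)).
    """
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if abs(s) >= t:
            count += 1
    return count / trials

n, t = 100, 30
bound = 2 * math.exp(-t * t / (2 * n))  # Azuma-Hoeffding bound
freq = empirical_tail(n, t)
print(f"empirical tail: {freq:.4f}, Azuma-Hoeffding bound: {bound:.4f}")
```

The empirical frequency sits well below the bound, reflecting the looseness of the exponent for this simple sum; the monograph's refinements tighten such bounds in various regimes.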
Coding Theorems via Jar Decoding
In the development of digital communication and information theory, each new channel decoding rule has sparked a revolution at the time of its invention. In information theory, early channel coding theorems were established mainly via maximum likelihood decoding, while the arrival of typical sequence decoding signalled the era of multi-user information theory, in which achievability proofs became simple and intuitive. Practical channel code design, on the other hand, was based on minimum distance decoding at the early stage. The invention of belief propagation decoding with soft input and soft output, leading to the birth of turbo codes and low-density parity-check (LDPC) codes, which are indispensable coding techniques in current communication systems, changed the whole research area so dramatically that people began to use the term "modern coding theory" to refer to research based on this decoding rule. In this thesis, we propose a new decoding rule, dubbed jar decoding, which we expect to bring new insights to both code performance analysis and code design.
Given any channel with input alphabet X and output alphabet Y, the jar decoding rule can be expressed simply as follows: upon receiving the channel output y^n ∈ Y^n, the decoder first forms a set (called a jar) of sequences x^n ∈ X^n considered to be close to y^n and picks any codeword (if any) inside this jar as the decoding output. The way the decoder forms the jar is defined independently of the actual channel code, and in certain cases even of the channel statistics. Under jar decoding, various coding theorems are proved in this thesis. First, focusing on the word error probability, jar decoding is shown to be near optimal: achievability results are proved via jar decoding, while the converses are proved via a technique dubbed the outer mirror image of the jar, which is itself closely related to jar decoding. Combining these achievability and converse theorems then yields a Taylor-type expansion of the optimal channel coding rate at finite block length, and jar decoding is shown to be optimal up to the second order of this expansion. The flexibility of jar decoding is then illustrated by proving LDPC coding theorems via jar decoding, where the bit error probability is of concern. Finally, we consider a coding scenario called interactive encoding and decoding, and show that jar decoding can also be used to prove coding theorems and guide code design in the setting of two-way communication.
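The jar rule described above can be sketched concretely for a binary symmetric channel. In the toy decoder below (not from the thesis; the jar radius n(p + delta) and all parameter values are illustrative choices), the jar around y^n is the Hamming ball of that radius, and the decoder returns any codeword it finds inside:

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def jar_decode(y, codebook, n, p, delta=0.1):
    """Illustrative jar decoder for a BSC(p): the jar around y is the set of
    sequences within Hamming distance n*(p + delta) of y; return any codeword
    that lands in the jar (None if the jar contains no codeword)."""
    radius = n * (p + delta)
    for c in codebook:
        if hamming(c, y) <= radius:
            return c
    return None

# Toy experiment: a small random codebook used over the BSC(p).
rng = random.Random(1)
n, M, p = 20, 4, 0.05
codebook = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(M)]

errors = 0
trials = 2000
for _ in range(trials):
    x = rng.choice(codebook)
    y = tuple(b ^ (rng.random() < p) for b in x)  # BSC flips each bit w.p. p
    if jar_decode(y, codebook, n, p) != x:
        errors += 1
print(f"word error rate: {errors / trials:.3f}")
```

With high probability the transmitted codeword lands inside the jar while unrelated codewords stay far from y^n, which is the intuition behind the achievability proofs; the thesis makes the jar radius choice precise enough to match the converse up to second order.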
Resource allocation for cooperative broadcasting in W-CDMA networks
Group testing: an information theory perspective
The group testing problem concerns discovering a small number of defective
items within a large population by performing tests on pools of items. A test
is positive if the pool contains at least one defective, and negative if it
contains no defectives. This is a sparse inference problem with a combinatorial
flavour, with applications in medical testing, biology, telecommunications,
information technology, data science, and more. In this monograph, we survey
recent developments in the group testing problem from an information-theoretic
perspective. We cover several related developments: efficient algorithms with
practical storage and computation requirements, achievability bounds for
optimal decoding methods, and algorithm-independent converse bounds. We assess
the theoretical guarantees not only in terms of scaling laws, but also in terms
of the constant factors, leading to the notion of the {\em rate} of group
testing, indicating the amount of information learned per test. Considering
both noiseless and noisy settings, we identify several regimes where existing
algorithms are provably optimal or near-optimal, as well as regimes where there
remains greater potential for improvement. In addition, we survey results
concerning a number of variations on the standard group testing problem,
including partial recovery criteria, adaptive algorithms with a limited number
of stages, constrained test designs, and sublinear-time algorithms.
Comment: Survey paper, 140 pages, 19 figures. To be published in Foundations and Trends in Communications and Information Theory.
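The noiseless group testing setup described above admits a very short decoder sketch. The following toy implementation (illustrative only; the Bernoulli test design and parameter values are assumptions, not taken from the survey) uses the COMP rule: any item appearing in a negative test is definitely non-defective, and everything that survives is declared defective:

```python
import random

def comp_decode(tests, outcomes, n):
    """COMP decoding for noiseless group testing: eliminate every item that
    appears in at least one negative test; declare all remaining items
    defective. Never misses a true defective; may keep false positives."""
    possibly_defective = set(range(n))
    for pool, positive in zip(tests, outcomes):
        if not positive:
            possibly_defective -= set(pool)
    return possibly_defective

# Toy run: n items, k defectives, T tests with a Bernoulli(1/k) design.
rng = random.Random(0)
n, k, T = 100, 3, 60
defective = set(rng.sample(range(n), k))
tests = [[i for i in range(n) if rng.random() < 1 / k] for _ in range(T)]
outcomes = [bool(defective & set(pool)) for pool in tests]  # positive iff pool hits a defective

estimate = comp_decode(tests, outcomes, n)
print(f"true: {sorted(defective)}, estimated: {sorted(estimate)}")
```

Counting how many tests T such a scheme needs for reliable recovery, relative to the information-theoretic minimum of about log2(C(n, k)) bits, is exactly the notion of rate discussed in the survey.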
Noise in Quantum Information Processing
Quantum phenomena such as superposition and entanglement imbue quantum systems with information processing power in excess of their classical counterparts. These properties of quantum states are, however, highly fragile. As we enter the era of noisy intermediate-scale quantum (NISQ) devices, this vulnerability to noise is a major hurdle to the experimental realisation of quantum technologies. In this thesis we explore the role of noise in quantum information processing from two different perspectives.

In Part I we consider noise from the perspective of quantum error correcting codes. Error correcting codes are often analysed with respect to simplified toy models of noise, such as iid depolarising noise. We generalise these techniques to analyse codes under more realistic noise models, including features such as biased or correlated errors. We also consider designing customised codes which take into account and exploit features of the underlying physical noise. Such tailored codes will be of particular importance for NISQ applications, in which finite-size effects can be significant.

In Part II we apply tools from information theory to study the finite-resource effects which arise in the trade-offs between resource costs and error rates for certain quantum information processing tasks. We start by considering classical communication over quantum channels, providing a refined analysis of the trade-off between communication rate and error in the regime of a finite number of channel uses. We then extend these techniques to the problem of resource interconversion in theories such as quantum entanglement and quantum thermodynamics, studying finite-size effects which arise in resource-error trade-offs. By studying this effect in detail, we also show how detrimental finite-size effects in devices such as thermal engines may be greatly suppressed by carefully engineering the underlying resource interconversion processes.
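The simplest instance of the code analysis described in Part I can be illustrated with a classical analogue: the 3-bit repetition code (the classical counterpart of the 3-qubit bit-flip code) under iid bit-flip noise. This sketch is not from the thesis; it merely shows the kind of logical-error calculation that toy iid noise models permit:

```python
import random

def logical_error_prob(p):
    """Exact logical error probability of the 3-bit repetition code with
    majority-vote decoding under iid bit-flip noise of strength p:
    decoding fails iff 2 or 3 of the 3 bits flip."""
    return 3 * p**2 * (1 - p) + p**3

def simulate(p, trials=50000, seed=0):
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        fails += flips >= 2  # majority vote fails when >= 2 bits flip
    return fails / trials

p = 0.1
print(f"exact: {logical_error_prob(p):.4f}, simulated: {simulate(p):.4f}")
# For p < 1/2 the code suppresses the error rate (0.028 < 0.1 at p = 0.1).
```

Under biased or correlated noise of the kind considered in Part I, this clean closed-form analysis no longer applies, which is precisely why tailored codes and generalised analysis techniques are needed.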