15 research outputs found

    Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities

    This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory in which we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant, and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. This represents the study of asymptotic estimates with non-vanishing error probabilities. In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing, where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with a growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and, sometimes, even third-order asymptotic expansions for point-to-point communication. In Part III, we consider network information theory problems for which the second-order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem, and special cases of the Gaussian interference and multiple-access channels. Finally, we discuss avenues for further research.
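    As a sketch of the Strassen-type expansion referenced above (notation assumed from the standard treatment: $D(P\|Q)$ is the relative entropy, $V(P\|Q)$ the relative-entropy variance, and $\beta_n(\epsilon)$ the smallest type-II error probability over tests whose type-I error is at most $\epsilon$ on $n$ independent observations):

    $\log \beta_n(\epsilon) = -n D(P\|Q) + \sqrt{n V(P\|Q)}\, Q^{-1}(\epsilon) + O(\log n)$

    so a non-vanishing $\epsilon$ perturbs the Stein exponent only through a $\sqrt{n}$-order term governed by the Gaussian quantile $Q^{-1}(\epsilon)$.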

    Variable-length compression allowing errors

    This paper studies the fundamental limits of the minimum average length of lossless and lossy variable-length compression allowing a nonzero error probability $\epsilon$. For lossless compression, we give non-asymptotic bounds on the minimum average length in terms of Erokhin's rate-distortion function, and we use those bounds to obtain a Gaussian approximation of the speed of approach to the limit, which is quite accurate for all but small blocklengths: $(1 - \epsilon) k H(\mathsf S) - \sqrt{\frac{k V(\mathsf S)}{2 \pi}}\, e^{- \frac{(Q^{-1}(\epsilon))^2}{2}}$, where $Q^{-1}(\cdot)$ is the functional inverse of the standard Gaussian complementary cdf and $V(\mathsf S)$ is the source dispersion. A nonzero error probability thus not only reduces the asymptotically achievable rate by a factor of $1 - \epsilon$, but this asymptotic limit is approached from below, i.e., larger source dispersions and shorter blocklengths are beneficial. Variable-length lossy compression under an excess distortion constraint is shown to exhibit similar properties.
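    As an illustration, the Gaussian approximation above can be evaluated numerically with Python's standard library alone; this is a sketch, and the function name and the example source parameters are hypothetical:

```python
from math import sqrt, pi, exp
from statistics import NormalDist

def min_avg_length_approx(k, H, V, eps):
    """Gaussian approximation to the minimum average length:
    (1 - eps) * k * H - sqrt(k * V / (2*pi)) * exp(-(Q^{-1}(eps))^2 / 2),
    with k the blocklength, H the source entropy, V the source dispersion,
    and eps the tolerated error probability."""
    q_inv = NormalDist().inv_cdf(1 - eps)  # Q^{-1}(eps) = Phi^{-1}(1 - eps)
    return (1 - eps) * k * H - sqrt(k * V / (2 * pi)) * exp(-q_inv ** 2 / 2)

# Example: a source with entropy 0.5 bits and dispersion 0.25, at eps = 1%
print(min_avg_length_approx(1000, 0.5, 0.25, 0.01))
```

    Note that the first term, $(1-\epsilon) k H(\mathsf S)$, dominates for large $k$, while the dispersion term captures the approach from below.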

    Complexity and second moment of the mathematical theory of communication

    The performance of an error-correcting code is evaluated by its block error probability, code rate, and encoding and decoding complexity. The performance of a series of codes is evaluated by whether, as the block lengths approach infinity, their block error probabilities decay to zero, their code rates converge to channel capacity, and their growth in complexity stays under control. Over any discrete memoryless channel, I build codes such that: for one, their block error probabilities and code rates scale like random codes'; and for two, their encoding and decoding complexities scale like polar codes'. Quantitatively, for any constants $\pi, \rho > 0$ such that $\pi + 2\rho < 1$, I construct a series of error-correcting codes with block length $N$ approaching infinity, block error probability $\exp(-N^{\pi})$, code rate $N^{-\rho}$ less than the channel capacity, and encoding and decoding complexity $O(N \log N)$ per code block. Over any discrete memoryless channel, I also build codes such that: for one, they achieve channel capacity rapidly; and for two, their encoding and decoding complexities outperform all known codes over non-BEC channels. Quantitatively, for any constants $\tau, \rho > 0$ such that $2\rho < 1$, I construct a series of error-correcting codes with block length $N$ approaching infinity, block error probability $\exp(-(\log N)^{\tau})$, code rate $N^{-\rho}$ less than the channel capacity, and encoding and decoding complexity $O(N \log \log N)$ per code block. The two aforementioned results are built upon two pillars: a versatile framework that generates codes on the basis of channel polarization, and a calculus-probability machinery that evaluates the performance of codes. The framework that generates codes and the machinery that evaluates them can be extended to many other scenarios in network information theory. To name a few: lossless compression with side information, lossy compression, the Slepian-Wolf problem, the Wyner-Ziv problem, the multiple access channel, the wiretap channel of type I, and the broadcast channel. In each scenario, the adapted notions of block error probability and code rate approach their limits at the same paces as specified above.
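    To make the claimed scalings concrete, here is a small numerical sketch. The parameter values $\pi = 0.3$, $\rho = 0.3$ are an arbitrary choice satisfying $\pi + 2\rho < 1$; the snippet only tabulates the stated asymptotic orders, and is not an actual code construction:

```python
from math import exp, log2

pi_, rho = 0.3, 0.3            # hypothetical constants with pi_ + 2*rho < 1
for N in (2 ** 10, 2 ** 16, 2 ** 20):
    p_err = exp(-N ** pi_)     # block error probability exp(-N^pi)
    gap = N ** (-rho)          # gap to channel capacity, N^(-rho)
    work = N * log2(N)         # encoding/decoding cost of order N log N
    print(f"N={N}: P_err={p_err:.3e}, capacity gap={gap:.3e}, ops={work:.3e}")
```

    Both the error probability and the capacity gap shrink as $N$ grows, while the work per block grows only quasi-linearly.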

    Polar Coding Theorems for Discrete Systems

    Polar coding is a recently invented technique for communication over binary-input memoryless channels. This technique allows one to transmit data at rates close to the symmetric capacity of such channels with arbitrarily high reliability, using low-complexity encoding and decoding algorithms. As such, polar coding is the only explicit low-complexity method known to achieve the capacity of symmetric binary-input memoryless channels. The principle underlying polar coding is channel polarization: recursively combining several copies of a mediocre binary-input channel to create noiseless and useless channels. The same principle can also be used to obtain optimal low-complexity compression schemes for memoryless binary sources. In this dissertation, the generality of the polarization principle is investigated. It is first shown that polarization with recursive procedures is not limited to binary channels and sources. A family of low-complexity methods that polarize all discrete memoryless processes is introduced. In both data transmission and data compression, codes based on such methods achieve optimal rates, i.e., channel capacity and source entropy, respectively. The error probability behavior of such codes is as in the binary case. Next, it is shown that a large class of recursive constructions polarize memoryless processes, establishing the original polar codes as an instance of a large class of codes based on polarization methods. A formula to compute the error probability dependence of generalized constructions on the coding length is derived. Evaluating this formula reveals that substantial error probability improvements over the original polar codes can be achieved at large coding lengths by using generalized constructions, particularly over channels and sources with non-binary alphabets. 
    Polarizing capabilities of recursive methods are shown to extend beyond memoryless processes: any construction that polarizes memoryless processes will also polarize a large class of processes with memory. The principles developed are applied to settings with multiple memoryless processes. It is shown that separately applying polarization constructions to two correlated processes polarizes both the processes themselves and the correlations between them. These observations lead to polar coding theorems for multiple-access channels and separate compression of correlated sources. The proposed coding schemes achieve optimal sum rates in both problems.
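    For the binary erasure channel, the polarization principle described above admits a well-known two-line recursion: one polarization step splits a channel with erasure probability z into a worse channel with erasure probability 2z - z^2 and a better one with z^2. The sketch below (function name hypothetical) iterates this recursion and shows the erasure probabilities clustering near 0 and 1 while their mean, the overall erasure rate, is preserved:

```python
def polarize_bec(z, levels):
    """Recursively apply the BEC polarization step to erasure probability z,
    producing 2**levels synthetic-channel erasure probabilities."""
    zs = [z]
    for _ in range(levels):
        zs = [w for z0 in zs for w in (2 * z0 - z0 * z0, z0 * z0)]
    return zs

zs = polarize_bec(0.5, 10)                      # 1024 synthetic channels
good = sum(z < 1e-6 for z in zs) / len(zs)      # near-noiseless fraction
bad = sum(z > 1 - 1e-6 for z in zs) / len(zs)   # near-useless fraction
print(f"near-noiseless: {good:.2f}, near-useless: {bad:.2f}, "
      f"mean erasure: {sum(zs) / len(zs):.4f}")
```

    The mean is invariant because each step replaces z by the pair (2z - z^2, z^2), whose average is exactly z; capacity is conserved while the individual channels polarize to the extremes.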

    Weak variable-length Slepian-Wolf coding with linked encoders for mixed sources

    No full text

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of development well. They include biometric sample quality, privacy-preserving and cancelable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely, biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.