
On a Generalised Typicality and Its Applications in Information Theory

Abstract

Typicality lemmas have been successfully applied to many information-theoretic problems. The conventional strong typicality is only defined for finite alphabets, and conditional typicality and Markov lemmas can be obtained for it. Weak typicality, by contrast, can be defined on a measurable space without additional constraints and extends easily to general stochastic processes. However, to the best of our knowledge, no conditional typicality or strong Markov lemmas have been obtained for weak typicality in classical works. As a result, some important coding theorems can only be proved via strong typicality lemmas together with the discretisation-and-approximation technique. To address these problems, we show that a conditional typicality lemma can be obtained for a generic notion of typicality. We then define a multivariate typicality for general alphabets and general probability measures on product spaces, based on the relative entropy, which serves as a measure of the relevance among multiple sources. We provide a series of multivariate typicality lemmas, including conditional and joint typicality lemmas, packing and covering lemmas, as well as a strong Markov lemma, for the proposed generalised typicality. These lemmas can be used to solve source and channel coding problems in a unified way for finite, continuous, or more general alphabets. We present coding theorems in general settings using the generalised multivariate typicality lemmas, without resorting to the discretisation-and-approximation technique. In general, the resulting proofs are simpler than those obtained via strong typicality combined with the discretisation-and-approximation technique.
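To make the notion of weak typicality concrete for readers less familiar with it: a sequence drawn i.i.d. from a finite-alphabet source is weakly ε-typical when its empirical entropy rate, −(1/n) log p(x^n), lies within ε of the true entropy H(X). The following minimal sketch (the function names are ours, chosen for illustration; the thesis itself works with far more general alphabets and measures) checks this condition numerically.

```python
import math
import random

def empirical_entropy_rate(seq, pmf):
    """Compute -(1/n) * log2 p(x^n) for an i.i.d. source with pmf `pmf`."""
    n = len(seq)
    return -sum(math.log2(pmf[x]) for x in seq) / n

def is_weakly_typical(seq, pmf, eps):
    """True if `seq` is weakly eps-typical w.r.t. `pmf`."""
    entropy = -sum(p * math.log2(p) for p in pmf.values())  # H(X) in bits
    return abs(empirical_entropy_rate(seq, pmf) - entropy) <= eps

# By the asymptotic equipartition property, a long i.i.d. sequence is
# weakly typical with high probability.
random.seed(0)
pmf = {"a": 0.7, "b": 0.3}
seq = random.choices(list(pmf), weights=list(pmf.values()), k=10_000)
print(is_weakly_typical(seq, pmf, eps=0.05))
```

Note that this definition constrains only the probability of the whole sequence, not per-symbol frequencies, which is why it generalises beyond finite alphabets but does not by itself yield the conditional typicality and Markov lemmas discussed above.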
