
    Mismatched Rate-Distortion Theory: Ensembles, Bounds, and General Alphabets

    In this paper, we consider the mismatched rate-distortion problem, in which the encoding is done using a codebook, and the encoder chooses the minimum-distortion codeword according to a mismatched distortion function that differs from the true one. For the case of discrete memoryless sources, we establish achievable rate-distortion bounds using multi-user coding techniques, namely, superposition coding and expurgated parallel coding. We give examples where these attain the matched rate-distortion trade-off but a standard ensemble with independent codewords fails to do so. On the other hand, in contrast with the channel coding counterpart, we show that there are cases where structured codebooks can perform worse than their unstructured counterparts. In addition, in view of the difficulties in adapting the existing and above-mentioned results to general alphabets, we consider a simpler i.i.d. random coding ensemble, and establish its achievable rate-distortion bounds for general alphabets.
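    To make the mismatch concrete, here is a minimal Python sketch (the binary source and the particular distortion measures are illustrative choices, not taken from the paper): the encoder selects a codeword by minimising the mismatched distortion, while performance is judged under the true one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): binary source, Hamming as the
# true distortion, and an asymmetric weighting as the mismatched one.
n, rate = 16, 0.5                            # block length, coding rate
M = int(2 ** (rate * n))                     # codebook size
source = rng.integers(0, 2, size=n)          # one source word
codebook = rng.integers(0, 2, size=(M, n))   # i.i.d. random codebook

def true_d(x, y):
    return np.mean(x != y)                   # true (Hamming) distortion

def mismatched_d(x, y):
    # Penalises 0->1 and 1->0 errors unequally (arbitrary illustration).
    return np.mean(2.0 * ((x == 0) & (y == 1)) + 0.5 * ((x == 1) & (y == 0)))

# Encoder: minimum-distortion rule under the *mismatched* measure...
idx = min(range(M), key=lambda m: mismatched_d(source, codebook[m]))
# ...while the achieved quality is assessed under the *true* measure.
print("true distortion of selected codeword:", true_d(source, codebook[idx]))
```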

    Slepian-Wolf Coding for Broadcasting with Cooperative Base-Stations

    We propose a base-station (BS) cooperation model for broadcasting a discrete memoryless source in a cellular or heterogeneous network. The model allows the receivers to use helper BSs to improve network performance, and it permits the receivers to have prior side information about the source. We establish the model's information-theoretic limits in two operational modes: In Mode 1, the helper BSs are given information about the channel codeword transmitted by the main BS, and in Mode 2 they are provided correlated side information about the source. Optimal codes for Mode 1 use "hash-and-forward coding" at the helper BSs; while, in Mode 2, optimal codes use source codes from Wyner's "helper source-coding problem" at the helper BSs. We prove the optimality of both approaches by way of a new list-decoding generalisation of [8, Thm. 6], and, in doing so, show an operational duality between Modes 1 and 2. Comment: 16 pages, 1 figure.
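    For orientation, a standard background fact (not a result of this paper): in the classical single-receiver problem of lossless source coding with decoder side information, a receiver holding side information W correlated with the source S can recover S reliably if and only if the description rate satisfies the Slepian-Wolf condition

```latex
R \;\ge\; H(S \mid W)
```

    and the cooperation model above can be read as asking how helper links and side information jointly relax this per-receiver requirement.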

    Information-Theoretic Foundations of Mismatched Decoding

    Shannon's channel coding theorem characterizes the maximal rate of information that can be reliably transmitted over a communication channel when optimal encoding and decoding strategies are used. In many scenarios, however, practical considerations such as channel uncertainty and implementation constraints rule out the use of an optimal decoder. The mismatched decoding problem addresses such scenarios by considering the case that the decoder cannot be optimized, but is instead fixed as part of the problem statement. This problem is not only of direct interest in its own right, but also has close connections with other long-standing theoretical problems in information theory. In this monograph, we survey both classical literature and recent developments on the mismatched decoding problem, with an emphasis on achievable random-coding rates for memoryless channels. We present two widely-considered achievable rates known as the generalized mutual information (GMI) and the LM rate, and overview their derivations and properties. In addition, we survey several improved rates via multi-user coding techniques, as well as recent developments and challenges in establishing upper bounds on the mismatch capacity, and an analogous mismatched encoding problem in rate-distortion theory. Throughout the monograph, we highlight a variety of applications and connections with other prominent information theory problems. Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3).
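    For reference, with codewords drawn i.i.d. from an input distribution Q, decoding metric q, (X, Y) the true channel input-output pair, and X-bar distributed as Q independently of Y, these two rates take the standard forms below. Setting a(.) = 0 in the LM rate recovers the GMI, so the LM rate is at least as large.

```latex
% Generalized mutual information (GMI):
I_{\mathrm{GMI}} = \sup_{s \ge 0}\,
  \mathbb{E}\left[ \log \frac{q(X,Y)^s}{\mathbb{E}\left[ q(\bar{X},Y)^s \mid Y \right]} \right]

% LM rate (the function a(.) tilts the codeword distribution):
I_{\mathrm{LM}} = \sup_{s \ge 0,\; a(\cdot)}\,
  \mathbb{E}\left[ \log \frac{q(X,Y)^s\, e^{a(X)}}{\mathbb{E}\left[ q(\bar{X},Y)^s\, e^{a(\bar{X})} \mid Y \right]} \right]
```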

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Bibliography: p. 208-225.
    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", sufficiently so that fractal image compression is effective, but no more so than comparable standard vector quantisation techniques.
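    In its simplest (PIFS-style) form, the encoding mechanism described above is a block search with an affine grey-level fit: each range block is approximated by a downsampled domain block under a map s*D + o. The following Python sketch illustrates the idea; the block sizes and the brute-force search (spatial isometries omitted) are illustrative choices, not the dissertation's models.

```python
import numpy as np

def encode_fractal(img, rb=4):
    """Minimal sketch of fractal block coding: for each range block,
    find the domain block and affine grey-level map s*D + o under
    which the image is approximately block-wise invariant.
    Assumes a float grayscale image with sides divisible by 2*rb."""
    h, w = img.shape
    db = 2 * rb  # domain blocks are twice the range-block size
    # Pool of domain blocks, downsampled by 2x2 averaging to range size.
    domains = []
    for i in range(0, h - db + 1, db):
        for j in range(0, w - db + 1, db):
            d = img[i:i+db, j:j+db]
            domains.append(d.reshape(rb, 2, rb, 2).mean(axis=(1, 3)))
    params = []
    for i in range(0, h, rb):
        for j in range(0, w, rb):
            r = img[i:i+rb, j:j+rb].ravel()
            best = None
            for k, d in enumerate(domains):
                dv = d.ravel()
                # Least-squares affine fit: r ~ s * dv + o.
                A = np.vstack([dv, np.ones_like(dv)]).T
                sol, *_ = np.linalg.lstsq(A, r, rcond=None)
                err = np.sum((A @ sol - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, k, sol[0], sol[1])
            params.append(best[1:])  # (domain index, scale s, offset o)
    return params
```

    Decoding would iterate the stored maps from an arbitrary starting image until it converges to the (approximate) fixed point, which is what makes the representation "resolution independent".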

    Asymptotically Optimal Stochastic Lossy Coding of Markov Sources

    An effective 'on-the-fly' mechanism for stochastic lossy coding of Markov sources using string matching techniques is proposed in this paper. Earlier work has shown that the rate-distortion bound can be asymptotically achieved by a 'natural type selection' (NTS) mechanism which iteratively encodes asymptotically long source strings (from an unknown source distribution P) and regenerates the codebook according to a maximum likelihood distribution framework, after observing a set of K codewords that 'd-match' (i.e., satisfy the distortion constraint for) a respective set of K source words. This result was later generalized to sources with memory under the assumption that the source words comprise a sequence of asymptotic-length vectors (or super-symbols) over the source super-alphabet, i.e., the source is treated as a vector source. However, the earlier result suffers from a significant practical flaw: it requires the super-symbol length (and correspondingly the super-alphabet size) to grow to infinity in order to achieve the rate-distortion bound, even for finite-memory sources, e.g., Markov sources. This implies that the complexity of the NTS iteration explodes beyond any practical capability, compromising the promise of the NTS algorithm in practical scenarios for sources with memory. This work describes a considerably more efficient and tractable mechanism that achieves asymptotically optimal performance under a prescribed memory constraint, within a practical framework tailored to Markov sources. More specifically, the algorithm asymptotically finds the optimal codebook reproduction distribution, within a constrained set of distributions having the Markov property of a prescribed order, that achieves the minimum per-letter coding rate while maintaining a specified distortion level.
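    As a rough illustration of the mechanism described (a hypothetical sketch, not the paper's algorithm), one NTS-style iteration under a first-order Markov reproduction distribution might look as follows, assuming a binary alphabet and per-letter Hamming distortion: draw codewords from the current Markov model until each source word is d-matched, then re-estimate the transition matrix from the matching codewords.

```python
import numpy as np

rng = np.random.default_rng(1)

def d_match(x, y, D):
    """Per-letter Hamming distortion test: does codeword y d-match x?"""
    return np.mean(x != y) <= D

def nts_iteration(sources, q, D, alphabet=2):
    """One hypothetical NTS-style step: q is the current first-order
    Markov reproduction model (row-stochastic transition matrix).
    Returns the ML re-estimate of q from the d-matching codewords."""
    counts = np.zeros((alphabet, alphabet))
    for x in sources:
        n = len(x)
        while True:
            # Sample a candidate codeword as a Markov chain under q.
            y = np.empty(n, dtype=int)
            y[0] = rng.integers(alphabet)
            for t in range(1, n):
                y[t] = rng.choice(alphabet, p=q[y[t-1]])
            if d_match(x, y, D):
                break
        # Accumulate transition counts of the first d-matching codeword.
        for t in range(1, n):
            counts[y[t-1], y[t]] += 1
    counts += 1e-9  # avoid division by zero for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)

# Illustrative usage: start from a uniform model and iterate, e.g.
#   q = np.full((2, 2), 0.5)
#   q = nts_iteration(source_words, q, D=0.2)
# (search time grows quickly as D shrinks; parameters are illustrative)
```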