8 research outputs found

    A Universal Decoder Relative to a Given Family of Metrics

    Consider the following framework of universal decoding, suggested in [MerhavUniversal]. Given a family of decoding metrics and a random coding distribution (prior), a single universal decoder is optimal if, for any possible channel, its average error probability is no larger than that attained by the best decoder in the family, up to a subexponential multiplicative factor. We describe a general universal decoder in this framework and compute the penalty for using it. The universal metric is constructed as follows: for each metric in the family, a canonical metric is defined, and conditions under which the given prior is normal are stated. A subexponential-size set of canonical metrics with normal priors can then be merged into a single, universally optimal metric. We provide an example in which this decoder is optimal while the decoder of [MerhavUniversal] is not.
    Comment: Accepted to ISIT 201
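    The merging step described in the abstract can be illustrated with a toy sketch (the function names and the BSC-based metrics below are illustrative assumptions, not the paper's exact construction): merge a small family of normalized decoding metrics by taking their pointwise maximum, so the merged decoder scores each codeword at least as highly as the best single metric, at the cost of a factor equal to the family size, which is subexponential when that size grows subexponentially in the block length.

```python
import numpy as np

def bsc_log_metric(p):
    """Log-likelihood decoding metric for a BSC with crossover guess p."""
    def q(x, y):
        d = int(np.sum(x != y))            # Hamming distance
        n = len(x)
        return d * np.log(p) + (n - d) * np.log(1 - p)
    return q

def merged_decode(codebook, y, metrics):
    """Decode with the merged metric u(x, y) = max_k q_k(x, y)."""
    scores = [max(q(x, y) for q in metrics) for x in codebook]
    return int(np.argmax(scores))

# Toy example: two codewords, one received bit flipped from codeword 0,
# and metrics for two different crossover-probability guesses.
codebook = [np.zeros(5, dtype=int), np.ones(5, dtype=int)]
y = np.array([0, 1, 0, 0, 0])
family = [bsc_log_metric(0.05), bsc_log_metric(0.2)]
print(merged_decode(codebook, y, family))  # -> 0
```

    Here the merged decoder recovers codeword 0 even though neither single metric need match the true channel.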

    Universal Decoding for Arbitrary Channels Relative to a Given Class of Decoding Metrics

    We consider the problem of universal decoding for arbitrary unknown channels in the random coding regime. For a given random coding distribution and a given class of metric decoders, we propose a generic universal decoder whose average error probability is, within a sub-exponential multiplicative factor, no larger than that of the best decoder within this class of decoders. Since the optimum, maximum likelihood (ML) decoder of the underlying channel is not necessarily assumed to belong to the given class of decoders, this setting suggests a common generalized framework for: (i) mismatched decoding, (ii) universal decoding for a given family of channels, and (iii) universal coding and decoding for deterministic channels using the individual-sequence approach. The proof of our universality result is fairly simple, and it is demonstrated how some earlier results on universal decoding are obtained as special cases. We also demonstrate how our method extends to more complicated scenarios, such as the incorporation of noiseless feedback and the multiple-access channel.
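    Special case (i), mismatched decoding, can be sketched in a few lines (the names and the toy codebook below are illustrative assumptions, not taken from the paper): the decoder is fixed to maximize a metric q(x, y) that need not be the log-likelihood of the true channel, here a plain Hamming-agreement score.

```python
import numpy as np

def hamming_metric(x, y):
    """Mismatched decoding metric: number of agreeing positions."""
    return int(np.sum(np.asarray(x) == np.asarray(y)))

def decode(codebook, y, metric):
    """Fixed-metric decoder: argmax of q(x, y) over the codebook."""
    return int(np.argmax([metric(x, y) for x in codebook]))

codebook = [np.zeros(8, dtype=int), np.ones(8, dtype=int),
            np.array([0, 1] * 4), np.array([1, 0] * 4)]
y = np.array([1, 1, 0, 1, 0, 1, 0, 1])    # codeword 2 with its first bit flipped
print(decode(codebook, y, hamming_metric))  # -> 2
```

    Whatever the actual channel statistics, the decoder above never changes its metric; the question studied in the paper is how close a single universal decoder can come to the best fixed metric in a whole class of such decoders.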

    Information-Theoretic Foundations of Mismatched Decoding

    Shannon's channel coding theorem characterizes the maximal rate of information that can be reliably transmitted over a communication channel when optimal encoding and decoding strategies are used. In many scenarios, however, practical considerations such as channel uncertainty and implementation constraints rule out the use of an optimal decoder. The mismatched decoding problem addresses such scenarios by considering the case that the decoder cannot be optimized, but is instead fixed as part of the problem statement. This problem is not only of direct interest in its own right, but also has close connections with other long-standing theoretical problems in information theory. In this monograph, we survey both classical literature and recent developments on the mismatched decoding problem, with an emphasis on achievable random-coding rates for memoryless channels. We present two widely considered achievable rates known as the generalized mutual information (GMI) and the LM rate, and overview their derivations and properties. In addition, we survey several improved rates via multi-user coding techniques, as well as recent developments and challenges in establishing upper bounds on the mismatch capacity, and an analogous mismatched encoding problem in rate-distortion theory. Throughout the monograph, we highlight a variety of applications and connections with other prominent information theory problems.
    Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3)
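    For reference, the two achievable rates named in the abstract have standard single-letter expressions for a memoryless channel with input distribution $Q$ and decoding metric $q(x,y)$ (these formulas are the commonly cited forms, stated here as background rather than quoted from the monograph):

```latex
% Generalized mutual information (GMI):
I_{\mathrm{GMI}}(Q) \;=\; \sup_{s \ge 0}\;
  \mathbb{E}\!\left[\log \frac{q(X,Y)^{s}}
  {\sum_{\bar{x}} Q(\bar{x})\, q(\bar{x},Y)^{s}}\right]

% LM rate, which adds an optimization over a function a(\cdot) of the input:
I_{\mathrm{LM}}(Q) \;=\; \sup_{s \ge 0,\; a(\cdot)}\;
  \mathbb{E}\!\left[\log \frac{q(X,Y)^{s}\, e^{a(X)}}
  {\sum_{\bar{x}} Q(\bar{x})\, q(\bar{x},Y)^{s}\, e^{a(\bar{x})}}\right]
```

    Since the LM rate optimizes over the extra degree of freedom $a(\cdot)$, it satisfies $I_{\mathrm{LM}}(Q) \ge I_{\mathrm{GMI}}(Q)$, and both reduce to the mutual information when $q$ matches the true channel.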