A General Framework for Transmission with Transceiver Distortion and Some Applications
A general theoretical framework is presented for analyzing information
transmission over Gaussian channels with memoryless transceiver distortion,
which encompasses various nonlinear distortion models including transmit-side
clipping, receive-side analog-to-digital conversion, and others. The framework
is based on the so-called generalized mutual information (GMI), and the
analysis in particular benefits from the use of a Gaussian codebook ensemble
and nearest-neighbor decoding, for which it is established that the GMI takes a
general form analogous to the channel capacity of undistorted Gaussian
channels, with a reduced "effective" signal-to-noise ratio (SNR) that depends
on the nominal SNR and the distortion model. When applied to specific
distortion models, an array of results of engineering relevance is obtained.
For channels with transmit-side distortion only, it is shown that a
conventional approach, which treats the distorted signal as the sum of the
original signal part and an uncorrelated distortion part, achieves the GMI. For
channels with output quantization, closed-form expressions are obtained for the
effective SNR and the GMI, and related optimization problems are formulated and
solved for quantizer design. Finally, super-Nyquist sampling is analyzed within
the general framework, and it is shown that sampling beyond the Nyquist rate
increases the GMI at all SNRs. For example, with binary symmetric output
quantization, information rates exceeding one bit per channel use are
achievable by sampling the output at four times the Nyquist rate.

Comment: 32 pages (including 4 figures, 5 tables, and auxiliary materials); submitted to IEEE Transactions on Communications
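The reduced effective-SNR structure described above can be illustrated with a small Monte Carlo sketch for transmit-side clipping. The clip level, power values, and the Bussgang-style decomposition into a scaled signal part plus an uncorrelated distortion part are illustrative assumptions for this sketch, not quantities taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N0, A = 1.0, 0.1, 1.0                    # signal power, noise power, clip level (illustrative)
x = rng.normal(0.0, np.sqrt(P), 1_000_000)  # Gaussian channel input samples
gx = np.clip(x, -A, A)                      # transmit-side clipping distortion

# Decompose g(X) = alpha*X + D with D uncorrelated with X (Bussgang-style)
alpha = np.mean(x * gx) / P                 # gain of the correlated signal part
dist_var = np.mean(gx**2) - alpha**2 * P    # power of the uncorrelated distortion part

# Effective SNR: correlated signal power over distortion-plus-noise power
snr_eff = alpha**2 * P / (dist_var + N0)
gmi = 0.5 * np.log2(1.0 + snr_eff)          # capacity-like expression, bits per real channel use
```

The effective SNR is strictly below the nominal SNR `P / N0`, reflecting the rate loss from distortion, while the GMI keeps the familiar `log(1 + SNR)` shape.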
Generalized Nearest Neighbor Decoding
It is well known that for Gaussian channels, a nearest neighbor decoding
rule, which seeks the minimum Euclidean distance between a codeword and the
received channel output vector, is the maximum likelihood solution and hence
capacity-achieving. Nearest neighbor decoding remains a convenient and yet
mismatched solution for general channels, and the key message of this paper is
that the performance of the nearest neighbor decoding can be improved by
generalizing its decoding metric to incorporate channel state dependent output
processing and codeword scaling. Using the generalized mutual information,
which is a lower bound to the mismatched capacity under an independent and
identically distributed codebook ensemble, as the performance measure, this
paper establishes the optimal generalized nearest neighbor decoding rule under
Gaussian channel input. Several restricted forms of the generalized nearest
neighbor decoding rule are also derived and compared with existing solutions.
The results are illustrated through several case studies for fading channels
with imperfect receiver channel state information and for channels with
quantization effects.

Comment: 30 pages, 8 figures
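As a concrete reminder of the baseline rule this paper generalizes, here is a minimal sketch of nearest neighbor decoding over a Gaussian codebook; the codebook size, block length, and noise level are illustrative choices, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 8, 16                                   # block length, codebook size (illustrative)
codebook = rng.normal(0.0, 1.0, (M, n))        # i.i.d. Gaussian codebook ensemble

sent = 3
y = codebook[sent] + 0.1 * rng.normal(size=n)  # received vector with mild Gaussian noise

# Nearest neighbor rule: pick the codeword at minimum Euclidean distance from y
dists = np.sum((codebook - y) ** 2, axis=1)
decoded = int(np.argmin(dists))
```

The generalized rule of the paper replaces this bare distance with a metric that also applies channel-state-dependent output processing and codeword scaling before comparing distances.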
Information-Theoretic Foundations of Mismatched Decoding
Shannon's channel coding theorem characterizes the maximal rate of
information that can be reliably transmitted over a communication channel when
optimal encoding and decoding strategies are used. In many scenarios, however,
practical considerations such as channel uncertainty and implementation
constraints rule out the use of an optimal decoder. The mismatched decoding
problem addresses such scenarios by considering the case that the decoder
cannot be optimized, but is instead fixed as part of the problem statement.
This problem is not only of direct interest in its own right, but also has
close connections with other long-standing theoretical problems in information
theory. In this monograph, we survey both classical literature and recent
developments on the mismatched decoding problem, with an emphasis on achievable
random-coding rates for memoryless channels. We present two widely considered
achievable rates known as the generalized mutual information (GMI) and the LM
rate, and overview their derivations and properties. In addition, we survey
several improved rates via multi-user coding techniques, as well as recent
developments and challenges in establishing upper bounds on the mismatch
capacity, and an analogous mismatched encoding problem in rate-distortion
theory. Throughout the monograph, we highlight a variety of applications and
connections with other prominent information theory problems.

Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3)
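The GMI referenced throughout these abstracts has a standard single-letter form; for orientation, the usual definition from the mismatched-decoding literature is recalled below (generic notation, not quoted from the monograph):

```latex
I_{\mathrm{GMI}} = \sup_{s \ge 0} \; \mathbb{E}\!\left[ \log
  \frac{q(X,Y)^{s}}{\mathbb{E}_{\bar{X}}\!\left[ q(\bar{X},Y)^{s} \right]} \right]
```

where $q$ is the fixed (possibly mismatched) decoding metric, $(X,Y)$ is drawn from the input distribution and the channel, and $\bar{X}$ is an independent copy of $X$; taking $q$ to be the true channel law recovers the mutual information.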