Orthogonal Codes for Robust Low-Cost Communication
Orthogonal coding schemes, known to asymptotically achieve the capacity per
unit cost (CPUC) for single-user ergodic memoryless channels with a zero-cost
input symbol, are investigated for single-user compound memoryless channels,
which exhibit uncertainties in their input-output statistical relationships. A
minimax formulation is adopted to attain robustness. First, a class of
achievable rates per unit cost (ARPUC) is derived, and its utility is
demonstrated through several representative case studies. Second, when the
uncertainty set of channel transition statistics satisfies a convexity
property, optimization is performed over the class of ARPUC by utilizing
results of minimax robustness. The resulting CPUC lower bound indicates the
ultimate performance of the orthogonal coding scheme, and coincides with the
CPUC under certain restrictive conditions. Finally, still under the convexity
property, it is shown that the CPUC can generally be achieved by utilizing a
so-called mixed strategy, in which an orthogonal code contains an appropriate
composition of different nonzero-cost input symbols.

Comment: 2nd revision, accepted for publication
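For context, the single-user baseline that orthogonal codes are known to achieve can be stated compactly. With a cost function $b(\cdot)$ and a zero-cost symbol denoted $0$ (notation assumed here, not taken from the abstract), the capacity per unit cost of a memoryless channel is

```latex
\[
  \mathrm{CPUC} \;=\; \sup_{x \,:\, b(x) > 0}
    \frac{D\!\left(P_{Y|X=x} \,\middle\|\, P_{Y|X=0}\right)}{b(x)},
\]
```

where $D(\cdot\|\cdot)$ is the Kullback--Leibler divergence. The minimax formulation described above replaces the single transition law $P_{Y|X}$ by a worst case over the uncertainty set of channel statistics.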
A Rate-Splitting Approach to Fading Channels with Imperfect Channel-State Information
As shown by M\'edard, the capacity of fading channels with imperfect
channel-state information (CSI) can be lower-bounded by assuming a Gaussian
channel input $X$ with power $P$ and by upper-bounding the conditional entropy
$h(X \mid Y)$ by the entropy of a Gaussian random variable with variance
equal to the linear minimum mean-square error in estimating $X$ from
$Y$. We demonstrate that, using a rate-splitting approach, this lower
bound can be sharpened: by expressing the Gaussian input $X$ as the sum of two
independent Gaussian variables $X_1$ and $X_2$, and by applying M\'edard's lower
bound first to bound the mutual information between $X_1$ and $Y$ while
treating $X_2$ as noise, and by applying it a second time to the mutual
information between $X_2$ and $Y$ while assuming $X_1$ to be known, we obtain a
capacity lower bound that is strictly larger than M\'edard's lower bound. We
then generalize this approach to an arbitrary number $L$ of layers, where $X$
is expressed as the sum of $L$ independent Gaussian random variables of
respective variances $P_1, \ldots, P_L$, summing up to $P$. Among
all such rate-splitting bounds, we determine the supremum over power
allocations and total number of layers $L$. This supremum is achieved
for $L \to \infty$ and gives rise to an analytically expressible capacity lower
bound. For Gaussian fading, this novel bound is shown to converge to the
Gaussian-input mutual information as the signal-to-noise ratio (SNR) grows,
provided that the variance of the channel estimation error tends to
zero as the SNR tends to infinity.

Comment: 28 pages, 8 figures, submitted to IEEE Transactions on Information
Theory. Revised according to first round of review
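At its core, the two-layer argument is the chain rule of mutual information combined with two applications of the single-layer bound. Writing $\hat H$ for the receiver's channel estimate (notation assumed here),

```latex
\[
  I(X; Y \mid \hat H)
  \;=\; I(X_1; Y \mid \hat H) \;+\; I(X_2; Y \mid \hat H, X_1),
\]
```

where the first term is lower-bounded by treating $X_2$ as additional noise and the second by assuming $X_1$ known. The Gaussian entropy bound is thus applied to two smaller estimation problems rather than one, which is what sharpens the overall lower bound.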
Generalized Nearest Neighbor Decoding
It is well known that for Gaussian channels, a nearest neighbor decoding
rule, which seeks the minimum Euclidean distance between a codeword and the
received channel output vector, is the maximum likelihood solution and hence
capacity-achieving. Nearest neighbor decoding remains a convenient yet
mismatched solution for general channels, and the key message of this paper is
that the performance of nearest neighbor decoding can be improved by
generalizing its decoding metric to incorporate channel-state-dependent output
processing and codeword scaling. Using the generalized mutual information,
which is a lower bound to the mismatched capacity under an independent and
identically distributed codebook ensemble, as the performance measure, this
paper establishes the optimal generalized nearest neighbor decoding rule under
Gaussian channel input. Several restricted forms of the generalized nearest
neighbor decoding rule are also derived and compared with existing solutions.
The results are illustrated through several case studies for fading channels
with imperfect receiver channel state information and for channels with
quantization effects.

Comment: 30 pages, 8 figures
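As a concrete illustration of the baseline rule and of the kind of generalization the abstract describes, here is a minimal sketch. The codebook, noise level, and the `process`/`scale` placeholders are invented for the example; they are not the paper's optimal choices.

```python
import numpy as np

def nn_decode(codebook, y):
    """Classic nearest-neighbor rule: return the index of the codeword
    closest to the received vector y in Euclidean distance."""
    return int(np.argmin(np.sum((codebook - y) ** 2, axis=1)))

def generalized_nn_decode(codebook, y, process, scale):
    """Schematic generalized metric: the output is first passed through a
    (state-dependent) processing map, and each codeword is scaled, before
    the Euclidean distance is taken.  `process` and `scale` are
    placeholders standing in for the paper's derived choices."""
    z = process(y)
    return int(np.argmin(np.sum((z - scale * codebook) ** 2, axis=1)))

# Toy example: four length-2 codewords, light Gaussian noise.
rng = np.random.default_rng(0)
codebook = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
sent = 2
y = codebook[sent] + 0.1 * rng.standard_normal(2)

assert nn_decode(codebook, y) == sent
# With identity processing and unit scaling, the generalized rule
# reduces to the classic one.
assert generalized_nn_decode(codebook, y, process=lambda v: v, scale=1.0) == sent
```

For a Gaussian channel the classic rule is already maximum likelihood; the point of the generalization is that under mismatch (fading with imperfect CSI, quantized outputs), a suitable choice of processing and scaling improves the achievable GMI.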
Fundamental Limits of Gaussian Communication Networks in the Presence of Intelligent Jammers
The open nature of the wireless communication medium makes it inherently vulnerable to an active attack, wherein a malicious adversary (or jammer) transmits into the medium to disrupt the operation of the legitimate users. Therefore, developing techniques to manage the presence of a jammer and to characterize the effect of an attacker on the fundamental limits of wireless communication networks is important. This dissertation studies various Gaussian communication networks in the presence of such an adversarial jammer.
First of all, a standard Gaussian channel is considered in the presence of a jammer, known as a Gaussian arbitrarily-varying channel, but with list-decoding at the receiver. The receiver decodes a list of messages, instead of only one message, with the goal of the correct message being an element of the list. The capacity is characterized, and it is shown that under some transmitter's power constraints the adversary is able to suspend the communication between the legitimate users and make the capacity zero.
Next, generalized packing lemmas are introduced for Gaussian adversarial channels to establish capacity bounds for three Gaussian multi-user channels in the presence of adversarial jammers. Inner and outer bounds on the capacity regions of Gaussian multiple-access channels, Gaussian broadcast channels, and Gaussian interference channels are derived in the presence of malicious jammers. For the Gaussian multiple-access channel with a jammer, the inner and outer bounds coincide. In this dissertation, the adversaries can send arbitrary signals into the channel, while neither the transmitter nor the receiver knows the adversarial signals' distribution.
Finally, the capacity of the standard point-to-point Gaussian fading channel in the presence of one jammer is investigated under multiple scenarios of channel state information availability, i.e., knowledge of the exact fading coefficients. The channel state information is always partially or fully known at the receiver, which needs it to decode the message, while the transmitter or the adversary may or may not have access to this information. Here, the adversary model is the same as in the previous cases, with no knowledge of the user's transmitted signal except possibly the fading path.

Doctoral Dissertation, Electrical Engineering, 201
Information-Theoretic Foundations of Mismatched Decoding
Shannon's channel coding theorem characterizes the maximal rate of
information that can be reliably transmitted over a communication channel when
optimal encoding and decoding strategies are used. In many scenarios, however,
practical considerations such as channel uncertainty and implementation
constraints rule out the use of an optimal decoder. The mismatched decoding
problem addresses such scenarios by considering the case that the decoder
cannot be optimized, but is instead fixed as part of the problem statement.
This problem is not only of direct interest in its own right, but also has
close connections with other long-standing theoretical problems in information
theory. In this monograph, we survey both classical literature and recent
developments on the mismatched decoding problem, with an emphasis on achievable
random-coding rates for memoryless channels. We present two widely-considered
achievable rates known as the generalized mutual information (GMI) and the LM
rate, and overview their derivations and properties. In addition, we survey
several improved rates via multi-user coding techniques, as well as recent
developments and challenges in establishing upper bounds on the mismatch
capacity, and an analogous mismatched encoding problem in rate-distortion
theory. Throughout the monograph, we highlight a variety of applications and
connections with other prominent information theory problems.

Comment: Published in Foundations and Trends in Communications and Information Theory (Volume 17, Issue 2-3)
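For reference, the GMI surveyed above can be written explicitly. For a decoding metric $q(x,y)$, an input distribution $Q$, and true channel $W$ (standard notation, assumed here), the i.i.d.-ensemble GMI is

```latex
\[
  I_{\mathrm{GMI}}
  \;=\; \sup_{s > 0}\;
    \mathbb{E}\!\left[ \log
      \frac{q(X,Y)^{s}}{\sum_{\bar x} Q(\bar x)\, q(\bar x, Y)^{s}}
    \right],
\]
```

where $(X,Y) \sim Q \times W$ and $\bar x$ ranges over an independent codeword symbol drawn from $Q$; the LM rate tightens this by adding a further optimization over an auxiliary function of the input.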