Relations between random coding exponents and the statistical physics of random codes
The partition function pertaining to finite-temperature decoding of a
(typical) randomly chosen code is known to have three types of behavior,
corresponding to three phases in the plane of rate vs. temperature: the {\it
ferromagnetic phase}, corresponding to correct decoding, the {\it paramagnetic
phase}, of complete disorder, which is dominated by exponentially many
incorrect codewords, and the {\it glassy phase} (or the condensed phase), where
the system is frozen at minimum energy and dominated by subexponentially many
incorrect codewords. We show that the statistical physics associated with the
latter two phases is intimately related to random coding exponents. In
particular, the exponent associated with the probability of correct decoding at
rates above capacity is directly related to the free energy in the glassy
phase, and the exponent associated with the probability of error (the error
exponent) at rates below capacity is strongly related to the free energy in
the paramagnetic phase. In fact, we derive alternative expressions of these
exponents in terms of the corresponding free energies, and make an attempt to
obtain some insights from these expressions. Finally, as a side result, we also
compare the phase diagram associated with a simple finite-temperature universal
decoder for discrete memoryless channels to that of the finite-temperature
decoder that is aware of the channel statistics.
Comment: 26 pages, 2 figures, submitted to IEEE Transactions on Information Theory
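As a point of reference, the partition function of finite-temperature decoding mentioned above is conventionally written as follows (a sketch in standard notation; the symbols $\mathcal{C}$, $\beta$, and $E(\cdot)$ are assumed here, not taken from the abstract):

```latex
Z(\beta \mid y) \;=\; \sum_{x' \in \mathcal{C}} P(y \mid x')^{\beta}
\;=\; \sum_{x' \in \mathcal{C}} e^{-\beta E(x')},
\qquad
E(x') \;\triangleq\; -\ln P(y \mid x'),
```

where $\mathcal{C}$ is the code, $y$ the channel output, and $\beta$ the inverse temperature. The per-symbol free energy $f(\beta) = -\lim_{n\to\infty} \frac{1}{n\beta}\ln Z(\beta \mid y)$ then distinguishes the three phases: the sum is dominated by the transmitted codeword (ferromagnetic), by exponentially many incorrect codewords (paramagnetic), or by subexponentially many minimum-energy incorrect codewords (glassy).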
Random Coding Error Exponents for the Two-User Interference Channel
This paper derives lower bounds on the error exponents for the
two-user interference channel under the random coding regime for several
ensembles. Specifically, we first analyze the standard random coding ensemble,
where the codebooks are composed of independently and identically distributed
(i.i.d.) codewords. For this ensemble, we focus on optimum decoding, which is
in contrast to other, suboptimal decoding rules that have been used in the
literature (e.g., joint typicality decoding, treating interference as noise,
etc.). The fact that the interfering signal is a codeword, rather than an
i.i.d. noise process, complicates the application of conventional techniques for
performance analysis of the optimum decoder. Moreover, these conventional
techniques yield loose bounds. Using analytical tools rooted
in statistical physics, as well as advanced union bounds, we derive
single-letter formulas for the random coding error exponents. We compare our
results with the best known lower bound on the error exponent, and show that
our exponents can be strictly better. Then, in the second part of this paper,
we consider more complicated coding ensembles, and find a lower bound on the
error exponent associated with the celebrated Han-Kobayashi (HK) random coding
ensemble, which is based on superposition coding.
Comment: accepted to IEEE Transactions on Information Theory
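For context, in the single-user setting the classical random coding error exponent takes Gallager's form (a standard textbook expression, not a result of this paper; $Q$ denotes an input distribution):

```latex
E_r(R) \;=\; \max_{0 \le \rho \le 1} \, \max_{Q} \,
\bigl[ E_0(\rho, Q) - \rho R \bigr],
\qquad
E_0(\rho, Q) \;=\; -\ln \sum_{y}
\Bigl[ \sum_{x} Q(x)\, P(y \mid x)^{1/(1+\rho)} \Bigr]^{1+\rho}.
```

The interference-channel exponents discussed above generalize this single-user quantity; the difficulty is that the interfering signal is itself a codeword rather than i.i.d. noise, which breaks the product structure that makes $E_0$ single-letter in the point-to-point case.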
On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection
The distributed hypothesis testing problem with full side-information is
studied. The trade-off (reliability function) between the two types of error
exponents under limited rate is studied in the following way. First, the
problem is reduced to the problem of determining the reliability function of
channel codes designed for detection (in analogy to a similar result which
connects the reliability function of distributed lossless compression and
ordinary channel codes). Second, a single-letter random-coding bound based on a
hierarchical ensemble, as well as a single-letter expurgated bound, are derived
for the reliability of channel-detection codes. Both bounds are derived for a
system which employs the optimal detection rule. We conjecture that the
resulting random-coding bound is ensemble-tight, and consequently optimal
within the class of quantization-and-binning schemes.
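As background, the two error exponents whose trade-off defines the reliability function can be written as follows (a sketch; the notation is assumed here, not taken from the abstract):

```latex
\epsilon_j \;=\; \Pr\bigl\{\hat{H} \ne H_j \,\big|\, H_j \text{ is true}\bigr\},
\quad j \in \{0, 1\},
\qquad
E_j \;\triangleq\; \liminf_{n \to \infty} \, -\tfrac{1}{n} \ln \epsilon_j,
```

where $\hat{H}$ is the detector's decision based on the rate-limited message and the side-information. The reliability function at rate $R$ is then the largest achievable $E_1$ as a function of a prescribed value of $E_0$.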
Channel Detection in Coded Communication
We consider the problem of block-coded communication, where in each block,
the channel law belongs to one of two disjoint sets. The decoder aims to
decode only messages that have passed through a channel from one of the sets,
and thus must detect which set contains the prevailing channel. We begin with
the simplified case where each of the sets is a singleton. For any given code,
we derive the optimum detection/decoding rule in the sense of the best
trade-off among the probabilities of decoding error, false alarm, and
misdetection, and also introduce sub-optimal detection/decoding rules which are
simpler to implement. Then, various achievable bounds on the error exponents
are derived, including the exact single-letter characterization of the random
coding exponents for the optimal detector/decoder. We then extend the random
coding analysis to general sets of channels, and show that there exists a
universal detector/decoder which performs asymptotically as well as the optimal
detector/decoder, when tuned to detect a channel from a specific pair of
channels. The case of a pair of binary symmetric channels is discussed in
detail.
Comment: Submitted to IEEE Transactions on Information Theory