Channel Detection in Coded Communication
We consider the problem of block-coded communication, where in each block,
the channel law belongs to one of two disjoint sets. The decoder is required to
decode only messages that have passed through a channel from one of the sets, and
thus must detect which set contains the prevailing channel. We begin with
the simplified case where each of the sets is a singleton. For any given code,
we derive the optimum detection/decoding rule in the sense of the best
trade-off among the probabilities of decoding error, false alarm, and
misdetection, and also introduce sub-optimal detection/decoding rules which are
simpler to implement. Then, various achievable bounds on the error exponents
are derived, including the exact single-letter characterization of the random
coding exponents for the optimal detector/decoder. We then extend the random
coding analysis to general sets of channels, and show that there exists a
universal detector/decoder which performs asymptotically as well as the optimal
detector/decoder, when tuned to detect a channel from a specific pair of
channels. The case of a pair of binary symmetric channels is discussed in
detail.
Comment: Submitted to IEEE Transactions on Information Theory.
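As a rough illustration of the singleton case (each set containing a single, fully known channel), the sketch below implements a threshold test between two candidate BSCs: it compares the best log-likelihood achieved over the codebook under each channel, and decodes only when the first channel wins by a margin. The codebook, parameters, and tie-breaking here are illustrative assumptions, not the paper's optimum detection/decoding rule.

```python
import math
import random

def bsc_loglik(x, y, p):
    """Log-likelihood of output y given input x over a BSC(p)."""
    d = sum(a != b for a, b in zip(x, y))  # Hamming distance
    return d * math.log(p) + (len(x) - d) * math.log(1 - p)

def detect_and_decode(y, code, p1, p2, threshold=0.0):
    """Toy joint detector/decoder: compare the best log-likelihood over
    the codebook under each candidate BSC; decode under channel 1 only
    if it beats channel 2 by `threshold`, otherwise reject."""
    ll1 = max(bsc_loglik(x, y, p1) for x in code)
    ll2 = max(bsc_loglik(x, y, p2) for x in code)
    if ll1 - ll2 >= threshold:
        return max(code, key=lambda x: bsc_loglik(x, y, p1))
    return None  # rejection: the other channel set is declared

rng = random.Random(0)
n, M = 12, 4
code = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(M)]
y = tuple(b ^ (rng.random() < 0.05) for b in code[0])  # BSC(0.05) output
print(detect_and_decode(y, code, p1=0.05, p2=0.4))
```

Raising `threshold` trades misdetection for false alarm, mirroring the trade-off studied in the paper.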
Error exponents of typical random codes
We define the error exponent of the typical random code as the long-block
limit of the negative normalized expectation of the logarithm of the error
probability of the random code, as opposed to the traditional random coding
error exponent, which is the limit of the negative normalized logarithm of the
expectation of the error probability. For the ensemble of uniformly randomly
drawn fixed composition codes, we provide exact error exponents of typical
random codes for a general discrete memoryless channel (DMC) and a wide class
of (stochastic) decoders, collectively referred to as the generalized
likelihood decoder (GLD). This ensemble of fixed composition codes is shown to
be no worse than any other ensemble of independent codewords that are drawn
under a permutation-invariant distribution (e.g., i.i.d. codewords). We also
present relationships between the error exponent of the typical random code and
the ordinary random coding error exponent, as well as the expurgated exponent
for the GLD. Finally, we demonstrate that our analysis technique is applicable
also to more general communication scenarios, such as list decoding (for
fixed-size lists) as well as decoding with an erasure/list option in Forney's
sense.
Comment: 26 pages, submitted for publication.
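The distinction between the two exponent definitions can be seen numerically. The sketch below (an illustrative Monte Carlo experiment under assumed toy parameters, not taken from the paper) draws small random codes over a BSC, computes each code's exact ML error probability, and contrasts the exponent built from the expected error probability with the one built from the expected log-error-probability; by Jensen's inequality the latter is never smaller.

```python
import itertools
import math
import random

def pe_ml(code, p):
    """Exact ML block-error probability of `code` over a BSC(p) with
    uniform messages; likelihood ties are counted as errors."""
    n, M = len(code[0]), len(code)
    pe = 0.0
    for y in itertools.product((0, 1), repeat=n):
        d = [sum(a != b for a, b in zip(x, y)) for x in code]
        liks = [p ** di * (1 - p) ** (n - di) for di in d]
        best = max(liks)
        for i in range(M):
            if liks[i] < best or liks.count(best) > 1:
                pe += liks[i] / M
    return pe

rng = random.Random(1)
n, M, p, trials = 8, 4, 0.1, 200
pes = [pe_ml([tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(M)], p)
       for _ in range(trials)]
rc_exp = -math.log(sum(pes) / trials) / n          # from E[Pe]
trc_exp = -sum(map(math.log, pes)) / (trials * n)  # from E[log Pe]
print(rc_exp, trc_exp)  # trc_exp >= rc_exp by Jensen's inequality
```

Note this uses i.i.d. rather than fixed composition codewords, purely for simplicity of the sketch.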
On the Error Exponents of ARQ Channels with Deadlines
We consider communication over Automatic Repeat reQuest (ARQ) memoryless
channels with deadlines. In particular, an upper bound L is imposed on the
maximum number of ARQ transmission rounds. In this setup, it is shown that
incremental redundancy ARQ outperforms Forney's memoryless decoding in terms of
the achievable error exponents.
Comment: 16 pages, 6 figures, Submitted to the IEEE Trans. on Information Theory.
Expurgated Bounds for the Asymmetric Broadcast Channel
This work contains two main contributions concerning the expurgation of
hierarchical ensembles for the asymmetric broadcast channel. The first is an
analysis of the optimal maximum likelihood (ML) decoders for the weak and
strong user. Two different methods of code expurgation are used, which
provide two competing error exponents. The second is the derivation of
expurgated exponents under the generalized stochastic likelihood decoder (GLD).
We prove that the GLD exponents are at least as tight as the maximum between
the random coding error exponents derived in an earlier work by Averbuch and
Merhav (2017) and one of our ML-based expurgated exponents. By that, we
actually prove the existence of hierarchical codebooks that achieve the best of
the random coding exponent and the expurgated exponent simultaneously for both
users.
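For context, the classical single-user versions of these quantities (Gallager's random-coding and expurgated exponents, not the broadcast-channel exponents derived in this work) can be evaluated numerically for a BSC; at low rates the expurgated exponent is the larger of the two.

```python
import math

def bsc_expurgated(R, p, grid=2000):
    """Gallager's single-user expurgated exponent for a BSC(p) with
    uniform inputs: max over rho >= 1 of Ex(rho) - rho*R (rates in nats)."""
    z = 2 * math.sqrt(p * (1 - p))  # Bhattacharyya parameter
    best = 0.0
    for i in range(grid):
        rho = 1 + 10 * i / grid  # search rho over [1, 11)
        ex = -rho * math.log(0.5 * (1 + z ** (1 / rho)))
        best = max(best, ex - rho * R)
    return best

def bsc_random_coding(R, p, grid=2000):
    """Gallager's random-coding exponent for a BSC(p), uniform inputs:
    max over 0 <= rho <= 1 of E0(rho) - rho*R (rates in nats)."""
    best = 0.0
    for i in range(grid + 1):
        rho = i / grid
        e0 = rho * math.log(2) - (1 + rho) * math.log(
            p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho)))
        best = max(best, e0 - rho * R)
    return best

p, R = 0.05, 0.01  # a low rate, where expurgation improves the exponent
print(bsc_expurgated(R, p), bsc_random_coding(R, p))
```

Taking the maximum of the two curves mirrors, in the single-user setting, the "best of both exponents" guarantee stated above for the GLD.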
Multiple Packing: Lower Bounds via Error Exponents
We derive lower bounds on the maximal rates for multiple packings in
high-dimensional Euclidean spaces. Multiple packing is a natural generalization
of the sphere packing problem. For any $N > 0$ and $L \in \mathbb{Z}_{\ge 2}$, a
multiple packing is a set $\mathcal{C}$ of points in $\mathbb{R}^n$ such that
any point in $\mathbb{R}^n$ lies in the intersection of at most $L - 1$ balls
of radius $\sqrt{nN}$ around points in $\mathcal{C}$. We study this problem
for both bounded point sets whose points have norm at most $\sqrt{nP}$ for some
constant $P$ and unbounded point sets whose points are allowed to be anywhere
in $\mathbb{R}^n$. Given a well-known connection with coding theory, multiple
packings can be viewed as the Euclidean analog of list-decodable codes, which
are well-studied for finite fields. We derive the best known lower bounds on
the optimal multiple packing density. This is accomplished by establishing a
curious inequality which relates the list-decoding error exponent for additive
white Gaussian noise channels, a quantity of average-case nature, to the
list-decoding radius, a quantity of worst-case nature. We also derive various
bounds on the list-decoding error exponent in both bounded and unbounded
settings which are of independent interest beyond multiple packing.
Comment: The paper arXiv:2107.05161 has been split into three parts with new
results added and significant revision. This paper is one of the three parts.
The other two are arXiv:2211.04407 and arXiv:2211.0440
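The covering condition in the definition can be probed empirically. The sketch below (an illustrative check under assumed toy parameters, not from the paper) samples query points near a random point set and reports the largest number of radius-sqrt(nN) balls any sampled point falls into; this is a lower bound on the packing's true worst-case list size.

```python
import math
import random

def empirical_list_size(points, radius, rng, num_queries=5000):
    """Largest number of `points` within `radius` of any sampled query
    point; a lower bound on the packing's worst-case list size."""
    n = len(points[0])
    worst = 0
    for _ in range(num_queries):
        center = rng.choice(points)
        # perturb an existing point: deep overlaps occur near the points
        q = [c + rng.gauss(0, radius / math.sqrt(n)) for c in center]
        near = sum(sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
                   for p in points)
        worst = max(worst, near)
    return worst

rng = random.Random(3)
n, N, P = 8, 0.5, 4.0
# 50 points with norm at most sqrt(n * P) (drawn inside a box for simplicity)
points = [tuple(rng.uniform(-1, 1) * math.sqrt(P) for _ in range(n))
          for _ in range(50)]
max_overlap = empirical_list_size(points, math.sqrt(n * N), rng)
print(max_overlap)  # an (N, L)-multiple packing needs this to stay <= L - 1
```

Certifying the condition for every point of $\mathbb{R}^n$ is of course harder; the sampling here can only exhibit violations, not rule them out.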
Statistical mechanics of error exponents for error-correcting codes
Error exponents characterize the exponential decay, when increasing message
length, of the probability of error of many error-correcting codes. To tackle
the long-standing problem of computing them exactly, we introduce a general,
thermodynamic, formalism that we illustrate with maximum-likelihood decoding of
low-density parity-check (LDPC) codes on the binary erasure channel (BEC) and
the binary symmetric channel (BSC). In this formalism, we apply the cavity
method for large deviations to derive expressions for both the average and
typical error exponents, which differ by the procedure used to select the codes
from specified ensembles. When decreasing the noise intensity, we find that two
phase transitions take place, at two different levels: a glass to ferromagnetic
transition in the space of codewords, and a paramagnetic to glass transition in
the space of codes.
Comment: 32 pages, 13 figures.
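On the BEC, maximum-likelihood decoding of a linear code reduces to linear algebra: it succeeds exactly when the columns of the parity-check matrix indexed by the erased positions are linearly independent over GF(2). The sketch below (an annealed toy estimate over an assumed sparse random ensemble, not the cavity-method computation of the paper) uses this fact to estimate the block-error probability at two erasure rates.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def bec_block_error(n, m, eps, rng, trials=400, row_weight=4):
    """Annealed estimate of the ML block-error probability of a sparse
    random m x n parity-check code on a BEC(eps): decoding succeeds iff
    the erased columns of H are linearly independent over GF(2)."""
    errs = 0
    for _ in range(trials):
        H = [sum(1 << j for j in rng.sample(range(n), row_weight))
             for _ in range(m)]  # a fresh sparse code each trial
        erased = [j for j in range(n) if rng.random() < eps]
        sub = [sum(((h >> j) & 1) << k for k, j in enumerate(erased))
               for h in H]  # H restricted to the erased columns
        if gf2_rank(sub) < len(erased):
            errs += 1
    return errs / trials

rng = random.Random(4)
pe_low, pe_high = (bec_block_error(40, 20, 0.2, rng),
                   bec_block_error(40, 20, 0.4, rng))
print(pe_low, pe_high)
```

Averaging the error probability over codes before or after taking logarithms is precisely the average-versus-typical distinction the formalism above addresses.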
Generalized Random Gilbert-Varshamov Codes
We introduce a random coding technique for transmission over discrete memoryless channels, reminiscent of the basic construction attaining the Gilbert-Varshamov bound for codes in Hamming spaces. The code construction is based on drawing codewords recursively from a fixed type class, in such a way that a newly generated codeword must be at a certain minimum distance from all previously chosen codewords, according to some generic distance function. We derive an achievable error exponent for this construction and prove its tightness with respect to the ensemble average. We show that the exponent recovers the Csiszár and Körner exponent as a special case, which is known to be at least as high as both the random-coding and expurgated exponents, and we establish the optimality of certain choices of the distance function. In addition, for additive distances and decoding metrics, we present an equivalent dual expression, along with a generalization to infinite alphabets via cost-constrained random coding.
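A minimal sketch of the recursive construction, specialized for concreteness (an assumption of this sketch, not the paper's general setting) to binary constant-weight words, with the Hamming distance playing the role of the generic distance function:

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length tuples."""
    return sum(x != y for x, y in zip(a, b))

def random_gv_code(n, M, dmin, weight, rng, max_tries=10000):
    """Draw codewords recursively from the type class of binary words of
    Hamming weight `weight`, rejecting any candidate within Hamming
    distance dmin - 1 of a previously accepted codeword."""
    code = []
    for _ in range(max_tries):
        if len(code) == M:
            break
        support = set(rng.sample(range(n), weight))  # uniform in the type class
        cw = tuple(1 if i in support else 0 for i in range(n))
        if all(hamming(cw, c) >= dmin for c in code):
            code.append(cw)
    return code

rng = random.Random(5)
code = random_gv_code(n=16, M=8, dmin=4, weight=8, rng=rng)
print(len(code), min(hamming(a, b) for i, a in enumerate(code)
                     for b in code[i + 1:]))
```

Swapping `hamming` for another distance function (or the type class for another composition) gives the generalized ensemble the paper analyzes.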