On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection
The distributed hypothesis testing problem with full side-information is
studied. The trade-off (reliability function) between the two types of error
exponents under limited rate is characterized in the following way. First, the
problem is reduced to the problem of determining the reliability function of
channel codes designed for detection (in analogy to a similar result which
connects the reliability function of distributed lossless compression and
ordinary channel codes). Second, a single-letter random-coding bound based on a
hierarchical ensemble, as well as a single-letter expurgated bound, are derived
for the reliability of channel-detection codes. Both bounds are derived for a
system which employs the optimal detection rule. We conjecture that the
resulting random-coding bound is ensemble-tight, and consequently optimal
within the class of quantization-and-binning schemes.
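The trade-off between the two error exponents has a classical closed form in the centralized (non-distributed) version of the problem: the exponentially tilted family between the two hypotheses traces the optimal curve (Hoeffding). The sketch below illustrates this with hypothetical distributions P0 and P1; it is a baseline, not the distributed scheme of the paper.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in nats."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical pair of hypotheses on a binary alphabet.
P0 = np.array([0.8, 0.2])
P1 = np.array([0.4, 0.6])

# The tilted family P_lam ∝ P0^(1-lam) * P1^lam traces the optimal
# centralized trade-off between the type-I exponent e0 = D(P_lam||P0)
# and the type-II exponent e1 = D(P_lam||P1).
curve = []
for lam in np.linspace(0.0, 1.0, 5):
    t = P0 ** (1 - lam) * P1 ** lam
    P_lam = t / t.sum()
    curve.append((lam, kl(P_lam, P0), kl(P_lam, P1)))
    print(f"lam={lam:.2f}: e0={curve[-1][1]:.4f}, e1={curve[-1][2]:.4f}")
```

At lam = 0 the curve recovers Stein's regime, e1 = D(P0||P1) with vanishing e0; the distributed problem above asks how this curve degrades under a rate constraint.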
Relations between random coding exponents and the statistical physics of random codes
The partition function pertaining to finite-temperature decoding of a
(typical) randomly chosen code is known to have three types of behavior,
corresponding to three phases in the plane of rate vs. temperature: the {\it
ferromagnetic phase}, corresponding to correct decoding, the {\it paramagnetic
phase}, of complete disorder, which is dominated by exponentially many
incorrect codewords, and the {\it glassy phase} (or the condensed phase), where
the system is frozen at minimum energy and dominated by subexponentially many
incorrect codewords. We show that the statistical physics associated with the
latter two phases is intimately related to random coding exponents. In
particular, the exponent associated with the probability of correct decoding at
rates above capacity is directly related to the free energy in the glassy
phase, and the exponent associated with probability of error (the error
exponent) at rates below capacity, is strongly related to the free energy in
the paramagnetic phase. In fact, we derive alternative expressions of these
exponents in terms of the corresponding free energies, and make an attempt to
obtain some insights from these expressions. Finally, as a side result, we also
compare the phase diagram associated with a simple finite-temperature universal
decoder for discrete memoryless channels, to that of the finite-temperature
decoder that is aware of the channel statistics. Comment: 26 pages, 2 figures,
submitted to IEEE Transactions on Information Theory
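The phase picture can be illustrated numerically on a toy random code over a BSC: treating the Hamming distance to the channel output as the energy of each codeword, the Gibbs measure over the incorrect codewords spreads over exponentially many of them at high temperature (paramagnetic) and condenses on the few minimum-distance ones at low temperature (glassy). All parameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n, rate = 20, 0.5                         # toy block length and code rate
M = int(2 ** (n * rate))                  # 1024 codewords
p = 0.1                                   # BSC crossover probability

code = rng.integers(0, 2, size=(M, n))    # i.i.d. random code
noise = (rng.random(n) < p).astype(int)
y = code[0] ^ noise                       # transmit codeword 0 over the BSC

# "Energy" of codeword m: Hamming distance between x_m and the output y.
energies = (code ^ y).sum(axis=1)

def gibbs_entropy(beta):
    """Entropy (nats) of the Gibbs weights over the incorrect codewords."""
    e = energies[1:].astype(float)        # exclude the transmitted word
    w = np.exp(-beta * (e - e.min()))     # stabilized Boltzmann weights
    q = w / w.sum()
    return float(-(q * np.log(q + 1e-300)).sum())  # epsilon guards log(0)

# High temperature (small beta): weight spread over many wrong codewords;
# low temperature (large beta): mass frozen on the minimum-energy ones.
print(f"entropy at beta=0.1: {gibbs_entropy(0.1):.3f} nats")
print(f"entropy at beta=5.0: {gibbs_entropy(5.0):.3f} nats")
```

The drop in Gibbs entropy as beta grows is a finite-size shadow of the paramagnetic-to-glassy transition discussed above.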
Bit-Interleaved Coded Modulation Revisited: A Mismatched Decoding Perspective
We revisit the information-theoretic analysis of bit-interleaved coded
modulation (BICM) by modeling the BICM decoder as a mismatched decoder. The
mismatched decoding model is well-defined for finite, yet arbitrary, block
lengths, and naturally captures the channel memory among the bits belonging to
the same symbol. We give two independent proofs of the achievability of the
BICM capacity calculated by Caire et al. where BICM was modeled as a set of
independent parallel binary-input channels whose output is the bitwise
log-likelihood ratio. Our first achievability proof uses typical sequences, and
shows that due to the random coding construction, the interleaver is not
required. The second proof is based on the random coding error exponents with
mismatched decoding, where the largest achievable rate is the generalized
mutual information. We show that the generalized mutual information of the
mismatched decoder coincides with the infinite-interleaver BICM capacity. We
also show that the error exponent, and hence the cutoff rate, of the BICM
mismatched decoder is upper bounded by that of coded modulation and may thus be
lower than in the infinite-interleaver model. We also consider the mutual
information appearing in the analysis of iterative decoding of BICM with EXIT
charts. We show that the corresponding symbol metric has knowledge of the
transmitted symbol and the EXIT mutual information admits a representation as a
pseudo-generalized mutual information, which is in general not achievable. A
different symbol decoding metric, for which the extrinsic side information
refers to the hypothesized symbol, induces a generalized mutual information
lower than the coded modulation capacity. Comment: submitted to the IEEE
Transactions on Information Theory. Conference version in 2008 IEEE
International Symposium on Information Theory, Toronto, Canada, July 2008
Exponent Trade-off for Hypothesis Testing Over Noisy Channels
The distributed hypothesis testing (DHT) problem is considered, in which the joint distribution of a pair of sequences present at separated terminals is governed by one of two possible hypotheses. The decision needs to be made by one of the terminals (the "decoder"). The other terminal (the "encoder") uses a noisy channel in order to help the decoder with the decision. This problem can be seen as a generalization of the side-information variant of the DHT problem, where the rate-limited link is replaced by a noisy channel. A recent work by Salehkalaibar and Wigger has derived an achievable Stein exponent for this problem, by employing concepts from the DHT scheme of Shimokawa et al., and from unequal error protection coding for a single special message. In this work we extend the view to a trade-off between the two error exponents, additionally building on multiple codebooks and two special messages with unequal error protection. As a by-product, we also present an achievable exponent trade-off for a rate-limited link, which generalizes Shimokawa et al.
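For orientation, the centralized benchmark behind Stein-exponent results of this kind is the divergence D(P_H0 || P_H1) between the two joint distributions: with full observations at the decoder, this is the best type-II exponent at fixed type-I error, and distributed or noisy-channel schemes can only achieve smaller exponents. The joint distributions below are hypothetical.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical joint distributions of (X, Y) under the two hypotheses.
P_H0 = np.array([[0.4, 0.1], [0.1, 0.4]])      # correlated under H0
P_H1 = np.array([[0.25, 0.25], [0.25, 0.25]])  # independent under H1

# Centralized Stein exponent: best type-II decay rate with full observations.
stein = kl(P_H0.ravel(), P_H1.ravel())
print(f"centralized Stein exponent: {stein:.4f} nats")
```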
Capacity and Random-Coding Exponents for Channel Coding with Side Information
Capacity formulas and random-coding exponents are derived for a generalized
family of Gel'fand-Pinsker coding problems. These exponents yield asymptotic
upper bounds on the achievable log probability of error. In our model,
information is to be reliably transmitted through a noisy channel with finite
input and output alphabets and random state sequence, and the channel is
selected by a hypothetical adversary. Partial information about the state
sequence is available to the encoder, adversary, and decoder. The design of the
transmitter is subject to a cost constraint. Two families of channels are
considered: 1) compound discrete memoryless channels (CDMC), and 2) channels
with arbitrary memory, subject to an additive cost constraint, or more
generally to a hard constraint on the conditional type of the channel output
given the input. Both problems are closely connected. The random-coding
exponent is achieved using a stacked binning scheme and a maximum penalized
mutual information decoder, which may be thought of as an empirical generalized
Maximum a Posteriori decoder. For channels with arbitrary memory, the
random-coding exponents are larger than their CDMC counterparts. Applications
of this study include watermarking, data hiding, communication in presence of
partially known interferers, and problems such as broadcast channels, all of
which involve the fundamental idea of binning. Comment: to appear in IEEE
Transactions on Information Theory, without Appendices G and
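For an ordinary DMC without side information, the random-coding exponent achieved by maximum-likelihood decoding is Gallager's E_r(R) = max over rho in [0, 1] of [E0(rho) - rho R]. The sketch below evaluates it for a BSC; this is the classical baseline, not the paper's compound/adversarial setting with its penalized-mutual-information decoder.

```python
import numpy as np

def E0(rho, Q, W):
    """Gallager's E0(rho) in nats; W[x, y] is the channel transition matrix."""
    inner = (Q[:, None] * W ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log((inner ** (1.0 + rho)).sum())

def random_coding_exponent(R, Q, W):
    """E_r(R) = max_{0 <= rho <= 1} [E0(rho) - rho * R], rate R in nats/use."""
    grid = np.linspace(0.0, 1.0, 201)
    return max(E0(rho, Q, W) - rho * R for rho in grid)

p = 0.1                                    # BSC crossover probability
W = np.array([[1 - p, p], [p, 1 - p]])
Q = np.array([0.5, 0.5])                   # uniform input distribution

# BSC capacity in nats: ln(2) - H(p).
C = np.log(2) + p * np.log(p) + (1 - p) * np.log(1 - p)
for R in (0.1, 0.2, 0.3):
    print(f"R={R:.2f} nats: E_r={random_coding_exponent(R, Q, W):.4f}")
print(f"capacity C={C:.4f} nats, E_r(C)={random_coding_exponent(C, Q, W):.4f}")
```

E_r(R) decreases with rate and vanishes at capacity, mirroring the behavior of the stacked-binning exponents derived in the paper for the side-information setting.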
The Sender-Excited Secret Key Agreement Model: Capacity, Reliability and Secrecy Exponents
We consider the secret key generation problem when sources are randomly
excited by the sender and there is a noiseless public discussion channel. Our
setting is thus similar to recent works on channels with action-dependent
states where the channel state may be influenced by some of the parties
involved. We derive single-letter expressions for the secret key capacity
through a type of source emulation analysis. We also derive lower bounds on the
achievable reliability and secrecy exponents, i.e., the exponential rates of
decay of the probability of decoding error and of the information leakage.
These exponents allow us to determine a set of strongly-achievable secret key
rates. For degraded eavesdroppers the maximum strongly-achievable rate equals
the secret key capacity; our exponents can also be specialized to previously
known results.
In deriving our strong achievability results we introduce a coding scheme
that combines wiretap coding (to excite the channel) and key extraction (to
distill keys from residual randomness). The secret key capacity is naturally
seen to be a combination of both source- and channel-type randomness. Through
examples we illustrate a fundamental interplay between the portion of the
secret key rate due to each type of randomness. We also illustrate inherent
tradeoffs between the achievable reliability and secrecy exponents. Our new
scheme also naturally accommodates rate limits on the public discussion. We
show that under rate constraints we are able to achieve larger rates than those
that can be attained through a pure source emulation strategy. Comment: 18
pages, 8 figures; Submitted to the IEEE Transactions on Information Theory;
Revised in Oct 201
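The degraded-eavesdropper case mentioned above can be illustrated on a toy source model: for a Markov chain X - Y - Z (eavesdropper's observation degraded with respect to the legitimate receiver's), the secret key capacity with unlimited one-way public discussion is the known difference I(X;Y) - I(X;Z). The BSC parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def cascade(a, b):
    """Crossover probability of two cascaded BSCs."""
    return a * (1 - b) + (1 - a) * b

p, q = 0.05, 0.15     # legitimate-channel and extra eavesdropper degradation
# X uniform, Y = BSC(p)(X), Z = BSC(q)(Y): Markov chain X - Y - Z.
I_xy = 1.0 - h2(p)
I_xz = 1.0 - h2(cascade(p, q))

key_rate = I_xy - I_xz    # I(X;Y) - I(X;Z): key capacity for this degraded model
print(f"I(X;Y)={I_xy:.4f}, I(X;Z)={I_xz:.4f}, key rate={key_rate:.4f} bits")
```

Varying p and q shifts how much of the key rate comes from the source randomness versus the channel degradation, a toy version of the interplay discussed in the abstract.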