On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection
The distributed hypothesis testing problem with full side-information is
studied. The trade-off (reliability function) between the two types of error
exponents under a rate constraint is characterized in the following way. First,
the problem is reduced to that of determining the reliability function of
channel codes designed for detection (in analogy to a similar result which
connects the reliability function of distributed lossless compression and
ordinary channel codes). Second, a single-letter random-coding bound based on a
hierarchical ensemble, as well as a single-letter expurgated bound, are derived
for the reliability of channel-detection codes. Both bounds are derived for a
system which employs the optimal detection rule. We conjecture that the
resulting random-coding bound is ensemble-tight, and consequently optimal
within the class of quantization-and-binning schemes.
Distributed Structure: Joint Expurgation for the Multiple-Access Channel
In this work we show how an improved lower bound on the error exponent of the
memoryless multiple-access (MAC) channel is attained via the use of linear
codes, thus demonstrating that structure can be beneficial even in cases where
there is no capacity gain. We show that if the MAC channel is modulo-additive,
then any error probability, and hence any error exponent, achievable by a
linear code for the corresponding single-user channel, is also achievable for
the MAC channel. Specifically, for an alphabet of prime cardinality, where
linear codes achieve the best known exponents in the single-user setting and
the optimal exponent above the critical rate, this performance carries over to
the MAC setting. At least at low rates, where expurgation is needed, our
approach strictly improves performance over previous results, where expurgation
was used at most for one of the users. Even when the MAC channel is not
additive, it may be transformed into such a channel. While the transformation
is lossy, we show that the distributed structure gain in some "nearly additive"
cases outweighs the loss, and thus the error exponent can improve upon the best
known error exponent for these cases as well. Finally we apply a similar
approach to the Gaussian MAC channel. We obtain an improvement over the best
known achievable exponent, given by Gallager, for certain rate pairs, using
lattice codes which satisfy a nesting condition.
Comment: Submitted to the IEEE Trans. Info. Theory
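The modulo-additive claim can be illustrated with a toy example: split the rows of one single-user linear code between the two users, so that the sum X1 + X2 is itself a codeword of that code and the receiver can decode the sum exactly as in the single-user setting, then read off both messages. The sketch below is my illustration, not the paper's construction; the [7,4] Hamming code stands in for a good single-user linear code, and the channel noise is a single bit flip.

```python
# Toy demo: over a modulo-additive MAC Y = X1 + X2 + Z (mod 2), splitting
# the rows of one single-user linear code between the two users lets the
# receiver decode the sum X1 + X2 as an ordinary codeword. (Illustration
# only; the [7,4] Hamming code stands in for a good single-user code.)
from itertools import product

# Generator matrix of the [7,4] Hamming code (min distance 3).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(rows, msg):
    """Encode msg (tuple of bits) with the given generator rows, mod 2."""
    n = len(rows[0])
    return tuple(sum(b * row[j] for b, row in zip(msg, rows)) % 2
                 for j in range(n))

def xor(a, b):
    return tuple((u + v) % 2 for u, v in zip(a, b))

# Split the four rows between the users: user 1 gets rows 0-1, user 2 rows 2-3.
rows1, rows2 = G[:2], G[2:]

m1, m2 = (1, 0), (1, 1)
x1, x2 = encode(rows1, m1), encode(rows2, m2)

# Channel: modulo-2 additive MAC with a single-bit noise realization.
z = (0, 0, 0, 0, 1, 0, 0)
y = xor(xor(x1, x2), z)

# ML decoding of the sum: x1 + x2 is a codeword of the full [7,4] code,
# so decode y to the nearest codeword over all 16 candidate messages.
best = min(product([0, 1], repeat=4),
           key=lambda m: sum(a != b for a, b in zip(encode(G, m), y)))

# The message pair is read off directly from the decoded sum.
assert best[:2] == m1 and best[2:] == m2
print("decoded message pair:", best[:2], best[2:])
```

Because the users' generator rows are disjoint, the map from the message pair to the sum codeword is one-to-one, which is what makes the final read-off possible.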
How to Achieve the Capacity of Asymmetric Channels
We survey coding techniques that enable reliable transmission at rates that
approach the capacity of an arbitrary discrete memoryless channel. In
particular, we take the point of view of modern coding theory and discuss how
recent advances in coding for symmetric channels help provide more efficient
solutions for the asymmetric case. We consider, in more detail, three basic
coding paradigms.
The first one is Gallager's scheme that consists of concatenating a linear
code with a non-linear mapping so that the input distribution can be
appropriately shaped. We explicitly show that both polar codes and spatially
coupled codes can be employed in this scenario. Furthermore, we derive a
scaling law between the gap to capacity, the cardinality of the input and
output alphabets, and the required size of the mapper.
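The shaping mapper in this first paradigm can be sketched as follows. This is my toy illustration, not the survey's construction: it assumes a dyadic target input distribution, so that blocks of d uniform code bits match it exactly; the scaling law mentioned above concerns how large the mapper must be when the target is not dyadic and can only be approximated to within 2^-d.

```python
# Toy sketch of a many-to-one shaping mapper: blocks of d uniform code bits
# are mapped onto channel inputs so that the induced input distribution is
# a dyadic approximation of a target distribution. (Hypothetical example;
# the target {5/8, 3/8} is chosen dyadic so d = 3 bits match it exactly.)
from itertools import product
from fractions import Fraction
from collections import Counter

d = 3                                   # uniform code bits per channel symbol
target = {0: Fraction(5, 8), 1: Fraction(3, 8)}   # assumed target distribution

# Many-to-one mapper: hand each symbol a number of the 2^d equiprobable
# bit-blocks proportional to its target probability.
blocks = list(product([0, 1], repeat=d))
mapper = {}
i = 0
for symbol, p in target.items():
    quota = int(p * 2 ** d)             # how many blocks this symbol receives
    for b in blocks[i:i + quota]:
        mapper[b] = symbol
    i += quota

# Uniform bits in -> shaped symbols out: since the target is dyadic, the
# induced distribution matches it exactly.
counts = Counter(mapper[b] for b in blocks)
induced = {s: Fraction(c, 2 ** d) for s, c in counts.items()}
assert induced == target
print("induced input distribution:", induced)
```

For a non-dyadic target, the same construction leaves a residual error of at most 2^-d per symbol probability, which is why the required mapper size grows as the gap to capacity shrinks.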
The second one is an integrated scheme in which the code is used both for
source coding, in order to create codewords distributed according to the
capacity-achieving input distribution, and for channel coding, in order to
provide error protection. Such a technique has been recently introduced by
Honda and Yamamoto in the context of polar codes, and we show how to apply it
also to the design of sparse graph codes.
The third paradigm is based on an idea of B\"ocherer and Mathar, and
separates the two tasks of source coding and channel coding by a chaining
construction that binds together several codewords. We state conditions on the
source code and on the channel code, and we describe how any source code and
channel code fulfilling those conditions can be combined into a
capacity-achieving scheme for asymmetric channels. In particular, we show that
polar codes, spatially coupled codes, and homophonic codes are suitable as
basic building blocks of the proposed coding strategy.
Comment: 32 pages, 4 figures, presented in part at Allerton'14 and published
in IEEE Trans. Inform. Theory
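The chaining idea in the third paradigm can be caricatured as follows. This is a schematic toy with hypothetical helper names and a noiseless stand-in channel, not the survey's construction (a real scheme uses a source code and a channel code satisfying the stated conditions): the short seed that each block's source decoder needs is carried inside the reliably decodable, channel-coded part of the adjacent block, so the receiver unzips the chain block by block and the cost of the seeds is amortized over the chain length.

```python
# Toy schematic of chaining (hypothetical helpers; the "channel" here is
# noiseless, so channel coding is the identity). Each block's source decoder
# needs a short seed it cannot infer from the channel output alone; the
# seed rides in the reliable part of the neighboring block.
import random

SEED_BITS = 4

def source_encode(payload, seed):
    # Toy stand-in for shaping: XOR-mask the payload with a seed-driven
    # pseudorandom pad (a real scheme would target the capacity-achieving
    # input distribution).
    rng = random.Random(seed)
    return [b ^ rng.randint(0, 1) for b in payload]

def source_decode(shaped, seed):
    rng = random.Random(seed)
    return [b ^ rng.randint(0, 1) for b in shaped]

def transmit_chain(payloads):
    """Send each shaped block together with the seed for the next block."""
    sent, prev_seed = [], 0             # the first seed is shared beforehand
    for payload in payloads:
        shaped = source_encode(payload, prev_seed)
        next_seed = random.randrange(2 ** SEED_BITS)
        # The seed travels in the channel-coded (hence reliable) part of the
        # block; with a noiseless toy channel it is simply carried verbatim.
        sent.append((shaped, next_seed))
        prev_seed = next_seed
    return sent

def receive_chain(sent):
    payloads, prev_seed = [], 0
    for shaped, next_seed in sent:
        payloads.append(source_decode(shaped, prev_seed))
        prev_seed = next_seed           # seed for the next block, recovered
    return payloads

data = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 0]]
assert receive_chain(transmit_chain(data)) == data
print("all chained blocks recovered")
```

Only the very first seed must be shared out of band; every later seed is delivered by the chain itself, which is the sense in which the construction "binds together several codewords".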