On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection
The distributed hypothesis testing problem with full side-information is
studied. The trade-off (reliability function) between the two types of error
exponents under limited rate is characterized in two steps. First, the
problem is reduced to the problem of determining the reliability function of
channel codes designed for detection (in analogy to a similar result which
connects the reliability function of distributed lossless compression and
ordinary channel codes). Second, a single-letter random-coding bound based on a
hierarchical ensemble, as well as a single-letter expurgated bound, are derived
for the reliability of channel-detection codes. Both bounds are derived for a
system which employs the optimal detection rule. We conjecture that the
resulting random-coding bound is ensemble-tight, and consequently optimal
within the class of quantization-and-binning schemes.
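As a point of reference for the error-exponent trade-off this abstract refers to, the sketch below traces the classical single-terminal Hoeffding trade-off between the two exponents for i.i.d. testing between two fixed distributions, via the tilted (geometric-mixture) family. This is a minimal illustration only: the distributions `p0` and `p1` are arbitrary placeholders, and the distributed, rate-limited setting of the paper is not modeled here.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def exponent_tradeoff(p0, p1, num=11):
    """Hoeffding trade-off curve: achievable (Type I, Type II) error-exponent
    pairs for i.i.d. testing between p0 and p1, traced by the tilted
    distributions p_lam proportional to p0^(1-lam) * p1^lam."""
    pairs = []
    for lam in np.linspace(0.0, 1.0, num):
        tilted = p0 ** (1.0 - lam) * p1 ** lam
        tilted /= tilted.sum()
        pairs.append((kl(tilted, p0), kl(tilted, p1)))
    return pairs

# Illustrative distributions (assumed, not from the paper).
p0 = np.array([0.5, 0.3, 0.2])
p1 = np.array([0.2, 0.3, 0.5])
for e0, e1 in exponent_tradeoff(p0, p1, num=5):
    print(f"Type I exponent {e0:.4f}  <->  Type II exponent {e1:.4f}")
```

At one endpoint of the curve the Type I exponent is zero and the Type II exponent equals D(p0 || p1), which is the Stein-lemma operating point that the reliability-function analysis generalizes.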
Strong converse exponents for a quantum channel discrimination problem and quantum-feedback-assisted communication
This paper studies the difficulty of discriminating between an arbitrary
quantum channel and a "replacer" channel that discards its input and replaces
it with a fixed state. We show that, in this particular setting, the most
general adaptive discrimination strategies provide no asymptotic advantage over
non-adaptive tensor-power strategies. This conclusion follows by proving a
quantum Stein's lemma for this channel discrimination setting, showing that a
constant bound on the Type I error leads to the Type II error decreasing to
zero exponentially quickly at a rate determined by the maximum relative entropy
registered between the channels. The strong converse part of the lemma states
that any attempt to make the Type II error decay to zero at a rate faster than
the channel relative entropy implies that the Type I error necessarily
converges to one. We then refine this latter result by identifying the optimal
strong converse exponent for this task. As a consequence of these results, we
can establish a strong converse theorem for the quantum-feedback-assisted
capacity of a channel, sharpening a result due to Bowen. Furthermore, our
channel discrimination result demonstrates the asymptotic optimality of a
non-adaptive tensor-power strategy in the setting of quantum illumination, as
was used in prior work on the topic. The sandwiched Rényi relative entropy is a
key tool in our analysis. Finally, by combining our results with recent results
of Hayashi and Tomamichel, we find a novel operational interpretation of the
mutual information of a quantum channel N as the optimal Type II error exponent
when discriminating between a large number of independent instances of N and an
arbitrary "worst-case" replacer channel chosen from the set of all replacer
channels.
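The sandwiched Rényi relative entropy named above as a key tool has a closed form for density matrices: D_α(ρ‖σ) = (1/(α−1)) log Tr[(σ^((1−α)/2α) ρ σ^((1−α)/2α))^α]. Below is a minimal numpy sketch of this quantity only, not of the channel-discrimination analysis; the example states are arbitrary, and σ is assumed full rank so that negative matrix powers are defined.

```python
import numpy as np

def herm_power(a, p):
    """Power of a Hermitian PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * w ** p) @ v.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched Renyi relative entropy D_alpha(rho || sigma) =
    1/(alpha-1) * log Tr[(sigma^((1-alpha)/(2 alpha)) rho
                          sigma^((1-alpha)/(2 alpha)))^alpha].
    Assumes sigma is full rank and alpha > 0, alpha != 1."""
    s = herm_power(sigma, (1.0 - alpha) / (2.0 * alpha))
    inner = s @ rho @ s
    return float(np.log(np.trace(herm_power(inner, alpha)).real) / (alpha - 1.0))

# Illustrative qubit states (assumed, not from the paper).
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.eye(2) / 2.0
for alpha in (0.5, 2.0, 10.0):
    print(f"alpha = {alpha}: D_alpha = {sandwiched_renyi(rho, sigma, alpha):.4f}")
```

As α → 1 this quantity recovers the (Umegaki) relative entropy, and as α → ∞ it converges to the max-relative entropy, the rate appearing in the Stein-lemma statement above.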
Efficient human-machine control with asymmetric marginal reliability input devices
Input devices such as motor-imagery brain-computer interfaces (BCIs) are often unreliable. In theory, channel coding can be used in the human-machine loop to robustly encapsulate intention through noisy input devices, but standard feedforward error correction codes cannot be practically applied. We present a practical and general probabilistic user interface for binary input devices with very high noise levels. Our approach allows any level of robustness to be achieved, regardless of noise level, where reliable feedback such as a visual display is available. In particular, we show efficient zooming interfaces based on feedback channel codes for two-class binary problems with noise levels characteristic of modalities such as motor-imagery-based BCI, with accuracy <75%. We outline general principles based on separating channel, line and source coding in human-machine loop design. We develop a novel selection mechanism which can achieve arbitrarily reliable selection with a noisy two-state button. We show automatic online adaptation to changing channel statistics, and operation without precise calibration of error rates. A range of visualisations are used to construct user interfaces which implicitly code for these channels in a way that is transparent to users. We validate our approach with a set of Monte Carlo simulations, and with empirical results from a human-in-the-loop experiment showing that the approach operates effectively at 50-70% of the theoretical optimum across a range of channel conditions.
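To make the feedback-coding idea concrete, here is a minimal sketch, not the authors' implementation, of arbitrarily reliable selection with a noisy two-state button: the machine maintains a posterior over candidate targets, queries at each step whether the target lies in a subset holding roughly half the posterior mass (the "zooming" query), and updates through the known bit-flip rate until one target exceeds a confidence threshold. The target count, flip probability, and threshold below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_select(n_targets, true_target, flip_prob, confidence=0.999, max_steps=200):
    """Bayesian selection with a noisy binary button.

    Each step asks whether the target lies in a subset covering ~half the
    posterior mass; the simulated user's yes/no answer is flipped with
    probability flip_prob, and the posterior is updated through that
    binary symmetric channel."""
    posterior = np.full(n_targets, 1.0 / n_targets)
    for step in range(1, max_steps + 1):
        # Greedily pick a query set covering roughly half the posterior mass.
        order = np.argsort(-posterior)
        cum = np.cumsum(posterior[order])
        query = np.zeros(n_targets, dtype=bool)
        query[order[: int(np.searchsorted(cum, 0.5)) + 1]] = True
        # Simulated noisy user: truthful answer, flipped with prob flip_prob.
        answer = query[true_target] ^ (rng.random() < flip_prob)
        # Bayes update: targets consistent with the answer get weight 1 - p.
        posterior *= np.where(query == answer, 1.0 - flip_prob, flip_prob)
        posterior /= posterior.sum()
        if posterior.max() >= confidence:
            return int(posterior.argmax()), step
    return int(posterior.argmax()), max_steps

choice, steps = noisy_select(n_targets=16, true_target=11, flip_prob=0.3)
print(f"selected target {choice} after {steps} noisy button presses")
```

Because every press only reweights the posterior, the scheme tolerates any flip probability below 0.5; a noisier button simply means more presses before the confidence threshold is crossed, which is the "any level of robustness, regardless of noise level" behavior described above.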
Divergence Measures
Data science, information theory, probability theory, statistical learning and other related disciplines greatly benefit from non-negative measures of dissimilarity between pairs of probability measures. These are known as divergence measures, and exploring their mathematical foundations and diverse applications is of significant interest. The present Special Issue, entitled “Divergence Measures: Mathematical Foundations and Applications in Information-Theoretic and Statistical Problems”, includes eight original contributions, and it is focused on the study of the mathematical properties and applications of classical and generalized divergence measures from an information-theoretic perspective. It mainly deals with two key generalizations of the relative entropy: namely, the Rényi divergence and the important class of f-divergences. It is our hope that the readers will find interest in this Special Issue, which will stimulate further research in the study of the mathematical foundations and applications of divergence measures.
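Since this Special Issue centers on the Rényi divergence and f-divergences, a minimal numerical sketch of both definitions for discrete distributions follows; the distributions p and q are arbitrary illustrative examples, taken strictly positive to sidestep boundary conventions.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(P || Q) = 1/(alpha-1) * log sum_i p_i^alpha q_i^(1-alpha),
    for alpha > 0, alpha != 1; it recovers the relative entropy as alpha -> 1.
    Assumes q is strictly positive."""
    mask = p > 0
    return float(np.log(np.sum(p[mask] ** alpha * q[mask] ** (1.0 - alpha))) / (alpha - 1.0))

def f_divergence(p, q, f):
    """f-divergence D_f(P || Q) = sum_i q_i * f(p_i / q_i),
    for convex f with f(1) = 0."""
    return float(np.sum(q * f(p / q)))

# Illustrative distributions (assumed, strictly positive).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])

# Relative entropy (KL) is the f-divergence with f(t) = t * log(t).
kl = f_divergence(p, q, lambda t: t * np.log(t))
# Total variation distance uses f(t) = |t - 1| / 2.
tv = f_divergence(p, q, lambda t: np.abs(t - 1.0) / 2.0)
print(f"KL = {kl:.4f}, TV = {tv:.4f}, Renyi(0.5) = {renyi_divergence(p, q, 0.5):.4f}")
```

Both generalizations reduce to the relative entropy in a limiting or special case, which is why the relative entropy sits at the center of the families this issue studies.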