Comparison of Channels: Criteria for Domination by a Symmetric Channel
This paper studies the basic question of whether a given channel can be
dominated (in the precise sense of being more noisy) by a q-ary symmetric
channel. The concept of the "less noisy" relation between channels originated
in network information theory (broadcast channels) and is defined in terms of
mutual information or Kullback-Leibler divergence. We provide an equivalent
characterization in terms of χ²-divergence. Furthermore, we develop a
simple criterion for domination by a q-ary symmetric channel in terms of the
minimum entry of the stochastic matrix defining the channel. The criterion
is strengthened for the special case of additive noise channels over finite
Abelian groups. Finally, it is shown that domination by a symmetric channel
implies (via comparison of Dirichlet forms) a logarithmic Sobolev inequality
for the original channel.
Comment: 31 pages, 2 figures. Presented at 2017 IEEE International Symposium
on Information Theory (ISIT).
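For readers unfamiliar with the ordering, the standard definition of the "less noisy" relation referenced above can be sketched as follows (notation ours, not taken verbatim from the paper):

\[
W \succeq_{\mathrm{l.n.}} V
\quad\Longleftrightarrow\quad
D(PW \,\|\, QW) \;\ge\; D(PV \,\|\, QV)
\quad\text{for all input distributions } P, Q,
\]

where \(PW\) denotes the output distribution of channel \(W\) under input distribution \(P\); equivalently, \(I(U;Y_W) \ge I(U;Y_V)\) for every Markov chain \(U \to X \to (Y_W, Y_V)\).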
Error-and-Erasure Decoding for Block Codes with Feedback
Inner and outer bounds are derived on the optimal performance of fixed-length
block codes on discrete memoryless channels (DMCs) with feedback and
errors-and-erasures decoding. First, an inner bound is derived using a
two-phase encoding scheme with communication and control phases, together with
the optimal decoding rule for the given encoding scheme among decoding rules
that can be represented in terms of pairwise comparisons between the messages.
Then an outer bound is derived using a generalization of the straight-line
bound to errors-and-erasures decoders and the optimal error-exponent trade-off
of a feedback encoder with two messages. In addition, upper and lower bounds
are derived for the optimal erasure exponent of error-free block codes in
terms of the rate. Finally, we present a proof of the fact that the optimal
trade-off between error exponents of a two-message code does not increase with
feedback on DMCs.
Comment: 33 pages, 1 figure.
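As a sketch of the quantities being traded off (standard definitions; notation ours, not taken from the paper): for a block code of length \(n\) carrying message \(M\), an errors-and-erasures decoder may output a special erasure symbol \(\mathsf{x}\) in addition to a message estimate, and the two exponents are

\[
E_{\mathrm{e}} \;=\; \liminf_{n\to\infty} -\frac{1}{n}\log \Pr\bigl[\hat{M}\notin\{M,\mathsf{x}\}\bigr],
\qquad
E_{\mathrm{x}} \;=\; \liminf_{n\to\infty} -\frac{1}{n}\log \Pr\bigl[\hat{M}=\mathsf{x}\bigr],
\]

i.e., the exponential decay rates of the undetected-error and erasure probabilities, respectively.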
On contraction coefficients, partial orders and approximation of capacities for quantum channels
The data processing inequality is the most basic requirement for any
meaningful measure of information. It essentially states that
distinguishability measures between states decrease if we apply a quantum
channel. It is the centerpiece of many results in information theory and
justifies the operational interpretation of most entropic quantities. In this
work, we revisit the notion of contraction coefficients of quantum channels,
which provide sharper and specialized versions of the data processing
inequality. A concept closely related to data processing is that of partial
orders on quantum channels. We discuss several quantum extensions of the
well-known less noisy ordering and then relate them to contraction
coefficients. We further
define approximate versions of the partial orders and show how they can give
strengthened and conceptually simple proofs of several results on approximating
capacities. Moreover, we investigate the relation to other partial orders in
the literature and their properties, particularly with regard to
tensorization. We then investigate further properties of contraction
coefficients and their relation to other properties of quantum channels, such
as hypercontractivity. Next, we extend the framework of contraction
coefficients to general f-divergences and prove several structural results.
Finally, we consider two important classes of quantum channels, namely
Weyl-covariant and bosonic Gaussian channels. For those, we determine new
contraction coefficients and relations for various partial orders.
Comment: 47 pages, 2 figures.
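The contraction coefficients discussed above sharpen the data processing inequality; in standard notation (ours, not verbatim from the paper), for a quantum channel \(\mathcal{N}\) and an f-divergence \(D_f\),

\[
\eta_f(\mathcal{N}) \;=\; \sup_{\rho \neq \sigma} \frac{D_f\!\bigl(\mathcal{N}(\rho) \,\|\, \mathcal{N}(\sigma)\bigr)}{D_f(\rho \,\|\, \sigma)} \;\le\; 1,
\]

so the data processing inequality \(D_f(\mathcal{N}(\rho)\,\|\,\mathcal{N}(\sigma)) \le D_f(\rho\,\|\,\sigma)\) holds with the sharper constant \(\eta_f(\mathcal{N})\) in place of 1.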
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and, with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
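To make the front-end idea concrete, here is a minimal, illustrative sketch of one common training target for DNN-based single-channel enhancement, the ideal ratio mask (IRM). The function names and toy data are ours, not from the survey; in practice a network predicts the mask from noisy features, whereas here we compute it from oracle speech and noise magnitudes.

```python
import numpy as np

def ideal_ratio_mask(speech_mag, noise_mag, eps=1e-12):
    """IRM = |S| / (|S| + |N|), computed per time-frequency bin."""
    return speech_mag / (speech_mag + noise_mag + eps)

def apply_mask(noisy_mag, mask):
    """Enhanced magnitude spectrum: element-wise masking of the noisy input."""
    return noisy_mag * mask

# Toy magnitude spectrograms (frequency bins x frames).
rng = np.random.default_rng(0)
speech = rng.uniform(0.5, 1.0, size=(4, 3))
noise = rng.uniform(0.0, 0.5, size=(4, 3))
noisy = speech + noise  # additive-noise approximation in the magnitude domain

mask = ideal_ratio_mask(speech, noise)
enhanced = apply_mask(noisy, mask)

# With oracle speech/noise the mask lies in (0, 1] and recovers the speech part.
assert np.all((mask > 0) & (mask <= 1))
```

A learned front-end replaces the oracle computation with a network trained to regress the mask from noisy features; the back-end (acoustic model) then consumes the enhanced features, or the two are trained jointly as the survey describes.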