480 research outputs found

### Dissipation of information in channels with input constraints

One of the basic tenets of information theory, the data processing inequality
states that the output divergence does not exceed the input divergence for any
channel. For channels without input constraints, various estimates on the
amount of such contraction are known, Dobrushin's coefficient for total
variation being perhaps the best known. This work investigates channels
with an average input cost constraint. It is found that while the contraction
coefficient typically equals one (no contraction), the information nevertheless
dissipates. A certain non-linear function, the \emph{Dobrushin curve} of the
channel, is proposed to quantify the amount of dissipation. Tools for
evaluating the Dobrushin curve of additive-noise channels are developed based
on coupling arguments. Some basic applications in stochastic control,
uniqueness of Gibbs measures and fundamental limits of noisy circuits are
discussed.
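For intuition about the classical (unconstrained) Dobrushin coefficient mentioned above, here is a minimal numerical sketch (an illustration of the standard definition, not the paper's constrained setting), using a binary symmetric channel:

```python
import numpy as np

# Illustration: the Dobrushin coefficient of a finite channel W is
#   eta_TV(W) = max_{x, x'} TV(W(.|x), W(.|x')),
# and it bounds the contraction of total variation under the channel:
#   TV(P W, Q W) <= eta_TV(W) * TV(P, Q).

def tv(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * np.abs(p - q).sum()

def dobrushin_coefficient(W):
    """Maximum TV distance between any two rows of the channel matrix W."""
    n = W.shape[0]
    return max(tv(W[i], W[j]) for i in range(n) for j in range(n))

delta = 0.1
W = np.array([[1 - delta, delta],
              [delta, 1 - delta]])   # binary symmetric channel BSC(delta)

eta = dobrushin_coefficient(W)       # for BSC(delta) this equals |1 - 2*delta|

P = np.array([0.9, 0.1])
Q = np.array([0.3, 0.7])
print(eta, tv(P @ W, Q @ W), eta * tv(P, Q))
```

For the BSC the bound is tight: the output TV distance equals exactly $|1-2\delta|$ times the input TV distance.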
As an application, it is shown that in a chain of $n$ power-constrained relays
and Gaussian channels, the end-to-end mutual information and maximal squared
correlation decay as $\Theta(\frac{\log\log n}{\log n})$, in stark
contrast with the exponential decay in chains of discrete channels. Similarly,
the behavior of noisy circuits (composed of gates with bounded fan-in) and
broadcasting of information on trees (of bounded degree) does not experience
threshold behavior in the signal-to-noise ratio (SNR). Namely, unlike the case
of discrete channels, the probability of bit error stays bounded away from
$\frac{1}{2}$ regardless of the SNR.

Comment: revised; includes Appendix B on the contraction coefficient for mutual information on general alphabets.
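To see why the $\Theta(\frac{\log\log n}{\log n})$ rate is notable, a hedged numerical contrast (my own construction, not the paper's optimal scheme): naive amplify-and-forward relaying, where each relay simply rescales its noisy observation back to power $P$, makes the end-to-end correlation decay exponentially in the number of hops, so the optimal relays of the paper do dramatically better:

```python
import numpy as np

# Chain of n power-constrained amplify-and-forward (AF) relays over
# Gaussian channels. Each hop has per-hop correlation
#   rho = sqrt(P / (P + sigma2)),
# and the end-to-end correlation is rho**n -- exponential in n.
# (Parameter choices here are illustrative.)

P, sigma2 = 1.0, 0.25
rho = np.sqrt(P / (P + sigma2))      # per-hop correlation of AF relaying

rng = np.random.default_rng(2)
n, trials = 20, 20000
x = rng.standard_normal(trials) * np.sqrt(P)   # power-P Gaussian source
y = x.copy()
for _ in range(n):
    y = y + rng.standard_normal(trials) * np.sqrt(sigma2)   # Gaussian channel
    y = y * np.sqrt(P) / np.sqrt(np.mean(y**2))             # renormalize to power P

emp = np.corrcoef(x, y)[0, 1]        # empirical end-to-end correlation
print(emp, rho**n)
```

The empirical end-to-end correlation matches $\rho^n$, already small for $n=20$; the paper's point is that optimally designed relays avoid this exponential collapse.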

### Peak-to-average power ratio of good codes for Gaussian channel

Consider the problem of forward error correction for the additive white
Gaussian noise (AWGN) channel. For finite-blocklength codes, the backoff from
channel capacity is inversely proportional to the square root of the
blocklength. In this paper it is shown that codes achieving this tradeoff must
necessarily have a peak-to-average power ratio (PAPR) proportional to the
logarithm of the blocklength. This result is extended to codes approaching
capacity more slowly, and to PAPR measured at the output of an OFDM modulator.
As a by-product, the convergence of (Smith's) amplitude-constrained AWGN
capacity to Shannon's classical formula is characterized in the regime of
large amplitudes. This converse-type result builds upon recent contributions
in the study of empirical output distributions of good channel codes.
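As a rough numerical companion (an illustration, not the paper's converse argument): even i.i.d. Gaussian codewords, which mimic the capacity-achieving output distribution, already exhibit a PAPR growing like $2\log n$ with blocklength $n$:

```python
import numpy as np

# PAPR of a codeword x of length n:
#   PAPR(x) = max_i x_i^2 / ((1/n) * sum_i x_i^2).
# For i.i.d. standard Gaussian entries, max_i x_i^2 concentrates
# around 2*log(n), so the average PAPR grows logarithmically in n.

rng = np.random.default_rng(0)

def papr(x):
    return np.max(x**2) / np.mean(x**2)

ns = [100, 1000, 10000, 100000]
means = []
for n in ns:
    vals = [papr(rng.standard_normal(n)) for _ in range(200)]
    means.append(np.mean(vals))
    print(n, means[-1], 2 * np.log(n))   # empirical mean vs. 2*log(n)
```

The empirical means track $2\log n$ up to lower-order corrections, consistent with the logarithmic PAPR that the paper shows good codes must have.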

### Computational barriers in minimax submatrix detection

This paper studies the minimax detection of a small submatrix of elevated
mean in a large matrix contaminated by additive Gaussian noise. To investigate
the tradeoff between statistical performance and computational cost from a
complexity-theoretic perspective, we consider a sequence of discretized models
which are asymptotically equivalent to the Gaussian model. Under the hypothesis
that the planted clique detection problem cannot be solved in randomized
polynomial time when the clique size is of smaller order than the square root
of the graph size, the following phase-transition phenomenon is established:
as the size of the large matrix $p\to\infty$, if the submatrix size
$k=\Theta(p^{\alpha})$ for any $\alpha\in(0,{2}/{3})$, computational complexity
constraints can incur a severe penalty on the statistical performance in the
sense that any randomized polynomial-time test is minimax suboptimal by a
polynomial factor in $p$; if $k=\Theta(p^{\alpha})$ for any
$\alpha\in({2}/{3},1)$, minimax optimal detection can be attained within
constant factors in linear time. Using Schatten norm loss as a representative
example, we show that the hardness of attaining the minimax estimation rate can
crucially depend on the loss function. Implications on the hardness of support
recovery are also obtained.

Comment: Published at http://dx.doi.org/10.1214/14-AOS1300 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
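The computationally easy regime above ($\alpha > 2/3$) can be illustrated with a toy simulation (parameter choices are mine, not the paper's): a linear-time total-sum test reliably detects a planted $k \times k$ elevated-mean submatrix once the aggregate signal $k^2\mu$ dominates the standard deviation $p$ of the total sum of a $p \times p$ noise matrix:

```python
import numpy as np

# Submatrix detection model: under H0 the matrix is pure N(0,1) noise;
# under H1 a k x k submatrix has mean mu added to each entry.
# The total-sum statistic is N(0, p^2) under H0 and roughly
# N(k^2 * mu, p^2) under H1, so a linear-time scan succeeds
# whenever k^2 * mu >> p.

rng = np.random.default_rng(1)

def sample(p, k, mu, planted):
    M = rng.standard_normal((p, p))
    if planted:
        rows = rng.choice(p, k, replace=False)
        cols = rng.choice(p, k, replace=False)
        M[np.ix_(rows, cols)] += mu       # plant the elevated-mean submatrix
    return M

def sum_test(M, threshold):
    return M.sum() > threshold            # linear-time in the matrix size

p, k, mu = 200, 80, 0.5                   # here k = p^0.85, inside (2/3, 1)
thr = 4 * p                               # 4-sigma threshold under H0
errs = 0
for _ in range(50):
    errs += sum_test(sample(p, k, mu, planted=False), thr)      # false alarm
    errs += not sum_test(sample(p, k, mu, planted=True), thr)   # missed detection
print("errors:", errs)
```

With $k^2\mu = 3200 \gg p = 200$, the test separates the hypotheses essentially perfectly; the paper's hard regime $\alpha < 2/3$ is precisely where no such polynomial-time test can be minimax optimal (under the planted clique hypothesis).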

- …