Peak-to-average power ratio of good codes for Gaussian channel
Consider the problem of forward error correction for the additive white
Gaussian noise (AWGN) channel. For finite-blocklength codes, the backoff from
the channel capacity is inversely proportional to the square root of the
blocklength. In this paper it is shown that codes achieving this tradeoff must
necessarily have a peak-to-average power ratio (PAPR) proportional to the
logarithm of the blocklength. This is extended to codes approaching capacity more slowly, and
to PAPR measured at the output of an OFDM modulator. As a by-product, the
convergence of (Smith's) amplitude-constrained AWGN capacity to Shannon's
classical formula is characterized in the regime of large amplitudes. This
converse-type result builds upon recent contributions to the study of empirical
output distributions of good channel codes.
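The PAPR discussed in the abstract above is a simple functional of the transmitted signal: peak power divided by average power. A minimal sketch, using an illustrative 64-subcarrier QPSK OFDM symbol (the subcarrier count, modulation, and seed are assumptions for the example, not taken from the paper):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Illustrative OFDM symbol: 64 QPSK subcarriers passed through an IFFT.
rng = np.random.default_rng(0)
n = 64
symbols = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(symbols) * np.sqrt(n)  # rescaled to unit average power
print(f"PAPR = {papr_db(ofdm_symbol):.1f} dB")
```

A constant-envelope signal has 0 dB PAPR, while superposing many independent subcarriers produces occasional large peaks; the paper's result concerns how fast this quantity must grow with blocklength for capacity-approaching codes.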
Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities
This monograph presents a unified treatment of single- and multi-user
problems in Shannon's information theory where we depart from the requirement
that the error probability decays asymptotically in the blocklength. Instead,
the error probabilities for various problems are bounded above by a
non-vanishing constant and the spotlight is shone on achievable coding rates as
functions of the growing blocklengths. This represents the study of asymptotic
estimates with non-vanishing error probabilities.
In Part I, after reviewing the fundamentals of information theory, we discuss
Strassen's seminal result for binary hypothesis testing where the type-I error
probability is non-vanishing and the rate of decay of the type-II error
probability with growing number of independent observations is characterized.
In Part II, we use this basic hypothesis testing result to develop second- and
sometimes even third-order asymptotic expansions for point-to-point
communication. Finally in Part III, we consider network information theory
problems for which the second-order asymptotics are known. These problems
include some classes of channels with random state, the multiple-encoder
distributed lossless source coding (Slepian-Wolf) problem and special cases of
the Gaussian interference and multiple-access channels. We conclude by
discussing avenues for further research.
Comment: Further comments welcome
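Strassen's result mentioned in Part I is commonly written as follows (a sketch in the notation standard in this literature): letting β_ε denote the smallest type-II error probability over tests on n i.i.d. observations with type-I error at most ε,

```latex
-\log \beta_{\varepsilon}\!\left(P^{\times n}, Q^{\times n}\right)
  = n\,D(P\|Q) + \sqrt{n\,V(P\|Q)}\;\Phi^{-1}(\varepsilon) + O(\log n),
```

where D(P‖Q) is the relative entropy, V(P‖Q) is the relative entropy variance, and Φ⁻¹ is the inverse of the standard normal CDF. The first term recovers Stein's lemma; the second-order √n term is the one leveraged throughout the monograph.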
Wiretap and Gelfand-Pinsker Channels Analogy and its Applications
An analogy framework between wiretap channels (WTCs) and state-dependent
point-to-point channels with non-causal encoder channel state information
(referred to as Gelfand-Pinsker channels (GPCs)) is proposed. A good sequence of
stealth-wiretap codes is shown to induce a good sequence of codes for a
corresponding GPC. Consequently, the framework enables exploiting existing
results for GPCs to produce converse proofs for their wiretap analogs. The
analogy readily extends to multiuser broadcasting scenarios, encompassing
broadcast channels (BCs) with deterministic components, degradation ordering
between users, and BCs with cooperative receivers. Given a wiretap BC (WTBC)
with two receivers and one eavesdropper, an analogous Gelfand-Pinsker BC (GPBC)
is constructed by converting the eavesdropper's observation sequence into a
state sequence with an appropriate product distribution (induced by the
stealth-wiretap code for the WTBC), and non-causally revealing the states to
the encoder. The transition matrix of the state-dependent GPBC is extracted
from the WTBC's transition law, with the eavesdropper's output playing the role of
the channel state. Past capacity results for the semi-deterministic (SD) GPBC
and the physically-degraded (PD) GPBC with an informed receiver are leveraged
to furnish analogy-based converse proofs for the analogous WTBC setups. This
characterizes the secrecy-capacity regions of the SD-WTBC and the PD-WTBC, in
which the stronger receiver also observes the eavesdropper's channel output.
These derivations exemplify how the wiretap-GP analogy enables translating
results on one problem into advances in the study of the other.
Exact Asymptotics for the Random Coding Error Probability
Error probabilities of random codes for memoryless channels are considered in
this paper. In communication systems, the admissible error probability is very
small, and it is sometimes more important to discuss the relative gap between
the achievable error probability and its bound than to discuss the absolute
gap. Scarlett et al. derived a tight upper bound on a random coding union
bound using the technique of saddlepoint approximation, but it was not proved
that the relative gap of their bound converges to zero. This paper derives a
new bound on the achievable error probability from this viewpoint for a
class of memoryless channels. The derived bound is strictly smaller than that
by Scarlett et al. and its relative gap with the random coding error
probability (not a union bound) vanishes as the block length increases for a
fixed coding rate.
Comment: Full version of the paper in ISIT2015 with some corrections and refinements
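For context, the random coding union (RCU) bound that the saddlepoint technique is typically applied to can be written, in the notation standard in this literature (a sketch, with i denoting the information density, M the codebook size, and X̄ an independent copy of the input X), as

```latex
\epsilon \;\le\; \mathbb{E}\!\left[\min\!\left\{1,\; (M-1)\,
  \Pr\!\left[\, i(\bar{X};Y) \ge i(X;Y) \,\middle|\, X, Y \right]\right\}\right].
```

The paper's contribution is a bound on the error probability itself, not on this union bound, whose relative gap to the true random coding error probability vanishes with the blocklength.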
Variable-to-Fixed Length Homophonic Coding Suitable for Asymmetric Channel Coding
In communication through asymmetric channels, the capacity-achieving input
distribution is not uniform in general. Homophonic coding is a framework to
invertibly convert a (usually uniform) message into a sequence with some target
distribution, and is a promising candidate to generate codewords with the
nonuniform target distribution for asymmetric channels. In particular, a
Variable-to-Fixed length (VF) homophonic code can be used as a suitable
component for channel codes to avoid decoding error propagation. However, the
existing VF homophonic code requires the knowledge of the maximum relative gap
of probabilities between two adjacent sequences beforehand, which is an
unrealistic assumption for long block codes. In this paper we propose a new VF
homophonic code without such a requirement by allowing one-symbol decoding
delay. We evaluate this code theoretically and experimentally to verify its
asymptotic optimality.
Comment: Full version of the paper to appear in the 2017 IEEE International
Symposium on Information Theory (ISIT2017)
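The core idea of a variable-to-fixed homophonic code can be illustrated with a toy dyadic target distribution (a hypothetical example, not the code from the paper): a prefix-free set of variable-length input bit strings maps bijectively onto the output symbols, so the conversion is invertible, and i.i.d. uniform message bits induce exactly the target distribution.

```python
# Toy variable-to-fixed homophonic map for the dyadic target
# P(a)=1/2, P(b)=1/4, P(c)=P(d)=1/8 (illustrative assumption).
ENCODE = {"0": "a", "10": "b", "110": "c", "111": "d"}
DECODE = {v: k for k, v in ENCODE.items()}

def encode(bits):
    """Invertibly convert uniform message bits into target-distributed symbols."""
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in ENCODE:       # greedy prefix-free parsing
            out.append(ENCODE[buf])
            buf = ""
    return "".join(out), buf    # buf holds any unconsumed trailing bits

def decode(symbols):
    """Recover the original message bits from the symbol sequence."""
    return "".join(DECODE[s] for s in symbols)

syms, rest = encode("010110111")
print(syms, decode(syms))  # abcd 010110111
```

For non-dyadic targets the parsing cannot match the distribution exactly, which is where the probability-gap assumption of the existing construction, and the one-symbol decoding delay proposed here, come into play.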
The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position
of prey, is one of the hallmarks of perception. On an abstract, algorithmic
level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing
signals based on the history of observations, provides a mathematical framework
for dynamic perception in real time. Since the general, nonlinear filtering
problem is analytically intractable, particle filters are considered among the
most powerful approaches to approximating the solution numerically. Yet, these
algorithms predominantly rely on importance weights, and thus it remains an
unresolved question how the brain could implement such an inference strategy
with a neuronal population. Here, we propose the Neural Particle Filter (NPF),
a weight-less particle filter that can be interpreted as the neuronal dynamics
of a recurrently connected neural network that receives feed-forward input from
sensory neurons and represents the posterior probability distribution in terms
of samples. Specifically, this algorithm bridges the gap between the
computational task of online state estimation and an implementation that allows
networks of neurons in the brain to perform nonlinear Bayesian filtering. The
model captures not only the properties of temporal and multisensory integration
according to Bayesian statistics, but also allows online learning with a
maximum likelihood approach. With an example from multisensory integration, we
demonstrate that the numerical performance of the model is adequate to account
for both filtering and identification problems. Due to the weightless approach,
our algorithm alleviates the 'curse of dimensionality' and thus outperforms
conventional, weighted particle filters in higher dimensions for a limited
number of particles.
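A minimal sketch of a weight-less particle filter in the spirit described above: every particle follows the prior drift plus a gain times the innovation (observation minus predicted observation), and the posterior is represented by the equally weighted sample cloud. The scalar model, fixed gain, discretization, and noise levels below are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, n_particles, gain = 0.01, 500, 100, 1.0
f = lambda x: -x   # assumed prior drift: Ornstein-Uhlenbeck toward 0
g = lambda x: x    # assumed observation function

x_true = 1.0
particles = rng.standard_normal(n_particles)
for _ in range(steps):
    # simulate the hidden state and a noisy observation
    x_true += f(x_true) * dt + np.sqrt(dt) * rng.standard_normal()
    y = g(x_true) + 0.1 * rng.standard_normal()
    # weight-less update: prior drift + gain * innovation, plus diffusion
    innovation = y - g(particles)
    particles += (f(particles) + gain * innovation) * dt \
                 + np.sqrt(dt) * rng.standard_normal(n_particles)

estimate = particles.mean()  # posterior mean from equally weighted samples
```

Because no importance weights are computed or resampled, no particle ever degenerates, which is the mechanism behind the claimed robustness in higher dimensions.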
Empirical Distribution of Good Channel Codes With Nonvanishing Error Probability
This paper studies several properties of channel codes that approach the fundamental limits of a given (discrete or Gaussian) memoryless channel with a nonvanishing probability of error. The output distribution induced by an ε-capacity-achieving code is shown to be close in a strong sense to the capacity-achieving output distribution. Relying on the concentration of measure (isoperimetry) property enjoyed by the latter, it is shown that regular (Lipschitz) functions of channel outputs can be precisely estimated and turn out to be essentially nonrandom and independent of the actual code. It is also shown that the output distribution of a good code and the capacity-achieving one cannot be distinguished with exponential reliability. The random process produced at the output of the channel is shown to satisfy the asymptotic equipartition property.
Index Terms: additive white Gaussian noise, asymptotic equipartition property, concentration of measure, discrete memoryless channels, empirical output statistics, relative entropy, Shannon theory.
To approximate this limiting rate of transmission, the transmitted signals must approximate, in statistical properties, a white noise. A general and formal statement of this property of optimal codes was put forward by Han and Verdú [2, Th. 15]: for any channel with a finite input alphabet whose capacity satisfies the strong converse, any sequence of codes (with equiprobable codewords) whose rate tends to capacity with vanishing error probability induces an output distribution that converges, in normalized relative entropy, to the maximal mutual information output distribution.
How to Achieve the Capacity of Asymmetric Channels
We survey coding techniques that enable reliable transmission at rates that
approach the capacity of an arbitrary discrete memoryless channel. In
particular, we take the point of view of modern coding theory and discuss how
recent advances in coding for symmetric channels help provide more efficient
solutions for the asymmetric case. We consider, in more detail, three basic
coding paradigms.
The first one is Gallager's scheme that consists of concatenating a linear
code with a non-linear mapping so that the input distribution can be
appropriately shaped. We explicitly show that both polar codes and spatially
coupled codes can be employed in this scenario. Furthermore, we derive a
scaling law between the gap to capacity, the cardinality of the input and
output alphabets, and the required size of the mapper.
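The shaping mapper in Gallager's scheme can be sketched with a toy example (a hypothetical dyadic target, not a construction from the paper): a non-linear, many-to-one map from uniform coded bits to channel inputs whose induced marginal matches the desired input distribution.

```python
from itertools import product

# Illustrative mapper: 2 uniform bits -> one channel input with P(X=1) = 1/4.
MAPPER = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def shape(bits):
    """Map pairs of uniform bits to shaped channel inputs."""
    return [MAPPER[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Averaging over all equiprobable bit pairs gives the target marginal.
p_one = sum(MAPPER[pair] for pair in product((0, 1), repeat=2)) / 4
print(p_one)  # 0.25
```

Exact matching is possible here because 1/4 is dyadic; for general target distributions the mapper can only approximate the distribution, and the scaling law derived in the paper quantifies how the required mapper size grows with the gap to capacity and the alphabet sizes.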
The second one is an integrated scheme in which the code is used both for
source coding, in order to create codewords distributed according to the
capacity-achieving input distribution, and for channel coding, in order to
provide error protection. Such a technique has been recently introduced by
Honda and Yamamoto in the context of polar codes, and we show how to apply it
also to the design of sparse graph codes.
The third paradigm is based on an idea of Böcherer and Mathar, and
separates the two tasks of source coding and channel coding by a chaining
construction that binds together several codewords. We present conditions for
the source code and the channel code, and we describe how to combine any source
code with any channel code that fulfill those conditions, in order to provide
capacity-achieving schemes for asymmetric channels. In particular, we show that
polar codes, spatially coupled codes, and homophonic codes are suitable as
basic building blocks of the proposed coding strategy.
Comment: 32 pages, 4 figures, presented in part at Allerton'14 and published
in IEEE Trans. Inform. Theory