Relations between random coding exponents and the statistical physics of random codes
The partition function pertaining to finite-temperature decoding of a
(typical) randomly chosen code is known to have three types of behavior,
corresponding to three phases in the plane of rate vs. temperature: the {\it
ferromagnetic phase}, corresponding to correct decoding, the {\it paramagnetic
phase}, of complete disorder, which is dominated by exponentially many
incorrect codewords, and the {\it glassy phase} (or the condensed phase), where
the system is frozen at minimum energy and dominated by subexponentially many
incorrect codewords. We show that the statistical physics associated with the
latter two phases is intimately related to random coding exponents. In
particular, the exponent associated with the probability of correct decoding at
rates above capacity is directly related to the free energy in the glassy
phase, and the exponent associated with the probability of error (the error
exponent) at rates below capacity is strongly related to the free energy in
the paramagnetic phase. In fact, we derive alternative expressions of these
exponents in terms of the corresponding free energies, and make an attempt to
obtain some insights from these expressions. Finally, as a side result, we also
compare the phase diagram associated with a simple finite-temperature universal
decoder for discrete memoryless channels to that of the finite-temperature
decoder that is aware of the channel statistics.

Comment: 26 pages, 2 figures, submitted to IEEE Transactions on Information Theory
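The partition function underlying this phase picture, Z(beta) = sum over codewords x of P(y|x)^beta, can be evaluated directly for a small random code. The sketch below uses a binary symmetric channel; the block length, rate, and crossover probability are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, R = 60, 0.25        # block length and coding rate (toy assumptions)
M = int(2 ** (n * R))  # number of codewords in the random code
p = 0.05               # BSC crossover probability (assumption)

code = rng.integers(0, 2, size=(M, n), dtype=np.uint8)  # random code
y = code[0] ^ (rng.random(n) < p)                       # noisy channel output

def log_partition(beta):
    """log Z(beta) = log sum_x P(y|x)^beta, with P(y|x) = p^d (1-p)^(n-d)
    and d the Hamming distance between y and codeword x."""
    d = (code ^ y).sum(axis=1)
    log_like = d * np.log(p) + (n - d) * np.log(1 - p)
    return np.logaddexp.reduce(beta * log_like)

# beta = 1 corresponds to matched (posterior-weighted) finite-temperature decoding
print(f"log Z(1.0) = {log_partition(1.0):.2f}")
```

Sweeping beta (inverse temperature) at various rates R is what traces out the ferromagnetic/paramagnetic/glassy phase diagram the abstract describes.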
On empirical cumulant generating functions of code lengths for individual sequences
We consider the problem of lossless compression of individual sequences using
finite-state (FS) machines, from the perspective of the best achievable
empirical cumulant generating function (CGF) of the code length, i.e., the
normalized logarithm of the empirical average of the exponentiated code length.
Since the probabilistic CGF is minimized in terms of the R\'enyi entropy of the
source, one of the motivations of this study is to derive an
individual-sequence analogue of the R\'enyi entropy, in the same way that the
FS compressibility is the individual-sequence counterpart of the Shannon
entropy. We consider the CGF of the code length both from the perspective of
fixed-to-variable (F-V) length coding and the perspective of
variable-to-variable (V-V) length coding, where the latter turns out to yield a
better result, which coincides with the FS compressibility. We also extend our
results to compression with side information, available at both the encoder and
decoder. In this case, the V-V version no longer coincides with the FS
compressibility, but results in a different complexity measure.

Comment: 15 pages; submitted for publication
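The central quantity here, the empirical CGF of the code length, can be written down in a few lines. The helper below is one assumed reading of the definition (natural logarithms, a given normalization length n), not code from the paper:

```python
import numpy as np

def empirical_cgf(lengths, lam, n):
    """(1/n) * log( (1/N) * sum_i exp(lam * L_i) ): the normalized logarithm
    of the empirical average of the exponentiated code lengths L_i,
    computed stably in log space."""
    lengths = np.asarray(lengths, dtype=float)
    return (np.logaddexp.reduce(lam * lengths) - np.log(lengths.size)) / n

# As lam -> 0 the CGF tends to 0; for equal lengths it equals lam * L / n
print(empirical_cgf([12.0, 15.0, 11.0, 14.0], 0.5, 10))
```

Under a probabilistic source model, minimizing the analogous expectation over codes is what brings in the Renyi entropy the abstract refers to.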
An identity of Chernoff bounds with an interpretation in statistical physics and applications in information theory
An identity between two versions of the Chernoff bound on the probability of a
certain large deviations event is established. This identity has an
interpretation in statistical physics, namely, an isothermal equilibrium of a
composite system that consists of multiple subsystems of particles. Several
information-theoretic application examples, where the analysis of this large
deviations probability naturally arises, are then described from the viewpoint
of this statistical mechanical interpretation. This results in several
relationships between information theory and statistical physics, which we
hope the reader will find insightful.

Comment: 29 pages, 1 figure. Submitted to IEEE Trans. on Information Theory
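The optimized Chernoff bound, P(S_n >= n*a) <= exp(-n * sup_s [s*a - log E e^{sX}]), is easy to check numerically. The Bernoulli setup and all constants below are illustrative assumptions, not the composite-system example from the paper:

```python
import numpy as np

q, a, n = 0.3, 0.5, 200  # Bernoulli bias, threshold, block length (toy values)

# Optimize the Chernoff exponent sup_s [s*a - log E e^{sX}] over a grid of s
s = np.linspace(1e-3, 5.0, 2000)
log_mgf = np.log(1 - q + q * np.exp(s))   # log-MGF of Bernoulli(q)
exponent = np.max(s * a - log_mgf)        # equals KL(a || q) at the optimum
bound = np.exp(-n * exponent)

# Monte Carlo estimate of P(sum X_i >= n*a) for comparison
rng = np.random.default_rng(1)
p_hat = ((rng.random((200_000, n)) < q).mean(axis=1) >= a).mean()
print(f"Chernoff bound {bound:.2e}, Monte Carlo estimate {p_hat:.2e}")
```

The grid-optimized exponent coincides with the binary KL divergence between a and q, which is the large-deviations rate; the statistical-physics reading interprets the optimization over s as an equilibrium condition.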
On optimum parameter modulation-estimation from a large deviations perspective
We consider the problem of jointly optimum modulation and estimation of a
real-valued random parameter, conveyed over an additive white Gaussian noise
(AWGN) channel, where the performance metric is the large deviations behavior
of the estimator, namely, the exponential decay rate (as a function of the
observation time) of the probability that the estimation error would exceed a
certain threshold. Our basic result is an exact characterization
of the fastest achievable exponential decay rate, among all possible
modulator-estimator (transmitter-receiver) pairs, where the modulator is
limited only in the signal power, but not in bandwidth. This exponential rate
turns out to be given by the reliability function of the AWGN channel. We also
discuss several ways to achieve this optimum performance, and one of them is
based on quantization of the parameter, followed by optimum channel coding and
modulation, which gives rise to a separation-based transmitter, if one views
this setting from the perspective of joint source-channel coding. This is in
spite of the fact that, in general, when error exponents are considered, the
source-channel separation theorem does not hold true. We also discuss several
observations, modifications and extensions of this result in several
directions, including other channels, and the case of multidimensional
parameter vectors. One of our findings concerning the latter is that there is
an abrupt threshold effect in the dimensionality of the parameter vector: below
a certain critical dimension, the probability of excess estimation error may
still decay exponentially, but beyond this value, it must converge to unity.

Comment: 26 pages; Submitted to the IEEE Transactions on Information Theory
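The performance metric here, the exponential decay rate of the probability that the estimation error exceeds a threshold, can be illustrated with a much simpler stand-in than the paper's modulation-estimation setting: estimating the mean of Gaussian noise, where the classical exponent is delta^2 / (2 sigma^2). All parameters are assumptions for illustration:

```python
import numpy as np

sigma, delta = 1.0, 0.8
rng = np.random.default_rng(2)

def excess_prob(n, trials=200_000):
    """Monte Carlo estimate of P(|sample mean| > delta) for i.i.d.
    N(0, sigma^2) observations, n per trial."""
    means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
    return np.mean(np.abs(means) > delta)

# The empirical exponent -(1/n) log P approaches delta^2 / (2 sigma^2) = 0.32
for n in (5, 10, 20):
    p = excess_prob(n)
    print(n, -np.log(p) / n)
```

The paper's result is far stronger: with optimal modulation over the AWGN channel, the best achievable exponent is the channel's reliability function, not this plain sample-mean rate.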