On empirical cumulant generating functions of code lengths for individual sequences
We consider the problem of lossless compression of individual sequences using
finite-state (FS) machines, from the perspective of the best achievable
empirical cumulant generating function (CGF) of the code length, i.e., the
normalized logarithm of the empirical average of the exponentiated code length.
Since the probabilistic CGF is minimized in terms of the Rényi entropy of the
source, one of the motivations of this study is to derive an
individual-sequence analogue of the Rényi entropy, in the same way that the
FS compressibility is the individual-sequence counterpart of the Shannon
entropy. We consider the CGF of the code length both from the perspective of
fixed-to-variable (F-V) length coding and the perspective of
variable-to-variable (V-V) length coding, where the latter turns out to yield a
better result, which coincides with the FS compressibility. We also extend our
results to compression with side information, available at both the encoder and
decoder. In this case, the V-V version no longer coincides with the FS
compressibility, but results in a different complexity measure.
Comment: 15 pages; submitted for publication.
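As a rough numerical illustration of the quantity under study, the sketch below computes an empirical CGF from a list of code lengths. The base-2 convention, the normalization by the total sequence length n, and the sample lengths are assumptions made for the sketch, not the paper's exact setup.

```python
import numpy as np

def empirical_cgf(code_lengths, lam, n):
    """Normalized logarithm of the empirical average of the
    exponentiated code length: (1/n) * log2(mean(2**(lam * l_i))),
    computed stably via a log-sum-exp shift."""
    a = lam * np.asarray(code_lengths, dtype=float)
    m = a.max()                                   # shift for numerical stability
    return (m + np.log2(np.exp2(a - m).mean())) / n

lengths = [12, 9, 15, 11]    # hypothetical phrase code lengths in bits
print(empirical_cgf(lengths, lam=0.5, n=40))
```

Dividing by lam and letting lam tend to 0 recovers the ordinary average code length per source symbol, mirroring the way the Rényi entropy tends to the Shannon entropy.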
On optimum parameter modulation-estimation from a large deviations perspective
We consider the problem of jointly optimum modulation and estimation of a
real-valued random parameter, conveyed over an additive white Gaussian noise
(AWGN) channel, where the performance metric is the large deviations behavior
of the estimator, namely, the exponential decay rate (as a function of the
observation time) of the probability that the estimation error would exceed a
certain threshold. Our basic result is an exact characterization
of the fastest achievable exponential decay rate, among all possible
modulator-estimator (transmitter-receiver) pairs, where the modulator is
limited only in the signal power, but not in bandwidth. This exponential rate
turns out to be given by the reliability function of the AWGN channel. We also
discuss several ways to achieve this optimum performance, and one of them is
based on quantization of the parameter, followed by optimum channel coding and
modulation, which gives rise to a separation-based transmitter, if one views
this setting from the perspective of joint source-channel coding. This is in
spite of the fact that, in general, when error exponents are considered, the
source-channel separation theorem does not hold true. We also discuss several
observations, modifications and extensions of this result in several
directions, including other channels, and the case of multidimensional
parameter vectors. One of our findings concerning the latter is that there is
an abrupt threshold effect in the dimensionality of the parameter vector: below
a certain critical dimension, the probability of excess estimation error may
still decay exponentially, but beyond this value, it must converge to unity.
Comment: 26 pages; submitted to the IEEE Transactions on Information Theory.
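To make the performance metric concrete, here is a small Monte Carlo sketch that estimates the exponential decay rate of the excess-error probability for a simple uncoded scheme over AWGN. The repetition modulator and all numerical values are illustrative assumptions; this baseline is not the optimum modulator-estimator pair characterized in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_error_exponent(T, snr=1.0, delta=0.3, trials=200_000):
    """Estimate -(1/T) * ln P(|u_hat - u| > delta) by simulation.
    Toy scheme (an assumption, not the paper's construction): transmit
    the parameter u at unit power for T channel uses; the receiver
    averages, leaving Gaussian estimation error of variance 1/(snr*T)."""
    u = rng.uniform(-1.0, 1.0, size=trials)               # random parameter
    u_hat = u + rng.standard_normal(trials) / np.sqrt(snr * T)
    p = np.mean(np.abs(u_hat - u) > delta)
    return -np.log(max(p, 1.0 / trials)) / T              # clip at MC resolution

for T in (25, 50, 100):
    print(T, excess_error_exponent(T))
```

For this linear scheme the estimated exponent trends toward delta**2 * snr / 2 as T grows; the paper's point is that optimum schemes can do better, attaining the reliability function of the AWGN channel.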
Optimum estimation via gradients of partition functions and information measures: a statistical-mechanical perspective
In continuation of a recent work on the statistical-mechanical analysis of
minimum mean square error (MMSE) estimation in Gaussian noise via its relation
to the mutual information (the I-MMSE relation), here we propose a simple and
more direct relationship between optimum estimation and certain information
measures (e.g., the information density and the Fisher information), which can
be viewed as partition functions and hence are amenable to analysis using
statistical-mechanical techniques. The proposed approach has several
advantages, most notably, its applicability to general sources and channels, as
opposed to the I-MMSE relation and its variants which hold only for certain
classes of channels (e.g., additive white Gaussian noise channels). We then
demonstrate the derivation of the conditional mean estimator and the MMSE in a
few examples. Two of these examples turn out to be generalizable to a fairly
wide class of sources and channels. For this class, the proposed approach is
shown to yield an approximate conditional mean estimator and an MMSE formula
that has the flavor of a single-letter expression. We also show how our
approach can easily be generalized to situations of mismatched estimation.
Comment: 21 pages; submitted to the IEEE Transactions on Information Theory.
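One well-known instance of a "gradient of a partition function" identity in Gaussian noise is the Tweedie-type formula E[X|y] = y + sigma^2 * d/dy ln Z(y), where Z(y) is the marginal density of the observation, viewed as a partition function over the source. It is used below only to illustrate the flavor of such relations; whether it matches the paper's exact development is an assumption. The sketch verifies it numerically for a binary source.

```python
import numpy as np

def conditional_mean(y, xs, prior, sigma):
    """E[X | Y=y] for Y = X + N(0, sigma^2) and a discrete source,
    computed (1) directly as a posterior average and (2) via the
    gradient of ln Z(y), with Z(y) = sum_x p(x) exp(-(y-x)^2/(2 sigma^2))."""
    w = prior * np.exp(-(y - xs) ** 2 / (2 * sigma ** 2))   # unnormalized posterior
    direct = (w * xs).sum() / w.sum()
    lnZ = lambda t: np.log((prior * np.exp(-(t - xs) ** 2 / (2 * sigma ** 2))).sum())
    eps = 1e-6
    grad = (lnZ(y + eps) - lnZ(y - eps)) / (2 * eps)        # numerical d/dy ln Z
    return direct, y + sigma ** 2 * grad

xs = np.array([-1.0, 1.0])                  # binary +/-1 source, equiprobable
prior = np.array([0.5, 0.5])
print(conditional_mean(0.3, xs, prior, sigma=1.0))  # both ~ tanh(0.3) ~ 0.2913
```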
A statistical-mechanical view on source coding: physical compression and data compression
We draw a certain analogy between the classical information-theoretic problem
of lossy data compression (source coding) of memoryless information sources and
the statistical mechanical behavior of a certain model of a chain of connected
particles (e.g., a polymer) that is subjected to a contracting force. The free
energy difference pertaining to such a contraction turns out to be proportional
to the rate-distortion function in the analogous data compression model, and
the contracting force is proportional to the derivative of this function. Beyond
the fact that this analogy may be interesting in its own right, it may provide
a physical perspective on the behavior of optimum schemes for lossy data
compression (and perhaps also, an information-theoretic perspective on certain
physical system models). Moreover, it triggers the derivation of lossy
compression performance for systems with memory, using analysis tools and
insights from statistical mechanics.
Comment: 17 pages, 2 figures; submitted to the Journal of Statistical Mechanics: Theory and Experiment.
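Since the free energy in the analogy is tied to the rate-distortion function, the information-theoretic side can be reproduced numerically. The sketch below computes points on R(D) for a memoryless source with the standard Blahut-Arimoto algorithm; the slope interpretation (force proportional to the derivative, here -beta) follows the abstract, while the specific polymer model is not reproduced.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, iters=500):
    """One point on the rate-distortion curve of a discrete memoryless
    source, parameterized by the Lagrange multiplier beta > 0; the slope
    of R(D) at the returned point is -beta."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])       # reproduction marginal
    for _ in range(iters):
        w = q * np.exp(-beta * d)                   # unnormalized test channel
        w /= w.sum(axis=1, keepdims=True)           # p(y|x)
        q = p_x @ w                                 # re-estimate marginal
    rate = np.sum(p_x[:, None] * w * np.log(w / q))  # nats
    dist = np.sum(p_x[:, None] * w * d)
    return rate, dist

# Binary symmetric source, Hamming distortion: R(D) = ln 2 - h(D) nats.
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
for beta in (1.0, 2.0, 4.0):
    print(beta, blahut_arimoto(p_x, d, beta))
```

In the analogy, sweeping beta traces the same curve that a slowly varied contracting force would trace for the particle chain.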