Nonasymptotic noisy lossy source coding
This paper presents new general nonasymptotic achievability and converse bounds
and performs their dispersion analysis for the lossy compression problem in
which the compressor observes the source through a noisy channel. While this
problem is asymptotically equivalent to a noiseless lossy source coding problem
with a modified distortion function, nonasymptotically there is a noticeable
gap in how fast their minimum achievable coding rates approach the common
rate-distortion function, as evidenced both by the refined asymptotic analysis
(dispersion) and the numerical results. The size of the gap between the
dispersions of the noisy problem and the asymptotically equivalent noiseless
problem depends on the stochastic variability of the channel through which the
compressor observes the source.
Comment: IEEE Transactions on Information Theory, 201
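In dispersion analyses of this kind, the minimum achievable rate at blocklength n is expanded around the rate-distortion function; the abstract's claim is that the noisy and noiseless problems share the same first-order term but differ in the second-order (dispersion) term. A standard form of this expansion (symbols here are generic, not taken from the paper) is:

```latex
% R(n, d, \epsilon): minimum rate at blocklength n, distortion level d,
% excess-distortion probability \epsilon; R(d): rate-distortion function;
% V: dispersion; Q^{-1}: inverse Gaussian complementary CDF.
R(n, d, \epsilon) \approx R(d) + \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon)
```

Under this approximation, two problems with equal R(d) but different dispersions V exhibit exactly the kind of nonasymptotic rate gap the abstract describes, vanishing only as n grows.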
Infinite Divisibility of Information
We study an information analogue of infinitely divisible probability
distributions, where the i.i.d. sum is replaced by the joint distribution of an
i.i.d. sequence. A random variable X is called informationally infinitely
divisible if, for any n, there exists an i.i.d. sequence of random variables
Z_1, ..., Z_n that contains the same information as X, i.e., there exists an
injective function f such that X = f(Z_1, ..., Z_n). While there does not exist
an informationally infinitely divisible discrete random variable, we show that
any discrete random variable X has a bounded multiplicative gap to infinite
divisibility, that is, if we remove the injectivity requirement on f, then
there exist i.i.d. Z_1, ..., Z_n and f satisfying X = f(Z_1, ..., Z_n), and the
entropy satisfies H(X)/n <= H(Z_1) <= c H(X)/n + c' for universal constants c
and c'. We also study a new class of discrete probability distributions, called
spectral infinitely divisible distributions, where the multiplicative gap c can
be removed. Furthermore, we study the case where X is itself an i.i.d.
sequence, for which the multiplicative gap can be replaced by a factor tending
to one as the length of the sequence increases. This means that as the length
increases, X becomes closer to being spectral infinitely divisible in a uniform
manner. This can be regarded as an information analogue of Kolmogorov's uniform
theorem. Applications of our result include independent component analysis,
distributed storage with a secrecy constraint, and distributed random number
generation.
Comment: 22 page
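As a toy illustration of the definition (my own example, not taken from the paper): a uniform random variable over 2**n values is informationally divisible into n i.i.d. fair bits, with an injective map f and H(Z_1) = H(X)/n exactly.

```python
import random

def split_uniform(x, n):
    """Map x in {0, ..., 2**n - 1} to its n bits Z_1, ..., Z_n.

    When X is uniform, the bits are i.i.d. fair coin flips.
    """
    return tuple((x >> i) & 1 for i in range(n))

def f(bits):
    """Injective reconstruction: X = f(Z_1, ..., Z_n)."""
    return sum(b << i for i, b in enumerate(bits))

n = 8
x = random.randrange(2 ** n)   # X uniform on {0, ..., 255}, H(X) = n bits
bits = split_uniform(x, n)     # each Z_i is a fair bit, H(Z_1) = 1 = H(X)/n
assert f(bits) == x            # f is injective: no information is lost
```

For non-uniform discrete X no such exact injective split exists, which is where the bounded multiplicative gap of the abstract comes in.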
Hardware-Limited Task-Based Quantization
Quantization plays a critical role in digital signal processing systems.
Quantizers are typically designed to obtain an accurate digital representation
of the input signal, operating independently of the system task, and are
commonly implemented using serial scalar analog-to-digital converters (ADCs).
In this work, we study hardware-limited task-based quantization, where a system
utilizing a serial scalar ADC is designed to provide a suitable representation
in order to allow the recovery of a parameter vector underlying the input
signal. We propose hardware-limited task-based quantization systems for a fixed
and finite quantization resolution, and characterize their achievable
distortion. We then apply the analysis to the practical setups of channel
estimation and eigen-spectrum recovery from quantized measurements. Our results
illustrate that properly designed hardware-limited systems can approach the
optimal performance achievable with vector quantizers, and that by taking the
underlying task into account, the quantization error can be made negligible
with a relatively small number of bits.
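The contrast between task-ignorant and task-based acquisition can be sketched numerically. The toy model below is my own illustration, not the system proposed in the paper: N noisy observations of a scalar parameter share a fixed budget of B bits, spent either on quantizing every raw sample (task-ignorant) or on a single analog pre-combined statistic (task-based).

```python
import math
import random

random.seed(0)

def quantize(v, b, rng):
    """Uniform mid-rise scalar quantizer: 2**b levels over [-rng, rng]."""
    step = 2 * rng / (2 ** b)
    v = max(min(v, rng - 1e-9), -rng)
    return (math.floor((v + rng) / step) + 0.5) * step - rng

N, B, trials = 8, 8, 2000          # 8 observations, total budget of 8 bits

mse_ignorant = mse_task = 0.0
for _ in range(trials):
    theta = random.uniform(-1, 1)  # parameter underlying the input signal
    x = [theta + random.gauss(0, 0.5) for _ in range(N)]

    # Task-ignorant: spend B/N = 1 bit on each raw sample, then average.
    est1 = sum(quantize(xi, B // N, 3.0) for xi in x) / N

    # Task-based: analog pre-combining first reduces x to the statistic the
    # task needs (here the sample mean), so all B bits go to one scalar.
    est2 = quantize(sum(x) / N, B, 2.0)

    mse_ignorant += (est1 - theta) ** 2 / trials
    mse_task += (est2 - theta) ** 2 / trials

print(mse_ignorant, mse_task)      # task-based error is markedly smaller
```

The design choice mirrors the abstract's message: when the ADC resolution is the bottleneck, spending it on a task-relevant statistic rather than on a faithful copy of the raw signal makes the quantization error far less damaging.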
Sampling of the Wiener Process for Remote Estimation over a Channel with Random Delay
In this paper, we consider a problem of sampling a Wiener process, with
samples forwarded to a remote estimator over a channel that is modeled as a
queue. The estimator reconstructs an estimate of the real-time signal value
from causally received samples. We study the optimal online sampling strategy
that minimizes the mean square estimation error subject to a sampling rate
constraint. We prove that the optimal sampling strategy is a threshold policy,
and find the optimal threshold. This threshold is determined by how much the
Wiener process varies during the random service time and the maximum allowed
sampling rate. Further, if the sampling times are independent of the observed
Wiener process, the above sampling problem for minimizing the estimation error
is equivalent to a sampling problem for minimizing the age of information. This
reveals an interesting connection between the age of information and remote
estimation error. Our comparisons show that the estimation error achieved by
the optimal sampling policy can be much smaller than those of age-optimal
sampling, zero-wait sampling, and periodic sampling.
Comment: Accepted by IEEE Transactions on Information Theory
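A simplified simulation conveys why a threshold policy beats sampling on a clock. The sketch below is my own illustration under a zero-delay channel (the paper's setting has a random service time, which shifts the optimal threshold but not the qualitative conclusion): both policies use roughly the same average sampling rate, yet the threshold sampler, which reacts to the estimation error itself, achieves a much smaller mean square error.

```python
import random

random.seed(1)

dt, steps = 0.01, 200_000
beta = 1.0                   # threshold on the error |W_t - last sample|
period = int(beta**2 / dt)   # periodic sampler matched to the same average
                             # rate: mean exit time of (-beta, beta) is beta**2

w = 0.0                      # Wiener process, simulated by Gaussian increments
last_thr = last_per = 0.0    # each estimator holds its most recent sample
se_thr = se_per = 0.0
n_thr = 0
for t in range(steps):
    w += random.gauss(0.0, dt ** 0.5)
    if abs(w - last_thr) >= beta:   # threshold policy: sample on large error
        last_thr = w
        n_thr += 1
    if t % period == 0:             # periodic policy: sample on a clock
        last_per = w
    se_thr += (w - last_thr) ** 2
    se_per += (w - last_per) ** 2

mse_thr, mse_per = se_thr / steps, se_per / steps
print(mse_thr, mse_per)  # threshold policy yields the smaller MSE
```

Intuitively, the periodic sampler wastes samples when the estimate happens to be accurate and tolerates large errors between clock ticks, while the threshold sampler spends every sample exactly when the error is about to matter.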