Mean Estimation from One-Bit Measurements
We consider the problem of estimating the mean of a symmetric log-concave
distribution under the constraint that only a single bit per sample from this
distribution is available to the estimator. We study the mean squared error as
a function of the sample size (and hence the number of bits). We consider three
settings: first, a centralized setting, where an encoder may release n bits
given a sample of size n, and for which there is no asymptotic penalty for
quantization; second, an adaptive setting in which each bit is a function of
the current observation and previously recorded bits, where we show that the
optimal relative efficiency compared to the sample mean is precisely the
efficiency of the median; lastly, we show that in a distributed setting where
each bit is only a function of a local sample, no estimator can achieve optimal
efficiency uniformly over the parameter space. We additionally complement our
results in the adaptive setting by showing that \emph{one} round of adaptivity
is sufficient to achieve optimal mean squared error.
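As an illustration of the distributed setting described above, here is a minimal sketch (our own construction, not the paper's estimator, and specialized to a unit-variance Gaussian): every node sends the single bit 1{X_i > t} for a common fixed threshold t, and the fusion center recovers the mean by inverting the Gaussian CDF. The scheme's efficiency degrades as the true mean moves away from t, which is one way to see why no fixed one-bit scheme can be uniformly efficient over the parameter space.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-8.0, hi=8.0):
    """Inverse of phi via bisection (sufficient precision for a sketch)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def distributed_one_bit_mean(samples, t=0.0):
    """Each node sends only the bit 1{X_i > t}; since X ~ N(mu, 1),
    P(X > t) = Phi(mu - t), so mu = t + Phi^{-1}(fraction of ones)."""
    p = sum(1 for x in samples if x > t) / len(samples)
    return t + phi_inv(p)

random.seed(0)
mu = 0.3
samples = [random.gauss(mu, 1.0) for _ in range(200_000)]
print(distributed_one_bit_mean(samples))  # close to the true mean 0.3
```

Note that the estimator's variance scales like 1 / phi_density(mu - t)^2, so a threshold far from the true mean is heavily penalized.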
Mean Estimation from Adaptive One-bit Measurements
We consider the problem of estimating the mean of a normal distribution under
the following constraint: the estimator can access only a single bit from each
sample from this distribution. We study the squared error risk in this
estimation as a function of the number n of samples and one-bit measurements.
We consider an adaptive estimation setting where the single bit sent at step n
is a function of both the new sample and the previously acquired bits.
For this setting, we show that no estimator can attain asymptotic mean squared
error smaller than π/(2n) times the variance. In other words, the
one-bit restriction increases the number of samples required for a prescribed
accuracy of estimation by a factor of at least π/2 compared to the
unrestricted case. In addition, we provide an explicit estimator that attains
this asymptotic error, showing that, rather surprisingly, only π/2 times
more samples are required in order to attain estimation performance equivalent
to the unrestricted case.
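A minimal simulation sketch of an adaptive one-bit scheme makes the π/2 factor visible empirically. The sign-driven stochastic-approximation recursion, its step-size constant, and all names below are our own illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_one_bit_estimate(x, c=np.sqrt(2 * np.pi) / 2):
    """Robbins-Monro recursion that uses only the bit sign(x_k - theta_k)
    from each sample. For a unit-variance Gaussian, the step constant
    c = 1/(2 f(median)) = sqrt(2*pi)/2 attains asymptotic MSE (pi/2)/n."""
    theta = 0.0
    for k, xk in enumerate(x, start=1):
        bit = 1.0 if xk > theta else -1.0   # the single transmitted bit
        theta += (c / k) * bit
    return theta

mu, n, trials = 0.5, 1000, 1000
err_bit = np.empty(trials)
err_mean = np.empty(trials)
for t in range(trials):
    x = rng.normal(mu, 1.0, size=n)
    err_bit[t] = (adaptive_one_bit_estimate(x) - mu) ** 2
    err_mean[t] = (x.mean() - mu) ** 2

ratio = err_bit.mean() / err_mean.mean()
print(ratio)  # ratio of MSEs; theory predicts pi/2 ≈ 1.57 asymptotically
```

The sample mean here plays the role of the unrestricted estimator; the simulated MSE ratio approaches π/2 as n grows.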
Distortion-Rate Function of Sub-Nyquist Sampled Gaussian Sources
The amount of information lost in sub-Nyquist sampling of a continuous-time
Gaussian stationary process is quantified. We consider a combined source coding
and sub-Nyquist reconstruction problem in which the input to the encoder is a
noisy sub-Nyquist sampled version of the analog source. We first derive an
expression for the mean squared error in the reconstruction of the process from
a noisy and information rate-limited version of its samples. This expression is
a function of the sampling frequency and the average number of bits describing
each sample. It is given as the sum of two terms: the minimum mean squared error
in estimating the source from its noisy but otherwise fully observed sub-Nyquist
samples, and a second term obtained by reverse waterfilling over an average of
spectral densities associated with the polyphase components of the source. We
extend this result to multi-branch uniform sampling, where the samples are
available through a set of parallel channels with a uniform sampler and a
pre-sampling filter in each branch. Further optimization to reduce distortion
is then performed over the pre-sampling filters, and an optimal set of
pre-sampling filters associated with the statistics of the input signal and the
sampling frequency is found. This results in an expression for the minimal
possible distortion achievable under any analog-to-digital conversion scheme
involving uniform sampling and linear filtering. These results thus unify the
Shannon-Whittaker-Kotelnikov sampling theorem and Shannon rate-distortion
theory for Gaussian sources.
Comment: Accepted for publication in the IEEE Transactions on Information Theory.
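The reverse-waterfilling step mentioned in the abstract can be sketched numerically. The spectral density, frequency grid, and function names below are illustrative assumptions: for a water level θ, a Gaussian source with spectral density S(f) has distortion D(θ) = ∫ min(θ, S(f)) df and rate R(θ) = ½ ∫ max(0, log₂(S(f)/θ)) df.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (written out to avoid NumPy-version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def reverse_waterfill(S, f, theta):
    """Distortion D and rate R (in bits) for water level theta, given
    spectral density values S sampled on the frequency grid f."""
    D = trapz(np.minimum(theta, S), f)
    # max(S, theta)/theta equals 1 wherever S <= theta, so the log vanishes there
    R = trapz(0.5 * np.log2(np.maximum(S, theta) / theta), f)
    return D, R

# Illustrative smooth spectrum on f in [-1/2, 1/2] (not taken from the paper)
f = np.linspace(-0.5, 0.5, 2001)
S = 1.0 + np.cos(2 * np.pi * f)
D, R = reverse_waterfill(S, f, theta=0.25)
print(D, R)
```

Sweeping θ traces the whole distortion-rate curve; for a flat (white) spectrum the computation reduces to the classical R(D) = ½ log₂(σ²/D).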
A note on a local ergodic theorem for an infinite tower of coverings
This is a note on a local ergodic theorem for a symmetric exclusion process
defined on an infinite tower of coverings, which is associated with a finitely
generated residually finite amenable group.
Comment: Final version to appear in Springer Proceedings in Mathematics and Statistics.
Maximum likelihood reconstruction for Ising models with asynchronous updates
We describe how the couplings in an asynchronous kinetic Ising model can be
inferred. We consider two cases, one in which we know both the spin history and
the update times and one in which we only know the spin history. For the first
case, we show that one can average over all possible choices of update times to
obtain a learning rule that depends only on spin correlations and can also be
derived from the equations of motion for the correlations. For the second case,
the same rule can be derived within a further decoupling approximation. We
study all methods numerically for fully asymmetric Sherrington-Kirkpatrick
models, varying the data length, system size, temperature, and external field.
Good convergence is observed, in accordance with the theoretical expectations.
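For the first case (spin history and update times both known), the maximum-likelihood approach can be sketched in a self-contained simulation. The system size, coupling scale, learning rate, and all names below are our own illustrative choices: each recorded update of spin i contributes the log-likelihood gradient s_j (s_i' − tanh Σ_j J_ij s_j) to the coupling J_ij.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 8, 20000                         # spins and number of asynchronous updates
J_true = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # asymmetric couplings

# --- simulate asynchronous Glauber dynamics, recording states and update times
s = rng.choice([-1.0, 1.0], size=N)
states, whos, news = [], [], []
for _ in range(T):
    i = int(rng.integers(N))            # which spin updates (update times known)
    p_up = 0.5 * (1.0 + np.tanh(J_true[i] @ s))
    new = 1.0 if rng.random() < p_up else -1.0
    states.append(s.copy()); whos.append(i); news.append(new)
    s[i] = new
states, whos, news = np.array(states), np.array(whos), np.array(news)

# --- maximum-likelihood gradient ascent on the per-update log-likelihood:
# d/dJ_ij log P(s_i -> s_i' | s) = s_j * (s_i' - tanh(J_i . s))
S_by = [states[whos == i] for i in range(N)]
y_by = [news[whos == i] for i in range(N)]
J_hat = np.zeros((N, N))
for _ in range(300):
    for i in range(N):
        resid = y_by[i] - np.tanh(S_by[i] @ J_hat[i])
        J_hat[i] += 0.5 * (resid @ S_by[i]) / T   # learning rate 0.5

corr = np.corrcoef(J_true.ravel(), J_hat.ravel())[0, 1]
print(corr)  # correlation between true and reconstructed couplings
```

The per-row objective is concave (it has the form of a logistic regression), so plain gradient ascent suffices; the averaging over update times discussed in the abstract replaces the known-times likelihood used in this sketch.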
Entropy and efficiency of a molecular motor model
In this paper we investigate the use of path-integral formalism and the
concepts of entropy and traffic in the context of molecular motors. We show
that together with time-reversal symmetry breaking arguments one can find
bounds on efficiencies of such motors. To clarify this technique, we use it on
one specific model to find both the thermodynamic and the Stokes efficiencies,
although the arguments themselves are more general and can be used on a wide
class of models. We also show that by considering the molecular motor as a
ratchet, one can find additional bounds on the thermodynamic efficiency.
Hydrodynamic limit for a boundary driven stochastic lattice gas model with many conserved quantities
We prove the hydrodynamic limit for a particle system in which particles may
have different velocities. We assume that we have two infinite reservoirs of
particles at the boundary: this is the so-called boundary driven process. The
dynamics we consider consists of a weakly asymmetric simple exclusion process
with collisions among particles having different velocities.