Syndrome-Based Encoding of Compressible Sources for M2M Communication
Data originating from many devices and sensors can be modeled as sparse signals, so efficient compression of such data is essential to reduce bandwidth and transmission power, especially for energy-constrained devices in machine-to-machine communication scenarios. This paper provides an accurate analysis of the operational distortion-rate function (ODR) for syndrome-based source encoders of noisy sparse sources. We derive the probability density function of the error due to both quantization and pre-quantization noise for a class of mixed-distribution sources combining a Bernoulli distribution with an arbitrary continuous distribution, e.g., Bernoulli-uniform sources. We then derive the ODR for two encoding schemes based on the syndromes of Reed-Solomon (RS) and Bose-Chaudhuri-Hocquenghem (BCH) codes. The presented analysis allows a quantizer to be designed so that a target average distortion is achieved. As confirmed by numerical results, the closed-form expression for the ODR coincides with the simulation, and the performance loss compared to an entropy-based encoder is tolerable.
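The syndrome mechanism at the heart of such encoders is easy to see with the simplest BCH code. Below is a minimal sketch, not the paper's RS/BCH construction with quantization and noise: a 1-sparse binary word is stored as its (7,4) Hamming-code syndrome and recovered exactly, compressing 7 bits to 3.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, the simplest BCH code.
# Column j (1-indexed) holds the binary digits of j, so the syndrome of a
# 1-sparse vector reads off the index of its nonzero entry directly.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode(x):
    """Compress a sparse binary source word to its 3-bit syndrome."""
    return H @ x % 2

def decode(s):
    """Recover the (at most) 1-sparse vector consistent with syndrome s."""
    x_hat = np.zeros(7, dtype=int)
    idx = int("".join(map(str, s)), 2)  # binary syndrome -> support index
    if idx > 0:
        x_hat[idx - 1] = 1
    return x_hat

x = np.zeros(7, dtype=int)
x[4] = 1                     # a 1-sparse realization of the source
s = encode(x)                # 7 source bits stored as 3 syndrome bits
assert np.array_equal(decode(s), x)
```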
An MDL framework for sparse coding and dictionary learning
The power of sparse signal modeling with learned over-complete dictionaries
has been demonstrated in a variety of applications and fields, from signal
processing to statistical inference and machine learning. However, the
statistical properties of these models, such as under-fitting or over-fitting
given sets of data, are still not well characterized in the literature. As a
result, the success of sparse modeling depends on hand-tuning critical
parameters for each data set and application. This work addresses this issue by
providing a practical and objective characterization of sparse models by means
of the Minimum Description Length (MDL) principle -- a well-established
information-theoretic approach to model selection in statistical inference. The
resulting framework derives a family of efficient sparse coding and dictionary
learning algorithms which, by virtue of the MDL principle, are completely
parameter-free. Furthermore, the framework makes it possible to incorporate
additional prior information into existing models, such as Markovian
dependencies, or to define entirely new problem formulations, including in the
matrix analysis area, in a natural way. These virtues will be demonstrated with
parameter-free algorithms for the classic image denoising and classification
problems, and for low-rank matrix recovery in video applications.
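The two-part-code idea can be sketched in a few lines: the description length is model bits (growing with the number of kept atoms) plus residual bits (shrinking with it), and the sparsity level minimizing the sum is selected with no tuning parameter. The codelength terms and the `mdl_sparse_code` helper below are illustrative assumptions over a fixed DCT dictionary, not the paper's actual formulas.

```python
import numpy as np
from scipy.fft import dct, idct

def mdl_sparse_code(y):
    """Choose the number of kept DCT coefficients by two-part MDL:
    total bits = bits to describe the model (grows with k) plus bits
    to describe the residual (shrinks with k). The codelength terms
    here are a toy stand-in for the paper's formulas."""
    n = len(y)
    c = dct(y, norm='ortho')
    order = np.argsort(-np.abs(c))          # coefficients by magnitude
    best_k, best_len = 0, np.inf
    for k in range(1, n):
        ck = np.zeros(n)
        ck[order[:k]] = c[order[:k]]
        rss = np.sum((y - idct(ck, norm='ortho')) ** 2)
        # ~log2(n) bits per index plus ~0.5*log2(n) per quantized value,
        # plus a Gaussian codelength for the residual.
        bits = 1.5 * k * np.log2(n) + 0.5 * n * np.log2(max(rss / n, 1e-12))
        if bits < best_len:
            best_k, best_len = k, bits
    return best_k

# Noisy signal that is 8-sparse in the DCT domain.
rng = np.random.default_rng(0)
n = 256
c0 = np.zeros(n)
c0[rng.choice(n, 8, replace=False)] = 5.0
y = idct(c0, norm='ortho') + 0.1 * rng.standard_normal(n)
print(mdl_sparse_code(y))   # typically selects k near 8, with no tuning
```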
Quantization and Compressive Sensing
Quantization is an essential step in digitizing signals, and, therefore, an
indispensable component of any modern acquisition system. This book chapter
explores the interaction of quantization and compressive sensing and examines
practical quantization strategies for compressive acquisition systems.
Specifically, we first provide a brief overview of quantization and examine
fundamental performance bounds applicable to any quantization approach. Next,
we consider several forms of scalar quantizers, namely uniform, non-uniform,
and 1-bit. We provide performance bounds and fundamental analysis, as well as
practical quantizer designs and reconstruction algorithms that account for
quantization. Furthermore, we provide an overview of Sigma-Delta
(ΣΔ) quantization in the compressed sensing context, and also
discuss implementation issues, recovery algorithms and performance bounds. As
we demonstrate, proper accounting for quantization and careful quantizer design
have a significant impact on the performance of a compressive acquisition system.
Comment: 35 pages, 20 figures, to appear in the Springer book "Compressed Sensing
and Its Applications", 201
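As a minimal illustration of the uniform scalar case, the sketch below quantizes compressive measurements with step `delta` and reconstructs with iterative hard thresholding, an assumed generic baseline rather than one of the chapter's quantization-aware designs; the reconstruction error floor is governed by `delta`.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 5                       # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

delta = 0.05                               # uniform mid-rise quantizer step
q = delta * (np.floor(A @ x / delta) + 0.5)

def iht(y, A, k, iters=300, step=0.8):
    """Iterative hard thresholding run on the quantized measurements."""
    x_hat = np.zeros(A.shape[1])
    for _ in range(iters):
        x_hat = x_hat + step * (A.T @ (y - A @ x_hat))   # gradient step
        keep = np.argsort(-np.abs(x_hat))[:k]            # keep k largest
        pruned = np.zeros_like(x_hat)
        pruned[keep] = x_hat[keep]
        x_hat = pruned
    return x_hat

x_hat = iht(q, A, k)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))     # floor set by delta
```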
Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate several techniques to accelerate the algorithm while
providing comparable, and in many cases better, reconstruction quality than
existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility.
Comment: 29 pages, 8 figures
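A toy version of the idea: a Metropolis sampler explores discretized signals, trading data fit against an empirical-entropy codelength that stands in for the universal prior. The `code_length` and `mcmc_map` helpers below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from collections import Counter

def code_length(x_disc):
    """Empirical-entropy codelength (in bits) of a discretized signal;
    a crude stand-in for a universal prior over stationary sources."""
    n = len(x_disc)
    counts = Counter(x_disc)
    return -sum(c * np.log2(c / n) for c in counts.values())

def mcmc_map(y, A, levels, sigma2, iters=20000, seed=0):
    """Metropolis sampler targeting exp(-||y - Ax||^2 / (2 sigma2)
    - ln(2) * code_length(x)), with single-site proposals on the grid.
    Re-evaluating the energy at every step is what makes the exact
    approach computationally challenging."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = rng.choice(levels, size=n)

    def energy(x):
        r = y - A @ x
        return r @ r / (2 * sigma2) + np.log(2) * code_length(tuple(x))

    e = energy(x)
    for _ in range(iters):
        i = rng.integers(n)
        prop = x.copy()
        prop[i] = rng.choice(levels)
        e_prop = energy(prop)
        if e_prop < e or rng.random() < np.exp(e - e_prop):  # accept rule
            x, e = prop, e_prop
    return x

# Example: a 3-level source measured with a random matrix.
rng = np.random.default_rng(1)
n, m = 60, 40
levels = np.array([-1.0, 0.0, 1.0])
x_true = rng.choice(levels, size=n, p=[0.1, 0.8, 0.1])
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.05 * rng.standard_normal(m)
x_map = mcmc_map(y, A, levels, sigma2=0.05**2)
```

For a low-complexity source drawn from a small alphabet, the entropy term steers the sampler toward simple explanations of the measurements, which is the complexity-matching behavior the abstract describes.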
Stochastic approximation of score functions for Gaussian processes
We discuss the statistical properties of a recently introduced unbiased
stochastic approximation to the score equations for maximum likelihood
calculation for Gaussian processes. Under certain conditions, including bounded
condition number of the covariance matrix, the approach achieves O(n) storage
and nearly O(n) computational effort per optimization step, where n is the
number of data sites. Here, we prove that if the condition number of the
covariance matrix is bounded, then the approximate score equations are nearly
optimal in a well-defined sense. Therefore, not only is the approximation
efficient to compute, but it also has comparable statistical properties to the
exact maximum likelihood estimates. We discuss a modification of the stochastic
approximation in which design elements of the stochastic terms mimic patterns
from a factorial design. We prove these designs are always at least as
good as the unstructured design, and we demonstrate through simulation that
they can produce a substantial improvement over random designs. Our findings
are validated by numerical experiments on simulated data sets of up to 1
million observations. We apply the approach to fit a space-time model to over
80,000 observations of total column ozone contained in a latitude band in the
Northern Hemisphere during April 2012.
Comment: Published at http://dx.doi.org/10.1214/13-AOAS627 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
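The computational trick can be sketched with Hutchinson-style probing: the trace in the Gaussian score is replaced by an average over random ±1 vectors, which is unbiased and requires only solves against the covariance. The dense `solve` calls below stand in for the iterative solvers that give the method its near-O(n) cost; this is an assumed simplification, not the authors' implementation.

```python
import numpy as np

def stochastic_score(y, K, dK, n_probes=32, seed=0):
    """Unbiased estimate of the Gaussian score
    dl/dtheta = 0.5 * (y' Kinv dK Kinv y - tr(Kinv dK)),
    with the trace replaced by an average over symmetric Bernoulli
    probes u:  tr(Kinv dK) ~= mean over u of  u' Kinv dK u."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Kinv_y = np.linalg.solve(K, y)
    quad = 0.5 * (Kinv_y @ dK @ Kinv_y)       # exact quadratic term
    trace_est = 0.0
    for _ in range(n_probes):
        u = rng.choice([-1.0, 1.0], size=n)   # +-1 probe vector
        trace_est += u @ np.linalg.solve(K, dK @ u)
    return quad - 0.5 * trace_est / n_probes
```

Swapping the i.i.d. probes for vectors patterned on a factorial design, as the paper proposes, is shown there to be at least as good as the unstructured choice.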
The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing
Recovery of the sparsity pattern (or support) of an unknown sparse vector
from a limited number of noisy linear measurements is an important problem in
compressed sensing. In the high-dimensional setting, it is known that recovery
with a vanishing fraction of errors is impossible if the measurement rate and
the per-sample signal-to-noise ratio (SNR) are finite constants, independent of
the vector length. In this paper, it is shown that recovery with an arbitrarily
small but constant fraction of errors is, however, possible, and that in some
cases computationally simple estimators are near-optimal. Bounds on the
measurement rate needed to attain a desired fraction of errors are given in
terms of the SNR and various key parameters of the unknown vector for several
different recovery algorithms. The tightness of the bounds, in a scaling sense,
as a function of the SNR and the fraction of errors, is established by
comparison with existing information-theoretic necessary bounds.
Near-optimality is shown for a wide variety of practically motivated signal models
- …
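A small numeric experiment in this setting, using correlation thresholding as an assumed stand-in for the computationally simple estimators the abstract refers to, and reporting the fraction of missed support entries:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 1000, 300, 50                 # length, measurements, sparsity
snr = 10.0                              # per-sample signal-to-noise ratio

x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = np.sqrt(snr) * rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # unit-norm columns on average
y = A @ x + rng.standard_normal(m)             # unit-variance noise

# Computationally simple estimator: keep the k largest correlations.
support_hat = np.argsort(-np.abs(A.T @ y))[:k]

# Fraction of missed support entries; at finite measurement rate and SNR
# it stays a small but nonzero constant, as the abstract describes.
missed = len(set(support) - set(support_hat))
print(missed / k)
```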