Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
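As an illustrative sketch of the single-pass sketching idea (not the paper's exact operator), a collection of generalized moments can be approximated with random Fourier features of the empirical distribution. All names, sizes, and frequency choices below are hypothetical; the point is that the sketch is computable in one streaming pass and merges exactly across distributed chunks:

```python
import numpy as np

def compute_sketch(stream, omegas):
    """Accumulate z_j = (1/n) * sum_i exp(i * omega_j . x_i), one generalized
    moment per random frequency, in a single pass over the data."""
    acc = np.zeros(omegas.shape[0], dtype=complex)
    n = 0
    for x in stream:                     # works on any iterable / data stream
        acc += np.exp(1j * omegas @ x)
        n += 1
    return acc / n, n

rng = np.random.default_rng(0)
omegas = rng.normal(size=(64, 2))        # 64 random frequencies, dimension 2
X = rng.normal(size=(1000, 2))

# The sketch is linear in the empirical distribution, so partial sketches
# computed on distributed chunks merge exactly into the full sketch.
s_full, n_full = compute_sketch(X, omegas)
s_a, n_a = compute_sketch(X[:400], omegas)
s_b, n_b = compute_sketch(X[400:], omegas)
s_merged = (n_a * s_a + n_b * s_b) / (n_a + n_b)
```

The mergeability shown in the last line is what makes the sketch usable on streams and distributed datasets, as the abstract claims.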
Compressive Measurement Designs for Estimating Structured Signals in Structured Clutter: A Bayesian Experimental Design Approach
This work considers an estimation task in compressive sensing, where the goal
is to estimate an unknown signal from compressive measurements that are
corrupted by additive pre-measurement noise (interference, or clutter) as well
as post-measurement noise, in the specific setting where some (perhaps limited)
prior knowledge on the signal, interference, and noise is available. The
specific aim here is to devise a strategy for incorporating this prior
information into the design of an appropriate compressive measurement strategy.
Here, the prior information is interpreted as statistics of a prior
distribution on the relevant quantities, and an approach based on Bayesian
Experimental Design is proposed. Experimental results on synthetic data
demonstrate that the proposed approach outperforms traditional random
compressive measurement designs, which are agnostic to the prior information,
as well as several other knowledge-enhanced sensing matrix designs based on
more heuristic notions.
Comment: 5 pages, 4 figures. Accepted for publication at The Asilomar Conference on Signals, Systems, and Computers 201
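A minimal sketch of the kind of knowledge-enhanced design the abstract compares against random designs, under simplifying assumptions (Gaussian prior on the signal, white post-measurement noise, no clutter term): rows of the measurement matrix are aligned with the top eigenvectors of the signal's prior covariance, and designs are compared by the Bayes MSE of the linear MMSE estimator. Everything here is a hypothetical illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
m, l, s2 = 20, 4, 0.1                        # ambient dim, measurements, noise var

# Prior knowledge: a strongly anisotropic Gaussian prior on the signal.
U, _ = np.linalg.qr(rng.normal(size=(m, m)))
eigs = 10.0 * np.array([10.0 ** (-i / 4) for i in range(m)])
S_x = U @ np.diag(eigs) @ U.T

def bayes_mse(M):
    """Trace of the posterior covariance under the linear MMSE estimator
    x_hat = S_x M^T (M S_x M^T + s2 I)^{-1} y."""
    G = M @ S_x @ M.T + s2 * np.eye(l)
    post = S_x - S_x @ M.T @ np.linalg.solve(G, M @ S_x)
    return np.trace(post)

M_designed = U[:, :l].T                      # rows = top prior eigenvectors
M_random = rng.normal(size=(l, m))
M_random *= np.linalg.norm(M_designed) / np.linalg.norm(M_random)  # equal power

mse_d = bayes_mse(M_designed)
mse_r = bayes_mse(M_random)
```

With an anisotropic prior, the designed matrix concentrates its measurement power on the directions the prior says matter, which is the intuition behind outperforming prior-agnostic random designs.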
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While a handful of
Bayesian dynamic CS algorithms have previously been proposed in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems.
Comment: 32 pages, 7 figures
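To illustrate the message-passing machinery the abstract builds on (basic AMP with a soft-thresholding denoiser for a single static sparse recovery, not DCS-AMP itself), here is a hypothetical sketch; note the per-iteration cost is dominated by two matrix-vector products, i.e. linear in the problem dimensions:

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iter=30, alpha=2.0):
    """Basic AMP for y = A x + w with sparse x."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        pseudo = x + A.T @ z                          # effective observation
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)  # threshold from residual
        x_new = soft(pseudo, tau)
        onsager = z * np.count_nonzero(x_new) / m     # Onsager correction
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(2)
m, n, k = 250, 500, 15
A = rng.normal(size=(m, n)) / np.sqrt(m)      # i.i.d. Gaussian sensing matrix
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=m)
x_hat = amp(y, A)
```

DCS-AMP extends this kind of iteration with amplitude and support correlation across time steps and EM-based parameter learning, per the abstract.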
Communications-Inspired Projection Design with Application to Compressive Sensing
We consider the recovery of an underlying signal x \in C^m based on
projection measurements of the form y = Mx + w, where y \in C^l and w is
measurement noise; we are interested in the case l < m. It is assumed that the
signal model p(x) is known and that w ~ CN(w; 0, S_w) for known S_w. The
objective is to design a projection matrix M \in C^(l x m) to maximize key
information-theoretic quantities with operational significance, including the
mutual information between the signal and the projections I(x;y) or the Renyi
entropy of the projections h_a(y) (Shannon entropy is a special case). By
capitalizing on explicit characterizations of the gradients of the information
measures with respect to the projection matrix, where we also partially extend
the well-known results of Palomar and Verdu from the mutual information to the
Renyi entropy domain, we unveil the key operations carried out by the optimal
projections designs: mode exposure and mode alignment. Experiments are
considered for the case of compressive sensing (CS) applied to imagery. In this
context, we provide a demonstration of the performance improvement possible
through the application of the novel projection designs in relation to
conventional ones, as well as justification for a fast online projections
design method with which state-of-the-art adaptive CS signal recovery is
achieved.
Comment: 25 pages, 7 figures, parts of material published in IEEE ICASSP 2012, submitted to SIIM
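In the Gaussian special case, the mutual information the abstract optimizes has the closed form I(x;y) = (1/2) logdet(I + (1/s_w) M S_x M^H) for white noise. The hypothetical sketch below (all sizes and priors are illustrative) compares a random projection against one that "exposes" the strongest prior modes at equal measurement power:

```python
import numpy as np

rng = np.random.default_rng(3)
m, l = 16, 3
s_w = 0.05                                     # white post-measurement noise var

# Gaussian signal prior with a few dominant modes.
Q, _ = np.linalg.qr(rng.normal(size=(m, m)))
S_x = Q @ np.diag(np.linspace(4.0, 0.05, m) ** 2) @ Q.T

def mutual_info(M):
    """I(x;y) = 0.5 * logdet(I + (1/s_w) M S_x M^T) for y = Mx + w."""
    _, logdet = np.linalg.slogdet(np.eye(l) + (M @ S_x @ M.T) / s_w)
    return 0.5 * logdet

M_rand = rng.normal(size=(l, m))
M_rand /= np.linalg.norm(M_rand)               # unit total power
M_aligned = Q[:, :l].T / np.sqrt(l)            # expose the l strongest modes
i_rand, i_aligned = mutual_info(M_rand), mutual_info(M_aligned)
```

The gap between the two values is a toy instance of the "mode exposure and mode alignment" behavior the abstract attributes to the optimal designs.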
Bayesian compressive sensing framework for spectrum reconstruction in Rayleigh fading channels
Compressive sensing (CS) is a novel digital signal processing technique that has found great interest in
many applications including communication theory and wireless communications. In wireless communications, CS
is particularly suitable for its application in the area of spectrum sensing for cognitive radios, where the complete
spectrum under observation, with many spectral holes, can be modeled as a sparse wide-band signal in the frequency
domain. Although initial work has been done to exploit the benefits of
Bayesian CS in spectrum sensing, the fading characteristics of wireless
channels have not yet been considered to a great extent, even though fading is
inherent to all wireless communications and must be accounted for in the
design of any practically viable wireless system.
In this paper, we extend the Bayesian CS framework for the recovery of a sparse signal, whose nonzero coefficients follow
a Rayleigh distribution. It is then demonstrated via simulations that the
mean square error improves significantly when an appropriate prior
distribution is used for the faded signal coefficients, which in turn
improves the spectrum reconstruction. Different parameters of the system
model, e.g., the sparsity level and the number of measurements, are then
varied to show the consistency of the results across different cases.
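The signal model the abstract assumes can be sketched directly: a sparse wide-band spectrum whose occupied bins carry Rayleigh-distributed magnitudes, the envelope of a complex Gaussian fading coefficient. The sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, sigma = 1024, 50, 1.0                    # bins, occupied bins, fading scale

# Sparse wide-band spectrum: most bins are empty (spectral holes), occupied
# bins carry Rayleigh-distributed magnitudes (Rayleigh fading envelope).
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.rayleigh(scale=sigma, size=k)

# The Rayleigh envelope arises as |h| with h = hr + 1j*hi, hr, hi ~ N(0, sigma^2);
# its mean is sigma * sqrt(pi/2), which an empirical average should approach.
h = sigma * (rng.normal(size=100_000) + 1j * rng.normal(size=100_000))
emp_mean = np.abs(h).mean()
```

Using this Rayleigh prior on the nonzero coefficients, rather than a generic sparsity prior, is the modeling step the abstract credits for the improved mean square error.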
Statistical Compressive Sensing of Gaussian Mixture Models
A new framework of compressive sensing (CS), namely statistical compressive
sensing (SCS), that aims at efficiently sampling a collection of signals that
follow a statistical distribution and achieving accurate reconstruction on
average, is introduced. For signals following a Gaussian distribution, with
Gaussian or Bernoulli sensing matrices of O(k) measurements (considerably
fewer than the O(k log(N/k)) required by conventional CS, where N is the
signal dimension) and with an optimal decoder implemented via linear
filtering (significantly faster than the pursuit decoders used in
conventional CS), the error of SCS is shown to be tightly upper bounded by a
constant times the best k-term approximation error, with overwhelming
probability. The failure probability is also significantly smaller than that
of conventional CS. Stronger yet simpler results further show that, for any
sensing matrix, the error of Gaussian SCS is upper bounded by a constant
times the best k-term approximation error with probability one, and the
bound constant can be efficiently calculated. For signals following Gaussian
mixture models, SCS with a piecewise linear decoder is introduced and shown
to produce better results on real images than conventional CS based on
sparse models.
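For a single Gaussian model, the "optimal decoder implemented with linear filtering" is the Wiener/MMSE estimate, computable in closed form with no iterative pursuit. A hypothetical sketch under an assumed fast-decaying prior spectrum (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N, k = 64, 8                                   # signal dimension, sparsity proxy

# Gaussian signal model with a fast-decaying spectrum (compressible on average).
V, _ = np.linalg.qr(rng.normal(size=(N, N)))
S = V @ np.diag(1.0 / (np.arange(N) + 1.0) ** 2) @ V.T

M = rng.normal(size=(2 * k, N)) / np.sqrt(2 * k)   # Gaussian sensing, O(k) rows
s2 = 1e-6                                          # small measurement noise var
x = rng.multivariate_normal(np.zeros(N), S)
y = M @ x + np.sqrt(s2) * rng.normal(size=2 * k)

# Optimal decoder for a Gaussian signal is a single linear filter:
# x_hat = S M^T (M S M^T + s2 I)^{-1} y  (no pursuit iterations needed).
G = M @ S @ M.T + s2 * np.eye(2 * k)
x_hat = S @ M.T @ np.linalg.solve(G, y)
```

For Gaussian mixture models, the abstract's piecewise linear decoder would apply one such filter per mixture component and select among them; the sketch above covers only the single-Gaussian case.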