Compressive Measurement Designs for Estimating Structured Signals in Structured Clutter: A Bayesian Experimental Design Approach
This work considers an estimation task in compressive sensing, where the goal
is to estimate an unknown signal from compressive measurements that are
corrupted by additive pre-measurement noise (interference, or clutter) as well
as post-measurement noise, in the specific setting where some (perhaps limited)
prior knowledge on the signal, interference, and noise is available. The
specific aim here is to devise a strategy for incorporating this prior
information into the design of an appropriate compressive measurement strategy.
Here, the prior information is interpreted as statistics of a prior
distribution on the relevant quantities, and an approach based on Bayesian
Experimental Design is proposed. Experimental results on synthetic data
demonstrate that the proposed approach outperforms traditional random
compressive measurement designs, which are agnostic to the prior information,
as well as several other knowledge-enhanced sensing matrix designs based on
more heuristic notions.
Comment: 5 pages, 4 figures. Accepted for publication at The Asilomar Conference on Signals, Systems, and Computers 201
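To make the design principle concrete, here is a minimal numpy sketch of prior-informed versus random compressive measurement design for a linear Gaussian model. This is an illustrative toy, not the paper's algorithm; the priors, dimensions, and the "align rows with the top signal-prior eigenvectors" heuristic are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5                       # signal dimension, number of compressive measurements

# Assumed priors (for illustration only): strongly anisotropic signal prior,
# weak isotropic clutter, weak post-measurement noise.
eigvals = 10.0 * 0.5 ** np.arange(n)
Sigma_x = np.diag(eigvals)         # signal prior covariance
Sigma_c = 0.01 * np.eye(n)         # pre-measurement clutter covariance
sigma2 = 0.01                      # post-measurement noise variance

def mmse(Phi):
    """Trace of the posterior error covariance of x given y = Phi (x + c) + n."""
    S = Phi @ (Sigma_x + Sigma_c) @ Phi.T + sigma2 * np.eye(Phi.shape[0])
    P = Sigma_x - Sigma_x @ Phi.T @ np.linalg.solve(S, Phi @ Sigma_x)
    return np.trace(P)

Phi_random = rng.standard_normal((m, n)) / np.sqrt(n)   # prior-agnostic design
# Prior-informed heuristic: rows = top eigenvectors of Sigma_x
# (here simply the first m coordinates, since Sigma_x is diagonal and decreasing).
Phi_informed = np.eye(n)[:m]

print(mmse(Phi_random), mmse(Phi_informed))
```

With a sharply decaying signal prior, the informed design concentrates its few measurements where the prior says the energy is, and its posterior MSE is far below that of the random design.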
Signal Recovery From 1-Bit Quantized Noisy Samples via Adaptive Thresholding
In this paper, we consider the problem of signal recovery from 1-bit noisy
measurements. We present an efficient method to obtain an estimation of the
signal of interest when the measurements are corrupted by white or colored
noise. To the best of our knowledge, the proposed framework is the first in the
area of 1-bit sampling and signal recovery to provide a unified treatment of
noise with an arbitrary covariance matrix, including colored noise. The
proposed method is based on a constrained quadratic program (CQP) formulation
with an adaptive quantization-thresholding approach, which enables accurate
recovery of the signal of interest from its 1-bit noisy measurements. In addition,
due to the adaptive nature of the proposed method, it can recover both fixed
and time-varying parameters from their quantized 1-bit samples.
Comment: This is a pre-print version of the original conference paper that has been accepted at the 2018 IEEE Asilomar Conference on Signals, Systems, and Computers
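The flavor of 1-bit recovery with known thresholds can be sketched in a few lines. The example below is not the paper's CQP method: it minimizes a simple one-sided quadratic consistency loss by gradient descent, with dithered thresholds, noise level, and dimensions all chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10, 400                     # signal dimension, number of 1-bit measurements

x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)

A = rng.standard_normal((m, n))    # sensing matrix
tau = rng.standard_normal(m)       # known (dithered) quantization thresholds
noise = 0.05 * rng.standard_normal(m)
bits = np.sign(A @ x_true + noise - tau)   # 1-bit noisy measurements

def recover(A, tau, bits, iters=1000, lr=0.5):
    """Gradient descent on the one-sided quadratic consistency loss
    0.5 * sum_i min(b_i * (a_i^T x - tau_i), 0)^2 (illustrative, not the paper's CQP)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        viol = np.minimum(bits * (A @ x - tau), 0.0)   # negative only where a sign is violated
        x -= lr * (A.T @ (bits * viol)) / len(tau)
    return x

x_hat = recover(A, tau, bits)
```

Because the thresholds are known and varied, the signal amplitude (not just its direction) is identifiable from the sign data, which is the intuition behind threshold-adaptive 1-bit sampling.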
Multiple and single snapshot compressive beamforming
For a sound field observed on a sensor array, compressive sensing (CS)
reconstructs the direction-of-arrival (DOA) of multiple sources using a
sparsity constraint. The DOA estimation is posed as an underdetermined problem
by expressing the acoustic pressure at each sensor as a phase-lagged
superposition of source amplitudes at all hypothetical DOAs. Regularizing with
an ℓ1-norm constraint renders the problem solvable with convex
optimization, and promoting sparsity gives high-resolution DOA maps. Here, the
sparse source distribution is derived using maximum a posteriori (MAP)
estimates for both single and multiple snapshots. CS does not require inversion
of the data covariance matrix and thus works well even for a single snapshot
where it gives higher resolution than conventional beamforming. For multiple
snapshots, CS outperforms conventional high-resolution methods, even with
coherent arrivals and at low signal-to-noise ratio. The superior resolution of
CS is demonstrated with vertical array data from the SWellEx96 experiment for
coherent multi-paths.
Comment: In press, The Journal of the Acoustical Society of America
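A single-snapshot compressive beamformer of this kind can be sketched with a plane-wave dictionary and an ℓ1-regularized least-squares solve. The sketch below uses ISTA on a simulated half-wavelength uniform line array; the grid, source placement, noise level, and regularization weight are assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, spacing = 16, 0.5                   # ULA, half-wavelength spacing
grid = np.deg2rad(np.arange(-90, 91))          # 1-degree DOA grid (assumed)
A = np.exp(-2j * np.pi * spacing *
           np.outer(np.arange(n_sensors), np.sin(grid)))   # steering matrix

true_idx = [60, 110]                           # sources at -30 and +20 degrees
s_true = np.zeros(len(grid), dtype=complex)
s_true[true_idx] = [1.0, 0.8]
y = A @ s_true + 0.05 * (rng.standard_normal(n_sensors)
                         + 1j * rng.standard_normal(n_sensors))

# ISTA for min_s 0.5*||y - A s||^2 + lam*||s||_1, with complex soft-thresholding
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
lam = 0.5
s = np.zeros(len(grid), dtype=complex)
for _ in range(5000):
    r = s - A.conj().T @ (A @ s - y) / L       # gradient step on the data fit
    s = np.maximum(np.abs(r) - lam / L, 0.0) * np.exp(1j * np.angle(r))
p = np.abs(s)                                  # sparse DOA map
```

No data covariance matrix is inverted anywhere: a single snapshot y suffices, which is the point made in the abstract about CS versus covariance-based high-resolution methods.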
Analysis of Fisher Information and the Cram\'{e}r-Rao Bound for Nonlinear Parameter Estimation after Compressed Sensing
In this paper, we analyze the impact of compressed sensing with complex
random matrices on Fisher information and the Cram\'{e}r-Rao Bound (CRB) for
estimating unknown parameters in the mean value function of a complex
multivariate normal distribution. We consider the class of random compression
matrices whose distribution is right-orthogonally invariant. The compression
matrix whose elements are i.i.d. standard normal random variables is one such
matrix. We show that for all such compression matrices, the Fisher information
matrix has a complex matrix beta distribution. We also derive the distribution
of CRB. These distributions can be used to quantify the loss in CRB as a
function of the Fisher information of the non-compressed data. In our numerical
examples, we consider a direction of arrival estimation problem and discuss the
use of these distributions as guidelines for choosing compression ratios based
on the resulting loss in CRB.
Comment: 12 pages, 3 figures
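The Fisher-information loss under random compression is easy to probe by Monte Carlo. The sketch below uses a single real parameter in the mean of a complex Gaussian with identity covariance, so the paper's matrix-beta result reduces to a scalar ratio whose mean one expects to be m/n; the dimensions and trial count are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, trials = 32, 8, 2000         # ambient dim, compressed dim, Monte Carlo draws
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # d mu / d theta

fim_full = 2 * np.vdot(h, h).real  # Fisher information without compression

ratios = np.empty(trials)
for t in range(trials):
    # i.i.d. complex Gaussian compression matrix (right-orthogonally invariant)
    A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    Ah = A @ h
    # Compressed FIM: the noise covariance becomes A A^H after compression
    fim_c = 2 * np.vdot(Ah, np.linalg.solve(A @ A.conj().T, Ah)).real
    ratios[t] = fim_c / fim_full

print(ratios.mean())               # expect roughly m/n = 0.25 in this scalar case
```

The ratio is the squared norm of the projection of h onto a random m-dimensional complex subspace, which is why it never exceeds 1 and concentrates around m/n; the CRB loss is its reciprocal.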
Compressive Privacy for a Linear Dynamical System
We consider a linear dynamical system in which the state vector consists of
both public and private states. One or more sensors make measurements of the
state vector and send information to a fusion center, which performs the final
state estimation. To achieve an optimal tradeoff between the utility of
estimating the public states and protection of the private states, the
measurements at each time step are linearly compressed into a lower dimensional
space. Under the centralized setting where all measurements are collected by a
single sensor, we propose an optimization problem and an algorithm to find the
best compression matrix. Under the decentralized setting where measurements are
made separately at multiple sensors, each sensor optimizes its own local
compression matrix. We propose methods to separate the overall optimization
problem into multiple sub-problems that can be solved locally at each sensor.
We consider the cases where there is no message exchange between the sensors;
and where each sensor takes turns to transmit messages to the other sensors.
Simulations and empirical experiments demonstrate the efficiency of our
proposed approach in allowing the fusion center to estimate the public states
with good accuracy while preventing it from estimating the private states
accurately.
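A static, single-step analogue of this utility/privacy tradeoff fits in a few lines. The sketch below is not the paper's dynamical (Kalman-filter) setting or its optimization algorithm: it compares a hand-built compression that discards the private coordinates against a prior-agnostic random compression, with the priors and noise level assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pub, n_priv = 2, 2
n = n_pub + n_priv
sigma2 = 0.01                      # measurement noise variance (assumed)
Sx = np.eye(n)                     # unit-variance prior on all states (assumed)

def posterior_var(C):
    """Per-state LMMSE error variances at the fusion center given z = C (x + noise)."""
    S = C @ (Sx + sigma2 * np.eye(n)) @ C.T
    P = Sx - Sx @ C.T @ np.linalg.solve(S, C @ Sx)
    return np.diag(P)

# Privacy-aware compression: keep only the public coordinates.
C_priv = np.hstack([np.eye(n_pub), np.zeros((n_pub, n_priv))])
# Prior-agnostic compression to the same dimension.
C_rand = rng.standard_normal((n_pub, n))

v_priv, v_rand = posterior_var(C_priv), posterior_var(C_rand)
```

Under C_priv the fusion center estimates the public states almost perfectly while its error on the private states stays at the prior variance (no leakage); the random compression leaks private information, which is the gap the paper's optimized compression matrices are designed to close over time.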
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
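The sketching step itself (though not the GMM-fitting algorithm, which is the harder part) is simple to illustrate: a vector of generalized moments built from random Fourier features, computable in one pass and mergeable across chunks or machines. The sketch size, frequency distribution, and toy data below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
d, k = 2, 64                         # data dimension, sketch size (assumed)
Omega = rng.standard_normal((k, d))  # random frequencies defining the moments

def sketch(X):
    """Empirical sketch: mean of complex exponentials (random Fourier moments)."""
    return np.exp(1j * X @ Omega.T).mean(axis=0)

# Toy two-component mixture along the first coordinate.
X = rng.standard_normal((10000, d)) \
    + np.array([3.0, 0.0]) * (rng.random((10000, 1)) < 0.5)

z_full = sketch(X)

# Streaming version: the identical sketch from chunks via a running mean,
# one pass over the data, constant memory in the dataset size.
z_stream, seen = np.zeros(k, dtype=complex), 0
for chunk in np.array_split(X, 10):
    z_stream = (z_stream * seen
                + np.exp(1j * chunk @ Omega.T).sum(axis=0)) / (seen + len(chunk))
    seen += len(chunk)
```

Because the sketch is a plain average, chunks can be processed on different machines and merged by a weighted mean, which is what makes it suitable for streams and distributed datasets.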