Compressive Measurement Designs for Estimating Structured Signals in Structured Clutter: A Bayesian Experimental Design Approach
This work considers an estimation task in compressive sensing, where the goal
is to estimate an unknown signal from compressive measurements that are
corrupted by additive pre-measurement noise (interference, or clutter) as well
as post-measurement noise, in the specific setting where some (perhaps limited)
prior knowledge of the signal, interference, and noise is available. The aim
here is to devise a way of incorporating this prior information into the
design of an appropriate compressive measurement strategy.
Here, the prior information is interpreted as statistics of a prior
distribution on the relevant quantities, and an approach based on Bayesian
Experimental Design is proposed. Experimental results on synthetic data
demonstrate that the proposed approach outperforms traditional random
compressive measurement designs, which are agnostic to the prior information,
as well as several other knowledge-enhanced sensing matrix designs based on
more heuristic notions.
Comment: 5 pages, 4 figures. Accepted for publication at the Asilomar Conference on Signals, Systems, and Computers 201
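The design idea in this abstract can be illustrated with a small numerical sketch. The assumptions below are mine, not the paper's: jointly Gaussian priors, diagonal covariances, and a "knowledge-enhanced" design that simply places measurement rows along the high-prior-variance signal directions; the paper's Bayesian Experimental Design criterion is more general.

```python
import numpy as np

def bayes_mmse(A, sig_x, sig_c, noise_var):
    """Bayes MMSE for estimating x from y = A(x + c) + n under Gaussian priors:
    x ~ N(0, sig_x), clutter c ~ N(0, sig_c), noise n ~ N(0, noise_var * I)."""
    m = A.shape[0]
    cov_y = A @ (sig_x + sig_c) @ A.T + noise_var * np.eye(m)
    cross = sig_x @ A.T                                   # Cov(x, y)
    post_cov = sig_x - cross @ np.linalg.solve(cov_y, cross.T)
    return np.trace(post_cov)

# Priors: signal energy concentrated in the first coordinates, mild clutter.
sig_x = np.diag([10.0, 5.0, 1.0, 0.5, 0.1])
sig_c = 0.1 * np.eye(5)
noise_var = 0.1

# Knowledge-enhanced design: measure along the high-prior-variance directions.
A_designed = np.eye(5)[:2]
# Prior-agnostic design: measure along directions the prior says carry little signal.
A_agnostic = np.eye(5)[3:]

mmse_designed = bayes_mmse(A_designed, sig_x, sig_c, noise_var)
mmse_agnostic = bayes_mmse(A_agnostic, sig_x, sig_c, noise_var)
print(mmse_designed, mmse_agnostic)  # the prior-informed design leaves far less error
```

With the same measurement budget, the design that exploits the prior covariance reduces the Bayes MMSE by an order of magnitude relative to the prior-agnostic one.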
Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism
Testing for the significance of a subset of regression coefficients in a
linear model, a staple of statistical analysis, goes back at least to the work
of Fisher who introduced the analysis of variance (ANOVA). We study this
problem under the assumption that the coefficient vector is sparse, a common
situation in modern high-dimensional settings. Suppose we have $p$ covariates
and that under the alternative, the response only depends upon the order of
$p^{1-\alpha}$ of those, $0 \le \alpha \le 1$. Under moderate sparsity levels, that
is, $0 \le \alpha \le 1/2$, we show that ANOVA is essentially optimal under some
conditions on the design. This is no longer the case under strong sparsity
constraints, that is, $\alpha > 1/2$. In such settings, a multiple comparison
procedure is often preferred and we establish its optimality when
$\alpha \ge 3/4$. However, these two very popular methods are suboptimal, and
sometimes powerless, under moderately strong sparsity where $1/2 < \alpha < 3/4$.
We suggest a method based on the higher criticism that is powerful in the whole
range $\alpha > 1/2$. This optimality property is true for a variety of designs,
including the classical (balanced) multi-way designs and more modern "$p > n$"
designs arising in genetics and signal processing. In addition to the standard
fixed effects model, we establish similar results for a random effects model
where the nonzero coefficients of the regression vector are normally
distributed.
Comment: Published at http://dx.doi.org/10.1214/11-AOS910 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
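The higher criticism statistic this abstract builds on is short to state in code. The sketch below is a simplified variant of the Donoho–Jin statistic (my simplification: the maximization runs over the smaller half of the sorted p-values), illustrating why it detects sparse alternatives that a single comparison would miss.

```python
import numpy as np

def higher_criticism(pvals):
    """Higher criticism: maximum standardized deviation between the empirical
    distribution of the sorted p-values and the uniform distribution."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return np.max(hc[: n // 2])          # maximize over the smaller p-values

n = 1000
null_p = (np.arange(1, n + 1) - 0.5) / n   # "perfectly uniform" null p-values
sparse_p = null_p.copy()
sparse_p[:5] = 1e-6                        # a handful of strong effects
print(higher_criticism(null_p), higher_criticism(sparse_p))
```

Under the null the statistic stays small, while a few very significant p-values among a thousand push it up sharply, which is the sparse-alternative regime where the abstract shows ANOVA and multiple comparisons can both be powerless.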
A Max-Product EM Algorithm for Reconstructing Markov-tree Sparse Signals from Compressive Samples
We propose a Bayesian expectation-maximization (EM) algorithm for
reconstructing Markov-tree sparse signals via belief propagation. The
measurements follow an underdetermined linear model where the
regression-coefficient vector is the sum of an unknown approximately sparse
signal and a zero-mean white Gaussian noise with an unknown variance. The
signal is composed of large- and small-magnitude components identified by
binary state variables whose probabilistic dependence structure is described by
a Markov tree. Gaussian priors are assigned to the signal coefficients given
their state variables and the Jeffreys' noninformative prior is assigned to the
noise variance. Our signal reconstruction scheme is based on an EM iteration
that aims at maximizing the posterior distribution of the signal and its state
variables given the noise variance. We construct the missing data for the EM
iteration so that the complete-data posterior distribution corresponds to a
hidden Markov tree (HMT) probabilistic graphical model that contains no loops
and implement its maximization (M) step via a max-product algorithm. This EM
algorithm estimates the vector of state variables and iteratively solves a
linear system of equations to obtain the corresponding signal estimate. We
select the noise variance so that the corresponding estimated signal and state
variables obtained upon convergence of the EM iteration have the largest
marginal posterior distribution. We compare the proposed and existing
state-of-the-art reconstruction methods via signal and image reconstruction
experiments.
Comment: To appear in IEEE Transactions on Signal Processing.
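The loop-free structure the abstract exploits can be sketched on a toy tree. The potentials below are hypothetical, and this shows only the state-variable part of the M step (a max-product pass with backtracking over binary states on a Markov tree); the paper's M step additionally handles the signal coefficients. The result is verified against brute-force enumeration.

```python
import itertools
import numpy as np

# Toy hidden Markov tree: node 0 is the root, parent[i] is node i's parent.
parent = [-1, 0, 0, 1, 1]
# Unary potentials phi[i, s]: evidence for binary state s at node i.
phi = np.array([[0.6, 0.4],
                [0.2, 0.8],
                [0.7, 0.3],
                [0.1, 0.9],
                [0.45, 0.55]])
# Pairwise potential psi[s_parent, s_child]: state persistence across scales.
psi = np.array([[0.9, 0.1],
                [0.2, 0.8]])

def map_states(parent, phi, psi):
    """MAP binary states via max-product: upward messages, then backtracking."""
    n = len(parent)
    children = [[j for j in range(n) if parent[j] == i] for i in range(n)]
    msg = np.ones((n, 2))            # msg[i, s_parent]: best score of i's subtree
    best_child = np.zeros((n, 2), dtype=int)
    for i in reversed(range(1, n)):  # leaves to root (children are visited first)
        belief = phi[i].copy()
        for c in children[i]:
            belief *= msg[c]
        scores = psi * belief[None, :]        # scores[s_parent, s_child]
        msg[i] = scores.max(axis=1)
        best_child[i] = scores.argmax(axis=1)
    root_belief = phi[0].copy()
    for c in children[0]:
        root_belief *= msg[c]
    states = np.zeros(n, dtype=int)
    states[0] = root_belief.argmax()
    for i in range(1, n):            # root to leaves: read off stored argmaxes
        states[i] = best_child[i, states[parent[i]]]
    return states

def config_score(states, parent, phi, psi):
    """Unnormalized posterior score of a full state configuration."""
    score = phi[0, states[0]]
    for i in range(1, len(parent)):
        score *= psi[states[parent[i]], states[i]] * phi[i, states[i]]
    return score

states = map_states(parent, phi, psi)
brute_best = max(config_score(s, parent, phi, psi)
                 for s in itertools.product([0, 1], repeat=len(parent)))
print(states, config_score(states, parent, phi, psi), brute_best)
```

Because the tree contains no loops, the single upward/downward sweep attains exactly the brute-force maximum, which is what makes the M step of the EM iteration tractable.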
Info-Greedy sequential adaptive compressed sensing
We present an information-theoretic framework for sequential adaptive
compressed sensing, Info-Greedy Sensing, where measurements are chosen to
maximize the extracted information conditioned on the previous measurements. We
show that the widely used bisection approach is Info-Greedy for a family of
$k$-sparse signals by connecting compressed sensing and blackbox complexity of
sequential query algorithms, and present Info-Greedy algorithms for Gaussian
and Gaussian Mixture Model (GMM) signals, as well as ways to design sparse
Info-Greedy measurements. Numerical examples demonstrate the good performance
of the proposed algorithms using simulated and real data: Info-Greedy Sensing
shows significant improvement over random projection for signals with sparse
and low-rank covariance matrices, and adaptivity brings robustness when there
is a mismatch between the assumed and the true distributions.
Comment: Preliminary results presented at Allerton Conference 2014. To appear in IEEE Journal of Selected Topics in Signal Processing.
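For a Gaussian signal the Info-Greedy choice has a closed form: with posterior x ~ N(mu, Sigma) and scalar measurement y = a^T x + w, the mutual information is 0.5 log(1 + a^T Sigma a / sigma^2), so each unit-norm measurement should point along the top eigenvector of the current posterior covariance. A minimal sketch under assumptions of mine (unit-norm measurements, known noise variance):

```python
import numpy as np

def info_greedy_gaussian(sigma0, noise_var, num_meas):
    """Sequentially pick unit-norm measurements along the top eigenvector of
    the current posterior covariance (the Info-Greedy choice for a Gaussian
    signal) and return the posterior trace after each measurement."""
    sigma = sigma0.copy()
    traces = []
    for _ in range(num_meas):
        eigvals, eigvecs = np.linalg.eigh(sigma)
        a = eigvecs[:, -1]           # top eigenvector (eigh sorts ascending)
        sa = sigma @ a
        # Rank-one posterior covariance update for the measurement y = a^T x + w.
        sigma = sigma - np.outer(sa, sa) / (a @ sa + noise_var)
        traces.append(np.trace(sigma))
    return traces

sigma0 = np.diag([9.0, 4.0, 1.0, 0.25])
traces = info_greedy_gaussian(sigma0, noise_var=0.5, num_meas=3)
print(traces)  # posterior uncertainty shrinks with every measurement
```

Each step spends its measurement on the currently most uncertain direction, which is the conditioning-on-previous-measurements behavior the abstract describes; the GMM and sparse-design cases in the paper build on this Gaussian base case.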