17,615 research outputs found
Compressive Measurement Designs for Estimating Structured Signals in Structured Clutter: A Bayesian Experimental Design Approach
This work considers an estimation task in compressive sensing, where the goal
is to estimate an unknown signal from compressive measurements that are
corrupted by additive pre-measurement noise (interference, or clutter) as well
as post-measurement noise, in the setting where some (perhaps limited)
prior knowledge of the signal, interference, and noise is available. The
aim is to devise a strategy for incorporating this prior information into
the design of an appropriate compressive measurement scheme.
Here, the prior information is interpreted as statistics of a prior
distribution on the relevant quantities, and an approach based on Bayesian
Experimental Design is proposed. Experimental results on synthetic data
demonstrate that the proposed approach outperforms traditional random
compressive measurement designs, which are agnostic to the prior information,
as well as several other knowledge-enhanced sensing matrix designs based on
more heuristic notions.
Comment: 5 pages, 4 figures. Accepted for publication at The Asilomar Conference on Signals, Systems, and Computers 201
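To make the design criterion concrete: under a linear Gaussian model y = Phi (x + c) + w, with Gaussian priors on the signal x, the clutter c, and the noise w, the posterior MSE tr Cov(x | y) has a closed form, so candidate measurement matrices can be scored directly. The numpy sketch below is an illustration under assumed toy covariances, not the paper's design algorithm; it compares a prior-agnostic random design against a heuristic prior-aware design whose rows are the top eigenvectors of the signal covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma2 = 64, 8, 0.1   # ambient dimension, measurements, noise variance (all assumed)

# Toy priors (assumptions for illustration): signal and clutter energy
# concentrated on two different 8-dimensional subspaces.
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
Sx = U[:, :8] @ np.diag(np.linspace(5.0, 1.0, 8)) @ U[:, :8].T + 1e-3 * np.eye(n)
Sc = U[:, 8:16] @ np.diag(np.linspace(3.0, 1.0, 8)) @ U[:, 8:16].T + 1e-3 * np.eye(n)

def posterior_mse(Phi):
    """tr Cov(x | y) for y = Phi (x + c) + w under the Gaussian priors above."""
    Sy = Phi @ (Sx + Sc) @ Phi.T + sigma2 * np.eye(Phi.shape[0])
    return np.trace(Sx - Sx @ Phi.T @ np.linalg.solve(Sy, Phi @ Sx))

Phi_rand = rng.standard_normal((m, n)) / np.sqrt(n)   # prior-agnostic random design
Phi_prior = np.linalg.eigh(Sx)[1][:, -m:].T           # rows = top signal eigenvectors

print("posterior MSE, random design:     ", posterior_mse(Phi_rand))
print("posterior MSE, prior-aware design:", posterior_mse(Phi_prior))
```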
Detecting a Vector Based on Linear Measurements
We consider a situation where the state of a system is represented by a
real-valued vector. Under normal circumstances the vector is zero, while an
event manifests itself as non-zero entries in this vector, possibly only a few. Our interest
is in the design of algorithms that can reliably detect events (i.e., test
whether the vector is zero or not) with the least amount of information. We
place ourselves in a situation, now common in the signal processing literature,
where information about the vector comes in the form of noisy linear
measurements. We derive information bounds in an active learning setup and
exhibit some simple near-optimal algorithms. In particular, our results show
that the task of detection in this setting is at once much easier than, simpler
than, and different from the tasks of estimation and support recovery.
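A toy, non-adaptive illustration of the detection task (the dimensions, sparsity level, and signal amplitude are assumed, and the paper's active-learning setup is not reproduced): calibrate a max-correlation detector on the null hypothesis by Monte Carlo, then estimate its power against sparse alternatives.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, mu = 1000, 200, 5, 5.0   # dimension, measurements, sparsity, amplitude (assumed)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # noisy linear measurement design

def max_stat(y, A):
    # Max-correlation statistic: large when some coordinate carries signal energy.
    return np.abs(A.T @ y).max()

# Calibrate the threshold under the null (x = 0) by Monte Carlo.
null = [max_stat(rng.standard_normal(m), A) for _ in range(500)]
thresh = np.quantile(null, 0.95)               # 5% false-alarm level

# Empirical power against k-sparse alternatives with entries of size mu.
hits = 0
for _ in range(200):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = mu
    y = A @ x + rng.standard_normal(m)
    hits += max_stat(y, A) > thresh
print(f"empirical power at false-alarm level 0.05: {hits / 200:.2f}")
```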
Structured random measurements in signal processing
Compressed sensing and its extensions have recently triggered interest in
randomized signal acquisition. A key finding is that random measurements
provide sparse signal reconstruction guarantees for efficient and stable
algorithms with a minimal number of samples. While this was first shown for
(unstructured) Gaussian random measurement matrices, applications require
certain structure of the measurements leading to structured random measurement
matrices. Near optimal recovery guarantees for such structured measurements
have been developed over the past years in a variety of contexts. This article
surveys the theory in three scenarios: compressed sensing (sparse recovery),
low-rank matrix recovery, and phaseless estimation. The random measurement
scenarios considered include random partial Fourier matrices, partial
random circulant matrices (subsampled convolutions), matrix completion, and
phase estimation from magnitudes of Fourier-type measurements. The article
concludes with a brief discussion of the mathematical techniques for the
analysis of such structured random measurements.
Comment: 22 pages, 2 figures
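As a concrete instance of such structure, a partial random circulant measurement (subsampled convolution) can be applied in O(n log n) time via the FFT, without ever forming the m x n matrix. A minimal sketch, assuming a random-sign generator and a uniformly random sampling set:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 64                                 # signal length, number of samples (assumed)

g = rng.choice([-1.0, 1.0], size=n)            # random-sign generator (one common choice)
omega = rng.choice(n, size=m, replace=False)   # random subsampling locations

def measure(x):
    """Partial random circulant measurement: circular convolution with g
    computed via the FFT, then restricted to the sample set omega."""
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)).real
    return conv[omega] / np.sqrt(m)

x = np.zeros(n)
x[[3, 57, 120]] = [1.0, -2.0, 0.5]             # sparse test signal
y = measure(x)                                 # m compressive samples
print(y.shape)
```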
Phase retrieval from low-rate samples
The paper considers the phase retrieval problem in N-dimensional complex
vector spaces. It provides two sets of deterministic measurement vectors which
guarantee signal recovery for all signals, excluding only a specific subspace
and a union of subspaces, respectively. A stable analytic reconstruction
procedure of low complexity is given. Additionally, it is proven that signal
recovery from these measurements can be solved exactly via a semidefinite
program. A practical implementation with 4 deterministic diffraction patterns
is provided and some numerical experiments with noisy measurements complement
the analytic approach.
Comment: Preprint accepted for publication in Sampling Theory in Signal and Image Processing -- Special issue on SampTa 201
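The semidefinite-programming route can be sketched in a few lines of cvxpy. The toy below uses random Gaussian measurement vectors rather than the paper's deterministic vectors or its 4 diffraction patterns; it lifts x to X = x x^H and relaxes the rank-one constraint to trace minimization (the PhaseLift approach):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m = 8, 32    # small toy sizes (assumed)

# Assumed random Gaussian measurement vectors, not the paper's deterministic designs.
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A @ x) ** 2                         # noiseless phaseless measurements

# Lift x to X = x x^H and relax the rank-1 constraint to trace minimization.
X = cp.Variable((n, n), hermitian=True)
constraints = [X >> 0] + [cp.real(A[i].conj() @ X @ A[i]) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()

# Read off the estimate (up to a global phase) from the top eigenvector.
w, V = np.linalg.eigh(X.value)
xhat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
phase = np.vdot(xhat, x)
phase /= abs(phase)
print("relative error:", np.linalg.norm(x - phase * xhat) / np.linalg.norm(x))
```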
The generalized Lasso with non-linear observations
We study the problem of signal estimation from non-linear observations when
the signal belongs to a low-dimensional set buried in a high-dimensional space.
A rough heuristic often used in practice postulates that non-linear
observations may be treated as noisy linear observations, and thus the signal
may be estimated using the generalized Lasso. This is appealing because of the
abundance of efficient, specialized solvers for this program. Just as noise may
be diminished by projecting onto the lower-dimensional space, the error from
modeling non-linear observations with linear observations will be greatly
reduced when using the signal structure in the reconstruction. We allow general
signal structure, only assuming that the signal belongs to some set K in R^n.
We consider the single-index model of non-linearity. Our theory allows the
non-linearity to be discontinuous, not one-to-one and even unknown. We assume a
random Gaussian model for the measurement matrix, but allow the rows to have an
unknown covariance matrix. As special cases of our results, we recover
near-optimal theory for noisy linear observations, and also give the first
theoretical accuracy guarantee for 1-bit compressed sensing with unknown
covariance matrix of the measurement vectors.
Comment: 21 pages
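A minimal numpy sketch of the heuristic, taking the single-index non-linearity to be f = sign (the 1-bit compressed sensing case) with an assumed regularization level: treat the binary observations as if they were linear and run ISTA on the l1-penalized least-squares program. Since the non-linearity only rescales the target by mu = E[f(g) g], accuracy is measured up to direction.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 500, 250, 10                 # dimension, measurements, sparsity (assumed)

A = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)
y = np.sign(A @ x)                     # single-index model with f = sign (1-bit observations)

# Pretend y is linear and solve the l1-regularized least squares (a Lasso
# form of the generalized program) by ISTA; regularization level is assumed.
lam = 0.5 * np.linalg.norm(A.T @ y, np.inf)
step = 1.0 / np.linalg.norm(A, 2) ** 2
xhat = np.zeros(n)
for _ in range(500):
    z = xhat - step * (A.T @ (A @ xhat - y))
    xhat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold

# The non-linearity only rescales the target, so compare directions.
cos = abs(xhat @ x) / (np.linalg.norm(xhat) * np.linalg.norm(x) + 1e-12)
print(f"cosine similarity with the true direction: {cos:.3f}")
```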
High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity
Although the standard formulations of prediction problems assume
fully-observed, noiseless data drawn in an i.i.d. manner, many applications
involve noisy and/or missing data, possibly with dependence as well. We
study these issues in the context of high-dimensional sparse linear regression,
and propose novel estimators for the cases of noisy, missing and/or dependent
data. Many standard approaches to noisy or missing data, such as those using
the EM algorithm, lead to optimization problems that are inherently nonconvex,
and it is difficult to establish theoretical guarantees on practical
algorithms. While our approach also involves optimizing nonconvex programs, we
are able to both analyze the statistical error associated with any global
optimum, and more surprisingly, to prove that a simple algorithm based on
projected gradient descent will converge in polynomial time to a small
neighborhood of the set of all global minimizers. On the statistical side, we
provide nonasymptotic bounds that hold with high probability for the cases of
noisy, missing and/or dependent data. On the computational side, we prove that
under the same types of conditions required for statistical consistency, the
projected gradient descent algorithm is guaranteed to converge at a geometric
rate to a near-global minimizer. We illustrate these theoretical predictions
with simulations, showing close agreement with the predicted scalings.
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/12-AOS1018 by the Institute of Mathematical Statistics (http://www.imstat.org)
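For the additive-noise case, one instantiation of this program can be sketched as follows (assumed parameters: covariate-noise covariance Sigma_w = sigma_w^2 I known, and the l1-ball radius set from the true signal). The empirical Gram matrix is replaced by an unbiased but typically indefinite surrogate, which is the source of the nonconvexity, and projected gradient descent is run over the l1 ball.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, k, sigma_w = 400, 600, 5, 0.5    # samples, dimension, sparsity, covariate noise (assumed)

beta = np.zeros(p)
beta[:k] = 1.0
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)
Z = X + sigma_w * rng.standard_normal((n, p))   # we observe noisy covariates Z, not X

# Corrected surrogates: Gamma is unbiased for X'X/n but indefinite when p > n.
Gamma = Z.T @ Z / n - sigma_w**2 * np.eye(p)
gamma = Z.T @ y / n

def project_l1(v, R):
    """Euclidean projection onto the l1 ball of radius R (Duchi et al.)."""
    if np.abs(v).sum() <= R:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - R) / idx > 0)[0][-1]
    theta = (css[rho] - R) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Projected gradient descent on 0.5 b' Gamma b - gamma' b over the l1 ball;
# the radius R is assumed known for this sketch.
R = 1.1 * np.abs(beta).sum()
eta = 1.0 / np.linalg.norm(Gamma, 2)
b = np.zeros(p)
for _ in range(300):
    b = project_l1(b - eta * (Gamma @ b - gamma), R)
print("estimation error:", np.linalg.norm(b - beta))
```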