How well can we estimate a sparse vector?
The estimation of a sparse vector in the linear model is a fundamental
problem in signal processing, statistics, and compressive sensing. This paper
establishes a lower bound on the mean-squared error, which holds regardless of
the sensing/design matrix being used and regardless of the estimation
procedure. This lower bound very nearly matches the known upper bound one gets
by taking a random projection of the sparse vector followed by an
estimation procedure such as the Dantzig selector. In this sense, compressive
sensing techniques essentially cannot be improved.
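For intuition, the bound has the following flavor (a hedged sketch with
constants omitted; see the paper for the precise normalization): for
observations y = Ax + z with z ~ N(0, \sigma^2 I) and k-sparse x in R^n,

\[
  \inf_{\hat{x}} \; \sup_{\|x\|_0 \le k} \;
  \mathbb{E}\,\|\hat{x} - x\|_2^2
  \;\ge\; C \, \frac{n \, \sigma^2 \, k \log(n/k)}{\|A\|_F^2},
\]

so that for a column-normalized A (where \|A\|_F^2 = n) the minimax risk is
within constant factors of the \sigma^2 k \log n error achieved by the
Dantzig selector.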
Compressive Sensing of Analog Signals Using Discrete Prolate Spheroidal Sequences
Compressive sensing (CS) has recently emerged as a framework for efficiently
capturing signals that are sparse or compressible in an appropriate basis.
While often motivated as an alternative to Nyquist-rate sampling, there remains
a gap between the discrete, finite-dimensional CS framework and the problem of
acquiring a continuous-time signal. In this paper, we attempt to bridge this
gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSS's), a
collection of functions that trace back to the seminal work by Slepian, Landau,
and Pollak on the effects of time-limiting and bandlimiting operations. DPSS's
form a highly efficient basis for sampled bandlimited functions; by modulating
and merging DPSS bases, we obtain a dictionary that offers high-quality sparse
approximations for most sampled multiband signals. This multiband modulated
DPSS dictionary can be readily incorporated into the CS framework. We provide
theoretical guarantees and practical insight into the use of this dictionary
for recovery of sampled multiband signals from compressive measurements.
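As a rough sketch (assuming SciPy's dpss routine; the band layout, widths,
and atom counts below are illustrative choices, not from the paper), such a
dictionary can be assembled by modulating a baseband DPSS block up to each
occupied band center:

    import numpy as np
    from scipy.signal.windows import dpss

    def multiband_dpss_dictionary(N, band_centers, half_width, K):
        # Baseband block: first K DPSS vectors of length N, with
        # time-half-bandwidth product NW = N * half_width
        # (half_width in cycles/sample).
        base = dpss(N, N * half_width, Kmax=K).T        # shape (N, K)
        n = np.arange(N)
        blocks = []
        for fc in band_centers:
            carrier = np.exp(2j * np.pi * fc * n)[:, None]  # shift to band center
            blocks.append(carrier * base)
        return np.hstack(blocks)          # shape (N, K * len(band_centers))

    # Illustrative use: three occupied bands, each of width 2 * half_width.
    Psi = multiband_dpss_dictionary(N=512, band_centers=[-0.3, 0.05, 0.27],
                                    half_width=0.01, K=10)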
Signal Space CoSaMP for Sparse Recovery with Redundant Dictionaries
Compressive sensing (CS) has recently emerged as a powerful framework for
acquiring sparse signals. The bulk of the CS literature has focused on the case
where the acquired signal has a sparse or compressible representation in an
orthonormal basis. In practice, however, there are many signals that cannot be
sparsely represented or approximated using an orthonormal basis, but that do
have sparse representations in a redundant dictionary. Standard results in CS
can sometimes be extended to handle this case provided that the dictionary is
sufficiently incoherent or well-conditioned, but these approaches fail to
address the case of a truly redundant or overcomplete dictionary. In this paper
we describe a variant of the iterative recovery algorithm CoSaMP for this more
challenging setting. We utilize the D-RIP, a condition on the sensing matrix
analogous to the well-known restricted isometry property. In contrast to prior
work, the method and analysis are "signal-focused"; that is, they are oriented
around recovering the signal rather than its dictionary coefficients. Under the
assumption that we have a near-optimal scheme for projecting vectors in signal
space onto the model family of candidate sparse signals, we provide provable
recovery guarantees. Developing a practical algorithm that can provably compute
the required near-optimal projections remains a significant open problem, but
we include simulation results using various heuristics that empirically exhibit
superior performance to traditional recovery algorithms.
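As a concrete illustration of the iteration (not the authors' code), the
following numpy sketch instantiates Signal Space CoSaMP with a greedy OMP
heuristic standing in for the near-optimal projection; the function names
and parameter choices are illustrative.

    import numpy as np

    def omp_support(D, v, k, tol=1e-10):
        # Heuristic projection: greedily select up to k dictionary atoms
        # that best approximate v (orthogonal matching pursuit).
        r, support = v.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ r)))
            if j in support:
                break
            support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], v, rcond=None)
            r = v - D[:, support] @ coef
            if np.linalg.norm(r) < tol:
                break
        return support

    def signal_space_cosamp(Phi, y, D, k, iters=20):
        # Sketch of Signal Space CoSaMP: identify, merge, least-squares
        # fit over the merged subspace, then prune back to k atoms.
        x, T = np.zeros(D.shape[0]), []
        for _ in range(iters):
            v = Phi.T @ (y - Phi @ x)           # proxy for the residual signal
            Omega = omp_support(D, v, 2 * k)    # support of ~2k-sparse proxy
            T = sorted(set(Omega) | set(T))     # merge with current support
            c, *_ = np.linalg.lstsq(Phi @ D[:, T], y, rcond=None)
            b = D[:, T] @ c                     # signal-space least squares
            T = omp_support(D, b, k)            # prune to k atoms
            coef, *_ = np.linalg.lstsq(D[:, T], b, rcond=None)
            x = D[:, T] @ coef
        return x

    # Illustrative use: signal 5-sparse in a redundant D, 60 measurements.
    rng = np.random.default_rng(0)
    n, d, m, k = 128, 256, 60, 5
    D = rng.standard_normal((n, d)); D /= np.linalg.norm(D, axis=0)
    x_true = D[:, rng.choice(d, k, replace=False)] @ rng.standard_normal(k)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = signal_space_cosamp(Phi, Phi @ x_true, D, k)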
1-Bit Matrix Completion
In this paper we develop a theory of matrix completion for the extreme case
of noisy 1-bit observations. Instead of observing a subset of the real-valued
entries of a matrix M, we obtain a small number of binary (1-bit) measurements
generated according to a probability distribution determined by the real-valued
entries of M. The central question we ask is whether or not it is possible to
obtain an accurate estimate of M from this data. In general this would seem
impossible, but we show that the maximum likelihood estimate under a suitable
constraint returns an accurate estimate of M when ||M||_{\infty} <= \alpha, and
rank(M) <= r. If the log-likelihood is a concave function (e.g., the logistic
or probit observation models), then we can obtain this maximum likelihood
estimate by optimizing a convex program. We also show that if
instead of recovering M we simply wish to obtain an estimate of the
distribution generating the 1-bit measurements, then we can eliminate the
requirement that ||M||_{\infty} <= \alpha. For both cases, we provide lower
bounds showing that these estimates are near-optimal. We conclude with a suite
of experiments that both verify the implications of our theorems and
illustrate some of the practical applications of 1-bit matrix completion. In
particular, we compare our program to standard matrix completion methods on
movie rating data in which users submit ratings from 1 to 5. In order to use
our program, we quantize this data to a single bit, but we allow the standard
matrix completion program to have access to the original ratings (from 1 to 5).
Surprisingly, the approach based on binary data performs significantly better.
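To fix ideas, a standard instantiation under the logistic model (a hedged
sketch; see the paper for the exact constraint set) observes, for each
(i,j) in a sample set \Omega,

\[
  Y_{ij} = \begin{cases} +1 & \text{with probability } f(M_{ij}), \\
                         -1 & \text{with probability } 1 - f(M_{ij}),
           \end{cases}
  \qquad f(x) = \frac{e^x}{1 + e^x},
\]

and solves the convex program

\[
  \hat{M} = \arg\max_{X} \sum_{(i,j) \in \Omega}
  \Big[ \mathbf{1}_{[Y_{ij} = 1]} \log f(X_{ij})
      + \mathbf{1}_{[Y_{ij} = -1]} \log\big(1 - f(X_{ij})\big) \Big]
\]

subject to \|X\|_\infty \le \alpha and a nuclear-norm bound such as
\|X\|_* \le \alpha \sqrt{r d_1 d_2}, the convex surrogate for the rank
constraint; concavity of the logistic log-likelihood makes the problem
convex.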