Necessary and Sufficient Conditions on Sparsity Pattern Recovery
The problem of detecting the sparsity pattern of a k-sparse vector in R^n
from m random noisy measurements is of interest in many areas such as system
identification, denoising, pattern recognition, and compressed sensing. This
paper addresses the scaling of the number of measurements m, with the signal
dimension n and the sparsity level (number of nonzeros) k, for
asymptotically-reliable detection. We show that a necessary condition for
perfect recovery at any given SNR, for all algorithms regardless of
complexity, is m = Omega(k log(n-k))
measurements. Conversely, it is shown that this scaling of Omega(k log(n-k))
measurements is sufficient for a remarkably simple ``maximum correlation''
estimator. Hence this scaling is optimal, and achieving it does not require
more sophisticated techniques such as lasso or matching pursuit. The constants for
both the necessary and sufficient conditions are precisely defined in terms of
the minimum-to-average ratio of the nonzero components and the SNR. The
necessary condition improves upon previous results for maximum likelihood
estimation. For lasso, it also provides a necessary condition at any SNR and
for low SNR improves upon previous work. The sufficient condition provides the
first asymptotically-reliable detection guarantee at finite SNR.
Comment: Submitted to IEEE Transactions on Information Theory
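
The ``maximum correlation'' estimator referenced above is simple enough to state in a few lines: correlate every column of the measurement matrix with the observation and keep the k indices with the largest correlation. A minimal numpy sketch follows; the dimensions, noise level, and signal model are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 128, 8                          # ambient dimension, measurements, sparsity

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
    support = np.sort(rng.choice(n, size=k, replace=False))
    x = np.zeros(n)
    x[support] = 1.0 + rng.random(k)               # nonzeros bounded away from zero
    y = A @ x + 0.1 * rng.standard_normal(m)       # noisy linear measurements

    corr = np.abs(A.T @ y)                         # correlate each column with y
    est = np.sort(np.argpartition(corr, -k)[-k:])  # k indices with largest correlation
    print("true:", support)
    print("est :", est)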
Orthogonal Matching Pursuit: A Brownian Motion Analysis
A well-known analysis of Tropp and Gilbert shows that orthogonal matching
pursuit (OMP) can recover a k-sparse n-dimensional real vector from 4 k log(n)
noise-free linear measurements obtained through a random Gaussian measurement
matrix with a probability that approaches one as n approaches infinity. This
work strengthens this result by showing that a lower number of measurements, 2
k log(n - k), is in fact sufficient for asymptotic recovery. More generally,
when the sparsity level satisfies kmin <= k <= kmax but is unknown, 2 kmax
log(n - kmin) measurements are sufficient. Furthermore, this number of
measurements is also sufficient for detection of the sparsity pattern (support)
of the vector with measurement errors provided the signal-to-noise ratio (SNR)
scales to infinity. The scaling 2 k log(n - k) exactly matches the number of
measurements required by the more complex lasso method for signal recovery with
a similar SNR scaling.
Comment: 11 pages, 2 figures
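
As a concrete reference, here is a minimal numpy sketch of the OMP iteration analyzed above: greedily select the column most correlated with the current residual, then re-fit by least squares on the selected support. The measurement count follows the 2 k log(n - k) scaling from the result; occasional failures are expected at these finite sizes, and all other parameter choices are illustrative.

    import numpy as np

    def omp(A, y, k):
        """Select k columns greedily; re-fit by least squares each step."""
        support, residual = [], y.copy()
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef         # orthogonal to chosen columns
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat, sorted(support)

    rng = np.random.default_rng(1)
    n, k = 512, 10
    m = int(np.ceil(2 * k * np.log(n - k)))             # the 2 k log(n - k) scaling
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    true_support = np.sort(rng.choice(n, size=k, replace=False))
    x = np.zeros(n)
    x[true_support] = rng.standard_normal(k)
    _, est_support = omp(A, A @ x, k)                   # noise-free measurements
    print(list(true_support) == est_support)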
Sharp thresholds for high-dimensional and noisy recovery of sparsity
The problem of consistently estimating the sparsity pattern of a vector
\betastar \in \real^\mdim based on observations contaminated by noise arises
in various contexts, including subset selection in regression, structure
estimation in graphical models, sparse approximation, and signal denoising. We
analyze the behavior of \ell_1-constrained quadratic programming (QP), also
referred to as the Lasso, for recovering the sparsity pattern. Our main result
is to establish a sharp relation between the problem dimension \mdim, the
number \spindex of non-zero elements in \betastar, and the number of
observations \numobs that are required for reliable recovery. For a broad
class of Gaussian ensembles satisfying mutual incoherence conditions, we
establish existence and compute explicit values of thresholds \ThreshLow and
\ThreshUp with the following properties: for any \epsilon > 0, if \numobs
> 2 (\ThreshUp + \epsilon) \spindex \log (\mdim - \spindex) + \spindex + 1,
then the Lasso succeeds in recovering the sparsity pattern with probability
converging to one for large problems, whereas for \numobs < 2 (\ThreshLow -
\epsilon) \spindex \log (\mdim - \spindex) + \spindex + 1, the probability of successful
recovery converges to zero. For the special case of the uniform Gaussian
ensemble, we show that \ThreshLow = \ThreshUp = 1, so that the threshold is
sharp and exactly determined.
Comment: Appeared as Technical Report 708, Department of Statistics, UC Berkeley
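
For the uniform Gaussian ensemble the threshold above reduces to \numobs = 2 \spindex \log(\mdim - \spindex) + \spindex + 1. The rough simulation sketch below probes both sides of that threshold with scikit-learn's Lasso; the regularization level, signal amplitudes, and trial count are ad hoc illustrative choices rather than the paper's prescribed scalings, so the empirical success rates are only indicative.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    p, k, sigma = 1024, 8, 0.25
    n_star = 2 * k * np.log(p - k) + k + 1           # predicted threshold, theta = 1

    def recovery_rate(n, trials=20):
        hits = 0
        for _ in range(trials):
            X = rng.standard_normal((n, p))          # uniform Gaussian ensemble
            beta = np.zeros(p)
            support = rng.choice(p, size=k, replace=False)
            beta[support] = 1.0
            y = X @ beta + sigma * rng.standard_normal(n)
            lam = sigma * np.sqrt(2.0 * np.log(p) / n)   # ad hoc regularization level
            fit = Lasso(alpha=lam, max_iter=10000).fit(X, y)
            hits += set(np.flatnonzero(fit.coef_)) == set(support)
        return hits / trials

    for n in (int(0.5 * n_star), int(3.0 * n_star)):     # below and above threshold
        print(n, recovery_rate(n))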
Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds
Recovery of the sparsity pattern (or support) of an unknown sparse vector
from a small number of noisy linear measurements is an important problem in
compressed sensing. In this paper, the high-dimensional setting is considered.
It is shown that if the measurement rate and per-sample signal-to-noise ratio
(SNR) are finite constants independent of the length of the vector, then the
optimal sparsity pattern estimate will have a constant fraction of errors.
Lower bounds on the measurement rate needed to attain a desired fraction of
errors are given in terms of the SNR and various key parameters of the unknown
vector. The tightness of the bounds in a scaling sense, as a function of the
SNR and the fraction of errors, is established by comparison with existing
achievable bounds. Near optimality is shown for a wide variety of practically
motivated signal models.
Rank-Sparsity Incoherence for Matrix Decomposition
Suppose we are given a matrix that is formed by adding an unknown sparse
matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix
into its sparse and low-rank components. Such a problem arises in a number of
applications in model and system identification, and is NP-hard in general. In
this paper we consider a convex optimization formulation for splitting the
specified matrix into its components by minimizing a linear combination of the
\ell_1 norm and the nuclear norm of the components. We develop a notion of
\emph{rank-sparsity incoherence}, expressed as an uncertainty principle between
the sparsity pattern of a matrix and its row and column spaces, and use it to
characterize both fundamental identifiability as well as (deterministic)
sufficient conditions for exact recovery. Our analysis is geometric in nature,
with the tangent spaces to the algebraic varieties of sparse and low-rank
matrices playing a prominent role. When the sparse and low-rank matrices are
drawn from certain natural random ensembles, we show that the sufficient
conditions for exact recovery are satisfied with high probability. We conclude
with simulation results on synthetic matrix decomposition problems.
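
The convex program described above, minimizing gamma * ||S||_1 + ||L||_* subject to L + S = M, can be solved by standard splitting methods. Below is a minimal ADMM sketch alternating soft thresholding (for the sparse part) and singular value thresholding (for the low-rank part); the penalty parameter mu, the choice gamma = 1/sqrt(d), and the fixed iteration count are ad hoc choices for illustration, not the paper's analysis.

    import numpy as np

    def soft(X, t):                    # elementwise soft thresholding (prox of ||.||_1)
        return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

    def svt(X, t):                     # singular value thresholding (prox of ||.||_*)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

    def decompose(M, gamma, mu=1.0, iters=300):
        L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(iters):
            L = svt(M - S + Y / mu, 1.0 / mu)        # low-rank update
            S = soft(M - L + Y / mu, gamma / mu)     # sparse update
            Y = Y + mu * (M - L - S)                 # dual ascent on L + S = M
        return L, S

    # Synthetic test in the spirit of the simulations mentioned above.
    rng = np.random.default_rng(3)
    d, r = 60, 3
    L0 = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))  # rank-r part
    S0 = np.zeros((d, d))
    mask = rng.random((d, d)) < 0.05                                # 5% sparse corruptions
    S0[mask] = 5.0 * rng.standard_normal(mask.sum())
    L, S = decompose(L0 + S0, gamma=1.0 / np.sqrt(d))
    print(np.linalg.norm(L - L0) / np.linalg.norm(L0))              # relative errors
    print(np.linalg.norm(S - S0) / np.linalg.norm(S0))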