668 research outputs found
RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to Underdetermined Linear Systems
Recently, worst-case analysis, probabilistic analysis and empirical
justification have been employed to address the fundamental question: When does
$\ell_1$-minimization find the sparsest solution to an underdetermined linear
system? In this paper, a deterministic analysis, rooted in the classic linear
programming theory, is carried out to further address this question. We first
identify a necessary and sufficient condition for the uniqueness of least
$\ell_1$-norm solutions to linear systems. From this condition, we deduce that
a sparsest solution coincides with the unique least $\ell_1$-norm solution to a
linear system if and only if the so-called \emph{range space property} (RSP)
holds at this solution. This yields a broad understanding of the relationship
between $\ell_0$- and $\ell_1$-minimization problems. Our analysis indicates
that the RSP truly lies at the heart of the relationship between these two
problems. Through RSP-based analysis, several important questions in this field
can be largely addressed. For instance, how can the gap between the current
theory and the actual numerical performance of $\ell_1$-minimization be
explained by a deterministic analysis, and if a linear system has multiple
sparsest solutions, when is $\ell_1$-minimization guaranteed to find
one of them? Moreover, new matrix properties (such as the \emph{RSP of order
$K$} and the \emph{Weak-RSP of order $K$}) are introduced in this paper, and a
new theory for sparse signal recovery based on the RSP of order $K$ is
established.
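Since the least $\ell_1$-norm problem is a linear program, the analysis above can be probed numerically with any LP solver. The following is a minimal sketch (not code from the paper; the dimensions and sparsity level are illustrative) using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Least l1-norm solution of A x = b via the standard LP reformulation
#   min 1^T t   s.t.   -t <= x <= t,   A x = b.
rng = np.random.default_rng(0)
m, n = 10, 40
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)  # sparse
b = A @ x_true

c = np.concatenate([np.zeros(n), np.ones(n)])        # variables z = [x; t]
A_ub = np.block([[ np.eye(n), -np.eye(n)],           #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])          # -x - t <= 0
A_eq = np.hstack([A, np.zeros((m, n))])              # A x = b
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
              A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_l1 = res.x[:n]
print(np.allclose(x_l1, x_true, atol=1e-6))          # True when l1 recovers l0
```

When the RSP holds at the sparsest solution, the LP returns exactly that solution, which is what the final check probes.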
The Lasso Problem and Uniqueness
The lasso is a popular tool for sparse linear regression, especially for
problems in which the number of variables p exceeds the number of observations
n. But when p>n, the lasso criterion is not strictly convex, and hence it may
not have a unique minimum. An important question is: when is the lasso solution
well-defined (unique)? We review results from the literature, which show that
if the predictor variables are drawn from a continuous probability
distribution, then there is a unique lasso solution with probability one,
regardless of the sizes of n and p. We also show that this result extends
easily to $\ell_1$ penalized minimization problems over a wide range of loss
functions.
A second important question is: how can we deal with the case of
non-uniqueness in lasso solutions? In light of the aforementioned result, this
case really only arises when some of the predictor variables are discrete, or
when some post-processing has been performed on continuous predictor
measurements. Though we certainly cannot claim to provide a complete answer to
such a broad question, we do present progress towards understanding some
aspects of non-uniqueness. First, we extend the LARS algorithm for computing
the lasso solution path to cover the non-unique case, so that this path
algorithm works for any predictor matrix. Next, we derive a simple method for
computing the component-wise uncertainty in lasso solutions of any given
problem instance, based on linear programming. Finally, we review results from
the literature on some of the unifying properties of lasso solutions, and also
point out particular forms of solutions that have distinctive properties.
Comment: 25 pages, 0 figures
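For the generic (unique) case, the standard LARS/lasso path can already be computed off the shelf; the sketch below uses scikit-learn's lars_path, not the paper's extended algorithm, and all sizes are illustrative:

```python
import numpy as np
from sklearn.linear_model import lars_path

# A p > n lasso problem with continuous (Gaussian) predictors, where the
# solution is unique with probability one.
rng = np.random.default_rng(0)
n, p = 50, 200
X = rng.standard_normal((n, p))              # continuous predictor matrix
beta = np.zeros(p)
beta[:5] = 2.0                               # sparse true coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)

# LARS with the lasso modification: returns the entire solution path.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active[:5])                            # first variables to enter the model
```

With discrete predictors (e.g., dummy variables) the active columns can become linearly dependent, which is precisely the non-unique case the extended path algorithm described above is designed to handle.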
New Null Space Results and Recovery Thresholds for Matrix Rank Minimization
Nuclear norm minimization (NNM) has recently gained significant attention for
its use in rank minimization problems. Similar to compressed sensing, using
null space characterizations, recovery thresholds for NNM have been studied in
\cite{arxiv,Recht_Xu_Hassibi}. However, simulations show that the thresholds are
far from optimal, especially in the low rank region. In this paper we apply the
recent analysis of Stojnic for compressed sensing \cite{mihailo} to the null
space conditions of NNM. The resulting thresholds are significantly better and
in particular our weak threshold appears to match with simulation results.
Furthermore, our curves suggest that for any rank growing linearly with the
matrix size, an oversampling factor of only about three times the model
complexity suffices for weak recovery. As in \cite{arxiv}, we analyze the
conditions for weak, sectional
and strong thresholds. Additionally, a separate analysis is given for the
special case of positive semidefinite matrices. We conclude by discussing
simulation results and future research directions.
Comment: 28 pages, 2 figures
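Nuclear norm minimization itself is a small convex program. A minimal sketch (using cvxpy with Gaussian measurements; the dimensions are chosen to give roughly three times oversampling for a rank-1 matrix, and none of this is the paper's code):

```python
import numpy as np
import cvxpy as cp

# Recover a rank-1 n x n matrix from m random linear measurements:
#   min ||X||_*   subject to   <A_i, X> = b_i,  i = 1..m.
rng = np.random.default_rng(0)
n, m = 10, 60                                # ~3x the ~2n-1 degrees of freedom
u = rng.standard_normal((n, 1))
v = rng.standard_normal((1, n))
X0 = u @ v                                   # rank-1 ground truth
A = rng.standard_normal((m, n, n))           # Gaussian measurement matrices
b = np.einsum("mij,ij->m", A, X0)

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A[i], X)) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print(np.linalg.norm(X.value - X0) / np.linalg.norm(X0))  # small on success
```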
Analysis of A Nonsmooth Optimization Approach to Robust Estimation
In this paper, we consider the problem of identifying a linear map from
measurements which are subject to intermittent and arbitrarily large errors.
This is a fundamental problem in many estimation-related applications such as
fault detection, state estimation in lossy networks, hybrid system
identification, robust estimation, etc. The problem is hard because it exhibits
some intrinsic combinatorial features. Therefore, obtaining an effective
solution necessitates relaxations that are both solvable at a reasonable cost
and effective in the sense that they can return the true parameter vector. The
current paper discusses a nonsmooth convex optimization approach and provides a
new analysis of its behavior. In particular, it is shown that under appropriate
conditions on the data, an exact estimate can be recovered from data corrupted
by a large (even infinite) number of gross errors.
Comment: 17 pages, 9 figures
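In its simplest form, the nonsmooth convex program in question is least-absolute-deviations regression. A minimal sketch with cvxpy (the sizes and corruption model are illustrative assumptions, not the paper's setup):

```python
import numpy as np
import cvxpy as cp

# Estimate x from y = A x + e, where a few entries of e are gross errors,
# by minimizing the nonsmooth l1 loss  ||y - A x||_1.
rng = np.random.default_rng(0)
m, n = 100, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true
bad = rng.choice(m, 15, replace=False)       # intermittent gross errors
y[bad] += 50.0 * rng.standard_normal(15)

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(y - A @ x))).solve()
print(np.linalg.norm(x.value - x_true))      # near zero despite the outliers
```

Under the kinds of conditions the paper analyzes, the exact parameter vector is recovered even though a sizeable fraction of the measurements is arbitrarily corrupted.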
From Sparse Signals to Sparse Residuals for Robust Sensing
One of the key challenges in sensor networks is the extraction of information
by fusing data from a multitude of distinct, but possibly unreliable sensors.
Recovering information from the maximum number of dependable sensors while
specifying the unreliable ones is critical for robust sensing. This sensing
task is formulated here as that of finding the maximum number of feasible
subsystems of linear equations, and proved to be NP-hard. Useful links are
established with compressive sampling, which aims at recovering vectors that
are sparse. In contrast, the signals here are not sparse, but give rise to
sparse residuals. Capitalizing on this form of sparsity, four sensing schemes
with complementary strengths are developed. The first scheme is a convex
relaxation of the original problem expressed as a second-order cone program
(SOCP). It is shown that when the involved sensing matrices are Gaussian and
the reliable measurements are sufficiently many, the SOCP can recover the
optimal solution with overwhelming probability. The second scheme is obtained
by replacing the initial objective function with a concave one. The third and
fourth schemes are tailored for noisy sensor data. The noisy case is cast as a
combinatorial problem that is subsequently surrogated by a (weighted) SOCP.
Interestingly, the derived cost functions fall into the framework of robust
multivariate linear regression, while an efficient block-coordinate descent
algorithm is developed for their minimization. The robust sensing capabilities
of all schemes are verified by simulated tests.
Comment: Under review for publication in the IEEE Transactions on Signal
Processing (revised version)
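As an illustration of the first scheme, the sketch below solves the SOCP relaxation $\min_x \sum_m \|y_m - H_m x\|_2$, whose optimal residuals are block-sparse; the dimensions and fault model are assumptions for the example, not the paper's setup:

```python
import numpy as np
import cvxpy as cp

# M sensors each contribute d linear measurements of the state x; a few
# sensor blocks are unreliable. Minimizing the sum of per-sensor residual
# norms (an SOCP) promotes block-sparse residuals.
rng = np.random.default_rng(0)
M, d, n = 30, 4, 8                           # sensors, meas./sensor, state dim
H = rng.standard_normal((M, d, n))
x_true = rng.standard_normal(n)
y = np.einsum("mdn,n->md", H, x_true)
y[:3] += 20.0 * rng.standard_normal((3, d))  # three faulty sensors

x = cp.Variable(n)
cost = sum(cp.norm(y[i] - H[i] @ x, 2) for i in range(M))
cp.Problem(cp.Minimize(cost)).solve()
print(np.linalg.norm(x.value - x_true))      # small: faulty blocks rejected
```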
- …