RSP-Based Analysis for Sparsest and Least ℓ1-Norm Solutions to Underdetermined Linear Systems
Recently, worst-case analysis, probabilistic analysis and empirical
justification have been employed to address the fundamental question: When does
ℓ1-minimization find the sparsest solution to an underdetermined linear
system? In this paper, a deterministic analysis, rooted in classic linear
programming theory, is carried out to further address this question. We first
identify a necessary and sufficient condition for the uniqueness of least
ℓ1-norm solutions to linear systems. From this condition, we deduce that
a sparsest solution coincides with the unique least ℓ1-norm solution to a
linear system if and only if the so-called \emph{range space property} (RSP)
holds at this solution. This yields a broad understanding of the relationship
between ℓ0- and ℓ1-minimization problems. Our analysis indicates
that the RSP truly lies at the heart of the relationship between these two
problems. Through RSP-based analysis, several important questions in this field
can be largely addressed. For instance, how can the gap between the current
theory and the actual numerical performance of ℓ1-minimization be explained
by a deterministic analysis, and if a linear system has multiple sparsest
solutions, when is ℓ1-minimization guaranteed to find one of them? Moreover,
new matrix properties (such as the \emph{RSP of order K} and the
\emph{Weak-RSP of order K}) are introduced in this paper, and a new theory
for sparse signal recovery based on the RSP of order K is established.
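For reference, the range space property at a given solution x is usually stated as follows in the RSP literature; the notation below is a sketch supplied here, not quoted from the abstract:

```latex
% RSP at a solution x of Ax = b (notation assumed): the range space of A^T
% must contain a dual certificate \eta matching the sign pattern of x.
A \text{ satisfies the RSP at } x \iff
\exists\, \eta \in \mathcal{R}(A^{T}) \text{ with }
\eta_i = 1 \ (x_i > 0),\quad
\eta_i = -1 \ (x_i < 0),\quad
|\eta_i| < 1 \ (x_i = 0).
```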
Enhancing Sparsity by Reweighted ℓ1 Minimization
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing
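The iteration described above — solve a weighted ℓ1 problem, then recompute the weights from the current solution — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight update `w = 1/(|x| + eps)` is the commonly used rule, and the weighted ℓ1 step is posed as a linear program via an auxiliary variable `t` with `|x_i| <= t_i`.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, b, w):
    """Solve min sum_i w_i |x_i| subject to A x = b as an LP.

    Stack variables as z = [x, t] with |x_i| <= t_i, so the
    objective sum_i w_i t_i is linear in z.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])           # minimize w^T t
    # x_i - t_i <= 0 and -x_i - t_i <= 0 encode |x_i| <= t_i
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])        # A x = b, t unconstrained by A
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

def reweighted_l1(A, b, iters=5, eps=0.1):
    """Iteratively reweighted l1: small entries get large weights."""
    n = A.shape[1]
    w = np.ones(n)                                 # first pass = plain l1
    for _ in range(iters):
        x = weighted_l1(A, b, w)
        w = 1.0 / (np.abs(x) + eps)                # penalize small coefficients more
    return x

# Toy instance (illustrative only): recover a 3-sparse vector from 8 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
b = A @ x_true
x_hat = reweighted_l1(A, b)
```

The parameter `eps` keeps the weights finite when a coefficient hits zero; the paper discusses how its choice affects recovery.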
Noisy Signal Recovery via Iterative Reweighted L1-Minimization
Compressed sensing has shown that it is possible to reconstruct sparse high
dimensional signals from few linear measurements. In many cases, the solution
can be obtained by solving an L1-minimization problem, and this method is
accurate even in the presence of noise. Recently, a modified version of this
method, reweighted L1-minimization, has been suggested. Although no provable
results have yet been attained, empirical studies have suggested the reweighted
version outperforms the standard method. Here we analyze the reweighted
L1-minimization method in the noisy case, and provide provable results showing
an improvement in the error bound over the standard bounds.
- …