RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to Underdetermined Linear Systems
Recently, worst-case analysis, probabilistic analysis, and empirical
justification have been employed to address the fundamental question: when does
$\ell_1$-minimization find the sparsest solution to an underdetermined linear
system? In this paper, a deterministic analysis, rooted in classic linear
programming theory, is carried out to further address this question. We first
identify a necessary and sufficient condition for the uniqueness of least
$\ell_1$-norm solutions to linear systems. From this condition, we deduce that
a sparsest solution coincides with the unique least $\ell_1$-norm solution to a
linear system if and only if the so-called \emph{range space property} (RSP)
holds at this solution. This yields a broad understanding of the relationship
between $\ell_0$- and $\ell_1$-minimization problems. Our analysis indicates
that the RSP truly lies at the heart of the relationship between these two
problems. Through RSP-based analysis, several important questions in this field
can be largely addressed: for instance, how to interpret, by a deterministic
analysis, the gap between the current theory and the actual numerical
performance of $\ell_1$-minimization, and, if a linear system has multiple
sparsest solutions, when is $\ell_1$-minimization guaranteed to find one of
them? Moreover, new matrix properties (such as the \emph{RSP of order $K$}
and the \emph{Weak-RSP of order $K$}) are introduced in this paper, and a new
theory for sparse signal recovery based on the RSP of order $K$ is established.
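The least $\ell_1$-norm problem this abstract studies is, as it notes, a linear program. As a rough illustration (not the paper's analysis), the following sketch solves min ||x||_1 subject to Ax = b with `scipy.optimize.linprog` via the standard split x = u - v; the function name and the toy system are our own assumptions. On this small example the unique least-$\ell_1$-norm solution coincides with the sparsest one, exactly the situation the RSP characterizes.

```python
import numpy as np
from scipy.optimize import linprog

def least_l1_norm_solution(A, b):
    """Solve min ||x||_1 s.t. Ax = b by splitting x = u - v with u, v >= 0,
    which turns the problem into a standard-form linear program."""
    m, n = A.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])             # enforce A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Underdetermined system (2 equations, 3 unknowns) whose sparsest
# solution is x = (1, 0, 0); l1-minimization recovers it here.
A = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 1.0]])
x_true = np.array([1.0, 0.0, 0.0])
b = A @ x_true
x_hat = least_l1_norm_solution(A, b)
```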
Analysis of A Nonsmooth Optimization Approach to Robust Estimation
In this paper, we consider the problem of identifying a linear map from
measurements which are subject to intermittent and arbitrarily large errors.
This is a fundamental problem in many estimation-related applications such as
fault detection, state estimation in lossy networks, hybrid system
identification, robust estimation, etc. The problem is hard because it exhibits
some intrinsic combinatorial features. Therefore, obtaining an effective
solution necessitates relaxations that are both solvable at a reasonable cost
and effective in the sense that they can return the true parameter vector. The
current paper discusses a nonsmooth convex optimization approach and provides a
new analysis of its behavior. In particular, it is shown that under appropriate
conditions on the data, an exact estimate can be recovered from data corrupted
by a large (even infinite) number of gross errors.

Comment: 17 pages, 9 figures
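The abstract does not spell out the nonsmooth convex program it analyzes; a standard relaxation of this kind is least-absolute-deviations (ℓ1-loss) regression, sketched below as an LP under that assumption (the function name and test data are ours). With a handful of gross errors and otherwise exact data, the ℓ1 fit returns the true parameter vector exactly, while a least-squares fit would be dragged off by the outliers.

```python
import numpy as np
from scipy.optimize import linprog

def lad_regression(A, b):
    """Least-absolute-deviations fit: min ||A x - b||_1, cast as an LP
    with slack variables t_i >= |a_i^T x - b_i| (x free, t >= 0)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize sum of slacks
    # Encode  A x - b <= t  and  -(A x - b) <= t  as A_ub [x; t] <= b_ub.
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[:5] += 50.0        # a few intermittent, arbitrarily large gross errors
x_hat = lad_regression(A, b)
```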
A fast approach for overcomplete sparse decomposition based on smoothed L0 norm
In this paper, a fast algorithm for overcomplete sparse decomposition, called
SL0, is proposed. The algorithm is essentially a method for obtaining sparse
solutions of underdetermined systems of linear equations, and its applications
include underdetermined Sparse Component Analysis (SCA), atomic decomposition
on overcomplete dictionaries, compressed sensing, and decoding real field
codes. Contrary to previous methods, which usually solve this problem by
minimizing the L1 norm using Linear Programming (LP) techniques, our algorithm
tries to directly minimize the L0 norm. It is experimentally shown that the
proposed algorithm is about two to three orders of magnitude faster than the
state-of-the-art interior-point LP solvers, while providing the same (or
better) accuracy.

Comment: Accepted in IEEE Transactions on Signal Processing. For MATLAB codes,
see http://ee.sharif.ir/~SLzero. File replaced, because Fig. 5 was erroneously
missing.
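The idea behind SL0 is to replace the discontinuous L0 norm by a smooth surrogate, n - sum_i exp(-x_i^2 / (2 sigma^2)), minimize it by gradient steps projected back onto {x : Ax = b}, and gradually shrink sigma. The sketch below follows that scheme with commonly used defaults; the exact parameter choices and the test problem are our assumptions, not taken from the paper, and the authors' MATLAB code at the URL above is the reference implementation.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Minimal sketch of smoothed-L0 (SL0) sparse decomposition.
    For a decreasing sequence of sigma, take a few gradient steps on the
    smooth surrogate of the L0 norm, each followed by projection onto
    the affine feasible set {x : Ax = b}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))  # surrogate gradient
            x = x - mu * delta                            # descent step
            x = x - A_pinv @ (A @ x - b)                  # project onto Ax = b
        sigma *= sigma_decrease
    return x

# Toy underdetermined system: 20 equations, 50 unknowns, 3-sparse solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[2, 11, 30]] = [1.5, -1.0, 0.7]
b = A @ x_true
x_hat = sl0(A, b)
```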
On problems in the calculus of variations in increasingly elongated domains
We consider minimization problems in the calculus of variations set in a
sequence of domains the size of which tends to infinity in certain directions
and such that the data only depend on the coordinates in the directions that
remain constant. We study the asymptotic behavior of minimizers in various
situations and show that they converge in an appropriate sense toward
minimizers of a related energy functional in the constant directions.
Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing
This paper studies the convergence of the adaptively iterative thresholding
(AIT) algorithm for compressed sensing. We first introduce a generalized
restricted isometry property (gRIP). Then we prove that the AIT algorithm
converges to the original sparse solution at a linear rate under a certain gRIP
condition in the noise-free case. In the noisy case, its convergence rate is
also linear until a certain error bound is attained. Moreover, as by-products,
we also provide some sufficient conditions for the convergence of the AIT
algorithm based on two well-known properties, namely the coherence property
and the restricted isometry property (RIP), both of which are special cases of
the gRIP. The resulting improvements over known theoretical results are
demonstrated and compared. Finally, we provide a series of simulations to
verify the correctness of the theoretical assertions as well as the
effectiveness of the AIT algorithm.

Comment: 15 pages, 5 figures