RSP-Based Analysis for Sparsest and Least $\ell_1$-Norm Solutions to Underdetermined Linear Systems
Recently, worst-case analysis, probabilistic analysis and empirical
justification have been employed to address the fundamental question: When does
$\ell_1$-minimization find the sparsest solution to an underdetermined linear
system? In this paper, a deterministic analysis, rooted in classic linear
programming theory, is carried out to further address this question. We first
identify a necessary and sufficient condition for the uniqueness of least
$\ell_1$-norm solutions to linear systems. From this condition, we deduce that
a sparsest solution coincides with the unique least $\ell_1$-norm solution to a
linear system if and only if the so-called \emph{range space property} (RSP)
holds at this solution. This yields a broad understanding of the relationship
between $\ell_0$- and $\ell_1$-minimization problems. Our analysis indicates
that the RSP truly lies at the heart of the relationship between these two
problems. Through RSP-based analysis, several important questions in this field
can be largely addressed. For instance, how can the gap between the current
theory and the actual numerical performance of $\ell_1$-minimization be
explained by a deterministic analysis, and if a linear system has multiple
sparsest solutions, when is $\ell_1$-minimization guaranteed to find one of
them? Moreover, new matrix properties (such as the \emph{RSP of order $K$} and
the \emph{Weak-RSP of order $K$}) are introduced in this paper, and a new
theory for sparse signal recovery based on the RSP of order $K$ is established.
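The $\ell_0$/$\ell_1$ equivalence discussed above is easy to probe numerically. Below is a minimal sketch (an illustration under assumed synthetic data, not code from the paper) that computes the least $\ell_1$-norm solution of Ax = b via the standard linear programming reformulation and checks whether it matches a planted sparse solution:

```python
# Least l1-norm solution of an underdetermined system Ax = b via LP.
# Split x = u - v with u, v >= 0; then ||x||_1 = sum(u) + sum(v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                      # m < n: underdetermined; k-sparse signal
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_l1 = res.x[:n] - res.x[n:]
print("l1 solution equals planted sparsest:", np.allclose(x_l1, x_true, atol=1e-6))
```

For a generic Gaussian A and a sufficiently sparse planted solution the two typically coincide, which is precisely the regime the RSP condition characterizes deterministically.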
Polar Polytopes and Recovery of Sparse Representations
Suppose we have a signal y which we wish to represent using a linear
combination of a number of basis atoms a_i, y = sum_i x_i a_i = Ax. Finding
the minimum L0 norm representation of y is a hard combinatorial problem. The
Basis Pursuit (BP) approach proposes to find the minimum L1 norm representation
instead, which corresponds to a linear program (LP) that can be solved using
modern LP techniques, and several recent authors have given conditions for the
BP (minimum L1 norm) and sparse (minimum L0 norm) representations to be
identical. In this paper, we explore this sparse representation problem using
the geometry of convex polytopes, as recently introduced into the field by
Donoho. By considering the dual LP we find that the so-called polar polytope P*
of the centrally-symmetric polytope P whose vertices are the atom pairs +-a_i
is particularly helpful in providing us with geometrical insight into
optimality conditions given by Fuchs and Tropp for non-unit-norm atom sets. In
exploring this geometry we are able to tighten some of these earlier results,
showing for example that the Fuchs condition is both necessary and sufficient
for L1-unique-optimality, and that there are situations where Orthogonal
Matching Pursuit (OMP) can eventually find all L1-unique-optimal solutions with
m nonzeros even if the exact recovery condition (ERC) fails for m, provided OMP
is allowed to run for more than m steps.
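Since the abstract contrasts BP's linear program with the greedy behavior of OMP, a minimal OMP sketch may help fix ideas (illustrative only; the step count n_steps is deliberately decoupled from the sparsity m to mirror the "run for more than m steps" observation):

```python
# Minimal Orthogonal Matching Pursuit: repeatedly pick the atom most
# correlated with the residual, then refit by least squares on the
# selected support. Atom norms are included so non-unit-norm sets work.
import numpy as np

def omp(A, y, n_steps):
    norms = np.linalg.norm(A, axis=0)
    support, residual = [], y.astype(float).copy()
    coef = np.array([])
    for _ in range(n_steps):
        if np.linalg.norm(residual) < 1e-12:      # y already explained
            break
        j = int(np.argmax(np.abs(A.T @ residual) / norms))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Calling omp(A, y, n_steps) with n_steps greater than the target sparsity m corresponds to the regime in which, as argued above, OMP can still reach L1-unique-optimal solutions even when ERC fails at m.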
Block-Sparse Recovery via Convex Optimization
Given a dictionary that consists of multiple blocks and a signal that lives
in the range space of only a few blocks, we study the problem of finding a
block-sparse representation of the signal, i.e., a representation that uses the
minimum number of blocks. Motivated by signal/image processing and computer
vision applications, such as face recognition, we consider the block-sparse
recovery problem in the case where the number of atoms in each block is
arbitrary, possibly much larger than the dimension of the underlying subspace.
To find a block-sparse representation of a signal, we propose two classes of
non-convex optimization programs, which aim to minimize the number of nonzero
coefficient blocks and the number of nonzero reconstructed vectors from the
blocks, respectively. Since both classes of problems are NP-hard, we propose
convex relaxations and derive conditions under which each class of the convex
programs is equivalent to the original non-convex formulation. Our conditions
depend on the notions of mutual and cumulative subspace coherence of a
dictionary, which are natural generalizations of existing notions of mutual and
cumulative coherence. We evaluate the performance of the proposed convex
programs through simulations as well as real experiments on face recognition.
We show that treating the face recognition problem as a block-sparse recovery
problem improves the state-of-the-art results by 10% with only 25% of the
training data.
Comment: IEEE Transactions on Signal Processing
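The relaxation described above, for the first class of programs (nonzero coefficient blocks), minimizes a sum of per-block norms as a convex surrogate for the number of nonzero blocks. It can be prototyped in a few lines; the sketch below uses cvxpy, which is our tooling choice rather than anything prescribed by the paper, and assumes the block layout is given as index lists:

```python
# Convex surrogate for block-sparse recovery: minimize the sum of
# per-block l2 norms of the coefficients subject to exact reconstruction.
import numpy as np
import cvxpy as cp

def block_sparse_recover(B, y, blocks):
    """B: dictionary (m x n); blocks: list of index arrays, one per block."""
    x = cp.Variable(B.shape[1])
    objective = cp.Minimize(sum(cp.norm(x[idx], 2) for idx in blocks))
    problem = cp.Problem(objective, [B @ x == y])
    problem.solve()                      # a second-order cone program
    return x.value
```

Conditions such as the mutual and cumulative subspace coherence mentioned above govern when the minimizer of this convex program selects the same blocks as the non-convex block-sparsest representation.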
Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees
Recently, a line of research has proposed employing Spectral
Clustering (SC) to segment (group; throughout the paper we use segmentation,
clustering, and grouping, and their verb forms, interchangeably)
high-dimensional structured data such as those (approximately) lying on
subspaces (we follow [liu2010robust] and use the term "subspace" to denote both
linear subspaces and affine subspaces; there is a trivial conversion between
the two, as mentioned therein) or low-dimensional
manifolds. By learning the affinity matrix in the form of sparse
reconstruction, techniques proposed in this vein often considerably boost the
performance in subspace settings where traditional SC can fail. Despite the
success, there are fundamental problems that have been left unsolved: the
spectral properties of the learned affinity matrix cannot be gauged in
advance, and there is often an ugly symmetrization step that post-processes
the affinity before it is fed to SC. Hence we advocate enforcing the symmetric
positive
semidefinite constraint explicitly during learning (Low-Rank Representation
with Positive SemiDefinite constraint, or LRR-PSD), and show that it can in
fact be solved efficiently by a dedicated scheme rather than by general-purpose
SDP solvers, which usually scale poorly. We provide rigorous mathematical
derivations to show that, in its canonical form, LRR-PSD is equivalent to the
recently proposed Low-Rank Representation (LRR) scheme [liu2010robust], and
hence offer theoretical and practical insights into both LRR-PSD and LRR,
inviting future research. In terms of computational cost, our scheme is at most
comparable to that of LRR, if not cheaper. We validate our theoretical analysis
and
optimization scheme by experiments on both synthetic and real data sets.
Comment: 10 pages, 4 figures. Accepted by ICDM Workshop on Optimization Based
Methods for Emerging Data Mining Problems (OEDM), 2010. Main proof simplified
and typos corrected. Experimental data slightly added.
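One way to see why the PSD constraint is natural: in the noiseless canonical form min ||Z||_* s.t. X = XZ, the minimizer is known in this line of work to be the shape-interaction matrix built from the right singular vectors of X, which is symmetric PSD by construction. A short sketch of that closed form (variable names are ours):

```python
# Closed-form noiseless LRR solution: for min ||Z||_* s.t. X = XZ,
# the minimizer is Z = Vr @ Vr.T, where X = Ur Sr Vr^T is the skinny SVD.
# Z is symmetric PSD (eigenvalues 0 or 1), so no post-hoc symmetrization
# is needed before spectral clustering.
import numpy as np

def lrr_noiseless(X, tol=1e-10):
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol))     # numerical rank of the data matrix
    Vr = Vt[:r].T                # n x r right singular vectors
    return Vr @ Vr.T             # affinity fed directly to SC
```

This matches the equivalence claim above: the PSD-constrained program and unconstrained LRR share the same canonical solution, so enforcing the constraint costs nothing while guaranteeing a well-behaved spectrum.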