Chordal Decomposition in Rank Minimized Semidefinite Programs with Applications to Subspace Clustering
Semidefinite programs (SDPs) often arise in relaxations of some NP-hard
problems, and if the solution of the SDP obeys certain rank constraints, the
relaxation will be tight. Decomposition methods based on chordal sparsity have
already been applied to speed up the solution of sparse SDPs, but methods for
dealing with rank constraints are underdeveloped. This paper leverages a
minimum rank completion result to decompose the rank constraint on a single
large matrix into multiple rank constraints on a set of smaller matrices. The
re-weighted heuristic is used as a proxy for rank, and the specific form of the
heuristic preserves the sparsity pattern between iterations. Implementations of
rank-minimized SDPs through interior-point and first-order algorithms are
discussed. The problem of subspace clustering is used to demonstrate the
computational improvement of the proposed method. Comment: 6 pages, 6 figures.
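The re-weighted heuristic mentioned above is commonly implemented by replacing rank(X) with the smooth surrogate trace(W X), where the weight matrix W = (X_prev + δI)^{-1} is refreshed between iterations. A minimal numpy sketch (the toy matrix and the value of δ are illustrative assumptions, not taken from the paper) showing that the surrogate closely tracks the rank of a PSD matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 2))
X = B @ B.T                              # rank-2 PSD matrix
delta = 1e-3

# re-weighted trace surrogate for rank: trace(W X) with W = (X + delta*I)^{-1},
# i.e. sum of lambda_i / (lambda_i + delta) over the eigenvalues of X
W = np.linalg.inv(X + delta * np.eye(6))
surrogate = np.trace(W @ X)
rank = np.linalg.matrix_rank(X)
```

Each nonzero eigenvalue contributes roughly 1 and each zero eigenvalue contributes 0, so the surrogate sits near the true rank while remaining differentiable, which is what makes it usable inside an SDP iteration.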
Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees
Recently there has been a line of research proposing to employ Spectral
Clustering (SC) to segment (group) high-dimensional structural data such as
those (approximately) lying on subspaces or low-dimensional manifolds.
(Throughout the paper, segmentation, clustering, and grouping, and their verb
forms, are used interchangeably. Following [liu2010robust], the term
"subspace" denotes both linear subspaces and affine subspaces; there is a
trivial conversion between the two as mentioned therein.)
By learning the affinity matrix in the form of sparse
reconstruction, techniques proposed in this vein often considerably boost the
performance in subspace settings where traditional SC can fail. Despite the
success, there are fundamental problems that have been left unsolved: the
spectral properties of the learned affinity matrix cannot be gauged in
advance, and there is often an ugly symmetrization step that post-processes
the affinity for SC input. Hence we advocate enforcing the symmetric positive
semidefinite constraint explicitly during learning (Low-Rank Representation
with Positive SemiDefinite constraint, or LRR-PSD), and show that it can in
fact be solved efficiently by a specialized scheme, rather than by
general-purpose SDP solvers, which usually scale poorly. We provide rigorous mathematical
derivations to show that, in its canonical form, LRR-PSD is equivalent to the
recently proposed Low-Rank Representation (LRR) scheme [liu2010robust], and
hence offer theoretical and practical insights into both LRR-PSD and LRR,
inviting future research. As for computational cost, our proposal is at most
comparable to that of LRR, if not lower. We validate our theoretical analysis and
optimization scheme by experiments on both synthetic and real data sets.
Comment: 10 pages, 4 figures. Accepted by ICDM Workshop on Optimization Based
Methods for Emerging Data Mining Problems (OEDM), 2010. Main proof simplified
and typos corrected. Experimental data slightly added.
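The symmetrization step criticized above is typically a post-hoc averaging of the learned representation's magnitudes, whereas a PSD constraint pins the spectrum down in advance. A small numpy sketch of both (the matrix Z is a random stand-in for a learned representation, not the output of LRR):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 5))   # stand-in for a learned (asymmetric) representation

# the usual post-processing symmetrization before spectral clustering
W = 0.5 * (np.abs(Z) + np.abs(Z).T)

# enforcing symmetry + positive semidefiniteness instead:
# symmetrize, then project onto the PSD cone by clipping eigenvalues
S = 0.5 * (Z + Z.T)
vals, vecs = np.linalg.eigh(S)
Z_psd = vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T
```

With the PSD version, all eigenvalues are nonnegative by construction, so the spectral properties relevant to SC input can be gauged before clustering rather than patched up afterwards.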
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
The affine rank minimization problem consists of finding a matrix of minimum
rank that satisfies a given system of linear equality constraints. Such
problems have appeared in the literature of a diverse set of fields including
system identification and control, Euclidean embedding, and collaborative
filtering. Although specific instances can often be solved with specialized
algorithms, the general affine rank minimization problem is NP-hard. In this
paper, we show that if a certain restricted isometry property holds for the
linear transformation defining the constraints, the minimum rank solution can
be recovered by solving a convex optimization problem, namely the minimization
of the nuclear norm over the given affine space. We present several random
ensembles of equations where the restricted isometry property holds with
overwhelming probability. The techniques used in our analysis have strong
parallels in the compressed sensing framework. We discuss how affine rank
minimization generalizes this pre-existing concept and outline a dictionary
relating concepts from cardinality minimization to those of rank minimization.
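The nuclear norm relaxation described above is what makes instances such as matrix completion tractable: the proximal operator of the nuclear norm simply soft-thresholds singular values. A small numpy sketch on a toy completion instance (the dimensions, step size, and threshold are illustrative choices, not taken from the paper):

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of the nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.clip(s - tau, 0.0, None)) @ Vt

rng = np.random.default_rng(0)
M_true = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank-1 target
mask = rng.random((8, 8)) < 0.7                                    # observed entries

# proximal gradient on 0.5*||P_Omega(X - M)||_F^2 + tau*||X||_*
X = np.zeros_like(M_true)
for _ in range(500):
    X = svt(X - mask * (X - M_true), tau=0.01)

rel_err = np.linalg.norm(mask * (X - M_true)) / np.linalg.norm(mask * M_true)
```

The affine constraints here are entry observations, one of the special cases the abstract mentions (collaborative filtering); the same proximal step applies to any affine rank minimization problem once the gradient of the data-fit term is swapped in.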
Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting
In this paper we establish links between, and new results for, three problems
that are not usually considered together. The first is a matrix decomposition
problem that arises in areas such as statistical modeling and signal
processing: given a matrix formed as the sum of an unknown diagonal matrix
and an unknown low rank positive semidefinite matrix, decompose into these
constituents. The second problem we consider is to determine the facial
structure of the set of correlation matrices, a convex set also known as the
elliptope. This convex body, and particularly its facial structure, plays a
role in applications from combinatorial optimization to mathematical finance.
The third problem is a basic geometric question: given points
v_1, v_2, ..., v_n in R^k (where n > k), determine whether there is a centered
ellipsoid passing exactly through all of the points.
We show that in a precise sense these three problems are equivalent.
Furthermore we establish a simple sufficient condition on a subspace U that
ensures any positive semidefinite matrix L with column space U can be
recovered from D + L for any diagonal matrix D using a convex
optimization-based heuristic known as minimum trace factor analysis. This
result leads to a new understanding of the structure of rank-deficient
correlation matrices and a simple condition on a set of points that ensures
there is a centered ellipsoid passing through them. Comment: 20 pages.
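The decomposition task in the first problem above — recovering the diagonal and low-rank constituents from their sum — can be illustrated with a naive alternating scheme. Note this heuristic is only a stand-in for intuition, not the convex minimum trace factor analysis the paper actually analyzes, and it assumes the rank r is known (both assumptions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 2
B = rng.standard_normal((n, r))
L_true = B @ B.T                         # unknown low-rank PSD constituent
d_true = rng.uniform(0.5, 1.5, size=n)   # unknown diagonal constituent
Sigma = np.diag(d_true) + L_true         # the observed sum

# naive alternating scheme (illustrative only; not minimum trace factor analysis)
d = np.zeros(n)
for _ in range(200):
    # with the diagonal fixed, project the remainder onto rank-r PSD matrices
    vals, vecs = np.linalg.eigh(Sigma - np.diag(d))
    vals[:-r] = 0.0                      # keep only the top-r eigenvalues
    L = vecs @ np.diag(np.clip(vals, 0.0, None)) @ vecs.T
    # with the low-rank part fixed, the diagonal is whatever is left over
    d = np.diag(Sigma - L).copy()

err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
```

The convex heuristic studied in the paper avoids both the rank oracle and the non-convex projection, which is precisely why its recovery conditions (the subspace condition on U) are of interest.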
Denise: Deep Learning based Robust PCA for Positive Semidefinite Matrices
The robust PCA of high-dimensional matrices plays an essential role when
isolating key explanatory features. The currently available methods for
performing such a low-rank plus sparse decomposition are matrix specific,
meaning the algorithm must be re-run each time a new matrix is to be decomposed.
Since these algorithms are computationally expensive, it is preferable to learn
and store a function that instantaneously performs this decomposition when
evaluated. Therefore, we introduce Denise, a deep learning-based algorithm for
robust PCA of symmetric positive semidefinite matrices, which learns precisely
such a function. Theoretical guarantees that Denise's architecture can
approximate the decomposition function, to arbitrary precision and with
arbitrarily high probability, are obtained. The training scheme is also shown
to converge to a stationary point of the robust PCA loss function. We
train Denise on a randomly generated dataset, and evaluate the performance of
the DNN on synthetic and real-world covariance matrices. Denise achieves
comparable results to several state-of-the-art algorithms in terms of
decomposition quality, but as only one evaluation of the learned DNN is needed,
Denise outperforms all existing algorithms in terms of computation time.