Fast Robust PCA on Graphs
Mining useful clusters from high-dimensional data has received significant
attention from the computer vision and pattern recognition communities in
recent years. Linear and non-linear dimensionality reduction have played an
important role in overcoming the curse of dimensionality. However, such
methods often suffer from three problems: high computational complexity
(usually associated with nuclear norm minimization), non-convexity (for
matrix factorization methods), and susceptibility to gross
corruptions in the data. In this paper we propose a principal component
analysis (PCA) based solution that overcomes these three issues and
approximates a low-rank recovery method for high dimensional datasets. We
target the low-rank recovery by enforcing two types of graph smoothness
assumptions, one on the data samples and the other on the features by designing
a convex optimization problem. The resulting algorithm is fast, efficient,
and scalable to huge datasets, with O(n log n) computational complexity in the
number of data samples. It is also robust to gross corruptions in the dataset
as well as to the model parameters. Clustering experiments on 7 benchmark
datasets with different types of corruptions and background separation
experiments on 3 video datasets show that our proposed model outperforms 10
state-of-the-art dimensionality reduction models. Our theoretical analysis
proves that the proposed model is able to recover approximate low-rank
representations with a bounded error for clusterable data.
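The two-graph smoothness idea above can be sketched in numpy. This is a minimal illustration, not the paper's algorithm: it assumes an objective of the form min_U ||X - U||_1 + g1 tr(U L_s U^T) + g2 tr(U^T L_f U), where L_s and L_f are k-NN graph Laplacians over samples and features, and solves it with a plain proximal gradient loop; the function names, parameters g1, g2, k, and the solver choice are all illustrative assumptions.

```python
import numpy as np

def knn_laplacian(M, k=3):
    """Unnormalized Laplacian of a symmetric k-NN graph over the columns of M."""
    n = M.shape[1]
    d2 = ((M[:, :, None] - M[:, None, :]) ** 2).sum(axis=0)  # pairwise sq. distances
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0  # nearest neighbours, skipping self
    W = np.maximum(W, W.T)                      # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W

def soft(Z, t):
    """Elementwise soft-thresholding: the prox operator of t * l1-norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def graph_pca(X, L_s, L_f, g1=0.1, g2=0.1, iters=300):
    """Proximal gradient for  min_U ||X-U||_1 + g1 tr(U L_s U^T) + g2 tr(U^T L_f U)."""
    # Step size = 1 / Lipschitz constant of the smooth (trace) terms.
    step = 1.0 / (2 * g1 * np.linalg.norm(L_s, 2) + 2 * g2 * np.linalg.norm(L_f, 2))
    U = X.copy()
    for _ in range(iters):
        V = U - step * (2 * g1 * U @ L_s + 2 * g2 * L_f @ U)  # gradient step
        U = X - soft(X - V, step)                             # prox of ||X - .||_1
    return U
```

The returned U is a graph-smoothed surrogate for the low-rank part of X; clustering would then run (e.g. k-means) on the columns of U. Note there is no explicit rank constraint here, which is what keeps each iteration cheap.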
Non-Negative Local Sparse Coding for Subspace Clustering
Subspace sparse coding (SSC) algorithms have proven to be beneficial to
clustering problems. They provide an alternative data representation in which
the underlying structure of the clusters can be better captured. However, most
of the research in this area is mainly focused on enhancing the sparse coding
part of the problem. In contrast, we introduce a novel objective term in our
proposed SSC framework which focuses on the separability of data points in the
coding space. We also provide mathematical insights into how this
local-separability term improves the clustering result of the SSC framework.
Our proposed non-linear local SSC algorithm (NLSSC) also benefits from the
efficient choice of its sparsity terms and constraints. The NLSSC algorithm is
also formulated in the kernel-based framework (NLKSSC) which can represent the
nonlinear structure of the data. In addition, we address the possibility of
redundancies in sparse coding results and their negative effect on graph-based
clustering problems. We introduce the link-restore post-processing step to
improve the representation graph of non-negative SSC algorithms such as ours.
Empirical evaluations on well-known clustering benchmarks show that our
proposed NLSSC framework yields better clusterings than state-of-the-art
baselines, and they demonstrate the effectiveness of the link-restore
post-processing in improving clustering accuracy by correcting the broken
links of the representation graph.
Comment: 15 pages, IDA 2018 conference
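The self-expressive, non-negative sparse coding at the heart of such SSC frameworks can be sketched as follows. This is an illustrative projected-gradient solver for a non-negative lasso, not the paper's NLSSC/NLKSSC formulation: the penalty lam, the iteration count, and the function names are assumptions, and the link-restore step is not implemented.

```python
import numpy as np

def nn_sparse_codes(X, lam=0.05, iters=400):
    """Code each column of X over the remaining columns with an l1 penalty
    and a non-negativity constraint (projected-gradient sketch)."""
    n = X.shape[1]
    C = np.zeros((n, n))                          # C[:, i] = codes for sample i
    for i in range(n):
        D = np.delete(X, i, axis=1)               # dictionary = all other samples
        x = X[:, i]
        step = 1.0 / (np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant
        c = np.zeros(n - 1)
        for _ in range(iters):
            grad = D.T @ (D @ c - x) + lam        # grad of 0.5||x-Dc||^2 + lam*1'c
            c = np.maximum(c - step * grad, 0.0)  # project onto c >= 0
        C[np.arange(n) != i, i] = c
    return C

def affinity(C):
    """Symmetric representation graph used downstream for spectral clustering."""
    return np.abs(C) + np.abs(C).T
```

Spectral clustering on affinity(C) then recovers the subspaces; a link-restore style post-processing would additionally repair within-cluster edges that the sparse solver zeroed out.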