Scaled Simplex Representation for Subspace Clustering
The self-expressive property of data points, i.e., each data point can be
linearly represented by the other data points in the same subspace, has proven
effective in leading subspace clustering methods. Most self-expressive methods
construct an affinity matrix from a coefficient matrix obtained by solving an
optimization problem. However, the negative entries in
the coefficient matrix are forced to be positive when constructing the affinity
matrix via exponentiation, absolute symmetrization, or squaring operations.
This consequently damages the inherent correlations among the data. Moreover,
the affine constraint used in these methods is not flexible enough for
practical applications. To overcome these problems, we introduce a scaled
simplex representation (SSR) for the subspace clustering problem.
Specifically, a non-negative constraint makes the coefficient matrix
physically meaningful, and each coefficient vector is constrained to sum to a
scalar s < 1 to make it more discriminative. The proposed SSR-based subspace
clustering (SSRSC) model is reformulated as a linear
equality-constrained problem, which is solved efficiently under the alternating
direction method of multipliers framework. Experiments on benchmark datasets
demonstrate that the proposed SSRSC algorithm is very efficient and outperforms
state-of-the-art subspace clustering methods in accuracy. The code can be
found at https://github.com/csjunxu/SSRSC.
Comment: Accepted by IEEE Transactions on Cybernetics. 13 pages, 9 figures, 10 tables.
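The scaled-simplex idea can be sketched in isolation: a standard sort-based Euclidean projection onto the set {c : c >= 0, sum(c) = s} with s < 1, next to the absolute-symmetrization step the abstract criticizes. This is a minimal illustration under our own naming, not the authors' SSRSC solver.

```python
import numpy as np

def project_scaled_simplex(v, s=0.9):
    """Euclidean projection of v onto {c : c >= 0, sum(c) = s}.
    Standard sort-based algorithm; s < 1 mirrors the scaled-simplex idea."""
    u = np.sort(v)[::-1]                       # sort entries in descending order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - s))[0][-1]
    theta = (css[rho] - s) / (rho + 1.0)       # shift that enforces the sum constraint
    return np.maximum(v - theta, 0.0)

def affinity_from_coefficients(C):
    """Common post-processing: absolute symmetrization |C| + |C|^T.
    The abstract argues this can distort correlations when C has negative
    entries; a simplex constraint avoids negative entries upfront."""
    return np.abs(C) + np.abs(C).T
```

Projecting each coefficient vector this way yields non-negative entries summing to s, so no sign-flipping post-processing is needed when forming the affinity.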
Kernelized Low Rank Representation on Grassmann Manifolds
Low rank representation (LRR) has recently attracted great interest due to
its pleasing efficacy in exploring low-dimensional subspace structures embedded
in data. One of its successful applications is subspace clustering, in which
data are clustered according to the subspaces they belong to. In this paper, at
a higher level, we intend to cluster subspaces into classes of subspaces. This
is naturally described as a clustering problem on Grassmann manifold. The
novelty of this paper is to generalize LRR from Euclidean space to an LRR
model on the Grassmann manifold in a unified kernelized framework. The new methods have
many applications in computer vision tasks. Several clustering experiments are
conducted on handwritten digit images, dynamic textures, human face clips and
traffic scene sequences. The experimental results show that the proposed
methods outperform a number of state-of-the-art subspace clustering methods.
Comment: 13 pages
Kernelized LRR on Grassmann Manifolds for Subspace Clustering
Low rank representation (LRR) has recently attracted great interest due to
its pleasing efficacy in exploring low-dimensional subspace structures
embedded in data. One of its successful applications is subspace clustering, by
which data are clustered according to the subspaces they belong to. In this
paper, at a higher level, we intend to cluster subspaces into classes of
subspaces. This is naturally described as a clustering problem on Grassmann
manifold. The novelty of this paper is to generalize LRR from Euclidean space
to an LRR model on the Grassmann manifold in a unified kernelized framework.
The new method has many applications in data analysis in computer vision tasks.
The proposed models have been evaluated on a number of practical data analysis
applications. The experimental results show that the proposed models outperform
a number of state-of-the-art subspace clustering methods.
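At the core of Euclidean LRR sits a nuclear-norm proximal step, usually computed by singular value thresholding. A minimal sketch of that step (the Euclidean building block only, not the kernelized Grassmann solver itself):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear
    norm, the core step in solving the LRR objective min ||C||_* s.t. X = XC."""
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    sv = np.maximum(sv - tau, 0.0)   # shrink every singular value toward zero
    return (U * sv) @ Vt
```

Iterating this operator inside an ADMM or linearized scheme drives the coefficient matrix toward low rank, which is what exposes the subspace structure.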
Deep Sparse Subspace Clustering
In this paper, we present a deep extension of Sparse Subspace Clustering,
termed Deep Sparse Subspace Clustering (DSSC). Regularized by the unit sphere
distribution assumption for the learned deep features, DSSC can infer a new
data affinity matrix by simultaneously satisfying the sparsity principle of SSC
and the nonlinearity given by neural networks. One appealing advantage of DSSC
is that, when original real-world data do not meet the class-specific linear
subspace distribution assumption, DSSC can employ neural networks to make the
assumption valid through its hierarchical nonlinear transformations. To the
best of our knowledge, this is among the first deep
learning based subspace clustering methods. Extensive experiments on four
real-world datasets show that DSSC is significantly superior to 12 existing
methods for subspace clustering.
Comment: The initial version was completed at the beginning of 201
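The sparsity principle DSSC inherits from SSC can be illustrated on its own: each point is regressed on the others under an l1 penalty, with its own coefficient excluded. The lasso/ISTA formulation below is a generic sketch of that self-expression step, not DSSC's network.

```python
import numpy as np

def ssc_coefficients(X, lam=0.1, n_iter=500):
    """Sketch of SSC-style self-expression: for each point x_i solve a lasso
    min_c 0.5 * ||x_i - X c||^2 + lam * ||c||_1  with c_i forced to 0,
    via ISTA. X has shape (d, n): one data point per column."""
    d, n = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    C = np.zeros((n, n))
    for i in range(n):
        c = np.zeros(n)
        for _ in range(n_iter):
            g = X.T @ (X @ c - X[:, i])    # gradient of the quadratic term
            c = c - g / L                  # gradient step
            c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # soft-threshold
            c[i] = 0.0                     # exclude the trivial self-representation
        C[:, i] = c
    return C
```

On data from disjoint subspaces, each column's support ideally selects only points from the same subspace, which is what makes |C| + |C|^T a useful affinity.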
Robust Multi-subspace Analysis Using Novel Column L0-norm Constrained Matrix Factorization
We study the underlying structure of data (approximately) generated from a
union of independent subspaces. Traditional methods learn only one subspace,
failing to discover the multi-subspace structure, while state-of-the-art
methods analyze the multi-subspace structure using data themselves as the
dictionary, which cannot offer an explicit basis spanning each subspace and,
relying on an indirect representation, are sensitive to errors. They also
suffer from high computational complexity, quadratic or cubic in the sample
size. To tackle these problems, we propose a method, called Matrix
Factorization with Column L0-norm constraint (MFC0), that can simultaneously
learn a basis for each subspace, generate a direct sparse representation for
each data sample, and remove errors in the data efficiently.
Furthermore, we develop a first-order alternating direction algorithm, whose
computational complexity is linear in the sample size, to stably and
effectively solve the nonconvex objective function and nonsmooth l0-norm
constraint of MFC0. Experimental results on both synthetic and real-world
datasets demonstrate that, beyond its superiority over traditional and
state-of-the-art methods for subspace clustering, data reconstruction, and
error correction, MFC0 is unique in offering multi-subspace basis learning
and direct sparse representation.
Comment: 13 pages, 8 figures, 8 tables
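The column l0-norm constraint at the heart of MFC0 admits a simple projection: keep the k largest-magnitude entries of each column and zero the rest. A sketch (the parameter k and the function name are ours):

```python
import numpy as np

def column_l0_project(C, k):
    """Projection onto the column-wise l0 constraint ||c_j||_0 <= k:
    keep the k largest-magnitude entries in each column, zero the rest."""
    out = np.zeros_like(C)
    for j in range(C.shape[1]):
        idx = np.argsort(np.abs(C[:, j]))[-k:]   # indices of the k largest magnitudes
        out[idx, j] = C[idx, j]
    return out
```

Alternating such a hard-thresholding step with least-squares updates of the basis is the usual way nonconvex l0-constrained factorizations are handled.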
Self-Supervised Convolutional Subspace Clustering Network
Subspace clustering methods based on data self-expression have become very
popular for learning from data that lie in a union of low-dimensional linear
subspaces. However, the applicability of subspace clustering has been limited
because practical visual data in raw form do not necessarily lie in such linear
subspaces. On the other hand, while Convolutional Neural Network (ConvNet) has
been demonstrated to be a powerful tool for extracting discriminative features
from visual data, training such a ConvNet usually requires a large amount of
labeled data, which are unavailable in subspace clustering applications. To
achieve simultaneous feature learning and subspace clustering, we propose an
end-to-end trainable framework, called Self-Supervised Convolutional Subspace
Clustering Network (SConvSCN), that combines a ConvNet module (for feature
learning), a self-expression module (for subspace clustering) and a spectral
clustering module (for self-supervision) into a joint optimization framework.
Particularly, we introduce a dual self-supervision that exploits the output of
spectral clustering to supervise the training of the feature learning module
(via a classification loss) and the self-expression module (via a spectral
clustering loss). Our experiments on four benchmark datasets show the
effectiveness of the dual self-supervision and demonstrate superior performance
of our proposed approach.
Comment: 10 pages, 2 figures, and 5 tables. This paper has been accepted by CVPR201
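The self-expression module shared by such networks minimizes, roughly, a feature-reconstruction term plus a regularizer on a coefficient matrix whose diagonal is removed. A generic numpy sketch of that loss, not the exact SConvSCN objective:

```python
import numpy as np

def self_expression_loss(Z, C, lam=1.0):
    """Generic self-expression loss on learned features Z (one sample per
    column): reconstruct Z from its own columns via C, with the diagonal
    of C zeroed to forbid the trivial identity solution."""
    C = C - np.diag(np.diag(C))                          # zero the diagonal
    recon = 0.5 * np.linalg.norm(Z - Z @ C, 'fro') ** 2  # self-reconstruction error
    reg = lam * np.linalg.norm(C, 'fro') ** 2            # coefficient regularizer
    return recon + reg
```

In the network setting this loss is backpropagated jointly through the encoder and the self-expressive layer, which is what couples feature learning with clustering.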
Evolutionary Self-Expressive Models for Subspace Clustering
The problem of organizing data that evolves over time into clusters is
encountered in a number of practical settings. We introduce evolutionary
subspace clustering, a method whose objective is to cluster a collection of
evolving data points that lie on a union of low-dimensional evolving subspaces.
To learn the parsimonious representation of the data points at each time step,
we propose a non-convex optimization framework that exploits the
self-expressiveness property of the evolving data while taking into account
representation from the preceding time step. To find an approximate solution to
the aforementioned non-convex optimization problem, we develop a scheme based
on alternating minimization that both learns the parsimonious representation
and adaptively tunes and infers a smoothing parameter reflective of the
rate of data evolution. The latter addresses a fundamental challenge in
evolutionary clustering: determining whether, and to what extent, one should
consider previous clustering solutions when analyzing an evolving data
collection. Our experiments on both synthetic and real-world datasets
demonstrate that the proposed framework outperforms state-of-the-art static
subspace clustering algorithms and existing evolutionary clustering schemes in
terms of both accuracy and running time, in a range of scenarios.
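The trade-off between fitting the current data and staying close to the previous representation can be illustrated with a simplified ridge-regularized variant that has a closed form. The paper's actual scheme is non-convex and alternating, so this is only a sketch:

```python
import numpy as np

def evolving_representation(X, C_prev, alpha=0.5):
    """One illustrative step of evolutionary self-expression (a simplified
    ridge sketch, not the paper's method): solve
        min_C ||X - X C||_F^2 + alpha * ||C - C_prev||_F^2,
    whose stationarity condition (G + alpha I) C = G + alpha C_prev with
    G = X^T X gives the closed form below. alpha plays the role of the
    smoothing parameter reflecting the rate of data evolution."""
    n = X.shape[1]
    G = X.T @ X
    return np.linalg.solve(G + alpha * np.eye(n), G + alpha * C_prev)
```

Large alpha pins the representation to the previous time step; small alpha lets each snapshot be clustered almost independently, which is exactly the dial the adaptive tuning in the paper is meant to set.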
Partial Sum Minimization of Singular Values Representation on Grassmann Manifolds
As a significant subspace clustering method, low rank representation (LRR)
has attracted great attention in recent years. To further improve the
performance of LRR and extend its applications, there are several issues to be
resolved. First, the nuclear norm in LRR does not fully exploit prior
knowledge of the rank, which is known in many practical problems. Second, LRR
is designed for vectorial data from linear spaces and is thus unsuitable for
high-dimensional data with intrinsic non-linear manifold structure. This paper proposes an extended
LRR model for manifold-valued Grassmann data which incorporates prior knowledge
by minimizing partial sum of singular values instead of the nuclear norm,
namely Partial Sum minimization of Singular Values Representation (GPSSVR). The
new model not only enforces the global structure of data in low rank, but also
retains important information by minimizing only smaller singular values. To
further maintain the local structures among Grassmann points, we also integrate
the Laplacian penalty with GPSSVR. An effective algorithm is proposed to solve
the optimization problem based on the GPSSVR model. The proposed model and
algorithms are assessed on some widely used human action video datasets and a
real scenery dataset. The experimental results show that the proposed methods
clearly outperform other state-of-the-art methods.
Comment: Submitted to ACM Transactions on Knowledge Discovery from Data with minor revision
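Minimizing the partial sum of singular values replaces full soft-thresholding with a proximal step that leaves the top r singular values untouched, preserving the dominant rank-r structure. A sketch of that step on a Euclidean matrix (the paper works with Grassmann representations):

```python
import numpy as np

def partial_svt(M, r, tau):
    """Proximal step for the partial sum of singular values: the largest r
    singular values are kept intact and only the remaining, smaller ones
    are soft-thresholded."""
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    sv = sv.copy()
    sv[r:] = np.maximum(sv[r:] - tau, 0.0)   # shrink only the tail singular values
    return (U * sv) @ Vt
```

Compared with ordinary singular value thresholding, this avoids shrinking the informative leading directions, which is the stated motivation for the GPSSVR model.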
Deep Multimodal Subspace Clustering Networks
We present convolutional neural network (CNN) based approaches for
unsupervised multimodal subspace clustering. The proposed framework consists of
three main stages: a multimodal encoder, a self-expressive layer, and a
multimodal decoder. The encoder takes multimodal data as input and fuses them into a latent
space representation. The self-expressive layer is responsible for enforcing
the self-expressiveness property and acquiring an affinity matrix corresponding
to the data points. The decoder reconstructs the original input data. The
network uses the distance between the decoder's reconstruction and the original
input in its training. We investigate early, late and intermediate fusion
techniques and propose three different encoders corresponding to them for
spatial fusion. The self-expressive layers and multimodal decoders are
essentially the same for different spatial fusion-based approaches. In addition
to various spatial fusion-based methods, an affinity fusion-based network is
also proposed, in which the self-expressive layers corresponding to different
modalities are enforced to be the same. Extensive experiments on three datasets
show that the proposed methods significantly outperform the state-of-the-art
multimodal subspace clustering methods.
Groupwise Constrained Reconstruction for Subspace Clustering
Reconstruction-based subspace clustering methods compute a self-reconstruction
matrix over the samples and use it for spectral clustering to
obtain the final clustering result. Their success largely relies on the
assumption that the underlying subspaces are independent, which, however, does
not always hold in applications as the number of subspaces grows. In
this paper, we propose a novel reconstruction based subspace clustering model
without making the subspace independence assumption. In our model, certain
properties of the reconstruction matrix are explicitly characterized using the
latent cluster indicators, and the affinity matrix used for spectral clustering
can be directly built from the posterior of the latent cluster indicators
instead of the reconstruction matrix. Experimental results on both synthetic
and real-world datasets show that the proposed model can outperform the
state-of-the-art methods.
Comment: ICML201
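The final step these reconstruction-based methods share, spectral clustering on an affinity matrix, starts from eigenvectors of the normalized graph Laplacian. A minimal embedding sketch (the concluding k-means on the rows is left out):

```python
import numpy as np

def spectral_embedding(W, k):
    """Spectral embedding of a symmetric affinity matrix W: the k
    eigenvectors of the symmetric normalized Laplacian with smallest
    eigenvalues. Rows of the result are then clustered (e.g. k-means)
    to produce the final labels."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))  # guard zero degrees
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)       # eigh returns ascending eigenvalues
    return vecs[:, :k]                   # k smallest eigenvectors
```

For a block-diagonal affinity, points in the same block map to identical embedding rows, which is why the quality of the affinity (from a reconstruction matrix or, as above, from posterior cluster indicators) dominates the final result.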