
    Symmetric low-rank representation for subspace clustering

    We propose a symmetric low-rank representation (SLRR) method for subspace clustering, which assumes that a data set is approximately drawn from a union of multiple subspaces. The method reveals the memberships of the multiple subspaces through the self-expressiveness property of the data. In particular, SLRR combines collaborative representation with low-rank matrix recovery techniques to learn a symmetric low-rank representation that preserves the subspace structure of high-dimensional data. In contrast to the iterative singular value decompositions performed by some existing low-rank representation based algorithms, the symmetric low-rank representation in SLRR is obtained as a closed-form solution of the symmetric low-rank optimization problem. An affinity graph matrix for spectral clustering is then constructed from the angular information of the principal directions of the symmetric low-rank representation. Extensive experimental results show that SLRR outperforms state-of-the-art subspace clustering algorithms. Comment: 13 pages
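
    As a hedged illustration of the closed-form flavor described above: for the noiseless low-rank representation problem, the nuclear-norm minimizer subject to X = XZ is known to be the symmetric shape-interaction matrix built from the right singular vectors of the data, and an angle-based affinity can be read off its principal directions. The sketch below uses that classical construction as a stand-in; the rank cutoff, the exponent `alpha`, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def slrr_affinity(X, rank=None, alpha=2):
    """Illustrative sketch of a symmetric-LRR-style affinity.

    X: (d, n) data matrix, columns are samples. `rank` and `alpha`
    are hypothetical knobs, not parameters from the paper.
    """
    # Skinny SVD of the data; for noiseless LRR the minimizer of
    # ||Z||_* s.t. X = XZ is the symmetric matrix Z = Vr Vr^T,
    # one classical way to obtain a closed-form symmetric LRR.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = rank or int(np.sum(s > 1e-10))
    Vr = Vt[:r].T                      # (n, r), principal directions

    # Use angular information: row-normalize so inner products
    # become cosines of angles between principal-direction rows.
    Vn = Vr / np.maximum(np.linalg.norm(Vr, axis=1, keepdims=True), 1e-12)
    return np.abs(Vn @ Vn.T) ** alpha  # affinity from angles

# Usage (synthetic): points from two 2-D subspaces embedded in R^5
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((5, 2)) @ rng.standard_normal((2, 40))
               for _ in range(2)])
labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(slrr_affinity(X))
```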

    Multiview subspace clustering using low-rank representation

    Multiview subspace clustering is one of the most widely used methods for exploiting the internal structure of multiview data. Most previous studies learn multiview representations by constructing an affinity matrix for each view individually, without simultaneously exploiting the intrinsic characteristics of the multiview data. In this paper, we propose a multiview low-rank representation (MLRR) method that comprehensively discovers the correlations in multiview data for multiview subspace clustering. MLRR treats symmetric low-rank representations (LRRs) as an approximately linear spatial transformation under a new basis, namely the multiview data themselves, to fully exploit the angular information of the principal directions of the LRRs, which is used to construct an affinity matrix for multiview subspace clustering under a symmetry condition. MLRR takes full advantage of LRR techniques and a diversity regularization term to exploit the consistency and diversity of multiple views, respectively, while simultaneously imposing a symmetry constraint on the LRRs; hence, the angular information of the principal directions of the rows is consistent with that of the columns in the symmetric LRRs. The MLRR model can be computed efficiently by solving a convex optimization problem. Moreover, we present an intuitive fusion strategy for symmetric LRRs from the perspective of spectral clustering, yielding a compact representation that can be shared by multiple views and comprehensively represents the intrinsic features of multiview data. Finally, experimental results on benchmark datasets demonstrate the effectiveness and robustness of MLRR compared with several state-of-the-art multiview subspace clustering algorithms.
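
    The abstract describes the fusion strategy only as "intuitive", so the following is a minimal sketch of one plausible reading, in which per-view symmetric affinity matrices are combined by a weighted average before a single spectral clustering pass. The function `fuse_views`, the uniform weights, and the random stand-in affinities are all assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def fuse_views(affinities, weights=None):
    """Hypothetical fusion: a weighted average of per-view symmetric
    affinity matrices into one shared matrix."""
    if weights is None:
        weights = [1.0 / len(affinities)] * len(affinities)
    fused = sum(w * A for w, A in zip(weights, affinities))
    return (fused + fused.T) / 2          # keep the result symmetric

# Usage: random symmetric stand-ins for per-view affinities (in
# practice these would come from a per-view symmetric LRR step),
# fused and clustered once on the shared representation.
rng = np.random.default_rng(0)
views = [np.abs(rng.standard_normal((40, 40))) for _ in range(2)]
views = [(W + W.T) / 2 for W in views]
labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(fuse_views(views))
```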

    Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees

    Recently, a line of research has proposed employing Spectral Clustering (SC) to segment (group; throughout the paper we use segmentation, clustering, and grouping, and their verb forms, interchangeably) high-dimensional structured data such as data (approximately) lying on subspaces (following [liu2010robust], we use the term "subspace" to denote both linear and affine subspaces; there is a trivial conversion between the two, as mentioned therein) or low-dimensional manifolds. By learning the affinity matrix in the form of a sparse reconstruction, techniques proposed in this vein often considerably boost performance in subspace settings where traditional SC can fail. Despite this success, fundamental problems remain unsolved: the spectral properties of the learned affinity matrix cannot be gauged in advance, and an awkward symmetrization step is often required to post-process the affinity before it is fed to SC. We therefore advocate enforcing the symmetric positive semidefinite constraint explicitly during learning (Low-Rank Representation with Positive SemiDefinite constraint, or LRR-PSD), and show that it can in fact be solved efficiently by a dedicated scheme rather than by general-purpose SDP solvers, which usually scale poorly. We provide rigorous mathematical derivations showing that, in its canonical form, LRR-PSD is equivalent to the recently proposed Low-Rank Representation (LRR) scheme [liu2010robust], thereby offering theoretical and practical insights into both LRR-PSD and LRR and inviting future research. In terms of computational cost, our proposal is at most comparable to that of LRR, if not cheaper. We validate our theoretical analysis and optimization scheme by experiments on both synthetic and real data sets. Comment: 10 pages, 4 figures. Accepted by the ICDM Workshop on Optimization Based Methods for Emerging Data Mining Problems (OEDM), 2010. Main proof simplified and typos corrected. Experimental data slightly added.
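
    For concreteness, the standard Euclidean projection onto the symmetric PSD cone, the building block that explicit PSD constraints typically rely on inside alternating solvers, is sketched below. This is textbook machinery assumed for illustration, not a reproduction of the paper's dedicated LRR-PSD scheme.

```python
import numpy as np

def project_psd(A):
    """Euclidean projection onto the symmetric PSD cone:
    symmetrize, eigendecompose, clip negative eigenvalues."""
    S = (A + A.T) / 2                       # enforce symmetry
    w, V = np.linalg.eigh(S)                # spectral decomposition
    return (V * np.clip(w, 0, None)) @ V.T  # drop the negative spectrum

# Usage: the projection of a random matrix is PSD up to round-off.
Z = project_psd(np.random.default_rng(0).standard_normal((5, 5)))
assert np.min(np.linalg.eigvalsh(Z)) > -1e-10
```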

    Kernel Truncated Regression Representation for Robust Subspace Clustering

    Subspace clustering aims to group data points into multiple clusters, each of which corresponds to one subspace. Most existing subspace clustering approaches assume that the input data lie on linear subspaces; in practice, however, this assumption usually does not hold. To achieve nonlinear subspace clustering, we propose a novel method called kernel truncated regression representation. Our method consists of four steps: 1) projecting the input data into a hidden space in which each data point can be linearly represented by the other data points; 2) calculating the linear representation coefficients of the data representations in the hidden space; 3) truncating the trivial coefficients to achieve robustness and block-diagonality; and 4) executing the graph-cut operation on the coefficient matrix by solving a graph Laplacian problem. Our method has the advantages of a closed-form solution and the capacity to cluster data points lying on nonlinear subspaces. The first advantage makes our method efficient on large-scale datasets, and the second enables it to meet the challenge of nonlinear subspace clustering. Extensive experiments on six benchmarks demonstrate the effectiveness and efficiency of the proposed method in comparison with current state-of-the-art approaches. Comment: 14 pages
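
    A loose sketch of the four steps might look as follows, assuming an RBF kernel for the hidden-space projection, a ridge-style closed form for the representation coefficients, and a keep-top-k rule for the truncation. The parameters `lam`, `keep`, and `gamma` are hypothetical, and the paper's exact objective may differ.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

def ktrr_sketch(X, n_clusters, lam=0.1, keep=8, gamma=None):
    """Illustrative four-step pipeline. X: (n, d), rows are samples."""
    # 1) Implicit projection into a hidden space via an RBF kernel.
    K = rbf_kernel(X, gamma=gamma)
    n = K.shape[0]
    # 2) Closed-form ridge-style self-representation in that space:
    #    C = (K + lam*I)^{-1} K, with self-representation forbidden.
    C = np.linalg.solve(K + lam * np.eye(n), K)
    np.fill_diagonal(C, 0.0)
    # 3) Truncate trivial coefficients: keep the `keep` largest
    #    magnitudes per column for robustness / block-diagonality.
    A = np.abs(C)
    thresh = -np.sort(-A, axis=0)[keep - 1]   # keep-th largest per column
    A[A < thresh] = 0.0
    # 4) Symmetrize and cut the graph via spectral clustering,
    #    which solves the associated graph Laplacian problem.
    W = (A + A.T) / 2
    return SpectralClustering(n_clusters=n_clusters, affinity='precomputed',
                              random_state=0).fit_predict(W)

# Usage (synthetic nonlinear data): two concentric circles in R^2.
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=200, factor=0.4, noise=0.05, random_state=0)
labels = ktrr_sketch(X, n_clusters=2)
```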