Kernel Truncated Regression Representation for Robust Subspace Clustering
Subspace clustering aims to group data points into multiple clusters, each of
which corresponds to one subspace. Most existing subspace clustering approaches
assume that input data lie on linear subspaces. In practice, however, this
assumption usually does not hold. To achieve nonlinear subspace clustering, we
propose a novel method, called kernel truncated regression representation. Our
method consists of the following four steps: 1) projecting the input data into
a hidden space, where each data point can be linearly represented by other data
points; 2) calculating the linear representation coefficients of the data
representations in the hidden space; 3) truncating the trivial coefficients to
achieve robustness and block-diagonality; and 4) executing the graph cutting
operation on the coefficient matrix by solving a graph Laplacian problem. Our
method has two advantages: a closed-form solution and the capacity to cluster
data points that lie on nonlinear subspaces. The first makes our method
efficient on large-scale datasets, and the second enables it to address the
nonlinear subspace clustering
challenge. Extensive experiments on six benchmarks demonstrate the
effectiveness and the efficiency of the proposed method in comparison with
current state-of-the-art approaches.
Comment: 14 pages
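The four-step pipeline described above maps naturally onto standard numerical tooling. Below is a minimal, illustrative sketch in Python (NumPy/scikit-learn), assuming an RBF kernel for the hidden-space projection, a ridge-style closed-form solve for the coefficients, and per-column truncation; the kernel choice, the exact objective, and the parameter names `lam` and `keep` are our assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def kernel_truncated_clustering(X, n_clusters, lam=1e-2, keep=10):
    """Illustrative sketch of the four-step pipeline (not the authors' exact method)."""
    # Step 1: implicit projection into a hidden space via a kernel
    # (RBF is an assumed choice; any positive semidefinite kernel would do).
    K = rbf_kernel(X)
    n = K.shape[0]
    # Step 2: closed-form linear representation coefficients in the hidden
    # space, via a ridge-style solve (assumed objective).
    C = np.linalg.solve(K + lam * np.eye(n), K)
    np.fill_diagonal(C, 0.0)  # a point should not represent itself
    # Step 3: truncate trivial coefficients -- keep only the `keep` largest
    # entries (by magnitude) in each column, zeroing out the rest.
    for j in range(n):
        small = np.argsort(np.abs(C[:, j]))[:-keep]
        C[small, j] = 0.0
    # Step 4: graph cut on the symmetrized coefficient matrix, i.e.
    # spectral clustering on the induced affinity graph.
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```

Because steps 1–3 reduce to a kernel evaluation and one linear solve, the closed-form claim in the abstract corresponds to avoiding any iterative optimization before the final spectral step.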
Convex Graph Invariant Relaxations For Graph Edit Distance
The edit distance between two graphs is a widely used measure of similarity
that evaluates the smallest number of vertex and edge deletions/insertions
required to transform one graph to another. It is NP-hard to compute in
general, and a large number of heuristics have been proposed for approximating
this quantity. With few exceptions, these methods generally provide upper
bounds on the edit distance between two graphs. In this paper, we propose a new
family of computationally tractable convex relaxations for obtaining lower
bounds on graph edit distance. These relaxations can be tailored to the
structural properties of the particular graphs via convex graph invariants.
Specific examples that we highlight in this paper include constraints on the
graph spectrum as well as (tractable approximations of) the stability number
and the maximum-cut values of graphs. We prove under suitable conditions that
our relaxations are tight (i.e., they exactly compute the graph edit distance)
when one of the graphs has few distinct eigenvalues. We also validate the utility of
our framework on synthetic problems as well as real applications involving
molecular structure comparison problems in chemistry.
Comment: 27 pages, 7 figures
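As a concrete, hedged illustration of how a convex relaxation can lower-bound edit distance, the sketch below relaxes the permutation matrix in the edge-mismatch formulation to a doubly stochastic matrix (the Birkhoff polytope) and solves the resulting program with CVXPY. This is a generic relaxation for equal-sized graphs, not the paper's invariant-based construction; the function name and the use of CVXPY are our assumptions.

```python
import cvxpy as cp
import numpy as np

def ged_edge_lower_bound(A1, A2):
    """Lower bound on the edge edit distance between two equal-sized graphs.

    Relaxes min_P ||A1 P - P A2||_1 over permutation matrices P to the
    Birkhoff polytope of doubly stochastic matrices; since every permutation
    matrix is doubly stochastic, the relaxed optimum is a valid lower bound.
    """
    n = A1.shape[0]
    P = cp.Variable((n, n), nonneg=True)
    constraints = [cp.sum(P, axis=0) == 1, cp.sum(P, axis=1) == 1]
    # Each differing edge contributes two entries in a symmetric adjacency
    # matrix, hence the division by 2.
    objective = cp.Minimize(cp.sum(cp.abs(A1 @ P - P @ A2)) / 2)
    prob = cp.Problem(objective, constraints)
    prob.solve()
    return prob.value
```

Tightening this generic scheme with invariant-based constraints (spectral, stability-number, or max-cut conditions, as the abstract describes) would add further convex constraints on P or on the matched graphs without breaking tractability.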
Learning Robust and Discriminative Subspace With Low-Rank Constraints
IEEE Transactions on Neural Networks and Learning Systems. The article of record as published may be found at http://dx.doi.org/10.1109/tnnls.2015.2464090

In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods is limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints to exploit robust and discriminative subspaces for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, which incorporates a low-rank constraint. SRRS seeks low-rank representations from the noisy data and jointly learns a discriminative subspace from the recovered clean data. A supervised regularization function is designed to make use of the class label information and thereby enhance the discriminability of the subspace. Our approach is formulated as a constrained rank-minimization problem, and we design an inexact augmented Lagrange multiplier (ALM) optimization algorithm to solve it. Unlike existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from the recovered data and explicitly incorporates the supervised information. Our approach and several baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.

Funded by Naval Postgraduate School; National Science Foundation Computer and Network Systems; ONR Young Investigator; Office of Naval Research; U.S. Army Research Office Young Investigator
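The abstract's inexact ALM solver is not spelled out here, but solvers of this type for nuclear-norm (low-rank) terms typically rely on a singular value thresholding (SVT) proximal step. Below is a minimal sketch of that generic building block, assuming the standard nuclear-norm proximal operator; it is not the authors' full SRRS algorithm.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.

    Shrinks each singular value of X toward zero by tau, which is the
    low-rank update used inside typical inexact ALM iterations.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)  # soft-threshold the singular values
    return (U * s) @ Vt           # reassemble the (now low-rank) matrix
```

In an inexact ALM loop, a step like this would alternate with updates to the discriminative subspace and the Lagrange multipliers until the constraints are satisfied to tolerance.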