Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix Factorization
Nonnegative Matrix Factorization (NMF) has been widely applied in areas such as pattern recognition and information retrieval. It factorizes a matrix into the product of two low-rank nonnegative matrices, yielding a parts-based, linear representation of nonnegative data.
Recently, Graph regularized NMF (GrNMF) was proposed to find a compact representation, which uncovers the hidden semantics while respecting the intrinsic geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel idea that engages a Multiple Kernel Learning approach to refine the graph structure so that it reflects the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by kernel learning, and a novel kernel learning method is then introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF, and SVD.

Comment: This paper has been withdrawn by the author due to the terrible writing.
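As background for the abstracts in this listing, the plain NMF objective min ||X - WH||_F^2 with W, H >= 0 can be sketched with the classic Lee-Seung multiplicative updates. This is an illustrative baseline only, not the graph-regularized, kernel-refined method the abstract proposes:

```python
import numpy as np

def nmf(X, rank, n_iter=200, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates.
    Illustrative baseline only; the abstract's method additionally
    uses a graph regularizer refined by multiple kernel learning."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-10  # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H; preserves nonnegativity
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W; preserves nonnegativity
    return W, H

X = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf(X, rank=4)
err = np.linalg.norm(X - W @ H)
```

Each update multiplies by a nonnegative ratio, so W and H stay nonnegative throughout, which is what makes the representation parts-based.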
Adaptive multi-view semi-supervised nonnegative matrix factorization
Multi-view clustering, which explores complementary information between multiple distinct feature sets, has received considerable attention. For accurate clustering, all data with the same label should be clustered together regardless of their multiple views. However, this is not guaranteed in existing approaches. To address this issue, we propose Adaptive Multi-View Semi-Supervised Nonnegative Matrix Factorization (AMVNMF), which uses label information as hard constraints to ensure that data with the same label are clustered together, so that the discriminating power of the new representations is enhanced. Besides, AMVNMF provides a viable solution to learn the weight of each view adaptively with only a single parameter. Using the L2,1-norm, AMVNMF is also robust to noise and outliers. We further develop an efficient iterative algorithm for solving the optimization problem. Experiments carried out on five well-known datasets have demonstrated the effectiveness of AMVNMF in comparison to other existing state-of-the-art approaches in terms of accuracy and normalized mutual information.
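The single-parameter adaptive view weighting could, in spirit, look like the sketch below, where each view's weight shrinks with its reconstruction loss and one exponent gamma controls how sharply the weights concentrate. The function name and the exact update rule are assumptions for illustration; the actual AMVNMF update is derived in the paper.

```python
import numpy as np

def adaptive_view_weights(losses, gamma=2.0):
    """Hypothetical single-parameter view weighting: views with smaller
    reconstruction loss receive larger weight; gamma > 1 controls how
    sharply the weights concentrate. Illustrative sketch only."""
    losses = np.asarray(losses, dtype=float)
    w = losses ** (1.0 / (1.0 - gamma))  # gamma=2 gives w_i proportional to 1/loss_i
    return w / w.sum()                   # normalize so weights sum to 1
```

For example, per-view losses [1.0, 4.0] with gamma=2 give weights [0.8, 0.2], so the cleaner view dominates the factorization.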
Self-weighted Multiple Kernel Learning for Graph-based Clustering and Semi-supervised Classification
Multiple kernel learning (MKL) methods are generally believed to perform better than single-kernel methods. However, some empirical studies show that this is not always true: the combination of multiple kernels may yield even worse performance than using a single kernel. There are two possible reasons for the failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold true; and (ii) some kernel weights are inappropriately assigned due to noise and carelessly designed algorithms. In this paper, we propose a novel MKL framework by
following two intuitive assumptions: (i) each kernel is a perturbation of the
consensus kernel; and (ii) the kernel that is close to the consensus kernel
should be assigned a large weight. Impressively, the proposed method can automatically assign an appropriate weight to each kernel without introducing the additional parameters that existing methods require. The proposed method is integrated into a unified framework for graph-based clustering and semi-supervised classification. We have conducted experiments on multiple benchmark datasets, and our empirical results verify the superiority of the proposed framework.

Comment: Accepted by IJCAI 2018. Code is available.
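The two assumptions above suggest a simple alternating scheme: form a consensus kernel as a weighted average of the base kernels, then weight each base kernel inversely to its distance from that consensus. The sketch below is one parameter-free reading of that idea, not the paper's exact optimization:

```python
import numpy as np

def consensus_kernel_weights(kernels, n_iter=20):
    """Sketch of parameter-free kernel weighting: alternate between
    (a) consensus = weighted average of base kernels, and
    (b) weight_i proportional to 1 / ||K_i - consensus||_F,
    so kernels close to the consensus get large weights.
    Assumed form for illustration; see the paper for the derivation."""
    K = np.stack(kernels)              # shape (m, n, n)
    m = K.shape[0]
    w = np.full(m, 1.0 / m)            # start with uniform weights
    for _ in range(n_iter):
        consensus = np.tensordot(w, K, axes=1)          # weighted average kernel
        dist = np.array([np.linalg.norm(Ki - consensus)
                         for Ki in K]) + 1e-12           # guard against /0
        w = 1.0 / dist
        w /= w.sum()
    return w, consensus
```

With two identical base kernels and one outlier kernel, the outlier ends up with the smallest weight, matching assumption (ii).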
Similarity Learning via Kernel Preserving Embedding
Data similarity is a key concept in many data-driven applications. Many
algorithms are sensitive to similarity measures. To tackle this fundamental
problem, automatic learning of similarity information from data via self-expression has been developed and successfully applied in various models, such as low-rank representation, sparse subspace learning, and semi-supervised learning. However, self-expression merely tries to reconstruct the original data, and some valuable information, e.g., the manifold structure, is largely ignored. In this
paper, we argue that it is beneficial to preserve the overall relations when we
extract similarity information. Specifically, we propose a novel similarity
learning framework by minimizing the reconstruction error of kernel matrices,
rather than the reconstruction error of original data adopted by existing work.
Taking the clustering task as an example to evaluate our method, we observe
considerable improvements compared to other state-of-the-art methods. More
importantly, our proposed framework is very general and provides a novel and fundamental building block for many other similarity-based tasks. Besides, the proposed kernel-preserving scheme opens up many possibilities for embedding high-dimensional data into a low-dimensional space.

Comment: Published in AAAI 2019.
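Minimizing the reconstruction error of the kernel matrix rather than of the raw data can be sketched as a ridge-regularized self-expression, S = argmin ||K - K S||_F^2 + alpha ||S||_F^2, which admits the closed form below. This is a simplified stand-in for the paper's formulation, which imposes additional structure on S:

```python
import numpy as np

def kernel_preserving_similarity(K, alpha=1.0):
    """Ridge-regularized self-expression on a kernel matrix K:
    minimize ||K - K S||_F^2 + alpha ||S||_F^2, whose closed-form
    solution is S = (K^T K + alpha I)^{-1} K^T K. Simplified sketch;
    the paper's method adds further constraints on S."""
    n = K.shape[0]
    G = K.T @ K
    return np.linalg.solve(G + alpha * np.eye(n), G)
```

The learned S can then serve directly as a similarity matrix, e.g., as the affinity input to spectral clustering.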