Self-weighted Multiple Kernel Learning for Graph-based Clustering and Semi-supervised Classification
Multiple kernel learning (MKL) methods are generally believed to perform better
than single kernel methods. However, some empirical studies show that this is
not always true: the combination of multiple kernels may yield even
worse performance than using a single kernel. There are two possible reasons
for the failure: (i) most existing MKL methods assume that the optimal kernel
is a linear combination of base kernels, which may not hold true; and (ii) some
kernel weights are inappropriately assigned due to noise and carelessly
designed algorithms. In this paper, we propose a novel MKL framework by
following two intuitive assumptions: (i) each kernel is a perturbation of the
consensus kernel; and (ii) the kernel that is close to the consensus kernel
should be assigned a large weight. Impressively, the proposed method can
automatically assign an appropriate weight to each kernel without introducing
additional parameters, unlike existing methods. The proposed approach is
integrated into a unified framework for graph-based clustering and
semi-supervised classification. We have conducted experiments on multiple
benchmark datasets and our empirical results verify the superiority of the
proposed framework. Comment: Accepted by IJCAI 2018, code is available.
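A minimal sketch (assuming numpy; not the authors' released code) of the self-weighting idea described in this abstract: the consensus kernel is a weighted average of the base kernels, and each kernel's weight is set inversely proportional to its distance from the consensus, so kernels close to the consensus get large weights without any extra trade-off parameter. The specific alternating update below is an illustrative choice, not necessarily the paper's exact rule.

```python
import numpy as np

def self_weighted_consensus(kernels, n_iter=20):
    """Alternate between a consensus kernel and per-kernel weights.

    Each weight is inversely proportional to the Frobenius distance between
    the base kernel and the current consensus, so kernels close to the
    consensus receive large weights without introducing extra parameters.
    """
    m = len(kernels)
    w = np.ones(m) / m                           # start from uniform weights
    for _ in range(n_iter):
        consensus = sum(wi * K for wi, K in zip(w, kernels))
        dists = np.array([np.linalg.norm(K - consensus) for K in kernels])
        w = 1.0 / (2.0 * dists + 1e-12)          # closer kernels -> larger weights
        w /= w.sum()                             # normalize onto the simplex
    return consensus, w

# toy usage: three random PSD kernels on 5 points
rng = np.random.default_rng(0)
Ks = [(lambda A: A @ A.T)(rng.standard_normal((5, 3))) for _ in range(3)]
K_star, weights = self_weighted_consensus(Ks)
print(weights)
```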
Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix Factorization
Nonnegative Matrix Factorization (NMF) has been continuously evolving in
several areas such as pattern recognition and information retrieval. It
factorizes a matrix into a product of two low-rank nonnegative matrices that
define a parts-based, linear representation of nonnegative data.
Recently, Graph regularized NMF (GrNMF) was proposed to find a compact
representation, which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel idea that engages a Multiple Kernel Learning
approach to refine the graph structure, which reflects both the factorization of
the matrix and the new data space. GrNMF is improved by utilizing the graph
refined by the kernel learning, and then a novel kernel learning method is
introduced under the GrNMF framework. Our approach shows encouraging results
in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF,
and SVD. Comment: This paper has been withdrawn by the author due to the terrible
writing.
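The abstract combines graph-regularized NMF with a kernel-refined affinity graph. Below is a hedged sketch of standard GrNMF multiplicative updates (Cai et al. style), where the graph W is assumed to come from a combined kernel rather than the raw data; this is a crude stand-in for the kernel-learned graph in the paper, and the function name and arguments are hypothetical.

```python
import numpy as np

def grnmf(X, W, rank, lam=0.1, n_iter=200, eps=1e-9):
    """Graph-regularized NMF with multiplicative updates.

    X : (features x samples) nonnegative data matrix
    W : (samples x samples) nonnegative affinity graph, here assumed to
        come from a combined (multiple-kernel) similarity
    Minimizes ||X - U V^T||_F^2 + lam * Tr(V^T L V), with L = D - W.
    """
    f, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((f, rank))
    V = rng.random((n, rank))
    D = np.diag(W.sum(axis=1))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# toy usage: the affinity is an equally weighted sum of Gaussian kernels
# over the samples -- a simplistic stand-in for the learned graph
rng = np.random.default_rng(1)
X = rng.random((20, 30))                          # 20 features, 30 samples
sq = ((X.T[:, None] - X.T[None]) ** 2).sum(-1)    # pairwise squared distances
W = sum(np.exp(-sq / (2.0 * s)) for s in (0.5, 1.0, 2.0)) / 3.0
U, V = grnmf(X, W, rank=4)
```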
Graph Few-shot Learning via Knowledge Transfer
There have been extensive studies on the challenging problem of semi-supervised
node classification. As a frontier, Graph Neural Networks (GNNs), which update
the representation of each node by aggregating information from its neighbors,
have recently attracted great interest. However, most GNNs have shallow
layers with a limited receptive field and may not achieve satisfactory
performance especially when the number of labeled nodes is quite small. To
address this challenge, we propose a novel graph few-shot learning (GFL)
algorithm that incorporates prior knowledge learned from auxiliary graphs to
improve classification accuracy on the target graph. Specifically, a
transferable metric space characterized by a node embedding and a
graph-specific prototype embedding function is shared between auxiliary graphs
and the target, facilitating the transfer of structural knowledge. Extensive
experiments and ablation studies on four real-world graph datasets demonstrate
the effectiveness of our proposed model. Comment: Full paper (with Appendix) of AAAI 2020.
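A minimal, generic sketch of the prototype-based classification step implied by the abstract: class prototypes are means of support-node embeddings in a shared metric space, and query nodes take the label of the nearest prototype. This is a prototypical-network-style illustration only, not the full GFL model (which additionally learns graph-specific prototype embedding functions on auxiliary graphs); all names here are hypothetical.

```python
import numpy as np

def prototype_classify(emb, support_idx, support_y, query_idx):
    """Label query nodes by distance to class prototypes.

    emb         : (num_nodes x dim) node embeddings in the shared metric space
    support_idx : indices of labeled (support) nodes
    support_y   : labels aligned with support_idx
    query_idx   : indices of nodes to classify
    """
    classes = np.unique(support_y)
    protos = np.stack([emb[support_idx[support_y == c]].mean(axis=0)
                       for c in classes])                     # one prototype per class
    d = np.linalg.norm(emb[query_idx][:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]                          # nearest prototype wins

# toy usage on random embeddings
rng = np.random.default_rng(0)
emb = rng.standard_normal((10, 4))
print(prototype_classify(emb,
                         support_idx=np.array([0, 1, 2, 3]),
                         support_y=np.array([0, 0, 1, 1]),
                         query_idx=np.array([4, 5, 6])))
```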
Similarity Learning via Kernel Preserving Embedding
Data similarity is a key concept in many data-driven applications. Many
algorithms are sensitive to similarity measures. To tackle this fundamental
problem, automatic learning of similarity information from data via
self-expression has been developed and successfully applied in various models,
such as low-rank representation, sparse subspace learning, and semi-supervised
learning. However, it merely tries to reconstruct the original data, and some
valuable information, e.g., the manifold structure, is largely ignored. In this
paper, we argue that it is beneficial to preserve the overall relations when we
extract similarity information. Specifically, we propose a novel similarity
learning framework by minimizing the reconstruction error of kernel matrices,
rather than the reconstruction error of original data adopted by existing work.
Taking the clustering task as an example to evaluate our method, we observe
considerable improvements compared to other state-of-the-art methods. More
importantly, our proposed framework is very general and provides a novel and
fundamental building block for many other similarity-based tasks. Besides, the
proposed kernel-preserving scheme opens up a large number of possibilities for
embedding high-dimensional data into a low-dimensional space. Comment: Published in AAAI 2019.
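A hedged sketch of the kernel-preserving self-expression idea: learn a similarity matrix Z by reconstructing the kernel matrix, min_Z ||K - KZ||_F^2 + alpha ||Z||_F^2, instead of reconstructing the raw data. With a plain Frobenius regularizer this has a closed form; the published model imposes further structure, so this is only an illustration and the function and parameter names are hypothetical.

```python
import numpy as np

def kernel_preserving_similarity(K, alpha=1.0):
    """Learn Z by reconstructing the kernel matrix rather than the raw data.

    Solves min_Z ||K - K Z||_F^2 + alpha ||Z||_F^2, whose closed-form
    solution is Z = (K^T K + alpha I)^{-1} K^T K.
    """
    n = K.shape[0]
    G = K.T @ K
    return np.linalg.solve(G + alpha * np.eye(n), G)

# toy usage: an RBF kernel on random points; the learned Z can then feed a
# spectral clustering step via the symmetrized affinity (|Z| + |Z^T|) / 2
rng = np.random.default_rng(0)
X = rng.standard_normal((15, 3))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / (2.0 * sq.mean()))
Z = kernel_preserving_similarity(K, alpha=0.5)
A = (np.abs(Z) + np.abs(Z.T)) / 2.0
```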