MILE: A Multi-Level Framework for Scalable Graph Embedding
Recently there has been a surge of interest in designing graph embedding
methods. Few, if any, can scale to graphs with millions of nodes, due to both
computational complexity and memory requirements. In this paper, we
relax this limitation by introducing the MultI-Level Embedding (MILE) framework
-- a generic methodology allowing contemporary graph embedding methods to scale
to large graphs. MILE repeatedly coarsens the graph into smaller ones using a
hybrid matching technique to maintain the backbone structure of the graph. It
then applies existing embedding methods on the coarsest graph and refines the
embeddings to the original graph through a graph convolution neural network
that it learns. The proposed MILE framework is agnostic to the underlying graph
embedding techniques and can be applied to many existing graph embedding
methods without modifying them. We employ our framework on several popular
graph embedding techniques and conduct embedding for real-world graphs.
Experimental results on five large-scale datasets demonstrate that MILE
boosts the speed of graph embedding by an order of magnitude while
generating embeddings of better quality for the task of node classification.
MILE can comfortably scale to a graph with 9 million nodes and 40 million
edges, on which existing methods run out of memory or take too long to compute
on a modern workstation. Our code and data are publicly available with detailed
instructions for adding new base embedding methods:
\url{https://github.com/jiongqian/MILE}.
Comment: Accepted in ICWSM 202
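The coarsen-embed-refine pipeline the abstract describes can be sketched as follows. This is a simplified, hypothetical illustration using NumPy only: greedy heavy-edge matching stands in for MILE's hybrid matching, an eigenvector embedding stands in for an arbitrary base embedding method, and a single unlearned propagation step stands in for the learned graph-convolution refinement.

```python
# Hypothetical sketch of a MILE-style pipeline (coarsen -> embed -> refine).
# All three components are simplified stand-ins, not the paper's method.
import numpy as np

def coarsen(A):
    """Greedy heavy-edge matching: merge each vertex with its heaviest
    unmatched neighbor. Returns the coarse adjacency and the fine-to-coarse
    projection matrix M."""
    n = A.shape[0]
    match = -np.ones(n, dtype=int)
    groups = []
    for u in range(n):
        if match[u] != -1:
            continue
        nbrs = [v for v in np.where(A[u] > 0)[0] if v != u and match[v] == -1]
        if nbrs:
            v = max(nbrs, key=lambda w: A[u, w])   # heaviest unmatched neighbor
            match[u] = match[v] = len(groups)
            groups.append([u, v])
        else:
            match[u] = len(groups)
            groups.append([u])
    M = np.zeros((n, len(groups)))
    for g, members in enumerate(groups):
        M[members, g] = 1.0
    return M.T @ A @ M, M

def base_embed(A, d=4):
    """Stand-in base embedding: top-d eigenvectors of the (symmetric)
    coarse adjacency; any off-the-shelf method could be plugged in here."""
    _, vecs = np.linalg.eigh(A)
    return vecs[:, -d:]

def refine(A, M, coarse_emb):
    """Project coarse embeddings back to the fine graph, then smooth once
    with normalized-adjacency propagation (a crude, unlearned analogue of
    MILE's GCN refinement)."""
    E = M @ coarse_emb                 # each fine node inherits its group's row
    deg = A.sum(axis=1) + 1e-12
    return (A / deg[:, None]) @ E      # one propagation step

# Tiny demo: a ring of 8 nodes.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

Ac, M = coarsen(A)
emb = refine(A, M, base_embed(Ac))
print(emb.shape)   # (8, 4): one d-dimensional embedding per original node
```

In the real framework, coarsening is applied repeatedly and the refinement model is trained, but the data flow is the same: the expensive base embedding runs only on the smallest graph.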
Graph Convolutional Neural Networks based on Quantum Vertex Saliency
This paper proposes a new Quantum Spatial Graph Convolutional Neural Network
(QSGCNN) model that can directly learn a classification function for graphs of
arbitrary sizes. Unlike state-of-the-art Graph Convolutional Neural Network
(GCNN) models, the proposed QSGCNN model incorporates the process of
identifying transitive aligned vertices between graphs, and transforms
arbitrary sized graphs into fixed-sized aligned vertex grid structures. In
order to learn representative graph characteristics, a new quantum spatial
graph convolution is proposed and employed to extract multi-scale vertex
features, in terms of quantum information propagation between grid vertices of
each graph. Since the quantum spatial convolution preserves the grid structures
of the input vertices (i.e., the convolution layer does not change the original
spatial sequence of vertices), the proposed QSGCNN model allows a
traditional convolutional neural network architecture to be directly
employed to further
learn from the global graph topology, providing an end-to-end deep learning
architecture that integrates the graph representation and learning in the
quantum spatial graph convolution layer and the traditional convolutional layer
for graph classification. The proposed QSGCNN model addresses the
shortcomings of information loss and imprecise information representation
that arise in existing GCNN models from their use of SortPooling or
SumPooling layers. Experiments on benchmark graph classification datasets
demonstrate the effectiveness of the proposed QSGCNN model relative to
existing state-of-the-art methods.
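The front end the abstract describes, mapping graphs of arbitrary size onto a fixed-size vertex grid so a standard CNN can follow, can be sketched as below. This is a hypothetical NumPy illustration: degree-based ordering stands in for the paper's transitive vertex alignment, and a mean-pooling propagation step stands in for the quantum spatial graph convolution.

```python
# Hypothetical sketch of a QSGCNN-style front end: arbitrary-sized graphs
# become fixed k x f vertex grids. The ordering and convolution here are
# simplified stand-ins for the paper's alignment and quantum convolution.
import numpy as np

def spatial_conv(A, X):
    """One propagation step: average each vertex's feature vector with its
    neighbors' (a simplified spatial graph convolution). Note the vertex
    order is preserved, which is what lets a grid CNN follow."""
    A_hat = A + np.eye(A.shape[0])       # add self-loops
    deg = A_hat.sum(axis=1)
    return (A_hat / deg[:, None]) @ X

def to_vertex_grid(A, X, k):
    """Order vertices deterministically (here: by degree, descending),
    then pad with zero rows or truncate so every graph yields exactly
    k rows, i.e. a fixed-size grid."""
    order = np.argsort(-A.sum(axis=1), kind="stable")
    X = X[order]
    f = X.shape[1]
    if len(X) >= k:
        return X[:k]
    return np.vstack([X, np.zeros((k - len(X), f))])

# Demo: two graphs of different sizes map to the same grid shape.
A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # 3-node star
X1 = np.eye(3)                                                 # 3 features
A2 = np.ones((5, 5)) - np.eye(5)                               # 5-node clique
X2 = np.eye(5, 3)                                              # 3 features

g1 = to_vertex_grid(A1, spatial_conv(A1, X1), k=4)
g2 = to_vertex_grid(A2, spatial_conv(A2, X2), k=4)
print(g1.shape, g2.shape)   # both (4, 3): fixed-size grids ready for a CNN
```

The key property illustrated is the one the abstract emphasizes: because the convolution does not reorder vertices and the grid size is fixed, conventional convolutional layers can be stacked directly on the output for end-to-end graph classification.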