Tensor graph convolutional neural network
In this paper, we propose a novel tensor graph convolutional neural network
(TGCNN) to conduct convolution on factorizable graphs, focusing on two types
of problems: sequential dynamic graphs and cross-attribute graphs. In
particular, we propose a graph-preserving layer, comprising cross-graph
convolution and graph pooling, to memorize salient nodes of the factorized
subgraphs. For cross-graph convolution, a parameterized Kronecker sum
operation is proposed to generate a conjunctive adjacency matrix
characterizing the relationship between every pair of nodes across two
subgraphs. With this operation, general graph convolution can be performed
efficiently through the composition of small matrices, which reduces the
memory and computational burden. By encapsulating the sequence of graphs in a
recursive learning framework, both the dynamics and the spatial layout of the
graphs can be efficiently encoded. To validate the proposed TGCNN,
experiments are conducted on skeleton action datasets as well as a matrix
completion dataset. The results demonstrate that our method achieves
performance competitive with state-of-the-art methods.
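The unparameterized Kronecker sum at the heart of the cross-graph convolution
can be sketched in a few lines of NumPy (the graph sizes below are arbitrary
illustrations, and the paper's learned parameterization is omitted):

```python
import numpy as np

def kronecker_sum(A, B):
    """Kronecker sum A (+) B = A (x) I_m + I_n (x) B.

    For adjacency matrices A (n x n) and B (m x m) of two factor
    subgraphs, the result is the adjacency matrix of their Cartesian
    product graph: node (i, j) connects to (i', j) when A[i, i'] != 0
    and to (i, j') when B[j, j'] != 0.
    """
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

# Two tiny factor subgraphs: a 2-node path and a 3-node path.
A = np.array([[0., 1.], [1., 0.]])
B = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
C = kronecker_sum(A, B)   # 6 x 6 conjunctive adjacency matrix
```

Convolution filters built from `C` only ever require storing the two small
factor matrices, which is the source of the memory savings claimed above.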
Spatial-Temporal Tensor Graph Convolutional Network for Traffic Prediction
Accurate traffic prediction is crucial to the guidance and management of
urban traffic. However, most existing traffic prediction models do not
account for the computational and memory costs of capturing spatial-temporal
dependence among traffic data. In this work, we propose a factorized
Spatial-Temporal Tensor Graph Convolutional Network for traffic speed
prediction. Traffic networks are modeled and unified into a graph that
integrates spatial and temporal information simultaneously. We further
extend graph convolution into tensor space and propose a tensor graph
convolution network to extract more discriminative features from
spatial-temporal graph data. To reduce the computational burden, we apply
Tucker tensor decomposition and derive a factorized tensor convolution,
which performs separate filtering in small-scale spatial, temporal, and
feature modes. Moreover, discarding trivial components during tensor
decomposition helps suppress noise in the traffic data. Extensive
experiments on two real-world traffic speed datasets demonstrate that our
method is more effective than traditional traffic prediction methods and
achieves state-of-the-art performance.
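The mode-wise filtering described above can be illustrated with a minimal
NumPy sketch. The tensor sizes and ranks below are arbitrary placeholders,
and fixed random matrices stand in for the learned Tucker factors:

```python
import numpy as np

# X: a spatial-temporal-feature tensor (nodes, time steps, features).
rng = np.random.default_rng(0)
N, T, F = 20, 12, 8                 # hypothetical sizes
X = rng.standard_normal((N, T, F))

# Small factor matrices filter each mode separately (stand-ins for the
# learned Tucker factors; ranks chosen arbitrarily for illustration).
U_s = rng.standard_normal((5, N))   # spatial mode, rank 5
U_t = rng.standard_normal((4, T))   # temporal mode, rank 4
U_f = rng.standard_normal((3, F))   # feature mode, rank 3

# Sequential mode-n products: Y = X x1 U_s x2 U_t x3 U_f.
Y = np.einsum('ntf,sn,ut,gf->sug', X, U_s, U_t, U_f)
```

Each mode is filtered by a matrix no larger than that mode's own dimension,
rather than by one dense operator over all N*T*F entries, which is where the
computational savings come from.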
Classifying Signals on Irregular Domains via Convolutional Cluster Pooling
We present a novel, hierarchical approach for the supervised classification
of signals defined over a fixed graph, reflecting shared properties of the
dataset. To this end, we introduce a Convolutional Cluster Pooling layer
that exploits multi-scale clustering to highlight, at different resolutions,
locally connected regions of the input graph. Our proposal generalises
well-established neural models such as Convolutional Neural Networks (CNNs)
to irregular and complex domains by exploiting the weight-sharing property
in a graph-oriented architecture. In this work, such sharing is based on the
centrality of each vertex within its soft-assigned cluster. Extensive
experiments on NTU RGB+D, CIFAR-10 and 20NEWS demonstrate the effectiveness
of the proposed technique in capturing both local and global patterns in
graph-structured data from different domains.
Comment: 12 pages, 6 figures. To appear in the Proceedings of the 22nd
International Conference on Artificial Intelligence and Statistics (AISTATS)
2019, Naha, Okinawa, Japan. PMLR: Volume 8
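A generic soft-cluster pooling step of the kind this layer builds on can be
sketched as follows (the assignment logits here are random placeholders for
what the paper derives from vertex centrality, and all sizes are invented):

```python
import numpy as np

def softmax(Z, axis=-1):
    Z = Z - Z.max(axis=axis, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

def cluster_pool(X, A, S_logits):
    """Pool node features X and adjacency A with a soft assignment.

    S (n x k) soft-assigns each of the n nodes to k clusters; the
    pooled graph has cluster features S^T X and adjacency S^T A S.
    """
    S = softmax(S_logits, axis=1)   # each row sums to 1
    X_pool = S.T @ X                # (k, d) cluster features
    A_pool = S.T @ A @ S            # (k, k) coarse adjacency
    return X_pool, A_pool

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))              # 6 nodes, 4 features
A = (rng.random((6, 6)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T               # symmetric, no self-loops
X_pool, A_pool = cluster_pool(X, A, rng.standard_normal((6, 2)))
```

Stacking such layers with progressively fewer clusters yields the
multi-resolution hierarchy the abstract describes.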
Graph Prolongation Convolutional Networks: Explicitly Multiscale Machine Learning on Graphs with Applications to Modeling of Cytoskeleton
We define a novel type of ensemble Graph Convolutional Network (GCN) model.
Using optimized linear projection operators to map between spatial scales of
a graph, this ensemble model learns to aggregate information from each scale
for its final prediction. We calculate these linear projection operators by
minimizing an objective function relating the structure matrices used for
each GCN. Equipped with these projections, our model (a Graph
Prolongation-Convolutional Network) outperforms other GCN ensemble models at
predicting the potential energy of monomer subunits in a coarse-grained
mechanochemical simulation of microtubule bending. We demonstrate these
performance gains by measuring an estimate of the FLOPs spent to train each
model, as well as wall-clock time. Because our model learns at multiple
scales, it is possible to train each scale according to a predetermined
schedule of coarse vs. fine training. We examine several such schedules
adapted from the Algebraic Multigrid (AMG) literature and quantify the
computational benefit of each. We also compare this model to another model
which features an optimized coarsening of the input graph. Finally, we
derive backpropagation rules for the input of our network model with respect
to its output, and discuss how our method may be extended to very large
graphs.
Comment: Revised version submitted to IOP Machine Learning: Science and
Technology
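The role of a prolongation operator in such an ensemble can be illustrated
with a toy sketch. The partition, predictions, and the fixed 50/50 mixing
weights below are all invented for illustration; the paper instead optimizes
the projection and learns the aggregation:

```python
import numpy as np

# Fine graph with 6 nodes; coarse graph with 3 "super-nodes", each
# covering two fine nodes (partition chosen by hand for illustration).
partition = [0, 0, 1, 1, 2, 2]      # fine node -> coarse node
P = np.zeros((6, 3))                # prolongation: coarse -> fine
for fine, coarse in enumerate(partition):
    P[fine, coarse] = 1.0

# A per-super-node coarse prediction is prolonged onto the fine graph
# and combined with a fine-scale prediction (learned aggregation
# weights replaced here by a fixed average).
y_coarse = np.array([1.0, 2.0, 3.0])
y_fine = np.array([0.8, 1.2, 1.9, 2.1, 2.9, 3.1])
y_ensemble = 0.5 * (P @ y_coarse) + 0.5 * y_fine
```

Training the coarse model first and refining on the fine graph afterwards is
exactly the kind of coarse-vs.-fine schedule the abstract borrows from the
AMG literature.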
Graph Neural Networks: Taxonomy, Advances and Trends
Graph neural networks provide a powerful toolkit for embedding real-world
graphs into low-dimensional spaces according to specific tasks. Several
surveys on this topic already exist; however, they usually emphasize
different angles, so readers cannot see a panorama of the field. This survey
aims to overcome that limitation and provide a comprehensive review of graph
neural networks. We first present a novel taxonomy for graph neural
networks, and then refer to up to 400 relevant publications to show the
panorama of the field, classifying all of them into the corresponding
categories. To drive graph neural networks into a new stage, we summarize
four future research directions aimed at overcoming the remaining
challenges. We hope that more and more scholars will understand and exploit
graph neural networks and use them in their own research communities.
Comment: 42 pages, 7 figures