GMNN: Graph Markov Neural Networks
This paper studies semi-supervised object classification in relational data,
which is a fundamental problem in relational data modeling. The problem has
been extensively studied in the literature of both statistical relational
learning (e.g. relational Markov networks) and graph neural networks (e.g.
graph convolutional networks). Statistical relational learning methods can
effectively model the dependency of object labels through conditional random
fields for collective classification, whereas graph neural networks learn
effective object representations for classification through end-to-end
training. In this paper, we propose the Graph Markov Neural Network (GMNN) that
combines the advantages of both worlds. A GMNN models the joint distribution of
object labels with a conditional random field, which can be effectively trained
with the variational EM algorithm. In the E-step, one graph neural network
learns effective object representations for approximating the posterior
distributions of object labels. In the M-step, another graph neural network is
used to model the local label dependency. Experiments on object classification,
link classification, and unsupervised node representation learning show that
GMNN achieves state-of-the-art results.
Comment: ICML 2019
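The E-step/M-step interplay described above can be made concrete with a short sketch. The following PyTorch code is a hedged illustration, not the authors' implementation; the SimpleGCN module, the plain KL term, and the pseudo-label handling are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Minimal two-layer GCN over a dense, row-normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        h = F.relu(adj @ self.lin1(x))   # propagate, then transform
        return adj @ self.lin2(h)        # per-node class logits

def variational_em_step(q_net, p_net, adj, feats, labels, train_mask, opt_q, opt_p):
    # E-step: train q(y|x), a GNN over node features, against the observed
    # labels and the label distribution implied by p on the other nodes.
    logits_q = q_net(adj, feats)
    pseudo = F.softmax(logits_q.detach(), dim=-1)          # q's current beliefs
    with torch.no_grad():
        target_p = F.softmax(p_net(adj, pseudo), dim=-1)   # p conditions on neighbor labels
    loss_q = F.cross_entropy(logits_q[train_mask], labels[train_mask]) + \
             F.kl_div(F.log_softmax(logits_q, dim=-1), target_p, reduction='batchmean')
    opt_q.zero_grad(); loss_q.backward(); opt_q.step()

    # M-step: train p(y_v | labels of v's neighbors) on labels / pseudo-labels from q.
    with torch.no_grad():
        pseudo = F.softmax(q_net(adj, feats), dim=-1)
        pseudo[train_mask] = F.one_hot(labels[train_mask], pseudo.size(1)).float()
    loss_p = F.cross_entropy(p_net(adj, pseudo)[train_mask], labels[train_mask])
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```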
Heterogeneous Graph Attention Network
Graph neural networks, as a powerful graph representation technique based on
deep learning, have shown superior performance and attracted considerable
research interest. However, heterogeneous graphs, which contain different types
of nodes and links, have not been fully considered in graph neural networks.
The heterogeneity and rich semantic information bring great challenges to
designing a graph neural network for heterogeneous graphs. Recently, one of
the most exciting advancements in deep learning is the attention mechanism,
whose great potential has been well demonstrated in various areas. In this
paper, we first propose a novel heterogeneous graph neural network based on
hierarchical attention, including node-level and semantic-level attention.
Specifically, the node-level attention learns the importance of a node's
meta-path-based neighbors, while the semantic-level attention learns the
importance of different meta-paths. With the importance learned at both
levels, the contributions of nodes and meta-paths can be fully taken into
account. The proposed model then generates node embeddings by aggregating
features from meta-path-based neighbors in a hierarchical manner. Extensive
experimental results on three real-world heterogeneous graphs not only show
the superior performance of our proposed model over state-of-the-art methods,
but also demonstrate its potentially good interpretability for graph analysis.
Comment: 10 pages
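As a rough illustration of the hierarchical attention described above, here is a hedged PyTorch sketch using dense per-meta-path adjacency matrices; the specific scoring functions and dimensions are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierAttnSketch(nn.Module):
    """Node-level attention within each meta-path-based neighborhood, followed
    by semantic-level attention across meta-paths (assumes every node has at
    least one neighbor under each meta-path)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.node_attn = nn.Linear(2 * hid_dim, 1)      # scores a (node, neighbor) pair
        self.sem_attn = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.Tanh(),
                                      nn.Linear(hid_dim, 1, bias=False))

    def forward(self, feats, metapath_adjs):
        # feats: [N, in_dim]; metapath_adjs: list of [N, N] 0/1 matrices, one per meta-path
        h = self.proj(feats)
        n = h.size(0)
        per_path = []
        for adj in metapath_adjs:
            pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                              h.unsqueeze(0).expand(n, n, -1)], dim=-1)
            e = F.leaky_relu(self.node_attn(pair)).squeeze(-1)   # [N, N] raw scores
            e = e.masked_fill(adj == 0, float('-inf'))           # keep meta-path neighbors only
            per_path.append(torch.softmax(e, dim=-1) @ h)        # node-level aggregation
        z = torch.stack(per_path, dim=1)                         # [N, P, hid]
        w = torch.softmax(self.sem_attn(z).mean(dim=0), dim=0)   # [P, 1] meta-path weights
        return (z * w.unsqueeze(0)).sum(dim=1)                   # fused node embeddings
```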
Multivariate Time Series Forecasting with Transfer Entropy Graph
Multivariate time series (MTS) forecasting is an essential problem in many
fields. Accurate forecasting results can effectively help decision-making. To
date, many MTS forecasting methods have been proposed and widely applied.
However, these methods assume that the predicted value of a single variable is
affected by all other variables, which ignores the causal relationship among
variables. To address this issue, we propose a novel end-to-end deep learning
model, termed Graph Neural Network with Neural Granger Causality (CauGNN). To
characterize the causal information among variables, we introduce a Neural
Granger Causality graph into our model. Each variable is regarded as a graph
node, and each edge represents the causal relationship between variables. In
addition, convolutional neural network (CNN) filters with different perception
scales are used for time-series feature extraction, generating the feature
vector of each node. Finally, a graph neural network (GNN) is adopted to tackle
the forecasting problem on the graph structure generated from the MTS. Three
real-world benchmark datasets are used to evaluate the proposed CauGNN.
Comprehensive experiments show that the proposed method achieves
state-of-the-art results on the MTS forecasting task.
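A hedged sketch of the pipeline as described: multi-scale 1-D convolutions produce one feature vector per variable, which is then propagated over a precomputed causality adjacency matrix. The kernel sizes, pooling, and single propagation layer are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CauGNNSketch(nn.Module):
    """Multi-scale CNN feature extraction per variable, followed by a simple
    graph propagation step over a causality adjacency matrix."""
    def __init__(self, window, kernel_sizes=(3, 5, 7), channels=8):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, channels, k, padding=k // 2) for k in kernel_sizes])
        feat_dim = channels * len(kernel_sizes)
        self.prop = nn.Linear(feat_dim, feat_dim)
        self.out = nn.Linear(feat_dim, 1)   # one-step-ahead forecast per variable

    def forward(self, series, causal_adj):
        # series: [n_vars, window]; causal_adj: [n_vars, n_vars] row-normalized matrix,
        # e.g. derived beforehand from a (neural) Granger-causality test.
        x = series.unsqueeze(1)                       # [n_vars, 1, window]
        feats = torch.cat([conv(x).max(dim=-1).values for conv in self.convs], dim=-1)
        h = F.relu(causal_adj @ self.prop(feats))     # propagate along causal edges
        return self.out(h).squeeze(-1)                # [n_vars] predictions
```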
Streaming Graph Neural Networks
Graphs are essential representations of many real-world data such as social
networks. Recent years have witnessed increasing efforts to extend neural
network models to graph-structured data. These methods, which are usually
known as graph neural networks, have been applied to advance many
graph-related tasks such as reasoning about the dynamics of physical systems,
graph classification, and node classification. Most existing graph neural
network models have been designed for static graphs, while many real-world
graphs are inherently dynamic. For example, social networks naturally evolve
as new users join and new relations are created. Current graph
neural network models cannot utilize the dynamic information in dynamic graphs.
However, the dynamic information has been proven to enhance the performance of
many graph analytic tasks such as community detection and link prediction.
Hence, it is necessary to design dedicated graph neural networks for dynamic
graphs. In this paper, we propose DGNN, a new Dynamic Graph Neural Network
model, which can model the dynamic information as the graph evolves. In
particular, the proposed framework can keep node information up to date by
coherently capturing the sequential information of edges (interactions), the
time intervals between edges, and information propagation. Experimental
results on various dynamic graphs demonstrate the effectiveness of the
proposed framework.
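One way to picture the kind of update described above is the following hedged sketch: when a new interaction (u, v, t) arrives, each endpoint's state is decayed according to the elapsed time and refreshed with a recurrent cell. The exponential decay and the single GRU cell are illustrative choices, not the paper's exact update rule.

```python
import torch
import torch.nn as nn

class StreamingUpdateSketch(nn.Module):
    """Per-event node-state update driven by edge arrivals and time intervals."""
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)
        self.decay = nn.Parameter(torch.tensor(0.1))

    def step(self, states, last_time, u, v, t, edge_feat):
        # states: [N, dim] node memories; last_time: [N] last interaction times;
        # (u, v, t) is the new edge; edge_feat: [dim] features of the interaction.
        for node, other in ((u, v), (v, u)):
            dt = t - last_time[node]
            decayed = states[node] * torch.exp(-self.decay * dt)   # forget with elapsed time
            msg = edge_feat + states[other]                        # information from the interaction
            states = states.clone()
            states[node] = self.cell(msg.unsqueeze(0), decayed.unsqueeze(0)).squeeze(0)
            last_time = last_time.clone()
            last_time[node] = t
        return states, last_time
```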
Fisher-Bures Adversary Graph Convolutional Networks
In a graph convolutional network, we assume that the graph is generated with
respect to some observation noise. During learning, we make small random
perturbations of the graph and try to improve generalization. Based on quantum
information geometry, the perturbation can be characterized by the
eigendecomposition of the graph Laplacian matrix. We try to minimize the loss
with respect to the perturbed graph while making the perturbation effective in
terms of the Fisher information of the neural network. Our proposed model can
consistently improve graph convolutional networks on semi-supervised node
classification tasks with reasonable computational overhead. We present three
different geometries on the manifold of graphs: the intrinsic geometry measures
the information theoretic dynamics of a graph; the extrinsic geometry
characterizes how such dynamics can externally affect a graph neural network;
and the embedding geometry is for measuring node embeddings. These new
analytical tools are useful for developing a good understanding of graph
neural networks and for fostering new techniques.
Comment: Published in UAI 2019
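To illustrate the spirit of spectral graph perturbation mentioned above, here is a hedged sketch that perturbs the Laplacian eigenvalues and reassembles the matrix; the noise model and the clamping are assumptions, not the paper's parameterization.

```python
import torch

def perturb_graph_laplacian(adj, eps=0.01):
    """Perturb a graph in the spectral domain of its Laplacian: apply a small
    random shift to the eigenvalues and rebuild the matrix from the same
    eigenvectors."""
    deg = adj.sum(dim=-1)
    lap = torch.diag(deg) - adj                       # combinatorial Laplacian
    evals, evecs = torch.linalg.eigh(lap)             # symmetric eigendecomposition
    noise = eps * torch.randn_like(evals)
    evals_pert = torch.clamp(evals + noise, min=0.0)  # keep the spectrum non-negative
    return evecs @ torch.diag(evals_pert) @ evecs.T   # perturbed Laplacian
```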
Machine Learning on Graphs: A Model and Comprehensive Taxonomy
There has been a surge of recent interest in learning representations for
graph-structured data. Graph representation learning methods have generally
fallen into three main categories, based on the availability of labeled data.
The first, network embedding (such as shallow graph embedding or graph
auto-encoders), focuses on learning unsupervised representations of relational
structure. The second, graph regularized neural networks, leverages graphs to
augment neural network losses with a regularization objective for
semi-supervised learning. The third, graph neural networks, aims to learn
differentiable functions over discrete topologies with arbitrary structure.
However, despite the popularity of these areas, there has been surprisingly
little work on unifying the three paradigms. Here, we aim to bridge the gap
between graph neural networks, network embedding and graph regularization
models. We propose a comprehensive taxonomy of representation learning methods
for graph-structured data, aiming to unify several disparate bodies of work.
Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which
generalizes popular algorithms for semi-supervised learning on graphs (e.g.
GraphSAGE, Graph Convolutional Networks, Graph Attention Networks), and
unsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc)
into a single consistent approach. To illustrate the generality of this
approach, we fit over thirty existing methods into this framework. We believe
that this unifying view both provides a solid foundation for understanding the
intuition behind these methods and enables future research in the area.
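The encoder-decoder view can be illustrated with a small hedged sketch; the particular encoder and decoder classes below are placeholders showing how shallow embeddings and GNN encoders fit the same interface, not the paper's formal definitions.

```python
import torch
import torch.nn as nn

class GraphEncoderDecoderSketch(nn.Module):
    """An encoder maps (graph, features) to node embeddings Z; a decoder maps Z
    to whatever a given method supervises (edge scores, node labels, ...)."""
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder   # e.g. a lookup table (shallow embedding) or a GNN
        self.decoder = decoder   # e.g. an inner-product edge decoder or a classifier

    def forward(self, adj, feats):
        z = self.encoder(adj, feats)
        return z, self.decoder(z)

class ShallowEncoder(nn.Module):               # DeepWalk/node2vec-style: ignores features
    def __init__(self, n_nodes, dim):
        super().__init__()
        self.emb = nn.Embedding(n_nodes, dim)
    def forward(self, adj, feats):
        return self.emb.weight

class InnerProductDecoder(nn.Module):          # reconstructs edge scores from embeddings
    def forward(self, z):
        return torch.sigmoid(z @ z.T)
```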
SimGNN: A Neural Network Approach to Fast Graph Similarity Computation
Graph similarity search is among the most important graph-based applications,
e.g. finding the chemical compounds that are most similar to a query compound.
Graph similarity computation, such as Graph Edit Distance (GED) and Maximum
Common Subgraph (MCS), is the core operation of graph similarity search and
many other applications, but is very costly to compute in practice. Inspired by
the recent success of neural network approaches to several graph applications,
such as node or graph classification, we propose a novel neural network based
approach to address this classic yet challenging graph problem, aiming to
alleviate the computational burden while preserving good performance.
The proposed approach, called SimGNN, combines two strategies. First, we
design a learnable embedding function that maps every graph into a vector,
which provides a global summary of a graph. A novel attention mechanism is
proposed to emphasize the important nodes with respect to a specific similarity
metric. Second, we design a pairwise node comparison method to supplement the
graph-level embeddings with fine-grained node-level information. Our model
achieves better generalization on unseen graphs, and in the worst case runs in
quadratic time with respect to the number of nodes in two graphs. Taking GED
computation as an example, experimental results on three real graph datasets
demonstrate the effectiveness and efficiency of our approach. Specifically, our
model achieves a smaller error rate and a substantial reduction in running time
compared with a series of baselines, including several approximation algorithms
for GED computation and many existing graph neural network based models. To the
best of our knowledge, we are among the first to adopt neural networks to
explicitly model the similarity between two graphs, providing a new direction
for future research on graph similarity computation and graph similarity search.
Comment: WSDM 2019
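A hedged sketch of how the two strategies could be combined: an attention-weighted graph-level embedding per graph plus a coarse pairwise node-comparison feature, realized here as a similarity histogram. The dimensions, the scoring MLP, and the histogram binning are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SimGNNSketch(nn.Module):
    """Graph-level attention embedding plus a pairwise node-comparison signal,
    fed into a small MLP that predicts a similarity score."""
    def __init__(self, node_dim, bins=8):
        super().__init__()
        self.attn = nn.Linear(node_dim, node_dim)
        self.score = nn.Sequential(nn.Linear(2 * node_dim + bins, node_dim),
                                   nn.ReLU(), nn.Linear(node_dim, 1))
        self.bins = bins

    def graph_embed(self, h):
        # attention over nodes relative to a global context vector
        ctx = torch.tanh(self.attn(h.mean(dim=0, keepdim=True)))   # [1, d]
        a = torch.sigmoid(h @ ctx.T)                               # [n, 1] node weights
        return (a * h).sum(dim=0)                                  # [d]

    def forward(self, h1, h2):
        # h1: [n1, d], h2: [n2, d] node embeddings from any upstream GNN
        g1, g2 = self.graph_embed(h1), self.graph_embed(h2)
        sims = torch.sigmoid(h1 @ h2.T).flatten()                  # pairwise node similarities
        hist = torch.histc(sims, bins=self.bins, min=0.0, max=1.0) # note: non-differentiable feature
        hist = hist / hist.sum()
        return self.score(torch.cat([g1, g2, hist], dim=-1))       # predicted similarity
```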
DGCNN: Disordered Graph Convolutional Neural Network Based on the Gaussian Mixture Model
Convolutional neural networks (CNNs) can be applied to graph similarity
matching, in which case they are called graph CNNs. Graph CNNs are attracting
increasing attention due to their effectiveness and efficiency. However, the
existing convolution approaches focus only on regular data forms and require
the transfer of the graph or key node neighborhoods of the graph into the same
fixed form. During this transfer process, structural information of the graph
can be lost, and some redundant information can be incorporated. To overcome
this problem, we propose the disordered graph convolutional neural network
(DGCNN) based on the Gaussian mixture model, which extends the CNN by adding a
preprocessing layer called the disordered graph convolutional layer (DGCL). The
DGCL uses a Gaussian mixture function to realize the mapping between the
convolution kernel and the nodes in the neighborhood of the graph. The output
of the DGCL is the input of the CNN. We further implement a
backpropagation-based optimization process for the convolutional layer, by
which the feature-learning model of the irregular node-neighborhood structure
is incorporated into the network. Thereafter, the optimization of the convolution
kernel becomes part of the neural network learning process. The DGCNN can
accept arbitrarily scaled and disordered neighborhood graph structures as the
receptive fields of CNNs, which reduces information loss during graph
transformation. Finally, we perform experiments on multiple standard graph
datasets. The results show that the proposed method outperforms the
state-of-the-art methods in graph classification and retrieval.
Comment: 16 pages, 8 figures
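The mapping between kernel positions and an unordered, variable-size neighborhood can be sketched as a Gaussian-mixture soft assignment, as below; using the neighbor feature vectors themselves as pseudo-coordinates is an assumption made for illustration, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class DisorderedConvSketch(nn.Module):
    """Soft-assign the unordered neighbors of a node to a fixed number of
    kernel positions via a Gaussian mixture, producing a fixed-size tensor
    that a standard CNN can consume."""
    def __init__(self, feat_dim, n_kernel_pos):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_kernel_pos, feat_dim))
        self.log_sigma = nn.Parameter(torch.zeros(n_kernel_pos, feat_dim))

    def forward(self, neigh_feats):
        # neigh_feats: [n_neighbors, feat_dim] for one node's neighborhood
        diff = neigh_feats.unsqueeze(1) - self.mu.unsqueeze(0)     # [n, K, d]
        var = torch.exp(self.log_sigma).unsqueeze(0) ** 2
        logw = -0.5 * (diff ** 2 / var).sum(dim=-1)                # Gaussian log-densities
        w = torch.softmax(logw, dim=1)                             # soft assignment to positions
        return w.T @ neigh_feats                                   # [K, d] fixed-size output
```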
edGNN: a Simple and Powerful GNN for Directed Labeled Graphs
The ability of a graph neural network (GNN) to leverage both the graph
topology and graph labels is fundamental to building discriminative node and
graph embeddings. Building on previous work, we theoretically show that edGNN,
our model for directed labeled graphs, is as powerful as the Weisfeiler-Lehman
algorithm for graph isomorphism. Our experiments support our theoretical
findings, confirming that graph neural networks can be used effectively for
inference problems on directed graphs with both node and edge labels. Code
available at https://github.com/guillaumejaume/edGNN.
Comment: Representation Learning on Graphs and Manifolds @ ICLR 2019
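As a hedged illustration of message passing on a directed graph with node and edge labels (not the paper's exact update rule), one can aggregate incoming and outgoing edges separately and embed the edge labels into the messages:

```python
import torch
import torch.nn as nn

class EdgeLabelledAggSketch(nn.Module):
    """Directed aggregation with edge-label embeddings: incoming and outgoing
    neighborhoods are pooled separately and combined with the node's own state."""
    def __init__(self, dim, n_edge_labels):
        super().__init__()
        self.edge_emb = nn.Embedding(n_edge_labels, dim)
        self.update = nn.Linear(3 * dim, dim)   # [self, in-aggregate, out-aggregate]

    def forward(self, h, edges, edge_labels):
        # h: [N, dim]; edges: [E, 2] (src, dst) index pairs; edge_labels: [E]
        msg = h[edges[:, 0]] + self.edge_emb(edge_labels)            # message along each edge
        agg_in = torch.zeros_like(h).index_add_(0, edges[:, 1], msg)
        msg_rev = h[edges[:, 1]] + self.edge_emb(edge_labels)
        agg_out = torch.zeros_like(h).index_add_(0, edges[:, 0], msg_rev)
        return torch.relu(self.update(torch.cat([h, agg_in, agg_out], dim=-1)))
```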
Typed Graph Networks
Recently, the deep learning community has given growing attention to neural
architectures engineered to learn problems in relational domains. Convolutional
Neural Networks employ parameter sharing over the image domain, tying the
weights of neural connections on a grid topology and thus enforcing the
learning of a number of convolutional kernels. By instantiating trainable
neural modules and assembling them in varied configurations (apart from grids),
one can enforce parameter sharing over graphs, yielding models which can
effectively be fed with relational data. In this context, vertices in a graph
can be projected into a hyperdimensional real space and iteratively refined
over many message-passing iterations in an end-to-end differentiable
architecture. Architectures of this family have been referred to by several
names in the literature, such as Graph Neural Networks, Message-passing
Neural Networks, Relational Networks and Graph Networks. In this paper, we
revisit the original Graph Neural Network model and show that it generalises
many of the recent models, which in turn benefit from the insight of thinking
about vertex types. To illustrate the generality of the original
model, we present a Graph Neural Network formalisation, which partitions the
vertices of a graph into a number of types. Each type represents an entity in
the ontology of the problem one wants to learn. This allows one, for instance,
to assign embeddings to edges, hyperedges, and any number of global
attributes of the graph. As a companion to this paper we provide a
Python/TensorFlow library to facilitate the development of such architectures,
with which we instantiate the formalisation to reproduce a number of models
proposed in the current literature.
Comment: Under submission
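Although the paper's companion library is Python/TensorFlow, the typed-vertex idea can be sketched in a few lines of PyTorch for consistency with the other sketches above; the per-type GRU update and the single message function per ordered type pair are assumptions, not the paper's formalisation.

```python
import torch
import torch.nn as nn

class TypedGraphNetSketch(nn.Module):
    """Vertices are partitioned into types (e.g. 'node', 'edge', 'global');
    each type owns its own update module, and messages flow along typed
    adjacency matrices over several message-passing steps."""
    def __init__(self, dims):
        super().__init__()
        # dims: {type_name: embedding_dim}
        self.updates = nn.ModuleDict({t: nn.GRUCell(d, d) for t, d in dims.items()})
        self.msg = nn.ModuleDict({f"{s}->{t}": nn.Linear(dims[s], dims[t])
                                  for s in dims for t in dims if s != t})

    def forward(self, states, adjs, steps=3):
        # states: {type: [n_type, dim]}; adjs: {(src_type, dst_type): [n_dst, n_src]} 0/1 matrices
        for _ in range(steps):
            new_states = {}
            for t, h in states.items():
                incoming = torch.zeros_like(h)
                for (s, d), adj in adjs.items():
                    if d == t:
                        incoming = incoming + adj @ self.msg[f"{s}->{t}"](states[s])
                new_states[t] = self.updates[t](incoming, h)   # per-type recurrent update
            states = new_states
        return states
```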