Gradient Derivation for Learnable Parameters in Graph Attention Networks
This work provides a comprehensive derivation of the parameter gradients for
GATv2 [4], a widely used implementation of Graph Attention Networks (GATs).
GATs have proven to be powerful frameworks for processing graph-structured data
and, hence, have been used in a range of applications. However, the
performance achieved by these models has been found to be inconsistent across
datasets, and the reasons for this remain an open research question.
As the gradient flow provides valuable insights into the training dynamics of
statistical learning models, this work obtains the gradients for the
trainable model parameters of GATv2. The gradient derivations supplement the
efforts of [2], where potential pitfalls of GATv2 are investigated.
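As a concrete illustration of where these gradients live, the PyTorch sketch below builds GATv2's scoring function e(h_i, h_j) = a^T LeakyReLU(W [h_i || h_j]) on a toy graph and lets autograd expose dL/dW and dL/da. The graph, shapes, and loss are illustrative assumptions, not the paper's closed-form derivation.

```python
# Minimal sketch (assumed shapes and toy loss): GATv2's trainable parameters
# are the shared weight matrix W and the attention vector a; autograd here
# stands in for the paper's analytical gradient derivation.
import torch

torch.manual_seed(0)
n, d_in, d_out = 4, 3, 2                    # toy graph with 4 nodes
h = torch.randn(n, d_in)                    # node features
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.bool)

W = torch.randn(2 * d_in, d_out, requires_grad=True)   # shared linear map
a = torch.randn(d_out, requires_grad=True)             # attention vector

# GATv2 scoring: e_ij = a^T LeakyReLU(W [h_i || h_j])
pairs = torch.cat([h.unsqueeze(1).expand(n, n, d_in),
                   h.unsqueeze(0).expand(n, n, d_in)], dim=-1)
e = torch.nn.functional.leaky_relu(pairs @ W) @ a
alpha = torch.softmax(e.masked_fill(~adj, float('-inf')), dim=1)

out = alpha @ (h @ W[d_in:])     # aggregate (right half of W, illustrative)
out.sum().backward()             # toy loss
print(W.grad.shape, a.grad.shape)  # dL/dW, dL/da
```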
MQENet: A Mesh Quality Evaluation Neural Network Based on Dynamic Graph Attention
With the development of computational fluid dynamics, the requirements for
fluid simulation accuracy in industrial applications have increased.
The quality of the generated mesh directly affects the simulation accuracy.
However, previous mesh quality metrics and models cannot evaluate meshes
comprehensively and objectively. To this end, we propose MQENet, a structured
mesh quality evaluation neural network based on dynamic graph attention. MQENet
treats the mesh evaluation task as a graph classification task for classifying
the quality of the input structured mesh. To make graphs generated from
structured meshes more informative, MQENet introduces two novel structured mesh
preprocessing algorithms. These two algorithms can also improve the conversion
efficiency of structured mesh data. Experimental results on the benchmark
structured mesh dataset NACA-Market show the effectiveness of MQENet in the
mesh quality evaluation task.
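The abstract does not spell out the two preprocessing algorithms, but the basic conversion they operate on can be sketched: each cell of a structured mesh becomes a graph node carrying per-cell quality features, and adjacent cells are linked. Everything below (the feature choice, the 2D layout, and the PyG-style edge_index) is an assumption for illustration, not MQENet's actual pipeline.

```python
# Hedged sketch: turning a structured (ni x nj) mesh into a graph whose
# nodes are cells and whose edges connect adjacent cells, ready for a
# graph-classification model. Feature semantics are assumed.
import numpy as np

def structured_mesh_to_graph(ni: int, nj: int, cell_features: np.ndarray):
    """cell_features: (ni*nj, d) per-cell quality features
    (e.g. aspect ratio, skewness). Returns (edge_index, features)."""
    idx = np.arange(ni * nj).reshape(ni, nj)
    edges = []
    for i in range(ni):
        for j in range(nj):
            if i + 1 < ni:                       # neighbor in i-direction
                edges.append((idx[i, j], idx[i + 1, j]))
            if j + 1 < nj:                       # neighbor in j-direction
                edges.append((idx[i, j], idx[i, j + 1]))
    edge_index = np.array(edges + [(b, a) for a, b in edges]).T  # undirected
    return edge_index, cell_features

edge_index, x = structured_mesh_to_graph(4, 3, np.random.rand(12, 5))
print(edge_index.shape)   # (2, num_directed_edges), PyG-style layout
```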
Path Integral Based Convolution and Pooling for Graph Neural Networks
Graph neural networks (GNNs) extend the functionality of traditional neural
networks to graph-structured data. Similar to CNNs, an optimized design of
graph convolution and pooling is key to success. Borrowing ideas from physics,
we propose a path integral based graph neural network (PAN) for classification
and regression tasks on graphs. Specifically, we consider a convolution
operation that involves every path linking the message sender and receiver with
learnable weights depending on the path length, which corresponds to the
maximal entropy random walk. It generalizes the graph Laplacian to a new
transition matrix we call the maximal entropy transition (MET) matrix, derived from
a path integral formalism. Importantly, the diagonal entries of the MET matrix
are directly related to the subgraph centrality, thus providing a natural and
adaptive pooling mechanism. PAN provides a versatile framework that can be
tailored for different graph data with varying sizes and structures. We can
view most existing GNN architectures as special cases of PAN. Experimental
results show that PAN achieves state-of-the-art performance on various graph
classification/regression tasks, including a new benchmark dataset from
statistical mechanics that we propose to boost applications of GNNs in physical
sciences.
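A minimal NumPy sketch of a MET-style transition matrix as the abstract describes it: adjacency powers A^n, one per path length n, are combined with length-dependent weights and normalized into a transition matrix whose diagonal tracks subgraph centrality. The exponential weight schedule, temperature T, and cutoff L are assumptions; in PAN the weighting is learnable.

```python
# Hedged sketch: a MET-like matrix built from weighted adjacency powers.
# Weights, cutoff, and normalization scheme are illustrative assumptions.
import numpy as np

def met_matrix(A: np.ndarray, L: int = 3, T: float = 1.0) -> np.ndarray:
    """A: (n, n) adjacency matrix. Returns a row-stochastic MET-like matrix."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    A_pow = np.eye(n)
    for length in range(L + 1):
        M += np.exp(-length / T) * A_pow     # weight decays with path length
        A_pow = A_pow @ A                    # next power: paths one hop longer
    return M / M.sum(axis=1, keepdims=True)  # normalize rows

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
M = met_matrix(A)
print(np.diag(M))   # diagonal entries relate to subgraph centrality (pooling scores)
```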
Automated Data Augmentations for Graph Classification
Data augmentations are effective in improving the invariance of learning
machines. We argue that the core challenge of data augmentations lies in
designing data transformations that preserve labels. This is relatively
straightforward for images, but much more challenging for graphs. In this
work, we propose GraphAug, a novel automated data augmentation method aiming
at computing label-invariant augmentations for graph classification. Instead
of using uniform transformations as in existing studies, GraphAug uses an
automated augmentation model to avoid compromising critical label-related
information of the graph, thereby producing label-invariant augmentations in
most cases. To ensure label invariance, we develop a training method based on
reinforcement learning to maximize an estimated label-invariance probability.
Comprehensive experiments show that GraphAug outperforms previous graph
augmentation methods on various graph classification tasks.
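The reinforcement-learning idea can be sketched in a few lines: a policy over candidate transformations is rewarded by an estimated label-invariance probability and updated with REINFORCE. The transformation list, the hard-coded estimator, and the hyperparameters below are toy stand-ins, not GraphAug's actual components.

```python
# Hedged sketch of the RL training loop: reward = estimated probability that
# the augmentation preserved the label; policy updated with REINFORCE.
import numpy as np

rng = np.random.default_rng(0)
TRANSFORMS = ["node_drop", "edge_perturb", "feature_mask"]
logits = np.zeros(len(TRANSFORMS))           # policy parameters
lr = 0.1

def label_invariance_prob(transform: str) -> float:
    # Stand-in for GraphAug's learned estimator; feature masking is assumed
    # safest here purely for illustration.
    return {"node_drop": 0.6, "edge_perturb": 0.5, "feature_mask": 0.9}[transform]

for step in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()     # softmax policy
    k = rng.choice(len(TRANSFORMS), p=probs)          # sample a transformation
    reward = label_invariance_prob(TRANSFORMS[k])     # estimated invariance
    grad_logp = -probs                                # d log pi(k) / d logits
    grad_logp[k] += 1.0
    logits += lr * reward * grad_logp                 # REINFORCE update

print(TRANSFORMS[int(np.argmax(logits))])  # policy concentrates on the safest transform
```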
Adaptive-Step Graph Meta-Learner for Few-Shot Graph Classification
Graph classification aims to extract accurate information from
graph-structured data for classification and is becoming increasingly
important in the graph learning community. Although Graph Neural Networks (GNNs)
have been successfully applied to graph classification tasks, most of them
overlook the scarcity of labeled graph data in many applications. For example,
in bioinformatics, obtaining protein graph labels usually needs laborious
experiments. Recently, few-shot learning has been explored to alleviate this
problem, given only a few labeled graph samples of the test classes. The shared
sub-structures between training classes and test classes are essential in
few-shot graph classification. Existing methods assume that the test classes
belong to the same set of super-classes clustered from the training classes.
However, according to our observations, the label spaces of training classes
and test classes usually do not overlap in real-world scenarios. As a result,
existing methods do not capture the local structures of unseen test classes
well. To overcome this limitation, we propose a direct method to capture the
sub-structures with a well-initialized meta-learner within a few
adaptation steps. More specifically, (1) we propose a novel framework
consisting of a graph meta-learner, which uses GNN-based modules for fast
adaptation on graph data, and a step controller for the robustness and
generalization of meta-learner; (2) we provide quantitative analysis for the
framework and give a graph-dependent upper bound of the generalization error
based on our framework; (3) the extensive experiments on real-world datasets
demonstrate that our framework achieves state-of-the-art results on several
few-shot graph classification tasks compared to baselines.
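A hedged sketch of the adaptive-step idea: starting from meta-initialized parameters, the learner takes a few gradient steps on the support set while a step controller decides when to stop. The paper's controller is learned; a simple loss-plateau rule stands in below, and the linear model and data are toy assumptions.

```python
# Hedged sketch: MAML-style inner-loop adaptation with a stand-in step
# controller (stop when the support loss plateaus).
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)                      # meta-initialized parameters
X, y = rng.normal(size=(5, 3)), rng.normal(size=5)   # few-shot support set

def loss_and_grad(w):
    err = X @ w - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

prev_loss, lr, max_steps = np.inf, 0.1, 20
for step in range(max_steps):
    loss, grad = loss_and_grad(w)
    if prev_loss - loss < 1e-3:             # step controller: stop on plateau
        break
    w -= lr * grad                          # one adaptation step
    prev_loss = loss

print(f"stopped after {step} steps, support loss {loss:.4f}")
```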
Haar Graph Pooling
Deep Graph Neural Networks (GNNs) are useful models for graph classification
and graph-based regression tasks. In these tasks, graph pooling is a critical
ingredient by which GNNs adapt to input graphs of varying size and structure.
We propose a new graph pooling operation based on compressive Haar transforms
-- HaarPooling. HaarPooling implements a cascade of pooling operations; it is
computed by following a sequence of clusterings of the input graph. A
HaarPooling layer transforms a given input graph to an output graph with a
smaller node number and the same feature dimension; the compressive Haar
transform filters out fine detail information in the Haar wavelet domain. In
this way, all the HaarPooling layers together synthesize the features of any
given input graph into a feature vector of uniform size. Such transforms
provide a sparse characterization of the data and preserve the structure
information of the input graph. GNNs implemented with standard graph
convolution layers and HaarPooling layers achieve state-of-the-art performance
on diverse graph classification and regression problems.
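To make the compressive step concrete: at the coarsest level, keeping only the Haar scaling (low-frequency) coefficients and discarding the detail coefficients amounts to a normalized per-cluster aggregation, which shrinks the node count while preserving the feature dimension. The clustering below is an assumed input; HaarPooling derives it from a coarsening sequence of the input graph.

```python
# Hedged sketch of one HaarPooling layer's effect: project node features onto
# normalized cluster indicators (the coarse Haar basis) and drop fine detail.
import numpy as np

def haar_pool(X: np.ndarray, clusters: np.ndarray) -> np.ndarray:
    """X: (n, d) node features; clusters: (n,) cluster id per node.
    Returns (num_clusters, d) pooled features; feature dim unchanged."""
    ids = np.unique(clusters)
    C = np.zeros((X.shape[0], len(ids)))
    for col, c in enumerate(ids):
        members = clusters == c
        C[members, col] = 1.0 / np.sqrt(members.sum())   # normalized indicator
    return C.T @ X          # coarse Haar coefficients only (detail discarded)

X = np.arange(12, dtype=float).reshape(6, 2)
pooled = haar_pool(X, np.array([0, 0, 1, 1, 1, 2]))
print(pooled.shape)         # (3, 2): fewer nodes, same feature dimension
```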