CPDG: A Contrastive Pre-Training Method for Dynamic Graph Neural Networks
Dynamic graph data mining has gained popularity in recent years due to the
rich information contained in dynamic graphs and their widespread use in the
real world. Despite the advances in dynamic graph neural networks (DGNNs), the
rich information and diverse downstream tasks have posed significant
difficulties for the practical application of DGNNs in industrial scenarios. To
this end, in this paper, we propose to address them by pre-training and present
the Contrastive Pre-Training Method for Dynamic Graph Neural Networks (CPDG).
CPDG tackles the challenges of pre-training for DGNNs, including generalization
and long- and short-term modeling capability, through a flexible structural-temporal
subgraph sampler along with structural-temporal contrastive pre-training
schemes. Extensive experiments conducted on both large-scale research and
industrial dynamic graph datasets show that CPDG outperforms existing methods
in dynamic graph pre-training for various downstream tasks under three transfer
settings.
Comment: 12 pages, 6 figures
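The long- and short-term views produced by a structural-temporal subgraph sampler can be illustrated on a plain timestamped edge list: a narrow time window captures recent interactions, a wide one captures persistent structure. The function and parameter names below are hypothetical illustrations, not CPDG's actual API:

```python
def temporal_neighbors(edges, node, t_query, k, window):
    """Return up to k neighbors of `node` from edges (u, v, t) that
    occurred within `window` before t_query, most recent first."""
    cand = [(t, v) for u, v, t in edges
            if u == node and t_query - window <= t < t_query]
    cand.sort(reverse=True)  # most recent interactions first
    return [v for _, v in cand[:k]]

edges = [(0, 1, 1.0), (0, 2, 2.0), (0, 3, 5.0), (0, 4, 9.0)]
# Short-term view: a small window keeps only recent interactions.
short_view = temporal_neighbors(edges, node=0, t_query=10.0, k=2, window=3.0)
# Long-term view: a wide window also keeps older, persistent neighbors.
long_view = temporal_neighbors(edges, node=0, t_query=10.0, k=3, window=10.0)
```

Contrasting embeddings of such short- and long-range views is one way to instill both modeling capabilities during pre-training.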
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
Graph representation learning has emerged as a powerful technique for
addressing real-world problems. Various downstream graph learning tasks have
benefited from its recent developments, such as node classification, similarity
search, and graph classification. However, prior arts on graph representation
learning focus on domain specific problems and train a dedicated model for each
graph dataset, which is usually non-transferable to out-of-domain data.
Inspired by the recent advances in pre-training from natural language
processing and computer vision, we design Graph Contrastive Coding (GCC) -- a
self-supervised graph neural network pre-training framework -- to capture the
universal network topological properties across multiple networks. We design
GCC's pre-training task as subgraph instance discrimination in and across
networks and leverage contrastive learning to empower graph neural networks to
learn the intrinsic and transferable structural representations. We conduct
extensive experiments on three graph learning tasks and ten graph datasets. The
results show that GCC pre-trained on a collection of diverse datasets can
achieve competitive or better performance to its task-specific and
trained-from-scratch counterparts. This suggests that the pre-training and
fine-tuning paradigm presents great potential for graph representation
learning.
Comment: 11 pages, 5 figures, to appear in KDD 2020 proceedings
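Subgraph instance discrimination of this kind is typically trained with an InfoNCE-style contrastive loss: the embedding of one view of a subgraph should score higher against another view of the same subgraph than against other subgraphs. A minimal stdlib sketch (the dot-product similarity and temperature value are illustrative assumptions, not GCC's exact configuration):

```python
import math

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE loss for one query embedding: -log softmax score of the
    positive (another view of the same subgraph) against negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(query, positive) / tau] + [dot(query, n) / tau
                                             for n in negatives]
    m = max(logits)  # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]
```

When the query matches its positive and not the negatives the loss is near zero; when it matches a negative instead, the loss is large, pushing the encoder toward transferable structural representations.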
Graph Neural Networks and Reinforcement Learning for Behavior Generation in Semantic Environments
Most reinforcement learning approaches used in behavior generation utilize
vectorial information as input. However, this requires the network to have a
pre-defined input-size -- in semantic environments this means assuming the
maximum number of vehicles. Additionally, this vectorial representation is not
invariant to the order and number of vehicles. To mitigate the above-stated
disadvantages, we propose combining graph neural networks with actor-critic
reinforcement learning. As graph neural networks apply the same network to
every vehicle and aggregate incoming edge information, they are invariant to
the number and order of vehicles. This makes them ideal candidates to be used
as networks in semantic environments -- environments consisting of object
lists. Graph neural networks exhibit some other advantages that make them
favorable to be used in semantic environments. The relational information is
explicitly given and does not have to be inferred. Moreover, graph neural
networks propagate information through the network and can gather higher-degree
information. We demonstrate our approach using a highway lane-change scenario
and compare the performance of graph neural networks to conventional ones. We
show that graph neural networks are capable of handling scenarios with a
varying number and order of vehicles during training and application.
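The invariance described above comes from applying one shared encoder to every vehicle and pooling the results with a symmetric aggregator, so the pooled state has a fixed size regardless of how many vehicles appear or how they are ordered. A minimal sketch, where the fixed linear encoder and mean pooling are illustrative stand-ins for the paper's learned network:

```python
def encode_vehicle(v):
    # Shared per-vehicle encoder: the same map is applied to every
    # vehicle (here a fixed linear map standing in for a learned one).
    return [2.0 * v[0] + v[1], v[0] - v[1]]

def aggregate_state(vehicles):
    """Mean-aggregate per-vehicle encodings: the result does not depend
    on how many vehicles there are or how they are ordered."""
    encs = [encode_vehicle(v) for v in vehicles]
    n = len(encs)
    return [sum(e[i] for e in encs) / n for i in range(len(encs[0]))]

a = aggregate_state([(1.0, 0.0), (0.0, 1.0)])
b = aggregate_state([(0.0, 1.0), (1.0, 0.0)])  # same set, new order
```

A vectorial input, by contrast, would concatenate the vehicles in a fixed order into a fixed-length vector, which is exactly what breaks when vehicle count or ordering changes.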
Making Neural Networks FAIR
Research on neural networks has gained significant momentum over the past few
years. Because training is a resource-intensive process and training data
cannot always be made available to everyone, there has been a trend to reuse
pre-trained neural networks. As such, neural networks themselves have become
research data. In this paper, we first present the neural network ontology
FAIRnets Ontology, an ontology to make existing neural network models findable,
accessible, interoperable, and reusable according to the FAIR principles. Our
ontology allows us to model neural networks on a meta-level in a structured
way, including the representation of all network layers and their
characteristics. Secondly, we have modeled over 18,400 neural networks from
GitHub based on this ontology, which we provide to the public as a knowledge
graph called FAIRnets, ready to be used for recommending suitable neural
networks to data scientists.
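Modeling a network on a meta-level, layer by layer, can be pictured as emitting subject-predicate-object triples, the building blocks of such a knowledge graph. The vocabulary below is purely illustrative and not the actual FAIRnets Ontology terms:

```python
def network_to_triples(name, layers):
    """Describe a network's layers as (subject, predicate, object)
    triples; predicate names here are illustrative placeholders."""
    triples = [(name, "rdf:type", "nn:NeuralNetwork")]
    for i, (kind, units) in enumerate(layers):
        layer_id = f"{name}/layer{i}"
        triples += [(name, "nn:hasLayer", layer_id),
                    (layer_id, "nn:layerType", kind),
                    (layer_id, "nn:units", str(units))]
    return triples

triples = network_to_triples("mnist_mlp", [("Dense", 128), ("Dense", 10)])
```

Once networks are described this uniformly, queries such as "find all reusable networks whose final layer has 10 units" become simple graph lookups.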
Pore-GNN: A graph neural network-based framework for predicting flow properties of porous media from micro-CT images
This paper presents a hybrid deep learning framework that combines graph neural networks with convolutional neural networks to predict porous media properties. The approach capitalizes on pre-trained convolutional neural networks to extract n-dimensional feature vectors from processed three-dimensional micro-computed-tomography (micro-CT) images of porous media obtained from seven different sandstone rock samples. Two strategies for embedding the computed feature vectors into graphs were then explored: extracting a single feature vector per sample (image) and treating each sample as a node in the training graph, or representing each sample as a graph by extracting a fixed number of feature vectors, which form the nodes of each training graph. Various types of graph convolutional layers were examined to evaluate the capabilities and limitations of spectral and spatial approaches. The dataset was split 70/20/10 into training, validation, and testing sets, and the models were trained to predict the absolute permeability of porous media. Notably, the proposed architectures reduce the objective loss function to values below 35 mD, with improvements in the coefficient of determination reaching 9%. Moreover, the generalizability of the networks was evaluated by testing them on unseen sandstone and carbonate rock samples not encountered during training. Finally, a sensitivity analysis investigates the influence of various hyperparameters on model performance. The findings highlight the potential of graph neural networks as promising deep-learning-based alternatives for characterizing porous media properties; the proposed architectures predict permeability more than 500 times faster than numerical solvers.
Document Type: Original article
Cited as: Alzahrani, M. K., Shapoval, A., Chen, Z., Rahman, S. S. Pore-GNN: A graph neural network-based framework for predicting flow properties of porous media from micro-CT images. Advances in Geo-Energy Research, 2023, 10(1): 39-55. https://doi.org/10.46690/ager.2023.10.0
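A 70/20/10 shuffle-and-split like the one described can be sketched as follows; the function name and fixed seed are assumptions for illustration, not the paper's actual procedure:

```python
import random

def split_dataset(samples, seed=0):
    """Shuffle and split samples 70/20/10 into train/val/test."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n = len(samples)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
```

Shuffling before splitting matters here because samples from the same rock specimen are likely adjacent in the raw dataset; note that a per-specimen split would be the stricter test of generalization, as the paper's unseen-rock evaluation suggests.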