Graph Neural Distance Metric Learning with Graph-Bert
Graph distance metric learning serves as the foundation for many graph
learning problems, e.g., graph clustering, graph classification, and graph
matching. Existing work on learning graph distance metrics (or graph kernels)
fails to maintain the basic properties of such metrics, i.e.,
non-negativity, identity of indiscernibles, symmetry, and the triangle
inequality. In this paper, we introduce a new graph neural network based
distance metric learning approach, namely GB-DISTANCE (GRAPH-BERT based
Neural Distance). Based solely on the attention mechanism, GB-DISTANCE can
learn graph instance representations effectively with a pre-trained
GRAPH-BERT model. Unlike the existing supervised/unsupervised metrics,
GB-DISTANCE can be learned effectively in a semi-supervised manner. In
addition, GB-DISTANCE also maintains the basic metric properties
mentioned above. Extensive experiments have been done on several benchmark
graph datasets, and the results demonstrate that GB-DISTANCE outperforms
the existing baseline methods, especially the recent graph neural network
based graph metrics, by a significant margin in computing graph distances.
Comment: 11 pages
G5: A Universal GRAPH-BERT for Graph-to-Graph Transfer and Apocalypse Learning
The recent GRAPH-BERT model introduces a new approach to learning graph
representations merely based on the attention mechanism. GRAPH-BERT provides an
opportunity for transferring pre-trained models and learned graph
representations across different tasks within the same graph dataset. In this
paper, we will further investigate the graph-to-graph transfer of a universal
GRAPH-BERT for graph representation learning across different graph datasets,
and our proposed model is also referred to as G5 for simplicity. Learning G5
poses many challenges: the model must adapt to the distinct input and output
configurations of each graph data source, as well as to the differences in
their information distributions. G5 introduces a pluggable model architecture: (a)
each data source will be pre-processed with a unique input representation
learning component; (b) each output application task will also have a specific
functional component; and (c) all these diverse input and output components
will be connected to the universal GRAPH-BERT core component via an input size
unification layer and an output representation fusion layer, respectively.
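The pluggable layout described in (a)-(c) can be sketched as follows. This is a schematic under assumed names and shapes, not the paper's implementation: a per-source input component, a size-unification step, a shared core (a stand-in for GRAPH-BERT), and a per-task output head.

```python
import numpy as np

def input_component_source_a(raw):
    # (a) source-specific preprocessing: one such component per data source.
    return np.asarray(raw, dtype=float)

def size_unification(h, core_dim=4):
    # Input size unification layer: pad or truncate each source's
    # representation to the fixed dimensionality the shared core expects.
    out = np.zeros(core_dim)
    n = min(len(h), core_dim)
    out[:n] = h[:n]
    return out

def universal_core(h):
    # (c) the universal core shared by all sources; a simple nonlinearity
    # stands in for the actual GRAPH-BERT component here.
    return np.tanh(h)

def output_head_classification(h):
    # (b) task-specific functional component, here a toy binary classifier.
    return float(h.sum() > 0)

def g5_forward(raw):
    # End-to-end path for one source/task pair through the shared core.
    h = input_component_source_a(raw)
    h = size_unification(h)
    h = universal_core(h)
    return output_head_classification(h)
```

The point of the design is that only the thin input and output components are source- or task-specific; everything learned in the core can transfer across graph datasets.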
The G5 model removes the last obstacle for cross-graph representation
learning and transfer. For the graph sources with very sparse training data,
the G5 model pre-trained on other graphs can still be utilized for
representation learning with the necessary fine-tuning. Moreover, the
architecture of G5 also allows us to learn a supervised functional classifier
for data sources without any training data at all. Such a problem is named
the Apocalypse Learning task in this paper. Two different label reasoning
strategies, i.e., Cross-Source Classification Consistency Maximization (CCCM)
and Cross-Source Dynamic Routing (CDR), are introduced in this paper to address
the problem.
Comment: Keywords: Graph-Bert; Representation Learning; Apocalypse Learning;
Transfer Learning; Graph Mining; Data Mining