Relphormer: Relational Graph Transformer for Knowledge Graph Representations
Transformers have achieved remarkable performance in widespread fields,
including natural language processing, computer vision and graph mining.
However, vanilla Transformer architectures have not yielded promising
improvements for Knowledge Graph (KG) representations, an area still dominated
by the translational-distance paradigm, because they struggle to capture the
intrinsically heterogeneous structural and semantic information of knowledge
graphs. To this end, we
propose a new variant of Transformer for knowledge graph representations dubbed
Relphormer. Specifically, we introduce Triple2Seq which can dynamically sample
contextualized sub-graph sequences as the input to alleviate the heterogeneity
issue. We propose a novel structure-enhanced self-attention mechanism to encode
the relational information and keep the semantic information within entities
and relations. Moreover, we utilize masked knowledge modeling for general
knowledge graph representation learning, which can be applied to various
KG-based tasks including knowledge graph completion, question answering, and
recommendation. Experimental results on six datasets show that Relphormer can
obtain better performance compared with baselines. Code is available at
https://github.com/zjunlp/Relphormer.
Comment: Work in progress
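The abstract does not give implementation details, but the idea behind Triple2Seq — sampling a contextualized sub-graph sequence around a center triple as Transformer input — can be sketched as follows. This is a minimal illustrative sketch; the toy triples, the function name, and the sampling policy (triples sharing an entity with the center) are all assumptions, not the paper's actual algorithm.

```python
import random

# Toy KG as (head, relation, tail) triples; purely illustrative data.
triples = [
    ("paris", "capital_of", "france"),
    ("france", "located_in", "europe"),
    ("paris", "located_in", "france"),
    ("berlin", "capital_of", "germany"),
    ("germany", "located_in", "europe"),
]

def triple2seq(center, triples, k=2, seed=0):
    """Sample a contextualized sub-graph sequence around a center triple:
    the triple itself plus up to k other triples that share an entity
    with it, flattened into one token sequence."""
    rng = random.Random(seed)
    h, r, t = center
    context = [tr for tr in triples
               if tr != center and (h in (tr[0], tr[2]) or t in (tr[0], tr[2]))]
    sampled = rng.sample(context, min(k, len(context)))
    seq = []
    for tr in [center] + sampled:
        seq.extend(tr)  # flatten (h, r, t) into tokens
    return seq

seq = triple2seq(("paris", "capital_of", "france"), triples, k=2)
# seq starts with the center triple, followed by sampled context triples
```

Each call can draw a different context sample, which is what makes the sub-graph sequences "dynamic" rather than a fixed serialization of the whole graph.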
Heterogeneous Graph Learning for Acoustic Event Classification
Heterogeneous graphs provide a compact, efficient, and scalable way to model
data involving multiple disparate modalities. This makes modeling audiovisual
data using heterogeneous graphs an attractive option. However, graph structure
does not appear naturally in audiovisual data. Graphs for audiovisual data are
constructed manually, which is both difficult and sub-optimal. In this work, we
address this problem by (i) proposing a parametric graph construction strategy
for the intra-modal edges, and (ii) learning the crossmodal edges. To this end,
we develop a new model, the heterogeneous graph crossmodal network (HGCN), which
learns the crossmodal edges. Our proposed model can adapt to various spatial
and temporal scales owing to its parametric construction, while the learnable
crossmodal edges effectively connect the relevant nodes across modalities.
Experiments on a large benchmark dataset (AudioSet) show that our model is
state-of-the-art (0.53 mean average precision), outperforming transformer-based
models and other graph-based models.
Comment: arXiv admin note: text overlap with arXiv:2207.0793
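One common way to realize "learnable crossmodal edges" of the kind described above is to score audio-video node pairs and normalize the scores into edge weights. The sketch below is a hypothetical, stdlib-only illustration of that pattern, not HGCN's actual formulation; the toy embeddings and function names are assumptions.

```python
import math

# Toy node embeddings for two modalities; values are illustrative.
audio = [[1.0, 0.0], [0.0, 1.0]]               # 2 audio nodes, dim 2
video = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]   # 3 video nodes, dim 2

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def crossmodal_edges(audio, video):
    """For each audio node, compute a softmax over dot-product scores
    with every video node; the resulting weights act as crossmodal
    edge weights (learnable once the embeddings are trained)."""
    edges = []
    for a in audio:
        scores = [sum(x * y for x, y in zip(a, v)) for v in video]
        edges.append(softmax(scores))
    return edges

w = crossmodal_edges(audio, video)
# Each audio node now carries a weight distribution over video nodes,
# so the most relevant video node receives the largest edge weight.
```

In a trained model these embeddings come from modality-specific encoders, so gradient descent adjusts which crossmodal edges end up strong.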
Simple and Efficient Heterogeneous Graph Neural Network
Heterogeneous graph neural networks (HGNNs) have a powerful capability to embed
rich structural and semantic information of a heterogeneous graph into node
representations. Existing HGNNs inherit many mechanisms from graph neural
networks (GNNs) over homogeneous graphs, especially the attention mechanism and
the multi-layer structure. These mechanisms add considerable complexity, yet
few works study whether they are actually effective on heterogeneous graphs.
This paper conducts an in-depth and detailed study of these mechanisms and
proposes Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN). To
easily capture structural information, SeHGNN pre-computes the neighbor
aggregation using a light-weight mean aggregator, which reduces complexity by
removing overused neighbor attention and avoiding repeated neighbor aggregation
in every training epoch. To better utilize semantic information, SeHGNN adopts
the single-layer structure with long metapaths to extend the receptive field,
as well as a transformer-based semantic fusion module to fuse features from
different metapaths. As a result, SeHGNN combines a simple network structure
with high prediction accuracy and fast training speed. Extensive
experiments on five real-world heterogeneous graphs demonstrate the superiority
of SeHGNN over state-of-the-art methods in both accuracy and training speed.
Comment: Accepted by AAAI 202
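The key efficiency idea above — pre-computing neighbor aggregation once with a light-weight mean aggregator instead of re-running attention every epoch — can be illustrated with a few lines. This is a minimal sketch under assumed data; the node names, feature values, and metapath are hypothetical, not from the paper.

```python
# Toy node features: two papers (p0, p1) and one author (a0).
features = {"p0": [1.0, 3.0], "p1": [3.0, 5.0], "a0": [2.0, 2.0]}

# Neighbors reached along one metapath (e.g., author -> paper).
metapath_neighbors = {"a0": ["p0", "p1"]}

def precompute_mean(features, neighbors):
    """Mean-aggregate each node's metapath neighbors ONCE, before
    training, so no per-epoch neighbor aggregation is needed."""
    out = {}
    for node, nbrs in neighbors.items():
        dim = len(features[nbrs[0]])
        out[node] = [sum(features[n][d] for n in nbrs) / len(nbrs)
                     for d in range(dim)]
    return out

agg = precompute_mean(features, metapath_neighbors)
# agg["a0"] is the element-wise mean of p0 and p1: [2.0, 4.0]
```

The aggregated vectors (one per metapath) would then be fed to a single-layer network with a semantic fusion module, so the expensive graph traversal never appears inside the training loop.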
DSHGT: Dual-Supervisors Heterogeneous Graph Transformer -- A pioneer study of using heterogeneous graph learning for detecting software vulnerabilities
Vulnerability detection is a critical problem in software security and
attracts growing attention both from academia and industry. Traditionally,
software security is safeguarded by designated rule-based detectors that
heavily rely on empirical expertise, requiring tremendous effort from software
experts to generate rule repositories for large code corpus. Recent advances in
deep learning, especially Graph Neural Networks (GNN), have uncovered the
feasibility of automatic detection of a wide range of software vulnerabilities.
However, prior learning-based works either break programs down into sequences
of word tokens to extract contextual features of code, or apply GNNs largely to
homogeneous graph representations (e.g., the AST) without discerning the
complex types of underlying program entities (e.g., methods, variables). In this work,
we are one of the first to explore heterogeneous graph representation in the
form of Code Property Graph and adapt a well-known heterogeneous graph network
with a dual-supervisor structure for the corresponding graph learning task.
Using the prototype built, we have conducted extensive experiments on both
synthetic datasets and real-world projects. Compared with the state-of-the-art
baselines, the results demonstrate promising effectiveness in this research
direction in terms of vulnerability detection performance (average F1
improvements of over 10% on real-world projects) and transferability from C/C++
to other programming languages (average F1 improvements of over 11%).
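A heterogeneous graph over program entities, in the spirit of the Code Property Graph mentioned above, can be represented by typing both nodes and edges. The sketch below is a hypothetical illustration of that data structure; the node types, relation names, and helper function are assumptions for demonstration, not the paper's schema.

```python
# Typed nodes: each program entity carries a node type.
nodes = {
    0: {"type": "method",   "name": "copy_buf"},
    1: {"type": "variable", "name": "dst"},
    2: {"type": "call",     "name": "strcpy"},
}

# Typed edges: (source, relation, target) triples over those nodes.
edges = [
    (0, "contains", 2),  # the method contains the call
    (2, "writes",   1),  # the call writes the variable
]

def typed_neighbors(edges, src, rel):
    """Targets reachable from src via edges of relation rel — the typed
    traversal a heterogeneous GNN message-passes over, as opposed to a
    homogeneous graph that ignores entity and relation types."""
    return [t for s, r, t in edges if s == src and r == rel]

callees = typed_neighbors(edges, 0, "contains")
written = typed_neighbors(edges, 2, "writes")
```

Keeping relation types explicit is what lets a heterogeneous GNN learn different message functions per relation, rather than flattening everything into one edge type.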