MoCoSA: Momentum Contrast for Knowledge Graph Completion with Structure-Augmented Pre-trained Language Models
Knowledge Graph Completion (KGC) aims to conduct reasoning on the facts
within knowledge graphs and automatically infer missing links. Existing methods
can mainly be categorized as structure-based or description-based. On the one
hand, structure-based methods effectively represent relational facts in
knowledge graphs using entity embeddings. However, they struggle with
semantically rich real-world entities due to limited structural information and
fail to generalize to unseen entities. On the other hand, description-based
methods leverage pre-trained language models (PLMs) to understand textual
information. They exhibit strong robustness towards unseen entities. However,
they have difficulty with large-scale negative sampling and often lag behind
structure-based methods. To address these issues, in this paper, we propose
Momentum Contrast for knowledge graph completion with Structure-Augmented
pre-trained language models (MoCoSA), which allows the PLM to perceive
structural information via an adaptable structure encoder. To improve learning
efficiency, we propose momentum hard negative sampling and intra-relation
negative sampling. Experimental results demonstrate that our approach achieves
state-of-the-art performance in terms of mean reciprocal rank (MRR), with
improvements of 2.5% on WN18RR and 21% on OpenBG500.
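The momentum-contrast training that the MoCoSA abstract describes can be illustrated with a minimal InfoNCE loss computed against a queue of momentum-encoded negatives. This is a generic MoCo-style sketch, not MoCoSA's actual implementation; all names and the temperature value are illustrative.

```python
import numpy as np

def info_nce_loss(query, pos_key, neg_queue, temperature=0.05):
    """InfoNCE contrastive loss for one (query, positive) pair plus a
    queue of negatives, as used in MoCo-style contrastive training."""
    def cos(a, b):
        # Cosine similarity between two vectors.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity logits: positive candidate first, then the negative queue.
    logits = np.array([cos(query, pos_key)] +
                      [cos(query, n) for n in neg_queue]) / temperature
    # Softmax cross-entropy with the positive at index 0.
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

A larger negative queue tightens the contrastive objective without requiring a larger batch, which is why momentum encoders make large-scale negative sampling affordable for PLM-based scorers.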
Interaction Embeddings for Prediction and Explanation in Knowledge Graphs
Knowledge graph embedding aims to learn distributed representations for
entities and relations, and is proven to be effective in many applications.
Crossover interactions --- bi-directional effects between entities and
relations --- help select related information when predicting a new triple, but
haven't been formally discussed before. In this paper, we propose CrossE, a
novel knowledge graph embedding which explicitly simulates crossover
interactions. It not only learns one general embedding for each entity and
relation as most previous methods do, but also generates multiple triple
specific embeddings for both of them, named interaction embeddings. We evaluate
embeddings on typical link prediction tasks and find that CrossE achieves
state-of-the-art results on complex and more challenging datasets. Furthermore,
we evaluate embeddings from a new perspective --- giving explanations for
predicted triples, which is important for real applications. In this work, an
explanation for a triple is regarded as a reliable closed-path between the head
and the tail entity. Compared to other baselines, we show experimentally that
CrossE, benefiting from interaction embeddings, is more capable of generating
reliable explanations to support its predictions.
Comment: This paper is accepted by WSDM201
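The crossover-interaction scoring can be sketched as follows. This is a simplified reading of the CrossE idea, in which a relation-specific interaction vector `c_r` modulates the head entity, and the modulated head in turn modulates the relation, before matching against the tail; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def crosse_score(h, r, t, c_r, b):
    """Simplified CrossE-style score for a triple (h, r, t).
    c_r is a relation-specific interaction vector; b is a bias vector."""
    h_i = c_r * h                   # triple-specific head interaction embedding
    r_i = h_i * r                   # crossover: modulated head modulates the relation
    q = np.tanh(h_i + r_i + b)      # combined query vector
    # Sigmoid of the match between the query and the tail embedding.
    return 1.0 / (1.0 + np.exp(-(q @ t)))
```

The triple-specific embeddings `h_i` and `r_i` are what the abstract calls interaction embeddings: the same entity gets a different effective representation depending on which relation it participates in.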
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning
Reasoning is essential for the development of large knowledge graphs,
especially for completion, which aims to infer new triples based on existing
ones. Both rules and embeddings can be used for knowledge graph reasoning and
they have their own advantages and difficulties. Rule-based reasoning is
accurate and explainable but rule learning with searching over the graph always
suffers from efficiency due to huge search space. Embedding-based reasoning is
more scalable and efficient as the reasoning is conducted via computation
between embeddings, but it has difficulty learning good representations for
sparse entities because a good embedding relies heavily on data richness. Based
on this observation, in this paper we explore how embedding learning and rule
learning can be combined, with the advantages of each compensating for the
difficulties of the other. We propose IterE, a novel framework that iteratively
learns embeddings and rules, in which rules are learned from embeddings with a
proper pruning strategy and embeddings are learned from existing triples
together with new triples inferred by rules. Evaluations of the embedding
quality of IterE show that rules
help improve the quality of sparse entity embeddings and their link prediction
results. We also evaluate the efficiency of rule learning and quality of rules
from IterE compared with AMIE+, showing that IterE is capable of generating
high quality rules more efficiently. Experiments show that iteratively learned
embeddings and rules benefit each other during learning and prediction.
Comment: This paper is accepted by WWW'1
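The alternating scheme the IterE abstract describes can be sketched as a small driver loop. All helper names here are hypothetical placeholders, not IterE's actual components: embeddings are fit to the current fact set, rules are mined and pruned using those embeddings, and rule-inferred triples are fed back into the next round of embedding learning.

```python
def iterate(triples, learn_embeddings, mine_rules, infer, steps=3):
    """Skeleton of an IterE-style loop: embeddings and rules are refined
    alternately, each stage feeding the other (helper names hypothetical)."""
    triples = set(triples)
    emb, rules = None, []
    for _ in range(steps):
        emb = learn_embeddings(triples)       # embeddings from current facts
        rules = mine_rules(emb)               # rules scored/pruned via embeddings
        triples |= infer(rules, triples)      # rule-inferred triples augment training
    return emb, rules, triples
```

The point of the loop is exactly the complementarity claimed above: rules densify the training set for sparse entities, while embeddings replace exhaustive graph search as the rule-scoring signal.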
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
Knowledge graph embedding has been an active research topic for knowledge
base completion, with progressive improvement from the initial TransE, TransH,
and DistMult to the current state-of-the-art ConvE. ConvE uses 2D convolution
over embeddings and multiple layers of nonlinear features to model knowledge
graphs. The model can be efficiently trained and scalable to large knowledge
graphs. However, there is no structure enforcement in the embedding space of
ConvE. The recent graph convolutional network (GCN) provides another way of
learning graph node embedding by successfully utilizing graph connectivity
structure. In this work, we propose a novel end-to-end Structure-Aware
Convolutional Network (SACN) that takes the benefit of GCN and ConvE together.
SACN consists of an encoder of a weighted graph convolutional network (WGCN),
and a decoder of a convolutional network called Conv-TransE. WGCN utilizes
knowledge graph node structure, node attributes and edge relation types. It has
learnable weights that adapt the amount of information from neighbors used in
local aggregation, leading to more accurate embeddings of graph nodes. Node
attributes in the graph are represented as additional nodes in the WGCN. The
decoder Conv-TransE enables the state-of-the-art ConvE to be translational
between entities and relations while keeping the same link prediction performance
as ConvE. We demonstrate the effectiveness of the proposed SACN on standard
FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over
the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
Comment: The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019)
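The Conv-TransE decoder idea can be sketched as follows: the head and relation embeddings are stacked as a 2-row input and convolved along the embedding dimension (without the 2D reshaping ConvE performs), so the additive h + r alignment between the two rows is preserved. This is a simplified illustration; the fixed projection `W` is a hypothetical stand-in for the learned output layer, and the kernels would be learned in practice.

```python
import numpy as np

def conv_transe_score(h, r, t, kernels):
    """Simplified Conv-TransE-style score: 1D convolution over the
    stacked (h, r) pair, then a projection matched against the tail t."""
    d = h.shape[0]
    x = np.stack([h, r])                         # 2 x d input "image"
    feats = []
    for K in kernels:                            # each kernel K is 2 x k
        k = K.shape[1]
        # Same-padding along the embedding dimension only.
        pad = np.pad(x, ((0, 0), (k // 2, k - 1 - k // 2)))
        conv = np.array([np.sum(pad[:, i:i + k] * K) for i in range(d)])
        feats.append(conv)
    m = np.concatenate(feats)                    # flattened feature map
    # Hypothetical fixed projection back to d dimensions, then dot with tail.
    W = np.ones((m.shape[0], d)) / m.shape[0]
    return (m @ W) @ t
```

Because each kernel slides over both rows at once, a kernel with equal rows effectively convolves h + r, which is the translational structure the decoder is designed to retain.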