9,090 research outputs found
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning
Reasoning is essential for the development of large knowledge graphs,
especially for completion, which aims to infer new triples based on existing
ones. Both rules and embeddings can be used for knowledge graph reasoning,
and each has its own advantages and difficulties. Rule-based reasoning is
accurate and explainable, but rule learning that searches over the graph
suffers from poor efficiency because of the huge search space.
Embedding-based reasoning is more scalable and efficient, as reasoning is
conducted via computation over embeddings, but it has difficulty learning
good representations for sparse entities because a good embedding relies
heavily on data richness. Based on this observation, in this paper we explore
how embedding learning and rule learning can be combined so that the
advantages of each compensate for the difficulties of the other. We propose
IterE, a novel framework that iteratively learns embeddings and rules: rules
are learned from embeddings with a pruning strategy, and embeddings are
learned from existing triples together with new triples inferred by rules.
Evaluations of embedding quality show that rules help improve the embeddings
of sparse entities and their link prediction results. We also evaluate the
efficiency of rule learning and the quality of rules from IterE compared with
AMIE+, showing that IterE generates high-quality rules more efficiently.
Experiments show that iteratively learning embeddings and rules benefits both
learning and prediction.
Comment: This paper is accepted by WWW'19
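As an illustration of the alternating scheme this abstract describes, here is a
minimal, self-contained toy sketch: TransE-style embeddings are fit to the
current triples, composition rules are mined from the relation vectors, and
rule-inferred triples are fed back into the next round. The data, thresholds,
and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy sketch of the embedding/rule loop; everything here is illustrative.
rng = np.random.default_rng(0)

def learn_embeddings(triples, entities, relations, dim=16, epochs=200, lr=0.05):
    """Fit toy TransE-style embeddings (h + r ~ t) by plain gradient descent."""
    E = {e: rng.normal(size=dim) for e in entities}
    R = {r: rng.normal(size=dim) for r in relations}
    for _ in range(epochs):
        for h, r, t in triples:
            grad = 2 * (E[h] + R[r] - E[t])  # gradient of ||h + r - t||^2
            E[h] = E[h] - lr * grad
            R[r] = R[r] - lr * grad
            E[t] = E[t] + lr * grad
    return E, R

def mine_composition_rules(R, tol=0.5):
    """Propose r1(x,y) & r2(y,z) => r3(x,z) whenever r1 + r2 ~ r3."""
    rels = list(R)
    return [(r1, r2, r3) for r1 in rels for r2 in rels for r3 in rels
            if r3 not in (r1, r2)  # skip degenerate matches
            and np.linalg.norm(R[r1] + R[r2] - R[r3]) < tol]

def apply_rules(triples, rules):
    """Materialize new triples entailed by the mined composition rules."""
    inferred = set()
    for r1, r2, r3 in rules:
        for h, ra, m in triples:
            if ra != r1:
                continue
            for m2, rb, t in triples:
                if rb == r2 and m2 == m:
                    inferred.add((h, r3, t))
    return inferred

# Alternate between embedding learning and rule-based triple inference.
triples = {("a", "p", "b"), ("b", "q", "c"), ("a", "s", "c"),
           ("d", "p", "e"), ("e", "q", "f")}
entities = {x for h, _, t in triples for x in (h, t)}
relations = {r for _, r, _ in triples}
for _ in range(3):
    E, R = learn_embeddings(triples, entities, relations)
    triples |= apply_rules(triples, mine_composition_rules(R))
print(sorted(triples))  # should now include ("d", "s", "f") via p . q => s
```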
A Survey on Knowledge Graphs: Representation, Acquisition and Applications
Human knowledge provides a formal understanding of the world. Knowledge
graphs that represent structural relations between entities have become an
increasingly popular research direction towards cognition and human-level
intelligence. In this survey, we provide a comprehensive review of knowledge
graphs, covering research topics on 1) knowledge graph representation
learning, 2) knowledge acquisition and completion, 3) temporal knowledge
graphs, and 4) knowledge-aware applications, and we summarize recent
breakthroughs and prospective directions to facilitate future research. We
propose a full-view categorization and new taxonomies on these topics.
Knowledge graph embedding is organized along four aspects: representation
space, scoring function, encoding model, and auxiliary information. For
knowledge acquisition, especially knowledge graph completion, embedding
methods, path inference, and logical rule reasoning are reviewed. We further
explore several emerging topics, including meta relational learning,
commonsense reasoning, and temporal knowledge graphs. To facilitate future
research on knowledge graphs, we also provide a curated collection of
datasets and open-source libraries for different tasks. Finally, we offer a
thorough outlook on several promising research directions.
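To make the "scoring function" aspect of the survey's taxonomy concrete, the
following sketch shows two classic scores from the embedding literature,
TransE's translational distance and DistMult's trilinear product. The toy
embeddings are illustrative assumptions, not tied to any dataset.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE: plausible triples satisfy h + r ~ t; score by negative distance."""
    return -np.linalg.norm(h + r - t, ord=norm)

def distmult_score(h, r, t):
    """DistMult: trilinear product <h, r, t> = sum_i h_i * r_i * t_i."""
    return float(np.sum(h * r * t))

# Toy usage with random 8-dimensional embeddings; higher score = more plausible.
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=8) for _ in range(3))
print(transe_score(h, r, t), distmult_score(h, r, t))
```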
Improving Transitive Embeddings in Neural Reasoning Tasks via Knowledge-Based Policy Networks
This paper proposes an approach to embedding ontologies in order to support reasoning over transitive relations, using the datasets provided for the SemRec Challenge at ISWC 2022. Knowledge Graph Embedding (KGE) methods provide a low-dimensional representation of the entities and relationships extracted from a knowledge graph and have been used successfully for a variety of applications such as question answering, reasoning, inference, and link prediction. However, most KGE methods cannot handle the underlying constraints and characteristics of ontologies, preventing them from performing important reasoning tasks such as subsumption and instance checking. We propose to extend translation-based embedding methods to support subsumption and instance checking by leveraging transitive relations. Experimental results show that our approach achieves Hits@10 as high as 73% using samples generated by a policy network.
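One plausible way to leverage a transitive relation such as subClassOf in a
translation-based setting is to materialize its transitive closure so that
entailed subsumptions become additional training triples. The sketch below
shows only this generic preprocessing idea; the paper's policy-network
sampling is not reproduced, and all names are illustrative.

```python
from collections import defaultdict

def transitive_closure(edges):
    """Return all (a, c) pairs reachable through a transitive relation."""
    succ = defaultdict(set)
    for a, b in edges:
        succ[a].add(b)
    closed = set(edges)
    changed = True
    while changed:  # iterate to a fixpoint
        changed = False
        for a, b in list(closed):
            for c in succ[b]:
                if (a, c) not in closed:
                    closed.add((a, c))
                    changed = True
    return closed

# subClassOf edges Dog < Mammal < Animal; the closure adds (Dog, Animal),
# which instance checking can reuse: type(x, Dog) => type(x, Animal).
edges = {("Dog", "Mammal"), ("Mammal", "Animal")}
print(transitive_closure(edges))
```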
Emulating the Human Mind: A Neural-symbolic Link Prediction Model with Fast and Slow Reasoning and Filtered Rules
Link prediction is an important task in addressing the incompleteness problem
of knowledge graphs (KGs). Previous link prediction models suffer from issues
related to either performance or explanatory capability. Furthermore, models
that are capable of generating explanations often rely on erroneous paths or
reasoning even when arriving at the correct answer. To address these
challenges, we introduce a novel neural-symbolic model named FaSt-FLiP (Fast
and Slow Thinking with Filtered rules for Link Prediction), inspired by two
distinct aspects of human cognition: "commonsense reasoning" and "thinking,
fast and slow." Our objective is to combine a logical and a neural model for
enhanced link prediction. To tackle the incorrect paths or rules generated by
the logical model, we propose a semi-supervised method that converts rules
into sentences; these sentences are then assessed with an NLI (Natural
Language Inference) model, and incorrect rules are removed. To combine the
logical and neural models, we first obtain answers from each and then unify
them with an Inference Engine module, realized both as an algorithmic
implementation and as a novel neural architecture. To validate the efficacy
of our model, we conducted a series of experiments. The results demonstrate
the superior performance of our model in both link prediction metrics and the
generation of more reliable explanations.
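The rule-filtering step can be pictured with the following sketch: each mined
rule is verbalized into a premise/hypothesis pair and kept only if an NLI
model scores the hypothesis as entailed. The templates and the scorer are
stand-ins; the paper's actual verbalization method and NLI model are not
specified here.

```python
# Hypothetical verbalization templates for a few example relations.
TEMPLATES = {
    "born_in": "{0} was born in {1}",
    "located_in": "{0} is located in {1}",
    "citizen_of": "{0} is a citizen of {1}",
}

def verbalize_atom(rel, x, y):
    """Turn a relational atom rel(x, y) into a natural-language sentence."""
    return TEMPLATES[rel].format(x, y)

def filter_rules(rules, nli_entailment_score, threshold=0.9):
    """Keep rules whose verbalized head is entailed by the verbalized body."""
    kept = []
    for body, head in rules:
        premise = " and ".join(verbalize_atom(*atom) for atom in body)
        hypothesis = verbalize_atom(*head)
        if nli_entailment_score(premise, hypothesis) >= threshold:
            kept.append((body, head))
    return kept

# Example rule: born_in(X, Y) & located_in(Y, Z) => citizen_of(X, Z),
# checked with a dummy scorer standing in for a real NLI model.
rules = [([("born_in", "X", "Y"), ("located_in", "Y", "Z")],
          ("citizen_of", "X", "Z"))]
dummy_scorer = lambda premise, hypothesis: 0.95  # stand-in NLI model
print(filter_rules(rules, dummy_scorer))
```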
- …