Context-aware Path Ranking for Knowledge Base Completion
Knowledge base (KB) completion aims to infer missing facts from existing ones
in a KB. Among various approaches, path ranking (PR) algorithms have received
increasing attention in recent years. PR algorithms enumerate paths between
entity pairs in a KB and use those paths as features to train a model for
missing fact prediction. Due to their good performance and high model
interpretability, several such methods have been proposed. However, most existing
methods suffer from scalability problems (high RAM consumption) and feature
explosion (training on an exponentially large number of features). This paper
proposes a Context-aware Path Ranking (C-PR) algorithm to solve these problems
by introducing a selective path exploration strategy. C-PR learns global
semantics of entities in the KB using word embedding and leverages the
knowledge of entity semantics to enumerate contextually relevant paths using
bidirectional random walk. Experimental results on three large KBs show that
the path features (fewer in number) discovered by C-PR not only improve
predictive performance but are also more interpretable than existing baselines.
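The selective bidirectional exploration can be sketched as follows. This is a minimal toy, not C-PR itself: the KB triples and entity names are hypothetical, and `context_sim` is a placeholder for the paper's word-embedding similarity (here it treats every entity as relevant).

```python
import random

random.seed(0)

# Toy KB triples; a hypothetical example, not the KBs evaluated in the paper.
TRIPLES = [
    ("alice", "born_in", "paris"),
    ("paris", "capital_of", "france"),
    ("alice", "citizen_of", "france"),
    ("bob", "born_in", "lyon"),
    ("lyon", "located_in", "france"),
]

# Adjacency with inverse edges; a leading "_" marks a reversed relation.
adj = {}
for h, r, t in TRIPLES:
    adj.setdefault(h, []).append((r, t))
    adj.setdefault(t, []).append(("_" + r, h))

def inv(rel):
    return rel[1:] if rel.startswith("_") else "_" + rel

def context_sim(entity, query_pair):
    # Placeholder for C-PR's word-embedding similarity between an entity
    # and the query pair's context; every entity counts as relevant here.
    return 1.0

def walk(start, max_hops, query_pair, threshold=0.5):
    """One random walk of 0..max_hops steps; expansions whose next entity
    falls below the context-similarity threshold are pruned (the selective
    exploration idea)."""
    node, rels = start, []
    for _ in range(random.randint(0, max_hops)):
        options = [(r, n) for r, n in adj.get(node, [])
                   if context_sim(n, query_pair) >= threshold]
        if not options:
            break
        r, node = random.choice(options)
        rels.append(r)
    return node, tuple(rels)

def enumerate_paths(src, dst, tries=500, max_hops=1):
    """Bidirectional walks: forward from src and backward from dst; a path
    feature is emitted whenever the two walks meet at the same entity."""
    features = set()
    for _ in range(tries):
        mid_f, fwd = walk(src, max_hops, (src, dst))
        mid_b, bwd = walk(dst, max_hops, (src, dst))
        if mid_f == mid_b and (fwd or bwd):
            # invert and reverse the backward half so the path reads src -> dst
            features.add(fwd + tuple(inv(r) for r in reversed(bwd)))
    return features

print(enumerate_paths("alice", "france"))
```

The discovered relation sequences (e.g. a one-hop path and a two-hop path through an intermediate entity) would then serve as features for the fact-prediction model.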
Modeling Relation Paths for Representation Learning of Knowledge Bases
Representation learning of knowledge bases (KBs) aims to embed both entities
and relations into a low-dimensional space. Most existing methods only consider
direct relations in representation learning. We argue that multiple-step
relation paths also contain rich inference patterns between entities, and
propose a path-based representation learning model. This model considers
relation paths as translations between entities for representation learning,
and addresses two key challenges: (1) Since not all relation paths are
reliable, we design a path-constraint resource allocation algorithm to measure
the reliability of relation paths. (2) We represent relation paths via semantic
composition of relation embeddings. Experimental results on real-world datasets
show that, as compared with baselines, our model achieves significant and
consistent improvements on knowledge base completion and relation extraction
from text.
Comment: 10 pages.
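The path-reliability idea in challenge (1) can be sketched with a resource-allocation pass over a toy graph. The triples and names below are hypothetical, and this is only a sketch of the flow computation, not the paper's full model:

```python
# Toy KB as forward adjacency lists; a hypothetical example, not the
# paper's datasets.
TRIPLES = [
    ("alice", "born_in", "paris"),
    ("paris", "located_in", "france"),
    ("paris", "located_in", "europe"),
]

adj = {}
for h, r, t in TRIPLES:
    adj.setdefault(h, []).append((r, t))

def pcra(head, path):
    """Path-constraint resource allocation: all resource starts on `head`;
    at each hop, every node splits its resource evenly among its successors
    under that hop's relation. The amount reaching an entity measures how
    reliably the path connects `head` to it."""
    resource = {head: 1.0}
    for rel in path:
        nxt = {}
        for node, amount in resource.items():
            succ = [t for r, t in adj.get(node, []) if r == rel]
            if not succ:
                continue  # resource on nodes without this relation is lost
            for t in succ:
                nxt[t] = nxt.get(t, 0.0) + amount / len(succ)
        resource = nxt
    return resource

# "paris" is located_in both "france" and "europe", so the two-step path's
# resource splits in half at the second hop.
print(pcra("alice", ["born_in", "located_in"]))
```

A path that fans out over many intermediate entities delivers little resource to any single tail, and so receives a low reliability weight.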
Compositional Vector Space Models for Knowledge Base Completion
Knowledge base (KB) completion adds new facts to a KB by making inferences
from existing facts, for example by inferring with high likelihood
nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop
relational synonyms like this, or use as evidence a multi-hop relational path
treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper
presents an approach that reasons about conjunctions of multi-hop relations
non-atomically, composing the implications of a path using a recursive neural
network (RNN) that takes as input vector embeddings of the binary relations in
the path. Not only does this allow us to generalize to paths unseen at training
time, but also, with a single high-capacity RNN, to predict new relation types
not seen when the compositional model was trained (zero-shot learning). We
assemble a new dataset of over 52M relational triples, and show that our method
improves over a traditional classifier by 11%, and a method leveraging
pre-trained embeddings by 7%.
Comment: The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of the Asian Federation of Natural Language Processing, 201
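The non-atomic composition can be sketched as a fold over relation embeddings. This is one possible parameterization under toy assumptions, not the paper's exact model: the relation names are hypothetical, the embeddings are random rather than learned, and the recurrence shown is a generic sigmoid RNN step.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension

# Hypothetical relation embeddings; in the paper these are learned jointly
# with the composition parameters rather than drawn at random.
emb = {r: rng.normal(size=d) for r in ["born_in", "contained_in", "nationality"]}

# One sketch of the recursive composition: h_1 = x_1,
# h_i = sigmoid(W [h_{i-1}; x_i]).
W = rng.normal(size=(d, 2 * d))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compose(path):
    """Fold the relation embeddings along a path into a single path vector."""
    h = emb[path[0]]
    for rel in path[1:]:
        h = sigmoid(W @ np.concatenate([h, emb[rel]]))
    return h

# Score a candidate target relation by the dot product between the composed
# path vector and that relation's embedding. Because composition only needs
# the individual relation embeddings, paths unseen at training time can
# still be scored.
path_vec = compose(["born_in", "contained_in"])
print(float(path_vec @ emb["nationality"]))
```

Sharing one high-capacity RNN across all paths is what enables the zero-shot setting described above: a new target relation only needs an embedding to be scored against composed path vectors.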
Unsupervised Extraction of Representative Concepts from Scientific Literature
This paper studies the automated categorization and extraction of scientific
concepts from titles of scientific articles, in order to gain a deeper
understanding of their key contributions and facilitate the construction of a
generic academic knowledge base. Towards this goal, we propose an unsupervised,
domain-independent, and scalable two-phase algorithm to type and extract key
concept mentions into aspects of interest (e.g., Techniques, Applications,
etc.). In the first phase of our algorithm we propose PhraseType, a
probabilistic generative model which exploits textual features and limited POS
tags to broadly segment text snippets into aspect-typed phrases. We extend this
model to simultaneously learn aspect-specific features and identify academic
domains in multi-domain corpora, since the two tasks mutually enhance each
other. In the second phase, we propose an approach based on adaptor grammars to
extract fine-grained concept mentions from the aspect-typed phrases without the
need for any external resources or human effort, in a purely data-driven
manner. We apply our technique to study literature from diverse scientific
domains and show significant gains over state-of-the-art concept extraction
techniques. We also present a qualitative analysis of the results obtained.
Comment: Published as a conference paper at CIKM 201
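The phase-one idea of assigning phrases to aspects can be illustrated with a much simpler stand-in. This is NOT PhraseType: it replaces the generative model and POS features with smoothed unigram scoring against hand-picked seed-word counts, and the aspects, seed words, and phrases are all hypothetical.

```python
import math

# Hypothetical per-aspect seed-word counts; PhraseType learns aspect-specific
# features instead of relying on a hand-built lexicon like this.
ASPECT_SEEDS = {
    "Technique":   {"model": 5, "algorithm": 5, "network": 3, "learning": 3},
    "Application": {"completion": 4, "extraction": 4, "ranking": 3, "translation": 3},
}

def aspect_scores(phrase, alpha=1.0):
    """Smoothed log-likelihood of the phrase under each aspect's unigram
    distribution (additive smoothing with pseudo-count `alpha`)."""
    tokens = phrase.lower().split()
    scores = {}
    for aspect, counts in ASPECT_SEEDS.items():
        total = sum(counts.values())
        vocab = len(counts)
        scores[aspect] = sum(
            math.log((counts.get(t, 0) + alpha) / (total + alpha * vocab))
            for t in tokens
        )
    return scores

def type_phrase(phrase):
    """Assign the phrase to its highest-scoring aspect."""
    scores = aspect_scores(phrase)
    return max(scores, key=scores.get)

print(type_phrase("recursive neural network model"))
print(type_phrase("knowledge base completion"))
```

Phase two (adaptor grammars over the typed phrases) is omitted here; its role is to carve fine-grained concept mentions out of each aspect-typed phrase without external resources.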