Jointly Embedding Entities and Text with Distant Supervision
Learning representations for knowledge base entities and concepts is becoming
increasingly important for NLP applications. However, recent entity embedding
methods have relied on structured resources that are expensive to create for
new domains and corpora. We present a distantly-supervised method for jointly
learning embeddings of entities and text from an unannotated corpus, using only
a list of mappings between entities and surface forms. We learn embeddings from
open-domain and biomedical corpora, and compare against prior methods that rely
on human-annotated text or large knowledge graph structure. Our embeddings
capture entity similarity and relatedness better than prior work, both in
existing biomedical datasets and a new Wikipedia-based dataset that we release
to the community. Results on analogy completion and entity sense disambiguation
indicate that entities and words capture complementary information that can be
effectively combined for downstream use.
Comment: 12 pages; accepted to the 3rd Workshop on Representation Learning for NLP (RepL4NLP 2018). Code at https://github.com/OSU-slatelab/JE
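As a rough illustration of the distant-supervision idea above, the sketch below replaces matched surface forms in a toy corpus with entity IDs and trains skip-gram over the mixed token stream, so words and entities land in one vector space. The surface-form map, corpus, and entity IDs are invented, and the paper's actual training objective may differ; this is a minimal approximation, not the released code.

```python
# A minimal sketch, assuming surface-form matching plus skip-gram is a fair
# stand-in for the paper's objective; all data below is invented.
from gensim.models import Word2Vec

# The only supervision used: a map from surface forms to entity identifiers.
surface_to_entity = {
    "aspirin": "ENT:C0004057",
    "heart attack": "ENT:C0027051",
}

def distantly_annotate(tokens, mapping, max_len=3):
    """Greedily replace known surface forms (up to max_len tokens) with entity IDs."""
    out, i = [], 0
    while i < len(tokens):
        for n in range(max_len, 0, -1):
            span = " ".join(tokens[i:i + n])
            if span in mapping:
                out.append(mapping[span])  # emit the entity token
                i += n
                break
        else:
            out.append(tokens[i])          # ordinary word token
            i += 1
    return out

corpus = [
    "aspirin may lower the risk of heart attack".split(),
    "patients took aspirin daily after a heart attack".split(),
]
annotated = [distantly_annotate(s, surface_to_entity) for s in corpus]

# Skip-gram over the mixed word/entity stream embeds both in one space.
model = Word2Vec(annotated, vector_size=50, window=5, min_count=1, sg=1)
print(model.wv.most_similar("ENT:C0027051"))
```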
Evaluating Word Embeddings in Multi-label Classification Using Fine-grained Name Typing
Embedding models typically associate each word with a single real-valued
vector, representing its different properties. Evaluation methods, therefore,
need to analyze the accuracy and completeness of these properties in
embeddings. This requires fine-grained analysis of embedding subspaces.
Multi-label classification is an appropriate way to do so. We propose a new
evaluation method for word embeddings, based on multi-label classification of
a given word embedding. The task we use is fine-grained name typing: given a large
corpus, find all types that a name can refer to based on the name embedding.
Given the scale of entities in knowledge bases, we can build datasets for this
task that are complementary to current embedding evaluation datasets:
they are very large, contain fine-grained classes, and allow the direct
evaluation of embeddings without confounding factors like sentence context.
Comment: 6 pages; The 3rd Workshop on Representation Learning for NLP (RepL4NLP @ ACL 2018).
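One concrete form this evaluation could take is sketched below, with invented names, types, and random vectors standing in for a real pretrained embedding: one-vs-rest logistic regression predicts a name's type set from its embedding, and micro-F1 over the predicted labels scores the embedding. In practice the classifier would of course be scored on a held-out split rather than the training data.

```python
# A minimal sketch of embedding evaluation via multi-label name typing.
# Assumptions: `emb[name]` stands in for a pretrained embedding lookup;
# the names and fine-grained types below are toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
emb = {name: rng.normal(size=50) for name in
       ["paris", "washington", "amazon", "mercury"]}

# Each name may refer to entities of several fine-grained types.
typing_data = [
    ("paris",      {"/location/city"}),
    ("washington", {"/location/city", "/person/politician"}),
    ("amazon",     {"/location/river", "/organization/company"}),
    ("mercury",    {"/astronomy/planet", "/chemistry/element"}),
]

X = np.stack([emb[name] for name, _ in typing_data])
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform([types for _, types in typing_data])

# One binary classifier per type: each probes whether the embedding
# encodes that property of the name.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X)  # toy self-evaluation; use a test split in practice
print("micro-F1:", f1_score(Y, pred, average="micro"))
```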
Fine-Grained Entity Typing in Hyperbolic Space
How can we represent hierarchical information present in large type
inventories for entity typing? We study the ability of hyperbolic embeddings to
capture hierarchical relations between mentions in context and their target
types in a shared vector space. We evaluate on two datasets and investigate two
different techniques for creating a large hierarchical entity type inventory:
from an expert-generated ontology and by automatically mining type
co-occurrences. We find that the hyperbolic model yields improvements over its
Euclidean counterpart in some, but not all cases. Our analysis suggests that
the adequacy of this geometry depends on the granularity of the type inventory
and the way hierarchical relations are inferred.
Comment: 12 pages, 4 figures; final version, accepted at the 4th Workshop on Representation Learning for NLP (RepL4NLP), held in conjunction with ACL 2019.
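To make the geometry concrete, here is a small worked example of the Poincaré-ball distance that hyperbolic embeddings optimize. The two-dimensional points are invented; this reproduces only the metric the paper's models build on, not the models themselves.

```python
# A minimal sketch of the hyperbolic geometry involved: the Poincaré-ball
# distance, evaluated on toy 2-D points inside the open unit disk.
import numpy as np

def poincare_distance(u, v):
    """d(u, v) = arccosh(1 + 2*|u-v|^2 / ((1-|u|^2)*(1-|v|^2)))."""
    num = 2.0 * np.sum((u - v) ** 2)
    den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + num / den))

# A generic type near the origin and two specific types near the boundary.
person        = np.array([0.05, 0.0])
person_artist = np.array([0.70, 0.1])
location_city = np.array([-0.6, -0.5])

# Distances grow rapidly near the boundary, which is why trees embed well:
# parents sit near the origin and leaves are pushed outward.
print(poincare_distance(person, person_artist))         # parent-child: ~1.65
print(poincare_distance(person_artist, location_city))  # cross-branch: ~3.78
```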
An Empirical Analysis of NMT-Derived Interlingual Embeddings and their Use in Parallel Sentence Identification
End-to-end neural machine translation has overtaken statistical machine
translation in terms of translation quality for some language pairs, especially
those with large amounts of parallel data. Besides this palpable improvement,
neural networks provide several new properties. A single system can be trained
to translate between many languages at almost no additional cost other than
training time. Furthermore, internal representations learned by the network
serve as a new semantic representation of words (or sentences) which, unlike
standard word embeddings, are learned in an essentially bilingual or even
multilingual context. In view of these properties, the contribution of the
present work is two-fold. First, we systematically study the NMT context
vectors, i.e., the output of the encoder, and their power as an interlingua
representation of a sentence. We assess their quality and effectiveness by
measuring similarities across translations, as well as semantically related and
semantically unrelated sentence pairs. Second, as extrinsic evaluation of the
first point, we identify parallel sentences in comparable corpora, obtaining an
F1 = 98.2% on data from a shared task when using only NMT context vectors. Using
context vectors jointly with similarity measures, F1 reaches 98.9%.
Comment: 11 pages, 4 figures.
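As a schematic of the extrinsic task, the sketch below mean-pools per-token context vectors into sentence representations and thresholds their cosine similarity to flag parallel pairs. The encoder outputs are simulated with random arrays, and the 0.8 threshold is illustrative, not the paper's tuned decision rule or similarity measure.

```python
# A minimal sketch of parallel sentence identification from encoder states.
# Assumptions: per-token context vectors come from a shared multilingual
# NMT encoder; here they are simulated with random arrays.
import numpy as np

def pool(context_vectors):
    """Mean-pool per-token encoder outputs into one sentence vector."""
    return np.mean(context_vectors, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_parallel(src_states, tgt_states, threshold=0.8):
    """Classify a cross-lingual sentence pair as parallel / non-parallel."""
    return cosine(pool(src_states), pool(tgt_states)) >= threshold

rng = np.random.default_rng(0)
src = rng.normal(size=(7, 512))                    # source-side states
tgt = src + rng.normal(scale=0.1, size=(7, 512))   # near-translation
unrelated = rng.normal(size=(5, 512))              # unrelated sentence

print(is_parallel(src, tgt))        # True: pooled vectors nearly coincide
print(is_parallel(src, unrelated))  # False: cosine near zero
```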