Neural Cross-Lingual Entity Linking
A major challenge in Entity Linking (EL) is making effective use of
contextual information to disambiguate mentions to Wikipedia that might refer
to different entities in different contexts. The problem is exacerbated in
cross-lingual EL, which involves linking mentions written in non-English
documents to entries in the English Wikipedia: to compare textual clues across
languages we need to compute similarity between textual fragments across
languages. In this paper, we propose a neural EL model that learns fine-grained
similarities and dissimilarities between the query and candidate document from
multiple perspectives, combined with convolution and tensor networks. Further,
we show that this English-trained system can be applied, in zero-shot learning,
to other languages by making surprisingly effective use of multi-lingual
embeddings. The proposed system yields state-of-the-art results on the English
as well as the cross-lingual (Spanish and Chinese) TAC 2015 datasets.
Comment: Association for the Advancement of Artificial Intelligence (AAAI), 201
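The cross-lingual comparison described above can be illustrated with a toy sketch: if words from different languages share one embedding space, a mention context in Spanish can be scored against English candidate descriptions by cosine similarity of averaged embeddings. This is a simplification of the abstract's approach (the actual model uses learned convolution and tensor networks), and the tiny embedding table below is entirely hypothetical.

```python
import math

# Toy shared multilingual embedding table (hypothetical 2-d vectors):
# translation pairs map to nearby points in the same space.
EMB = {
    "bank":  [0.9, 0.1],    # English
    "banco": [0.88, 0.12],  # Spanish
    "river": [0.1, 0.9],    # English
    "rio":   [0.12, 0.88],  # Spanish
}

def avg_embedding(tokens):
    """Average the embeddings of the known tokens into one context vector."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# A Spanish mention context scored against two English candidate descriptions:
# the river-bank candidate should win over the financial-bank candidate.
mention = avg_embedding(["banco", "rio"])
cand_financial = avg_embedding(["bank"])
cand_river = avg_embedding(["river", "bank"])
print(cosine(mention, cand_river) > cosine(mention, cand_financial))
```

No bilingual dictionary is consulted at score time, which is what makes the zero-shot application to new languages possible.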
Name Variants for Improving Entity Discovery and Linking
Identifying all names that refer to a particular set of named entities is a challenging task, as quite often we need to consider many features that involve a lot of variation, such as abbreviations, aliases, hypocorisms, multilingualism or partial matches. Each entity type can also have specific rules for name variants: people's names can include titles, country and branch names are sometimes removed from organization names, while locations are often plagued by the issue of nested entities. The lack of a clear strategy for collecting, processing and computing name variants significantly lowers the recall of tasks such as Named Entity Linking and Knowledge Base Population, since name variants are frequently used in all kinds of textual content.
This paper proposes several strategies to address these issues. Recall can be improved by combining knowledge repositories and by computing additional variants with algorithmic approaches. Heuristics and machine learning methods then analyze the generated name variants and mark ambiguous names to increase precision. An extensive evaluation demonstrates the effects of integrating these methods into a new Named Entity Linking framework and confirms that systematically considering name variants yields significant performance improvements.
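The algorithmic variant generation described above can be sketched with a few simple rules. The rules and the title list below are illustrative stand-ins, not the paper's actual heuristics: title stripping, initial-based abbreviation, a last-token partial match, and an acronym for multi-token names.

```python
# Minimal sketch of rule-based name-variant generation (illustrative rules only).
TITLES = {"dr.", "prof.", "mr.", "ms."}

def name_variants(name):
    """Generate a set of plausible surface-form variants for a name."""
    tokens = name.split()
    variants = {name}
    # Strip honorific titles, a rule specific to person names.
    core = [t for t in tokens if t.lower() not in TITLES]
    variants.add(" ".join(core))
    if len(core) >= 2:
        # Initial-based abbreviation: "John Smith" -> "J. Smith".
        variants.add(core[0][0] + ". " + " ".join(core[1:]))
        # Partial match on the last token: "Smith".
        variants.add(core[-1])
        # Acronym: "European Central Bank" -> "ECB".
        variants.add("".join(t[0].upper() for t in core))
    return variants

print(sorted(name_variants("Dr. John Smith")))
```

In a full system such generated variants would then be filtered for ambiguity, as the abstract notes, before being added to the linker's alias dictionary.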
Zero-shot Neural Transfer for Cross-lingual Entity Linking
Cross-lingual entity linking maps an entity mention in a source language to
its corresponding entry in a structured knowledge base that is in a different
(target) language. While previous work relies heavily on bilingual lexical
resources to bridge the gap between the source and the target languages, these
resources are scarce or unavailable for many low-resource languages. To address
this problem, we investigate zero-shot cross-lingual entity linking, in which
we assume no bilingual lexical resources are available in the source
low-resource language. Specifically, we propose pivot-based entity linking,
which leverages information from a high-resource "pivot" language to train
character-level neural entity linking models that are transferred to the source
low-resource language in a zero-shot manner. With experiments on 9 low-resource
languages and transfer through a total of 54 languages, we show that our
proposed pivot-based framework improves entity linking accuracy by 17%
(absolute) on average over the baseline systems in the zero-shot scenario.
Further, we investigate the use of language-universal phonological
representations, which improve average accuracy by 36% (absolute) when
transferring between languages that use different scripts.
Comment: To appear in AAAI 201
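A toy sketch can illustrate why character-level models transfer across related languages without bilingual lexicons: surface strings in a low-resource language often share character patterns with KB entry names. Character-bigram Jaccard similarity below stands in for the paper's learned character-level neural scorer; it is a deliberate simplification, and the example mention and KB titles are invented.

```python
# Language-agnostic string matching as a stand-in for a character-level
# neural entity linker (illustrative simplification).

def char_bigrams(s):
    """Return the set of character bigrams of a lowercased string."""
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def link(mention, kb_entries):
    """Pick the KB entry whose surface form is most similar to the mention."""
    grams = char_bigrams(mention)
    return max(kb_entries, key=lambda e: jaccard(grams, char_bigrams(e)))

# An Italian mention linked against English KB titles via shared characters:
# no bilingual resource is needed, only the shared script.
print(link("Repubblica Italiana", ["Italian Republic", "French Republic"]))
```

When source and target languages use different scripts, this shared-character signal disappears, which is the motivation for the phonological representations the abstract reports on.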
A Study on Candidate Retrieval and Ranking Methods for Entity Linking
Tohoku University, Kentaro Inui
Fixed Size Ordinally-Forgetting Encoding and its Applications
In this thesis, we propose the new Fixed-size Ordinally-Forgetting Encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation. FOFE can model the word order in a sequence using a simple ordinally-forgetting mechanism according to the positions of words. We address two fundamental problems in natural language processing, namely, Language Modeling (LM) and Named Entity Recognition (NER).
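The ordinally-forgetting mechanism can be written as the recursion z_t = alpha * z_(t-1) + e_t, where e_t is the one-hot vector of the t-th word and 0 < alpha < 1 is the forgetting factor, so earlier words contribute exponentially smaller weights. A minimal sketch follows; the vocabulary size and the value of alpha are illustrative choices, not values from the thesis.

```python
# Minimal sketch of Fixed-size Ordinally-Forgetting Encoding (FOFE):
#   z_t = alpha * z_(t-1) + e_t
# where e_t is the one-hot vector of word t. Any variable-length sequence
# is thus mapped to a single vocabulary-sized vector.

def fofe_encode(word_ids, vocab_size, alpha=0.7):
    """Encode a variable-length sequence of word ids into a fixed-size vector."""
    z = [0.0] * vocab_size
    for w in word_ids:
        z = [alpha * v for v in z]  # decay the contribution of earlier words
        z[w] += 1.0                 # add the one-hot vector of the current word
    return z

# Example: sequence [2, 0, 2] over a vocabulary of size 4 with alpha = 0.7.
# Word 2 occurs at positions 1 and 3: its weight is alpha^2 + 1 = 1.49;
# word 0 occurs at position 2: its weight is alpha = 0.7.
print(fofe_encode([2, 0, 2], vocab_size=4))
```

Because distinct sequences almost always decay to distinct weight patterns, the encoding is almost unique, which is the property the thesis exploits for both LM and NER.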
We have applied FOFE to Feedforward Neural Network Language Models (FFNN-LMs). Experimental results have shown that, without using any recurrent feedback, FOFE-FFNN-LMs significantly outperform not only the standard fixed-input FFNN-LMs but also some popular Recurrent Neural Network Language Models (RNN-LMs).
Instead of treating NER as a sequence labeling problem, we propose a new local detection approach, which relies on FOFE to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. This local detection approach has shown many advantages over the traditional sequence labeling methods. Our method has yielded strong performance on all tasks we have examined.
- …