Entity Recognition at First Sight: Improving NER with Eye Movement Information
Previous research shows that eye-tracking data contains information about the
lexical and syntactic properties of text, which can be used to improve natural
language processing models. In this work, we leverage eye movement features
from three corpora with recorded gaze information to augment a state-of-the-art
neural model for named entity recognition (NER) with gaze embeddings. These
corpora were manually annotated with named entity labels. Moreover, we show how
gaze features, generalized at the word type level, eliminate the need for recorded
eye-tracking data at test time. The gaze-augmented models for NER using
token-level and type-level features outperform the baselines. We present the
benefits of eye-tracking features by evaluating the NER models both on
individual datasets and in cross-domain settings.
Comment: Accepted at NAACL-HLT 201
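The type-level generalization described above can be sketched as a simple lexicon: average the recorded gaze measures over every occurrence of a word type during training, then look them up at test time. This is an illustrative sketch, not the paper's exact implementation; the feature names and the zero-vector fallback are assumptions.

```python
def build_type_level_gaze_lexicon(tokens_with_gaze):
    """Average token-level gaze features (e.g. fixation duration,
    number of fixations) over all occurrences of each word type."""
    sums, counts = {}, {}
    for word, feats in tokens_with_gaze:
        key = word.lower()
        if key not in sums:
            sums[key] = [0.0] * len(feats)
            counts[key] = 0
        for i, value in enumerate(feats):
            sums[key][i] += value
        counts[key] += 1
    return {w: [s / counts[w] for s in total] for w, total in sums.items()}

def type_level_gaze_features(word, lexicon, n_features=2):
    # Unseen word types fall back to a zero vector, so no recorded
    # eye-tracking data is required for new text at test time.
    return lexicon.get(word.lower(), [0.0] * n_features)
```

The resulting vectors would be concatenated with ordinary word embeddings before they enter the NER tagger.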
Empower Sequence Labeling with Task-Aware Neural Language Model
Linguistic sequence labeling is a general modeling approach that encompasses
a variety of problems, such as part-of-speech tagging and named entity
recognition. Recent advances in neural networks (NNs) make it possible to build
reliable models without handcrafted features. However, in many cases, it is
hard to obtain sufficient annotations to train these models. In this study, we
develop a novel neural framework to extract abundant knowledge hidden in raw
texts to empower the sequence labeling task. Besides word-level knowledge
contained in pre-trained word embeddings, character-aware neural language
models are incorporated to extract character-level knowledge. Transfer learning
techniques are further adopted to mediate different components and guide the
language model towards the key knowledge. Compared to previous methods, this
task-specific knowledge allows us to adopt a more concise model and conduct
more efficient training. Different from most transfer learning methods, the
proposed framework does not rely on any additional supervision. It extracts
knowledge from self-contained order information of training sequences.
Extensive experiments on benchmark datasets demonstrate the effectiveness of
leveraging character-level knowledge and the efficiency of co-training. For
example, on the CoNLL03 NER task, model training completes in about 6 hours on
a single GPU, reaching an F1 score of 91.71 (±0.10) without using any extra
annotation.
Comment: AAAI 201
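The claim that no additional supervision is needed can be made concrete: the auxiliary language model's targets come for free from the order of characters in the raw training text, and a weighted joint objective lets it guide the shared encoder. Both functions below are illustrative sketches; the weight value is an assumption, not the paper's setting.

```python
def char_lm_training_pairs(sentence):
    """The auxiliary language model needs no extra labels: its
    supervision is the self-contained order of characters in the raw
    training text (predict each next character from its predecessor)."""
    chars = list(sentence)
    return list(zip(chars[:-1], chars[1:]))

def joint_loss(labeling_loss, lm_loss, lm_weight=0.05):
    """Illustrative co-training objective: the character-aware LM is an
    auxiliary task whose weight mediates how strongly it shapes the
    shared character encoder (lm_weight is a hypothetical value)."""
    return labeling_loss + lm_weight * lm_loss
```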
Same but Different: Distant Supervision for Predicting and Understanding Entity Linking Difficulty
Entity Linking (EL) is the task of automatically identifying entity mentions
in a piece of text and resolving them to a corresponding entity in a reference
knowledge base like Wikipedia. There is a large number of EL tools available
for different types of documents and domains, yet EL remains a challenging task
where the lack of precision on particularly ambiguous mentions often spoils the
usefulness of automated disambiguation results in real applications. A priori
approximations of the difficulty of linking a particular entity mention can
facilitate flagging of critical cases as part of semi-automated EL systems,
while detecting latent factors that affect the EL performance, like
corpus-specific features, can provide insights on how to improve a system based
on the special characteristics of the underlying corpus. In this paper, we
first introduce a consensus-based method to generate difficulty labels for
entity mentions on arbitrary corpora. The difficulty labels are then exploited
as training data for a supervised classification task able to predict the EL
difficulty of entity mentions using a variety of features. Experiments over a
corpus of news articles show that EL difficulty can be estimated with high
accuracy, revealing also latent features that affect EL performance. Finally,
evaluation results demonstrate the effectiveness of the proposed method to
inform semi-automated EL pipelines.
Comment: Preprint of paper accepted for publication in the 34th ACM/SIGAPP
Symposium On Applied Computing (SAC 2019)
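One way to read the consensus-based labeling is as agreement among several EL tools: a mention most linkers resolve identically is easy, a contested one is hard. The binary easy/hard scheme and the 0.8 threshold below are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def consensus_difficulty(tool_predictions, agreement_threshold=0.8):
    """Label a mention 'easy' when a large majority of EL tools vote
    for the same target entity, 'hard' otherwise (hypothetical scheme)."""
    votes = Counter(tool_predictions)
    top_share = votes.most_common(1)[0][1] / len(tool_predictions)
    return "easy" if top_share >= agreement_threshold else "hard"
```

Labels produced this way can then serve as distant supervision for a classifier that predicts difficulty from mention features alone.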
Can BERT Dig It? -- Named Entity Recognition for Information Retrieval in the Archaeology Domain
The amount of archaeological literature is growing rapidly. Until recently,
these data were only accessible through metadata search. We implemented a text
retrieval engine for a large archaeological text collection ( Million
words). In archaeological IR, domain-specific entities such as locations, time
periods, and artefacts, play a central role. This motivated the development of
a named entity recognition (NER) model to annotate the full collection with
archaeological named entities. In this paper, we present ArcheoBERTje, a BERT
model pre-trained on Dutch archaeological texts. We compare the model's quality
and output on a Named Entity Recognition task to a generic multilingual model
and a generic Dutch model. We also investigate ensemble methods for combining
multiple BERT models, and combining the best BERT model with a domain thesaurus
using Conditional Random Fields (CRF). We find that ArcheoBERTje outperforms
both the multilingual and the Dutch model significantly, with a smaller standard
deviation between runs, reaching an average F1 score of 0.735. The model also
outperforms ensemble methods combining the three models. Combining ArcheoBERTje
predictions and explicit domain knowledge from the thesaurus did not increase
the F1 score. We quantitatively and qualitatively analyse the differences
between the vocabulary and output of the BERT models on the full collection and
provide some valuable insights into the effect of fine-tuning for specific
domains. Our results indicate that for a highly specific text domain such as
archaeology, further pre-training on domain-specific data increases the model's
quality on NER by a much larger margin than shown for other domains in the
literature, and that domain-specific pre-training makes the addition of domain
knowledge from a thesaurus unnecessary.
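The ensemble methods investigated (which, per the results above, did not beat the single domain-adapted model) can be sketched as token-level majority voting over the three BERT models' tag sequences. This is a minimal illustrative variant, not necessarily the exact combination scheme used.

```python
from collections import Counter

def majority_vote(tag_sequences):
    """Merge per-token NER predictions from several models by majority
    vote. For ties, Counter.most_common keeps insertion order, so the
    first model's tag wins."""
    merged = []
    for token_tags in zip(*tag_sequences):
        merged.append(Counter(token_tags).most_common(1)[0][0])
    return merged
```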