Weakly Supervised Cross-Lingual Named Entity Recognition via Effective Annotation and Representation Projection
State-of-the-art named entity recognition (NER) systems are supervised
machine learning models that require large amounts of manually annotated data
to achieve high accuracy. However, annotating NER data by hand is expensive
and time-consuming, and can be quite difficult for a new language. In this
paper, we present two weakly supervised approaches for cross-lingual NER with
no human annotation in a target language. The first approach is to create
automatically labeled NER data for a target language via annotation projection
on comparable corpora, where we develop a heuristic scheme that effectively
selects good-quality projection-labeled data from noisy data. The second
approach is to project distributed representations of words (word embeddings)
from a target language to a source language, so that the source-language NER
system can be applied to the target language without re-training. We also
design two co-decoding schemes that effectively combine the outputs of the two
projection-based approaches. We evaluate the performance of the proposed
approaches on both in-house and open NER data for several target languages. The
results show that the combined systems outperform three other weakly supervised
approaches on the CoNLL data.
Comment: 11 pages, The 55th Annual Meeting of the Association for Computational Linguistics (ACL), 2017
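The annotation-projection approach described above lends itself to a short sketch. The following is a minimal, hypothetical illustration of the general idea (copy source labels through word alignments, then keep only sentences whose projection looks complete); the scoring heuristic, the data layout, and the 0.9 threshold are assumptions for illustration, not the paper's actual selection scheme.

    def project_labels(src_labels, align, tgt_len):
        """Copy source token labels onto aligned target tokens."""
        tgt_labels = ["O"] * tgt_len
        for s, t in align:  # (source_index, target_index) word-alignment pairs
            tgt_labels[t] = src_labels[s]
        return tgt_labels

    def projection_quality(src_labels, align):
        """Heuristic score: fraction of source entity tokens that were aligned."""
        entity_idxs = [i for i, lab in enumerate(src_labels) if lab != "O"]
        if not entity_idxs:
            return 1.0
        aligned_src = {s for s, _ in align}
        return sum(i in aligned_src for i in entity_idxs) / len(entity_idxs)

    def select_projected_data(corpus, threshold=0.9):
        """Keep only sentence pairs whose projected labels look reliable."""
        kept = []
        for src_labels, align, tgt_tokens in corpus:
            if projection_quality(src_labels, align) >= threshold:
                kept.append((tgt_tokens,
                             project_labels(src_labels, align, len(tgt_tokens))))
        return kept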
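The representation-projection approach can be sketched similarly: learn a linear map from the target embedding space into the source space, then feed the mapped vectors to the unchanged source-language NER model. This is a generic least-squares construction in the spirit of the abstract; the seed dictionary and the closed-form fit are assumptions, not necessarily the paper's exact formulation.

    import numpy as np

    def learn_projection(tgt_seed, src_seed):
        """Least-squares map W such that tgt_seed @ W ~ src_seed.

        tgt_seed, src_seed: (n_pairs, dim) arrays holding the embeddings
        of translation pairs from a small seed dictionary (an assumption).
        """
        W, *_ = np.linalg.lstsq(tgt_seed, src_seed, rcond=None)
        return W

    def project_embeddings(tgt_emb, W):
        """Map every target-language vector into the source space."""
        return tgt_emb @ W

    # Usage: substitute the projected target vectors for the source model's
    # embedding lookup and decode as usual -- no re-training of the NER model.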
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
We study the open-domain named entity recognition (NER) problem under distant
supervision. Distant supervision does not require large amounts of manual
annotation, but the distant labels it derives from external knowledge bases
are highly incomplete and noisy. To address this challenge, we propose a new
computational framework -- BOND, which leverages the power of pre-trained
language models (e.g., BERT and RoBERTa) to improve the prediction performance
of NER models. Specifically, we propose a two-stage training algorithm: in the
first stage, we adapt the pre-trained language model to the NER task using the
distant labels, which significantly improves both recall and precision; in the
second stage, we drop the distant labels and propose a self-training approach
to further improve the model performance. Thorough experiments on 5
benchmark datasets demonstrate the superiority of BOND over existing distantly
supervised NER methods. The code and distantly labeled data have been released
in https://github.com/cliang1453/BOND.
Comment: Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '20)
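As a rough illustration of the two-stage recipe described above, here is a hedged sketch of the training skeleton; it is not the released BOND code (see the repository linked above), and `train`, `predict_probs`, the confidence threshold, and the number of self-training rounds are all placeholders.

    import copy

    CONF_THRESHOLD = 0.9  # assumed; the actual selection criterion may differ

    def stage1(model, distant_data, train):
        """Stage 1: adapt the pre-trained LM to NER on noisy distant labels,
        training briefly so the model does not overfit the label noise."""
        return train(model, distant_data, epochs=1)

    def stage2(model, unlabeled_sentences, train, predict_probs, rounds=5):
        """Stage 2: drop the distant labels and self-train. A frozen teacher
        pseudo-labels the data; the student fits only confident tokens and
        then becomes the next teacher."""
        student = model
        for _ in range(rounds):
            teacher = copy.deepcopy(student)
            pseudo_batch = []
            for tokens in unlabeled_sentences:
                probs = predict_probs(teacher, tokens)       # shape: (len, n_tags)
                labels = probs.argmax(axis=-1)
                keep = probs.max(axis=-1) >= CONF_THRESHOLD  # mask uncertain tokens
                pseudo_batch.append((tokens, labels, keep))
            student = train(student, pseudo_batch, epochs=1)
        return student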