A Bayesian Approach for Sequence Tagging with Crowds
Current methods for sequence tagging, a core task in NLP, are data-hungry,
which motivates the use of crowdsourcing as a cheap way to obtain labelled
data. However, annotators are often unreliable and current aggregation methods
cannot capture common types of span annotation errors. To address this, we
propose a Bayesian method for aggregating sequence tags that reduces errors by
modelling sequential dependencies between the annotations as well as the
ground-truth labels. By taking a Bayesian approach, we account for uncertainty
in the model due to both annotator errors and the lack of data for modelling
annotators who complete few tasks. We evaluate our model on crowdsourced data
for named entity recognition, information extraction and argument mining,
showing that our sequential model outperforms the previous state of the art. We
also find that our approach can reduce crowdsourcing costs through more
effective active learning, as it better captures uncertainty in the sequence
labels when there are few annotations.
Comment: Accepted for EMNLP 2019
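The core idea lends itself to a compact illustration. Below is a minimal sketch (in NumPy, not the authors' actual model or code) of sequential crowd-label aggregation: each annotator gets a confusion matrix over tags, true tags are coupled through a Markov transition prior, and EM alternates a forward-backward pass with smoothed reliability updates. The Dirichlet-style smoothing term `alpha` only stands in for the paper's fuller Bayesian treatment of sparsely-observed annotators; all names and priors here are illustrative assumptions.

```python
# Minimal sketch of sequential crowd-label aggregation (illustrative only):
# each annotator j has a confusion matrix pi[j][t, l] = P(label l | true tag t),
# true tags follow a first-order Markov chain, and EM alternates a
# forward-backward pass with smoothed annotator updates.
import numpy as np

def em_aggregate(anno, n_tags, n_iters=20, alpha=1.0):
    """anno: int array (n_annotators, seq_len); -1 marks a missing label."""
    J, T = anno.shape
    pi = np.full((J, n_tags, n_tags), 0.4 / (n_tags - 1))
    pi[:, np.arange(n_tags), np.arange(n_tags)] = 0.6  # start annotators mostly reliable
    A = np.full((n_tags, n_tags), 1.0 / n_tags)  # transition prior, kept uniform for brevity
    for _ in range(n_iters):
        # E-step: per-position emission likelihood of each candidate true tag
        em = np.ones((T, n_tags))
        for j in range(J):
            for t in range(T):
                if anno[j, t] >= 0:
                    em[t] *= pi[j, :, anno[j, t]]
        # forward-backward over the true-tag chain
        fwd = np.zeros((T, n_tags)); bwd = np.ones((T, n_tags))
        fwd[0] = em[0] / n_tags; fwd[0] /= fwd[0].sum()
        for t in range(1, T):
            fwd[t] = em[t] * (fwd[t - 1] @ A); fwd[t] /= fwd[t].sum()
        for t in range(T - 2, -1, -1):
            bwd[t] = A @ (em[t + 1] * bwd[t + 1]); bwd[t] /= bwd[t].sum()
        post = fwd * bwd; post /= post.sum(axis=1, keepdims=True)
        # M-step: Dirichlet-smoothed confusion updates; alpha keeps estimates
        # reasonable for annotators who labelled very few items
        for j in range(J):
            counts = np.full((n_tags, n_tags), alpha)
            for t in range(T):
                if anno[j, t] >= 0:
                    counts[:, anno[j, t]] += post[t]
            pi[j] = counts / counts.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)
```

The per-position posterior `post` is also what an active-learning loop would query against: positions where it stays flat after aggregation are the ones worth sending back to the crowd.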
Adversarial Learning for Chinese NER from Crowd Annotations
To obtain new labeled data quickly and at low cost, we can turn to
crowdsourcing; in exchange, however, crowd annotations from non-experts may be
of lower quality than those from experts.
In this paper, we propose an approach to performing crowd annotation learning
for Chinese Named Entity Recognition (NER) to make full use of the noisy
sequence labels from multiple annotators. Inspired by adversarial learning, our
approach uses a common Bi-LSTM and a private Bi-LSTM for representing
annotator-generic and annotator-specific information, respectively. The
annotator-generic information is the common knowledge about entities that is
easily mastered by the crowd. Finally, we
build our Chinese NE tagger based on the LSTM-CRF model. In our experiments, we
create two data sets for Chinese NER tasks from two domains. The experimental
results show that our system achieves better scores than strong baseline
systems.
Comment: 8 pages, AAAI-2018
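A hedged sketch of the common/private split follows: the shared Bi-LSTM is pushed toward annotator-generic features by a discriminator trained through a gradient-reversal layer, while the private Bi-LSTM absorbs annotator-specific noise. Module names, layer sizes, and the omission of the CRF decoder are simplifying assumptions, not the paper's exact architecture.

```python
# Illustrative common/private Bi-LSTM with adversarial training: a
# discriminator tries to identify the annotator from the shared features,
# and a gradient-reversal layer makes those features annotator-generic.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip gradients flowing back into the shared encoder

class CrowdNER(nn.Module):
    def __init__(self, vocab, emb=100, hid=100, n_tags=9, n_annotators=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.common = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.private = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(4 * hid, n_tags)         # CRF layer omitted for brevity
        self.discrim = nn.Linear(2 * hid, n_annotators)  # adversary on common features

    def forward(self, tokens):
        e = self.embed(tokens)
        hc, _ = self.common(e)    # annotator-generic features
        hp, _ = self.private(e)   # annotator-specific features
        tag_scores = self.tagger(torch.cat([hc, hp], dim=-1))
        annotator_logits = self.discrim(GradReverse.apply(hc).mean(dim=1))
        return tag_scores, annotator_logits
```

The training objective would combine the tagging loss with the (reversed) annotator-classification loss; at test time only the tagging path is used.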
Deep learning from crowds
Over the last few years, deep learning has revolutionized the field of
machine learning by dramatically improving the state-of-the-art in various
domains. However, as the size of supervised artificial neural networks grows,
typically so does the need for larger labeled datasets. Recently, crowdsourcing
has established itself as an efficient and cost-effective solution for labeling
large sets of data in a scalable manner, but it often requires aggregating
labels from multiple noisy contributors with different levels of expertise. In
this paper, we address the problem of learning deep neural networks from
crowds. We begin by describing an EM algorithm for jointly learning the
parameters of the network and the reliabilities of the annotators. Then, a
novel general-purpose crowd layer is proposed, which allows us to train deep
neural networks end-to-end, directly from the noisy labels of multiple
annotators, using only backpropagation. We empirically show that the proposed
approach is able to internally capture the reliability and biases of different
annotators and achieve new state-of-the-art results for various crowdsourced
datasets across different settings, namely classification, regression and
sequence labeling.
Comment: 10 pages, The Thirty-Second AAAI Conference on Artificial
Intelligence (AAAI), 2018
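A minimal sketch of such a crowd layer for the classification setting (dimensions, initialization, and loss masking are my assumptions; the paper also covers regression and sequence-labeling variants): each annotator owns a matrix that maps the base network's class distribution to that annotator's predicted label distribution, and missing annotations are masked out of the loss, so everything trains with plain backpropagation.

```python
# Sketch of a "crowd layer" for classification: the base network produces a
# distribution over true labels, and each annotator r gets a matrix W_r that
# maps it to annotator r's predicted label distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrowdLayer(nn.Module):
    def __init__(self, n_classes, n_annotators):
        super().__init__()
        # identity initialization: annotators start out assumed reliable
        self.weights = nn.Parameter(
            torch.eye(n_classes).repeat(n_annotators, 1, 1))

    def forward(self, probs):
        # probs: (batch, n_classes) -> (batch, n_annotators, n_classes)
        return torch.einsum('bc,rkc->brk', probs, self.weights)

def crowd_loss(annotator_logits, labels):
    """labels: (batch, n_annotators) long, -1 where an annotator gave no label."""
    mask = labels >= 0
    return F.cross_entropy(annotator_logits[mask], labels[mask])
```

At test time the crowd layer is discarded and the base network's output is used directly; the learned per-annotator matrices are what capture each contributor's reliability and biases.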
Modelling Instance-Level Annotator Reliability for Natural Language Labelling Tasks
When constructing models that learn from noisy labels produced by multiple
annotators, it is important to accurately estimate the reliability of
annotators. Annotators may provide labels of inconsistent quality due to their
varying expertise and reliability in a domain. Previous studies have mostly
focused on estimating each annotator's overall reliability on the entire
annotation task. However, in practice, the reliability of an annotator may
depend on each specific instance. Only a limited number of studies have
investigated modelling per-instance reliability, and these considered only
binary labels. In this paper, we propose an unsupervised model that can handle
both binary and multi-class labels. It can automatically estimate the
per-instance reliability of each annotator and the correct label for each
instance. We specify our model as a probabilistic model which incorporates
neural networks to model the dependency between latent variables and instances.
For evaluation, the proposed method is applied to both synthetic and real data,
including two labelling tasks: text classification and textual entailment.
Experimental results demonstrate that our method can not only accurately
estimate the reliability of annotators across different instances, but also
achieve superior performance in predicting the correct labels and detecting the
least reliable annotators, compared to state-of-the-art baselines.
Comment: 9 pages, 1 figure, 10 tables, 2019 Annual Conference of the North
American Chapter of the Association for Computational Linguistics (NAACL 2019)
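To make the idea concrete, here is a hedged sketch (the architecture and likelihood are my assumptions, not the paper's exact specification): one head predicts the latent true label from the instance, another predicts each annotator's per-instance reliability, and both are trained by maximizing the marginal likelihood of the observed crowd labels under a symmetric-error noise model.

```python
# Sketch of instance-dependent annotator reliability: the reliability head
# conditions on the instance representation, so the same annotator can be
# trusted on some items and not on others.
import torch
import torch.nn as nn

class InstanceReliabilityModel(nn.Module):
    def __init__(self, in_dim, n_classes, n_annotators, hid=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.label_head = nn.Linear(hid, n_classes)   # posterior over true label
        self.rel_head = nn.Linear(hid, n_annotators)  # per-instance reliabilities
        self.n_classes = n_classes

    def neg_log_marginal(self, x, labels):
        """x: (B, in_dim); labels: (B, J) long, -1 = annotator skipped the item."""
        h = self.encoder(x)
        p_z = torch.softmax(self.label_head(h), dim=-1)   # (B, C)
        rel = torch.sigmoid(self.rel_head(h))             # (B, J)
        B, J = labels.shape
        C = self.n_classes
        obs = (labels >= 0).unsqueeze(-1)                 # (B, J, 1)
        onehot = torch.zeros(B, J, C).scatter_(
            2, labels.clamp(min=0).unsqueeze(-1), 1.0)
        # P(observed label | true label z): rel if it matches z, uniform error otherwise
        lik = (rel.unsqueeze(-1) * onehot
               + ((1 - rel) / (C - 1)).unsqueeze(-1) * (1 - onehot))
        log_lik = torch.where(obs, lik.clamp_min(1e-12).log(), torch.zeros_like(lik))
        # marginalize out the latent true label z
        return -torch.logsumexp(
            p_z.clamp_min(1e-12).log() + log_lik.sum(dim=1), dim=-1).mean()
```

After training with any standard optimizer, `rel` scores each annotator's reliability on each instance and `label_head` gives the posterior over the correct label; the paper's probabilistic model is richer, and this sketch mirrors only the high-level structure.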