Reliable classification by unreliable crowds
We consider the use of error-control codes and decoding algorithms to perform reliable classification using unreliable and anonymous human crowd workers by adapting coding-theoretic techniques for the specific crowdsourcing application. We develop an ordering principle for the quality of crowds and describe how system performance changes with the quality of the crowd. We demonstrate the effectiveness of the proposed coding scheme using both simulated data and real datasets from Amazon Mechanical Turk, a crowdsourcing microtask platform. Results suggest that good codes may improve the performance of the crowdsourcing task over typical majority-vote approaches.
Index Terms — crowdsourcing, classification, error-control code
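A minimal sketch of the contrast the abstract draws, in Python: a plain majority vote versus a soft-decision (log-likelihood-ratio) vote that treats each worker as a binary symmetric channel with an assumed reliability. This only illustrates the coding-theoretic viewpoint, not the paper's actual code construction or decoder; the function names, reliabilities, and toy data are hypothetical.

```python
import numpy as np

def majority_vote(labels):
    """Plain majority vote over binary worker labels in {0, 1}."""
    return int(np.sum(labels) > len(labels) / 2)

def soft_decision_vote(labels, reliabilities):
    """Weight each worker by log(p / (1 - p)), i.e. maximum-likelihood decoding
    over independent binary symmetric channels (a stand-in for the paper's
    code-based decoder)."""
    llr = 0.0
    for y, p in zip(labels, reliabilities):
        w = np.log(p / (1 - p))            # confidence weight for this worker
        llr += w if y == 1 else -w
    return int(llr > 0)

# Toy example: three noisy workers and one highly reliable one
labels = np.array([0, 0, 1, 1])
reliabilities = np.array([0.55, 0.55, 0.60, 0.95])
print(majority_vote(labels))                       # tie, broken to 0
print(soft_decision_vote(labels, reliabilities))   # reliable worker tips it to 1
```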
Supervised Collective Classification for Crowdsourcing
Crowdsourcing utilizes the wisdom of crowds for collective classification via
information (e.g., labels of an item) provided by labelers. Current
crowdsourcing algorithms are mainly unsupervised methods that are unaware of
the quality of crowdsourced data. In this paper, we propose a supervised
collective classification algorithm that aims to identify reliable labelers
from the training data (e.g., items with known labels). The reliability (i.e.,
weighting factor) of each labeler is determined via a saddle point algorithm.
Results on several crowdsourced datasets show that supervised methods can
achieve better classification accuracy than unsupervised methods, and that our
proposed method outperforms other algorithms.
Comment: to appear in the IEEE Global Communications Conference (GLOBECOM)
Workshop on Networking and Collaboration Issues for the Internet of
Everything.
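As an illustration of the supervised idea (not the paper's saddle point algorithm), the sketch below weights each labeler by its accuracy on items with known ground truth and then takes a weighted vote on the remaining items; the function names and toy data are hypothetical.

```python
import numpy as np

def estimate_weights(train_labels, true_labels):
    """Reliability weight per labeler = accuracy on items with known labels
    (a simple proxy for the paper's saddle-point weighting factors).
    train_labels: (n_items, n_labelers), true_labels: (n_items,)."""
    return (train_labels == true_labels[:, None]).mean(axis=0)

def weighted_vote(test_labels, weights, n_classes):
    """Weighted majority vote per test item."""
    preds = np.zeros(test_labels.shape[0], dtype=int)
    for i in range(test_labels.shape[0]):
        scores = np.zeros(n_classes)
        for j, y in enumerate(test_labels[i]):
            scores[y] += weights[j]
        preds[i] = scores.argmax()
    return preds

# Toy run: labeler 0 is accurate on the training items, labeler 1 is not
train = np.array([[1, 0], [0, 0], [1, 1]])
truth = np.array([1, 0, 1])
w = estimate_weights(train, truth)            # e.g. [1.0, 0.67]
test = np.array([[1, 0], [0, 1]])
print(weighted_vote(test, w, n_classes=2))    # follows the reliable labeler: [1, 0]
```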
Modelling Instance-Level Annotator Reliability for Natural Language Labelling Tasks
When constructing models that learn from noisy labels produced by multiple
annotators, it is important to accurately estimate the reliability of
annotators. Annotators may provide labels of inconsistent quality due to their
varying expertise and reliability in a domain. Previous studies have mostly
focused on estimating each annotator's overall reliability on the entire
annotation task. However, in practice, the reliability of an annotator may
depend on each specific instance. Only a limited number of studies have
investigated modelling per-instance reliability and these only considered
binary labels. In this paper, we propose an unsupervised model which can handle
both binary and multi-class labels. It can automatically estimate the
per-instance reliability of each annotator and the correct label for each
instance. We specify our model as a probabilistic model which incorporates
neural networks to model the dependency between latent variables and instances.
For evaluation, the proposed method is applied to both synthetic and real data,
including two labelling tasks: text classification and textual entailment.
Experimental results demonstrate that our method can not only accurately
estimate the reliability of annotators across different instances, but also
achieve superior performance in predicting the correct labels and detecting the
least reliable annotators compared to state-of-the-art baselines.
Comment: 9 pages, 1 figure, 10 tables, 2019 Annual Conference of the North
American Chapter of the Association for Computational Linguistics (NAACL 2019).
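A rough sketch of the generative assumption behind per-instance reliability, with a simple dot product between instance features and annotator parameters standing in for the neural network the paper uses; it simulates annotations under that assumption rather than performing the paper's probabilistic inference, and all names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_annotations(instance_feats, annotator_params, true_labels, n_classes):
    """Per-instance reliability sketch: annotator j's chance of copying the
    true label of instance i depends on both the instance and the annotator
    (here a dot product; the paper models this dependency with a neural net)."""
    n_items, n_annotators = len(true_labels), annotator_params.shape[0]
    labels = np.zeros((n_items, n_annotators), dtype=int)
    for i in range(n_items):
        for j in range(n_annotators):
            r_ij = sigmoid(instance_feats[i] @ annotator_params[j])  # per-instance reliability
            if rng.random() < r_ij:
                labels[i, j] = true_labels[i]              # faithful label
            else:
                labels[i, j] = rng.integers(n_classes)     # noisy label
    return labels

feats = rng.normal(size=(5, 3))      # instance features
params = rng.normal(size=(2, 3))     # two annotators
z = rng.integers(3, size=5)          # latent true labels
print(simulate_annotations(feats, params, z, n_classes=3))
```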
Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems
Crowdsourcing systems commonly face the problem of aggregating multiple
judgments provided by potentially unreliable workers. In addition, several
aspects of the design of efficient crowdsourcing processes, such as defining
workers' bonuses, fair prices, and time limits for tasks, involve knowledge
of the likely duration of the task at hand. Bringing these together, in this
work we introduce a new time-sensitive Bayesian aggregation method that
simultaneously estimates a task's duration and obtains reliable aggregations of
crowdsourced judgments. Our method, called BCCTime, builds on the key insight
that the time taken by a worker to perform a task is an important indicator of
the likely quality of the produced judgment. To capture this, BCCTime uses
latent variables to represent the uncertainty about the workers' completion
time, the tasks' duration and the workers' accuracy. To relate the quality of a
judgment to the time a worker spends on a task, our model assumes that each
task is completed within a latent time window within which all workers with a
propensity to genuinely attempt the labelling task (i.e., no spammers) are
expected to submit their judgments. In contrast, workers with a lower
propensity to valid labeling, such as spammers, bots or lazy labelers, are
assumed to perform tasks considerably faster or slower than the time required
by normal workers. Specifically, we use efficient message-passing Bayesian
inference to learn approximate posterior probabilities of (i) the confusion
matrix of each worker, (ii) the propensity to valid labeling of each worker,
(iii) the unbiased duration of each task and (iv) the true label of each task.
Using two real-world public datasets for entity linking tasks, we show that
BCCTime produces up to 11% more accurate classifications and up to 100% more
informative estimates of a task's duration compared to state-of-the-art
methods.
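A heuristic sketch of the underlying intuition only, not BCCTime itself (which infers the time window, worker propensities, confusion matrices, and true labels jointly via Bayesian message passing): judgments submitted outside a plausible time window are downweighted before voting. The window bounds t_min and t_max, the function name, and the toy data are hypothetical.

```python
import numpy as np

def time_filtered_vote(labels, times, n_classes, t_min, t_max):
    """Downweight judgments whose completion time falls outside the plausible
    window [t_min, t_max] (a crude stand-in for the latent time window that
    BCCTime infers), then take a weighted vote."""
    weights = np.where((times >= t_min) & (times <= t_max), 1.0, 0.1)
    scores = np.zeros(n_classes)
    for y, w in zip(labels, weights):
        scores[y] += w
    return int(scores.argmax())

# Toy task: three suspiciously fast votes for class 0, two normal-speed votes for 1
labels = np.array([0, 0, 0, 1, 1])
times = np.array([1.0, 0.8, 1.3, 35.0, 41.0])   # seconds
print(time_filtered_vote(labels, times, 2, t_min=10.0, t_max=120.0))  # -> 1, unlike plain majority
```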