Cheaper and Better: Selecting Good Workers for Crowdsourcing
Crowdsourcing provides a popular paradigm for data collection at scale. We
study the problem of selecting subsets of workers from a given worker pool to
maximize the accuracy under a budget constraint. One natural question is
whether we should hire as many workers as the budget allows, or restrict to a
small number of top-quality workers. By theoretically analyzing the error rate
of a typical setting in crowdsourcing, we frame worker selection as a
combinatorial optimization problem and propose an algorithm to solve it
efficiently. Empirical results on both simulated and real-world datasets show
that our algorithm is able to select a small number of high-quality workers,
and performs as well as, and sometimes even better than, the much larger
crowds that the budget allows.
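As a rough sketch of the setting (not the paper's algorithm), the snippet below brute-forces the budgeted worker-selection problem: it enumerates affordable subsets of workers and keeps the one with the lowest exact majority-vote error. The accuracies, costs, and budget are invented illustrative values.

```python
import itertools
import math

def majority_vote_error(accuracies):
    """Exact probability that a majority vote of independent workers with the
    given accuracies is wrong (ties counted as errors)."""
    n = len(accuracies)
    error = 0.0
    for correct in itertools.product([0, 1], repeat=n):
        p = math.prod(a if c else 1 - a for a, c in zip(accuracies, correct))
        if 2 * sum(correct) <= n:
            error += p
    return error

def select_workers(accuracies, costs, budget):
    """Brute-force search over all affordable worker subsets for the lowest
    majority-vote error (illustrative only; the paper instead derives an
    efficient algorithm for this combinatorial optimization problem)."""
    workers = range(len(accuracies))
    best_subset, best_error = (), 1.0
    for r in range(1, len(accuracies) + 1):
        for subset in itertools.combinations(workers, r):
            if sum(costs[i] for i in subset) > budget:
                continue
            err = majority_vote_error([accuracies[i] for i in subset])
            if err < best_error:
                best_subset, best_error = subset, err
    return best_subset, best_error

# A few high-quality workers can beat spending the whole budget on a crowd.
accuracies = [0.95, 0.90, 0.60, 0.55, 0.55, 0.55]
costs = [3, 3, 1, 1, 1, 1]
print(select_workers(accuracies, costs, budget=6))
```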
Noisy Submodular Maximization via Adaptive Sampling with Applications to Crowdsourced Image Collection Summarization
We address the problem of maximizing an unknown submodular function that can
only be accessed via noisy evaluations. Our work is motivated by the task of
summarizing content, e.g., image collections, by leveraging users' feedback in the
form of clicks or ratings. For summarization tasks with the goal of maximizing
coverage and diversity, submodular set functions are a natural choice. When the
underlying submodular function is unknown, users' feedback can provide noisy
evaluations of the function that we seek to maximize. We provide a generic
algorithm -- \submM{} -- for maximizing an unknown submodular function under
cardinality constraints. This algorithm makes use of a novel exploration module
-- \blbox{} -- that proposes good elements based on adaptively sampling noisy
function evaluations. \blbox{} is able to accommodate different kinds of
observation models such as value queries and pairwise comparisons. We provide
PAC-style guarantees on the quality and sampling cost of the solution obtained
by \submM{}. We demonstrate the effectiveness of our approach in an
interactive, crowdsourced image collection summarization application. Comment: Extended version of AAAI'16 paper
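A minimal sketch of the general idea, not the paper's \submM{}/\blbox{} procedure: greedy maximization under a cardinality constraint where each marginal gain is estimated by averaging a fixed number of noisy value queries. The coverage function and topic tags are made up for illustration.

```python
import random

def noisy_greedy_max(noisy_value, ground_set, k, samples=25):
    """Greedy maximization of a submodular function under a cardinality
    constraint when only noisy evaluations are available. Marginal gains are
    estimated by averaging repeated noisy queries, a fixed-budget stand-in for
    the adaptive sampling module described in the abstract."""
    selected = set()
    for _ in range(k):
        best_elem, best_gain = None, float("-inf")
        for elem in ground_set - selected:
            estimates = [noisy_value(selected | {elem}) - noisy_value(selected)
                         for _ in range(samples)]
            gain = sum(estimates) / samples
            if gain > best_gain:
                best_elem, best_gain = elem, gain
        selected.add(best_elem)
    return selected

# Toy coverage function over images tagged with topics, observed with noise.
topics = {1: {"beach"}, 2: {"beach", "sunset"}, 3: {"city"}, 4: {"sunset", "city"}}

def noisy_coverage(subset):
    covered = set().union(*(topics[i] for i in subset)) if subset else set()
    return len(covered) + random.gauss(0, 0.3)  # value query with noise

print(noisy_greedy_max(noisy_coverage, set(topics), k=2))
```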
Crowdsourcing subjective annotations using pairwise comparisons reduces bias and error compared to the majority-vote method
How to better reduce measurement variability and bias introduced by
subjectivity in crowdsourced labelling remains an open question. We introduce a
theoretical framework for understanding how random error and measurement bias
enter into crowdsourced annotations of subjective constructs. We then propose a
pipeline that combines pairwise comparison labelling with Elo scoring, and
demonstrate that it outperforms the ubiquitous majority-voting method in
reducing both types of measurement error. To assess the performance of the
labelling approaches, we constructed an agent-based model of crowdsourced
labelling that lets us introduce different types of subjectivity into the
tasks. We find that, under most conditions with task subjectivity, the
comparison approach produces higher scores. It is also less susceptible to
inflating bias, which majority voting tends to
do. To facilitate applications, we show with simulated and real-world data that
the number of required random comparisons for the same classification accuracy
scales log-linearly with the number of labelled items. We also
implemented the Elo system as an open-source Python package. Comment: Accepted for publication at ACM CSCW 202
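A minimal sketch of the pairwise-comparison-plus-Elo idea, assuming a standard Elo update rather than the released package's API; the latent item qualities and the 20% error rate of the simulated annotator below are illustrative assumptions.

```python
import random

def elo_scores(items, compare, n_comparisons, k=32, start=1000.0):
    """Score items from random pairwise comparisons with the standard Elo
    update; compare(a, b) returns whichever item the annotator judges higher
    on the construct of interest."""
    ratings = {item: start for item in items}
    for _ in range(n_comparisons):
        a, b = random.sample(items, 2)
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400.0))
        outcome_a = 1.0 if compare(a, b) == a else 0.0
        ratings[a] += k * (outcome_a - expected_a)
        ratings[b] += k * ((1.0 - outcome_a) - (1.0 - expected_a))
    return ratings

# Simulated subjective annotator: prefers the item with the higher latent
# quality, but errs 20% of the time.
latent = {f"item{i}": i for i in range(5)}

def noisy_compare(a, b):
    better, worse = (a, b) if latent[a] > latent[b] else (b, a)
    return better if random.random() < 0.8 else worse

print(elo_scores(list(latent), noisy_compare, n_comparisons=500))
```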