3,180 research outputs found
Crowdsourcing Paper Screening in Systematic Literature Reviews
Literature reviews allow scientists to stand on the shoulders of giants,
showing promising directions, summarizing progress, and pointing out existing
challenges in research. At the same time, conducting a systematic literature
review is a laborious and consequently expensive process. In the last decade,
there have been a few studies on crowdsourcing in literature reviews. This paper
explores the feasibility of crowdsourcing for facilitating the literature
review process in terms of results, time and effort, as well as to identify
which crowdsourcing strategies provide the best results based on the budget
available. In particular, we focus on the screening phase of the literature
review process, and we contribute and assess methods for identifying the size of
tests, the number of labels required per paper, and classification functions, as well as
methods for splitting the crowdsourcing process into phases to improve results.
Finally, we present our findings based on experiments run on Crowdflower.
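The screening decision described above can be pictured as a simple per-paper classification function over crowd labels. The sketch below is only illustrative and not the authors' implementation; the function name, the 3-labels-per-paper threshold, and the majority rule are assumptions chosen to show how a label budget and a classification function interact.

```python
# Minimal sketch (assumed, not the paper's method): aggregate crowd labels
# collected for one candidate paper during the screening phase.
from collections import Counter

def screen_paper(labels, min_labels=3):
    """Classify a paper as 'include', 'exclude', or 'undecided'.

    labels: list of crowd votes, each 'include' or 'exclude'.
    min_labels: labels to collect before deciding (the budget knob).
    """
    if len(labels) < min_labels:
        return "undecided"          # keep collecting labels for this paper
    counts = Counter(labels)
    # Simple majority as the classification function; ties stay undecided
    # so they can be routed to a later crowdsourcing phase or an expert.
    if counts["include"] > counts["exclude"]:
        return "include"
    if counts["exclude"] > counts["include"]:
        return "exclude"
    return "undecided"

# Example: three workers labelled one candidate paper.
print(screen_paper(["include", "exclude", "include"]))  # -> include
```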
Optimal Crowdsourced Classification with a Reject Option in the Presence of Spammers
We explore the design of an effective crowdsourcing system for an M-ary
classification task. Crowd workers complete simple binary microtasks whose
results are aggregated to give the final decision. We consider the scenario
where the workers have a reject option so that they are allowed to skip
microtasks when they are unable to or choose not to respond to binary
microtasks. We present an aggregation approach using a weighted majority voting
rule, where each worker's response is assigned an optimized weight to maximize
the crowd's classification performance.
Comment: submitted to ICASSP 201
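As a rough illustration of the weighted majority voting rule with a reject option, the sketch below skips workers who declined a microtask and weights the rest by a log-odds function of their estimated accuracy. The accuracy-based weights are a common choice assumed here for illustration, not necessarily the optimized weights derived in the paper.

```python
# Illustrative sketch (assumed weighting, not the paper's optimized scheme).
import math

def weighted_majority(votes, accuracies):
    """Aggregate binary microtask answers with a reject option.

    votes: per-worker answer in {+1, -1, None}; None means the worker skipped.
    accuracies: per-worker estimated probability of answering correctly.
    Returns the aggregated decision, +1 or -1.
    """
    score = 0.0
    for vote, p in zip(votes, accuracies):
        if vote is None:            # reject option: skipped answers carry no weight
            continue
        weight = math.log(p / (1.0 - p))  # more reliable workers count more
        score += weight * vote
    return 1 if score >= 0 else -1

# Example: the third worker skips the microtask.
print(weighted_majority([+1, -1, None], [0.9, 0.6, 0.7]))  # -> 1
```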
Does Confidence Reporting from the Crowd Benefit Crowdsourcing Performance?
We explore the design of an effective crowdsourcing system for an M-ary
classification task. Crowd workers complete simple binary microtasks whose
results are aggregated to give the final classification decision. We consider
the scenario where the workers have a reject option so that they are allowed to
skip microtasks when they are unable to or choose not to respond to binary
microtasks. Additionally, the workers report quantized confidence levels when
they are able to submit definitive answers. We present an aggregation approach
using a weighted majority voting rule, where each worker's response is assigned
an optimized weight to maximize the crowd's classification performance. We obtain a
counterintuitive result that the classification performance does not benefit
from workers reporting quantized confidence. Therefore, the crowdsourcing
system designer should employ the reject option without requiring confidence
reporting.
Comment: 6 pages, 4 figures, SocialSens 2017. arXiv admin note: text overlap
with arXiv:1602.0057
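To make the confidence-reporting variant concrete, the sketch below folds a quantized confidence level into each worker's weight before voting. This is only one assumed way to use reported confidence, shown for illustration; the paper's result is that such reporting does not improve classification performance over the plain reject-option scheme.

```python
# Illustrative sketch (assumed scheme): binary votes weighted by estimated
# worker accuracy and scaled by a quantized self-reported confidence level.
import math

def weighted_vote_with_confidence(votes, accuracies, confidences, levels=4):
    """Aggregate binary answers where workers also report quantized confidence.

    votes: per-worker answer in {+1, -1, None}; None = reject option (skip).
    accuracies: per-worker estimated probability of being correct.
    confidences: per-worker confidence level in {1, ..., levels}; ignored on skips.
    """
    score = 0.0
    for vote, p, c in zip(votes, accuracies, confidences):
        if vote is None:
            continue
        base = math.log(p / (1.0 - p))        # reliability-based weight
        score += base * (c / levels) * vote   # scale by reported confidence
    return 1 if score >= 0 else -1

# Example: a confident reliable worker outweighs a hesitant unreliable one.
print(weighted_vote_with_confidence([+1, -1], [0.9, 0.6], [4, 1]))  # -> 1
```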
- …