Optimization in Knowledge-Intensive Crowdsourcing
We present SmartCrowd, a framework for optimizing collaborative
knowledge-intensive crowdsourcing. SmartCrowd distinguishes itself by
accounting for human factors in the process of assigning tasks to workers.
Human factors comprise workers' expertise in different skills, their expected
minimum wage, and their availability. In SmartCrowd, we formulate task
assignment as an optimization problem, and rely on pre-indexing workers and
maintaining the indexes adaptively, so that the task assignment
process is optimized both in quality and in computation time. We
present rigorous theoretical analyses of the optimization problem and propose
optimal and approximation algorithms. We finally perform extensive performance
and quality experiments using real and synthetic data to demonstrate that
adaptive indexing in SmartCrowd is necessary to achieve efficient high quality
task assignment.
Comment: 12 pages
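The task-assignment idea described above can be sketched as a simple greedy procedure. This is an illustrative simplification only, not SmartCrowd's actual optimization or indexing scheme; all names, the scoring rule, and the one-task-per-worker assumption are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    skills: dict        # skill -> proficiency in [0, 1]
    min_wage: float     # expected minimum wage per task
    available: bool

def assign_tasks(tasks, workers):
    """Greedily assign each task to the available worker with the best
    skill-per-cost ratio whose expected wage fits the task's budget."""
    assignment = {}
    for task_id, skill, budget in tasks:
        best, best_score = None, 0.0
        for w in workers:
            if not w.available or w.min_wage > budget:
                continue
            score = w.skills.get(skill, 0.0) / w.min_wage
            if score > best_score:
                best, best_score = w, score
        if best is not None:
            assignment[task_id] = best.name
            best.available = False   # each worker takes one task here
    return assignment
```

A real formulation would solve this jointly over all tasks (e.g. as an integer program) and use the adaptively maintained worker indexes to avoid scanning every worker per task.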
Still moving toward automation of the systematic review process: a summary of discussions at the third meeting of the International Collaboration for Automation of Systematic Reviews (ICASR)
The third meeting of the International Collaboration for Automation of Systematic Reviews (ICASR) was held 17–18 October 2017 in London, England. ICASR is an interdisciplinary group whose goal is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. The group seeks to facilitate the development and widespread acceptance of automated techniques for systematic reviews. The meeting’s conclusion was that the most pressing needs at present are to develop approaches for validating currently available tools and to provide increased access to curated corpora that can be used for validation. To that end, ICASR’s short-term goals in 2018–2019 are to propose and publish protocols for key tasks in systematic reviews and to develop an approach for sharing curated corpora for validating the automation of the key tasks.
Interpretable classification of Wiki-review streams
Wiki articles are created and maintained by a crowd of editors, producing a continuous stream
of reviews. Reviews can take the form of additions, reverts, or both. This crowdsourcing model is exposed
to manipulation since neither reviews nor editors are automatically screened and purged. To protect articles
against vandalism or damage, the stream of reviews can be mined to classify reviews and profile editors in
real-time. The goal of this work is to anticipate and explain which reviews to revert. This way, editors are
informed why their edits will be reverted. The proposed method employs stream-based processing, updating
the profiling and classification models on each incoming event. The profiling uses side and content-based
features employing Natural Language Processing, and editor profiles are incrementally updated based on
their reviews. Since the proposed method relies on self-explainable classification algorithms, it is possible
to understand why a review has been classified as a revert or a non-revert. In addition, this work contributes
an algorithm for generating synthetic data for class balancing, making the final classification fairer. The
proposed online method was tested with a real data set from Wikivoyage, which was balanced through the
aforementioned synthetic data generation. The results attained near-90% values for all evaluation metrics
(accuracy, precision, recall, and F-measure).
Funding: Fundação para a Ciência e a Tecnologia (Ref. UIDB/50014/2020); Xunta de Galicia (Ref. ED481B-2021-11)
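The stream-based, self-explainable classification described above can be sketched as follows. This is a minimal illustration, not the paper's method: the features, thresholds, and rule-based classifier are all hypothetical stand-ins for the richer side- and content-based profiling the abstract describes:

```python
from collections import defaultdict

# Incrementally updated editor profiles: counts of past reverted and
# accepted reviews per editor, updated on each incoming stream event.
profiles = defaultdict(lambda: {"reverts": 0, "accepted": 0})

def predict(editor, n_flagged_words):
    """Interpretable rule-based prediction: flag a review as a likely
    revert if the editor's historical revert rate is high or the text
    contains many flagged words. Returns the label and the reasons,
    so the editor can be told why the edit would be reverted."""
    p = profiles[editor]
    total = p["reverts"] + p["accepted"]
    revert_rate = p["reverts"] / total if total else 0.0
    reasons = []
    if revert_rate > 0.5:
        reasons.append(f"editor revert rate {revert_rate:.2f} > 0.5")
    if n_flagged_words >= 3:
        reasons.append(f"{n_flagged_words} flagged words >= 3")
    return (len(reasons) > 0), reasons

def update(editor, was_reverted):
    """Update the editor's profile once the true label arrives,
    keeping the model current with the review stream."""
    key = "reverts" if was_reverted else "accepted"
    profiles[editor][key] += 1
```

The returned reasons list is what makes the classification self-explainable: each prediction carries the rules that fired, rather than an opaque score.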