Crowdsourcing Paper Screening in Systematic Literature Reviews
Literature reviews allow scientists to stand on the shoulders of giants,
showing promising directions, summarizing progress, and pointing out existing
challenges in research. At the same time, conducting a systematic literature
review is a laborious and consequently expensive process. In the last decade,
there have been a few studies on crowdsourcing in literature reviews. This paper
explores the feasibility of crowdsourcing for facilitating the literature
review process in terms of results, time, and effort, and identifies
which crowdsourcing strategies provide the best results for a given budget.
In particular, we focus on the screening phase of the literature
review process, and we contribute and assess methods for identifying the size of
tests, the number of labels required per paper, and classification functions, as well as
methods to split the crowdsourcing process into phases to improve results.
Finally, we present our findings based on experiments run on Crowdflower.
A Review on the Applications of Crowdsourcing in Human Pathology
The advent of digital pathology has introduced new avenues of diagnostic
medicine. Among them, crowdsourcing has attracted researchers' attention in
recent years, allowing them to engage thousands of untrained individuals in
research and diagnosis. While several articles exist in this regard,
prior works have not collectively documented them. We therefore aim to review
the applications of crowdsourcing in human pathology in a semi-systematic
manner. We first introduce a novel method for conducting a systematic search of the
literature. Utilizing this method, we then collect hundreds of articles and
screen them against a pre-defined set of criteria. Furthermore, we crowdsource
part of the screening process to examine another potential application of
crowdsourcing. Finally, we review the selected articles and characterize the
prior uses of crowdsourcing in pathology.
What the Crowd Sources: A Protocol for a Contribution-Centred Systematic Literature Review of Data Crowdsourcing Research
Data crowdsourcing is the mobilization of large groups of contributors—often volunteers via the Internet—to collect and/or analyze data. Research on data crowdsourcing often prioritizes the data consumer or project sponsor. Significant gaps remain in understanding how to address design issues from the perspective of data crowdsourcing contributors. A systematic literature review is an ideal method for identifying gaps in how researchers conceptualize contributions in data crowdsourcing. This project presents a protocol for such a systematic literature review of data crowdsourcing. We will use the protocol to guide a subsequent systematic literature review and the construction of a data-information-knowledge-wisdom chart that identifies critical gaps and opportunities for research in data crowdsourcing systems.
Theoretical Underpinnings and Practical Challenges of Crowdsourcing as a Mechanism for Academic Study
Researchers in a variety of fields are increasingly adopting crowdsourcing as a reliable instrument for performing tasks that are complex for either humans or computer algorithms alone. As a result, new forms of collective intelligence have emerged from the study of massive crowd-machine interactions in scientific work settings, a field for which no known theory or model explains how it really works. This type of crowd work uses an open participation model that keeps the scientific activity (including datasets, methods, guidelines, and analysis results) widely available and mostly independent from institutions, which distinguishes crowd science from other crowd-assisted types of participation. In this paper, we build on the practical challenges of crowd-AI supported research and propose a conceptual framework for addressing the socio-technical aspects of crowd science from a CSCW viewpoint. Our study reinforces a manifest lack of systematic and empirical research on the symbiotic relation of AI with human computation and crowd computing in scientific endeavors.
Volunteered geographic information in natural hazard analysis: a systematic literature review of current approaches with a focus on preparedness and mitigation
With the rise of new technologies, citizens can contribute to scientific research via Web 2.0 applications for collecting and distributing geospatial data. Integrating local knowledge, personal experience, and up-to-date geoinformation is a promising approach for the theoretical framework and methods of natural hazard analysis. Our systematic literature review aims to identify current research and directions for future research on Volunteered Geographic Information (VGI) within natural hazard analysis. Focusing on both the preparedness and mitigation phases yields eleven articles from two literature databases. A qualitative analysis for in-depth information extraction reveals auspicious approaches regarding community engagement and data fusion, but also important research gaps. Mainly based in Europe and North America, the analysed studies deal primarily with floods and forest fires, applying geodata collected by trained citizens who are improving their knowledge and making their own interpretations. Yet, there is still a lack of common scientific terms and concepts. Future research can use these findings to adapt scientific models of natural hazard analysis in order to enable the fusion of data from technical sensors and VGI. The development of such general methods should help establish user integration in various contexts, such as natural hazard analysis.