Literature reviews allow scientists to stand on the shoulders of giants,
showing promising directions, summarizing progress, and pointing out existing
challenges in research. At the same time, conducting a systematic literature
review is a laborious and consequently expensive process. In the last decade,
there have been a few studies on crowdsourcing for literature reviews. This paper
explores the feasibility of crowdsourcing for facilitating the literature
review process in terms of results, time, and effort, and to identify
which crowdsourcing strategies provide the best results for the available
budget. In particular, we focus on the screening phase of the literature
review process, and we contribute and assess methods for identifying the size of
tests, the number of labels required per paper, and classification functions, as
well as methods for splitting the crowdsourcing process into phases to improve results.
Finally, we present our findings based on experiments run on Crowdflower.