A Comparative Study of the Role of Examples in Microtask Crowdsourcing for Software Design

Abstract

Crowdsourcing is gradually becoming an accepted form of work across different disciplines. Not surprisingly, it has attracted the attention of the software engineering community as well. Previous work began exploring the feasibility of crowdsourcing for software design by conducting experiments in which workers from Amazon Mechanical Turk were asked to engage in a set of software design tasks. It was found that, when workers are exposed to examples of previous designs, they produce contributions of overall lower quality. The intuition is that, because these experiments displayed all previous contributions as examples, the presence of low-quality examples may have negatively influenced workers. This thesis compares the designs produced in the previous experiments to designs obtained in a new experiment in which examples were evaluated against pre-defined quality criteria before being displayed to workers. Only examples of sufficient quality were shared with workers, with the aim of stimulating them to provide higher-quality designs. We report results from an analysis comparing the designs from the current and previous experiments in terms of quantity, diversity of ideas, quality, completeness, perceived task difficulty, and how often workers borrow elements from examples. The major findings are twofold. First, workers who were exposed to sufficient-quality examples produced higher-quality work than workers exposed to all examples. Second, the quality of the designs they produced still did not reach that of the designs produced by workers who were not exposed to examples at all.