2 research outputs found

    Crowdsourcing as part of producing content for a critical reading comprehension game

    Abstract. The purpose of this thesis was to examine how crowdsourcing can be used to create and validate data on misleading graphs, a topic that is difficult for people to interpret. In the crowdsourcing tasks, a worker is shown a graph that is intentionally designed to be misleading and is asked to create four headline options, which are used as content for a critical reading comprehension game. To ensure the quality of the headlines, they are validated both through crowdsourcing and by two expert evaluators. As a result of the thesis, a graphical user interface was created for managing crowdsourcing projects. The major challenge of crowdsourcing is quality control, since unknown people from different backgrounds approach the tasks in different ways. The tasks were built around a tricky topic, which makes it difficult to keep the share of usable data high relative to the total amount of data gathered. The topics of the graphs and the task interface were intentionally kept simple so as not to draw focus away from the misleading aspect of the graph. The results show that the quality of the responses varies widely, even though an effort was made to select the best workers. It was noticeable that the misleading graphs or the assignments were often misinterpreted in the headline-creation task, and only a small share of the responses fully followed the assignment. In the headline-validation task, the worker had to choose one of three options, which was used to determine how well a headline created in the previous task corresponded to the assignment. The results show that it was too easy for a worker to click through and move on to the next task without proper consideration.
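    A minimal sketch of the kind of validation step the abstract describes: each crowd worker picks one of three options for a headline, and a majority vote decides the outcome. The option labels, the tie-breaking rule, and the function name are assumptions for illustration, not details taken from the thesis.

    # Hypothetical majority-vote aggregation of headline-validation responses.
    # Option labels and tie handling are assumptions, not from the thesis.
    from collections import Counter

    OPTIONS = ("matches assignment", "partially matches", "does not match")

    def validate_headline(worker_choices: list[str]) -> str:
        """Return the majority option; fall back to expert review on a tie."""
        counts = Counter(worker_choices)
        (top, top_n), *rest = counts.most_common()
        if rest and rest[0][1] == top_n:
            return "needs expert review"  # two options tied for first place
        return top

    # Example: five workers judge one headline.
    print(validate_headline([
        "matches assignment", "matches assignment",
        "does not match", "matches assignment", "partially matches",
    ]))  # -> "matches assignment"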

    Playing Planning Poker in Crowds: Human Computation of Software Effort Estimates

    Reliable, cost-effective effort estimation remains a considerable challenge for software projects. Recent work has demonstrated that the popular Planning Poker practice can produce reliable estimates when undertaken within a software team of knowledgeable domain experts. However, the process depends on the availability of experts and can be time-consuming to perform, making it impractical for large-scale or open source projects that may curate many thousands of outstanding tasks. This paper reports on a full study investigating the feasibility of using crowd workers, supplied with limited information about a task, to provide comparably accurate estimates using Planning Poker. We describe the design of a Crowd Planning Poker (CPP) process implemented on Amazon Mechanical Turk and the results of a substantial set of trials involving more than 5,000 crowd workers and 39 diverse software tasks. Our results show that a carefully organised and selected crowd of workers can produce effort estimates of similar accuracy to those of a single expert.
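    For readers unfamiliar with the underlying practice, the sketch below shows one generic Planning Poker round: estimates are snapped to a fixed card deck, checked for convergence, and aggregated by the median. The deck values, convergence rule, and aggregation choice are assumptions about standard Planning Poker; the paper's actual CPP process on Mechanical Turk may differ.

    # Hypothetical single round of Planning Poker with crowd-supplied estimates.
    from statistics import median

    DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]  # common Planning Poker card values

    def snap_to_deck(value: float) -> int:
        """Map an arbitrary estimate to the nearest card in the deck."""
        return min(DECK, key=lambda card: abs(card - value))

    def poker_round(estimates: list[int], max_spread: int = 1) -> tuple[bool, int]:
        """Check whether a round of estimates has converged.

        Convergence here means all picks fall within `max_spread` adjacent
        deck positions; the consensus is the median pick snapped to the deck.
        """
        picks = sorted(snap_to_deck(e) for e in estimates)
        positions = [DECK.index(p) for p in picks]
        converged = (max(positions) - min(positions)) <= max_spread
        return converged, snap_to_deck(median(picks))

    # Example: three workers estimate a task in story points.
    done, consensus = poker_round([5, 8, 8])
    print(done, consensus)  # True 8 -> no further round needed

    If a round does not converge, a real process would typically share feedback among estimators and run another round; how CPP handles that with transient crowd workers is exactly what the paper studies.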