
    Prototype tasks: Improving crowdsourcing results through rapid, iterative task design

    Low-quality results have been a long-standing problem on microtask crowdsourcing platforms, driving away requesters and justifying low wages for workers. To date, workers have been blamed for low-quality results: they are said to make as little effort as possible, pay little attention to detail, and lack expertise. In this paper, we hypothesize that requesters may also be responsible for low-quality work: they launch unclear task designs that confuse even earnest workers, under-specify edge cases, and neglect to include examples. We introduce prototype tasks, a crowdsourcing strategy requiring all new task designs to launch a small number of sample tasks. Workers attempt these tasks and leave feedback, enabling the requester to iterate on the design before publishing it. We report a field experiment in which tasks that underwent prototype task iteration produced higher-quality results than the original task designs. With this research, we suggest that a simple and rapid iteration cycle can improve crowd work, and we provide empirical evidence that requester “quality” directly impacts result quality.
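
    The iterate-before-launch loop this abstract describes can be sketched in a few lines of Python. Everything below (the TaskDesign structure, the simulated worker feedback, the revision rule) is a hypothetical, self-contained illustration of the process, not the paper's actual system or API.

        from dataclasses import dataclass

        @dataclass
        class TaskDesign:
            instructions: str
            revision: int = 0

        def launch_sample_tasks(design, n_workers):
            # Stand-in for posting a small batch of sample tasks; here,
            # simulated workers flag confusion until two revisions are made.
            if design.revision >= 2:
                return []
            return ["unclear edge case"] * n_workers

        def revise(design, feedback):
            # Stand-in for the requester clarifying instructions in
            # response to the feedback workers left on the samples.
            return TaskDesign(design.instructions + " [clarified]",
                              design.revision + 1)

        def prototype_iteration(design, sample_size=10, max_rounds=3):
            # Iterate on small sample batches before the full launch.
            for _ in range(max_rounds):
                feedback = launch_sample_tasks(design, sample_size)
                if not feedback:   # no confusion reported: ready to publish
                    return design
                design = revise(design, feedback)
            return design

        print(prototype_iteration(TaskDesign("Label each image.")).instructions)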

    The Daemo crowdsourcing marketplace

    The success of crowdsourcing markets is dependent on a strong foundation of trust between workers and requesters. In current marketplaces, workers and requesters are often unable to trust each other’s quality, and their mental models of tasks are misaligned due to ambiguous instructions or confusing edge cases. This breakdown of trust typically arises from (1) flawed reputation systems which do not accurately reflect worker and requester quality, and from (2) poorly designed tasks. In this demo, we present how Boomerang and Prototype Tasks, the fundamental building blocks of the Daemo crowdsourcing marketplace, help restore trust between workers and requesters. Daemo’s Boomerang reputation system incentivizes alignment between opinion and ratings by determining the likelihood that workers and requesters will work together in the future based on how they rate each other. Daemo’s Prototype Tasks require that new tasks go through a feedback iteration phase with a small number of workers so that requesters can revise their instructions and task designs before launch.
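
    One way to read "determining the likelihood that workers and requesters will work together in the future based on how they rate each other" is as a pairing-priority function over mutual ratings. The geometric-mean weighting below is an invented illustration of that idea, not a formula from the Daemo papers.

        def pairing_priority(worker_rating_of_requester, requester_rating_of_worker):
            # Geometric mean of mutual 1-5 star ratings: one-sided
            # satisfaction is penalized, so an honest low rating in either
            # direction reduces the chance of being matched again.
            return (worker_rating_of_requester * requester_rating_of_worker) ** 0.5

        print(pairing_priority(5.0, 4.0))  # ~4.47, ranks above a (5.0, 2.0) pair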

    Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms

    Paid crowdsourcing platforms suffer from low-quality work and unfair rejections, but paradoxically, most workers and requesters have high reputation scores. These inflated scores, which make high-quality work and workers difficult to find, stem from social pressure to avoid giving negative feedback. We introduce Boomerang, a reputation system for crowdsourcing that elicits more accurate feedback by rebounding the consequences of feedback directly back onto the person who gave it. With Boomerang, requesters find that their highly rated workers gain earliest access to their future tasks, and workers find tasks from their highly rated requesters at the top of their task feed. Field experiments verify that Boomerang causes both workers and requesters to provide feedback that is more closely aligned with their private opinions. Inspired by a game-theoretic notion of incentive-compatibility, Boomerang opens opportunities for interaction design to incentivize honest reporting over strategic dishonesty.
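
    The two rebounding mechanisms named in this abstract can be sketched in Python: a worker's feed sorted by the ratings they gave each requester, and task access staggered so a requester's highly rated workers see new tasks first. The data structures and the linear delay schedule are assumptions for illustration, not Boomerang's actual implementation.

        def order_feed(tasks, my_requester_ratings, default=3.0):
            # Sort a worker's feed by the rating they gave each requester,
            # so tasks from highly rated requesters appear at the top.
            return sorted(tasks,
                          key=lambda t: my_requester_ratings.get(t["requester"], default),
                          reverse=True)

        def access_delay_hours(requester_rating_of_worker, max_delay=24.0):
            # Workers the requester rated 5 stars see new tasks immediately;
            # lower-rated workers wait proportionally longer (assumed schedule).
            return max_delay * (5.0 - requester_rating_of_worker) / 4.0

        tasks = [{"requester": "A", "title": "Transcribe receipts"},
                 {"requester": "B", "title": "Label images"}]
        print(order_feed(tasks, {"A": 2.0, "B": 5.0}))  # B's task ranks first
        print(access_delay_hours(4.0))                  # 6.0 hours before access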