
    Incentivizing High Quality Crowdwork

    We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. Finally, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets.
    Comment: This is a preprint of an article accepted for publication in WWW © 2015 International World Wide Web Conference Committee.
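    To make the flavor of such a model concrete, here is a minimal illustrative formulation (the notation and functional form are our own, not the paper's): a worker who exerts effort e at cost c(e) produces quality q(e), believes with subjective probability p(q) that work of quality q will be accepted and paid the base wage w, and receives a bonus b only if quality clears a threshold τ. The worker then chooses effort to maximize expected payment net of effort cost:

        \max_{e \ge 0} \; p\big(q(e)\big)\, w \;+\; \Pr\big[\, q(e) \ge \tau \,\big]\, b \;-\; c(e)

    Under this sketch, a performance-based bonus only changes behavior when q is responsive to e, which matches the paper's finding that PBPs help only on effort-responsive tasks.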

    Crowdwork Platforms: Juxtaposing Centralized and Decentralized Governance

    Crowdwork is a novel form of digitally mediated work arrangement that is managed and organized through online labor platforms. This paper focuses on the governance of platforms that facilitate creative work—that is, complex work tasks that require high-level skill and creative workers. Crowdwork platform governance faces numerous challenges as a result of technology mediation, scalable and distributed workers, and temporary work arrangements. Creative crowdwork platforms, such as Topcoder, typically require additional governance structures to manage complex tasks. However, we know relatively little about creative crowdwork platform governance, as most existing studies focus on routine work platforms, such as Amazon Mechanical Turk. Accordingly, this paper explores how incumbent and insurgent creative crowdwork platforms are governed under centralized and decentralized modes. We conducted a comparative case study based on the analysis of two different cases: Topcoder, a successful commercial platform with a largely centralized governance structure, and CanYa, an emerging innovative platform based on blockchain technology with more decentralized governance. We identified and classified different governance elements related to work control and work coordination. In addition, we explored the characteristics of creative crowdwork platform governance with different degrees of centralization.
    Keywords: Crowdwork Governance, Creative Crowdwork, Centralized Platforms, Decentralized Platforms, Blockchain, Tokenomics

    Revolutionizing Crowdworking Campaigns: Conquering Adverse Selection and Moral Hazard with the Help of Smart Contracts

    Crowdworking is increasingly being used by companies to outsource tasks beyond their core competencies flexibly and cost-effectively to an unknown group of workers. However, because crowdworkers are anonymous and financially incentivized, information asymmetries and conflicts of interest arise, leading to inefficiencies and intensifying the principal-agent problem. Our paper offers a solution to the widespread problem of inefficient crowdworking campaigns. We first derive the currently applied crowdworking campaign process based on a qualitative study. Subsequently, we identify the most significant adverse selection and moral hazard problems in the process. We then analyze how smart contracts, a blockchain application, can counteract those challenges, and develop a process model that maps a crowdworking campaign using smart contracts. We explain how our developed process significantly reduces adverse selection and moral hazard at each stage. Thus, our research provides approaches to make online labor more attractive and transparent for companies and online workers.
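    As a rough illustration of the mechanism (our own sketch, not the process model developed in the paper, and with hypothetical names such as CampaignEscrow and acceptance_check): a smart contract can lock the requester's reward in escrow, publish the acceptance criteria before work begins, and release payment automatically once a submission passes the agreed check, leaving neither side room to renege.

        # Illustrative, simplified escrow logic in the spirit of a crowdworking smart contract.
        # All names here (CampaignEscrow, acceptance_check, ...) are hypothetical; a real
        # contract would be written in an on-chain language such as Solidity.
        from dataclasses import dataclass, field
        from typing import Callable, Dict

        @dataclass
        class CampaignEscrow:
            reward: float                              # payment promised per accepted submission
            acceptance_check: Callable[[str], bool]    # acceptance criteria fixed before work starts
            balance: float = 0.0
            payouts: Dict[str, float] = field(default_factory=dict)

            def fund(self, amount: float) -> None:
                # The requester deposits funds up front, so workers can verify the reward
                # actually exists (limits adverse selection on the requester side).
                self.balance += amount

            def submit(self, worker: str, work: str) -> bool:
                # Payment is released automatically when the pre-agreed check passes,
                # so neither party can renege afterwards (limits moral hazard).
                if self.balance >= self.reward and self.acceptance_check(work):
                    self.balance -= self.reward
                    self.payouts[worker] = self.payouts.get(worker, 0.0) + self.reward
                    return True
                return False

        # Example: the campaign pays 50 units for any submission of at least 100 words.
        escrow = CampaignEscrow(reward=50.0, acceptance_check=lambda text: len(text.split()) >= 100)
        escrow.fund(50.0)
        print(escrow.submit("worker_1", "word " * 120))   # True: check passed, payment released

    The point of the sketch is that both the funding and the payout rule are fixed and visible before any work is done, which is where the reduction in adverse selection and moral hazard comes from.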

    Friendly Hackers to the Rescue: How Organizations Perceive Crowdsourced Vulnerability Discovery

    In recent years, crowdsourcing has increasingly been used for the discovery of vulnerabilities in software. While some organizations have extensively used crowdsourced vulnerability discovery, others have been very hesitant to embrace this method. In this paper, we report the results of a qualitative study that reveals organizational concerns and fears in relation to crowdsourced vulnerability discovery. The study is based on 36 key informant interviews with various organizations. The study reveals a set of pre-adoption fears (i.e., lacking managerial expertise, low-quality submissions, distrust in security professionals, cost escalation, and lack of motivation among security professionals) as well as the post-adoption issues actually experienced. The study also identifies countermeasures that adopting organizations have used to mitigate fears and minimize issues. Implications for research and practice are discussed.

    A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality

    Microtask crowdsourcing is increasingly critical to the creation of extremely large datasets. As a result, crowd workers spend weeks or months repeating the exact same tasks, making it necessary to understand their behavior over these long periods of time. We utilize three large, longitudinal datasets of nine million annotations collected from Amazon Mechanical Turk to examine claims that workers fatigue or satisfice over these long periods, producing lower quality work. We find that, contrary to these claims, workers are extremely stable in their quality over the entire period. To understand whether workers set their quality based on the task's requirements for acceptance, we then perform an experiment where we vary the required quality for a large crowdsourcing task. Workers did not adjust their quality based on the acceptance threshold: workers who were above the threshold continued working at their usual quality level, and workers below the threshold self-selected themselves out of the task. Capitalizing on this consistency, we demonstrate that it is possible to predict workers' long-term quality using just a glimpse of their quality on the first five tasks.
    Comment: 10 pages, 11 figures, accepted CSCW 201
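    As a back-of-the-envelope version of that prediction (our own sketch; the abstract does not specify the paper's estimator, and the data layout and mean-based predictor below are assumed): use a worker's mean quality on their first five tasks as the forecast of their quality on all subsequent tasks and check how strongly the two agree.

        # Illustrative sketch: forecast long-term worker quality from a five-task "glimpse".
        # The simple mean-based predictor and the toy data are our own assumptions.
        from statistics import mean, correlation   # statistics.correlation needs Python 3.10+

        def glimpse_vs_long_term(worker_scores):
            """worker_scores: dict mapping worker id -> per-task quality scores in task order."""
            glimpse, long_term = [], []
            for scores in worker_scores.values():
                if len(scores) <= 5:
                    continue                         # need tasks beyond the glimpse to evaluate
                glimpse.append(mean(scores[:5]))     # predicted quality: first five tasks
                long_term.append(mean(scores[5:]))   # observed long-term quality: the rest
            return glimpse, long_term

        workers = {
            "A": [0.90, 0.95, 0.90, 0.85, 0.90, 0.92, 0.88, 0.90],
            "B": [0.60, 0.65, 0.70, 0.60, 0.62, 0.58, 0.66, 0.60],
            "C": [0.80, 0.75, 0.85, 0.80, 0.78, 0.82, 0.79, 0.81],
        }
        glimpse, long_term = glimpse_vs_long_term(workers)
        print(correlation(glimpse, long_term))       # close to 1.0 if early quality is a stable signal

    If the paper's stability finding holds, even this crude early-glimpse predictor should correlate strongly with long-term quality.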