
    A data-driven analysis of workers' earnings on Amazon Mechanical Turk

    A growing number of people are working as part of online crowd work. Crowd work is often thought to be low-wage work. However, we know little about the wage distribution in practice and what causes low or high earnings in this setting. We recorded 2,676 workers performing 3.8 million tasks on Amazon Mechanical Turk. Our task-level analysis revealed that workers earned a median hourly wage of only ~2 USD/h, and only 4% earned more than 7.25 USD/h. While the average requester pays more than 11 USD/h, lower-paying requesters post much more work. Our wage calculations are influenced by how unpaid work is accounted for, e.g., time spent searching for tasks, working on tasks that are rejected, and working on tasks that are ultimately not submitted. We further explore the characteristics of tasks and working patterns that yield higher hourly wages. Our analysis informs platform design and worker tools to create a more positive future for crowd work.
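
    To make the role of unpaid work concrete, here is a minimal sketch (not the paper's actual pipeline) of how an effective hourly wage changes once unpaid time is included; all variable names and figures below are hypothetical illustrations:

```python
# Minimal sketch of an effective-hourly-wage calculation that accounts for
# unpaid work, in the spirit of the paper's analysis. All numbers are
# hypothetical, not taken from the study's data.

def effective_hourly_wage(paid_earnings_usd, paid_seconds, unpaid_seconds):
    """Earnings divided by *all* time worked, paid and unpaid."""
    total_hours = (paid_seconds + unpaid_seconds) / 3600.0
    return paid_earnings_usd / total_hours

# A worker earns $9.00 across submitted-and-approved tasks taking 2.5 hours...
paid_earnings = 9.00
paid_seconds = 2.5 * 3600
# ...but also spends time searching for tasks, on tasks that are rejected,
# and on tasks started but never submitted.
unpaid_seconds = 0.5 * 3600 + 0.3 * 3600 + 0.2 * 3600

naive = paid_earnings / (paid_seconds / 3600)  # ignores unpaid time
real = effective_hourly_wage(paid_earnings, paid_seconds, unpaid_seconds)
print(f"naive: {naive:.2f} USD/h, effective: {real:.2f} USD/h")
# naive: 3.60 USD/h, effective: 2.57 USD/h
```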

    Crowdworker Economics in the Gig Economy

    The nature of work is changing. As labor increasingly trends to casual work in the emerging gig economy, understanding the broader economic context is crucial to effective engagement with a contingent workforce. Crowdsourcing represents an early manifestation of this fluid, laissez-faire, on-demand workforce. This work analyzes the results of four large-scale surveys of US-based Amazon Mechanical Turk workers recorded over a six-year period, providing comparable measures to national statistics. Our results show that despite unemployment far higher than national levels, crowdworkers are seeing positive shifts in employment status and household income. Our most recent surveys indicate a trend away from full-time-equivalent crowdwork, coupled with a reduction in estimated poverty levels to below national figures. These trends are indicative of an increasingly flexible workforce, able to maximize their opportunities in a rapidly changing national labor market, which may have material impacts on existing models of crowdworker behavior. This work was supported by an EPSRC studentship and EPSRC grants EP/N010558/1 and EP/R004471/1.

    Don’t Get Lost in the Crowd: Best Practices for Using Amazon’s Mechanical Turk in Behavioral Research

    The use of Amazon’s Mechanical Turk (MTurk) to conduct academic research has steadily grown since its inception in 2005. The ability to control every aspect of a study, from sampling to collection, is extremely appealing to researchers. Unfortunately, the additional control offered through MTurk can also lead to poor data quality if researchers are not careful. Despite research on various aspects of data quality, participant compensation, and participant demographics, the academic literature still lacks a practical guide to the effective use of settings and features in MTurk for survey and experimental research. Therefore, the purpose of this tutorial is to provide researchers with a recommended set of best practices to follow before, during, and after collecting data via MTurk to ensure that responses are of the highest possible quality. We also recommend that editors and reviewers place more emphasis on the collection methods employed by researchers, rather than assuming that all samples collected using a given online platform are of equal quality.
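
    As one concrete illustration of the kind of settings such a tutorial covers, the sketch below uses the boto3 MTurk client to post a survey HIT with common qualification requirements (US locale, a high prior approval rate). The title, reward, file name, and other values are hypothetical placeholders, not recommendations from the paper:

```python
import boto3

# Point at the MTurk sandbox so test HITs cost nothing (hypothetical setup).
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Example qualification requirements of the kind the tutorial discusses:
# restrict the HIT to US workers with a >=95% prior approval rate.
qualifications = [
    {
        "QualificationTypeId": "00000000000000000071",  # built-in Locale qualification
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
    {
        "QualificationTypeId": "000000000000000000L0",  # built-in approval-rate qualification
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
]

response = mturk.create_hit(
    Title="Short academic survey (5 minutes)",
    Description="Answer a brief questionnaire for a university study.",
    Keywords="survey, research, questionnaire",
    Reward="0.75",                      # USD, passed as a string
    MaxAssignments=100,                 # number of distinct workers
    AssignmentDurationInSeconds=1800,   # time allowed per assignment
    LifetimeInSeconds=86400,            # how long the HIT stays listed
    AutoApprovalDelayInSeconds=259200,  # auto-approve after 3 days
    QualificationRequirements=qualifications,
    Question=open("survey_question.xml").read(),  # ExternalQuestion XML (hypothetical file)
)
print("HIT created:", response["HIT"]["HITId"])
```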

    Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation

    A human computation system can be viewed as a distributed system in which the processors are humans, called workers. Such systems harness the cognitive power of a group of workers connected to the Internet to execute relatively simple tasks, whose solutions, once grouped, solve a problem that systems equipped with only machines could not solve satisfactorily. Examples of such systems are Amazon Mechanical Turk and the Zooniverse platform. A human computation application comprises a group of tasks, each of which can be performed by one worker. Tasks might have dependencies among each other. In this study, we propose a theoretical framework to analyze this type of application from a distributed systems point of view. Our framework is established on three dimensions that represent different perspectives in which human computation applications can be approached: quality-of-service requirements, design and management strategies, and human aspects. By using this framework, we review human computation from the perspective of programmers seeking to improve the design of human computation applications and managers seeking to increase the effectiveness of human computation infrastructures in running such applications. In doing so, besides integrating and organizing what has been done in this direction, we also put into perspective the fact that the human aspects of the workers in such systems introduce new challenges in terms of, for example, task assignment, dependency management, and fault prevention and tolerance. We discuss how they are related to distributed systems and other areas of knowledge. Comment: 3 figures, 1 table.
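
    To ground the framework's notion of an application as tasks with dependencies executed by workers, here is a minimal sketch (my illustration, not the authors' formalism) that models a human computation application as a task graph and dispatches tasks in dependency order; all class and task names are hypothetical:

```python
# Minimal sketch: a human computation application as a DAG of tasks,
# dispatched to workers only once their dependencies are complete.
# Illustrative only -- not the paper's formal framework.
from collections import deque

class Task:
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)

def dispatch_order(tasks):
    """Topologically order tasks so each is assigned after its dependencies."""
    remaining = {t.name: set(t.depends_on) for t in tasks}
    by_name = {t.name: t for t in tasks}
    ready = deque(n for n, deps in remaining.items() if not deps)
    order = []
    while ready:
        name = ready.popleft()
        order.append(by_name[name])
        del remaining[name]
        for other, deps in remaining.items():
            deps.discard(name)
            if not deps and other not in ready:
                ready.append(other)
    if remaining:  # dependency management must detect cycles
        raise ValueError("cyclic dependencies: " + ", ".join(remaining))
    return order

# Example: label images, then verify the labels, then aggregate the results.
app = [Task("label"), Task("verify", ["label"]), Task("aggregate", ["verify"])]
for task in dispatch_order(app):
    print("assign to next available worker:", task.name)
```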

    TurkScanner: Predicting the Hourly Wage of Microtasks

    Workers in crowd markets struggle to earn a living. One reason for this is that it is difficult for workers to accurately gauge the hourly wages of microtasks, and they consequently end up performing labor with little pay. In general, workers are provided with little information about tasks, and are left to rely on noisy signals, such as the textual description of the task or the rating of the requester. This study explores various computational methods for predicting the working times (and thus hourly wages) required for tasks, based on data collected from other workers completing crowd work. We provide the following contributions: (i) a data collection method for gathering real-world training data on crowd-work tasks and the times required for workers to complete them; (ii) TurkScanner, a machine learning approach that predicts the necessary working time to complete a task (and can thus implicitly provide the expected hourly wage). We collected 9,155 data records using a web browser extension installed by 84 Amazon Mechanical Turk workers, and explored the challenge of accurately recording working times both automatically and by asking workers. TurkScanner was created using ~150 derived features, and was able to predict the hourly wages of 69.6% of all the tested microtasks within a 75% error. Directions for future research include observing the effects of tools on people's working practices, adapting this approach to a requester tool for better price setting, and predicting other elements of work (e.g., the acceptance likelihood and worker task preferences). Comment: Proceedings of the 28th International Conference on World Wide Web (WWW '19), San Francisco, CA, USA, May 13-17, 2019.
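
    The core idea, predicting a task's working time from its observable features and deriving an expected hourly wage, can be sketched as below. This is a toy stand-in under assumed features and synthetic data, not the authors' ~150-feature model or their dataset:

```python
# Minimal sketch of the TurkScanner idea: learn to predict a task's working
# time from task features, then derive an expected hourly wage. Features,
# model choice, and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: description length, reward (cents), requester rating.
X = np.column_stack([
    rng.integers(20, 2000, n),  # task description length (chars)
    rng.integers(1, 500, n),    # posted reward in cents
    rng.uniform(1.0, 5.0, n),   # requester rating
])
# Synthetic "true" working time in seconds, for the demo only.
work_seconds = 30 + 0.2 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 60, n)
work_seconds = np.clip(work_seconds, 10, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, np.log(work_seconds), random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred_seconds = np.exp(model.predict(X_te))
pred_wage = (X_te[:, 1] / 100.0) / (pred_seconds / 3600.0)  # USD per hour
true_wage = (X_te[:, 1] / 100.0) / (np.exp(y_te) / 3600.0)

# Fraction of tasks whose predicted wage falls within 75% of the true wage,
# mirroring the error-band style of evaluation described in the abstract.
within = np.mean(np.abs(pred_wage - true_wage) <= 0.75 * true_wage)
print(f"predicted wage within 75% of actual for {within:.1%} of test tasks")
```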

    The Challenges of Crowd Workers in Rural and Urban America

    Crowd work has the potential of helping the financial recovery of regions traditionally plagued by a lack of economic opportunities, e.g., rural areas. However, we currently have limited information about the challenges facing crowd workers from rural and super rural areas as they struggle to make a living through crowd work sites. This paper examines the challenges and advantages of rural and super rural Amazon Mechanical Turk (MTurk) crowd workers and contrasts them with those of workers from urban areas. Based on a survey of 421 crowd workers from differing geographic regions in the U.S., we identified how, across regions, people struggled with being onboarded into crowd work. We uncovered that despite the inequalities and barriers, rural workers tended to be striving more in micro-tasking than their urban counterparts. We also identified cultural traits, relating to time dimension and individualism, that offer us an insight into crowd workers and the necessary qualities for them to succeed on gig platforms. We finish by providing design implications based on our findings to create more inclusive crowd work platforms and tools.