3,956 research outputs found

    Analyzing the Amazon Mechanical Turk Marketplace

    Since the concept of crowdsourcing is relatively new, many potential participants have questions about the AMT marketplace. For example, common questions that come up in an 'introduction to crowdsourcing and AMT' session include the following: What types of tasks can be completed in the marketplace? How much does it cost? How fast can I get results back? How big is the AMT marketplace? The answers to these questions remain largely anecdotal, based on personal observations and experiences. To better understand what types of tasks are being completed today using crowdsourcing techniques, we started collecting data about the AMT marketplace. We present a preliminary analysis of this dataset and provide directions for interesting future research.

    Running experiments on Amazon Mechanical Turk

    Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss additional benefits, such as the possibility of longitudinal, cross-cultural, and prescreening designs, and offer some advice on how best to manage a common subject pool.

    Investigating the accessibility of crowdwork tasks on Mechanical Turk

    Funding Information: This work was supported by the EPSRC (grants EP/R004471/1 and EP/S027432/1). Supporting data for this publication is available at https://doi.org/10.17863/CAM.62937.
    Crowdwork can enable invaluable opportunities for people with disabilities, not least the work flexibility and the ability to work from home, especially during the current Covid-19 pandemic. This paper investigates how engagement in crowdwork tasks is affected by individual disabilities and the resulting implications for HCI. We first surveyed 1,000 Amazon Mechanical Turk (AMT) workers to identify demographics of crowdworkers who identify as having various disabilities within the AMT ecosystem, including vision, hearing, cognition/mental, mobility, reading and motor impairments. Through a second focused survey and follow-up interviews, we provide insights into how respondents cope with crowdwork tasks. We found that standard task factors, such as task completion time and presentation, often do not account for the needs of users with disabilities, resulting in anxiety and, on occasion, feelings of depression. We discuss how to alleviate these barriers to enable effective interaction for crowdworkers with disabilities.

    A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality

    Microtask crowdsourcing is increasingly critical to the creation of extremely large datasets. As a result, crowd workers spend weeks or months repeating the exact same tasks, making it necessary to understand their behavior over these long periods of time. We utilize three large, longitudinal datasets of nine million annotations collected from Amazon Mechanical Turk to examine claims that workers fatigue or satisfice over these long periods, producing lower quality work. We find that, contrary to these claims, workers are extremely stable in their quality over the entire period. To understand whether workers set their quality based on the task's requirements for acceptance, we then perform an experiment where we vary the required quality for a large crowdsourcing task. Workers did not adjust their quality based on the acceptance threshold: workers who were above the threshold continued working at their usual quality level, and workers below the threshold self-selected themselves out of the task. Capitalizing on this consistency, we demonstrate that it is possible to predict workers' long-term quality using just a glimpse of their quality on the first five tasks. Comment: 10 pages, 11 figures, accepted CSCW 201