3 research outputs found

    Hourly Wages in Crowdworking: A Meta-Analysis

    In the past decade, crowdworking on online labor market platforms has become an important source of income for a growing number of people worldwide. This development has led to increasing political and scholarly interest in the wages people can earn on such platforms. This study extends the literature, which is often based on a single platform, region, or category of crowdworking, through a meta-analysis of prevalent hourly wages. After a systematic literature search, the paper considers 22 primary empirical studies, including 105 wages and 76,765 data points from 22 platforms, eight different countries, and 10 years. It is found that, on average, microtask work results in an hourly wage of less than $6. This wage is significantly lower than the mean wage of online freelancers, which is roughly three times higher when not factoring in unpaid work. Hourly wages accounting for unpaid work, such as searching for tasks and communicating with requesters, tend to be significantly lower than wages not considering unpaid work. Legislators and researchers evaluating wages in crowdworking need to be aware of this bias when assessing hourly wages, given that the majority of the literature does not account for the effect of unpaid work time on crowdworking wages. To foster the comparability of different research results, the article suggests that scholars consider a wage correction factor to account for unpaid work. Finally, researchers should be aware that remuneration and work processes on crowdworking platforms can systematically affect the data collection method and the inclusion of unpaid work.
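    The abstract does not specify how the suggested wage correction factor is computed; as a rough illustration only, one common way to express it is the ratio of paid time to total (paid plus unpaid) time. The short Python sketch below uses hypothetical function names and figures and is not the authors' method.

        # Minimal sketch of an unpaid-work correction, with hypothetical names and numbers.
        def corrected_hourly_wage(earnings, paid_hours, unpaid_hours):
            """Hourly wage with unpaid time (task search, requester communication) included."""
            return earnings / (paid_hours + unpaid_hours)

        def wage_correction_factor(paid_hours, unpaid_hours):
            """Factor by which a paid-time-only wage overstates the effective wage."""
            return paid_hours / (paid_hours + unpaid_hours)

        # Example: a nominal $6.00/h over 5 paid hours, plus 1.5 unpaid hours of searching.
        nominal_wage = 6.00
        factor = wage_correction_factor(5.0, 1.5)     # ~0.77
        print(nominal_wage * factor)                  # ~4.62 effective hourly wage

    Under these assumed numbers, ignoring unpaid time overstates the effective wage by roughly 30 percent, which illustrates the bias the article describes.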

    Worker Retention, Response Quality, and Diversity in Microtask Crowdsourcing: An Experimental Investigation of the Potential for Priming Effects to Promote Project Goals

    Online microtask crowdsourcing platforms act as efficient resources for delegating small units of work, gathering data, generating ideas, and more. Members of research and business communities have incorporated crowdsourcing into problem-solving processes. When human workers contribute to a crowdsourcing task, they are subject to various stimuli as a result of task design. Inter-task priming effects - through which work is nonconsciously, yet significantly, influenced by exposure to certain stimuli - have been shown to affect microtask crowdsourcing responses in a variety of ways. Instead of simply being wary of the potential for priming effects to skew results, task administrators can utilize proven priming procedures in order to promote project goals. In a series of three experiments conducted on Amazon’s Mechanical Turk, we investigated the effects of proposed priming treatments on worker retention, response quality, and response diversity. In our first two experiments, we studied the effect of initial response freedom on sustained worker participation and response quality. We expected that workers who were granted greater levels of freedom in an initial response would be stimulated to complete more work and deliver higher quality work than workers originally constrained in their initial response possibilities. We found no significant relationship between the initial response freedom granted to workers and the amount of optional work they completed. The degree of initial response freedom also did not have a significant impact on subsequent response quality. However, the influence of inter-task effects was evident based on response tendencies for different question types. We found evidence that consistency in task structure may play a stronger role in promoting response quality than proposed priming procedures. In our final experiment, we studied the influence of a group-level priming treatment on response diversity. Instead of varying task structure for different workers, we varied the degree of overlap in question content distributed to different workers in a group. We expected groups of workers that were exposed to more diverse preliminary question sets to offer greater diversity in response to a subsequent question. Although differences in response diversity were revealed, no consistent trend between question content overlap and response diversity prevailed. Nevertheless, combining consistent task structure with crowd-level priming procedures - to encourage diversity in inter-task effects across the crowd - offers an exciting path for future study.
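    As a hedged illustration of the group-level manipulation described above (varying the overlap in preliminary question content across workers) and of one possible way to score response diversity, the Python sketch below uses hypothetical helper names; the abstract does not specify the authors' actual assignment procedure or diversity measure.

        # Illustrative sketch only: assigns preliminary question sets with a controlled
        # degree of overlap, and scores diversity of responses via Shannon entropy.
        import math
        import random
        from collections import Counter

        def build_question_sets(questions, n_workers, set_size, overlap):
            """overlap in [0, 1]: share of each worker's set drawn from a common pool."""
            shared_count = round(set_size * overlap)
            shared = random.sample(questions, shared_count)
            remaining = [q for q in questions if q not in shared]
            # Each worker's non-shared questions are drawn independently here.
            return [shared + random.sample(remaining, set_size - shared_count)
                    for _ in range(n_workers)]

        def response_diversity(responses):
            """Shannon entropy of the response distribution (higher = more diverse)."""
            counts = Counter(responses)
            total = len(responses)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())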

    Bigger Isn’t Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models

    The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm as applied to large training datasets crawled from pre-existing text on the Web, we extend the critique to challenge datasets custom-created by crowdworkers. We present several sets of criticisms, where ethical and scientific issues in language model research reinforce each other: labour injustices in crowdwork, dataset quality and inscrutability, inequities in the research community, and centralized corporate control of the technology. We also present a new type of tool for researchers to use in examining large datasets when evaluating them for quality.