A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality
Microtask crowdsourcing is increasingly critical to the creation of extremely
large datasets. As a result, crowd workers spend weeks or months repeating the
exact same tasks, making it necessary to understand their behavior over these
long periods of time. We utilize three large, longitudinal datasets of nine
million annotations collected from Amazon Mechanical Turk to examine claims
that workers fatigue or satisfice over these long periods, producing lower
quality work. We find that, contrary to these claims, workers are extremely
stable in their quality over the entire period. To understand whether workers
set their quality based on the task's requirements for acceptance, we then
perform an experiment where we vary the required quality for a large
crowdsourcing task. Workers did not adjust their quality based on the
acceptance threshold: workers who were above the threshold continued working at
their usual quality level, and workers below the threshold self-selected
themselves out of the task. Capitalizing on this consistency, we demonstrate
that it is possible to predict workers' long-term quality using just a glimpse
of their quality on the first five tasks. Comment: 10 pages, 11 figures, accepted CSCW 201
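The abstract's core claim, that a worker's long-term quality can be forecast from a "glimpse" of their first five tasks, can be sketched as a simple baseline predictor. The function name, the scoring scale, and the use of a plain early-task mean are illustrative assumptions, not the paper's actual model.

```python
from statistics import mean

def predict_long_term_quality(task_scores, glimpse_size=5):
    """Estimate a worker's long-term quality from an early 'glimpse'.

    Assumes per-task quality scores in [0, 1]; uses the mean of the
    first `glimpse_size` scores as the predictor (an illustrative
    baseline, not the paper's exact method).
    """
    if len(task_scores) < glimpse_size:
        raise ValueError("need at least glimpse_size scores")
    return mean(task_scores[:glimpse_size])

# A stable worker: the early mean closely tracks later quality.
scores = [0.9, 0.85, 0.92, 0.88, 0.9, 0.89, 0.91, 0.87]
early_estimate = predict_long_term_quality(scores)
```

Because the paper finds worker quality to be extremely stable over time, even a crude early-window average like this can be a useful long-horizon signal.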
Content marketplaces as digital labour platforms: towards accountable algorithmic management and decent work for content creators
YouTube is probably the world’s largest digital labour platform. YouTube creators report decent work deficits similar to those of other platform workers: economic and psychosocial impacts from opaque, error-prone algorithmic management; no collective bargaining; and possible employment misclassification. In December 2021, the European Commission announced a new proposal for a Directive ‘on improving working conditions in platform work’ (the ‘Platform Work Directive’). However, the definition of ‘platform work’ in the proposed Directive may exclude YouTube.
Commercial laws, however, may apply. In the US state of California, for example, Civil Code §1749.7 (previously AB 1790 [2019]) governs the relationship between ‘marketplaces’ and ‘marketplace sellers.’ In the European Union, Regulation 2019/1150 (the ‘Platform-to-Business Regulation’) similarly provides protections to ‘business users of online intermediation services.’
While the protections provided by these ‘marketplace laws’ are less comprehensive than those provided by the proposed Platform Work Directive, they might address some of the decent work deficits experienced by workers on content marketplaces, especially those arising from opaque and error-prone algorithmic management practices. Yet they have gone relatively underexamined in policy discussions on improving working conditions in platform work. Additionally, to our knowledge they have not been used or referred to in any legal action or public dispute against YouTube or any other digital labour platform.
This paper uses the case of YouTube to consider the regulatory situation of ‘content marketplaces,’ a category of labour platform defined in the literature on working conditions in platform work but underdiscussed in policy research and proposals on platform work regulation, at least compared to location-based, microtask, and freelance platforms. The paper makes four contributions. First, it summarizes the literature on YouTube creators’ working conditions and collective action efforts, highlighting that creators on YouTube and other content marketplaces face challenges similar to those of other platform workers. Second, it considers the definition of ‘digital labour platform’ in the proposed EU Platform Work Directive and notes that YouTube and other content marketplaces may be excluded, despite their relevance. Third, it compares the California and EU ‘marketplace laws’ to the proposed Platform Work Directive, concluding that the marketplace laws, while valuable, do not fully address the decent work deficits experienced by content marketplace creators. Fourth, it presents policy options for addressing these deficits from the perspective of international labour standards.
Fairness and Transparency in Crowdsourcing
Despite the success of crowdsourcing, the question of ethics has not yet been addressed in its entirety. Existing efforts have studied fairness in worker compensation and in helping requesters detect malevolent workers. In this paper, we propose fairness axioms that generalize existing work and pave the way to studying fairness for task assignment, task completion, and worker compensation. Transparency, on the other hand, has been addressed with the development of plug-ins and forums to track workers' performance and rate requesters. Similarly to fairness, we define transparency axioms and advocate the need to address transparency in a holistic manner by providing declarative specifications. We also discuss how fairness and transparency could be enforced and evaluated in a crowdsourcing platform.
Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks might have dependencies among each other. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. By using this framework, we review
human computation in the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing existing work in
this direction, we also highlight that the human aspects of the workers in such
systems introduce new challenges in terms of, for example, task assignment,
dependency management, and fault prevention and tolerance. We discuss how these
challenges relate to distributed systems and other areas of knowledge. Comment: 3 figures, 1 table
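The dependency management challenge mentioned above can be illustrated with a small sketch: given human computation tasks with prerequisites, derive an order in which tasks can be assigned to workers. The task names and dependency structure are hypothetical, and the standard-library topological sort stands in for whatever scheduling strategy a real platform would use.

```python
from graphlib import TopologicalSorter

# Hypothetical tasks mapped to their prerequisite tasks.
deps = {
    "label_images": set(),
    "verify_labels": {"label_images"},
    "aggregate_results": {"verify_labels"},
}

# A dependency-respecting order for assigning tasks to workers:
# each task appears only after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
```

In a real system, tasks at the same dependency level could additionally be dispatched to workers in parallel, which is where the human aspects (availability, skill, fault tolerance) come into play.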
An Examination of the Work Practices of Crowdfarms
Crowdsourcing is a new value-creation business model. Annual revenue in the Chinese market alone is hundreds of millions of dollars, yet few studies have focused on the practices of the Chinese crowdsourcing workforce, and those that do mainly focus on solo crowdworkers. We extend our study of solo crowdworker practices to include crowdfarms, a relatively new entry to the gig economy: small companies that carry out crowdwork as a key part of their business. We report on interviews with people who work in 53 crowdfarms. We describe how crowdfarms procure jobs, carry out macrotasks and microtasks, manage their reputation, and employ different management practices to motivate crowdworkers and customers.
Modus Operandi of Crowd Workers: The Invisible Role of Microtask Work Environments
The ubiquity of the Internet and the widespread proliferation of electronic devices has resulted in flourishing microtask
crowdsourcing marketplaces, such as Amazon MTurk. An aspect that has remained largely invisible in microtask crowdsourcing
is that of work environments, defined as the hardware and software affordances at the disposal of crowd workers that are used
to complete microtasks on crowdsourcing platforms. In this paper, we reveal the significant role of work environments in the
shaping of crowd work. First, through a pilot study surveying the good and bad experiences workers had with UI elements in
crowd work, we identified the typical issues workers face. Based on these findings, we then deployed over 100 distinct microtasks
on CrowdFlower, addressing workers in India and the USA in two identical batches. These tasks emulate the good and bad UI
element designs that characterize crowdsourcing microtasks. We recorded hardware specifics such as CPU speed and device
type, apart from software specifics including the browsers used to complete tasks, operating systems on the device, and other
properties that define the work environments of crowd workers. Our findings indicate that crowd workers are embedded in a
variety of work environments which influence the quality of work produced. To confirm and validate our data-driven findings we
then carried out semi-structured interviews with a sample of Indian and American crowd workers from this platform. Depending
on the design of UI elements in microtasks, we found that some work environments are more suitable than others to support
crowd workers. Based on our overall findings from all three studies, we introduce ModOp, a tool that helps
design crowdsourcing microtasks suitable for diverse crowd work environments. We empirically show that using
ModOp reduces the cognitive load of workers, thereby improving their user experience without affecting accuracy
or task completion time.
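The kind of analysis this abstract describes, relating recorded work-environment properties to output quality, can be illustrated with a small grouping sketch. The records, field layout, and accuracy values below are hypothetical, not the study's data.

```python
from collections import defaultdict

# Hypothetical task records: (device_type, browser, task_accuracy).
records = [
    ("desktop", "Chrome", 0.94),
    ("desktop", "Firefox", 0.91),
    ("mobile", "Chrome", 0.78),
    ("mobile", "Safari", 0.74),
    ("desktop", "Chrome", 0.96),
]

def mean_accuracy_by(records, key_index):
    """Group accuracies by one environment attribute and average each group."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key_index]].append(rec[2])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Average accuracy per device type, e.g. comparing desktop vs mobile workers.
by_device = mean_accuracy_by(records, 0)
```

Slicing the same records by browser or operating system (other indices in the tuple) follows the same pattern, which is essentially how environment attributes can be related to work quality.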