Incentivizing High Quality Crowdwork
We study the causal effects of financial incentives on the quality of
crowdwork. We focus on performance-based payments (PBPs), bonus payments
awarded to workers for producing high quality work. We design and run
randomized behavioral experiments on the popular crowdsourcing platform Amazon
Mechanical Turk with the goal of understanding when, where, and why PBPs help,
identifying properties of the payment, payment structure, and the task itself
that make them most effective. We provide examples of tasks for which PBPs do
improve quality. For such tasks, the effectiveness of PBPs is not too sensitive
to the threshold for quality required to receive the bonus, while the magnitude
of the bonus must be large enough to make the reward salient. We also present
examples of tasks for which PBPs do not improve quality. Our results suggest
that for PBPs to improve quality, the task must be effort-responsive: the task
must allow workers to produce higher quality work by exerting more effort. We
also give a simple method to determine if a task is effort-responsive a priori.
Furthermore, our experiments suggest that all payments on Mechanical Turk are,
to some degree, implicitly performance-based in that workers believe their work
may be rejected if their performance is sufficiently poor. Finally, we propose
a new model of worker behavior that extends the standard principal-agent model
from economics to include a worker's subjective beliefs about his likelihood of
being paid, and show that the predictions of this model are in line with our
experimental findings. This model may be useful as a foundation for theoretical
studies of incentives in crowdsourcing markets.
Comment: This is a preprint of an article accepted for publication in WWW
© 2015 International World Wide Web Conference Committee.
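A minimal sketch of such an extended principal-agent model, in our own notation (the paper's actual formulation may differ): a worker choosing effort level $e$ maximizes

```latex
U(e) \;=\; \hat{p}(e)\,\bigl(b + r \cdot \Pr[\,q(e) \ge \tau\,]\bigr) \;-\; c(e)
```

where $b$ is the base payment, $r$ the bonus, $\tau$ the quality threshold for the bonus, $q(e)$ the (stochastic) quality produced at effort $e$, $c(e)$ the cost of effort, and $\hat{p}(e)$ the worker's subjective belief that the work will be accepted and paid at all. In this sketch, PBPs raise the chosen effort only when greater effort increases $\Pr[q(e) \ge \tau]$, i.e., when the task is effort-responsive; the $\hat{p}$ term captures the observation that even fixed payments are implicitly performance-based.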
Crowdsourcing in China: Exploring the Work Experience of Solo Crowdworkers and Crowdfarm Workers
Recent research highlights the potential of crowdsourcing in China. Yet very few studies explore the workplace context and experiences of Chinese crowdworkers. Those that do focus mainly on the work experiences of solo crowdworkers but do not deal with issues pertaining to the substantial number of people working in "crowdfarms". This article addresses this gap as one of its primary concerns. Drawing on a study that involves 48 participants, our research explores, compares, and contrasts the work experiences of solo crowdworkers with those of crowdfarm workers. Our findings illustrate that the work experiences and context of solo workers and crowdfarm workers differ substantially with regard to their motivations, the ways they engage with crowdsourcing, the tasks they work on, and the crowdsourcing platforms they utilize. Overall, our study contributes to furthering understanding of the work experiences of crowdworkers in China.
Finish Them!: Pricing Algorithms for Human Computation
Given a batch of human computation tasks, a commonly ignored aspect is how
the price (i.e., the reward paid to human workers) of these tasks must be set
or varied in order to meet latency or cost constraints. Often, the price is set
up-front and not modified, leading to either a much higher monetary cost than
needed (if the price is set too high), or to a much larger latency than
expected (if the price is set too low). Leveraging a pricing model from prior
work, we develop algorithms to optimally set and then vary price over time in
order to meet (a) a user-specified deadline while minimizing total monetary
cost, or (b) a user-specified monetary budget while minimizing total elapsed
time. We leverage techniques from decision theory (specifically, Markov
Decision Processes) for both problems, and demonstrate that our techniques
lead to up to a 30% reduction in cost over schemes proposed in prior work.
Furthermore, we develop techniques to speed up the computation, enabling
users to leverage the price-setting algorithms on-the-fly.
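The deadline-constrained variant can be illustrated with a toy finite-horizon dynamic program (a small MDP). The completion-probability model, prices, and penalty below are invented for illustration; they are not the pricing model from the paper:

```python
# Finite-horizon dynamic program for setting a task's price under a
# deadline. State = number of time steps remaining; action = posted price.
# completion_prob(price) is a made-up "higher price, faster pickup" model.

def optimal_prices(deadline, prices, completion_prob, penalty):
    """Return the expected cost-to-go V[t] and the optimal price best[t]
    for each number of remaining steps t, for a single task that must
    finish within `deadline` steps or incur `penalty`."""
    V = [0.0] * (deadline + 1)
    V[0] = penalty                      # out of time: pay the penalty
    best = [None] * (deadline + 1)
    for t in range(1, deadline + 1):
        candidates = []
        for price in prices:
            p = completion_prob(price)
            # pay `price` if a worker completes the task now (prob. p),
            # otherwise continue with one step less remaining
            cost = p * price + (1 - p) * V[t - 1]
            candidates.append((cost, price))
        V[t], best[t] = min(candidates)
    return V, best

# Illustrative completion model: price 1 -> 20%, 3 -> 50%, 5 -> 90%.
probs = {1: 0.2, 3: 0.5, 5: 0.9}
V, best = optimal_prices(deadline=3, prices=[1, 3, 5],
                         completion_prob=probs.get, penalty=100.0)
```

With these toy numbers the policy starts cheap (price 3 with three steps left) and escalates to price 5 as the deadline nears, matching the intuition that price should be varied over time rather than fixed up-front.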
Ethical issues around crowdwork: How can blockchain technology help?
The practice of marketing has become increasingly technology-dependent, making organisations reliant on fragmented information systems that extend beyond organisational boundaries and requiring marketing workers to develop technology-related knowledge and/or collaborate more closely with those who have it. Despite massive investment in marketing technology, there has been little academic research on the intersection between marketing and technology knowledge. Drawing on three examples, we illustrate how complex and IS-dependent the practice of marketing and marketing decision-making have become. We then analyse those examples through the lens of knowledge management. Specifically, we consider the differences between traditional and modern marketing ecosystems and the implications for knowledge work, knowledge management, and decision-making at the level of organisations and ecosystems. We propose a provisional conceptual framework for understanding how market, marketing, and technology knowledge have become intertwined, and propose a research agenda for examining that more closely.
The Dark Side of Micro-Task Marketplaces: Characterizing Fiverr and Automatically Detecting Crowdturfing
As human computation on crowdsourcing systems has become popular and powerful
for performing tasks, malicious users have started misusing these systems by
posting malicious tasks, propagating manipulated contents, and targeting
popular web services such as online social networks and search engines.
Recently, these malicious users moved to Fiverr, a fast-growing micro-task
marketplace, where workers can post crowdturfing tasks (i.e., astroturfing
campaigns run by crowd workers) and malicious customers can purchase those
tasks for only $5. In this paper, we present a comprehensive analysis of
Fiverr. First, we identify the most popular types of crowdturfing tasks found
in this marketplace and conduct case studies for these crowdturfing tasks.
Then, we build crowdturfing task detection classifiers to filter these tasks
and prevent them from becoming active in the marketplace. Our experimental
results show that the proposed classification approach effectively detects
crowdturfing tasks, achieving 97.35% accuracy. Finally, we analyze the real
world impact of crowdturfing tasks by purchasing active Fiverr tasks and
quantifying their impact on a target site. As part of this analysis, we show
that current security systems inadequately detect crowdsourced manipulation,
which confirms the necessity of our proposed crowdturfing task detection
approach.
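A text classifier of the kind described above can be sketched with a minimal bag-of-words Naive Bayes over task descriptions. The training examples and features here are invented for illustration; the paper's actual feature set and classifier are not reproduced:

```python
# Minimal Naive Bayes text classifier sketch for flagging crowdturfing
# task listings. Laplace smoothing avoids zero probabilities for words
# unseen in a class during training.
from collections import Counter
import math

def train(docs):
    """docs: list of (text, label). Returns per-label word counts and
    per-label document counts (used as class priors)."""
    counts, totals = {}, Counter()
    for text, label in docs:
        c = counts.setdefault(label, Counter())
        for word in text.lower().split():
            c[word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the highest smoothed log-probability."""
    vocab = set(w for c in counts.values() for w in c)
    n = sum(totals.values())
    best_label, best_score = None, -math.inf
    for label, c in counts.items():
        score = math.log(totals[label] / n)          # class prior
        denom = sum(c.values()) + len(vocab)         # Laplace smoothing
        for word in text.lower().split():
            score += math.log((c[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy training data: astroturfing-style gigs vs. ordinary gigs.
docs = [
    ("post 100 positive reviews for my product", "crowdturf"),
    ("create fake followers for social account", "crowdturf"),
    ("design a logo for my startup", "legit"),
    ("transcribe this audio interview", "legit"),
]
counts, totals = train(docs)
```

A real deployment would of course use far richer features (seller profiles, pricing, linked targets) and a properly evaluated model, as the paper's 97.35% accuracy figure suggests.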
Rating mechanisms for sustainability of crowdsourcing platforms
Crowdsourcing leverages the diverse skill sets of large collections of individual contributors to solve problems and execute projects, where contributors may vary significantly in experience, expertise, and interest in completing tasks. Hence, to ensure the satisfaction of its task requesters, most existing crowdsourcing platforms focus primarily on supervising contributors' behavior. This lopsided approach to supervision negatively impacts contributor engagement and platform sustainability.
A Study of Ethics in Crowd Work-Based Research
Crowd work as a form of a social-technical system has become a popular setting for conducting and distributing academic research. Crowd work platforms such as Amazon Mechanical Turk (MTurk) are widely used by academic researchers. Recent scholarship has highlighted the importance of ethical issues because they could affect the long-term development and application of crowd work in various fields such as the gig economy. However, little study or deliberation has been conducted on the ethical issues associated with academic research in this context. Current sources for ethical research practice, such as the Belmont Report, have not been examined thoroughly on how they should be applied to tackle the ethical issues in crowd work-based research such as those in data collection and usage. Hence, how crowd work-based research should be conducted to make it respectful, beneficent, and just is still an open question.
This dissertation research has pursued this open question by interviewing 15 academic researchers and 17 IRB directors and analysts in terms of their perceptions and reflections on ethics in research on MTurk; meanwhile, it has analyzed 15 research guidelines and consent templates for research on MTurk and 14 published papers from the interviewed scholars. Based on analyzing these different sources of data, this dissertation research has identified three dimensions of ethics in crowd work-based research, including ethical issues in payment, data, and human subjects. This dissertation research also uncovered the "original sin" of these ethical issues and discussed its impact in academia, as well as the limitations of the Belmont Report and AoIR Ethical Guidelines 3.0 for Internet Research. The findings and implications of this research can help researchers and IRBs be more conscious about ethics in crowd work-based research and also inspire academic associations such as AoIR to develop ethical guidelines that can address these ethical issues.
Crowdsourcing Accessibility: Human-Powered Access Technologies
People with disabilities have always engaged the people around them in order to circumvent inaccessible situations, allowing them to live more independently and get things done in their everyday lives. Increasing connectivity is allowing this approach to be extended to wherever and whenever it is needed. Technology can leverage this human workforce to accomplish tasks beyond the capabilities of computers, increasing how accessible the world is for people with disabilities. This article outlines the growth of online human support, outlines a number of projects in this space, and presents a set of challenges and opportunities for this work going forward.
Geometric Reasoning With a Virtual Workforce (Crowdsourcing for CAD/CAM)
This paper reports the initial results of employing a commercial Crowdsourcing (aka Micro-outsourcing) service to provide geometric analysis of complex 3D models of mechanical components. Although Crowdsourcing sites (which distribute browser-based tasks to potentially large numbers of anonymous workers on the Internet) are well established for image analysis and text manipulation, there is little academic work on the effectiveness or limitations of the approach. The work reported here describes the initial results of using Crowdsourcing to determine the 'best' canonical, or characteristic, views of complex 3D models of engineering components. The results suggest that the approach is a cheap, fast, and effective method of solving what is a computationally difficult problem.