
    Worker Retention, Response Quality, and Diversity in Microtask Crowdsourcing: An Experimental Investigation of the Potential for Priming Effects to Promote Project Goals

    Online microtask crowdsourcing platforms act as efficient resources for delegating small units of work, gathering data, generating ideas, and more. Members of research and business communities have incorporated crowdsourcing into problem-solving processes. When human workers contribute to a crowdsourcing task, they are subject to various stimuli as a result of task design. Inter-task priming effects - through which work is nonconsciously, yet significantly, influenced by exposure to certain stimuli - have been shown to affect microtask crowdsourcing responses in a variety of ways. Instead of simply being wary of the potential for priming effects to skew results, task administrators can apply proven priming procedures to promote project goals. In a series of three experiments conducted on Amazon’s Mechanical Turk, we investigated the effects of proposed priming treatments on worker retention, response quality, and response diversity. In our first two experiments, we studied the effect of initial response freedom on sustained worker participation and response quality. We expected that workers who were granted greater freedom in an initial response would be stimulated to complete more work and deliver higher-quality work than workers constrained in their initial response possibilities. We found no significant relationship between the initial response freedom granted to workers and the amount of optional work they completed. The degree of initial response freedom also did not have a significant impact on subsequent response quality. However, the influence of inter-task effects was evident in response tendencies for different question types. We found evidence that consistency in task structure may play a stronger role in promoting response quality than the proposed priming procedures. In our final experiment, we studied the influence of a group-level priming treatment on response diversity. Instead of varying task structure for different workers, we varied the degree of overlap in question content distributed to different workers in a group. We expected groups of workers exposed to more diverse preliminary question sets to offer greater diversity in response to a subsequent question. Although differences in response diversity were observed, no consistent trend between question content overlap and response diversity emerged. Nevertheless, combining consistent task structure with crowd-level priming procedures - to encourage diversity in inter-task effects across the crowd - offers an exciting path for future study.
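    The group-level treatment in the final experiment turns on how response diversity is quantified. The abstract does not give the authors' metric, so the Python sketch below is purely illustrative: it scores a worker group's free-text responses by mean pairwise Jaccard distance over bag-of-words tokens, with hypothetical group data.

```python
from itertools import combinations

def jaccard_distance(a: set, b: set) -> float:
    """1 minus |A intersect B| / |A union B|; 1.0 means no shared tokens."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def group_diversity(responses: list[str]) -> float:
    """Mean pairwise Jaccard distance over bag-of-words responses."""
    token_sets = [set(r.lower().split()) for r in responses]
    pairs = list(combinations(token_sets, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Hypothetical groups answering the same final question after seeing
# preliminary question sets with high vs. low content overlap.
high_overlap_group = ["faster shipping", "faster delivery", "quicker shipping"]
low_overlap_group = ["faster shipping", "better packaging", "loyalty rewards"]

print(group_diversity(high_overlap_group))  # lower score = less diverse
print(group_diversity(low_overlap_group))   # higher score = more diverse
```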

    Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

    Crowd-powered conversational assistants have been shown to be more robust than automated systems, but at the cost of higher response latency and monetary cost. A promising direction is to combine the two approaches for high-quality, low-latency, and low-cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to innovate further on the underlying automated components in the context of a deployed open-domain dialog system.
    Comment: 10 pages. To appear in the Proceedings of the Conference on Human Factors in Computing Systems 2018 (CHI '18).
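    As a rough illustration of the architecture described above (chatbot integration, answer reuse, and learned auto-approval), here is a minimal Python sketch. The function names, the exact-match reuse heuristic, the 0.9 approval threshold, and the crowd_vote fallback are all assumptions made for illustration, not Evorus's actual components.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    text: str
    source: str   # "chatbot" or "reuse"
    score: float  # learned estimate that the crowd would approve this reply

def propose_candidates(message: str,
                       chatbots: list[Callable[[str], str]],
                       prior_answers: dict[str, str]) -> list[Candidate]:
    """Gather reply candidates from integrated chatbots and from
    previously approved crowd answers (naive exact-match reuse)."""
    candidates = [Candidate(bot(message), "chatbot", 0.0) for bot in chatbots]
    if message in prior_answers:
        candidates.append(Candidate(prior_answers[message], "reuse", 0.0))
    return candidates

def crowd_vote(message: str, candidates: list[Candidate]) -> str:
    """Placeholder for the human path: post the candidates to crowd
    workers and return the reply they upvote."""
    return candidates[0].text if candidates else "(ask the crowd)"

def respond(message, chatbots, prior_answers, approval_model, threshold=0.9):
    """Auto-send a high-confidence candidate; otherwise defer to the crowd."""
    candidates = propose_candidates(message, chatbots, prior_answers)
    for c in candidates:
        c.score = approval_model(c.text)        # learned approval estimate
    best = max(candidates, key=lambda c: c.score, default=None)
    if best and best.score >= threshold:
        return best.text                        # automated path
    return crowd_vote(message, candidates)      # human-in-the-loop path
```

    The design point is that the learned approval model can route more traffic through the automated path as it improves, while low-confidence candidates still fall back to crowd voting.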

    Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems

    Crowdsourcing markets have emerged as a popular platform for matching available workers with tasks to complete. The payment for a particular task is typically set by the task's requester, and may be adjusted based on the quality of the completed work, for example, through the use of "bonus" payments. In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks. We consider a multi-round version of the well-known principal-agent model, whereby in each round a worker strategically chooses an effort level that is not directly observable by the requester. In particular, our formulation significantly generalizes the budget-free online task pricing problems studied in prior work. We treat this problem as a multi-armed bandit problem, with each "arm" representing a potential contract. To cope with the large (in fact, infinite) number of arms, we propose a new algorithm, AgnosticZooming, which discretizes the contract space into a finite number of regions, effectively treating each region as a single arm. This discretization is adaptively refined, so that more promising regions of the contract space are eventually discretized more finely. We analyze this algorithm, showing that it achieves regret sublinear in the time horizon and substantially improves over non-adaptive discretization (the only competing approach in the literature). Our results advance the state of the art on several different topics: the theory of crowdsourcing markets, principal-agent problems, multi-armed bandits, and dynamic pricing.
    Comment: This is the full version of a paper in the ACM Conference on Economics and Computation (ACM-EC), 2014.
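    The core idea, treating adaptively refined regions of the contract space as bandit arms, can be sketched in a simplified one-dimensional form. The UCB-style index, the split-after-50-samples rule, and the synthetic reward environment below are illustrative assumptions; this is a caricature of adaptive discretization, not the paper's actual AgnosticZooming rule.

```python
import math
import random

class Region:
    """A sub-interval of the contract space, treated as one bandit arm."""
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi
        self.n, self.total = 0, 0.0
    def midpoint(self) -> float:
        return (self.lo + self.hi) / 2
    def index(self, t: int) -> float:
        if self.n == 0:
            return float("inf")       # unexplored regions are tried first
        mean = self.total / self.n
        # confidence term plus a width bonus that favors coarse regions
        return mean + math.sqrt(2 * math.log(t) / self.n) + (self.hi - self.lo)

def adaptive_discretization(pull, horizon: int, split_after: int = 50) -> float:
    """Play sub-intervals of [0, 1] as arms; halve a region once it has
    enough samples (statistics are reset on split, for simplicity)."""
    regions = [Region(0.0, 1.0)]
    for t in range(1, horizon + 1):
        r = max(regions, key=lambda reg: reg.index(t))
        r.n += 1
        r.total += pull(r.midpoint())   # offer the region's midpoint contract
        if r.n >= split_after and r.hi - r.lo > 1e-3:
            regions.remove(r)
            regions += [Region(r.lo, r.midpoint()), Region(r.midpoint(), r.hi)]
    return max(regions, key=lambda reg: reg.total / max(reg.n, 1)).midpoint()

# Hypothetical environment: requester utility peaks at a bonus level of 0.5.
best = adaptive_discretization(
    pull=lambda b: random.gauss(4 * b * (1 - b), 0.1), horizon=5000)
print(f"estimated best bonus level: {best:.2f}")
```

    Halving promising regions concentrates samples near the apparent optimum, which is the intuition behind the improvement over a fixed, non-adaptive grid.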

    Tapping into the Collective Creativity of the Crowd: The Effectiveness of Key Incentives in Fostering Creative Crowdsourcing

    To better understand the conditions that most effectively stimulate creative participation online, a crowdsourcing project was implemented on Amazon’s Mechanical Turk, collecting 4,200 written and visual submissions from online participants. An experimental research design tested the impact of specific incentive structures (i.e., financial rewards, bonuses, specification of project purpose, attribution of authorship credit) on the outcomes of creative participation (quantity of submissions, quality of submissions, time spent on task). The study found that extrinsic rewards (i.e., higher pay and bonuses) are effective in encouraging participants to accept the creative task, whereas the strategies that boost the creativity of the submissions are offering a bonus, mentioning a charitable purpose, and giving contributors authorship credit. These findings help illuminate the factors that have the greatest impact on the quality and quantity of online creative participation, thus making a vital contribution to our understanding of digital creativity.

    Crowdsourcing as a way to access external knowledge for innovation

    This paper focuses on “crowdsourcing” as a significant trend in the new paradigm of open innovation (Chesbrough 2006; Chesbrough & Appleyard 2007). Crowdsourcing conveys the idea of opening R&D processes to “the crowd” through a web 2.0 infrastructure. Based on two case studies of crowdsourcing web startups (Wilogo and CrowdSpirit), the paper aims to build a framework to characterize and interpret the tension between value creation by a community and value capture by a private economic actor. Contributing to the discussions on “hybrid organizational forms” in organizational studies (Bruce & Jordan 2007), the analysis examines how these new models combine various forms of relationships and exchanges (market or non-market). It describes how crowdsourcing introduces new patterns of control, incentives, and co-ordination mechanisms.
    Keywords: community; crowdsourcing; innovation; hybrid organizational forms; platform; web 2.0

    Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques and Assurance Actions

    Crowdsourcing enables one to leverage the intelligence and wisdom of potentially large groups of individuals to solve problems. Common problems approached with crowdsourcing are labeling images, translating or transcribing text, providing opinions or ideas, and similar - all tasks that computers are not good at or where they may even fail altogether. The introduction of humans into computations and/or everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives, and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, this survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and describes the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on future research directions.
    Comment: 40 pages main paper, 5 pages appendix.
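    Two widely used assessment and assurance techniques from this literature, gold-standard questions and redundant labeling with majority vote, are easy to make concrete. The thresholds and toy data in the Python sketch below are illustrative assumptions, not prescriptions from the survey.

```python
from collections import Counter

def gold_accuracy(worker_answers: dict[str, str], gold: dict[str, str]) -> float:
    """Fraction of embedded gold (known-answer) tasks the worker got right."""
    hits = sum(worker_answers.get(task) == answer for task, answer in gold.items())
    return hits / len(gold)

def majority_label(labels: list[str], min_agreement: float = 0.6):
    """Aggregate redundant labels; return None when agreement is too low,
    a signal to route the task to additional workers."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= min_agreement else None

# Toy run: drop workers below a gold-accuracy threshold, then aggregate
# the remaining labels for the real task t1.
answers = {"w1": {"t1": "cat", "g1": "dog"},
           "w2": {"t1": "cat", "g1": "dog"},
           "w3": {"t1": "bird", "g1": "cat"}}
gold = {"g1": "dog"}
trusted = [w for w, a in answers.items() if gold_accuracy(a, gold) >= 0.8]
print(majority_label([answers[w]["t1"] for w in trusted]))  # prints "cat"
```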

    Incentive Mechanisms for Participatory Sensing: Survey and Research Challenges

    Participatory sensing is a powerful paradigm that takes advantage of smartphones to collect and analyze data beyond the scale of what was previously possible. Given that participatory sensing systems rely completely on users' willingness to submit up-to-date and accurate information, it is paramount to effectively incentivize their active and reliable participation. In this paper, we survey the existing literature on incentive mechanisms for participatory sensing systems. In particular, we present a taxonomy of existing incentive mechanisms, which are then discussed in depth by comparing and contrasting the different approaches. Finally, we discuss an agenda of open research challenges in incentivizing users in participatory sensing.
    Comment: Updated version, 4/25/201
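    One recurring mechanism class in this literature is the reverse auction, in which users bid the payment they require to contribute sensing reports and the platform selects contributors under a budget. The greedy selection below is a minimal Python sketch with hypothetical bids; real mechanisms in the surveyed literature additionally address truthful pricing, which this sketch omits.

```python
def select_participants(bids: dict[str, float], budget: float) -> list[str]:
    """Budgeted reverse auction: each user bids the payment they require
    for one sensing report; the platform greedily buys the cheapest
    reports until the budget is exhausted."""
    selected, spent = [], 0.0
    for user, bid in sorted(bids.items(), key=lambda kv: kv[1]):
        if spent + bid > budget:
            break
        selected.append(user)
        spent += bid
    return selected

# Hypothetical bids (in dollars) from four smartphone users.
bids = {"alice": 0.10, "bob": 0.25, "carol": 0.15, "dave": 0.40}
print(select_participants(bids, budget=0.50))  # ['alice', 'carol', 'bob']
```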