1,244 research outputs found

    A data-driven game theoretic strategy for developers in software crowdsourcing: a case study

    Crowdsourcing has the advantages of being cost-effective and saving time; it is a typical embodiment of collective wisdom and of collaborative development by community workers. However, this development paradigm of software crowdsourcing has not been widely adopted. One important reason is that requesters have limited knowledge of crowd workers’ professional skills and qualities. Another is that crowd workers in a competition may not receive an appropriate reward, which undermines their motivation. To address this problem, this paper proposes a method of maximizing reward based on workers’ crowdsourcing ability, so that workers can choose tasks suited to their own abilities and obtain appropriate bonuses. Our method comprises two steps. First, it puts forward a method to evaluate crowd workers’ ability and then analyzes the intensity of competition for tasks at Topcoder.com, an open community crowdsourcing platform, on the basis of that ability. Second, following dynamic programming ideas, it builds complete-information game models for different cases and offers a reward-maximization strategy for workers by solving for a mixed-strategy Nash equilibrium. This paper employs crowdsourcing data from Topcoder.com to carry out experiments. The experimental results show that the distribution of workers’ crowdsourcing ability is uneven and, to some extent, reflects the activity level of crowdsourcing tasks. Meanwhile, by following the reward-maximization strategy, a crowd worker can obtain the theoretically maximum reward.
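
    The solution concept the abstract names can be sketched for the simplest case. The snippet below solves for the interior mixed-strategy Nash equilibrium of a hypothetical 2x2 game (two workers, two tasks) via the standard indifference conditions; the payoff numbers are illustrative assumptions, not data from the study.

```python
# Interior mixed-strategy Nash equilibrium of a 2x2 game via
# indifference conditions. A minimal sketch of the solution concept
# used in the paper; the payoffs below are hypothetical.

def mixed_equilibrium(A, B):
    """Return (p, q): row plays action 0 with prob. p, column with prob. q."""
    # Row's mix p is chosen so the column player is indifferent
    # between her two actions.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # Column's mix q is chosen so the row player is indifferent
    # between his two actions.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])
    return p, q

# Two workers each choosing between a high-reward contested task and a
# lower-reward uncontested one (illustrative expected rewards).
A = [[2, 6], [5, 3]]   # worker 1's payoffs (rows = worker 1's choice)
B = [[2, 5], [6, 3]]   # worker 2's payoffs (symmetric situation)
p, q = mixed_equilibrium(A, B)
```

    With these symmetric payoffs each worker randomizes evenly between the two tasks; changing the rewards shifts the equilibrium mix, which is the lever the paper's strategy exploits.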

    Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation

    A human computation system can be viewed as a distributed system in which the processors are humans, called workers. Such systems harness the cognitive power of a group of workers connected to the Internet to execute relatively simple tasks whose solutions, once grouped, solve a problem that systems equipped with only machines could not solve satisfactorily. Examples of such systems are Amazon Mechanical Turk and the Zooniverse platform. A human computation application comprises a group of tasks, each of which can be performed by one worker, and tasks may have dependencies among each other. In this study, we propose a theoretical framework to analyze this type of application from a distributed systems point of view. Our framework is established on three dimensions that represent different perspectives from which human computation applications can be approached: quality-of-service requirements, design and management strategies, and human aspects. Using this framework, we review human computation from the perspective of programmers seeking to improve the design of human computation applications and of managers seeking to increase the effectiveness of human computation infrastructures in running such applications. In doing so, besides integrating and organizing what has been done in this direction, we also highlight that the human aspects of the workers in such systems introduce new challenges in terms of, for example, task assignment, dependency management, and fault prevention and tolerance. We discuss how these challenges relate to distributed systems and to other areas of knowledge.
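
    The task model the abstract describes, tasks performed by individual workers with dependencies among them, can be represented as a directed acyclic graph. The sketch below is an illustrative toy application (the task names are invented, not from the paper) showing how a platform could derive a valid assignment order from such dependencies.

```python
# A human computation application modeled as a task DAG: a task can be
# assigned to a worker only after its prerequisites are done.
# Illustrative sketch; the task names are hypothetical.
from graphlib import TopologicalSorter

deps = {                                # task -> set of prerequisite tasks
    "label_image": set(),
    "verify_label": {"label_image"},
    "transcribe": set(),
    "aggregate": {"verify_label", "transcribe"},
}

# One valid order in which the platform could hand tasks to workers.
order = list(TopologicalSorter(deps).static_order())
```

    Dependency management, fault tolerance, and task assignment all operate over this graph: a failed or low-quality task blocks exactly its downstream dependents.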

    Worker Retention, Response Quality, and Diversity in Microtask Crowdsourcing: An Experimental Investigation of the Potential for Priming Effects to Promote Project Goals

    Online microtask crowdsourcing platforms act as efficient resources for delegating small units of work, gathering data, generating ideas, and more. Members of the research and business communities have incorporated crowdsourcing into problem-solving processes. When human workers contribute to a crowdsourcing task, they are subject to various stimuli as a result of task design. Inter-task priming effects, through which work is nonconsciously yet significantly influenced by exposure to certain stimuli, have been shown to affect microtask crowdsourcing responses in a variety of ways. Instead of simply being wary of the potential for priming effects to skew results, task administrators can utilize proven priming procedures to promote project goals. In a series of three experiments conducted on Amazon’s Mechanical Turk, we investigated the effects of proposed priming treatments on worker retention, response quality, and response diversity. In our first two experiments, we studied the effect of initial response freedom on sustained worker participation and response quality. We expected that workers granted greater freedom in an initial response would be stimulated to complete more work and deliver higher-quality work than workers constrained in their initial response possibilities. We found no significant relationship between the initial response freedom granted to workers and the amount of optional work they completed. The degree of initial response freedom also did not have a significant impact on subsequent response quality. However, the influence of inter-task effects was evident in response tendencies for different question types. We found evidence that consistency in task structure may play a stronger role in promoting response quality than the proposed priming procedures. In our final experiment, we studied the influence of a group-level priming treatment on response diversity. Instead of varying task structure for different workers, we varied the degree of overlap in the question content distributed to different workers in a group. We expected groups of workers exposed to more diverse preliminary question sets to offer greater diversity in response to a subsequent question. Although differences in response diversity were observed, no consistent trend between question content overlap and response diversity emerged. Nevertheless, combining consistent task structure with crowd-level priming procedures, to encourage diversity in inter-task effects across the crowd, offers an exciting path for future study.

    Comparing Strategies for Winning Expert-rated and Crowd-rated Crowdsourcing Contests: First Findings

    Many studies have examined expert-rated crowdsourcing contests, but few have examined crowd-rated contests, in which winners are determined by the votes of the crowd. Because of the different rating mechanisms, the determinants of winning may differ between the two types of contests. Based on previous studies, we identify three types of winning determinants: expertise, submission timing, and social capital. Our initial investigation, based on 91 entries from two contests on Zooppa, supports the view that these variables play different roles in winning crowd-rated contests than in winning expert-rated contests. Specifically, past winning experience in crowd-rated contests predicts future success in crowd-rated contests, while past winning experience in expert-rated contests predicts future success in expert-rated contests. We discover a U-shaped relationship between submission time and winning in both types of contests. Social capital elevates the probability of winning a crowd-rated contest only if it is sufficiently high.
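
    A U-shaped timing effect of the kind reported here is typically probed by adding a quadratic term in the timing variable. The snippet below is a minimal sketch on synthetic data (the coefficients and noise level are assumptions, not the study's estimates): fit a quadratic, then check that it is convex with an interior minimum.

```python
# Probing a U-shaped relationship between submission timing and
# winning by fitting a quadratic. Synthetic, illustrative data only;
# not the study's actual estimation.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 500)            # normalized submission time
# Synthetic win propensity: highest for very early and very late entries.
win = 0.6 - 1.6 * t + 1.6 * t**2 + rng.normal(0.0, 0.02, 500)

c2, c1, c0 = np.polyfit(t, win, 2)        # fit win = c2*t^2 + c1*t + c0
vertex = -c1 / (2 * c2)                   # timing with the lowest propensity
u_shaped = c2 > 0 and 0 < vertex < 1      # convex, minimum inside the range
```

    With binary win/lose outcomes one would normally use a logistic model with the same quadratic term, but the convexity-plus-interior-vertex check is identical.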

    Understanding the Role of Bounty Awards in Improving Content Contribution: Bounty Amount and Temporal Scarcity

    The bounty award system has been implemented on UGC platforms to address specific issues and improve content contributions. This study aims to assess its effectiveness by examining the bounty amount and temporal scarcity. Based on the optimistic bias theory, we posit that the competition for bounty awards among users can have a positive effect, as users may overestimate their chances of winning and persist in their efforts. Additionally, we hypothesize that the amount of bounty award does not have a linear effect on the quantity and quality of user-generated content, but instead follows an inverted U-shaped relationship. Furthermore, drawing on the stuck-in-the-middle (STIM) effect, we hypothesize that temporal scarcity influences contributors’ effort allocation in a U-shaped relationship. By exploring these hypotheses, we aim to advance the understanding of the underlying mechanisms of bounty awards and contribute to the development of effective peer incentive strategies.

    Understanding and Leveraging Crowd Development in Crowdsourcing

    Although many examples have demonstrated the great potential of a human crowd as an alternative supplier in creative problem-solving, empirical evidence shows that the performance of a crowd varies greatly even under similar conditions. This phenomenon is defined as the performance variation puzzle in crowdsourcing. Cases suggest that crowd development influences crowd performance, but little research in the crowdsourcing literature has examined crowd development. This dissertation studies how crowd development affects crowd performance in crowdsourcing. It first develops a double-funnel framework for crowd development. Based on structural thinking and four examples of crowd development, this conceptual framework elaborates the different steps of crowd development in crowdsourcing. In doing so, the dissertation partitions the crowd-development process into two sub-processes that map onto two empirical studies. The first study examines the relationships between elements of event design and crowd emergence and the mechanisms underlying these relationships. Taking a strong-inference approach, it tests whether tournament theory is more applicable than diffusion theory in explaining these relationships. Results show that neither diffusion theory nor tournament theory fully explains them. The dissertation therefore proposes a contatition (i.e., contagious competition) perspective that incorporates elements of both theories to fully understand crowd emergence in crowdsourcing. The second empirical study draws on the innovation search literature and tournament theory to address the performance variation puzzle by analyzing crowd attributes. Results show that neither the innovation search perspective nor tournament theory fully explains the relationships between crowd attributes and crowd performance. Based on these findings, the dissertation identifies a competition-search mechanism beneath the variation of crowd performance in crowdsourcing. This dissertation makes several significant contributions: it maps out an emergent process for the first time in the supply chain literature, uncovers the mechanisms underlying the performance implications of a crowd-development process, and answers a research call on crowd engagement and utilization. Managerial implications for crowd management are also discussed.

    Learning from Winners: A Strategic Perspective of Improving Freelancers’ Bidding Competitiveness in Crowdsourcing

    The rapid growth of crowdsourcing grants freelancers unprecedented opportunities to monetize their expertise by bidding on specific tasks. While it lowers freelancers’ participation costs, the bidding mechanism also induces intense competition, making it difficult for freelancers to submit competitive bids. Although previous research has disentangled several bidding strategies, scant attention has been paid to whether and how freelancers should learn to adjust their bidding strategies and improve their bidding competitiveness over the course of participating in multiple tasks. To fill this gap, we adapt a set of bidding strategies from the auction literature to the crowdsourcing context. Through the lens of vicarious learning, we propose that freelancers’ learning of bidding strategies from winners enhances their bidding competitiveness, moderated by task complexity. Our preliminary results suggest a significant relationship between strategic learning and bidding competitiveness, along with a moderating effect of task complexity. Expected contributions and future research plans are discussed.
