
    Optimal Crowdsourcing Contests

    We study the design and approximation of optimal crowdsourcing contests. Crowdsourcing contests can be modeled as all-pay auctions because entrants must exert effort up-front to enter. Unlike all-pay auctions, where a usual design objective would be to maximize revenue, in crowdsourcing contests the principal only benefits from the submission with the highest quality. We give a theory for optimal crowdsourcing contests that mirrors the theory of optimal auction design: the optimal crowdsourcing contest is a virtual valuation optimizer (the virtual valuation function depends on the distribution of contestant skills and the number of contestants). We also compare crowdsourcing contests with more conventional means of procurement. In this comparison, crowdsourcing contests are relatively disadvantaged because the effort of losing contestants is wasted. Nonetheless, we show that crowdsourcing contests are 2-approximations to conventional methods for a large family of "regular" distributions, and 4-approximations otherwise. Comment: The paper has 17 pages and 1 figure. It is to appear in the proceedings of the ACM-SIAM Symposium on Discrete Algorithms 201
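    To make the virtual-valuation idea concrete, the sketch below computes the classical Myerson virtual value phi(v) = v - (1 - F(v)) / f(v) for a uniform skill distribution on [0, 1]. This is only a hedged illustration of the underlying auction-theoretic notion: the paper's virtual valuation for contests additionally depends on the number of contestants, and the distribution here is an assumed example, not one from the paper.

        # Illustrative sketch: Myerson-style virtual values for an assumed
        # Uniform[0, 1] skill distribution. The paper's contest-specific
        # virtual valuation also depends on the number of contestants.

        def virtual_value(v, F, f):
            """phi(v) = v - (1 - F(v)) / f(v)."""
            return v - (1.0 - F(v)) / f(v)

        # Uniform[0, 1]: F(v) = v, f(v) = 1, so phi(v) = 2v - 1 (a "regular",
        # i.e. increasing, virtual value).
        F = lambda v: v
        f = lambda v: 1.0

        for v in [0.25, 0.5, 0.75, 1.0]:
            print(f"v = {v:.2f}  ->  phi(v) = {virtual_value(v, F, f):+.2f}")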

    The Allocation of Prizes in Crowdsourcing Contests

    A unique characteristic of crowdsourcing contests is the coexistence of multiple contests, where each individual contestant strategically chooses the contest that maximizes his or her expected gain. The competition between contests for contestants significantly changes the optimal allocation of prizes for contest organizers. We show that contestants with higher ability prefer single-prize contests while those with lower ability prefer multiple-prize contests, which means the single-prize contest is no longer the optimal choice for organizers, as it was in the context of a single contest. We demonstrate that organizers may allocate multiple prizes whether they intend to maximize total effort or the highest effort, and we present the condition under which the multiple-prize approach is optimal.

    Submitting tentative solutions for platform feedback in crowdsourcing contests: breaking network closure with boundary spanning for team performance

    Purpose: To obtain optimal deliverables, more and more crowdsourcing platforms allow contest teams to submit tentative solutions and update scores/rankings on public leaderboards. Such feedback-seeking behavior for progress benchmarking pertains to the team representation activity of boundary spanning. The literature on virtual team performance primarily focuses on team characteristics, among which network closure is generally considered a positive factor. This study further examines how boundary spanning helps mitigate the negative impact of network closure.
    Design/methodology/approach: This study collected data on 9,793 teams in 246 contests from Kaggle.com. Negative binomial regression modeling and linear regression modeling are employed to investigate the relationships among network closure, boundary spanning, and team performance in crowdsourcing contests.
    Findings: Whereas network closure turns out to be a negative asset for virtual teams seeking platform feedback, boundary spanning mitigates its impact on team performance. On top of this partial mediation, boundary spanning experience and previous contest performance serve as potential moderators.
    Practical implications: The findings offer helpful implications for researchers and practitioners on how to break network closure and encourage boundary spanning through the establishment of facilitating structures in crowdsourcing contests.
    Originality/value: The study advances the understanding of the theoretical relationships among network closure, boundary spanning, and team performance in crowdsourcing contests.
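    For readers unfamiliar with the estimation approach, the sketch below shows what a negative binomial regression of a count-valued performance measure on network closure and boundary spanning could look like in Python. The variable names (closure, boundary_spanning, performance) and the synthetic data are assumptions for illustration only, not the study's actual Kaggle dataset or model specification.

        # Illustrative sketch with synthetic data and hypothetical variable
        # names; not the study's actual Kaggle data or specification.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "closure": rng.uniform(0, 1, n),          # proxy for network closure
            "boundary_spanning": rng.poisson(3, n),   # e.g. tentative submissions
        })
        # Synthetic count outcome standing in for a performance-related measure.
        rate = np.exp(0.5 - 0.8 * df["closure"] + 0.15 * df["boundary_spanning"])
        df["performance"] = rng.poisson(rate)

        X = sm.add_constant(df[["closure", "boundary_spanning"]])
        model = sm.GLM(df["performance"], X,
                       family=sm.families.NegativeBinomial(alpha=1.0))
        print(model.fit().summary())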

    A data-driven game theoretic strategy for developers in software crowdsourcing: a case study

    Crowdsourcing has the advantages of being cost-effective and saving time; it is a typical embodiment of collective wisdom and collaborative development by community workers. However, this development paradigm of software crowdsourcing has not been used widely. One important reason is that requesters have limited knowledge about crowd workers' professional skills and qualities. Another is that crowd workers in a competition may not receive an appropriate reward, which affects their motivation. To solve this problem, this paper proposes a method for maximizing reward based on workers' crowdsourcing ability, so that workers can choose tasks according to their own abilities and obtain appropriate bonuses. Our method includes two steps. First, it puts forward a method to evaluate crowd workers' ability and then analyzes the intensity of competition for tasks at Topcoder.com, an open community crowdsourcing platform, on the basis of the workers' crowdsourcing ability. Second, it follows dynamic-programming ideas and builds game models under complete information in different cases, offering a reward-maximization strategy for workers by solving for a mixed-strategy Nash equilibrium. This paper employs crowdsourcing data from Topcoder.com to carry out experiments. The experimental results show that the distribution of workers' crowdsourcing ability is uneven and, to some extent, reflects the activity level of crowdsourcing tasks. Meanwhile, following the reward-maximization strategy, a crowd worker can obtain the theoretically maximum reward.
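    To make the game-theoretic step concrete, here is a small, self-contained sketch that computes a mixed-strategy Nash equilibrium for a stylized 2x2 "which task to enter" game between two workers using the standard indifference conditions. The payoff numbers are hypothetical, and the paper's actual models, built on Topcoder.com data and workers' measured abilities, are richer than this toy example.

        # Illustrative sketch: mixed-strategy Nash equilibrium of a stylized
        # 2x2 task-selection game via indifference conditions. Payoffs are
        # hypothetical, not derived from Topcoder.com data.
        import numpy as np

        # Row player's payoffs A[i, j] and column player's payoffs B[i, j]
        # when row enters task i and column enters task j (0 = task X, 1 = task Y).
        A = np.array([[2.0, 5.0],
                      [4.0, 1.0]])
        B = np.array([[2.0, 4.0],
                      [5.0, 1.0]])

        # Column mixes (q, 1-q) so the row player is indifferent between rows:
        #   q*A[0,0] + (1-q)*A[0,1] = q*A[1,0] + (1-q)*A[1,1]
        q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
        # Row mixes (p, 1-p) so the column player is indifferent between columns:
        #   p*B[0,0] + (1-p)*B[1,0] = p*B[0,1] + (1-p)*B[1,1]
        p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])

        print(f"row enters task X with prob {p:.3f}, column with prob {q:.3f}")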

    Behavioral Mechanism Design: Optimal Contests for Simple Agents

    Incentives are more likely to elicit desired outcomes when they are designed based on accurate models of agents' strategic behavior. A growing literature, however, suggests that people do not quite behave like standard economic agents in a variety of environments, both online and offline. What consequences might such differences have for the optimal design of mechanisms in these environments? In this paper, we explore this question in the context of optimal contest design for simple agents---agents who strategically reason about whether or not to participate in a system, but not about the input they provide to it. Specifically, consider a contest where $n$ potential contestants with types $(q_i, c_i)$ each choose between participating and producing a submission of quality $q_i$ at cost $c_i$, versus not participating at all, to maximize their utilities. How should a principal distribute a total prize $V$ amongst the $n$ ranks to maximize some increasing function of the qualities of elicited submissions in a contest with such simple agents? We first solve the optimal contest design problem for settings with homogeneous participation costs $c_i = c$. Here, the optimal contest is always a simple contest, awarding equal prizes to the top $j^*$ contestants for a suitable choice of $j^*$. (In comparable models with strategic effort choices, the optimal contest is either a winner-take-all contest or awards possibly unequal prizes, depending on the curvature of agents' effort cost functions.) We next address the general case with heterogeneous costs, where agents' types are inherently two-dimensional, significantly complicating equilibrium analysis. Our main result here is that the winner-take-all contest is a 3-approximation of the optimal contest when the principal's objective is to maximize the quality of the best elicited contribution. Comment: This is the full version of a paper in the ACM Conference on Economics and Computation (ACM-EC), 201
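    As a rough illustration of the participate-or-not decision these simple agents face, the toy sketch below brute-forces pure-strategy equilibria of a complete-information version of the contest: agents have a quality and a participation cost, prizes are awarded by rank (here, equal prizes to the top two ranks), and the principal cares about the best elicited quality. The qualities, costs, and prize vector are made-up numbers, and the paper itself analyzes an incomplete-information setting, so this is only an assumed, simplified variant.

        # Illustrative toy sketch: brute-force pure-strategy equilibria of a
        # complete-information simple-agents contest with hypothetical numbers.
        from itertools import product

        qualities = [0.9, 0.7, 0.5, 0.3]   # q_i: quality if agent i participates
        costs     = [0.2, 0.2, 0.2, 0.2]   # c_i: homogeneous participation cost
        prizes    = [0.5, 0.5, 0.0, 0.0]   # equal prizes to the top 2 ranks (total V = 1)

        def utility(i, profile):
            """Agent i's utility under a participation profile (tuple of 0/1)."""
            if not profile[i]:
                return 0.0
            entrants = [j for j in range(len(profile)) if profile[j]]
            entrants.sort(key=lambda j: qualities[j], reverse=True)  # rank by quality
            prize = prizes[entrants.index(i)]
            return prize - costs[i]

        def is_equilibrium(profile):
            """No agent gains by flipping its own participation decision."""
            for i in range(len(profile)):
                flipped = list(profile)
                flipped[i] = 1 - flipped[i]
                if utility(i, tuple(flipped)) > utility(i, profile) + 1e-12:
                    return False
            return True

        for profile in product([0, 1], repeat=len(qualities)):
            if is_equilibrium(profile):
                best = max((qualities[i] for i in range(len(profile)) if profile[i]),
                           default=0.0)
                print(profile, "best elicited quality:", best)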

    Tuning the Diversity of Open-Ended Responses from the Crowd

    Crowdsourcing can solve problems that current fully automated systems cannot. Its effectiveness depends on the reliability, accuracy, and speed of the crowd workers that drive it. These objectives are frequently at odds with one another. For instance, how much time should workers be given to discover and propose new solutions versus deliberate over those currently proposed? How do we determine if discovering a new answer is appropriate at all? And how do we manage workers who lack the expertise or attention needed to provide useful input to a given task? We present a mechanism that uses distinct payoffs for three possible worker actions---propose, vote, or abstain---to provide workers with the necessary incentives to guarantee an effective (or even optimal) balance between searching for new answers, assessing those currently available, and, when they have insufficient expertise or insight for the task at hand, abstaining. We provide a novel game-theoretic analysis for this mechanism, test it experimentally on an image-labeling problem, and show that it allows a system to reliably control the balance between discovering new answers and converging to existing ones.
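    As a rough illustration of how distinct payoffs can steer workers among the three actions, the sketch below computes a worker's expected payoff for propose, vote, and abstain as a function of the probability p that the worker's own answer would be correct. The payoff constants and the simple correctness model are hypothetical assumptions, not the parameters of the mechanism analyzed in the paper.

        # Illustrative sketch: expected payoff of propose / vote / abstain for
        # a worker whose answer is correct with probability p. The payoff
        # constants and correctness model are hypothetical.
        R_PROPOSE_GOOD = 6.0    # reward if a proposed answer turns out correct
        R_PROPOSE_BAD  = -5.0   # penalty for proposing an incorrect answer
        R_VOTE_GOOD    = 2.0    # reward for voting for a correct existing answer
        R_VOTE_BAD     = -0.5   # penalty for voting for an incorrect one
        R_ABSTAIN      = 0.5    # small guaranteed payoff for abstaining

        def expected_payoffs(p):
            """Expected payoff of each action when the worker is correct w.p. p."""
            return {
                "propose": p * R_PROPOSE_GOOD + (1 - p) * R_PROPOSE_BAD,
                "vote":    p * R_VOTE_GOOD    + (1 - p) * R_VOTE_BAD,
                "abstain": R_ABSTAIN,
            }

        # With these constants, low-p workers abstain, mid-p workers vote,
        # and high-p workers propose.
        for p in [0.2, 0.45, 0.7, 0.9]:
            payoffs = expected_payoffs(p)
            best = max(payoffs, key=payoffs.get)
            print(f"p = {p:.2f}: best action = {best:8s} {payoffs}")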