7 research outputs found

    Behavioral Mechanism Design: Optimal Contests for Simple Agents

    Incentives are more likely to elicit desired outcomes when they are designed based on accurate models of agents' strategic behavior. A growing literature, however, suggests that people do not quite behave like standard economic agents in a variety of environments, both online and offline. What consequences might such differences have for the optimal design of mechanisms in these environments? In this paper, we explore this question in the context of optimal contest design for simple agents---agents who strategically reason about whether or not to participate in a system, but not about the input they provide to it. Specifically, consider a contest where n potential contestants with types (q_i, c_i) each choose between participating and producing a submission of quality q_i at cost c_i, versus not participating at all, to maximize their utilities. How should a principal distribute a total prize V amongst the n ranks to maximize some increasing function of the qualities of elicited submissions in a contest with such simple agents? We first solve the optimal contest design problem for settings with homogeneous participation costs c_i = c. Here, the optimal contest is always a simple contest, awarding equal prizes to the top j^* contestants for a suitable choice of j^*. (In comparable models with strategic effort choices, the optimal contest is either a winner-take-all contest or awards possibly unequal prizes, depending on the curvature of agents' effort cost functions.) We next address the general case with heterogeneous costs, where agents' types are inherently two-dimensional, significantly complicating equilibrium analysis. Our main result here is that the winner-take-all contest is a 3-approximation of the optimal contest when the principal's objective is to maximize the quality of the best elicited contribution.
    Comment: This is the full version of a paper in the ACM Conference on Economics and Computation (ACM-EC), 201
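The homogeneous-cost result admits a simple illustration. The sketch below is a toy complete-information version, not the paper's Bayesian model: with equal prizes V/j to the top j participants and common cost c, only values of j with V/j ≥ c sustain participation, and the principal picks the j maximizing total elicited quality. The function name and the sum-of-qualities objective are assumptions for illustration.

```python
def optimal_simple_contest(qualities, V, c):
    """Toy complete-information version of choosing j^*: award equal
    prizes V/j to the top j entrants and pick the j maximizing total
    elicited quality. (Illustrative only; the paper analyzes a
    Bayesian model with privately known types.)"""
    qs = sorted(qualities, reverse=True)
    best_j, best_val = 0, 0.0
    for j in range(1, len(qs) + 1):
        if V / j < c:  # per-rank prize no longer covers the participation cost
            break
        val = sum(qs[:j])  # principal's objective: total quality elicited
        if val > best_val:
            best_j, best_val = j, val
    return best_j, best_val
```

For instance, with qualities [0.9, 0.7, 0.5, 0.2], V = 1, and c = 0.3, all of j = 1, 2, 3 are feasible, and j^* = 3 maximizes the total quality elicited.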

    Decentralized Attack Search and the Design of Bug Bounty Schemes

    Systems and blockchains often have security vulnerabilities and can be attacked by adversaries, with potentially significant negative consequences. Therefore, infrastructure providers increasingly rely on bug bounty programs, where external individuals probe the system and report any vulnerabilities (bugs) in exchange for rewards (bounty). We develop a simple contest model of bug bounty. A group of individuals of arbitrary size is invited to undertake a costly search for bugs. The individuals differ with regard to their abilities, which we capture by different costs to achieve a certain probability of finding bugs if any exist. Costs are private information. We study equilibria of the contest and characterize the optimal design of bug bounty schemes. In particular, the designer can vary the size of the group of individuals invited to search, add a paid expert, insert an artificial bug with some probability, and pay multiple prizes.
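A rough sketch of one trade-off the designer faces: the toy model below assumes, purely for illustration (this payoff function is not from the paper), that k invited searchers each independently find an existing bug with probability p. Inviting more searchers raises the detection probability 1 - (1 - p)^k, but also the probability of paying out the bounty.

```python
def detection_probability(k, p):
    """Probability that at least one of k independent searchers,
    each succeeding with probability p, finds the bug."""
    return 1.0 - (1.0 - p) ** k

def designer_payoff(k, p, loss_avoided, bounty):
    """Illustrative designer payoff: expected loss avoided by a
    detected bug, minus the bounty paid out when it is found.
    (Assumed functional form; the paper works with heterogeneous
    private search costs and equilibrium participation.)"""
    q = detection_probability(k, p)
    return q * (loss_avoided - bounty)
```

Under these assumptions the payoff is increasing in k whenever the avoided loss exceeds the bounty; the paper's richer model, with costly search and private abilities, is what makes the group size, the paid expert, and the artificial bug genuine design levers.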

    Competition among Parallel Contests

    We investigate the model of multiple contests held in parallel, where each contestant selects one contest to join and each contest designer decides the prize structure to compete for the participation of contestants. We first analyze the strategic behaviors of contestants and completely characterize the symmetric Bayesian Nash equilibrium. As for the strategies of contest designers, when other designers' strategies are known, we show that computing the best response is NP-hard and propose a fully polynomial time approximation scheme (FPTAS) to output an ε-approximate best response. When other designers' strategies are unknown, we provide a worst-case analysis of one designer's strategy. We give an upper bound on the utility of any strategy and propose a method to construct a strategy whose utility can guarantee a constant ratio of this upper bound in the worst case.
    Comment: Accepted by the 18th Conference on Web and Internet Economics (WINE 2022)

    Elicitation and Aggregation of Crowd Information

    This thesis addresses challenges in elicitation and aggregation of crowd information for settings where an information collector, called center, has a limited knowledge about information providers, called agents. Each agent is assumed to have noisy private information that brings a high information gain to the center when it is aggregated with the private information of other agents. We address two particular issues in eliciting crowd information: 1) how to incentivize agents to participate and provide accurate data; 2) how to aggregate crowd information so that the negative impact of agents who provide low quality information is bounded. We examine three different information elicitation settings.

    In the first elicitation setting, agents report their observations regarding a single phenomenon that represents an abstraction of a crowdsourcing task. The center itself does not observe the phenomenon, so it rewards agents by comparing their reports. Clearly, a rational agent bases her reporting strategy on what she believes about other agents, called peers. We prove that, in general, no payment mechanism can achieve strict properness (i.e., adopt truthful reporting as a strict equilibrium strategy) if agents only report their observations, even if they share a common belief system. This motivates the use of payment mechanisms that are based on an additional report. We show that a general payment mechanism cannot have a simple structure, often adopted by prior work, and that in the limit case, when observations can take real values, agents are constrained to share a common belief system. Furthermore, we develop several payment mechanisms for the elicitation of non-binary observations.

    In the second elicitation setting, a group of agents observes multiple a priori similar phenomena. Due to the a priori similarity condition, the setting represents a refinement of the former setting and enables one to achieve stronger incentive properties without requiring additional reports or constraining agents to share a common belief system. We extend the existing mechanisms to allow non-binary observations by constructing strongly truthful mechanisms (i.e., mechanisms in which truthful reporting is the highest-paying equilibrium) for different types of agents' population.

    In the third elicitation setting, agents observe a time evolving phenomenon, and a few of them, whose identity is known, are trusted to report truthful observations. The existence of trusted agents makes this setting much more stringent than the previous ones. We show that, in the context of online information aggregation, one can not only incentivize agents to provide informative reports, but also limit the effectiveness of malicious agents who deliberately misreport. To do so, we construct a reputation system that puts a bound on the negative impact that any misreporting strategy can have on the learned aggregate. Finally, we experimentally verify the effectiveness of novel elicitation mechanisms in community sensing simulation testbeds and a peer grading experiment.
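The reputation-bounded aggregation idea in the third setting can be sketched with a toy scheme (an assumption for illustration, not the thesis's actual mechanism): aggregate reports as a reputation-weighted average, and multiplicatively shrink each agent's reputation by the distance of its report from a trusted agent's observation, so that a persistent misreporter's influence on the aggregate decays.

```python
import math

def aggregate(reports, reputations):
    """Reputation-weighted average of agents' reports."""
    total = sum(reputations[a] for a in reports)
    return sum(reputations[a] * r for a, r in reports.items()) / total

def update_reputations(reports, trusted_value, reputations, eta=1.0):
    """Toy update rule: shrink each agent's reputation
    multiplicatively in the distance between its report and the
    trusted observation (illustrative only)."""
    for a, r in reports.items():
        reputations[a] *= math.exp(-eta * abs(r - trusted_value))
    return reputations
```

After one update against a trusted value of 0.5, an agent reporting 5.0 loses most of its weight, so the weighted aggregate stays close to the honest reports; a formal bound on the damage any misreporting strategy can cause is what the thesis's reputation system provides.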

    Optimal contest design for simple agents

    No full text