24 research outputs found

    Efficiently Characterizing Games Consistent with Perturbed Equilibrium Observations

    In this thesis, we study the problem of characterizing the set of games that are consistent with observed equilibrium play, a fundamental problem in econometrics. Our contribution is to develop and analyze a new methodology based on convex optimization to address this problem, for many classes of games and observation models of interest. Our approach provides a sharp, computationally efficient characterization of the extent to which a particular set of observations constrains the space of games that could have generated them. This allows us to solve a number of variants of this problem, as well as to quantify the power of games from particular classes (e.g., zero-sum, potential, linearly parameterized) to explain player behavior. We illustrate our approach with numerical simulations.
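    A minimal sketch of one such convex program (my construction for illustration; the thesis develops this far more generally): for a two-player game with an observed mixed equilibrium (x, y), the row player's best-response conditions are linear in the unknown payoff matrix, so the set of consistent payoff matrices is a polytope, and a linear program can report how much the observations pin down any single entry. The bound of the payoff entries to [-1, 1] is an assumption of the sketch.

```python
# Hypothetical sketch (my construction, not the thesis's method): for an
# observed mixed equilibrium (x, y), the row player's best-response
# conditions on the unknown payoff matrix A are
#   (e_i - x)^T A y  = 0   for every i in the support of x,
#   (e_i - x)^T A y <= 0   for every i outside the support,
# all linear in the entries of A. The consistent matrices form a
# polytope, so an LP can bound any entry over that set.
import numpy as np
from scipy.optimize import linprog

def max_consistent_entry(x, y, i=0, j=0, lb=-1.0, ub=1.0):
    m, n = len(x), len(y)

    def br_row(i_dev):
        # Coefficients of (e_{i_dev} - x)^T A y over A flattened row-major.
        c = np.zeros(m * n)
        for k in range(m):
            c[k * n:(k + 1) * n] = ((k == i_dev) - x[k]) * y
        return c

    in_support = x > 1e-9
    eq = [br_row(k) for k in range(m) if in_support[k]]
    le = [br_row(k) for k in range(m) if not in_support[k]]

    obj = np.zeros(m * n)
    obj[i * n + j] = -1.0  # maximize A[i, j]  <=>  minimize -A[i, j]

    res = linprog(obj,
                  A_eq=np.array(eq) if eq else None,
                  b_eq=np.zeros(len(eq)) if eq else None,
                  A_ub=np.array(le) if le else None,
                  b_ub=np.zeros(len(le)) if le else None,
                  bounds=[(lb, ub)] * (m * n))
    assert res.success
    return -res.fun  # largest A[i, j] consistent with the observation

# Uniform play in a 2x2 game barely pins down individual entries:
print(max_consistent_entry(np.array([0.5, 0.5]), np.array([0.5, 0.5])))
```

    Restricting to classes such as zero-sum or linearly parameterized games only adds further linear constraints on the payoff entries, which is one reason convex optimization is a natural fit for this problem.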

    Access to Population-Level Signaling as a Source of Inequality

    We identify and explore differential access to population-level signaling (also known as information design) as a source of unequal access to opportunity. A population-level signaler has potentially noisy observations of a binary type for each member of a population and, based on this, produces a signal about each member. A decision-maker infers types from signals and accepts those individuals whose type is high in expectation. We assume the signaler of the disadvantaged population reveals her observations to the decision-maker, whereas the signaler of the advantaged population forms signals strategically. We study the expected utility of the populations as measured by the fraction of accepted members, as well as the false positive rates (FPR) and false negative rates (FNR). We first show the intuitive results that, for a fixed environment, the advantaged population has higher expected utility, higher FPR, and lower FNR than the disadvantaged one (despite having identical population quality), and that more accurate observations improve the expected utility of the advantaged population while harming that of the disadvantaged one. We next explore the introduction of a publicly observable signal, such as a test score, as a potential intervention. Our main finding is that this natural intervention, intended to reduce the inequality between the populations' utilities, may actually exacerbate it in settings where observations and test scores are noisy.
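    A small numeric sketch of the asymmetry (the parametrization is mine, not the paper's): assume binary types with prior p, observations that match the type with probability q, and a decision-maker who accepts whenever the posterior probability of a high type is at least 1/2. The disadvantaged signaler forwards the raw observation; the advantaged signaler pools just enough low observations with the high ones that the posterior on an "accept" recommendation sits exactly at the threshold.

```python
# Hypothetical parametrization (mine, not the paper's): type t in {0, 1}
# with prior p = P(t = 1); the observation equals t with probability q;
# the decision-maker accepts iff P(t = 1 | information) >= 1/2.

def posteriors(p, q):
    po1 = p * q + (1 - p) * (1 - q)       # P(obs = 1)
    mu1 = p * q / po1                     # P(t = 1 | obs = 1)
    mu0 = p * (1 - q) / (1 - po1)         # P(t = 1 | obs = 0)
    return po1, mu1, mu0

def outcomes(p, q, strategic):
    """Return (utility, FPR, FNR); assumes mu1 >= 1/2 > mu0."""
    po1, mu1, mu0 = posteriors(p, q)
    if strategic:
        # Pool a fraction z of low observations into 'accept' so the
        # posterior conditioned on 'accept' lands exactly on 1/2.
        z = min(1.0, po1 * (mu1 - 0.5) / ((1 - po1) * (0.5 - mu0)))
    else:
        z = 0.0                           # reveal: only obs = 1 accepted
    utility = po1 + z * (1 - po1)         # fraction of members accepted
    fpr = (1 - q) + z * q                 # P(accept | t = 0)
    fnr = (1 - z) * (1 - q)               # P(reject | t = 1)
    return utility, fpr, fnr

p, q = 0.3, 0.8
print("disadvantaged:", outcomes(p, q, strategic=False))
print("advantaged:   ", outcomes(p, q, strategic=True))
```

    With p = 0.3 and q = 0.8, the advantaged population's acceptance rate is roughly 0.50 against 0.38 for the disadvantaged one, with higher FPR and lower FNR, matching the qualitative comparison above.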

    Data: Implications for Markets and for Society

    Every day, massive amounts of data are gathered, exchanged, and used to run statistical computations, train machine learning algorithms, and inform decisions about individuals and populations. The rapid rise of data and the need to exchange and process it, to take data privacy concerns into account, and to understand how it affects decision-making introduce many new and interesting economic, game-theoretic, and algorithmic challenges. The goal of this thesis is to provide theoretical foundations for approaching these challenges. The first part of this thesis focuses on the design of mechanisms that purchase and then aggregate data from many sources in order to perform statistical tasks. The second part revolves around the societal concerns associated with the use of individuals' data. The first such concern we examine is privacy when using sensitive data about individuals in statistical computations; we focus on how privacy constraints interact with the task of designing mechanisms for the acquisition and aggregation of sensitive data. The second concern is fairness in decision-making: we aim to provide tools that help society prevent discrimination against individuals and populations based on sensitive attributes in their data when making important decisions about them. Finally, we end this thesis with a study of the interactions between data and strategic behavior. There, we see data as a source of information that informs and affects agents' incentives; we study how information revelation impacts agent behavior in auctions, and in turn how a seller should design auctions that take such information revelation into account.

    Downstream Effects of Affirmative Action

    We study a two-stage model in which students are 1) admitted to college on the basis of an entrance exam, which is a noisy signal about their qualifications (type), and then 2) those students who were admitted to college can be hired by an employer as a function of their college grades, which are an independently drawn noisy signal of their type. Students are drawn from one of two populations, which might have different type distributions. We assume that the employer at the end of the pipeline is rational, in the sense that it computes a posterior distribution on student type conditional on all information available to it (college admissions, grades, and group membership), and makes its decision based on the posterior expectation. We then study what kinds of fairness goals the college can achieve by setting its admissions rule and grading policy. For example, the college might have the goal of guaranteeing equal opportunity across populations: the probability of passing through the pipeline and being hired by the employer should be independent of group membership, conditioned on type. Alternatively, the college might have the goal of incentivizing the employer to adopt a group-blind hiring rule. We show that both goals can be achieved when the college does not report grades. On the other hand, we show that under reasonable conditions, these goals are impossible to achieve even in isolation when the college uses an (even minimally) informative grading policy.
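    A Monte Carlo sketch of one instantiation of this pipeline (all parameters and distributions are illustrative assumptions, not the paper's): binary types, Gaussian exam and grade noise, admission by thresholding the exam at theta, and a rational employer that computes the exact posterior and hires when it is at least 1/2. Comparing P(hired | type) across groups is the equal-opportunity check described above.

```python
# Hypothetical instantiation (all parameters are illustrative
# assumptions): type t in {0, 1}; exam = t + noise; admit iff
# exam >= theta; grade = t + noise; the employer computes the exact
# posterior P(t = 1 | admitted, grade, group) and hires iff >= 1/2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def hire_rates(prior, theta, s_exam=1.0, s_grade=1.0, n=200_000):
    """Estimate P(hired | t = 1) and P(hired | t = 0) for one group."""
    t = (rng.random(n) < prior).astype(float)
    admitted = t + s_exam * rng.standard_normal(n) >= theta
    grade = t + s_grade * rng.standard_normal(n)

    # Rational employer: Bayes over the binary type, conditioning on the
    # admission event (a thresholded exam) and on the observed grade.
    like1 = (prior * norm.sf(theta - 1, scale=s_exam)
             * norm.pdf(grade - 1, scale=s_grade))
    like0 = ((1 - prior) * norm.sf(theta, scale=s_exam)
             * norm.pdf(grade, scale=s_grade))
    hired = admitted & (like1 / (like1 + like0) >= 0.5)
    return hired[t == 1].mean(), hired[t == 0].mean()

# Two groups with different type distributions, same admission rule;
# equal opportunity asks these per-type hiring rates to coincide.
print("group A (prior 0.5):", hire_rates(prior=0.5, theta=0.0))
print("group B (prior 0.3):", hire_rates(prior=0.3, theta=0.0))
```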

    Optimal Data Acquisition for Statistical Estimation

    We consider a data analyst's problem of purchasing data from strategic agents to compute an unbiased estimate of a statistic of interest. Agents incur private costs to reveal their data, and these costs can be arbitrarily correlated with the data. Once revealed, data are verifiable. This paper focuses on linear unbiased estimators. We design an individually rational and incentive-compatible mechanism that optimizes the worst-case mean-squared error of the estimate, where the worst case is over the unknown correlation between costs and data, subject to a budget constraint in expectation. We characterize the optimal mechanism in closed form. We further extend our results to acquiring data for estimating a parameter in regression analysis, where private costs can correlate with the values of the dependent variable but not with the values of the independent variables.
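    One standard device consistent with the linear unbiased estimators studied here is inverse-propensity weighting: if the mechanism buys agent i's data with a known probability that depends on the reported cost, dividing each acquired point by that probability keeps the estimate unbiased even though costs correlate with data. A hedged sketch with an allocation rule of my own choosing (not the paper's optimal mechanism):

```python
# Hypothetical sketch: costs arbitrarily correlated with data, an
# allocation rule A(c) that buys a report with probability decreasing
# in its cost, and the inverse-propensity-weighted mean. The rule A is
# my choice for illustration, not the paper's optimum.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

data = rng.normal(0.0, 1.0, n)          # true population mean is 0
cost = np.exp(data)                     # costs correlated with the data
alloc = np.minimum(1.0, 1.0 / cost)     # A(c): buy with prob min(1, 1/c)
bought = rng.random(n) < alloc

naive = data[bought].mean()             # biased: cheap data over-sampled
ipw = np.sum(data[bought] / alloc[bought]) / n   # unbiased for the mean

print(f"naive {naive:+.3f} | inverse-propensity {ipw:+.3f} | truth +0.000")
```

    A full mechanism pairs such an allocation rule with payments that make truthful cost reporting optimal; this sketch isolates only the statistical half.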

    Non-Exploitable Protocols for Repeated Cake Cutting

    We introduce the notion of exploitability in cut-and-choose protocols for repeated cake cutting. If a cut-and-choose protocol is repeated, the cutter can gain information about the chooser from her previous actions and exploit this information for her own gain, at the expense of the chooser. We define a generalization of cut-and-choose protocols, called forced-cut protocols, in which some cuts are made exogenously while others are made by the cutter, and we show that there exist non-exploitable forced-cut protocols that use a small number of cuts per day: when the cake has at least as many dimensions as days, we show a protocol that uses a single cut per day; when the cake is 1-dimensional, we show an adaptive non-exploitable protocol that uses 3 cuts per day, and a non-adaptive protocol that uses n cuts per day (where n is the number of days). In contrast, we show that no non-adaptive non-exploitable forced-cut protocol can use a constant number of cuts per day. Finally, we show that if the cake is at least 2-dimensional, there is a non-adaptive non-exploitable protocol that uses 3 cuts per day.
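    To make exploitation concrete, here is a toy example of my own construction (not one of the paper's protocols): in naively repeated cut-and-choose on a 1-dimensional cake, the chooser's daily pick reveals on which side of her indifference point the cut fell, so the cutter can bisection-search for that point and then claim well over half the cake by his own measure while the chooser's value is squeezed toward a bare half. The chooser's measure F(x) = x^2 and the cutter's length valuation are illustrative assumptions.

```python
# Toy illustration of exploitability in naively repeated cut-and-choose
# (my construction, not one of the paper's protocols). The chooser's
# private value of the piece [0, x] is F(x) = x**2; the cutter values
# pieces by length. Each day the chooser's pick reveals on which side of
# her indifference point m (where F(m) = 1/2) the cut fell, so the
# cutter can bisect toward m and then keep roughly max(m, 1 - m) a day.

def F(x):                      # chooser's value of the piece [0, x]
    return x * x

lo, hi = 0.0, 1.0              # invariant: F(lo) < 1/2 <= F(hi)
for _ in range(10):            # exploration: bisect on the daily choice
    cut = (lo + hi) / 2
    if F(cut) >= 0.5:
        hi = cut               # chooser took the left piece: m <= cut
    else:
        lo = cut               # chooser took the right piece: m > cut

# Exploitation: cut on the chooser's "short" side of m, so she takes a
# piece worth a bare 1/2 to her while the cutter keeps the longer side.
if (lo + hi) / 2 >= 0.5:
    cut = lo                   # chooser takes [cut, 1]; cutter keeps [0, cut]
    cutter_share, chooser_val = cut, 1 - F(cut)
else:
    cut = hi                   # chooser takes [0, cut]; cutter keeps [cut, 1]
    cutter_share, chooser_val = 1 - cut, F(cut)

print(f"learned m ~ {(lo + hi) / 2:.4f} (true m = {0.5 ** 0.5:.4f})")
print(f"cutter's daily share from now on: {cutter_share:.4f} (fair: 0.5)")
print(f"chooser's daily value from now on: {chooser_val:.4f}")
```

    In the paper's terms, this information leak is exactly what non-exploitable forced-cut protocols are designed to shut off.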