
    Simple versus optimal contracts

    We consider the classic principal-agent model of contract theory, in which a principal designs an outcome-dependent compensation scheme to incentivize an agent to take a costly and unobservable action. When all of the model parameters, including the full distribution over principal rewards resulting from each agent action, are known to the designer, an optimal contract can in principle be computed by linear programming. In addition to their demanding informational requirements, however, such optimal contracts are often complex and unintuitive, and do not resemble contracts used in practice. This paper examines contract theory through the theoretical computer science lens, with the goal of developing novel theory to explain and justify the prevalence of relatively simple contracts, such as linear (pure commission) contracts. First, we consider the case where the principal knows only the first moment of each action's reward distribution, and we prove that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments. Second, we study linear contracts from a worst-case approximation perspective, and prove several tight parameterized approximation bounds.
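    To make the comparison concrete, the sketch below, which uses a made-up two-action, two-outcome instance rather than anything from the paper, computes an optimal contract via the standard minimum-payment linear program (one LP per action, under limited liability) and evaluates a simple linear (pure-commission) contract for comparison.

```python
import numpy as np
from scipy.optimize import linprog

# Invented instance: 2 actions, 2 outcomes.
F = np.array([[0.8, 0.2],        # outcome distribution of action 0 (low effort)
              [0.3, 0.7]])       # outcome distribution of action 1 (high effort)
costs = np.array([0.0, 1.0])     # agent's cost of each action
rewards = np.array([0.0, 10.0])  # principal's reward for each outcome

def optimal_contract(F, costs, rewards):
    """For each action, solve the minimum expected payment LP that makes the
    action incentive compatible under limited liability, then keep the action
    giving the principal the highest expected utility."""
    n_actions, n_outcomes = F.shape
    best_utility, best_payments = -np.inf, None
    for i in range(n_actions):
        # Objective: minimize E[payment | action i].
        # IC constraints: E[t|i] - cost[i] >= E[t|k] - cost[k] for every k != i.
        A_ub = np.array([F[k] - F[i] for k in range(n_actions) if k != i])
        b_ub = np.array([costs[k] - costs[i] for k in range(n_actions) if k != i])
        res = linprog(F[i], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * n_outcomes, method="highs")
        if res.success:
            utility = F[i] @ (rewards - res.x)
            if utility > best_utility:
                best_utility, best_payments = utility, res.x
    return best_utility, best_payments

def linear_contract_utility(alpha):
    """Principal's utility from the pure-commission contract t_j = alpha * r_j,
    with the agent best-responding to it."""
    t = alpha * rewards
    a = int(np.argmax(F @ t - costs))
    return F[a] @ (rewards - t)

u_opt, t_opt = optimal_contract(F, costs, rewards)
u_lin = max(linear_contract_utility(a) for a in np.linspace(0.0, 1.0, 101))
print(f"optimal contract: expected utility {u_opt:.2f}, payments {t_opt}")
print(f"best linear contract on a grid: expected utility {u_lin:.2f}")
```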

    Delegating Data Collection in Decentralized Machine Learning

    Motivated by the emergence of decentralized machine learning ecosystems, we study the delegation of data collection. Taking the field of contract theory as our starting point, we design optimal and near-optimal contracts that deal with two fundamental machine learning challenges: lack of certainty in the assessment of model quality and lack of knowledge regarding the optimal performance of any model. We show that lack of certainty can be dealt with via simple linear contracts that achieve a 1-1/e fraction of the first-best utility, even if the principal has only a small test set. Furthermore, we give sufficient conditions on the size of the principal's test set that achieve a vanishing additive approximation to the optimal utility. To address the lack of a priori knowledge regarding the optimal performance, we give a convex program that can adaptively and efficiently compute the optimal contract.
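    As a rough illustration of the first point, the toy sketch below (the value, effort levels, costs, and test-set size are all invented, not the paper's model) shows why a linear contract that pays a fixed fraction of the value of measured test accuracy is insensitive to noise in the quality assessment: the agent's expected payment depends only on the expected accuracy of the delivered model.

```python
import numpy as np

V = 100.0                                           # principal's value per unit of accuracy
efforts = {"low": (0.6, 5.0), "high": (0.9, 15.0)}  # effort -> (true accuracy, cost)
alpha = 0.5                                         # commission rate of the linear contract
rng = np.random.default_rng(0)

def expected_agent_utility(accuracy, cost, n_test=50):
    """Measured accuracy is a noisy estimate from n_test samples, but its mean is
    the true accuracy, so the agent's expected payment is alpha * V * accuracy
    regardless of how small the test set is."""
    measured = rng.binomial(n_test, accuracy, size=10_000) / n_test
    return alpha * V * measured.mean() - cost

choice = max(efforts, key=lambda e: expected_agent_utility(*efforts[e]))
true_accuracy = efforts[choice][0]
print(f"agent chooses {choice!r} effort; principal's expected utility "
      f"is about {(1 - alpha) * V * true_accuracy:.1f}")
```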

    Incomplete Information VCG Contracts for Common Agency

    We study contract design for welfare maximization in the well-known “common agency” model introduced in 1986 by Bernheim and Whinston. This model combines the challenge of coordinating multiple principals with the fundamental challenge of contract design: that principals have incomplete information about the agent’s choice of action. Our goal is to design contracts that satisfy truthfulness of the principals, welfare maximization by the agent, and two fundamental properties: individual rationality (IR) for the principals and limited liability (LL) for the agent. Our results reveal an inherent impossibility: whereas for every common agency setting there exists a truthful and welfare-maximizing contract, which we refer to as an “incomplete information Vickrey–Clarke–Groves contract,” there is no such contract that also satisfies IR and LL for all settings. As our main results, we show that the class of settings for which there exists a contract satisfying truthfulness, welfare maximization, LL, and IR is identifiable by a polynomial-time algorithm. Furthermore, for these settings, we design a polynomial-time computable contract: given valuation reports from the principals, it returns, if possible for the setting, a payment scheme for the agent that constitutes a contract with all desired properties. We also give a sufficient graph-theoretic condition on the population of principals that ensures the existence of such a contract, as well as two truthful and welfare-maximizing contracts, of which one satisfies LL and the other satisfies IR.
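    For orientation, the sketch below computes the welfare-maximizing action and classic full-information VCG-style payments on an invented common agency instance; this is only the textbook baseline, not the paper's incomplete-information VCG contract, and it ignores the IR and LL constraints discussed above.

```python
import numpy as np

# Invented instance: 3 agent actions, 2 principals.
actions = ["a0", "a1", "a2"]
v = np.array([[1.0, 4.0, 2.0],   # principal 1's value for each agent action
              [2.0, 3.0, 6.0]])  # principal 2's value for each agent action
c = np.array([0.0, 2.0, 5.0])    # agent's cost of each action

welfare = v.sum(axis=0) - c              # total welfare of each action
a_star = int(np.argmax(welfare))         # welfare-maximizing action

# Classic VCG-style payment of principal i: the externality it imposes on the
# others, i.e. the others' best achievable welfare without i minus the welfare
# they actually get at a_star.
payments = []
for i in range(v.shape[0]):
    others = np.delete(v, i, axis=0).sum(axis=0) - c
    payments.append(float(others.max() - others[a_star]))

print(f"welfare-maximizing action: {actions[a_star]}, VCG-style payments: {payments}")
```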

    Deep Contract Design via Discontinuous Networks

    Contract design involves a principal who establishes contractual agreements about payments for outcomes that arise from the actions of an agent. In this paper, we initiate the study of deep learning for the automated design of optimal contracts. We introduce a novel representation: the Discontinuous ReLU (DeLU) network, which models the principal's utility as a discontinuous piecewise affine function of the contract, where each piece corresponds to the agent taking a particular action. DeLU networks implicitly learn closed-form expressions for the incentive compatibility constraints of the agent and the utility maximization objective of the principal, and support parallel inference on each piece through linear programming or interior-point methods that solve for optimal contracts. We provide empirical results that demonstrate success in approximating the principal's utility function with a small number of training samples and in scaling to find approximately optimal contracts on problems with a large number of actions and outcomes.
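    The sketch below illustrates the kind of function such a network has to represent, using an argmax-gated set of linear heads; the layer shapes and gating mechanism are assumptions for illustration, not the paper's DeLU architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pieces, dim = 4, 3                       # invented sizes
W_gate = rng.normal(size=(n_pieces, dim))  # gating layer: picks the active piece
b_gate = rng.normal(size=n_pieces)
W_head = rng.normal(size=(n_pieces, dim))  # linear heads: one affine piece each
b_head = rng.normal(size=n_pieces)

def piecewise_affine_forward(t):
    """Discontinuous piecewise affine map: a hard argmax gate selects a piece,
    and the output is that piece's affine function of the input contract t."""
    k = int(np.argmax(W_gate @ t + b_gate))  # hard selection -> jumps at boundaries
    return float(W_head[k] @ t + b_head[k])  # affine within the selected piece

# Evaluate along a line segment of contracts to make the jumps visible.
for lam in np.linspace(0.0, 1.0, 6):
    t = lam * np.ones(dim)
    print(f"lambda = {lam:.1f} -> f(t) = {piecewise_affine_forward(t):+.3f}")
```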

    Contracts with Information Acquisition, via Scoring Rules

    We consider a principal-agent problem where the agent may privately choose to acquire relevant information prior to taking a hidden action. This model generalizes two special cases: a classic moral hazard setting, and a more recently studied problem of incentivizing information acquisition (IA). We show that all of these problems can be reduced to the design of a proper scoring rule. Under a limited liability condition, we consider the special cases separately and then the general problem. We give novel results for the special case of IA, including a closed-form "pointed polyhedral cone" solution for the general multidimensional problem. We also describe a geometric, scoring-rule-based solution to the classic contracts problem. Finally, we give an efficient algorithm for the general problem of Contracts with Information Acquisition.
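    For readers unfamiliar with the building block, the sketch below shows a standard strictly proper scoring rule, the quadratic (Brier) score, and numerically checks that reporting the true belief maximizes the expected score; the numbers are illustrative and unrelated to the paper's construction.

```python
import numpy as np

def quadratic_score(report, outcome):
    """Quadratic (Brier) scoring rule; it is strictly proper, so the expected
    score is uniquely maximized by reporting the true belief."""
    return 2.0 * report[outcome] - float(np.sum(report ** 2))

def expected_score(report, belief):
    return sum(belief[o] * quadratic_score(report, o) for o in range(len(belief)))

true_belief = np.array([0.7, 0.3])

# Check over a grid of possible reports: the truthful report attains the maximum.
grid = [np.array([p, 1.0 - p]) for p in np.linspace(0.0, 1.0, 101)]
best = max(grid, key=lambda r: expected_score(r, true_belief))
print(f"best report on the grid: {best}, truthful belief: {true_belief}")
```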

    Operationalizing Counterfactual Metrics: Incentives, Ranking, and Information Asymmetry

    From the social sciences to machine learning, it has been well documented that the metrics being optimized are not always aligned with social welfare. In healthcare, Dranove et al. (2003) showed that publishing surgery mortality metrics actually harmed the welfare of sicker patients by increasing provider selection behavior. We analyze the incentive misalignments that arise from such average treated outcome metrics, and show that the incentives driving treatment decisions would align with maximizing total patient welfare if the metrics (i) accounted for counterfactual untreated outcomes and (ii) considered total welfare instead of averaging over treated patients. Operationalizing this, we show how counterfactual metrics can be modified to behave reasonably in patient-facing ranking systems. Extending to realistic settings in which providers observe more about patients than the regulatory agencies do, we bound the decay in performance by the degree of information asymmetry between principal and agent. In doing so, our model connects principal-agent information asymmetry with unobserved heterogeneity in causal inference.
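    A toy example of the misalignment (with invented numbers, not taken from the paper): ranking a provider by the average outcome of treated patients rewards dropping the sickest patient, whereas a metric that accounts for counterfactual untreated outcomes and total welfare rewards treating them.

```python
import numpy as np

# Each patient: outcome if treated and the counterfactual outcome if untreated.
treated_outcome   = np.array([0.9, 0.8, 0.4])   # the last patient is sicker...
untreated_outcome = np.array([0.8, 0.7, 0.1])   # ...but benefits most from treatment

def metrics(treat):
    """Return (average outcome among treated patients, total patient welfare)."""
    avg_treated = float(treated_outcome[treat].mean()) if treat.any() else 0.0
    total_welfare = float(np.where(treat, treated_outcome, untreated_outcome).sum())
    return avg_treated, total_welfare

policies = {
    "treat everyone": np.array([True, True, True]),
    "drop the sick patient": np.array([True, True, False]),
}
for name, treat in policies.items():
    avg, welfare = metrics(treat)
    print(f"{name}: average treated outcome = {avg:.2f}, total welfare = {welfare:.2f}")
```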