
    Learning by Doing vs. Learning from Others in a Principal-Agent Model

    We introduce learning in a principal-agent model of stochastic output sharing under moral hazard. Without knowing the agents' preferences and technology, the principal tries to learn the optimal agency contract. We implement two learning paradigms: social (learning from others) and individual (learning by doing). We use a social evolutionary learning algorithm (SEL) to represent social learning. Within the individual learning paradigm, we investigate the performance of reinforcement learning (RL), experience-weighted attraction learning (EWA), and individual evolutionary learning (IEL). Overall, our results show that learning in the principal-agent environment is very difficult, for three main reasons: (1) the stochastic environment, (2) a discontinuity in the payoff space in a neighborhood of the optimal contract due to the participation constraint, and (3) incorrect evaluation of foregone payoffs in the sequential-game principal-agent setting. The first two factors apply to all learning algorithms we study, while the third is the main contributor to the failure of the EWA and IEL models. Social learning (SEL), especially combined with selective replication, is much more successful in achieving convergence to the optimal contract than the canonical versions of individual learning from the literature. A modified version of the IEL algorithm using realized payoff evaluation performs better than the other individual learning models; however, it still falls short of social learning's ability to converge to the optimal contract. Keywords: learning, principal-agent model, moral hazard.
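
    The payoff discontinuity the authors point to is easy to see in a stripped-down version of the model. The sketch below is purely illustrative: the linear sharing rule s, quadratic effort cost, Gaussian output noise, and reservation utility U0 are all assumptions of mine rather than the paper's specification. It shows how the principal's payoff drops to zero at the participation boundary and why naive hill-climbing over contracts struggles in this environment.

    ```python
    import numpy as np

    # Illustrative parameters; assumptions, not taken from the paper.
    U0 = 0.18      # agent's reservation utility
    COST = 1.0     # quadratic effort-cost coefficient
    NOISE = 0.1    # std. dev. of output noise

    def agent_effort(s):
        # With output = effort + noise and cost c*e^2/2, the agent's
        # optimal effort under share s is e* = s / c.
        return s / COST

    def agent_utility(s):
        e = agent_effort(s)
        return s * e - 0.5 * COST * e**2   # expected wage minus effort cost

    def principal_payoff(s, rng):
        # Participation constraint: the agent rejects any contract worth
        # less than U0, leaving the principal with nothing. With these
        # numbers the boundary sits at s = 0.6, which is also the
        # principal's constrained optimum, so payoffs are discontinuous
        # exactly at the optimal contract.
        if agent_utility(s) < U0:
            return 0.0
        e = agent_effort(s)
        output = e + rng.normal(0.0, NOISE)    # stochastic output
        return (1.0 - s) * output

    # Naive stochastic hill-climbing over the share s. Noisy payoff
    # comparisons plus the discontinuity at the participation boundary
    # make this unreliable, which is the difficulty the abstract describes.
    rng = np.random.default_rng(0)
    s = 0.9
    for _ in range(2000):
        cand = float(np.clip(s + rng.normal(0.0, 0.05), 0.0, 1.0))
        if principal_payoff(cand, rng) > principal_payoff(s, rng):
            s = cand
    print(f"share found: {s:.3f}")   # hovers near the boundary at 0.6
    ```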

    Incentivizing the Dynamic Workforce: Learning Contracts in the Gig-Economy

    In principal-agent models, a principal offers a contract to an agent to perform a certain task. The agent exerts a level of effort that maximizes her utility. The principal is oblivious to the agent's chosen level of effort and conditions her wage only on possible outcomes. In this work, we consider a model in which the principal is unaware of the agent's utility and action space. She sequentially offers contracts to identical agents and observes the resulting outcomes. We present an algorithm for learning the optimal contract under mild assumptions. We bound the number of samples needed for the principal to obtain a contract that is within ε of her optimal net profit, for every ε > 0.
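
    The abstract does not spell out the algorithm, so the following is only a generic sketch of the sequential setup it describes: the principal repeatedly posts a contract, observes outcomes from identical agents whose effort choice is hidden, and grid-searches for an empirically near-optimal contract. The two-outcome structure, the revenue parameter, and the agent model inside run_agent are all hypothetical stand-ins, not the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def run_agent(wage_low, wage_high):
        # Hypothetical black box standing in for the unknown agent: the
        # agent privately picks the effort that maximizes expected wage
        # minus a quadratic cost, and the principal sees only the outcome.
        efforts = np.linspace(0.0, 1.0, 50)
        eu = efforts * wage_high + (1 - efforts) * wage_low - efforts**2
        e = efforts[np.argmax(eu)]
        return rng.random() < e          # success with probability e

    def estimate_profit(wage_low, wage_high, n_samples, revenue=2.0):
        profits = []
        for _ in range(n_samples):
            success = run_agent(wage_low, wage_high)
            profits.append(revenue - wage_high if success else -wage_low)
        return np.mean(profits)

    # Grid search over contracts. By a Hoeffding-style argument, with on
    # the order of log(#cells) / eps**2 samples per cell, the empirically
    # best contract is within eps of the best grid contract with high
    # probability, mirroring the flavor of the abstract's guarantee.
    grid = np.linspace(0.0, 1.0, 11)
    best = max(((wl, wh) for wl in grid for wh in grid),
               key=lambda c: estimate_profit(c[0], c[1], n_samples=200))
    print("best contract on grid:", best)
    ```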

    Progressive learning

    We study a dynamic principal–agent relationship with adverse selection and limited commitment. We show that when the relationship is subject to productivity shocks, the principal may be able to improve her value over time by progressively learning the agent's private information. She may even achieve her first‐best payoff in the long run. The relationship may also exhibit path dependence, with early shocks determining the principal's long‐run value. These findings contrast sharply with the results of the ratchet effect literature, in which the principal persistently obtains low payoffs, giving up substantial informational rents to the agent.

    Overconfidence in a Career-Concerns Setting

    We study the effects of overconfidence in a two-period investment-decision agency setting. Under common priors, agent risk aversion implies inefficiently low first-period investment. In our model, principal and agent disagree about the profitability of the investment decision conditional on a given public signal. An overconfident agent believes that the principal will update her beliefs upwards more often than not. As a consequence, the agent overestimates the benefits of learning from first-period investment. This implies that agent overconfidence mitigates the agency problems arising from the agent’s career concerns, even though an overconfident agent bears more project and reputational risk in equilibrium. Keywords: overconfidence, heterogeneous beliefs, career concerns.

    Dynamic Incentive Contracts with Uncorrelated Private Information and History Dependent Outcomes

    In existing papers on dynamic incentive contracts, the dynamic structure of the principal-agent relationship arises exclusively from the ability of the principal to learn about the hidden information over time. In this paper we deal with a different source of dynamics, which is considered standard in all areas of economics other than the information literature: we study situations where current opportunities depend on past and current actions, notwithstanding any information conveyed by the actions. Standard examples include investment, learning by doing, and R&D. In order to focus on this neglected source of dynamics, we restrict our attention to situations involving asymmetric information in each period, but without any intertemporal informational correlation, so that no dynamic effect arises from informational asymmetries directly. This makes comparisons with static results both easier and more interesting. Keywords: incentive contracts, dynamics, asymmetric information, principal-agent relationship, investment, learning by doing.

    Incentives for Experimenting Agents

    We examine a repeated interaction between an agent, who undertakes experiments, and a principal who provides the requisite funding for these experiments. The agent's actions are hidden, and the principal, who makes the offers, cannot commit to future actions. We identify the unique Markovian equilibrium (whose structure depends on the parameters) and characterize the set of all equilibrium payoffs, uncovering a collection of non-Markovian equilibria that can Pareto dominate and reverse the qualitative properties of the Markovian equilibrium. The prospect of lucrative continuation payoffs makes it more expensive for the principal to incentivize the agent, giving rise to a dynamic agency cost. As a result, constrained efficient equilibrium outcomes call for nonstationary outcomes that front-load the agent's effort and that either attenuate or terminate the relationship inefficiently early. Keywords: experimentation, learning, agency, dynamic agency, venture capital, repeated principal-agent problem.

    Learning-by-employing: the value of commitment

    We analyze a dynamic principal–agent model where an infinitely-lived principal faces a sequence of finitely-lived agents who differ in their ability to produce output. The ability of an agent is initially unknown to both him and the principal. An agent's effort affects the information on ability that is conveyed by performance. We characterize the equilibrium contracts and show that they display short-term commitment to employment when the impact of effort on output is persistent but delayed. By providing insurance against early termination, commitment encourages agents to exert effort, and thus improves on the principal's ability to identify their talent. We argue that this helps explain the use of probationary appointments in environments in which there exists uncertainty about individual ability. Keywords: dynamic principal–agent model, learning, commitment.
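
    As a toy illustration of the mechanism in the abstract, the sketch below assumes a binary ability type and a success probability in which effort widens the gap between able and unable agents; both functional forms are assumptions of mine, not the paper's specification. It shows why output observed under high effort moves beliefs about talent far more than output observed under low effort.

    ```python
    def success_prob(able, effort):
        # Assumed technology: effort raises the success rate of able
        # agents much faster than that of unable ones, so effort makes
        # output more informative about ability.
        return 0.3 + (0.5 * effort if able else 0.1 * effort)

    def update_belief(prior, effort, success):
        # Posterior probability that the agent is able, by Bayes' rule.
        p_h = success_prob(True, effort)
        p_l = success_prob(False, effort)
        like_h = p_h if success else 1.0 - p_h
        like_l = p_l if success else 1.0 - p_l
        return prior * like_h / (prior * like_h + (1.0 - prior) * like_l)

    prior = 0.5
    print(update_belief(prior, effort=1.0, success=True))   # ~0.67: informative
    print(update_belief(prior, effort=0.1, success=True))   # ~0.53: nearly flat
    ```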

    Transferable Control

    In this paper, we introduce the notion of transferable control, defined as a situation where one party (the principal, say) can transfer control to another party (the agent) but cannot commit herself to do so. One theoretical foundation for this notion builds on the distinction between formal and real authority introduced by Aghion and Tirole, in which the actual exercise of authority may require noncontractible information, absent which formal control rights are vacuous. We use this notion to study the extent to which control transfers may allow an agent to reveal information regarding his ability or willingness to cooperate with the principal in the future. We show that the distinction between contractible and transferable control can drastically influence how learning takes place: with contractible control, information about the agent can often be acquired through revelation mechanisms that involve communication and message-contingent control allocations; in contrast, when control is transferable but not contractible, it can be optimal to transfer control unconditionally and learn instead from the way in which the agent exercises control.

    Incentives for Boundedly Rational Agents

    This paper develops a theoretical framework for analyzing incentive schemes under bounded rationality. It starts from a standard principal-agent model and then superimposes an assumption of boundedly rational behavior on the part of the agent. Boundedly rational behavior is modeled as an explicit optimization procedure which combines gradient dynamics with a specific form of social learning called imitation of scope. Keywords: rationality; economic models; behaviour.
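
    The abstract names the two ingredients, gradient dynamics and imitation-based social learning, without giving their details, so the following is only a generic sketch of that combination under an assumed single-peaked payoff; it is not the paper's "imitation of scope" procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def payoff(a):
        # Assumed single-peaked payoff over a one-dimensional action.
        return -(a - 0.7) ** 2

    # A population of boundedly rational agents adjusts actions through a
    # mix of gradient steps and imitation of better-performing peers; a
    # generic stand-in for the paper's gradient-plus-imitation dynamic.
    actions = rng.uniform(0.0, 1.0, size=20)
    STEP, IMITATE = 0.05, 0.3

    for _ in range(200):
        actions = actions + STEP * (-2.0 * (actions - 0.7))  # gradient step
        # Each agent observes one random peer and moves toward the peer's
        # action whenever the peer earned a higher payoff.
        peers = rng.integers(0, len(actions), size=len(actions))
        better = payoff(actions[peers]) > payoff(actions)
        actions = np.where(better,
                           actions + IMITATE * (actions[peers] - actions),
                           actions)

    print(f"mean action after learning: {actions.mean():.3f}")  # near 0.7
    ```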

    Incentives for Experimenting Agents

    We examine a repeated interaction between an agent, who undertakes experiments, and a principal who provides the requisite funding for these experiments. The agent’s actions are hidden, and the principal cannot commit to future actions. The repeated interaction gives rise to a dynamic agency cost: the more lucrative the agent’s stream of future rents following a failure, the more costly are current incentives for the agent. As a result, the principal may deliberately delay experimental funding, reducing the continuation value of the project and hence the agent’s current incentive costs. We characterize the set of recursive Markov equilibria. We also find that there are non-Markov equilibria that make the principal better off than the recursive Markov equilibrium, and that may make both parties better off. Efficient equilibria front-load the agent’s effort, inducing as much experimentation as possible over an initial period, until making a switch to the worst possible continuation equilibrium. The initial phase concentrates the agent’s effort near the beginning of the project, where it is most valuable, while the eventual switch to the worst continuation equilibrium attenuates the dynamic agency cost. Keywords: experimentation, learning, agency, dynamic agency, venture capital, repeated principal-agent problem.