
    Learning without Recall: A Case for Log-Linear Learning

    We analyze a model of learning and belief formation in networks in which agents follow Bayes' rule yet do not recall their history of past observations and cannot reason about how other agents' beliefs are formed. Instead, they make rational inferences about their current observations, which include a sequence of independent and identically distributed private signals as well as the beliefs of their neighboring agents at each time. Fully rational agents would successively apply Bayes' rule to the entire history of observations, leading to forebodingly complex inferences due to lack of knowledge about the global network structure that generates those observations. To address these complexities, we consider a Learning without Recall model, which, in addition to providing a tractable framework for analyzing the behavior of rational agents in social networks, can also provide a behavioral foundation for the variety of non-Bayesian update rules in the literature. We present the implications of various choices for the time-varying priors of such agents and how these choices affect learning and its rate.

    Comment: in 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys 2015)
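    The style of update the abstract describes can be illustrated with a minimal log-linear sketch: an agent multiplies the likelihood of its private signal by a weighted geometric average of its neighbors' reported beliefs, working in log space for stability. The two-state setup, the influence weights, and the exact functional form below are illustrative assumptions, not the paper's specification.

    ```python
    import numpy as np

    def log_linear_update(signal_loglik, neighbor_beliefs, weights):
        # Log-linear belief update: add the private signal's log-likelihood
        # to a weighted average of the neighbors' log-beliefs, then normalize.
        log_belief = signal_loglik + weights @ np.log(neighbor_beliefs)
        belief = np.exp(log_belief - log_belief.max())  # stabilize before exp
        return belief / belief.sum()

    # Two states of the world; the private signal favors state 0 (values assumed).
    signal_loglik = np.log(np.array([0.7, 0.3]))
    neighbor_beliefs = np.array([[0.6, 0.4],   # neighbor 1's current belief
                                 [0.5, 0.5]])  # neighbor 2's current belief
    weights = np.array([0.5, 0.5])             # influence weights (assumed)
    posterior = log_linear_update(signal_loglik, neighbor_beliefs, weights)
    ```

    Because the update uses only the neighbors' current beliefs and the latest signal, no history needs to be stored, which is the tractability the model trades recall for.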

    Optimal Monetary Policy When Agents Are Learning

    Most studies of optimal monetary policy under learning rely on optimality conditions derived for the case in which agents have rational expectations. In this paper, we derive optimal monetary policy in an economy where the Central Bank knows, and makes active use of, the learning algorithm agents follow in forming their expectations. In this setup, monetary policy can influence future expectations through its effect on learning dynamics, introducing an additional tradeoff between inflation and output gap stabilization. Specifically, the optimal interest rate rule reacts more aggressively to out-of-equilibrium inflation expectations and noisy cost-push shocks than would be optimal under rational expectations: the Central Bank exploits its ability to "drive" future expectations closer to equilibrium. This optimal policy closely resembles optimal policy when the Central Bank can commit and agents have rational expectations. Monetary policy should be more aggressive in containing inflationary expectations when private agents pay more attention to recent data. In particular, when beliefs are updated according to recursive least squares, the optimal policy is time-varying: after a structural break the Central Bank should be more aggressive and relax the degree of aggressiveness in subsequent periods. The policy recommendation is robust: under our policy, the welfare loss if the private sector actually has rational expectations is much smaller than if the Central Bank mistakenly assumes rational expectations when in fact agents are learning.

    Optimal Monetary Policy, Learning, Rational Expectations
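    The recursive least squares updating the abstract refers to can be sketched as follows. The regression specification, the decreasing gain 1/(t+1), and the simulated data are illustrative assumptions; the point is only the shape of the algorithm: each period, agents nudge their coefficient estimates toward reducing the latest forecast error, with a gain that shrinks over time.

    ```python
    import numpy as np

    def rls_step(phi, R, x, y, t):
        # One recursive least squares (RLS) update with decreasing gain:
        # revise the second-moment matrix R, then move the coefficient
        # estimate phi in the direction of the latest forecast error.
        g = 1.0 / (t + 1)
        R = R + g * (np.outer(x, x) - R)
        phi = phi + g * np.linalg.solve(R, x) * (y - x @ phi)
        return phi, R

    # Simulated perceived law of motion y = x'phi + noise (all values assumed).
    rng = np.random.default_rng(0)
    true_phi = np.array([0.5, -0.2])
    phi, R = np.zeros(2), np.eye(2)
    for t in range(1, 5001):
        x = np.array([1.0, rng.normal()])        # constant plus observed shock
        y = x @ true_phi + 0.1 * rng.normal()    # realized outcome
        phi, R = rls_step(phi, R, x, y, t)
    ```

    The shrinking gain is what makes recent data matter less over time; a constant gain would instead keep beliefs permanently sensitive to new observations, which is why the degree of attention to recent data matters for how aggressive policy should be.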

    Persistent Disagreement and Polarization in a Bayesian Setting

    For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.

    Social learning with coarse inference

    We study social learning by boundedly rational agents. Agents take a decision in sequence, after observing their predecessors and a private signal. They are unable to understand their predecessors' decisions in their finest details: they only understand the relation between the aggregate distribution of actions and the state of nature. We show that, in a continuous action space, compared to the rational case, agents put more weight on early signals. Despite this behavioral bias, beliefs converge to the truth. In a discrete action space, by contrast, convergence to the truth does not occur even if agents receive signals of unbounded precision.

    Anticipated Fiscal Policy and Adaptive Learning

    We consider the impact of anticipated policy changes when agents form expectations using adaptive learning rather than rational expectations. To model this, we assume that agents combine limited structural knowledge with a standard adaptive learning rule. We analyze these issues using two well-known set-ups, an endowment economy and the Ramsey model. In our set-up there are important deviations from both rational expectations and purely adaptive learning. Our approach could be applied to many macroeconomic frameworks.

    Taxation, expectations, Ramsey model.

    Eductive stability in real business cycle models

    We re-examine issues of coordination in the standard RBC model. Can the unique rational expectations equilibrium be “educed” by rational agents who contemplate the possibility of small deviations from equilibrium? Surprisingly, we find that coordination along this line cannot be expected. Rational agents anticipating small but possibly persistent deviations have to face the existence of retroactions that necessarily invalidate any initial tentative “common knowledge” of the future. This "impossibility" theorem for eductive learning is not fully overcome when adaptive learning is incorporated into the framework.

    standard RBC model; coordination

    Learning to forecast and cyclical behavior of output and inflation

    This paper considers a sticky price model with a cash-in-advance constraint where agents forecast inflation rates with the help of econometric models. Agents use least squares learning to estimate two competing models, of which one is consistent with rational expectations once learning is complete. When past performance governs the choice of forecast model, agents may prefer to use the inconsistent forecast model, which generates an equilibrium where forecasts are inefficient. While average output and inflation are the same as under rational expectations, higher moments differ substantially: output and inflation show persistence, inflation responds sluggishly to nominal disturbances, and the dynamic correlations of output and inflation match U.S. data surprisingly well.
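    A minimal sketch of choosing a forecast model by past performance, as the abstract describes. The mean-squared-error fitness criterion and the error data below are illustrative assumptions; the paper's exact performance measure may differ. The point is that the rule compares only realized forecast errors, so a model inconsistent with rational expectations can win if it happens to forecast better along the observed path.

    ```python
    import numpy as np

    def pick_model(errors_a, errors_b):
        # Select the forecast model with the lower mean squared past
        # forecast error; ties go to model A.
        mse_a = np.mean(np.square(errors_a))
        mse_b = np.mean(np.square(errors_b))
        return "A" if mse_a <= mse_b else "B"

    # Hypothetical past forecast errors for two competing models.
    errors_a = np.array([0.1, -0.2, 0.05])
    errors_b = np.array([0.4, -0.5, 0.3])
    choice = pick_model(errors_a, errors_b)
    ```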

    E–stability and stability of adaptive learning in models with asymmetric information

    The paper demonstrates how the E–stability principle introduced by Evans and Honkapohja [2001] can be applied to models with heterogeneous and private information in order to assess the stability of rational expectations equilibria under learning. The paper extends already known stability results for the Grossman and Stiglitz [1980] model to a more general case with many differentially informed agents and to the case where information is endogenously acquired by optimizing agents. In both cases it turns out that the rational expectations equilibrium of the model is inherently E-stable and thus locally stable under recursive least squares learning.

    Adaptive Learning, Eductive Stability, Rational Expectations

