
Learning and Model Validation

Abstract

This paper studies the following problem. An agent takes actions based on a possibly misspecified model. The agent is 'large', in the sense that his actions influence the process he is trying to learn about. The agent is aware of potential model misspecification and tries to detect it, in real time, using an econometric specification test. If his model fails the test, he formulates a new, better-fitting model. If his model passes the test, he uses it to formulate and implement a policy based on the provisional assumption that the current model is correctly specified and will not change in the future. We claim that this testing and model validation process is an accurate description of most macroeconomic policy problems. Unfortunately, the dynamics produced by this process are not well understood. We make progress on this problem by relating it to a problem that is well understood. In particular, we relate it to the dynamics of constant-gain stochastic approximation algorithms. This enables us to appeal to well-known results from the large deviations literature to help us understand the dynamics of testing and model revision. We show that as the agent applies an increasingly stringent specification test, the large deviation properties of the discrete model validation dynamics converge to those of the continuous learning dynamics. This sheds new light on the recent constant-gain learning literature.

Keywords: Learning, Validation, Relative Entropy, Large Deviation
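To make the validation loop concrete, the following is a minimal sketch in a hypothetical scalar self-referential environment; it is not the paper's model. The agent fits y = b*x and updates b with a constant-gain stochastic approximation rule, while the true coefficient true_coef(b) depends on the agent's own belief (the 'large agent' feedback). The specification test, the gain, the cutoff value, and the re-initialization rule are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical self-referential environment (illustrative only).
# The agent fits y_t = b * x_t + e_t, but the true coefficient depends
# on the agent's own belief b: y_t = true_coef(b) * x_t + w_t.
def true_coef(b, theta0=0.5, theta1=0.4):
    return theta0 + theta1 * b   # self-confirming belief solves b = true_coef(b)

gain   = 0.05        # constant gain (geometric discounting of old data)
cutoff = 9.0         # specification-test critical value (stringency knob)
b, score = 0.0, 0.0  # belief and smoothed test statistic
rejections = 0

for t in range(20_000):
    x = rng.normal()
    y = true_coef(b) * x + rng.normal(scale=0.5)
    err = y - b * x

    # Constant-gain stochastic approximation update of the belief.
    b += gain * x * err

    # Crude real-time specification test: exponentially smoothed,
    # normalized squared forecast error compared against a cutoff.
    score = (1 - gain) * score + gain * (err / 0.5) ** 2
    if score > cutoff:
        b, score = rng.normal(), 0.0   # model fails: formulate a new model
        rejections += 1

print(f"final belief b = {b:.3f}, "
      f"self-confirming fixed point = {0.5 / (1 - 0.4):.3f}, "
      f"rejections = {rejections}")

Under these assumptions the mean dynamics are db/dt = true_coef(b) - b, which pull the belief toward the self-confirming fixed point b* = 0.5/0.6, while test rejections are rare, noise-driven escape events, the kind of behavior the paper analyzes with large deviations tools. Raising cutoff makes the test more stringent in the sense of rejecting less often.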
