How much structure in empirical models?
This chapter highlights the problems that structural methods and SVAR approaches face when estimating DSGE models and examines their ability to capture important features of the data. We show that structural methods are subject to severe identification problems due, in large part, to the nature of DSGE models. These problems can be patched up in a number of ways but solved only if DSGEs are completely reparametrized or respecified. The potential misspecification of the structural relationships gives Bayesian methods an edge over classical ones in structural estimation. SVAR approaches may face invertibility problems, but simple diagnostics can help to detect and remedy them. A pragmatic empirical approach ought to use the flexibility of SVARs against potential misspecification of the structural relationships but must firmly tie SVARs to the class of DSGE models that could have generated the data.

Keywords: DSGE models, SVAR models, identification, invertibility, misspecification, small samples.
Measuring time preferences
We review research that measures time preferences, i.e., preferences over intertemporal tradeoffs. We distinguish between studies using financial flows, which we call “money earlier or later” (MEL) decisions, and studies that use time-dated consumption or effort. Under different structural models, we show how to translate what MEL experiments directly measure (required rates of return for financial flows) into a discount function over utils. We summarize empirical regularities found in MEL studies and the predictive power of those studies. We explain why MEL choices are driven in part by factors that are distinct from underlying time preferences.

Funding: National Institutes of Health (NIA R01AG021650 and P01AG005842) and the Pershing Square Fund for Research in the Foundations of Human Behavior.
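As a minimal hypothetical illustration of the MEL measurement idea (not code from the paper), an indifference between a smaller amount now and a larger amount later pins down a per-period discount factor, assuming linear utility and exponential discounting; the function name and parameters below are assumptions for the sketch:

```python
def implied_discount_factor(x_now, y_later, delay_periods):
    """Per-period discount factor delta implied by indifference between
    x_now today and y_later after delay_periods, under linear utility and
    exponential discounting: x_now = delta**delay_periods * y_later."""
    return (x_now / y_later) ** (1.0 / delay_periods)

# Indifference between $80 today and $100 in one year (annual periods):
delta = implied_discount_factor(80.0, 100.0, 1)   # 0.8
annual_rate = 1.0 / delta - 1.0                    # required rate of return: 0.25
```

Translating such required rates of return into discounting over utils, rather than dollars, is exactly where the structural-model assumptions (e.g., curvature of utility) matter.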
Monetary Policy Rules in a Non-Rational World: A Macroeconomic Experiment
I introduce a new learning-to-forecast experimental design in which subjects in a virtual New-Keynesian macroeconomy based on Woodford (2013) forecast individual rather than aggregate outcomes. This approach is motivated by the critique of Preston (2005) and Woodford (2013) that substituting arbitrary forms of expectations into the reduced-form New-Keynesian model (consisting of the “DIS” equation, the “Phillips curve” and the “Taylor” rule) is inconsistent with its microfoundations. Using this design, I analyze the impact of different interest rate rules on expectation formation and expectation-driven fluctuations. Even when the Taylor principle is fulfilled, instead of quickly converging to the rational expectations equilibrium (REE), the experimental economy exhibits persistent, purely expectation-driven fluctuations that are not necessarily centered on the REE. Only a particularly aggressive monetary authority eliminates these fluctuations and achieves quick convergence to the REE. To explain the aggregate behavior in the experiment, I develop a “noisy” adaptive learning approach, introducing endogenous shocks into a simple adaptive learning model. However, I find that for some monetary policy regimes a reinforcement learning model, applied to different forecasting rules, provides a better fit to the data.
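The "noisy" adaptive learning idea can be sketched as constant-gain learning with an added shock each period. This is a hypothetical simplification, not the paper's estimated model: the gain, the shock distribution, and all names below are assumptions for illustration only.

```python
import random

def noisy_adaptive_learning(observations, gain=0.1, noise_sd=0.05, seed=0):
    """Sketch of constant-gain adaptive learning with idiosyncratic shocks.

    Each period the forecast moves a fraction `gain` toward the last
    observed outcome, plus a Gaussian shock (the 'noisy' part).  With
    noise_sd = 0 this reduces to standard constant-gain learning.
    """
    rng = random.Random(seed)
    forecast = observations[0]          # initialize at the first observation
    forecasts = [forecast]
    for x in observations[1:]:
        forecast += gain * (x - forecast) + rng.gauss(0.0, noise_sd)
        forecasts.append(forecast)
    return forecasts

# Deterministic special case: forecasts close half the gap each period.
path = noisy_adaptive_learning([0.0, 1.0, 1.0, 1.0], gain=0.5, noise_sd=0.0)
# path == [0.0, 0.5, 0.75, 0.875]
```

Under such a rule, whether expectations converge toward the REE or keep fluctuating depends on how strongly the policy rule anchors the outcomes that agents feed back into their forecasts.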