55 research outputs found

    A Taylor Rule for Fiscal Policy

    In times of rapid macroeconomic change it would seem useful for both fiscal and monetary policy to be adjusted frequently. This is possible for monetary policy, with the regularly scheduled meetings of the Federal Open Market Committee; it is not true for fiscal policy, which mostly varies with the annual Congressional budget cycle. This paper proposes a feedback framework for analyzing whether a move from annual to quarterly fiscal policy changes would improve the performance of stabilization policy. More broadly, the paper considers a complementary rather than competitive framework in which monetary policy, in the form of the Taylor rule, is joined by a similar fiscal policy rule. This framework is then used to consider methodological improvements in both the Taylor rule and the fiscal policy rule, including lags, parameter uncertainty and measurement errors.
    Keywords: design of fiscal policy, optimal experimentation, stochastic optimization, time-varying parameters, numerical experiments
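
    The rules discussed take a simple feedback form. As a rough illustration only, the sketch below writes a Taylor-type interest rate rule with the conventional Taylor (1993) weights next to an analogous fiscal feedback rule; the fiscal rule's functional form and coefficients (phi_y, phi_b) are assumptions for exposition, not the specification studied in the paper.

```python
# Illustrative Taylor-type monetary rule and an analogous fiscal feedback rule.
# The monetary coefficients follow Taylor (1993); the fiscal rule's form and
# weights are illustrative assumptions, not the paper's estimated rule.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Nominal interest rate (percent) from a standard Taylor rule."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

def fiscal_rule(output_gap, debt_gap, structural_balance=0.0, phi_y=0.5, phi_b=0.1):
    """Budget surplus (percent of GDP): falls in recessions (countercyclical)
    and rises when debt exceeds its target (hypothetical weights)."""
    return structural_balance + phi_y * output_gap + phi_b * debt_gap

# Example quarter: inflation 3%, output 1% below potential, debt 10 points high.
print(taylor_rule(inflation=3.0, output_gap=-1.0))   # 5.0
print(fiscal_rule(output_gap=-1.0, debt_gap=10.0))   # 0.5
```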

    Expected optimal feedback with Time-Varying Parameters

    In this paper we derive the closed loop form of the Expected Optimal Feedback rule, sometimes called passive learning stochastic control, with time-varying parameters. As such, this paper extends the work of Kendrick (1981, 2002, Chapter 6), where parameters are assumed to vary randomly around a known constant mean. Furthermore, we show that the cautionary myopic rule in the Beck and Wieland (2002) model, a test bed for comparing various stochastic optimization approaches, can be cast into this framework and treated as a special case of this solution.
    Keywords: optimal experimentation, stochastic optimization, time-varying parameters, expected optimal feedback
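
    As a point of reference for the kind of rule being generalized, the sketch below shows a stylized cautionary myopic control for a scalar Beck and Wieland-type model; the scalar specification, the one-period loss and the numbers are assumptions for illustration, not the closed-loop formula derived in the paper.

```python
# Stylized cautionary myopic (passive learning) rule for a scalar model
# x_{t+1} = x_t + beta*u_t + eps with beta unknown; an illustration of the kind
# of rule the paper nests, not the paper's closed-loop solution.

def cautionary_myopic_control(x, b_hat, b_var, target=0.0):
    """Minimize E[(x_{t+1} - target)^2] given estimate b_hat with variance b_var.

    E[(x + beta*u - target)^2] = (x + b_hat*u - target)^2 + b_var*u**2 + noise,
    so the first-order condition gives u = -b_hat*(x - target)/(b_hat**2 + b_var).
    """
    return -b_hat * (x - target) / (b_hat**2 + b_var)

# With no parameter uncertainty the rule is the certainty-equivalence control;
# a larger estimation variance shrinks the response (Brainard-style caution).
print(cautionary_myopic_control(x=1.0, b_hat=0.5, b_var=0.0))   # -2.0
print(cautionary_myopic_control(x=1.0, b_hat=0.5, b_var=0.25))  # -1.0
```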

    Expected optimal feedback with Time-Varying Parameters

    In this paper we derive, using dynamic programming, the closed loop form of the Expected Optimal Feedback rule with time-varying parameters. As such, this paper extends the work of Kendrick (1981, 2002, Chapter 6) to the time-varying parameter case. Furthermore, we show that the Beck and Wieland (2002) model can be cast into this framework and treated as a special case of this solution.
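
    The time-varying parameter in this setting is usually tracked with a Kalman filter under a random-walk law of motion. The sketch below shows that filtering step for a single coefficient in a stylized observation equation; the specification and the noise variances are assumptions chosen for illustration rather than values from the paper.

```python
import numpy as np

# Tracking a time-varying coefficient beta_t with a scalar Kalman filter, under
# the assumed model y_t = beta_t*u_t + eps_t, beta_{t+1} = beta_t + eta_t.
# Specification and variances are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
T, sig_eps, sig_eta = 200, 0.5, 0.05

beta_true = 0.8 + np.cumsum(rng.normal(0.0, sig_eta, T))   # random-walk parameter
u = rng.normal(0.0, 1.0, T)                                # controls (here exogenous)
y = beta_true * u + rng.normal(0.0, sig_eps, T)            # observations

b, p = 0.0, 1.0                                            # prior mean and variance
for t in range(T):
    p = p + sig_eta**2                            # time update: the parameter drifts
    k = p * u[t] / (u[t]**2 * p + sig_eps**2)     # Kalman gain
    b = b + k * (y[t] - b * u[t])                 # measurement update of the estimate
    p = (1.0 - k * u[t]) * p                      # updated estimation variance

print(f"final estimate {b:.2f}, true value {beta_true[-1]:.2f}")
```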

    How Active is Active Learning: Value Function Method Versus an Approximation Method

    In a previous paper, Amman et al. (Macroecon Dyn, 2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function method and the approximation method. Using the same model and dataset as in Beck and Wieland (J Econ Dyn Control 26:1359–1377, 2002), they find that the approximation method produces solutions close to those generated by the value function approach, and they identify some elements of the model specification which affect the difference between the two solutions. They conclude that the differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
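
    Both design choices at issue, the persistence of the state process and the penalty on the control, already appear in a simple myopic benchmark. The sketch below writes a one-period cautionary rule for an assumed scalar model x_{t+1} = rho*x_t + beta*u_t + eps with loss E[x_{t+1}^2] + omega*u^2, purely to show how rho and omega shift the solution; it is not the value function or approximation solution compared in the paper.

```python
# One-period cautionary rule for x_{t+1} = rho*x_t + beta*u_t + eps with loss
# E[x_{t+1}^2] + omega*u^2; beta has estimate b_hat with variance b_var.
# Illustrative scalar benchmark, not the solutions studied in the paper.

def myopic_control(x, b_hat, b_var, rho=1.0, omega=0.0):
    """First-order condition of (rho*x + b_hat*u)^2 + b_var*u^2 + omega*u^2."""
    return -b_hat * rho * x / (b_hat**2 + b_var + omega)

x0, b_hat, b_var = 1.0, 0.5, 0.25
# Nonstationary process, no control penalty (the earlier dataset's setting):
print(myopic_control(x0, b_hat, b_var, rho=1.0, omega=0.0))   # -1.0
# Stationary process with a positive penalty on the control:
print(myopic_control(x0, b_hat, b_var, rho=0.7, omega=0.5))   # -0.35
```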

    The Dual Approach in an Infinite Horizon Model with a Time-Varying Parameter

    In a previous paper, Amman and Tucci (2017) discuss the DUAL control method, based on the seminal work of Tse and Bar-Shalom (1973) and Kendrick (1981), applied to the BMW infinite horizon model with an unknown but constant parameter. In these pages the DUAL solution to the BMW infinite horizon model with one time-varying parameter is reported. The special case where the desired paths for the state and control are set equal to 0 and the linear system has no constant is considered. The appropriate Riccati quantities for the augmented system are derived and the time-invariant feedback rule is defined following the same steps as in Amman and Tucci (2017). Finally, the new approximate cost-to-go is presented. Two cases are considered. In the first, the optimal control is selected using the updated estimate of the time-varying parameter in the model. In the second, only an old estimate of that parameter is available at the time the decision maker chooses her/his control. For the reader's sake, most of the technical derivations are confined to a number of short appendices.
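
    In the special case described (zero desired paths, no constant term), the certainty-equivalence part of a time-invariant feedback rule comes from a scalar algebraic Riccati equation. The sketch below iterates that equation for an assumed scalar system and evaluates the resulting feedback at an updated versus an older parameter estimate, mirroring the two cases mentioned; the dual cost-to-go corrections themselves are not reproduced.

```python
# Certainty-equivalence piece of a time-invariant feedback rule for the assumed
# scalar system x_{t+1} = a*x_t + b*u_t + eps with loss sum of q*x^2 + r*u^2 and
# zero desired paths. The dual (probing) corrections of the paper are omitted.

def scalar_riccati_feedback(a, b, q=1.0, r=1.0, tol=1e-12):
    """Iterate p = q + a^2*p - (a*b*p)^2/(r + b^2*p); return g in u = -g*x."""
    p = q
    while True:
        p_new = q + a**2 * p - (a * b * p)**2 / (r + b**2 * p)
        if abs(p_new - p) < tol:
            break
        p = p_new
    return a * b * p / (r + b**2 * p)

a = 0.7
b_updated, b_old = 0.55, 0.40          # updated vs stale estimate of the parameter
for b_hat in (b_updated, b_old):
    g = scalar_riccati_feedback(a, b_hat)
    print(f"estimate {b_hat:.2f} -> feedback u = -{g:.3f} x")
```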

    Numerical solutions of the algebraic matrix Riccati equation.

    The linear-quadratic control model is one of the most widely used control models in both empirical and theoretical economic modeling. In order to obtain the equilibrium solution of this control model, the so-called algebraic matrix Riccati equation has to be solved. In this note we present a numerical method for solving this equation. Our method solves the Riccati equation as a multidimensional fixed-point problem. By establishing the analytical derivative of the Riccati equation we have been able to construct a very efficient Newton-type solution method with quadratic convergence properties. Our method is an extension of the Newton-Raphson method described in Kwakernaak and Sivan (1972) and does not require any special conditions on the transition matrix, unlike the nonrecursive method.
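
    For readers who want a concrete starting point, the sketch below implements a standard Newton-type (Newton-Kleinman/Hewer) iteration for the discrete-time algebraic matrix Riccati equation, where each step solves the linear Stein equation implied by the current feedback gain; it illustrates the quadratic-convergence idea but is not necessarily the recursion derived in the note.

```python
import numpy as np

# Newton-type (Hewer) iteration for the discrete algebraic Riccati equation
#   P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA.
# Each step solves the linear Stein equation P = F'PF + Q + K'RK by
# vectorization. Standard textbook scheme, not necessarily the note's recursion.

def dare_newton(A, B, Q, R, K0=None, iters=20, tol=1e-12):
    n = A.shape[0]
    I = np.eye(n * n)
    K = np.zeros((B.shape[1], n)) if K0 is None else K0   # A - B@K must be stable
    for _ in range(iters):
        F = A - B @ K
        Qbar = Q + K.T @ R @ K
        # Solve P = F'PF + Qbar  <=>  (I - kron(F', F')) vec(P) = vec(Qbar)
        P = np.linalg.solve(I - np.kron(F.T, F.T), Qbar.reshape(-1)).reshape(n, n)
        K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # updated gain
        if np.max(np.abs(K_new - K)) < tol:
            K = K_new
            break
        K = K_new
    return P, K

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
P, K = dare_newton(A, B, Q, R)
residual = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) - P
print(np.max(np.abs(residual)))   # ~1e-15 once converged
```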

    How active is active learning: value function method vs an approximation method

    In a previous paper, Amman and Tucci (2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function method and the approximation method. Using the same model and dataset as in Beck and Wieland (2002), they find that the approximation method produces solutions close to those generated by the value function approach, and they identify some elements of the model specification which affect the difference between the two solutions. They conclude that the differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
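
    To make the contrast concrete, the short simulation below runs a stylized scalar process under a simple myopic rule in the two settings mentioned, nonstationary with no control penalty versus stationary with a positive penalty; the model, rule and parameter values are assumptions for illustration and do not reproduce the paper's experiments.

```python
import numpy as np

# Simulate a stylized process x_{t+1} = rho*x_t + beta*u_t + eps under a myopic
# rule u_t = -beta*rho*x_t/(beta^2 + omega), comparing the nonstationary,
# no-penalty setting with the stationary, positive-penalty setting.
# Model, rule and numbers are illustrative assumptions only.

def average_loss(rho, omega, beta=0.5, T=2000, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, loss = 1.0, 0.0
    for _ in range(T):
        u = -beta * rho * x / (beta**2 + omega)     # certainty-equivalent myopic rule
        loss += x**2 + omega * u**2                 # realized per-period loss
        x = rho * x + beta * u + rng.normal(0.0, sigma)
    return loss / T

print("nonstationary, no penalty :", round(average_loss(rho=1.0, omega=0.0), 2))
print("stationary, with penalty  :", round(average_loss(rho=0.7, omega=0.5), 2))
```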

    Climate control of terrestrial carbon exchange across biomes and continents

    Peer reviewed