497 research outputs found

    Expected optimal feedback with Time-Varying Parameters

    In this paper we derive the closed-loop form of the Expected Optimal Feedback rule, sometimes called passive learning stochastic control, with time-varying parameters. As such this paper extends the work of Kendrick (1981; 2002, Chapter 6), where parameters are assumed to vary randomly around a known constant mean. Furthermore, we show that the cautionary myopic rule in the Beck and Wieland (2002) model, a test bed for comparing various stochastic optimization approaches, can be cast into this framework and can be treated as a special case of this solution.
    Keywords: optimal experimentation, stochastic optimization, time-varying parameters, expected optimal feedback
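
    For orientation, the Beck and Wieland (2002) test bed is commonly written as a scalar linear model whose slope parameter follows a random walk. The sketch below, in notation assumed here rather than taken from the papers, records that model and the cautionary myopic rule obtained by minimizing only the one-period expected loss under parameter uncertainty.

        % Stylized sketch of the test-bed model; notation assumed here,
        % not copied from the papers.
        \begin{align*}
          x_t &= \alpha\, x_{t-1} + \beta_t\, u_t + \varepsilon_t,
              & \varepsilon_t &\sim N(0,\sigma_{\varepsilon}^{2}), \\
          \beta_t &= \beta_{t-1} + \eta_t,
              & \eta_t &\sim N(0,\sigma_{\eta}^{2}), \\
          \min_{\{u_t\}}\; & E_0 \sum_{t=0}^{T} \delta^{t}
              \bigl[(x_t - \tilde{x})^{2} + \omega\,(u_t - \tilde{u})^{2}\bigr].
        \end{align*}
        % With the current belief \beta_t \sim N(b_t, v_t), minimizing only the
        % one-period expected loss gives the cautionary myopic control:
        \begin{equation*}
          u_t = \frac{b_t\,(\tilde{x} - \alpha\, x_{t-1}) + \omega\,\tilde{u}}
                     {b_t^{2} + v_t + \omega}.
        \end{equation*}

    With v_t = 0 (and omega = 0) this collapses to the certainty-equivalence rule, which is one way to see why the rule is "cautionary": the parameter variance v_t in the denominator shrinks the control toward zero.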

    Expected optimal feedback with Time-Varying Parameters

    In this paper we derive, by using dynamic programming, the closed-loop form of the Expected Optimal Feedback rule with time-varying parameters. As such this paper extends the work of Kendrick (1981; 2002, Chapter 6) to the time-varying-parameter case. Furthermore, we show that the Beck and Wieland (2002) model can be cast into this framework and can be treated as a special case of this solution.
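
    As a concrete illustration of the passive-learning (expected optimal feedback) idea, the Python sketch below simulates a controller who re-estimates the random-walk slope with a scalar Kalman filter each period and then applies the cautious one-period rule from the sketch above; the information value of today's control is ignored, which is what distinguishes passive from active learning. All parameter values and names are illustrative assumptions, not taken from the paper.

        import numpy as np

        # Minimal passive-learning sketch for x_t = alpha*x_{t-1} + beta_t*u_t + eps_t,
        # with beta_t a random walk.  All parameter values below are illustrative.
        rng = np.random.default_rng(0)
        alpha, x_tilde, omega, u_tilde = 1.0, 0.0, 0.0, 0.0
        sig_eps, sig_eta = 1.0, 0.2            # std. dev. of the two noises
        beta_true, b, v = -0.5, -0.8, 0.5      # true slope, prior mean and variance
        x, T = 2.0, 20

        for t in range(T):
            # cautionary myopic control: uses the current estimate and its variance,
            # but does not account for the information value of the control
            u = (b * (x_tilde - alpha * x) + omega * u_tilde) / (b**2 + v + omega)
            # parameter drift and state transition
            beta_true += sig_eta * rng.standard_normal()
            x_new = alpha * x + beta_true * u + sig_eps * rng.standard_normal()
            # scalar Kalman update of the belief (b, v) about beta_t
            v_pred = v + sig_eta**2
            gain = v_pred * u / (u**2 * v_pred + sig_eps**2)
            b = b + gain * (x_new - alpha * x - b * u)
            v = (1.0 - gain * u) * v_pred
            x = x_new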

    The Dual Approach in an Infinite Horizon Model with a Time-Varying Parameter

    In a previous paper, Amman and Tucci (2017) discuss the DUAL control method, based on the seminal works of Tse and Bar-Shalom (1973) and Kendrick (1981), applied to the BMW infinite horizon model with an unknown but constant parameter. In these pages the DUAL solution to the BMW infinite horizon model with one time-varying parameter is reported. The special case where the desired paths for the state and the control are set equal to 0 and the linear system has no constant is considered. The appropriate Riccati quantities for the augmented system are derived and the time-invariant feedback rule is defined following the same steps as in Amman and Tucci (2017). Finally, the new approximate cost-to-go is presented. Two cases are considered. In the first one the optimal control is selected using the updated estimate of the time-varying parameter in the model. In the second one only an old estimate of that parameter is available at the time the decision maker chooses her/his control. For the reader's sake, most of the technical derivations are confined to a number of short appendices.
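
    A time-invariant feedback rule of this kind is, in general linear-quadratic fashion, the fixed point of a Riccati recursion. The Python sketch below shows one common way to iterate that recursion to convergence for a generic (possibly discounted) LQ problem; it is not the paper's augmented-system formulas, and the function and parameter names are assumptions.

        import numpy as np

        def lq_feedback(A, B, Q, R, delta=1.0, tol=1e-10, max_iter=10_000):
            # Iterate the (discounted) discrete-time Riccati equation to a fixed
            # point and return the time-invariant feedback G in u_t = -G x_t,
            # together with the converged Riccati matrix K.
            K = np.zeros_like(Q)
            for _ in range(max_iter):
                G = np.linalg.solve(R + delta * B.T @ K @ B, delta * B.T @ K @ A)
                K_new = Q + delta * A.T @ K @ (A - B @ G)
                if np.max(np.abs(K_new - K)) < tol:
                    return G, K_new
                K = K_new
            return G, K

        # illustrative scalar example (values assumed)
        A = np.array([[1.0]]); B = np.array([[-0.5]])
        Q = np.array([[1.0]]); R = np.array([[0.1]])
        G, K = lq_feedback(A, B, Q, R, delta=0.95)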

    How Active is Active Learning: Value Function Method Versus an Approximation Method

    In a previous paper Amman et al. (Macroecon Dyn, 2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function and the approximation method. By using the same model and dataset as in Beck and Wieland (J Econ Dyn Control 26:1359–1377, 2002), they find that the approximation method produces solutions close to those generated by the value function approach and identify some elements of the model specifications which affect the difference between the two solutions. They conclude that differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
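
    In terms of the stylized test-bed notation sketched earlier (an assumption about the papers' exact notation), the contrast drawn here can be written schematically as follows.

        % Schematic contrast between the two experimental designs:
        \begin{align*}
          \text{Beck--Wieland experiment:} &\quad \alpha = 1,\ \omega = 0
            \quad\text{(nonstationary state, no control penalty)},\\
          \text{this paper:} &\quad |\alpha| < 1,\ \omega > 0
            \quad\text{(stationary state, positive control penalty)},
        \end{align*}
        % both evaluated under the common loss
        \begin{equation*}
          E_0 \sum_{t=0}^{T} \delta^{t}\bigl[(x_t - \tilde{x})^{2}
              + \omega\,(u_t - \tilde{u})^{2}\bigr].
        \end{equation*}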

    Approximating an Infinite Horizon Model in the Presence of Optimal Experimentation

    In a recent article Amman and Tucci (2020) compare the two dominant approaches for solving models with optimal experimentation in economics: the value function approach and an approximation approach. The approximation approach goes back to the engineering literature of the 1970s (cf. Tse & Bar-Shalom, 1973); Kendrick (1981) introduced it in economics. By using the same model and dataset as in Beck and Wieland (2002), Amman and Tucci conclude that differences between the two approaches may be small. In the previous paper we did not present the derivation of the approximation approach for this class of models. Hence, here we present all derivations of the approximation approach for the case of an infinite horizon, as is most common in economic models. By presenting the derivations, the reader obtains a better understanding of, and insight into, how the value function is adequately approximated.
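
    A central object in such derivations is the approximate cost-to-go of the Tse–Bar-Shalom/Kendrick approach, which is customarily split into three components. The sketch below states that decomposition in generic form; the symbols are assumptions rather than the paper's exact notation.

        % Approximate cost-to-go around the first-period control u_0 (schematic):
        \begin{equation*}
          J(u_0) \;\approx\; J_{D}(u_0) \;+\; J_{C}(u_0) \;+\; J_{P}(u_0),
        \end{equation*}
        % where J_D is the deterministic part (all uncertainty ignored),
        % J_C the cautionary part (the cost of additive noise and of current
        % parameter uncertainty), and J_P the probing part (the expected
        % reduction in future losses from the information generated by u_0).
        % The control is then chosen by searching over u_0 for the minimum of J.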

    How active is active learning: value function method vs an approximation method

    In a previous paper Amman and Tucci (2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function and the approximation method. By using the same model and dataset as in Beck and Wieland (2002), they find that the approximation method produces solutions close to those generated by the value function approach and identify some elements of the model specifications which affect the difference between the two solutions. They conclude that differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.

    The DUAL Approach in an Infinite Horizon Model

    In this paper we deliver the solution for the DUAL approach of Kendrick (1981; 2002) with an infinite horizon. The results of this solution form the basis for the paper by Amman and Tucci (2017).
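
    Schematically, DUAL-style methods of this family choose the current control by searching over candidate values and scoring each with an approximate cost-to-go of the kind sketched above. The Python fragment below shows only that outer structure, with the scoring function left as a placeholder, since the actual Riccati-based expressions are the subject of the paper; all names here are hypothetical.

        import numpy as np

        def choose_dual_control(candidates, approx_cost_to_go):
            # Outer loop of a DUAL-style method: evaluate an approximate
            # cost-to-go J(u0) = J_D + J_C + J_P for each candidate first-period
            # control and pick the minimizer.  `approx_cost_to_go` stands in for
            # the paper's Riccati-based expressions.
            costs = [approx_cost_to_go(u0) for u0 in candidates]
            return candidates[int(np.argmin(costs))]

        # illustrative use with a toy quadratic stand-in for the cost-to-go
        grid = np.linspace(-2.0, 2.0, 201)
        u_star = choose_dual_control(grid, lambda u0: (u0 - 0.3) ** 2)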

    Interplay between Coulomb Blockade and Resonant Tunneling studied by the Keldysh Green's Function Method

    A theory of tunneling through a quantum dot is presented which enables us to study the combined effects of Coulomb blockade and the discrete energy spectrum of the dot. The expression for the tunneling current is derived from the Keldysh Green's function method, and is shown to automatically satisfy the conservation of DC current at both junctions.
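
    For orientation, Keldysh-formalism current expressions for an interacting region coupled to two leads are usually of the Meir–Wingreen form shown below; this is the generic formula of that type, not necessarily the exact expression derived in this paper.

        % Generic Meir--Wingreen-type current formula (two-terminal interacting
        % region in the Keldysh formalism; not necessarily this paper's result):
        \begin{equation*}
          I \;=\; \frac{ie}{2\hbar}\int\!\frac{d\varepsilon}{2\pi}\,
          \mathrm{Tr}\Bigl\{
            \bigl[f_L(\varepsilon)\Gamma^{L}-f_R(\varepsilon)\Gamma^{R}\bigr]
            \bigl(G^{r}(\varepsilon)-G^{a}(\varepsilon)\bigr)
            +\bigl[\Gamma^{L}-\Gamma^{R}\bigr]G^{<}(\varepsilon)\Bigr\}.
        \end{equation*}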