    Markov decision processes with time-varying discount factors and random horizon

    Summary: This paper concerns Markov decision processes in which the optimal control problem is to minimize the expected total discounted cost under a non-constant discount factor: the discount factor is time-varying and may depend on the state and the action. Furthermore, the horizon of the optimization problem is given by a discrete random variable, that is, a random horizon is assumed. Under general conditions on the Markov control model, and using the dynamic programming approach, an optimality equation is obtained for both cases, namely, finite support and infinite support of the random horizon. The results are illustrated by two examples, one of them related to optimal replacement.
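
    As a rough illustration of the dynamic programming approach in the finite-support case, the sketch below performs backward induction for a finite-state model with a state- and action-dependent discount factor and a random horizon. All names (c, P, alpha, tail), and the assumption that the horizon is independent of the controlled process, are illustrative choices, not taken from the paper.

        import numpy as np

        def random_horizon_value_iteration(c, P, alpha, tail):
            """Backward induction for min E[ sum_{t < tau} (product of discounts) * c(x_t, a_t) ].

            c[x, a]      one-stage cost
            P[x, a, y]   transition probabilities
            alpha[x, a]  state- and action-dependent discount factor
            tail[t]      P(tau > t) for t = 0, ..., N, with tail[N] == 0
            Assumes the horizon tau is independent of the controlled process.
            """
            N = len(tail) - 1
            n_states, n_actions = c.shape
            V = np.zeros((N + 1, n_states))          # V[N] = 0: no cost past the horizon
            policy = np.zeros((N, n_states), dtype=int)
            for t in range(N - 1, -1, -1):
                # P(tau > t+1 | tau > t): probability that the next stage is reached
                q = tail[t + 1] / tail[t] if tail[t] > 0 else 0.0
                # Q[x, a] = c(x, a) + q * alpha(x, a) * sum_y P(y | x, a) V_{t+1}(y)
                Q = c + q * alpha * (P @ V[t + 1])
                V[t] = Q.min(axis=1)
                policy[t] = Q.argmin(axis=1)
            return V, policy

    The survival ratio q folds the random horizon into an ordinary finite-horizon recursion, which is one standard way such problems reduce to dynamic programming when the horizon is independent of the state process.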

    Topics in dynamic programming

    Dynamic programming is an essential tool lying at the heart of many problems in the modern theory of economic dynamics. Owing to its versatility in solving dynamic optimization problems, it can be used to study the decisions of households, firms, governments, and other economic agents, with a wide range of applications in macroeconomics and finance. Dynamic programming transforms dynamic optimization problems into a class of functional equations, the Bellman equations, which can be solved with appropriate mathematical tools. One of the most important of these is the contraction mapping theorem, a fixed point theorem that can be used to solve the Bellman equation under the usual discounting assumption for economic agents. However, many recent economic models make alternative discounting assumptions under which contraction no longer holds; this is the primary motivation for the thesis.

    The thesis re-examines the standard discrete-time infinite horizon dynamic programming theory under two different discounting specifications: state-dependent discounting and negative discounting. For state-dependent discounting, the standard discounting condition is generalized to an "eventual discounting" condition, under which the Bellman operator is a contraction in the long run rather than in one step. For negative discounting, the theory of monotone concave operators is used to derive a unique solution to the Bellman equation; no contraction mapping arguments are required. The core results of the standard theory are extended to these two cases, and economic applications are discussed.
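
    To make the contraction-based machinery concrete, here is a minimal sketch, in the spirit of the thesis but not taken from it, of value function iteration with a state-dependent discount factor beta(y) attached to the successor state. Under an eventual discounting condition some power of the Bellman operator is a contraction even when a single application is not, so the fixed-point iteration below still converges. All names and the model layout are illustrative assumptions.

        import numpy as np

        def bellman_operator(v, r, P, beta):
            """(Tv)(x) = max_a [ r(x, a) + sum_y P(y | x, a) beta(y) v(y) ].

            r[x, a]     one-period reward
            P[x, a, y]  transition probabilities
            beta[y]     state-dependent discount factor on the successor state
            """
            return (r + P @ (beta * v)).max(axis=1)

        def solve_bellman(r, P, beta, tol=1e-10, max_iter=100_000):
            """Iterate T from v = 0 to an (approximate) fixed point.

            Convergence is guaranteed under eventual discounting, i.e. when
            some iterate T^n is a contraction, even if T itself is not."""
            v = np.zeros(r.shape[0])
            for _ in range(max_iter):
                v_next = bellman_operator(v, r, P, beta)
                if np.max(np.abs(v_next - v)) < tol:
                    return v_next
                v = v_next
            raise RuntimeError("no convergence; eventual discounting may fail")

    With a constant beta this reduces to textbook value function iteration, where the contraction mapping theorem applies in one step; the state-dependent case only weakens where the contraction occurs, not the fixed-point conclusion.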