
    Risk-sensitive average optimality in Markov decision processes

    In this note attention is focused on finding policies optimizing risk-sensitive optimality criteria in Markov decision chains. To this end we assume that the total reward generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitive coefficient. The ratio between the first two moments of the reward depends on the value of the risk-sensitive coefficient; if the risk-sensitive coefficient is equal to zero we speak of risk-neutral models. Observe that the first moment of the generated reward corresponds to the expectation of the total reward and the second central moment to the variance of the reward. For communicating Markov processes, and for some specific classes of unichain processes, the long-run risk-sensitive average reward is independent of the starting state. In this note we present necessary and sufficient conditions for the existence of optimal policies independent of the starting state in unichain models and characterize the class of average risk-sensitive optimal policies.
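    As background, the exponential utility criterion this abstract refers to is usually written as follows (the symbol \gamma for the risk-sensitive coefficient is our labeling, not necessarily the note's): the total reward \xi is evaluated through u_\gamma(\xi) = \mathrm{sign}(\gamma)\, e^{\gamma \xi}, whose certainty equivalent is

        Z_\gamma(\xi) = \frac{1}{\gamma} \log \mathbb{E}\left[ e^{\gamma \xi} \right].

    Expanding the cumulant generating function for small \gamma gives

        Z_\gamma(\xi) = \mathbb{E}[\xi] + \frac{\gamma}{2} \mathrm{Var}(\xi) + O(\gamma^2),

    which makes explicit how \gamma weighs the first moment (expected reward) against the second central moment (variance), and shows that \gamma \to 0 recovers the risk-neutral criterion.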

    Average optimality for continuous-time Markov decision processes under weak continuity conditions

    This article considers average optimality for a continuous-time Markov decision process with Borel state and action spaces and an arbitrarily unbounded nonnegative cost rate. The existence of a deterministic stationary optimal policy is proved under a set of conditions different from and more general than those in the previous literature: the controlled process can be explosive; the transition rates can be arbitrarily unbounded and are only required to be weakly continuous; the multifunction defining the admissible action spaces need be neither compact-valued nor upper semicontinuous; and the cost rate is not necessarily inf-compact.
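    For reference, the long-run average cost criterion studied in this setting is typically defined as (our notation, not quoted from the article)

        J(x, \pi) = \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}_x^{\pi}\!\left[ \int_0^T c(x_t, a_t)\, dt \right],

    where c is the nonnegative cost rate; a deterministic stationary policy \pi^* is average optimal if J(x, \pi^*) = \inf_\pi J(x, \pi) for every initial state x.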

    Markov Decision Processes with Risk-Sensitive Criteria: An Overview

    The paper provides an overview of the theory and applications of risk-sensitive Markov decision processes. The term 'risk-sensitive' refers here to the use of the Optimized Certainty Equivalent as a means to measure expectation and risk. This comprises the well-known entropic risk measure and Conditional Value-at-Risk. We restrict our considerations to stationary problems with an infinite time horizon. Conditions are given under which optimal policies exist, and solution procedures are explained. We present both the theory for the case where the Optimized Certainty Equivalent is applied recursively and the case where it is applied to the cumulated reward. Discounted as well as non-discounted models are reviewed.
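    For context, the Optimized Certainty Equivalent (OCE) of Ben-Tal and Teboulle, for a reward X and a concave utility u with u(0) = 0, is defined by

        \mathrm{OCE}_u(X) = \sup_{\eta \in \mathbb{R}} \left\{ \eta + \mathbb{E}\left[ u(X - \eta) \right] \right\}.

    Two standard special cases (our summary; the survey's sign and normalization conventions may differ): the exponential utility u(t) = (1 - e^{-\gamma t})/\gamma yields the entropic risk measure \mathrm{OCE}_u(X) = -\frac{1}{\gamma} \log \mathbb{E}\left[ e^{-\gamma X} \right], and a suitable piecewise-linear utility recovers (minus) the Conditional Value-at-Risk of the loss -X.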

    An optimality system for finite average Markov decision chains under risk-aversion

    This work concerns controlled Markov chains with finite state space and compact action sets. The decision maker is risk-averse with constant risk-sensitivity, and the performance of a control policy is measured by the long-run average cost criterion. Under standard continuity-compactness conditions, it is shown that the (possibly non-constant) optimal value function is characterized by a system of optimality equations which allows one to obtain an optimal stationary policy. Also, it is shown that the optimal superior and inferior limit average cost functions coincide.
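    To sketch the kind of optimality system involved, here is its constant-value form in standard notation (ours, not the paper's): with risk-sensitivity coefficient \lambda > 0, cost c, and transition law p, a constant g and a function h on the finite state space S satisfying

        e^{\lambda (g + h(x))} = \min_{a \in A(x)} \left\{ e^{\lambda c(x,a)} \sum_{y \in S} p(y \mid x, a)\, e^{\lambda h(y)} \right\}, \qquad x \in S,

    identify g as the optimal risk-sensitive average cost, and any stationary policy selecting a minimizing action at each state is optimal. The abstract's point is that when the optimal value function is non-constant, this single equation must be replaced by a system of optimality equations.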

    Continuous-Time Markov Decision Processes with Exponential Utility

    In this paper, we consider a continuous-time Markov decision process (CTMDP) in Borel spaces, where the certainty equivalent with respect to the exponential utility of the total undiscounted cost is to be minimized. The cost rate is nonnegative. We establish the optimality equation. Under the compactness-continuity condition, we show the existence of a deterministic stationary optimal policy. We reduce the risk-sensitive CTMDP problem to an equivalent risk-sensitive discrete-time Markov decision process, which has the same state and action spaces as the original CTMDP. In particular, the value iteration algorithm for the CTMDP problem follows from this reduction. Essentially no conditions on the growth of the transition and cost rates in the state need to be imposed, and the controlled process can be explosive.
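    As an illustration of the value iteration mentioned above, here is a minimal sketch of the standard multiplicative dynamic-programming recursion for a risk-sensitive discrete-time MDP, the kind of problem such a reduction produces. The model data (two states, two actions, the costs, and the transition probabilities) are invented for illustration and are not taken from the paper.

        import numpy as np

        # Minimal sketch of risk-sensitive value iteration for a finite
        # discrete-time MDP, minimizing the certainty equivalent
        # (1/gamma) * log E[exp(gamma * total cost)] over a finite horizon.
        # All model data below are invented for illustration.

        gamma = 0.5                      # risk-sensitivity coefficient (> 0: risk-averse)
        c = np.array([[1.0, 2.0],        # c[x, a]: nonnegative cost of action a in state x
                      [0.5, 1.5]])
        P = np.array([[[0.9, 0.1],       # P[x, a, y]: transition probabilities (rows sum to 1)
                       [0.2, 0.8]],
                      [[0.5, 0.5],
                       [0.7, 0.3]]])
        horizon = 20

        # V[x] = min over policies of E[exp(gamma * accumulated cost) | x_0 = x]
        V = np.ones(c.shape[0])          # horizon 0: no cost accumulated, exp(0) = 1
        for _ in range(horizon):         # apply the multiplicative Bellman operator
            Q = np.exp(gamma * c) * np.einsum('xay,y->xa', P, V)
            V = Q.min(axis=1)

        policy = Q.argmin(axis=1)        # first-stage greedy decision rule
        print(policy, np.log(V) / gamma) # optimal certainty equivalents per state

    After n iterations V(x) equals the minimal value of E[exp(gamma * n-step total cost)], so log(V)/gamma is the optimal certainty equivalent over that horizon; for the total undiscounted cost problem the iterates increase monotonically toward the minimal solution of the optimality equation.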

    A Dynamic Model for Determining Optimal Range Improvement Programs

    A Markov chain dynamic programming model is presented for determining optimal range improvement strategies as well as accompanying livestock production practices. The model specification focuses on an improved representation of rangeland dynamics and livestock response under alternative range conditions. The model is applied to range management decision making in the Cross Timbers Region of central Oklahoma. Results indicate that tebuthiuron treatments are economically feasible over the range of treatment costs evaluated. Optimal utilization of forage production following a treatment requires the conjunctive employment of prescribed burning and variable stocking rates over the treatment's life.
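    To make the structure of such a model concrete, here is a hypothetical sketch of a Markov chain dynamic program of the type described: range-condition states, management actions, and backward induction over a planning horizon maximizing expected discounted net returns. Every state, action, return figure, and transition probability below is an invented placeholder, not a number from the study.

        import numpy as np

        # Hypothetical Markov chain dynamic program for range improvement:
        # states are range-condition classes, actions are management choices,
        # and backward induction maximizes expected discounted net returns.

        states = ["poor", "fair", "good"]            # range-condition classes (hypothetical)
        actions = ["no_treatment", "tebuthiuron", "prescribed_burn"]
        r = np.array([[ 5.0, -20.0,  2.0],           # r[s, a]: net return ($/acre, made up)
                      [12.0,  -8.0,  9.0],
                      [20.0,   0.0, 15.0]])
        P = np.array([[[0.8, 0.2, 0.0],              # P[s, a, s']: condition transitions (made up)
                       [0.2, 0.5, 0.3],
                       [0.5, 0.4, 0.1]],
                      [[0.3, 0.6, 0.1],
                       [0.1, 0.4, 0.5],
                       [0.2, 0.5, 0.3]],
                      [[0.1, 0.3, 0.6],
                       [0.0, 0.3, 0.7],
                       [0.1, 0.2, 0.7]]])
        beta, horizon = 0.95, 25                     # discount factor, planning horizon (years)

        V = np.zeros(len(states))                    # terminal value: no salvage value assumed
        for t in reversed(range(horizon)):           # backward induction
            Q = r + beta * np.einsum('say,y->sa', P, V)
            V = Q.max(axis=1)
            best = Q.argmax(axis=1)

        for s, a in zip(states, best):               # first-period optimal decision rule
            print(f"{s}: {actions[a]}")

    Backward induction of this kind yields a state- and time-dependent decision rule; the printed rule is the first-period policy, in which treatment and stocking decisions would be conditioned on the current range state.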