109 research outputs found

    A characterization of the optimal risk-sensitive average cost in finite controlled Markov chains

    This work concerns controlled Markov chains with finite state and action spaces. The transition law satisfies the simultaneous Doeblin condition, and the performance of a control policy is measured by the (long-run) risk-sensitive average cost criterion associated with a positive, but otherwise arbitrary, risk-sensitivity coefficient. Within this context, the optimal risk-sensitive average cost is characterized via a minimization problem in a finite-dimensional Euclidean space. Published in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/105051604000000585.
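
    For orientation, with risk-sensitivity coefficient \lambda > 0 the criterion referred to above is usually defined as follows (the notation C, X_t, A_t is generic, not taken from the paper):

        J(\lambda, x, \pi) = \limsup_{n \to \infty} \frac{1}{\lambda n} \log E_x^{\pi}\!\left[ \exp\!\left( \lambda \sum_{t=0}^{n-1} C(X_t, A_t) \right) \right],

    where C is the one-step cost, X_t the state process, and A_t the actions chosen under policy \pi starting from state x.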

    Risk-sensitive optimal control for Markov decision processes with monotone cost

    The existence of an optimal feedback law is established for the risk-sensitive optimal control problem with denumerable state space. The main assumptions imposed are irreducibility and a near-monotonicity condition on the one-step cost function. A solution can be found constructively using either value iteration or policy iteration under suitable conditions on the initial feedback law.
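
    As a rough illustration of the value-iteration route mentioned above, here is a minimal sketch for a finite truncation of the model (the paper works with a denumerable state space; the array layout, the names P, c, lam, and the span-normalization step are assumptions of this sketch, not the paper's algorithm):

        import numpy as np

        def risk_sensitive_value_iteration(P, c, lam, tol=1e-10, max_iter=10000):
            """Relative value iteration for the multiplicative Bellman operator
                (T h)(x) = min_a [ c(x, a) + (1/lam) * log(sum_y P[a, x, y] * exp(lam * h(y))) ]
            on a finite state/action model.  P has shape (A, S, S), c has shape (S, A),
            and lam > 0 is the risk-sensitivity coefficient."""
            A, S, _ = P.shape
            h = np.zeros(S)
            g = 0.0
            for _ in range(max_iter):
                # certainty-equivalent Q-values; a log-sum-exp trick is advisable if lam * h gets large
                Q = c + (1.0 / lam) * np.log(np.einsum('axy,y->xa', P, np.exp(lam * h)))
                Th = Q.min(axis=1)
                g = Th.min()        # normalizing constant (span trick)
                h_new = Th - g      # keep iterates bounded
                if np.abs(h_new - h).max() < tol:
                    h = h_new
                    break
                h = h_new
            return g, h, Q.argmin(axis=1)  # average-cost estimate, relative value, greedy policy

    At a fixed point of the normalized iteration the constant g and the function h satisfy the multiplicative optimality equation, and the greedy policy is optimal under suitable conditions.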

    An optimality system for finite average Markov decision chains under risk-aversion

    This work concerns controlled Markov chains with finite state space and compact action sets. The decision maker is risk-averse with constant risk-sensitivity, and the performance of a control policy is measured by the long-run average cost criterion. Under standard continuity-compactness conditions, it is shown that the (possibly non-constant) optimal value function is characterized by a system of optimality equations from which an optimal stationary policy can be obtained. It is also shown that the optimal superior and inferior limit average cost functions coincide.
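
    As a reference point, in the unichain case where the optimal value is a constant g, an optimality system of this kind reduces to the single multiplicative equation (standard notation, assumed here):

        e^{\lambda (g + h(x))} = \min_{a \in A(x)} \sum_{y \in S} p_{xy}(a)\, e^{\lambda (C(x,a) + h(y))}, \qquad x \in S,

    where \lambda is the risk-sensitivity coefficient and h a relative value function; the system in the paper replaces the constant g with a possibly state-dependent function to handle the general multichain structure.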

    Risk-sensitive Markov stopping games with an absorbing state

    This work is concerned with discrete-time Markov stopping games with two players. At each decision time, player II can stop the game by paying a terminal reward to player I, or can let the system continue its evolution. In the latter case, player I applies an action affecting the transitions and entitling him to receive a running reward from player II. It is supposed that player I has a nonnull and constant risk-sensitivity coefficient, and that player II tries to minimize the utility of player I. The performance of a pair of decision strategies is measured by the risk-sensitive (expected) total reward of player I and, besides mild continuity-compactness conditions, the main structural assumption on the model is the existence of an absorbing state that is accessible from any starting point. In this context, it is shown that the value function of the game is characterized by an equilibrium equation, and the existence of a Nash equilibrium is established.
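
    Schematically, equilibrium equations for games of this type take a Shapley-like min-max form in certainty-equivalent terms; the following display is an illustrative sketch under assumed notation, not the paper's exact equation:

        V(x) = \min\left\{ r(x),\; \max_{a \in A(x)} \left[ R(x,a) + \frac{1}{\lambda} \log \sum_{y} p_{xy}(a)\, e^{\lambda V(y)} \right] \right\},

    where r is the terminal reward paid by player II upon stopping, R the running reward, and \lambda the risk-sensitivity coefficient of player I.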

    Existence of optimal delay-dependent control for finite-horizon continuous-time Markov decision process

    This paper studies the optimal control problem for the continuous-time Markov decision process with denumerable states and compact action space. The admissible controls depend not only on the current state of the jumping process but also on its history. By the compactification method, we show the existence of an optimal delay-dependent control under some explicit conditions, and further establish the dynamic programming principle. Moreover, we show that the value function is the unique viscosity solution of a certain Hamilton-Jacobi-Bellman equation which does not depend on the delay-dependent control policies. Consequently, under our explicit conditions, whether or not decisions depend on the history of the jumping process has no impact on the value function.
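
    For a finite-horizon continuous-time Markov decision process with transition rates q(y | x, a), the Hamilton-Jacobi-Bellman equation typically takes the following generic form (notation assumed, not taken from the paper):

        \partial_t v(t,x) + \sup_{a \in A} \Big[ r(x,a) + \sum_{y \neq x} q(y \mid x, a)\, \big( v(t,y) - v(t,x) \big) \Big] = 0, \qquad v(T,x) = g(x),

    where r is the running reward and g the terminal reward.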

    A note on Multiplicative Poisson Equation: developments in the span-contraction approach

    In this paper we study the existence of bounded solutions to the Multiplicative Poisson Equation (MPE) in a generic discrete-time setting. Assuming mixing and boundedness of the risk-reward function, we investigate which conditions should be imposed on the underlying non-controlled probability kernel or on the reward function in order for a bounded solution of the MPE to always exist. In particular, we consolidate results based on the span-norm framework and derive an explicit sharp bound that must be imposed on the cost function to guarantee the existence of a bounded solution under mixing. We also study the properties which the probability kernel must satisfy to ensure the existence of a bounded MPE solution for any generic risk-reward function, and characterise the behaviour of the process on the complement of the support of the invariant measure. Finally, we present numerous examples and stochastic-dominance based arguments that help to better understand the intricacies that emerge when the ergodic risk-neutral mean operator is replaced with the ergodic risk-sensitive entropy.
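
    For concreteness, in one common formulation (notation assumed here) the MPE asks for a constant \rho and a bounded function h such that

        \rho + h(x) = f(x) + \log \int_E e^{h(y)}\, P(x, dy), \qquad x \in E,

    equivalently e^{\rho + h(x)} = e^{f(x)} \int_E e^{h(y)} P(x, dy), where f is the bounded risk-reward function, P the non-controlled probability kernel, and \rho the risk-sensitive ergodic constant.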

    Markov Decision Processes with Risk-Sensitive Criteria: An Overview

    The paper provides an overview of the theory and applications of risk-sensitive Markov decision processes. The term 'risk-sensitive' refers here to the use of the Optimized Certainty Equivalent as a means to measure expectation and risk. This comprises the well-known entropic risk measure and Conditional Value-at-Risk. We restrict our considerations to stationary problems with an infinite time horizon. Conditions are given under which optimal policies exist, and solution procedures are explained. We present both the theory where the Optimized Certainty Equivalent is applied recursively and the case where it is applied to the cumulated reward. Discounted as well as non-discounted models are reviewed.
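
    For reference, the Optimized Certainty Equivalent of a random reward X under a concave utility u with u(0) = 0 and 1 \in \partial u(0) is defined (following Ben-Tal and Teboulle) as

        OCE_u(X) = \sup_{\eta \in \mathbb{R}} \big\{ \eta + E[\, u(X - \eta) \,] \big\}.

    Choosing u(t) = (1 - e^{-\gamma t})/\gamma recovers the entropic risk measure -\frac{1}{\gamma} \log E[e^{-\gamma X}], while a suitable piecewise-linear u recovers Conditional Value-at-Risk.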