Markov Decision Processes with Average-Value-at-Risk criteria

Abstract

We investigate the problem of minimizing the Average-Value-at-Risk (AVaR_r) of the discounted cost generated by a Markov Decision Process (MDP) over a finite and an infinite horizon. We show that this problem can be reduced to an ordinary MDP with an extended state space and give conditions under which an optimal policy exists. We also give a time-consistent interpretation of the AVaR_r. Finally, we consider a numerical example, a simple repeated casino game, which is used to discuss the influence of the risk aversion parameter r of the AVaR_r-criterion.
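For context, a standard representation of the Average-Value-at-Risk of a cost X at level r is the Rockafellar–Uryasev minimization formula sketched below (the exact convention used in the paper may differ in sign or parametrization). It is this dual form that typically motivates the state-space extension: the auxiliary variable s can be carried along as an extra state component of the MDP.

\[
\mathrm{AVaR}_r(X) \;=\; \min_{s \in \mathbb{R}} \Big\{\, s + \frac{1}{1-r}\,\mathbb{E}\big[(X - s)^+\big] \Big\}, \qquad r \in [0,1),
\]

where (x)^+ = max(x, 0). A minimizer s* is a Value-at-Risk of X at level r, and larger r places more weight on the upper tail of the cost distribution, i.e. corresponds to stronger risk aversion.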
