State-Augmentation Transformations for Risk-Sensitive Reinforcement Learning
In the MDP framework, the general reward function takes three arguments (current state, action, and successor state), but it is often simplified to a function of two arguments (current state and action). The former is called a
transition-based reward function, whereas the latter is called a state-based
reward function. When the objective involves the expected cumulative reward
only, this simplification works perfectly. However, when the objective is
risk-sensitive, this simplification leads to an incorrect value. We present
state-augmentation transformations (SATs), which preserve the reward sequences
as well as the reward distributions and the optimal policy in risk-sensitive
reinforcement learning. In risk-sensitive scenarios, we first prove that, for
every MDP with a stochastic transition-based reward function, there exists an
MDP with a deterministic state-based reward function, such that for any given
(randomized) policy for the first MDP, there exists a corresponding policy for
the second MDP, such that both Markov reward processes share the same reward
sequence. Second, we illustrate two situations in an inventory control problem that require the proposed SATs: applying Q-learning (or other learning methods) to MDPs with transition-based reward functions, and applying methods designed for Markov processes with deterministic state-based reward functions to Markov processes with general reward functions. We show the advantage of the SATs by considering Value-at-Risk as an
example: a risk measure defined on the reward distribution itself rather than on summary statistics (such as the mean and variance) of that distribution. We illustrate the
error in the reward distribution estimation from the direct use of Q-learning,
and show how the SATs enable a variance formula to work on Markov processes
with general reward functions.
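
To make the first result concrete, here is a minimal tabular sketch of such a transformation in Python. It is an illustrative reading of the construction, not the paper's exact definition (all names, including augment, P, R, and r_init, are ours): the augmented state pairs a state with the reward realized on entering it, so the reward becomes a deterministic function of the augmented state alone.

def augment(P, R, r_init=0.0):
    """P[s][a] -> list of (s_next, prob); R[s][a][s_next] -> list of (r, prob).
    Returns augmented dynamics over states (s, r) and a deterministic
    state-based reward R_aug[(s, r)] = r."""
    # 1. Collect all augmented states: (s, r) where r is the reward realized
    #    on entering s (r_init for states never yet entered).
    aug = {(s, r_init) for s in P}
    for s in P:
        for a in P[s]:
            for s_next, _ in P[s][a]:
                for r, _ in R[s][a][s_next]:
                    aug.add((s_next, r))
    # 2. Augmented dynamics: identical for every stored reward component r0,
    #    since the next transition depends only on the underlying state.
    P_aug = {}
    for (s, r0) in aug:
        if s not in P:                       # terminal state: no actions
            continue
        P_aug[(s, r0)] = {
            a: [((s_next, r), p * q)
                for s_next, p in P[s][a]
                for r, q in R[s][a][s_next]]
            for a in P[s]
        }
    # 3. The reward is now deterministic and state-based.
    R_aug = {(s, r): r for (s, r) in aug}
    return P_aug, R_aug

# Example: a single state with one action whose reward is +1 or -1 with
# equal probability; after the transformation each augmented state carries
# its realized reward deterministically.
P = {"s": {"a": [("s", 1.0)]}}
R = {"s": {"a": {"s": [(1.0, 0.5), (-1.0, 0.5)]}}}
P_aug, R_aug = augment(P, R)                 # R_aug[("s", 1.0)] == 1.0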
Influence-Optimistic Local Values for Multiagent Planning --- Extended Version
Recent years have seen the development of methods for multiagent planning
under uncertainty that scale to tens or even hundreds of agents. However, most
of these methods either make restrictive assumptions on the problem domain, or
provide approximate solutions without any guarantees on quality. Methods in the
former category typically build on heuristic search using upper bounds on the
value function. Unfortunately, no techniques exist to compute such upper bounds
for problems with non-factored value functions. To allow for meaningful
benchmarking through measurable quality guarantees on a very general class of
problems, this paper introduces a family of influence-optimistic upper bounds
for factored decentralized partially observable Markov decision processes
(Dec-POMDPs) that do not have factored value functions. Intuitively, we derive
bounds on very large multiagent planning problems by subdividing them into sub-problems and, for each sub-problem, making optimistic assumptions
with respect to the influence that will be exerted by the rest of the system.
We numerically compare the different upper bounds and demonstrate how we can
achieve a non-trivial guarantee that a heuristic solution for problems with
hundreds of agents is close to optimal. Furthermore, we provide evidence that
the upper bounds may improve the effectiveness of heuristic influence search,
and discuss further potential applications to multiagent planning.

Comment: Long version of IJCAI 2015 paper (and extended abstract at AAMAS 2015).
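
As a rough, self-contained illustration of this idea (ours, not the paper's formal construction), the Python sketch below bounds the joint value by letting every sub-problem choose its own best-case influence independently; subproblems, influences, and local_value are hypothetical placeholders for the factored model.

def influence_optimistic_bound(subproblems, influences, local_value):
    """Optimistic upper bound on the joint value, assuming (i) the joint
    value is at most the sum of local values and (ii) the true influence
    exerted by the rest of the system lies in the finite set `influences`."""
    return sum(max(local_value(sub, infl) for infl in influences)
               for sub in subproblems)

Since each sub-problem is evaluated under its most favorable influence, the sum can only over-estimate the value achievable by any joint policy, which is what makes it usable as a quality guarantee for heuristic solutions.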
Maximizing the Conditional Expected Reward for Reaching the Goal
The paper addresses the problem of computing maximal conditional expected
accumulated rewards until reaching a target state (briefly called maximal
conditional expectations) in finite-state Markov decision processes where the
condition is given as a reachability constraint. Conditional expectations of
this type can, e.g., stand for the maximal expected termination time of
probabilistic programs with non-determinism, under the condition that the
program eventually terminates, or for the worst-case expected penalty to be
paid, assuming that at least three deadlines are missed. The main results of
the paper are (i) a polynomial-time algorithm to check the finiteness of
maximal conditional expectations, (ii) PSPACE-completeness for the threshold
problem in acyclic Markov decision processes where the task is to check whether
the maximal conditional expectation exceeds a given threshold, (iii) a
pseudo-polynomial-time algorithm for the threshold problem in the general
(cyclic) case, and (iv) an exponential-time algorithm for computing the maximal
conditional expectation and an optimal scheduler.

Comment: 103 pages, extended version with appendices of a paper accepted at TACAS 2017.
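
A tiny worked example (ours, with made-up numbers) shows the quantity of interest, assuming a fixed scheduler so that the MDP collapses to a finite set of path classes:

# Each entry: (probability, accumulated reward, reaches the goal?).
paths = [
    (0.5, 2.0, True),    # reach the goal directly
    (0.3, 5.0, True),    # reach the goal via a detour
    (0.2, 9.0, False),   # diverge; excluded by the reachability condition
]

p_goal = sum(p for p, _, ok in paths if ok)        # 0.8
e_joint = sum(p * r for p, r, ok in paths if ok)   # 0.5*2 + 0.3*5 = 2.5
print(e_joint / p_goal)                            # conditional expectation: 3.125

Maximizing this renormalized quantity over schedulers is what makes the problem harder than ordinary expected-reward optimization: the denominator also depends on the scheduler.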
Chance-Constrained Control with Lexicographic Deep Reinforcement Learning
This paper proposes a lexicographic Deep Reinforcement Learning (DeepRL)-based approach to chance-constrained Markov Decision Processes, in which the controller seeks to ensure that the probability of satisfying the constraint is above a given threshold. Standard DeepRL approaches require i) the constraints to be included as additional weighted terms in the cost function, in a multi-objective fashion, and ii) the tuning of the introduced weights during the training phase of the Deep Neural Network (DNN) according to the probability thresholds. The proposed approach, instead, requires separately training one constraint-free DNN and one DNN associated with each constraint, and then, at each time step, selecting which DNN to use depending on the observed system state. The presented solution does not require any hyper-parameter tuning besides the standard DNN ones, even if the probability thresholds change. A lexicographic version of the well-known DeepRL algorithm DQN is also proposed and validated via simulations.
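
The per-time-step selection rule can be sketched as follows; this is our hedged reading of the mechanism, not the paper's code, and every name (q_free, constraint_nets, sat_prob, actions, value) is a hypothetical placeholder:

def select_action(state, q_free, constraint_nets, thresholds):
    """Lexicographic action selection: constraints are screened in priority
    order, assuming constraint_nets[i].sat_prob(state, a) estimates the
    probability of satisfying constraint i after playing a."""
    actions = list(q_free.actions(state))
    for net, theta in zip(constraint_nets, thresholds):
        safe = [a for a in actions if net.sat_prob(state, a) >= theta]
        if safe:
            actions = safe           # keep only actions meeting the threshold
        else:
            # threshold unreachable here: keep the action closest to it
            actions = [max(actions, key=lambda a: net.sat_prob(state, a))]
    # among the surviving actions, maximize the unconstrained value
    return max(actions, key=lambda a: q_free.value(state, a))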
Risk Aversion in Finite Markov Decision Processes Using Total Cost Criteria and Average Value at Risk
In this paper we present an algorithm to compute risk averse policies in
Markov Decision Processes (MDP) when the total cost criterion is used together
with the average value at risk (AVaR) metric. Risk averse policies are needed
when large deviations from the expected behavior may have detrimental effects,
and conventional MDP algorithms usually ignore this aspect. We provide
conditions for the structure of the underlying MDP ensuring that approximations
for the exact problem can be derived and solved efficiently. Our findings are
novel inasmuch as average value at risk has not previously been considered in
association with the total cost criterion. Our method is demonstrated in a
rapid-deployment scenario in which a robot is tasked with reaching a target location within a temporal deadline, where increased speed is
associated with increased probability of failure. We demonstrate that the
proposed algorithm not only produces a risk averse policy reducing the
probability of exceeding the expected temporal deadline, but also provides the
statistical distribution of costs, thus offering a valuable analysis tool.
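
For reference, the AVaR metric used here (also known as Conditional Value at Risk) has a simple empirical form; the sketch below and its sample numbers are purely illustrative:

from math import ceil

def avar(costs, alpha):
    """Empirical Average Value at Risk at level alpha in (0, 1]: the mean of
    the worst ceil(alpha * n) cost samples, i.e. the expected cost in the
    alpha-tail of the distribution."""
    ordered = sorted(costs, reverse=True)    # largest (worst) costs first
    k = ceil(alpha * len(ordered))
    return sum(ordered[:k]) / k

# e.g. hypothetical travel times for the robot above:
samples = [3.1, 2.8, 3.5, 4.0, 2.9, 3.2, 6.5, 3.0, 3.3, 5.8]
print(avar(samples, 0.2))                    # (6.5 + 5.8) / 2 = 6.15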