Using mathematical programming to solve Factored Markov Decision Processes with Imprecise Probabilities
This paper investigates Factored Markov Decision Processes with Imprecise Probabilities (MDPIPs); that is, Factored Markov Decision Processes (MDPs) whose transition probabilities are imprecisely specified. We derive efficient approximate solutions for Factored MDPIPs based on mathematical programming. To do this, we extend previous linear programming approaches for linear approximations in Factored MDPs, resulting in a multilinear formulation for robust “maximin” linear approximations in Factored MDPIPs. By exploiting the factored structure in MDPIPs, we demonstrate orders-of-magnitude reductions in solution time over standard exact non-factored approaches, in exchange for relatively low approximation errors, on a difficult class of benchmark problems with millions of states.
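As a minimal flat-state sketch of the "maximin" idea the abstract describes (not the paper's factored multilinear program), the following shows a robust Bellman backup over interval-valued transition probabilities: the adversary picks the worst distribution inside the intervals, and the agent maximizes against it. All function names and the toy numbers are invented for illustration.

```python
def worst_case_dist(lo, hi, values):
    """Adversarial distribution within per-successor bounds [lo[i], hi[i]]
    that minimizes the expected value of `values` (mass sums to 1)."""
    p = list(lo)
    slack = 1.0 - sum(lo)
    # Pour the remaining probability mass onto the lowest-value successors first.
    for s in sorted(range(len(values)), key=lambda i: values[i]):
        add = min(hi[s] - lo[s], slack)
        p[s] += add
        slack -= add
    return p

def maximin_value_iteration(rewards, lo, hi, gamma=0.9, iters=200):
    """Maximin value iteration: rewards[s][a] is the immediate reward,
    lo[s][a]/hi[s][a] are per-successor transition-probability bounds."""
    n = len(rewards)
    V = [0.0] * n
    for _ in range(iters):
        V = [max(rewards[s][a] + gamma * sum(
                     pr * V[t]
                     for t, pr in enumerate(worst_case_dist(lo[s][a], hi[s][a], V)))
                 for a in range(len(rewards[s])))
             for s in range(n)]
    return V

# Toy 2-state, 2-action MDPIP (numbers are arbitrary but interval-consistent).
rewards = [[1.0, 0.5], [0.0, 2.0]]
lo = [[[0.6, 0.2], [0.3, 0.3]], [[0.1, 0.5], [0.4, 0.2]]]
hi = [[[0.8, 0.4], [0.7, 0.7]], [[0.5, 0.9], [0.6, 0.4]]]
V = maximin_value_iteration(rewards, lo, hi)
```

The inner adversarial step has a closed-form greedy solution here because the credal set is an interval box; the paper's contribution is avoiding this flat enumeration entirely by exploiting factored structure.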
Computational Approaches for Stochastic Shortest Path on Succinct MDPs
We consider the stochastic shortest path (SSP) problem for succinct Markov
decision processes (MDPs), where the MDP consists of a set of variables, and a
set of nondeterministic rules that update the variables. First, we show that
several examples from the AI literature can be modeled as succinct MDPs. Then
we present computational approaches for upper and lower bounds for the SSP
problem: (a) for computing upper bounds, our method is polynomial-time in the
implicit description of the MDP; (b) for lower bounds, we present a
polynomial-time (in the size of the implicit description) reduction to
quadratic programming. Our approach is applicable even to infinite-state MDPs.
Finally, we present experimental results to demonstrate the effectiveness of
our approach on several classical examples from the AI literature.
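The SSP objective the abstract refers to can be illustrated with a small explicit-state sketch (the paper works on succinct MDPs given by variable-update rules, which this flat value iteration does not capture); the names and the random-walk example below are invented for illustration.

```python
def ssp_value_iteration(transitions, cost, goal, iters=500):
    """Stochastic shortest path: transitions[s][a] is a list of
    (prob, next_state); returns the expected cost-to-goal under the
    optimal policy (assumes the goal is reachable from every state)."""
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        for s in transitions:
            if s == goal:
                continue  # absorbing goal state has cost 0
            V[s] = min(cost[s][a] + sum(p * V[t] for p, t in succ)
                       for a, succ in transitions[s].items())
    return V

# Toy 4-state random walk: action "step" costs 1 and advances with prob 0.5,
# otherwise stays put; the goal is state 3.
transitions = {s: {"step": [(0.5, s + 1), (0.5, s)]} for s in range(3)}
transitions[3] = {}
cost = {s: {"step": 1.0} for s in range(3)}
V = ssp_value_iteration(transitions, cost, goal=3)
```

Each stage needs two expected attempts, so the expected cost-to-goal from state `s` converges to `2 * (3 - s)`.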
Influence-Optimistic Local Values for Multiagent Planning --- Extended Version
Recent years have seen the development of methods for multiagent planning
under uncertainty that scale to tens or even hundreds of agents. However, most
of these methods either make restrictive assumptions on the problem domain, or
provide approximate solutions without any guarantees on quality. Methods in the
former category typically build on heuristic search using upper bounds on the
value function. Unfortunately, no techniques exist to compute such upper bounds
for problems with non-factored value functions. To allow for meaningful
benchmarking through measurable quality guarantees on a very general class of
problems, this paper introduces a family of influence-optimistic upper bounds
for factored decentralized partially observable Markov decision processes
(Dec-POMDPs) that do not have factored value functions. Intuitively, we derive
bounds on very large multiagent planning problems by subdividing them into
sub-problems and, for each sub-problem, making optimistic assumptions
with respect to the influence that will be exerted by the rest of the system.
We numerically compare the different upper bounds and demonstrate how we can
achieve a non-trivial guarantee that a heuristic solution for problems with
hundreds of agents is close to optimal. Furthermore, we provide evidence that
the upper bounds may improve the effectiveness of heuristic influence search,
and discuss further potential applications to multiagent planning.

Comment: Long version of IJCAI 2015 paper (and extended abstract at AAMAS 2015).
Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters
Markov decision processes (MDPs) are a popular model for performance analysis
and optimization of stochastic systems. The parameters governing the stochastic
behavior of an MDP are estimated from empirical observations of a system, so
their values are not known precisely. Different types of MDPs with uncertain,
imprecise or bounded transition rates or probabilities and rewards exist in the
literature.
Commonly, analysis of models with uncertainties amounts to searching for the
most robust policy which means that the goal is to generate a policy with the
greatest lower bound on performance (or, symmetrically, the lowest upper bound
on costs). However, hedging against an unlikely worst case may lead to losses
in other situations. In general, one is interested in policies that behave well
in all situations which results in a multi-objective view on decision making.
In this paper, we consider policies for the expected discounted reward
measure of MDPs with uncertain parameters. In particular, the approach is
defined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst, best
and average case performances of a policy are analyzed simultaneously, which
yields a multi-scenario multi-objective optimization problem. The paper
presents and evaluates approaches to compute the pure Pareto optimal policies
in the value vector space.

Comment: 9 pages, 5 figures, preprint for VALUETOOLS 201
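The simultaneous worst- and best-case analysis of a fixed policy in a bounded-parameter MDP can be sketched as follows. This is only an illustration of the two extreme scenarios the abstract mentions, not the paper's Pareto-enumeration method; all names and the toy numbers are invented.

```python
def extreme_dist(lo, hi, values, best):
    """Distribution within per-successor bounds that minimizes (worst case)
    or maximizes (best case) the expected value of `values`."""
    p = list(lo)
    slack = 1.0 - sum(lo)
    # Best case pours slack onto high-value successors; worst case onto low.
    for s in sorted(range(len(values)), key=lambda i: values[i], reverse=best):
        add = min(hi[s] - lo[s], slack)
        p[s] += add
        slack -= add
    return p

def evaluate_policy(policy, rewards, lo, hi, best=False, gamma=0.9, iters=300):
    """Worst- or best-case expected discounted reward of a fixed policy
    in a BMDP with interval transition probabilities."""
    n = len(policy)
    V = [0.0] * n
    for _ in range(iters):
        V = [rewards[s][policy[s]] + gamma * sum(
                 pr * V[t] for t, pr in enumerate(
                     extreme_dist(lo[s][policy[s]], hi[s][policy[s]], V, best)))
             for s in range(n)]
    return V

# Toy 2-state BMDP with a single action per state (interval-consistent bounds).
lo = [[[0.4, 0.3]], [[0.2, 0.5]]]
hi = [[[0.6, 0.7]], [[0.5, 0.8]]]
rewards = [[1.0], [0.0]]
V_worst = evaluate_policy([0, 0], rewards, lo, hi, best=False)
V_best = evaluate_policy([0, 0], rewards, lo, hi, best=True)
```

Plotting `(V_worst, V_best)` per policy gives the value vectors among which the paper's approaches search for pure Pareto optimal policies.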
Decision-Making Under Uncertainty: Beyond Probabilities
This position paper reflects on the state-of-the-art in decision-making under
uncertainty. A classical assumption is that probabilities can sufficiently
capture all uncertainty in a system. In this paper, the focus is on the
uncertainty that goes beyond this classical interpretation, particularly by
employing a clear distinction between aleatoric and epistemic uncertainty. The
paper features an overview of Markov decision processes (MDPs) and extensions
to account for partial observability and adversarial behavior. These models
sufficiently capture aleatoric uncertainty but fail to account for epistemic
uncertainty robustly. Consequently, we present a thorough overview of so-called
uncertainty models that exhibit uncertainty in a more robust interpretation. We
show several solution techniques for both discrete and continuous models,
ranging from formal verification, through control-based abstractions, to
reinforcement learning. As an integral part of this paper, we list and discuss
several key challenges that arise when dealing with rich types of uncertainty
in a model-based fashion.