Multi-Objective Model Checking of Markov Decision Processes
We study and provide efficient algorithms for multi-objective model checking
problems for Markov Decision Processes (MDPs). Given an MDP, M, and given
multiple linear-time (\omega-regular or LTL) properties \varphi_i, and
probabilities r_i \in [0,1], i = 1,...,k, we ask whether there exists a
strategy \sigma for the controller such that, for all i, the probability that a
trajectory of M controlled by \sigma satisfies \varphi_i is at least r_i. We
provide an algorithm that decides whether there exists such a strategy and if
so produces it, and which runs in time polynomial in the size of the MDP. Such
a strategy may require the use of both randomization and memory. We also
consider more general multi-objective \omega-regular queries, which we
motivate with an application to assume-guarantee compositional reasoning for
probabilistic systems.
Note that there can be trade-offs between different properties: satisfying
property \varphi_1 with high probability may necessitate satisfying \varphi_2
with low probability. Viewing this as a multi-objective optimization problem,
we want information about the "trade-off curve" or Pareto curve for maximizing
the probabilities of different properties. We show that one can compute an
approximate Pareto curve with respect to a set of \omega-regular properties in
time polynomial in the size of the MDP.
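The weighted-scalarization idea behind approximating a Pareto curve can be sketched on a toy set of achievable points; this is an illustration only, not the paper's LP-based algorithm over MDP strategies, and the candidate points are invented. Note that weighted sums recover only points on the convex hull of the achievable set, which matches the MDP setting, where randomized strategies make the achievable set convex.

```python
# Sketch: approximating a 2-objective Pareto curve by sweeping weight
# vectors over a finite set of candidate points.  Weighted sums find
# only convex-hull points; dominated or non-hull points never win.

def approximate_pareto(points, num_weights=11):
    """points: list of (p1, p2) achievable probability pairs."""
    frontier = set()
    for i in range(num_weights):
        w = i / (num_weights - 1)          # weight for objective 1
        best = max(points, key=lambda p: w * p[0] + (1 - w) * p[1])
        frontier.add(best)
    return sorted(frontier)                # sort by first objective

candidates = [(0.9, 0.1), (0.7, 0.5), (0.5, 0.6), (0.2, 0.8), (0.4, 0.4)]
print(approximate_pareto(candidates))
```

Refining the weight grid tightens the approximation; the paper shows this can be done in time polynomial in the MDP for any fixed accuracy.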
Our quantitative upper bounds use LP methods. We also study qualitative
multi-objective model checking problems, and we show that these can be analysed
by purely graph-theoretic methods, even though the strategies may still require
both randomization and memory.
Comment: 21 pages, 2 figures
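The LP approach can be illustrated on a toy two-objective reachability instance: an occupation-measure LP is feasible iff some (possibly randomized) strategy meets all probability thresholds. The tiny MDP, its state names, and this particular encoding are assumptions made for the sketch, not the paper's construction.

```python
# Sketch: deciding multi-objective reachability in a small MDP via a
# linear program over expected action counts ("occupation measures").
# Assumes every run eventually leaves the transient states.
from scipy.optimize import linprog

transient = ["s0"]                  # invented toy MDP
actions = {"s0": ["a", "b"]}
# P[(state, action)] = list of (successor, probability); t1, t2 absorb.
P = {("s0", "a"): [("t1", 1.0)], ("s0", "b"): [("t2", 1.0)]}
init = {"s0": 1.0}
targets = [{"t1"}, {"t2"}]          # one target set per objective
bounds_r = [0.4, 0.4]               # required probabilities r_i

sa = [(s, a) for s in transient for a in actions[s]]

# Flow conservation: out(s) - in(s) = init(s) for each transient s.
A_eq, b_eq = [], []
for s in transient:
    row = []
    for (s2, a) in sa:
        coef = 1.0 if s2 == s else 0.0
        coef -= sum(p for (succ, p) in P[(s2, a)] if succ == s)
        row.append(coef)
    A_eq.append(row)
    b_eq.append(init.get(s, 0.0))

# Objective i: mass entering target set i must be >= r_i, written as
# -inflow_i <= -r_i for linprog's A_ub convention.
A_ub, b_ub = [], []
for T, r in zip(targets, bounds_r):
    A_ub.append([-sum(p for (succ, p) in P[key] if succ in T) for key in sa])
    b_ub.append(-r)

res = linprog(c=[0.0] * len(sa), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(sa))
print("achievable:", res.success)
# A witness strategy randomizes: play a with prob x[s0,a] / (x[s0,a]+x[s0,b]).
```

Here (0.4, 0.4) is achievable by randomizing between a and b, whereas (0.6, 0.6) is not, since the two reachability probabilities must sum to at most 1.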
Ordered Preference Elicitation Strategies for Supporting Multi-Objective Decision Making
In multi-objective decision planning and learning, much attention is paid to
producing optimal solution sets that contain an optimal policy for every
possible user preference profile. We argue that the step that follows, i.e.,
determining which policy to execute by maximising the user's intrinsic utility
function over this (possibly infinite) set, is under-studied. This paper aims
to fill this gap. We build on previous work on Gaussian processes and pairwise
comparisons for preference modelling, extend it to the multi-objective decision
support scenario, and propose new ordered preference elicitation strategies
based on ranking and clustering. Our main contribution is an in-depth
evaluation of these strategies using computer and human-based experiments. We
show that our proposed elicitation strategies outperform the currently used
pairwise methods, and that users prefer ranking most. Our experiments
further show that utilising monotonicity information in GPs, via a linear
prior mean at the start and virtual comparisons to the nadir and ideal points,
increases performance. We demonstrate our decision support framework in a
real-world study on traffic regulation, conducted with the city of Amsterdam.
Comment: AAMAS 2018, Source code at
https://github.com/lmzintgraf/gp_pref_elici
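The elicitation loop's shape can be sketched with a deliberately simplified model: the paper fits a Gaussian process to pairwise comparisons, but as a stand-in, a linear utility w . v trained with a Bradley-Terry likelihood shows how comparison outcomes update a utility estimate. The comparison data and all names below are invented for the example.

```python
# Sketch: learning a user's utility over multi-objective policy values
# from pairwise comparisons, using a linear model with a Bradley-Terry
# likelihood (logistic link) fitted by gradient ascent.
import math

def fit_weights(comparisons, dim, lr=0.5, steps=2000):
    """comparisons: list of (winner_vector, loser_vector) pairs."""
    w = [0.0] * dim
    for _ in range(steps):
        for win, lose in comparisons:
            diff = [a - b for a, b in zip(win, lose)]
            score = sum(wi * di for wi, di in zip(w, diff))
            p = 1.0 / (1.0 + math.exp(-score))    # P(winner preferred)
            for i in range(dim):
                w[i] += lr * (1.0 - p) * diff[i]  # logistic gradient step
    return w

# Invented data: the user consistently favours objective 0.
data = [((0.8, 0.2), (0.4, 0.6)),
        ((0.6, 0.5), (0.3, 0.9)),
        ((0.7, 0.4), (0.5, 0.5))]
w = fit_weights(data, dim=2)
best = max([(0.8, 0.2), (0.4, 0.6), (0.5, 0.5)],
           key=lambda v: sum(wi * vi for wi, vi in zip(w, v)))
print("learned weights:", w, "recommended policy value:", best)
```

A GP replaces the linear model when the utility may be non-linear; the ranking and clustering strategies of the paper then choose which comparisons to ask for next.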
Multi-objective Robust Strategy Synthesis for Interval Markov Decision Processes
Interval Markov decision processes (IMDPs) generalise classical MDPs by
having interval-valued transition probabilities. They provide a powerful
modelling tool for probabilistic systems with an additional variation or
uncertainty that prevents the knowledge of the exact transition probabilities.
In this paper, we consider the problem of multi-objective robust strategy
synthesis for interval MDPs, where the aim is to find a robust strategy that
guarantees the satisfaction of multiple properties simultaneously, in the face
of the transition probability uncertainty. We first show that this problem is
PSPACE-hard. Then, we provide a value iteration-based decision algorithm to
approximate the Pareto set of achievable points. We finally demonstrate the
practical effectiveness of our proposed approaches by applying them on several
case studies using a prototypical tool.
Comment: This article is a full version of a paper accepted to the Conference
on Quantitative Evaluation of SysTems (QEST) 201
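The robust (max-min) Bellman backup at the heart of such value iteration can be sketched for a single-objective reachability value: the controller maximises over actions while "nature" picks the worst transition distribution inside the intervals, which reduces to greedily pushing probability mass toward low-value successors. The interval model below is invented for illustration.

```python
# Sketch: robust value iteration for an interval MDP, computing the
# max-min probability of reaching a goal state.

def worst_case_dist(intervals, values):
    """Pessimal distribution: fill low-value successors first.
    intervals: list of (lo, hi) per successor; values: current V."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    p = [lo for lo, _ in intervals]
    budget = 1.0 - sum(p)               # mass left above the lower bounds
    for i in order:
        extra = min(intervals[i][1] - p[i], budget)
        p[i] += extra
        budget -= extra
    return p

# states: 0 transient, 1 goal (value 1), 2 bad sink (value 0)
# T[state][action] = list of (successor, (lo, hi)) probability intervals
T = {0: {"safe":  [(1, (0.3, 0.5)), (2, (0.5, 0.7))],
         "risky": [(1, (0.1, 0.9)), (2, (0.1, 0.9))]}}
V = {0: 0.0, 1: 1.0, 2: 0.0}
for _ in range(100):                    # iterate the robust backup
    best = 0.0
    for succs in T[0].values():
        ivs = [iv for _, iv in succs]
        vals = [V[s] for s, _ in succs]
        p = worst_case_dist(ivs, vals)
        best = max(best, sum(pi * vi for pi, vi in zip(p, vals)))
    V[0] = best
print("robust reachability value at state 0:", V[0])
```

Here the "safe" action guarantees 0.3 even against the worst intervals, while "risky" guarantees only 0.1; the multi-objective algorithm of the paper runs backups of this kind against weighted combinations of objectives to approximate the Pareto set.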
Route Planning in Transportation Networks
We survey recent advances in algorithms for route planning in transportation
networks. For road networks, we show that one can compute driving directions in
milliseconds or less even at continental scale. A variety of techniques provide
different trade-offs between preprocessing effort, space requirements, and
query time. Some algorithms can answer queries in a fraction of a microsecond,
while others can deal efficiently with real-time traffic. Journey planning on
public transportation systems, although conceptually similar, is a
significantly harder problem due to its inherent time-dependent and
multicriteria nature. Although exact algorithms are fast enough for interactive
queries on metropolitan transit systems, dealing with continent-sized instances
requires simplifications or heavy preprocessing. The multimodal route planning
problem, which seeks journeys combining schedule-based transportation (buses,
trains) with unrestricted modes (walking, driving), is even harder, relying on
approximate solutions even for metropolitan inputs.
Comment: This is an updated version of the technical report MSR-TR-2014-4,
previously published by Microsoft Research. This work was mostly done while
the authors Daniel Delling, Andrew Goldberg, and Renato F. Werneck were at
Microsoft Research Silicon Valle
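The baseline all the surveyed road-network techniques accelerate is Dijkstra's algorithm; preprocessing-based methods such as contraction hierarchies and hub labels answer the same queries orders of magnitude faster. A textbook sketch on an invented toy graph:

```python
# Sketch: plain Dijkstra shortest-path query, the baseline that the
# surveyed speed-up techniques improve on via preprocessing.
import heapq

def dijkstra(graph, source, target):
    """graph: {node: [(neighbour, edge_weight), ...]}"""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

roads = {"A": [("B", 4), ("C", 2)],
         "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)],
         "D": []}
print(dijkstra(roads, "A", "D"))   # shortest route A -> C -> B -> D
```

Public-transit and multimodal journey planning cannot be reduced to a single such scalar query, which is why the survey treats them as harder, inherently time-dependent and multicriteria problems.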
Quantitative multi-objective verification for probabilistic systems
We present a verification framework for analysing multiple quantitative objectives of systems that exhibit both nondeterministic and stochastic behaviour. These systems are modelled as probabilistic automata, enriched with cost or reward structures that capture, for example, energy usage or performance metrics. Quantitative properties of these models are expressed in a specification language that incorporates probabilistic safety and liveness properties, expected total cost or reward, and supports multiple objectives of these types. We propose and implement an efficient verification framework for such properties and then present two distinct applications of it: firstly, controller synthesis subject to multiple quantitative objectives; and, secondly, quantitative compositional verification. The practical applicability of both approaches is illustrated with experimental results from several large case studies.
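One of the objective types named above, expected total cost, can be evaluated by fixed-point iteration once a strategy is fixed and the model collapses to a Markov chain. The chain, state names, and costs below are invented for the example.

```python
# Sketch: expected total cost until absorption in a small Markov chain
# (the model induced by fixing a strategy in a probabilistic automaton).

def expected_total_cost(P, cost, absorbing, iters=1000):
    """P: {state: [(successor, prob), ...]}; V is 0 at absorbing states."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        for s in P:
            if s in absorbing:
                continue
            V[s] = cost[s] + sum(p * V[t] for t, p in P[s])
    return V

P = {"work": [("work", 0.5), ("done", 0.5)],
     "done": [("done", 1.0)]}
cost = {"work": 2.0, "done": 0.0}
V = expected_total_cost(P, cost, absorbing={"done"})
print(V["work"])    # geometric number of visits: 2 / 0.5 = 4.0
```

Multi-objective queries combine several such values (and probabilistic safety/liveness bounds) into one satisfiability question over strategies, which the framework answers with linear programming.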
Multi-Information Source Fusion and Optimization to Realize ICME: Application to Dual Phase Materials
Integrated Computational Materials Engineering (ICME) calls for the
integration of computational tools into the materials and parts development
cycle, while the Materials Genome Initiative (MGI) calls for the acceleration
of the materials development cycle through the combination of experiments,
simulation, and data. As they stand, both ICME and MGI do not prescribe how to
achieve the necessary tool integration or how to efficiently exploit the
computational tools, in combination with experiments, to accelerate the
development of new materials and materials systems. This paper addresses the
first issue by putting forward a framework for the fusion of information that
exploits correlations among sources/models and between the sources and 'ground
truth'. The second issue is addressed through a multi-information source
optimization framework that identifies, given current knowledge, the next best
information source to query and where in the input space to query it via a
novel value-gradient policy. The querying decision takes into account the
ability to learn correlations between information sources, the resource cost of
querying an information source, and what a query is expected to provide in
terms of improvement over the current state. The framework is demonstrated on
the optimization of a dual-phase steel to maximize its strength-normalized
strain hardening rate. The ground truth is represented by a
microstructure-based finite element model, while three low-fidelity information
sources (reduced-order models based on different homogenization assumptions:
isostrain, isostress, and isowork) are used to efficiently and optimally query
the materials design space.
Comment: 19 pages, 11 figures, 5 tables