Probabilistic Bisimulations for PCTL Model Checking of Interval MDPs
Verification of PCTL properties of MDPs with convex uncertainties has been
investigated recently by Puggelli et al. However, model checking algorithms
typically suffer from state space explosion. In this paper, we apply
probabilistic bisimulation to reduce the size of such MDPs while preserving
the PCTL properties they satisfy. We discuss the different interpretations of
uncertainty in the models studied in the literature, which result
in two different definitions of bisimulations. We give algorithms to compute
the quotients of these bisimulations in time polynomial in the size of the
model and exponential in the uncertain branching. Finally, we show by a case
study that large models in practice can have small branching and that a
substantial state space reduction can be achieved by our approach.
Comment: In Proceedings SynCoP 2014, arXiv:1403.784
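The quotienting idea in this abstract can be illustrated with a minimal sketch. The code below applies signature-based partition refinement to an ordinary (non-interval) MDP; the paper's algorithms additionally handle interval uncertainty, and all identifiers here are illustrative, not taken from the paper.

```python
# Hedged sketch: signature-based partition refinement computing a
# probabilistic bisimulation quotient of a plain (non-interval) MDP.

def refine(states, labels, trans):
    """
    states: iterable of states
    labels: dict state -> atomic-proposition label
    trans:  dict (state, action) -> dict successor -> probability
    Returns a partition (list of frozensets) closed under bisimulation.
    """
    # Initial partition: group states by their label.
    blocks = {}
    for s in states:
        blocks.setdefault(labels[s], set()).add(s)
    partition = [frozenset(b) for b in blocks.values()]

    def block_of(s, partition):
        for i, b in enumerate(partition):
            if s in b:
                return i

    def signature(s, partition):
        # For each action of s, lift the successor distribution to blocks.
        sig = set()
        for (state, a), dist in trans.items():
            if state != s:
                continue
            lifted = {}
            for t, p in dist.items():
                i = block_of(t, partition)
                lifted[i] = lifted.get(i, 0.0) + p
            sig.add((a, frozenset(lifted.items())))
        return frozenset(sig)

    while True:
        # Split blocks by (current block, lifted-distribution signature).
        new = {}
        for s in states:
            key = (block_of(s, partition), signature(s, partition))
            new.setdefault(key, set()).add(s)
        new_partition = [frozenset(b) for b in new.values()]
        if len(new_partition) == len(partition):
            return new_partition
        partition = new_partition
```

Model checking can then be performed on the (often much smaller) quotient, with one state per block, which is the state space reduction the abstract refers to.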
Multi-objective Robust Strategy Synthesis for Interval Markov Decision Processes
Interval Markov decision processes (IMDPs) generalise classical MDPs by
having interval-valued transition probabilities. They provide a powerful
modelling tool for probabilistic systems with an additional variation or
uncertainty that prevents the knowledge of the exact transition probabilities.
In this paper, we consider the problem of multi-objective robust strategy
synthesis for interval MDPs, where the aim is to find a robust strategy that
guarantees the satisfaction of multiple properties simultaneously, in the face of
the transition probability uncertainty. We first show that this problem is
PSPACE-hard. Then, we provide a value iteration-based decision algorithm to
approximate the Pareto set of achievable points. We finally demonstrate the
practical effectiveness of our proposed approaches by applying them on several
case studies using a prototypical tool.
Comment: This article is a full version of a paper accepted to the Conference
on Quantitative Evaluation of SysTems (QEST) 201
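The robust value-iteration step described in this abstract can be illustrated on the single-objective case. The sketch below assumes reachability objectives on a small IMDP and resolves each interval transition adversarially by a greedy mass allocation; the identifiers are hypothetical and not from the paper's prototype tool.

```python
# Hedged sketch: robust value iteration for worst-case maximal reachability
# on an interval MDP. The controller maximizes over actions; an adversarial
# "nature" resolves each transition probability within its interval.

def worst_case_expectation(intervals, values):
    """intervals: list of (succ, lo, hi); values: dict succ -> value.
    Minimize sum p_i * values[succ_i] over distributions with
    lo_i <= p_i <= hi_i and sum p_i = 1 (assumed feasible)."""
    slack = 1.0 - sum(lo for _, lo, _ in intervals)
    total = sum(lo * values[s] for s, lo, _ in intervals)
    # Greedy: push the remaining mass onto the cheapest successors first.
    for s, lo, hi in sorted(intervals, key=lambda t: values[t[0]]):
        add = min(hi - lo, slack)
        total += add * values[s]
        slack -= add
        if slack <= 1e-12:
            break
    return total

def robust_value_iteration(states, goal, actions, eps=1e-8):
    """actions: dict state -> list of interval distributions, each a list
    of (succ, lo, hi). Returns worst-case max reachability probabilities."""
    v = {s: (1.0 if s in goal else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goal:
                continue
            best = max(worst_case_expectation(dist, v) for dist in actions[s])
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            return v
```

The multi-objective algorithm in the paper layers a Pareto-set approximation on top of backups of this kind; this sketch only shows the robust inner step.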
Data-driven and Model-based Verification: a Bayesian Identification Approach
This work develops a measurement-driven and model-based formal verification
approach, applicable to systems with partly unknown dynamics. We provide a
principled method, grounded on reachability analysis and on Bayesian inference,
to compute the confidence that a physical system driven by external inputs and
accessed under noisy measurements, satisfies a temporal logic property. A case
study is discussed, where we investigate the bounded- and unbounded-time safety
of a partly unknown linear time-invariant system.
Robust Control of Uncertain Markov Decision Processes with Temporal Logic Specifications
We present a method for designing robust controllers for dynamical systems
with linear temporal logic specifications. We abstract the original system by
a finite Markov decision process (MDP) whose transition probabilities lie in
a specified uncertainty set. A robust control policy for the MDP is generated
that maximizes the worst-case probability of satisfying the specification
over all transition probabilities in the uncertainty set. To do this, we use
a procedure from probabilistic model checking to combine the system model
with an automaton representing the specification. This new MDP is then
transformed into an equivalent form that satisfies assumptions for stochastic
shortest path dynamic programming. A robust version of dynamic programming
allows us to solve for an ε-suboptimal robust control policy with time
complexity proportional to that for the non-robust case. We then implement
this control policy on the original dynamical system.
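The automaton-product step mentioned in this abstract is standard in probabilistic model checking. The following minimal sketch (with hypothetical identifiers, not the paper's implementation) shows how an MDP is composed with a deterministic automaton that reads the labels of visited states.

```python
# Hedged sketch: product of an MDP with a deterministic automaton for the
# specification. Product states are pairs (mdp_state, automaton_state).

def product_mdp(mdp_trans, labels, dfa_delta, dfa_init, mdp_init):
    """
    mdp_trans: dict (s, a) -> dict s' -> prob
    labels:    dict s -> label observed by the automaton
    dfa_delta: dict (q, label) -> q'
    Returns the reachable product transitions and the initial product state.
    """
    init = (mdp_init, dfa_init)
    prod = {}
    frontier = [init]
    seen = {init}
    while frontier:
        s, q = frontier.pop()
        for (s0, a), dist in mdp_trans.items():
            if s0 != s:
                continue
            # The automaton moves on the label of each successor state.
            pdist = {}
            for s2, p in dist.items():
                q2 = dfa_delta[(q, labels[s2])]
                pdist[(s2, q2)] = pdist.get((s2, q2), 0.0) + p
            prod[((s, q), a)] = pdist
            for t in pdist:
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return prod, init
```

Reaching an accepting automaton component in the product corresponds to satisfying the specification, which reduces the synthesis task to the (robust) reachability problem the abstract solves by dynamic programming.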
Automated Experiment Design for Data-Efficient Verification of Parametric Markov Decision Processes
We present a new method for statistical verification of quantitative
properties over a partially unknown system with actions, utilising a
parameterised model (in this work, a parametric Markov decision process) and
data collected from experiments performed on the underlying system. We obtain
the confidence that the underlying system satisfies a given property, and show
that the method uses data efficiently and thus is robust to the amount of data
available. These characteristics are achieved by firstly exploiting parameter
synthesis to establish a feasible set of parameters for which the underlying
system will satisfy the property; secondly, by actively synthesising
experiments to increase the amount of information in the collected data that
is relevant to the property; and finally, by propagating this information over the
model parameters, obtaining a confidence that reflects our belief about
whether the system parameters lie in the feasible set, thereby solving the
verification problem.
Comment: QEST 2017, 18 pages, 7 figures
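The final confidence-propagation step can be illustrated in a simplified setting: assuming the feasible set from parameter synthesis is an interval for a single Bernoulli parameter and the experiments yield Bernoulli outcomes, the confidence is the posterior mass that a Beta distribution (uniform prior) places on that interval. The function names below are invented for illustration.

```python
# Hedged sketch: Bayesian confidence that a Bernoulli parameter lies in a
# feasible interval, given experiment data, via a Beta posterior.
import math

def beta_pdf(x, a, b):
    """Density of Beta(a, b), using log-gamma for numerical stability."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def confidence(feasible_lo, feasible_hi, successes, trials, n=10000):
    """Posterior mass of Beta(1 + successes, 1 + failures) on the feasible
    interval, computed by simple trapezoidal integration."""
    a, b = 1 + successes, 1 + trials - successes
    h = (feasible_hi - feasible_lo) / n
    total = 0.0
    for i in range(n + 1):
        x = feasible_lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * beta_pdf(x, a, b)
    return total * h
```

In this toy setting, more experiments concentrate the posterior, so the confidence approaches 0 or 1 depending on whether the true parameter lies in the feasible set, matching the data-efficiency claim in the abstract.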
Formal Methods for Autonomous Systems
Formal methods refer to rigorous, mathematical approaches to system
development and have played a key role in establishing the correctness of
safety-critical systems. The main building blocks of formal methods are models
and specifications, which are analogous to behaviors and requirements in system
design and give us the means to verify and synthesize system behaviors with
formal guarantees.
This monograph provides a survey of the current state of the art on
applications of formal methods in the autonomous systems domain. We consider
correct-by-construction synthesis under various formulations, including
closed-system, reactive, and probabilistic settings. Beyond synthesizing systems in
known environments, we address the concept of uncertainty and bound the
behavior of systems that employ learning using formal methods. Further, we
examine the synthesis of systems with monitoring, a mitigation technique for
ensuring that once a system deviates from expected behavior, it knows a way of
returning to normalcy. We also show how to overcome some limitations of formal
methods themselves with learning. We conclude with future directions for formal
methods in reinforcement learning, uncertainty, privacy, explainability of
formal methods, and regulation and certification.