Logic and model checking for hidden Markov models
The branching-time temporal logic PCTL* has been introduced to specify quantitative properties over probabilistic systems, such as discrete-time Markov chains. Until now, however, no logic has been defined to specify properties over hidden Markov models (HMMs). In an HMM the states are hidden, and the hidden process produces a sequence of observations. In this paper we extend the logic PCTL* to POCTL*. With our logic one can state properties over HMMs such as "there is at least a 90 percent probability that the model produces a given sequence of observations". Subsequently, we give model checking algorithms for POCTL* over HMMs.
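The headline quantity in the abstract above, the probability that an HMM produces a given observation sequence, is computed by the standard forward algorithm. A minimal sketch (the toy model and all numbers are illustrative, not taken from the paper):

```python
def forward_probability(init, trans, emit, observations):
    """P(observations | HMM) via the forward algorithm.

    init[s]     : initial probability of hidden state s
    trans[s][t] : probability of moving from hidden state s to t
    emit[s][o]  : probability that hidden state s emits observation o
    """
    n = len(init)
    # alpha[s] = P(o_1..o_k, hidden state after k steps is s)
    alpha = [init[s] * emit[s][observations[0]] for s in range(n)]
    for o in observations[1:]:
        alpha = [
            emit[t][o] * sum(alpha[s] * trans[s][t] for s in range(n))
            for t in range(n)
        ]
    return sum(alpha)

# Hypothetical two-state HMM with two observation symbols.
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
p = forward_probability(init, trans, emit, [0, 1, 0])
```

A POCTL*-style property such as "with probability at least 0.9 the model produces this sequence" would then compare `p` against the bound 0.9.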
Automated Experiment Design for Data-Efficient Verification of Parametric Markov Decision Processes
We present a new method for statistical verification of quantitative properties over a partially unknown system with actions, utilising a parameterised model (in this work, a parametric Markov decision process) and data collected from experiments performed on the underlying system. We obtain the confidence that the underlying system satisfies a given property, and show that the method uses data efficiently and is thus robust to the amount of data available. These characteristics are achieved by, firstly, exploiting parameter synthesis to establish a feasible set of parameters for which the underlying system satisfies the property; secondly, actively synthesising experiments to increase the amount of information in the collected data that is relevant to the property; and finally, propagating this information over the model parameters, obtaining a confidence that reflects our belief as to whether or not the system parameters lie in the feasible set, thereby solving the verification problem.
Comment: QEST 2017, 18 pages, 7 figures
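The final step described above, propagating experiment data into a confidence that the parameters lie in the feasible set, can be sketched for a single Bernoulli-distributed parameter. This is a hedged illustration under assumed inputs: the feasible interval, experiment counts, and uniform Beta prior below are not the paper's setup.

```python
def beta_cdf(x, a, b, steps=10_000):
    """Numerically integrate the Beta(a, b) density on [0, x].

    Simple midpoint rule on the unnormalised density; adequate
    for an illustration (a statistics library would do this exactly).
    """
    norm = 0.0
    mass = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        density = t ** (a - 1) * (1 - t) ** (b - 1)
        norm += density
        if t <= x:
            mass += density
    return mass / norm

def confidence(successes, trials, lo, hi):
    """Posterior P(lo <= p <= hi) under a uniform Beta(1, 1) prior.

    The returned mass inside the feasible interval [lo, hi] plays
    the role of the verification confidence.
    """
    a, b = 1 + successes, 1 + trials - successes
    return beta_cdf(hi, a, b) - beta_cdf(lo, a, b)

# Hypothetical: parameter synthesis gave the feasible set [0.8, 1.0],
# and 18 of 20 experiment runs satisfied the relevant condition.
c = confidence(successes=18, trials=20, lo=0.8, hi=1.0)
```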
Data-driven and Model-based Verification: a Bayesian Identification Approach
This work develops a measurement-driven and model-based formal verification approach, applicable to systems with partly unknown dynamics. We provide a principled method, grounded in reachability analysis and Bayesian inference, to compute the confidence that a physical system, driven by external inputs and accessed under noisy measurements, satisfies a temporal logic property. A case study is discussed, in which we investigate the bounded- and unbounded-time safety of a partly unknown linear time-invariant system.
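As a one-dimensional stand-in for this setting (the system, posterior, and safe set below are assumptions for illustration, not the paper's case study): suppose the scalar system x_{k+1} = a * x_k has an unknown gain `a`, safety holds exactly when |a| < 1, and noisy measurements yield a Gaussian posterior over `a`. The verification confidence is then the posterior mass on the safe parameter set.

```python
from math import erf, sqrt

def gaussian_mass(mean, std, lo, hi):
    """P(lo <= a <= hi) for a ~ Normal(mean, std**2)."""
    cdf = lambda x: 0.5 * (1.0 + erf((x - mean) / (std * sqrt(2.0))))
    return cdf(hi) - cdf(lo)

# Hypothetical posterior from identification on noisy measurement data.
posterior_mean, posterior_std = 0.92, 0.05

# Confidence that the partly unknown system is safe, i.e. that |a| < 1.
conf = gaussian_mass(posterior_mean, posterior_std, -1.0, 1.0)
```

More data shrinks `posterior_std`, and the confidence moves towards 0 or 1 accordingly, which is the sense in which the verdict is measurement-driven.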
An Overview of Modest Models and Tools for Real Stochastic Timed Systems
We depend on the safe, reliable, and timely operation of cyber-physical systems ranging from smart grids to avionics components. Many of them involve time-dependent behaviours and are subject to randomness, so modelling languages and verification tools need to support these quantitative aspects. In my invited presentation at MARS 2022, I gave an introduction to quantitative verification using the Modest modelling language and the Modest Toolset, and highlighted three recent case studies with increasing demands on model expressiveness and tool capabilities: a case of power supply noise in a network-on-chip modelled as a Markov chain; a case of message routing in satellite constellations that uses Markov decision processes with distributed information; and a case of optimising an attack on Bitcoin via Markov automata model checking. This paper summarises the presentation.
Comment: In Proceedings MARS 2022, arXiv:2203.0929
Robust Control of Uncertain Markov Decision Processes with Temporal Logic Specifications
We present a method for designing robust controllers for dynamical systems with linear temporal logic specifications. We abstract the original system by a finite Markov decision process (MDP) whose transition probabilities lie in a specified uncertainty set. A robust control policy for the MDP is generated that maximizes the worst-case probability of satisfying the specification over all transition probabilities in the uncertainty set. To do this, we use a procedure from probabilistic model checking to combine the system model with an automaton representing the specification. This product MDP is then transformed into an equivalent form that satisfies the assumptions of stochastic shortest path dynamic programming. A robust version of dynamic programming allows us to solve for an ε-suboptimal robust control policy with time complexity O(log(1/ε)) times that of the non-robust case. We then implement this control policy on the original dynamical system.
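The worst-case maximisation over an uncertainty set can be sketched with interval-valued transition probabilities (a toy model, not the paper's implementation): the controller maximises over actions, while for interval uncertainty sets the adversary's inner minimisation has a simple greedy solution that pushes probability mass towards low-value successors.

```python
def worst_case_distribution(intervals, values):
    """Pick probabilities within the given [lo, hi] intervals, summing
    to 1, that minimise the expected value: start every successor at
    its lower bound, then assign the remaining mass greedily to the
    successors with the lowest values."""
    probs = [lo for lo, hi in intervals]
    slack = 1.0 - sum(probs)
    for i in sorted(range(len(intervals)), key=lambda i: values[i]):
        lo, hi = intervals[i]
        add = min(hi - lo, slack)
        probs[i] += add
        slack -= add
    return probs

def robust_reach_prob(actions, goal, n_states, iters=200):
    """Robust value iteration for the worst-case probability of
    reaching `goal`. actions[s] is a list of interval distributions
    (one per action); each is a list of (lo, hi) per successor."""
    v = [1.0 if s == goal else 0.0 for s in range(n_states)]
    for _ in range(iters):
        new_v = []
        for s in range(n_states):
            if s == goal:
                new_v.append(1.0)
                continue
            best = 0.0
            for dist in actions[s]:
                p = worst_case_distribution(dist, v)
                best = max(best, sum(pi * vi for pi, vi in zip(p, v)))
            new_v.append(best)
        v = new_v
    return v

# Hypothetical 3-state MDP: 0 (start), 1 (goal), 2 (absorbing sink).
actions = {
    0: [[(0.0, 0.1), (0.6, 0.9), (0.1, 0.4)]],  # stay, goal, sink
    2: [[(0.0, 0.0), (0.0, 0.0), (1.0, 1.0)]],  # sink loops forever
}
v = robust_reach_prob(actions, goal=1, n_states=3)
```

From the start state, the adversary drives the goal probability down to its interval lower bound of 0.6, so the robust policy can only guarantee that worst-case value.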
- …