Formal Abstractions for Automated Verification and Synthesis of Stochastic Systems
Stochastic hybrid systems involve the coupling of discrete, continuous, and probabilistic phenomena, in which the composition of continuous and discrete variables captures the behavior of physical systems interacting with digital, computational devices. Because of their versatility and generality, methods for modeling, analysis, and verification of stochastic hybrid systems (SHS) have proved invaluable in a wide range of applications, including biology, smart grids, air traffic control, finance, and automotive systems. The problems of verification and controller synthesis over SHS can be studied algorithmically using methodologies and tools developed in computer science, utilizing suitable symbolic models that describe the overall behavior of the SHS. A promising direction for formal verification and synthesis against complex logic specifications, such as PCTL and BLTL, is the use of abstractions with finitely many states. This thesis is devoted to formal abstractions for verification and synthesis of SHS, bridging the gap between stochastic analysis, computer science, and control engineering. An SHS is first considered as a discrete-time Markov process over a general state space, then abstracted as a finite-state Markov chain to be formally verified against the desired specification. We generate finite abstractions of general state-space Markov processes based on a partitioning of the state space, which yields a Markov chain approximating the original process. We put forward a novel adaptive and sequential gridding algorithm based on non-uniform quantization of the state space that is expected to conform to the underlying dynamics of the model and thus to mitigate the curse of dimensionality inherent in the partitioning procedure. PCTL and BLTL properties are defined over trajectories of a system; examples of such properties are probabilistic safety and reach-avoid specifications.
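The grid-based abstraction described above can be illustrated with a minimal sketch. Assuming, purely for illustration, a one-dimensional linear process x' = a·x + w with Gaussian noise w ~ N(0, σ²) restricted to an interval, the transition probability from a cell's representative point to another cell is the kernel mass that falls inside that cell. The dynamics, parameter values, and function names below are mine; the thesis treats general state-space processes and adaptive, non-uniform grids rather than this uniform one.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def abstract_markov_chain(a, sigma, lo, hi, n):
    """Abstract x' = a*x + w, w ~ N(0, sigma^2), over [lo, hi], as an
    n-state Markov chain on uniform cells with center representatives.
    Rows may sum to less than 1: mass leaving [lo, hi] is dropped, so
    the chain is sub-stochastic."""
    width = (hi - lo) / n
    reps = [lo + (i + 0.5) * width for i in range(n)]  # cell centers
    P = [[0.0] * n for _ in range(n)]
    for i, x in enumerate(reps):
        mean = a * x  # one-step mean of the kernel at the representative
        for j in range(n):
            cell_lo = lo + j * width
            # mass of N(mean, sigma^2) falling inside cell j
            P[i][j] = (normal_cdf((cell_lo + width - mean) / sigma)
                       - normal_cdf((cell_lo - mean) / sigma))
    return reps, P
```

Refining the grid (larger n) shrinks the piecewise-constant approximation error at the cost of a larger chain, which is exactly the tension the adaptive gridding algorithm addresses.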
While the developed techniques are applicable to a wide range of probabilistic properties, the thesis focuses on a particular specification: probabilistic safety, or invariance, over a finite horizon. The abstraction of controlled discrete-time Markov processes to Markov decision processes over finite sets of states is also studied. The proposed abstraction scheme enables us to obtain a maximally safe Markov policy for the Markov decision process and to synthesize a control policy for the original model. We quantify the total error, which is due to the abstraction procedure and to exporting the result back to the original process. The abstraction error hinges on the regularity of the stochastic kernel of the process, i.e., its Lipschitz continuity. Furthermore, this thesis extends the results in the following directions: 1) Partially degenerate stochastic processes exhibit a non-smooth probabilistic evolution of states. The stochastic kernel of such processes does not satisfy the Lipschitz continuity assumptions, which requires developing new techniques specialized for this class of processes. We show that the probabilistic invariance problem over such processes can be separated into two parts: a deterministic reachability analysis, and a probabilistic invariance problem that depends on the outcome of the first. This decomposition leads to computational improvements. 2) The abstraction approach has leveraged piecewise-constant interpolations of the stochastic kernel of the process. We extend this approach to systems with higher degrees of smoothness in their probabilistic evolution and provide approximation methods via higher-order interpolations, which aim to require less computational effort. Using higher-order interpolations (versus piecewise-constant ones) can be beneficial in terms of obtaining tighter bounds on the approximation error.
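On the abstract chain, finite-horizon probabilistic invariance reduces to a backward recursion. The sketch below assumes the convention that the abstract states are exactly the safe cells and that any probability mass leaving the partition counts as leaving the safe set (so a sub-stochastic transition matrix suffices); the function name is mine.

```python
def probabilistic_invariance(P, horizon):
    """Probability of remaining in the safe set for `horizon` steps,
    starting from each abstract state. P is a (possibly sub-stochastic)
    transition matrix over the safe cells; missing row mass is the
    one-step probability of leaving the safe set."""
    n = len(P)
    p = [1.0] * n  # terminal condition: p_N = 1 on the safe set
    for _ in range(horizon):
        # one backward step: p_k(i) = sum_j P[i][j] * p_{k+1}(j)
        p = [sum(P[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p
```

For a two-cell chain with row sums 0.9 (i.e., 0.1 probability of escaping per step), the safety probability after k steps is 0.9^k from either cell, which the recursion reproduces.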
Furthermore, since the approximation procedures depend on the partitioning of the state space, higher-order schemes display an interesting tradeoff between more parsimonious representations and more complex local computations. From the application point of view, an example of an SHS is the model of thermostatically controlled loads (TCLs), which captures the evolution of temperature inside a building. This thesis proposes a new, formal two-step abstraction procedure to generate a finite stochastic dynamical model as the aggregation of the dynamics of a population of TCLs. The approach relaxes the limiting assumptions employed in the literature by providing a model based on the natural probabilistic evolution of the single TCL temperature. We also describe a dynamical model for the time evolution of the abstraction, and develop a set-point control strategy aimed at reference tracking over the total power consumption of the TCL population. The abstraction algorithms discussed in this thesis have been implemented in the MATLAB tool FAUST2 (an abbreviation of "Formal Abstractions of Uncountable-STate STochastic processes"). The software is freely available for download at http://sourceforge.net/projects/faust2/.
Delft Center for Systems and Control, Mechanical, Maritime and Materials Engineering
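The single-TCL dynamics referenced above are commonly modeled in the literature as a first-order thermal system with a hysteresis (deadband) thermostat. The sketch below follows that standard model for a cooling load; all parameter values are illustrative, not the ones used in the thesis.

```python
import math, random

def simulate_tcl(steps, h=10 / 3600.0, C=10.0, R=2.0, Prate=14.0,
                 theta_a=32.0, setpoint=20.0, deadband=0.5,
                 sigma=0.0, seed=0):
    """Simulate one cooling TCL with deadband control.
    h: time step [h]; C: thermal capacitance [kWh/degC];
    R: thermal resistance [degC/kW]; Prate: cooling power [kW];
    theta_a: ambient temperature [degC]; sigma: process-noise std.
    Returns a list of (temperature, mode) pairs; mode 1 = compressor ON."""
    rng = random.Random(seed)
    a = math.exp(-h / (C * R))  # per-step thermal decay factor
    theta, mode = setpoint, 0
    traj = []
    for _ in range(steps):
        # first-order thermal update plus additive Gaussian noise
        theta = (a * theta + (1 - a) * (theta_a - mode * R * Prate)
                 + rng.gauss(0.0, sigma))
        # thermostat: switch at the edges of the deadband
        if theta > setpoint + deadband / 2:
            mode = 1
        elif theta < setpoint - deadband / 2:
            mode = 0
        traj.append((theta, mode))
    return traj
```

With the noise switched off, the temperature cycles within the deadband around the set point; the thesis's abstraction replaces such continuous-state trajectories of a whole population with a finite stochastic aggregate model.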
Achievements in correct-by-design control for stochastic systems
Discrete-time stochastic systems are an essential modeling tool for many engineering systems. The direct synthesis of controllers for guaranteeing temporal properties encoded as finite automata has gained significant attention. Still, for many physical systems, computational issues make the synthesis problematic due to the inherent continuity and complexity of the state space. This extended abstract summarises achievements made in correct-by-design robust controller synthesis of stochastic systems using approximate simulation relations.
Multilevel Monte Carlo Method for Statistical Model Checking of Hybrid Systems
We study statistical model checking of continuous-time stochastic hybrid systems. The challenge in applying statistical model checking to these systems is that one cannot simulate such systems exactly. We employ the multilevel Monte Carlo (MLMC) method and work on a sequence of discrete-time stochastic processes whose executions approximate and converge weakly to that of the original continuous-time stochastic hybrid system with respect to satisfaction of the property of interest. With a focus on bounded-horizon reachability, we recast the model checking problem as the computation of the distribution of the exit time, which is in turn formulated as the expectation of an indicator function. This latter computation involves estimating discontinuous functionals, which reduces the bound on the convergence rate of the Monte Carlo algorithm. We propose a smoothing step with tunable precision and formally quantify the error of the MLMC approach in the mean-square sense, which is composed of smoothing error, bias, and variance. We formulate a general adaptive algorithm which balances these error terms. Finally, we describe an application of our technique to verify a model of thermostatically controlled loads.
Comment: Accepted at the 14th International Conference on Quantitative Evaluation of Systems (QEST), 2017.
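The combination of level-coupled sampling and indicator smoothing can be sketched as follows. As a stand-in for the hybrid system, the example uses a one-dimensional Ornstein-Uhlenbeck process discretized by Euler-Maruyama, and replaces the discontinuous indicator 1{X_T ≤ barrier} with a logistic surrogate of tunable width. The process, parameter values, and function names are all mine, chosen only to make the MLMC structure concrete.

```python
import math, random

def mlmc_reach_probability(levels=4, n0=4000, T=1.0, x0=1.0,
                           theta=1.0, sigma=0.5, barrier=0.5,
                           eps=0.05, seed=1):
    """MLMC estimate of P(X_T <= barrier) for dX = -theta*X dt + sigma dW,
    using a smoothed indicator with smoothing width eps."""
    rng = random.Random(seed)

    def payoff(x):
        # logistic smoothing of the indicator 1{x <= barrier}
        z = min((x - barrier) / eps, 60.0)  # guard against overflow
        return 1.0 / (1.0 + math.exp(z))

    def coupled_paths(level):
        """One Euler-Maruyama path at this level's fine step size, plus the
        coupled coarse path (half the steps, same Brownian increments)."""
        nf = 4 * 2 ** level
        hf = T / nf
        hc = 2.0 * hf
        xf = xc = x0
        dw_pair = 0.0
        for i in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            xf += -theta * xf * hf + sigma * dw
            dw_pair += dw
            if i % 2 == 1:  # every two fine steps, take one coarse step
                xc += -theta * xc * hc + sigma * dw_pair
                dw_pair = 0.0
        return xf, xc

    # telescoping sum: E[g_0] + sum_l E[g_l - g_{l-1}]
    est = 0.0
    for level in range(levels):
        n_l = max(n0 // 2 ** level, 100)  # fewer samples on finer levels
        acc = 0.0
        for _ in range(n_l):
            xf, xc = coupled_paths(level)
            acc += payoff(xf) if level == 0 else payoff(xf) - payoff(xc)
        est += acc / n_l
    return est
```

The coupling (shared Brownian increments between fine and coarse paths) keeps the variance of the level differences small, which is what lets the finer, more expensive levels get by with fewer samples; the smoothing width eps trades the smoothing error off against the degraded convergence rate of the raw indicator.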
Verification of general Markov decision processes by approximate similarity relations and policy refinement
In this work we introduce new approximate similarity relations that are shown to be key for policy (or control) synthesis over general Markov decision processes. The models of interest are discrete-time Markov decision processes, endowed with uncountably infinite state spaces and metric output (or observation) spaces. The new relations, underpinned by the use of metrics, allow, in particular, for a useful trade-off between deviations over probability distributions on states, and distances between model outputs. We show that the new probabilistic similarity relations, inspired by a notion of simulation developed for finite-state models, can be effectively employed over general Markov decision processes for verification purposes, and specifically for control refinement from abstract models.
Dynamic Bayesian networks for formal verification of structured stochastic processes
We study the problem of finite-horizon probabilistic invariance for discrete-time Markov processes over general (uncountable) state spaces. We compute discrete-time, finite-state Markov chains as formal abstractions of the given Markov processes. Our abstraction differs from existing approaches in two ways: first, we exploit the structure of the underlying Markov process to compute the abstraction separately for each dimension; second, we employ dynamic Bayesian networks (DBNs) as compact representations of the abstraction. In contrast, approaches which represent and store the (exponentially large) Markov chain explicitly incur significantly higher memory requirements. In our experiments, explicit representations scaled only to models of less than half the dimension of those analyzable via DBN representations. We show how to construct a DBN abstraction of a Markov process satisfying an independence assumption on the driving process noise. We compute a guaranteed bound on the error in the abstraction with respect to the probabilistic invariance property; the dimension-dependent abstraction makes the error bounds more precise than in existing approaches. Additionally, we show how factor graphs and the sum-product algorithm for DBNs can be used to solve the finite-horizon probabilistic invariance problem. Together, DBN-based representations and algorithms can be significantly more efficient than explicit representations of Markov chains for abstracting and model checking structured Markov processes.
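The memory saving from the factored representation can be made concrete with a small sketch. Assuming, for illustration, a two-dimensional process with independent noise per dimension (the independence assumption mentioned above), x1' = a11·x1 + w1 and x2' = a21·x1 + a22·x2 + w2, the one-step kernel factorizes across dimensions, so the abstraction can store one conditional table per dimension instead of the joint chain. Dynamics, parameters, and function names are mine.

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cell_mass(mean, sigma, lo, hi):
    """Mass of N(mean, sigma^2) in the interval [lo, hi]."""
    return normal_cdf((hi - mean) / sigma) - normal_cdf((lo - mean) / sigma)

def dbn_abstraction(n, a11=0.8, a21=0.1, a22=0.7, s1=0.2, s2=0.2):
    """Factored (DBN-style) abstraction over the unit square with n cells
    per dimension. Returns one conditional table per dimension; the joint
    entry P[(i1,i2) -> (j1,j2)] is the product T1[i1][j1] * T2[i1][i2][j2],
    so the n^2 x n^2 chain never needs to be stored explicitly."""
    width = 1.0 / n
    centers = [(i + 0.5) * width for i in range(n)]
    # Factor 1: x1' depends only on x1  ->  n x n table
    T1 = [[cell_mass(a11 * centers[i1], s1, j * width, (j + 1) * width)
           for j in range(n)] for i1 in range(n)]
    # Factor 2: x2' depends on (x1, x2)  ->  n x n x n table
    T2 = [[[cell_mass(a21 * centers[i1] + a22 * centers[i2], s2,
                      j * width, (j + 1) * width)
            for j in range(n)] for i2 in range(n)] for i1 in range(n)]
    return T1, T2
```

Even in this tiny 2D example the factors store n² + n³ numbers versus n⁴ for the explicit chain; for higher-dimensional structured processes the gap is what makes the DBN representation scale where the explicit one does not.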