62 research outputs found
Probabilistic model checking of complex biological pathways
Probabilistic model checking is a formal verification technique that has been successfully applied to the analysis of systems from a broad range of domains, including security and communication protocols, distributed algorithms and power management. In this paper we illustrate its applicability to a complex biological system: the FGF (Fibroblast Growth Factor) signalling pathway. We give a detailed description of how this case study can be modelled in the probabilistic model checker PRISM, discussing some of the issues that arise in doing so, and show how we can thus examine a rich selection of quantitative properties of this model. We present experimental results for the case study under several different scenarios and provide a detailed analysis, illustrating how this approach can be used to yield a better understanding of the dynamics of the pathway.
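The kind of quantitative property examined here, such as the probability of eventually reaching a given pathway configuration, reduces on the embedded jump chain of the CTMC to solving a linear system. A minimal sketch, using a hypothetical four-state receptor abstraction rather than the paper's actual FGF model:

```python
from fractions import Fraction as F

# Hypothetical 4-state CTMC loosely inspired by receptor binding
# (illustrative only; NOT the paper's FGF model). "active" and
# "degraded" are absorbing outcomes.
rates = {
    "free":     {"bound": F(5), "degraded": F(1)},
    "bound":    {"free": F(2), "active": F(3)},
    "active":   {},
    "degraded": {},
}

def reach_prob(rates, target):
    """Exact probability of eventually reaching `target`: solve
    (I - P) x = b on the embedded jump chain, by Gauss-Jordan
    elimination over rationals."""
    transient = [s for s, out in rates.items() if out and s != target]
    idx = {s: i for i, s in enumerate(transient)}
    n = len(transient)
    A = [[F(0)] * (n + 1) for _ in range(n)]    # augmented [I - P | b]
    for s in transient:
        i, total = idx[s], sum(rates[s].values())
        A[i][i] = F(1)
        for t, r in rates[s].items():
            p = r / total                       # jump-chain probability
            if t == target:
                A[i][n] += p
            elif t in idx:
                A[i][idx[t]] -= p
    for i in range(n):                          # Gauss-Jordan sweep
        piv = A[i][i]
        A[i] = [a / piv for a in A[i]]
        for j in range(n):
            f = A[j][i]
            if j != i and f:
                A[j] = [a - f * b for a, b in zip(A[j], A[i])]
    return {s: A[idx[s]][n] for s in transient}

probs = reach_prob(rates, "active")
print(probs["free"], probs["bound"])   # -> 3/4 9/10
```

PRISM performs this computation symbolically and at far larger scale; the toy solver above only illustrates the underlying reachability equations.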
Approximating Euclidean by Imprecise Markov Decision Processes
Euclidean Markov decision processes are a powerful tool for modeling control
problems under uncertainty over continuous domains. Finite-state imprecise
Markov decision processes can be used to approximate the behavior of these
infinite models. In this paper we address two questions: first, we investigate
what kind of approximation guarantees are obtained when the Euclidean process
is approximated by finite state approximations induced by increasingly fine
partitions of the continuous state space. We show that for cost functions over
finite time horizons the approximations become arbitrarily precise. Second, we
use imprecise Markov decision process approximations as a tool to analyse and
validate cost functions and strategies obtained by reinforcement learning. We
find that, on the one hand, our new theoretical results validate basic design
choices of a previously proposed reinforcement learning approach. On the other
hand, the imprecise Markov decision process approximations reveal some
inaccuracies in the learned cost functions.
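The finite-state approximation idea can be sketched concretely: partition the continuous state space into cells, let each cell-action pair map to the set of cells its points can reach (this set-valued transition is the imprecision), and compute lower and upper finite-horizon costs by dynamic programming. The 1-D control problem below is made up, not one of the paper's benchmarks; it only illustrates how the bounds tighten as the partition is refined:

```python
# Made-up 1-D control problem on [0, 1]: an action a in {-1, +1}
# moves x to clamp(x + 0.1 * a), and each step costs the distance to
# the goal 0.5. Partitioning [0, 1] into k cells yields an imprecise
# MDP: a cell maps to a *set* of successor cells, so we obtain lower
# and upper bounds on the finite-horizon cost.
def bounds(k, horizon=10):
    step = 1.0 / k
    def succ(i, a):
        lo = min(max(i * step + 0.1 * a, 0.0), 1.0 - step / 2)
        hi = min(max((i + 1) * step + 0.1 * a, 0.0), 1.0 - step / 2)
        return range(int(lo / step), int(hi / step) + 1)
    def cost(i):               # best/worst step cost within cell i
        l, r = i * step, (i + 1) * step
        best = 0.0 if l <= 0.5 <= r else min(abs(l - 0.5), abs(r - 0.5))
        return best, max(abs(l - 0.5), abs(r - 0.5))
    V_lo, V_hi = [0.0] * k, [0.0] * k
    for _ in range(horizon):   # finite-horizon dynamic programming
        V_lo = [cost(i)[0] + min(min(V_lo[j] for j in succ(i, a))
                                 for a in (-1, 1)) for i in range(k)]
        V_hi = [cost(i)[1] + min(max(V_hi[j] for j in succ(i, a))
                                 for a in (-1, 1)) for i in range(k)]
    return V_lo[0], V_hi[0]    # bounds for states in the leftmost cell

results = {k: bounds(k) for k in (10, 50, 250)}
for k, (lo, hi) in results.items():
    print(f"k={k:3d}  [{lo:.3f}, {hi:.3f}]  gap={hi - lo:.3f}")
```

As the abstract's first result predicts for finite horizons, the gap between the bounds shrinks as the partition is refined.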
Syntactic Markovian Bisimulation for Chemical Reaction Networks
In chemical reaction networks (CRNs) with stochastic semantics based on
continuous-time Markov chains (CTMCs), the typically large populations of
species cause combinatorially large state spaces. This makes the analysis very
difficult in practice and represents the major bottleneck for the applicability
of minimization techniques based, for instance, on lumpability. In this paper
we present syntactic Markovian bisimulation (SMB), a notion of bisimulation
developed in the Larsen-Skou style of probabilistic bisimulation, defined over
the structure of a CRN rather than over its underlying CTMC. SMB identifies a
lumpable partition of the CTMC state space a priori, in the sense that it is an
equivalence relation over species implying that two CTMC states are lumpable
when they agree on the total population of species within each equivalence
class. We develop an efficient partition-refinement
algorithm which computes the largest SMB of a CRN in polynomial time in the
number of species and reactions. We also provide an algorithm for obtaining a
quotient network from an SMB that induces the lumped CTMC directly, thus
avoiding the generation of the state space of the original CRN altogether. In
practice, we show that SMB allows significant reductions in a number of models
from the literature. Finally, we study SMB with respect to the deterministic
semantics of CRNs based on ordinary differential equations (ODEs), where each
equation gives the time-course evolution of the concentration of a species. SMB
implies forward CRN bisimulation, a recently developed behavioral notion of
equivalence for the ODE semantics, in an analogous sense: it yields a smaller
ODE system that keeps track of the sums of the solutions for equivalent
species.
Comment: Extended version (with proofs) of the corresponding paper published
at KimFest 2017 (http://kimfest.cs.aau.dk/).
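At its core, computing a largest bisimulation is partition refinement: split blocks until all members of a block have equal aggregate rates into every block. The sketch below applies this Larsen-Skou-style splitting directly to a small CTMC; the paper's SMB works on the reaction-network syntax instead, precisely to avoid ever building this state space:

```python
from collections import defaultdict

def coarsest_lumping(n, rate, partition):
    """Refine `partition` of states 0..n-1 until every state in a
    block has the same aggregate rate into every block (a generic
    state-level stand-in for the species-level SMB refinement)."""
    partition = [set(b) for b in partition]
    while True:
        refined = []
        for block in partition:
            sigs = defaultdict(set)   # signature -> states sharing it
            for s in block:
                sig = tuple(sum(rate(s, t) for t in b) for b in partition)
                sigs[sig].add(s)
            refined.extend(sigs.values())
        if len(refined) == len(partition):   # no block was split
            return refined
        partition = refined

# Hypothetical 4-state chain in which states 1 and 2 are symmetric.
R = {(0, 1): 2.0, (0, 2): 2.0, (1, 3): 1.0, (2, 3): 1.0,
     (1, 0): 0.5, (2, 0): 0.5}
rate = lambda s, t: R.get((s, t), 0.0)
P = coarsest_lumping(4, rate, [set(range(4))])
print(sorted(sorted(b) for b in P))   # -> [[0], [1, 2], [3]]
```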
Probabilistic Reachability for Parametric Markov Models
Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested first converting the Markov chain into a finite automaton, from which a regular expression is computed. Afterwards, this expression is evaluated to a closed-form function representing the reachability probability. This paper investigates how this idea can be turned into an effective procedure. It turns out that the bottleneck lies in the growth of the regular expression relative to the number of states (n^Θ(log n)). We therefore proceed differently, by tightly intertwining the regular expression computation with its evaluation. This allows us to arrive at an effective method that avoids this blow-up in most practical cases. We give a detailed account of the approach, also extending to parametric models with rewards and with non-determinism. Experimental evidence is provided, illustrating that our implementation provides meaningful insights on non-trivial models.
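The state-elimination step behind Daws' method can be sketched with concrete rational probabilities (the paper keeps them as symbolic parameters, so the result is a rational function rather than a number): removing a state reroutes each predecessor-successor pair through it, with self-loops contributing a geometric factor.

```python
from fractions import Fraction as F

def eliminate(P, init, target):
    """Reachability probability by state elimination. P maps
    (u, v) -> transition probability; every state except `init` and
    `target` is removed, rerouting paths through it (assumes no
    state loops on itself with probability 1)."""
    states = {u for u, _ in P} | {v for _, v in P}
    for s in states - {init, target}:
        star = 1 / (1 - P.pop((s, s), F(0)))   # sum over self-loops
        ins  = [(u, p) for (u, t), p in list(P.items()) if t == s]
        outs = [(v, p) for (t, v), p in list(P.items()) if t == s]
        for u, _ in ins:  del P[(u, s)]
        for v, _ in outs: del P[(s, v)]
        for u, pu in ins:
            for v, pv in outs:
                P[(u, v)] = P.get((u, v), F(0)) + pu * star * pv
    loop = P.pop((init, init), F(0))           # final self-loop on init
    return P.get((init, target), F(0)) / (1 - loop)

# 0 --1/2--> 1 --2/3--> 2 (target), with 1 --1/3--> 0 and a failure
# state 3 reached from 0 with probability 1/2.
P = {(0, 1): F(1, 2), (0, 3): F(1, 2), (1, 0): F(1, 3), (1, 2): F(2, 3)}
print(eliminate(P, 0, 2))   # -> 2/5
```

With symbolic arithmetic in place of `Fraction`, the same elimination yields the closed-form reachability function, which is where the growth the paper analyses comes from.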
Efficient Syntax-Driven Lumping of Differential Equations
We present an algorithm to compute exact aggregations of a class of systems of ordinary differential equations (ODEs). Our approach consists of an extension of Paige and Tarjan's seminal solution to the coarsest refinement problem, obtained by encoding an ODE system into a suitable discrete-state representation. In particular, we consider a simple extension of the syntax of elementary chemical reaction networks because (i) it can express ODEs with derivatives given by polynomials of degree at most two, which are relevant in many applications in natural sciences and engineering; and (ii) we can build on two recently introduced bisimulations, which yield two complementary notions of ODE lumping. Our algorithm computes the largest bisimulations in O(r · s · log s) time, where r is the number of monomials and s is the number of variables in the ODEs. Numerical experiments on real-world models from biochemistry, electrical engineering, and structural mechanics show that our prototype is able to handle ODEs with millions of variables and monomials, providing significant model reductions.
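The effect of such a lumping can be checked on a small case: for a linear ODE dx/dt = Ax, a partition is forward-lumpable when the block-wise column sums of A are constant within each block, and the block sums then obey a smaller ODE. The toy instance below is linear, so only a special case of the degree-two polynomial systems treated in the paper:

```python
def euler(deriv, x0, h, steps):
    """Fixed-step explicit Euler integration."""
    x = list(x0)
    for _ in range(steps):
        d = deriv(x)
        x = [xi + h * di for xi, di in zip(x, d)]
    return x

# dx/dt = A x with variables 1 and 2 symmetric; for the partition
# {0} | {1, 2} the column sums of A are block-wise constant, so the
# block sums y0 = x0, y1 = x1 + x2 satisfy a 2-variable ODE.
A = [[-2.0, 1.0, 1.0],
     [ 1.0, -1.0, 0.0],
     [ 1.0, 0.0, -1.0]]
full = lambda x: [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
lumped = lambda y: [-2.0 * y[0] + y[1], 2.0 * y[0] - y[1]]

x = euler(full, [1.0, 0.5, 0.5], 0.001, 2000)
y = euler(lumped, [1.0, 1.0], 0.001, 2000)
print(abs(x[0] - y[0]) < 1e-9, abs(x[1] + x[2] - y[1]) < 1e-9)  # -> True True
```

The lumped trajectory tracks the sums of the original solutions exactly (up to floating-point rounding), which is the behavioral guarantee the bisimulations provide.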
Probabilistic timing covert channels: to close or not to close?
We develop a new notion of security against timing attacks where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then propose an algorithm which computes an estimate of the security of a program with respect to this notion in terms of timing leakage, and show how to use this estimate for cost optimization.
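One standard way to quantify such leakage is the mutual information between the secret and the observed running time; the paper's notion also accounts for the probabilities of low variables and uses its own estimator, so the sketch below is only a simplified illustration on an early-exit string comparison:

```python
import math

def mutual_information(joint):
    """I(S; T) in bits for a joint distribution {(s, t): prob}."""
    ps, pt = {}, {}
    for (s, t), p in joint.items():
        ps[s] = ps.get(s, 0.0) + p
        pt[t] = pt.get(t, 0.0) + p
    return sum(p * math.log2(p / (ps[s] * pt[t]))
               for (s, t), p in joint.items() if p > 0)

# Early-exit comparison of a 2-bit secret against the guess "00":
# the running time is the length of the compared prefix, so it
# reveals how much of the secret matches. Uniform prior over secrets.
def runtime(secret, guess="00"):
    t = 0
    for a, b in zip(secret, guess):
        t += 1
        if a != b:
            break
    return t

secrets = ["00", "01", "10", "11"]
joint = {(s, runtime(s)): 0.25 for s in secrets}
print(mutual_information(joint))   # -> 1.0 (the first secret bit leaks)
```

A constant-time comparison would make the joint distribution a product, driving the leakage to zero; the cost trade-off the abstract mentions is exactly the price of such countermeasures.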
Exact analysis of summary statistics for continuous-time discrete-state Markov processes on networks using graph-automorphism lumping
We propose a unified framework to represent a wide range of continuous-time discrete-state Markov processes on networks, and show how many network dynamics models in the literature can be represented in this unified framework. We show how a particular subset of these models, referred to here as single-vertex-transition (SVT) processes, lead to the analysis of quasi-birth-and-death (QBD) processes in the theory of continuous-time Markov chains. We illustrate how to analyse a number of summary statistics for these processes, such as absorption probabilities and first-passage times. We extend the graph-automorphism lumping approach [Kiss, Miller, Simon, Mathematics of Epidemics on Networks, 2017; Simon, Taylor, Kiss, J. Math. Bio. 62(4), 2011], by providing a matrix-oriented representation of this technique, and show how it can be applied to a very wide range of dynamical processes on networks. This approach can be used not only to solve the master equation of the system, but also to analyse the summary statistics of interest. We also show the interplay between the graph-automorphism lumping approach and the QBD structures when dealing with SVT processes. Finally, we illustrate our theoretical results with examples from the areas of opinion dynamics and mathematical epidemiology.
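The graph-automorphism idea can be made concrete on a toy SIS model: permuting the leaves of a star graph leaves the dynamics invariant, so states can be lumped by hub status and infected-leaf count. The check below (a naive state-space version, not the paper's matrix-oriented formulation) verifies that this orbit partition is exactly lumpable:

```python
from collections import defaultdict
from itertools import product

# SIS dynamics on a 3-leaf star (a made-up toy): each vertex is 0
# (susceptible) or 1 (infected); infection spreads along edges at
# rate beta, and infected vertices recover at rate gamma.
beta, gamma = 2.0, 1.0
edges = [(0, 1), (0, 2), (0, 3)]          # vertex 0 is the hub

def rates_from(state):
    out = defaultdict(float)
    for v, s in enumerate(state):
        if s == 1:                        # recovery of vertex v
            out[state[:v] + (0,) + state[v + 1:]] += gamma
    for e in edges:
        for a, b in (e, e[::-1]):         # infection in both directions
            if state[a] == 1 and state[b] == 0:
                out[state[:b] + (1,) + state[b + 1:]] += beta
    return out

# Leaf permutations are graph automorphisms, so lump states by orbit:
orbit = lambda st: (st[0], sum(st[1:]))   # (hub status, #infected leaves)

# Exact lumpability: the aggregate rate into each orbit must be the
# same for every state of an orbit.
sigs = defaultdict(set)
for state in product((0, 1), repeat=4):
    agg = defaultdict(float)
    for t, r in rates_from(state).items():
        agg[orbit(t)] += r
    sigs[orbit(state)].add(tuple(sorted(agg.items())))
lumpable = all(len(v) == 1 for v in sigs.values())
print(lumpable)   # -> True
```

The lumped chain here has 8 states instead of 16; on larger symmetric networks the reduction is what makes absorption probabilities and first-passage times tractable.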
Solution of Large Markov Models Using Lumping Techniques and Symbolic Data Structures
Continuous time Markov chains (CTMCs) are among the most fundamental mathematical structures used for performance and dependability modeling of communication and computer systems. They are often constructed from models described in one of the various high-level formalisms. Since the size of a CTMC usually grows exponentially with the size of the corresponding high-level model, one often encounters the infamous state-space explosion problem, which often makes solution of the CTMCs intractable, and sometimes impossible. In state-based numerical analysis, which is the solution technique we have chosen to use to solve for measures defined on a CTMC, the state-space explosion problem is manifested in two ways: 1) large state transition rate matrices, and 2) large iteration vectors.
The goal of this dissertation is to extend, improve, and combine existing solutions of the state-space explosion problem in order to make possible the construction and solution of very large CTMCs generated from high-level models. Our new techniques follow largeness avoidance and largeness tolerance approaches. In the former approach, we reduce the size of the CTMC that needs to be solved in order to compute the measures of interest. That makes both the transition matrix and the iteration vectors smaller. In the latter approach, we reduce the size of the representation of the transition matrix by using symbolic data structures.
In particular, we have developed the fastest known CTMC lumping algorithm with the running time of O(m log n), where n and m are the number of states and non-zero entries of the generator matrix of the CTMC, respectively. The algorithm can be used both in isolation and along with all compositional lumping algorithms, including the one we have proposed in this dissertation. We have also combined the use of multi-valued decision diagram (MDD) and matrix diagram (MD) symbolic data structures with state-lumping techniques to develop an efficient symbolic state-space exploration algorithm for state-sharing replicate/join composed models that exploits lumpings that are due to equally behaving components created by the replicate operator. Finally, we have developed a new compositional algorithm that lumps CTMCs represented as MDs. Unlike other compositional lumping algorithms, our algorithm does not require any knowledge of the modeling formalisms from which the MDs were generated. Our approach relies on local conditions, i.e., conditions on individual nodes of the MD, which are often much smaller than the state transition rate matrix of the overall CTMC. We believe that our new approach has a simpler formulation, and thus is easier to understand.
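The symbolic-data-structure side can be illustrated with a minimal hash-consed decision diagram: identical sub-functions are stored only once, so a structured state space needs far fewer nodes than states. The even-parity constraint below is a made-up example, and the code is much simpler than the MDD/MD structures of the dissertation:

```python
from itertools import product

class MDD:
    """Hash-consed decision-diagram node: identical subgraphs are
    created once and shared (a toy version of symbolic MDD storage)."""
    _table = {}
    def __new__(cls, var, children):
        key = (var, children)
        if key not in cls._table:
            node = super().__new__(cls)
            node.var, node.children = var, children
            cls._table[key] = node
        return cls._table[key]

TRUE, FALSE = "T", "F"                    # terminal sentinels

def from_states(states, nvars, domain=2):
    """Encode a set of states (tuples of variable values) as an MDD."""
    def build(level, subset):
        if not subset:
            return FALSE
        if level == nvars:
            return TRUE
        children = tuple(
            build(level + 1, frozenset(s[1:] for s in subset if s[0] == v))
            for v in range(domain))
        return MDD(level, children)
    return build(0, frozenset(states))

# Even-parity constraint over 12 binary variables: 2048 states, but
# node sharing collapses them to a couple of nodes per level.
states = [s for s in product((0, 1), repeat=12) if sum(s) % 2 == 0]
root = from_states(states, 12)
print(len(states), "states,", len(MDD._table), "shared nodes")
```

This sharing is exactly what keeps large iteration structures representable; the dissertation's contribution is performing lumping directly on such diagrams rather than on the flat matrix.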