
    Model Checking Markov Chains with Actions and State Labels

    In the past, logics of several kinds have been proposed for reasoning about discrete- or continuous-time Markov chains. Most of these logics rely on either state labels (atomic propositions) or on transition labels (actions). However, in several applications it is useful to reason about both state properties and action sequences. For this purpose, we introduce the logic asCSL, which provides powerful means to characterize execution paths of Markov chains with actions and state labels. asCSL can be regarded as an extension of the purely state-based logic CSL (continuous stochastic logic). In asCSL, path properties are characterized by regular expressions over actions and state formulas. Thus, the truth value of path formulas depends not only on the available actions in a given time interval, but also on the validity of certain state formulas in intermediate states. We compare the expressive power of CSL and asCSL and show that even the state-based fragment of asCSL is strictly more expressive than CSL if time intervals starting at zero are employed. Using an automaton-based technique, an asCSL formula and a Markov chain with actions and state labels are combined into a product Markov chain. For time intervals starting at zero we establish a reduction of the model checking problem for asCSL to CSL model checking on this product Markov chain. The usefulness of our approach is illustrated through an elaborate model of a scalable cellular communication system, for which several properties are formalized by means of asCSL formulas and checked using the new procedure.
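    To give a flavour of the logic (an illustrative formula of our own, not one taken from the abstract), asCSL path properties pair state formulas with actions inside a regular expression and attach a time interval. Assuming the usual asCSL operator syntax, a requirement such as "with probability at least 0.9, only deliver actions are performed in up-states until a done-state is reached within 5 time units" could be sketched as

        \mathcal{P}_{\geq 0.9}\Big( \big( (\mathit{up}, \mathit{deliver})^{*} \; ; \; (\mathit{done}, \surd) \big)^{[0,5]} \Big)

    where a pair (\Phi, a) asks for action a to be taken from a state satisfying \Phi, \surd tests a state formula without performing an action, ";" and "*" denote concatenation and Kleene star, and [0,5] is the time interval.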

    A Markov Chain Model Checker

    Markov chains are widely used in the context of performance and reliability evaluation of systems of various nature. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both the discrete [17,6] and the continuous time setting [4,8]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker (E ⊢ MC²), where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on first successful applications of the tool to non-trivial examples, highlighting lessons learned during the development and application of E ⊢ MC².

    A tool for model-checking Markov chains

    Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker (E ⊢ MC²), where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of E ⊢ MC².

    A uniformization-based algorithm for model checking the CSL until operator on labeled queueing networks

    We present a model checking procedure for the CSL until operator on the CTMCs that underlie Jackson queueing networks (JQNs). The key issue lies in the fact that the underlying CTMC is infinite in as many dimensions as there are queues in the JQN. We need to compute the transient state probabilities for all goal states and for all possible starting states; however, for these transient probabilities no computational procedures are readily available. The contribution of this paper is the proposal of a new uniformization-based approach to compute the transient state probabilities. Furthermore, we show how the highly structured state space of JQNs allows us to compute the possibly infinite satisfaction set for until formulas. A case study on an e-business site shows the feasibility of our approach.
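    As a concrete illustration of the uniformization idea (a minimal finite-state sketch only; the paper's contribution is to make this work on the infinite CTMCs underlying JQNs, and the generator matrix and rates below are invented for the example):

        # Standard uniformization for transient state probabilities of a finite CTMC.
        import numpy as np

        def transient_probabilities(Q, pi0, t, eps=1e-10):
            """Approximate pi(t) = pi0 * exp(Q t) via the uniformized DTMC."""
            q = max(-Q[i, i] for i in range(Q.shape[0]))  # uniformization rate
            P = np.eye(Q.shape[0]) + Q / q                # DTMC of the uniformized chain
            weight = np.exp(-q * t)                       # Poisson(q*t) mass for 0 jumps
            term = pi0.astype(float)
            result = weight * term
            k, accumulated = 0, weight
            while 1.0 - accumulated > eps:                # truncate the Poisson sum
                k += 1
                term = term @ P
                weight *= q * t / k
                accumulated += weight
                result += weight * term
            return result

        # Two-state availability model: failure rate 0.1, repair rate 1.0
        Q = np.array([[-0.1, 0.1], [1.0, -1.0]])
        print(transient_probabilities(Q, np.array([1.0, 0.0]), t=5.0))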

    Beyond Model-Checking CSL for QBDs: Resets, Batches and Rewards

    We propose and discuss a number of extensions to quasi-birth-death models (QBDs) for which CSL model checking is still possible, thus extending our recent work on CSL model checking of QBDs. We then equip the QBDs with rewards, and discuss algorithms and open research issues for model checking CSRL for QBDs with rewards.

    Efficient CSL Model Checking Using Stratification

    For continuous-time Markov chains, the model-checking problem with respect to continuous-time stochastic logic (CSL) has been introduced and shown to be decidable by Aziz, Sanwal, Singhal and Brayton in 1996. Their proof can be turned into an approximation algorithm with worse than exponential complexity. In 2000, Baier, Haverkort, Hermanns and Katoen presented an efficient polynomial-time approximation algorithm for the sublogic in which only binary until is allowed. In this paper, we propose such an efficient polynomial-time approximation algorithm for full CSL. The key to our method is the notion of stratified CTMCs with respect to the CSL property to be checked. On a stratified CTMC, the probability to satisfy a CSL path formula can be approximated by a transient analysis in polynomial time (using uniformization). We present a measure-preserving, linear-time and -space transformation of any CTMC into an equivalent, stratified one. This makes the present work the centerpiece of a broadly applicable full CSL model checker. Recently, the decision algorithm by Aziz et al. was shown to work only for stratified CTMCs. As an additional contribution, our measure-preserving transformation can be used to ensure decidability for general CTMCs. Comment: 18 pages, preprint for LMCS. An extended abstract appeared in ICALP 201
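    The transient-analysis reduction for a single time-bounded until, on which the polynomial-time algorithm mentioned above relies, can be sketched for a finite CTMC as follows (a minimal illustration with an invented 3-state chain, not the stratification construction of the paper; it uses a dense matrix exponential instead of uniformization for brevity):

        # Pr(Phi U^{[0,t]} Psi): make Psi-states and (not Phi, not Psi)-states
        # absorbing, then ask for the probability of being in a Psi-state at time t.
        import numpy as np
        from scipy.linalg import expm

        def prob_until(Q, phi, psi, t):
            Qm = Q.copy()
            absorbing = psi | ~phi          # states where the path property is decided
            Qm[absorbing, :] = 0.0          # cut all outgoing transitions
            Pt = expm(Qm * t)               # transient probabilities after time t
            return Pt[:, psi].sum(axis=1)   # one probability per starting state

        Q = np.array([[-2.0, 1.0, 1.0],
                      [0.0, -1.0, 1.0],
                      [0.0, 0.0, 0.0]])
        phi = np.array([True, True, False])
        psi = np.array([False, False, True])
        print(prob_until(Q, phi, psi, t=1.0))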

    When are Stochastic Transition Systems Tameable?

    A decade ago, Abdulla, Ben Henda and Mayr introduced the elegant concept of decisiveness for denumerable Markov chains [1]. Roughly speaking, decisiveness allows one to lift most good properties from finite Markov chains to denumerable ones, and therefore to adapt existing verification algorithms to infinite-state models. Decisive Markov chains, however, do not encompass stochastic real-time systems, and general stochastic transition systems (STSs for short) are needed. In this article, we provide a framework to perform both the qualitative and the quantitative analysis of STSs. First, we define various notions of decisiveness (inherited from [1]), notions of fairness and of attractors for STSs, and make explicit the relationships between them. Then, we define a notion of abstraction, together with natural concepts of soundness and completeness, and we give general transfer properties, which will be central to several verification algorithms on STSs. We further design a generic construction which will be useful for the analysis of ω-regular properties, when a finite attractor exists, either in the system (if it is denumerable), or in a sound denumerable abstraction of the system. We next provide algorithms for qualitative model-checking, and generic approximation procedures for quantitative model-checking. Finally, we instantiate our framework with stochastic timed automata (STA), generalized semi-Markov processes (GSMPs) and stochastic time Petri nets (STPNs), three models combining dense time and probabilities. This allows us to derive decidability and approximability results for the verification of these models. Some of these results were known from the literature, but our generic approach allows us to view them in a unified framework and to obtain them with less effort. We also derive interesting new approximability results for STA, GSMPs and STPNs. Comment: 77 pages

    On the connections between PCTL and Dynamic Programming

    Probabilistic Computation Tree Logic (PCTL) is a well-known modal logic which has become a standard for expressing temporal properties of finite-state Markov chains in the context of automated model checking. In this paper, we give a definition of PCTL for noncountable-space Markov chains, and we show that there is a substantial affinity between certain of its operators and problems of Dynamic Programming. After proving some uniqueness properties of the solutions to the latter, we conclude the paper with two examples showing that some recovery strategies in practical applications, which are naturally stated as reach-avoid problems, can actually be viewed as particular cases of PCTL formulas. Comment: Submitted
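    The bounded-until / reach-avoid connection can be made concrete for a finite-state chain (the paper works over non-countable state spaces; the chain, sets and horizon below are invented for illustration): the PCTL probability P(safe U^{<=N} target) corresponds to the dynamic-programming recursion sketched here.

        # V_0 = 1 on target states; V_{k+1}(s) = 1 if s is a target,
        # sum_s' P(s,s') V_k(s') if s is safe but not a target, 0 otherwise.
        import numpy as np

        def reach_avoid(P, safe, target, horizon):
            V = target.astype(float)
            transient = safe & ~target
            for _ in range(horizon):
                V = np.where(target, 1.0, np.where(transient, P @ V, 0.0))
            return V

        P = np.array([[0.5, 0.2, 0.3],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        safe = np.array([True, False, True])
        target = np.array([False, False, True])
        print(reach_avoid(P, safe, target, horizon=10))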

    MeGARA: Menu-based Game Abstraction and Abstraction Refinement of Markov Automata

    Markov automata combine continuous time, probabilistic transitions, and nondeterminism in a single model. They represent an important and powerful way to model a wide range of complex real-life systems. However, such models tend to be large and difficult to handle, making abstraction and abstraction refinement necessary. In this paper we present an abstraction and abstraction refinement technique for Markov automata, based on the game-based and menu-based abstraction of probabilistic automata. First experiments show that a significant reduction in size is possible using abstraction. Comment: In Proceedings QAPL 2014, arXiv:1406.156
