
    A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains

    Model checking of infinite-state continuous-time Markov chains (CTMCs) inevitably encounters the state explosion problem when the CTMC model is constructed. Our method is to build a truncated model of the infinite one. To obtain a truncation that is sufficient for model checking system properties expressed in Continuous Stochastic Logic (CSL), we propose a multistep extending truncation method for CTMC model construction and implement it in the INFAMY model checker. Experimental results show that our method is effective.
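
    To make the idea concrete, here is a minimal Python sketch of a breadth-first truncation loop. It is not the paper's multistep extension heuristic or INFAMY's implementation; the successors function, the uniformisation-rate bound and the Poisson-tail stopping rule are illustrative assumptions.

        from math import exp
        from collections import deque

        def poisson_tail(lam, k):
            """P(N > k) for N ~ Poisson(lam); crude bound on the probability
            of taking more than k uniformised jumps within the time horizon."""
            term, cdf = exp(-lam), exp(-lam)
            for i in range(1, k + 1):
                term *= lam / i
                cdf += term
            return 1.0 - cdf

        def truncate_ctmc(initial, successors, unif_rate, horizon, eps, max_levels=1000):
            """Illustrative breadth-first truncation of an infinite-state CTMC.

            successors(s) is assumed to yield (next_state, rate) pairs, and
            unif_rate is assumed to bound the total exit rate of every state.
            The truncation is extended level by level until the probability of
            taking more than `level` jumps within `horizon` drops below eps.
            """
            states = {initial}
            frontier = deque([initial])
            for level in range(max_levels):
                if poisson_tail(unif_rate * horizon, level) < eps:
                    break
                next_frontier = deque()
                while frontier:
                    s = frontier.popleft()
                    for t, _rate in successors(s):
                        if t not in states:
                            states.add(t)
                            next_frontier.append(t)
                frontier = next_frontier
            return states

    The returned set of states then induces the finite truncated CTMC on which CSL model checking is performed; the stopping rule is only one possible sufficiency criterion.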

    Model-Checking Branching-Time Properties of Stateless Probabilistic Pushdown Systems and Its Quantum Extension

    In this work, we first resolve a question in the probabilistic verification of infinite-state systems (specifically, probabilistic pushdown systems). We show that model checking stateless probabilistic pushdown systems (pBPA) against probabilistic computational tree logic (PCTL) is generally undecidable. We then define quantum analogues of probabilistic pushdown systems and Markov chains and investigate whether a quantum analogue of PCTL is necessary to describe the branching-time properties of quantum Markov chains. We also study the corresponding model-checking problem and show that model checking stateless quantum pushdown systems (qBPA) against PCTL is likewise generally undecidable. Immediate corollaries of the above results are summarized in the work. Comment: Obvious typos corrected in new version; this work is a quantum extension of arXiv:1405.4806, [v13]; 30 pages; comments are welcome.
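
    For reference, PCTL state formulas Φ and path formulas φ are commonly given by the following grammar, where a ranges over atomic propositions, ⋈ ∈ {<, ≤, ≥, >} and p ∈ [0, 1]:

        \[
        \begin{aligned}
        \Phi \;&::=\; \mathit{true} \;\mid\; a \;\mid\; \neg\Phi \;\mid\; \Phi_1 \wedge \Phi_2 \;\mid\; \mathcal{P}_{\bowtie p}(\varphi)\\
        \varphi \;&::=\; \mathrm{X}\,\Phi \;\mid\; \Phi_1\,\mathrm{U}\,\Phi_2 \;\mid\; \Phi_1\,\mathrm{U}^{\le k}\,\Phi_2
        \end{aligned}
        \]

    Here P_{⋈ p}(φ) holds in a state s iff the probability measure of the paths from s satisfying φ meets the bound ⋈ p; the undecidability results above concern checking such formulas on the configuration graphs of pBPA and qBPA.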

    When are Stochastic Transition Systems Tameable?

    A decade ago, Abdulla, Ben Henda and Mayr introduced the elegant concept of decisiveness for denumerable Markov chains [1]. Roughly speaking, decisiveness allows one to lift most good properties from finite Markov chains to denumerable ones, and therefore to adapt existing verification algorithms to infinite-state models. Decisive Markov chains however do not encompass stochastic real-time systems, and general stochastic transition systems (STSs for short) are needed. In this article, we provide a framework to perform both the qualitative and the quantitative analysis of STSs. First, we define various notions of decisiveness (inherited from [1]), notions of fairness and of attractors for STSs, and make explicit the relationships between them. Then, we define a notion of abstraction, together with natural concepts of soundness and completeness, and we give general transfer properties, which will be central to several verification algorithms on STSs. We further design a generic construction which will be useful for the analysis of ω-regular properties, when a finite attractor exists, either in the system (if it is denumerable), or in a sound denumerable abstraction of the system. We next provide algorithms for qualitative model-checking, and generic approximation procedures for quantitative model-checking. Finally, we instantiate our framework with stochastic timed automata (STA), generalized semi-Markov processes (GSMPs) and stochastic time Petri nets (STPNs), three models combining dense time and probabilities. This allows us to derive decidability and approximability results for the verification of these models. Some of these results were known from the literature, but our generic approach permits viewing them in a unified framework and obtaining them with less effort. We also derive interesting new approximability results for STA, GSMPs and STPNs. Comment: 77 pages.
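
    One common formulation of the decisiveness notion inherited from [1], for a denumerable Markov chain M with state space S and target set F ⊆ S, is:

        \[
        M \ \text{is decisive w.r.t.}\ F
        \quad\iff\quad
        \forall s \in S:\ \Pr\nolimits_{s}\big(\Diamond F \;\vee\; \Diamond \widetilde{F}\big) = 1,
        \qquad
        \widetilde{F} = \{\, s \in S : F \ \text{is not reachable from}\ s \,\}.
        \]

    In words: from every state, a run almost surely ends up either in F or in a state from which F can no longer be reached. The article lifts variants of this notion to general STSs.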

    PCTL Model Checking of Markov Chains: Truth and Falsity as Winning Strategies in Games

    Probabilistic model checking is a technique for verifying whether a model such as a Markov chain satisfies a probabilistic, behavioral property, e.g. “with probability at least 0.999, a device will be elected leader.” Such properties are expressible in probabilistic temporal logics, e.g. PCTL, and efficient algorithms exist for checking whether these formulae are true or false on finite-state models. Alas, these algorithms do not supply diagnostic information for why a probabilistic property does or does not hold in a given model. We provide here complete and rigorous foundations for such diagnostics in the setting of countable labeled Markov chains and PCTL. For each model and PCTL formula, we define a game between a Verifier and a Refuter that is won by Verifier if the formula holds in the model, and won by Refuter if it does not hold. Games are won by exactly one player, through monotone strategies that encode the diagnostic information for truth and falsity (respectively). These games are infinite, with Büchi-type acceptance conditions; simpler fairness conditions are shown not to be sufficient. Verifier can always force finite plays for certain PCTL formulae, suggesting the existence of finite-state abstractions of models that satisfy such formulae.
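
    For example, the leader-election property quoted above is the PCTL formula

        \[
        \mathcal{P}_{\ge 0.999}\big(\Diamond\,\mathit{leader\_elected}\big)
        \]

    where ◇Φ abbreviates true U Φ. Verifier wins the associated game exactly when the formula holds in the model, and the winning strategy provides the diagnostic evidence for that verdict.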

    Model checking Quantitative Linear Time Logic

    This paper considers QLtl, a quantitative analogue of Ltl, and presents algorithms for model checking QLtl over quantitative versions of Kripke structures and Markov chains.

    Model Checking Markov Chains with Actions and State Labels

    In the past, logics of several kinds have been proposed for reasoning about discrete- or continuous-time Markov chains. Most of these logics rely on either state labels (atomic propositions) or on transition labels (actions). However, in several applications it is useful to reason about both state-properties and action-sequences. For this purpose, we introduce the logic asCSL which provides powerful means to characterize execution paths of Markov chains with actions and state labels. asCSL can be regarded as an extension of the purely state-based logic CSL (continuous stochastic logic). In asCSL, path properties are characterized by regular expressions over actions and state-formulas. Thus, the truth value of path-formulas does not only depend on the available actions in a given time interval, but also on the validity of certain state formulas in intermediate states. We compare the expressive power of CSL and asCSL and show that even the state-based fragment of asCSL is strictly more expressive than CSL if time intervals starting at zero are employed. Using an automaton-based technique, an asCSL formula and a Markov chain with actions and state labels are combined into a product Markov chain. For time intervals starting at zero we establish a reduction of the model checking problem for asCSL to CSL model checking on this product Markov chain. The usefulness of our approach is illustrated through an elaborate model of a scalable cellular communication system for which several properties are formalized by means of asCSL-formulas and checked using the new procedure.
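
    As a schematic illustration only (the precise asCSL syntax and semantics are defined in the paper), such a property combines a probability bound, a time interval, and a regular expression whose atoms pair a state formula with an action, e.g.

        \[
        \mathcal{P}_{\ge 0.95}\Big( \big( (\mathit{ready},\,\mathit{send}) \,;\, (\mathit{true},\,\mathit{ack})^{*} \big)^{[0,10]} \Big)
        \]

    read as: with probability at least 0.95, within 10 time units a send action is performed from a state satisfying ready, followed by arbitrarily many ack actions. The names ready, send and ack are placeholders, not identifiers from the paper.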

    Beyond Model-Checking CSL for QBDs: Resets, Batches and Rewards

    We propose and discuss a number of extensions to quasi-birth-death models (QBDs) for which CSL model checking is still possible, thus extending our recent work on CSL model checking of QBDs. We then equip the QBDs with rewards, and discuss algorithms and open research issues for model checking CSRL for QBDs with rewards.
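
    As background, a QBD has a level-structured state space whose generator is block tridiagonal; roughly, the extensions considered in the paper relax this structure (resets allow jumps back to the boundary level, batches allow jumps over several levels). The standard QBD form is:

        \[
        Q \;=\;
        \begin{pmatrix}
        B_{1} & B_{0} &        &        &        \\
        B_{2} & A_{1} & A_{0}  &        &        \\
              & A_{2} & A_{1}  & A_{0}  &        \\
              &       & A_{2}  & A_{1}  & A_{0}  \\
              &       &        & \ddots & \ddots
        \end{pmatrix}
        \]

    where A_0, A_1 and A_2 describe transitions to the next, same and previous level, respectively, and the B-blocks cover the boundary level.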

    Decisive Markov Chains

    We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In particular, this holds for probabilistic lossy channel systems (PLCS). Furthermore, all globally coarse Markov chains are decisive. This class includes probabilistic vector addition systems (PVASS) and probabilistic noisy Turing machines (PNTM). We consider both safety and liveness problems for decisive Markov chains, i.e., the probabilities that a given set of states F is eventually reached or reached infinitely often, respectively. 1. We express the qualitative problems in abstract terms for decisive Markov chains, and show an almost complete picture of their decidability for PLCS, PVASS and PNTM. 2. We also show that the path enumeration algorithm of Iyer and Narasimha terminates for decisive Markov chains and can thus be used to solve the approximate quantitative safety problem. A modified variant of this algorithm solves the approximate quantitative liveness problem. 3. Finally, we show that the exact probability of (repeatedly) reaching F cannot be effectively expressed (in a uniform way) in Tarski-algebra for either PLCS, PVASS or (P)NTM. Comment: 32 pages.
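
    The path-enumeration idea behind the approximate quantitative safety result can be sketched as follows. This is a simplified Python rendering, not the Iyer–Narasimha algorithm itself; successors, in_F and cannot_reach_F are assumed to be given and decidable, as they are for the classes above.

        def approx_reach_prob(initial, successors, in_F, cannot_reach_F, eps):
            """Approximate P(reach F) up to precision eps for a decisive Markov chain.

            successors(s)      -> iterable of (t, prob) pairs, probs summing to 1
            in_F(s)            -> True if s is a target state
            cannot_reach_F(s)  -> True if F is unreachable from s (assumed decidable)
            """
            yes, no = 0.0, 0.0           # lower bounds on P(reach F) and P(never reach F)
            frontier = [(initial, 1.0)]  # undecided finite paths: (end state, probability)
            while yes + no < 1.0 - eps:
                next_frontier = []
                for s, p in frontier:
                    if in_F(s):
                        yes += p
                    elif cannot_reach_F(s):
                        no += p
                    else:
                        for t, q in successors(s):
                            next_frontier.append((t, p * q))
                frontier = next_frontier
            # the true probability lies in [yes, 1 - no], an interval of width <= eps
            return yes, 1.0 - no

    Decisiveness guarantees that yes + no tends to 1, so the loop terminates and the returned bounds bracket the exact reachability probability within eps; the sketch stores one entry per path, whereas a practical implementation would merge entries by state.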