20,010 research outputs found

    A model checker for performance and dependability properties

    Markov chains are widely used in the context of performance and reliability evaluation of systems of various nature. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both the discrete [8] and the continuous time setting [1], [3]. In this short paper, we describe the prototype model checker E⊢MC² for discrete and continuous-time Markov chains, where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool.
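
    For orientation (this example is not taken from the abstract, and the atomic propositions are invented), properties in such extensions of CTL attach probability bounds and time bounds to path formulas; over a continuous-time Markov chain, a CSL-style property might look roughly like:

        % hypothetical CSL-style property (illustration only):
        % "with probability at least 0.99, a request is served within
        %  5 time units without passing through a failure state"
        \mathcal{P}_{\geq 0.99}\bigl[\, \neg\mathit{fail} \;\, \mathcal{U}^{\leq 5} \;\, \mathit{served} \,\bigr]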

    Probabilistic Model Checking for Energy Analysis in Software Product Lines

    In a software product line (SPL), a collection of software products is defined by their commonalities in terms of features rather than by explicitly specifying all products one by one. Several verification techniques have been adapted to establish temporal properties of SPLs. Symbolic and family-based model checking have proven successful for tackling the combinatorial blow-up that arises when reasoning about several feature combinations. However, most formal verification approaches for SPLs presented in the literature focus on static SPLs, where the features of a product are fixed and cannot be changed at runtime. This is in contrast to dynamic SPLs, which allow the feature combinations of a product to be adapted dynamically after deployment. The main contribution of the paper is a compositional modeling framework for dynamic SPLs, which supports probabilistic and nondeterministic choices and allows for quantitative analysis. We specify the feature changes during runtime within an automata-based coordination component, enabling reasoning about strategies for triggering dynamic feature changes to optimize various quantitative objectives, e.g., energy or monetary costs and reliability. For our framework there is a natural and conceptually simple translation into the input language of the prominent probabilistic model checker PRISM. This facilitates the application of PRISM's powerful symbolic engine to the operational behavior of dynamic SPLs and their family-based analysis against various quantitative queries. We demonstrate the feasibility of our approach with a case study of an energy-aware bonding network device. Comment: 14 pages, 11 figures.
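
    Purely as an illustration (not part of the abstract; the atomic propositions and the reward name are invented), the quantitative queries mentioned above are typically phrased as probability or expected-reward questions over the translated model, roughly of the form:

        % hypothetical quantitative queries (illustration only):
        \mathcal{P}_{\max=?}\bigl[\, \Diamond^{\leq T}\ \mathit{all\_links\_up} \,\bigr]
        \qquad
        \mathcal{R}^{\mathrm{energy}}_{\min=?}\bigl[\, \Diamond\ \mathit{transfer\_done} \,\bigr]

    read as "the maximal probability of bringing all links up within T time units" and "the minimal expected energy spent until the transfer completes" under the best resolution of the nondeterministic (feature-change) choices.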

    A tool for model-checking Markov chains

    Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker E⊢MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of E⊢MC².

    Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-run Average Objectives

    Turn-based stochastic games and their important subclass, Markov decision processes (MDPs), provide models for systems with both probabilistic and nondeterministic behavior. We consider turn-based stochastic games with two classical quantitative objectives: discounted-sum and long-run average objectives. The game models and the quantitative objectives are widely used in probabilistic verification, planning, optimal inventory control, network protocols and performance analysis. Games and MDPs that model realistic systems often have very large state spaces, and probabilistic abstraction techniques are necessary to handle the state-space explosion. The commonly used full-abstraction techniques do not yield space savings for systems that have many states with similar values but not necessarily similar transition structure. A semi-abstraction technique, magnifying-lens abstraction (MLA), which clusters states based on value only, disregarding differences in their transition relations, was proposed for qualitative objectives (reachability and safety objectives). In this paper we extend the MLA technique to solve stochastic games with discounted-sum and long-run average objectives. We present an MLA-based abstraction-refinement algorithm for stochastic games and MDPs with discounted-sum objectives. For long-run average objectives, our solution works for all MDPs and for a subclass of stochastic games where every state has the same value.
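
    To make the discounted-sum objective concrete, the following is a minimal, self-contained value-iteration sketch for a finite MDP with a discount factor (an illustration of the objective, not of the MLA algorithm itself; in MLA, a solver of this kind would only be run inside the currently magnified region). The dictionary-based MDP encoding and all names are assumptions made for this example.

        # Value iteration for a discounted-sum objective on a finite MDP.
        # Encoding assumption: mdp[state][action] = list of
        # (probability, successor, reward) triples.

        def value_iteration(mdp, discount=0.9, eps=1e-8):
            values = {s: 0.0 for s in mdp}
            while True:
                delta = 0.0
                for s, actions in mdp.items():
                    best = max(
                        sum(p * (r + discount * values[t]) for p, t, r in outcomes)
                        for outcomes in actions.values()
                    )
                    delta = max(delta, abs(best - values[s]))
                    values[s] = best
                if delta < eps:
                    return values

        # Tiny example: staying in state "a" earns reward 1 per step.
        toy = {
            "a": {"stay": [(1.0, "a", 1.0)], "leave": [(1.0, "b", 0.0)]},
            "b": {"stay": [(1.0, "b", 0.0)]},
        }
        print(value_iteration(toy))  # value of "a" approaches 1 / (1 - 0.9) = 10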

    A Markov Chain Model Checker

    Markov chains are widely used in the context of performance and reliability evaluation of systems of various nature. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both the discrete [17,6] and the continuous time setting [4,8]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker E⊢MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore we report on first successful applications of the tool to non-trivial examples, highlighting lessons learned during development and application of E⊢MC².

    Synthesizing Probabilistic Invariants via Doob's Decomposition

    When analyzing probabilistic computations, a powerful approach is to first find a martingale---an expression on the program variables whose expectation remains invariant---and then apply the optional stopping theorem in order to infer properties at termination time. One of the main challenges, then, is to systematically find martingales. We propose a novel procedure to synthesize martingale expressions from an arbitrary initial expression. Contrary to state-of-the-art approaches, we do not rely on constraint solving. Instead, we use a symbolic construction based on Doob's decomposition. This procedure can produce very complex martingales, expressed in terms of conditional expectations. We show how to automatically generate and simplify these martingales, as well as how to apply the optional stopping theorem to infer properties at termination time. This last step typically involves some simplification steps, and is usually done manually in current approaches. We implement our techniques in a prototype tool and demonstrate our process on several classical examples. Some of them go beyond the capability of current semi-automatic approaches.
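
    As a reminder of the construction the title refers to (standard textbook material, not a result of the paper), Doob's decomposition writes an adapted, integrable process as a martingale plus a predictable compensator; subtracting the compensator is what yields an expression whose expectation stays invariant:

        X_n = M_n + A_n, \qquad
        A_n = \sum_{k=0}^{n-1} \mathbb{E}\bigl[ X_{k+1} - X_k \mid \mathcal{F}_k \bigr], \qquad
        M_n = X_n - A_n .

    For instance, for a symmetric random walk S_n with X_n = S_n^2, the compensator is A_n = n, recovering the classical martingale S_n^2 - n.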

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
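
    Purely for illustration (not from the report, and the atomic propositions are invented), the two informal requirements quoted above could be rendered as probabilistic temporal-logic formulas roughly along the following lines:

        \mathcal{P}_{\geq 1}\bigl[\, \square\,(\mathit{crash} \rightarrow \Diamond^{\leq 20\,\mathrm{ms}}\ \mathit{deploy}) \,\bigr]
        \qquad
        \mathcal{P}_{< 0.001}\bigl[\, \Diamond\,(\mathit{fail}_1 \wedge \mathit{fail}_2) \,\bigr]

    i.e., a probability-one bound on a time-bounded response property, and an upper bound on the probability of ever observing both sensors failed at the same time.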

    Symbolic Algorithms for Qualitative Analysis of Markov Decision Processes with Büchi Objectives

    We consider Markov decision processes (MDPs) with ω-regular specifications given as parity objectives. We consider the problem of computing the set of almost-sure winning states from where the objective can be ensured with probability 1. The algorithms for the computation of the almost-sure winning set for parity objectives iteratively use the solutions for the almost-sure winning set for Büchi objectives (a special case of parity objectives). Our contributions are as follows. First, we present the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with Büchi objectives; our algorithm takes O(n \sqrt{m}) symbolic steps, as compared to the previously known algorithm that takes O(n^2) symbolic steps, where n is the number of states and m is the number of edges of the MDP. In practice MDPs have constant out-degree, and then our symbolic algorithm takes O(n \sqrt{n}) symbolic steps, as compared to the previously known O(n^2)-symbolic-step algorithm. Second, we present a new algorithm, the win-lose algorithm, with the following two properties: (a) it iteratively computes subsets of the almost-sure winning set and of its complement, as compared to all previous algorithms, which discover the almost-sure winning set only upon termination; and (b) it requires O(n \sqrt{K}) symbolic steps, where K is the maximal number of edges of the strongly connected components (SCCs) of the MDP. The win-lose algorithm requires symbolic computation of SCCs. Third, we improve the algorithm for symbolic SCC computation; the previously known algorithm takes a linear number of symbolic steps, and our new algorithm improves the constants associated with the linear number of steps. In the worst case the previously known algorithm takes 5n symbolic steps, whereas our new algorithm takes 4n symbolic steps.
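
    For background only (this is the classical explicit-state computation, not the paper's symbolic algorithms), the qualitative flavour of almost-sure winning can be seen in the standard nested fixpoint that computes the states of an MDP from which a target set can be reached with probability 1; the Büchi case builds on computations of this kind. The dictionary-based MDP encoding and the toy example are assumptions made for illustration.

        # Classical nested fixpoint for almost-sure reachability in an MDP
        # (explicit-state sketch; the paper's algorithms operate symbolically).
        # Encoding assumption: mdp[state][action] = set of possible successors.

        def almost_sure_reach(mdp, target):
            states = set(mdp)
            u = set(states)                      # outer greatest fixpoint
            while True:
                v = set(target)                  # inner least fixpoint
                while True:
                    new_v = set(v)
                    for s in states - v:
                        for succs in mdp[s].values():
                            # keep an action only if it stays inside u
                            # and can move into v
                            if succs <= u and succs & v:
                                new_v.add(s)
                                break
                    if new_v == v:
                        break
                    v = new_v
                if v == u:
                    return u
                u = v

        toy = {
            "s0": {"a": {"s0", "s1"}, "b": {"dead"}},
            "s1": {"a": {"goal"}},
            "goal": {"a": {"goal"}},
            "dead": {"a": {"dead"}},
        }
        print(almost_sure_reach(toy, {"goal"}))  # {"s0", "s1", "goal"} (in some order)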