    ISCASMC: A Web-Based Probabilistic Model Checker

    We introduce the web-based model checker iscasMc for probabilistic systems (see http://iscasmc.ios.ac.cn/IscasMC). This Java application offers an easy-to-use web interface for the evaluation of Markov chains and Markov decision processes against PCTL and PCTL* specifications. Compared to PRISM or MRMC, iscasMc is particularly efficient in evaluating the probabilities of LTL properties. © 2014 Springer International Publishing Switzerland.
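
    The evaluation the abstract refers to ultimately reduces to computing probabilities such as P(F goal) on a Markov chain by solving a linear-equation system. The sketch below illustrates that core computation on a small made-up DTMC; it is not iscasMc's implementation, and all states, numbers, and the threshold are hypothetical.

```python
# Minimal sketch: unbounded reachability on a small DTMC, the core step of
# checking a PCTL property such as P>=0.9 [ F "goal" ].  Illustrative only,
# not iscasMc's implementation; the chain and the threshold are made up.
import numpy as np

# Transition matrix of a hypothetical 4-state DTMC (rows sum to 1).
P = np.array([
    [0.0, 0.6, 0.4, 0.0],   # s0
    [0.1, 0.0, 0.2, 0.7],   # s1
    [0.0, 0.0, 1.0, 0.0],   # s2 = fail (absorbing)
    [0.0, 0.0, 0.0, 1.0],   # s3 = goal (absorbing)
])
goal = {3}
n = len(P)

# Backward graph reachability: states that can reach the goal at all.
# Everywhere else the reachability probability is exactly 0.
can_reach = set(goal)
changed = True
while changed:
    changed = False
    for s in range(n):
        if s not in can_reach and any(P[s, t] > 0 for t in can_reach):
            can_reach.add(s)
            changed = True

# Unknowns x_s for the remaining states: x_s = sum_t P(s,t) * x_t,
# rearranged into the linear system (I - A) x = b.
unknown = [s for s in range(n) if s in can_reach and s not in goal]
A = P[np.ix_(unknown, unknown)]
b = np.array([P[s, list(goal)].sum() for s in unknown])
x = np.linalg.solve(np.eye(len(unknown)) - A, b)

prob = dict.fromkeys(range(n), 0.0)
prob.update({s: 1.0 for s in goal})
prob.update(dict(zip(unknown, x)))
for s in range(n):
    print(f"P(s{s} |= F goal) = {prob[s]:.4f}")
print("P>=0.9 [ F goal ] holds in s0:", prob[0] >= 0.9)
```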

    Efficient computation of exact solutions for quantitative model checking

    Quantitative model checkers for Markov Decision Processes typically use finite-precision arithmetic. If all the coefficients in the process are rational numbers, then the model checking results are rational, and so they can be computed exactly. However, exact techniques are generally too expensive or limited in scalability. In this paper we propose a method for obtaining exact results starting from an approximated solution in finite-precision arithmetic. The input of the method is a description of a scheduler, which can be obtained by a model checker using finite precision. Given a scheduler, we show how to obtain a corresponding basis in a linear-programming problem, in such a way that the basis is optimal whenever the scheduler attains the worst-case probability. This correspondence is already known for discounted MDPs; we show how to apply it in the undiscounted case provided that some preprocessing is done. Using the correspondence, the linear-programming problem can be solved in exact arithmetic starting from the basis obtained. As a consequence, the method finds the worst-case probability even if the scheduler provided by the model checker was not optimal. In our experiments, the calculation of exact solutions from a candidate scheduler is significantly faster than the calculation using the simplex method under exact arithmetic starting from a default basis. Comment: In Proceedings QAPL 2012, arXiv:1207.055
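
    As a rough illustration of working from a candidate scheduler in exact arithmetic, the sketch below fixes a memoryless scheduler for a small made-up MDP and solves the induced reachability equations over the rationals. This is a simplification of the paper's method, which instead derives a simplex basis from the scheduler so that exact LP solving recovers the true worst-case probability even when the scheduler is not optimal.

```python
# Sketch only: evaluate a fixed candidate scheduler exactly, using rational
# arithmetic; the MDP, scheduler, and all numbers are hypothetical.
from fractions import Fraction as F

# MDP: state -> action -> list of (successor, probability).
mdp = {
    0: {"a": [(1, F(1, 2)), (2, F(1, 2))], "b": [(2, F(1, 1))]},
    1: {"a": [(3, F(2, 3)), (4, F(1, 3))]},
    2: {"a": [(3, F(1, 10)), (4, F(9, 10))]},
    3: {"a": [(3, F(1, 1))]},            # goal (absorbing)
    4: {"a": [(4, F(1, 1))]},            # trap (absorbing, never reaches goal)
}
goal = {3}
zero = {4}                                # probability 0 by inspection
scheduler = {0: "a", 1: "a", 2: "a"}      # candidate from a finite-precision run

# Equations x_s = sum_t P(s,t) x_t with x = 1 on goal and x = 0 on the trap,
# written as an augmented matrix [I - P | b] over the rationals.
states = [s for s in mdp if s not in goal and s not in zero]
idx = {s: i for i, s in enumerate(states)}
n = len(states)
A = [[F(0)] * (n + 1) for _ in range(n)]
for s in states:
    A[idx[s]][idx[s]] += 1
    for t, p in mdp[s][scheduler[s]]:
        if t in goal:
            A[idx[s]][n] += p
        elif t in idx:
            A[idx[s]][idx[t]] -= p

# Exact Gaussian elimination over the rationals.
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    pivot = A[col][col]
    A[col] = [v / pivot for v in A[col]]
    for r in range(n):
        if r != col and A[r][col] != 0:
            factor = A[r][col]
            A[r] = [rv - factor * cv for rv, cv in zip(A[r], A[col])]

for s in states:
    print(f"Pr(s{s} |= F goal) under the scheduler =", A[idx[s]][n])
```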

    How to Handle Assumptions in Synthesis

    The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect. Comment: In Proceedings SYNT 2014, arXiv:1407.493

    Perturbation analysis in verification of discrete-time Markov chains

    Perturbation analysis in probabilistic verification addresses the robustness and sensitivity problem for verification of stochastic models against qualitative and quantitative properties. We identify two types of perturbation bounds, namely non-asymptotic bounds and asymptotic bounds. Non-asymptotic bounds are exact, pointwise bounds that quantify the upper and lower bounds of the verification result subject to a given perturbation of the model, whereas asymptotic bounds are closed-form bounds that approximate non-asymptotic bounds by assuming that the given perturbation is sufficiently small. We perform perturbation analysis in the setting of Discrete-time Markov Chains. We consider three basic matrix norms to capture the perturbation distance, and focus on the computational aspect. Our main contributions include algorithms and tight complexity bounds for calculating both non-asymptotic bounds and asymptotic bounds with respect to the three perturbation distances. © 2014 Springer-Verlag
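
    The sketch below is a toy illustration of the two kinds of bounds on a parametric DTMC, not the paper's matrix-norm algorithms: the non-asymptotic change of a reachability probability is computed exactly, while the asymptotic bound is the first-order estimate obtained from the derivative, valid only for small perturbations. The chain and all numbers are made up.

```python
# Toy illustration of non-asymptotic vs. asymptotic perturbation bounds:
# a DTMC whose reachability probability is a function f(p) of one perturbed
# transition probability.  Chain and numbers are made up.

def reach_prob(p):
    """Pr(s0 |= F goal) in the chain  s0 -p-> s1, s0 -(1-p)-> fail,
    s1 -1/2-> s0, s1 -1/2-> goal.  Solving the linear equations gives
    f(p) = p / (2 - p)."""
    return p / (2.0 - p)

p, delta = 0.8, 0.05                                   # nominal value, perturbation
exact_change = reach_prob(p + delta) - reach_prob(p)   # non-asymptotic change
derivative = 2.0 / (2.0 - p) ** 2                      # f'(p)
first_order = derivative * delta                       # asymptotic estimate

print(f"f({p})                = {reach_prob(p):.6f}")
print(f"non-asymptotic change = {exact_change:.6f}")
print(f"asymptotic estimate   = {first_order:.6f}")
```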

    Synthesizing Systems with Optimal Average-Case Behavior for Ratio Objectives

    We show how to automatically construct a system that satisfies a given logical specification and has an optimal average behavior with respect to a specification with ratio costs. When synthesizing a system from a logical specification, it is often the case that several different systems satisfy the specification. In this case, it is usually not easy for the user to state formally which system she prefers. Prior work proposed to rank the correct systems by adding a quantitative aspect to the specification. A desired preference relation can be expressed with (i) a quantitative language, which is a function assigning a value to every possible behavior of a system, and (ii) an environment model defining the desired optimization criteria of the system, e.g., worst-case or average-case optimal. In this paper, we show how to synthesize a system that is optimal for (i) a quantitative language given by an automaton with a ratio cost function, and (ii) an environment model given by a labeled Markov decision process. The objective of the system is to minimize the expected (ratio) costs. The solution is based on a reduction to Markov decision processes with ratio cost functions, which do not require that the costs in the denominator are strictly positive. We find an optimal strategy for these MDPs using a fractional linear program. Comment: In Proceedings iWIGP 2011, arXiv:1102.374
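
    The final step mentioned above is a fractional linear program. The sketch below shows the standard Charnes-Cooper transformation that turns such a program into an ordinary LP, on made-up numbers and without the paper's MDP reduction.

```python
# Charnes-Cooper transformation of a fractional LP (hypothetical numbers):
#   minimize (c.x + c0) / (d.x + d0)   s.t.  A x <= b,  x >= 0,  d.x + d0 > 0
# becomes, with y = t*x and t = 1 / (d.x + d0):
#   minimize c.y + c0*t   s.t.  A y - b t <= 0,  d.y + d0*t = 1,  y >= 0, t >= 0
import numpy as np
from scipy.optimize import linprog

c, c0 = np.array([2.0, 3.0]), 1.0              # numerator costs (hypothetical)
d, d0 = np.array([1.0, 1.0]), 2.0              # denominator costs (hypothetical)
A = np.array([[1.0, 2.0], [3.0, 1.0], [-1.0, -1.0]])
b = np.array([4.0, 6.0, -1.0])                 # last row enforces x1 + x2 >= 1

# Variables of the transformed LP: (y1, y2, t).
obj  = np.append(c, c0)
A_ub = np.hstack([A, -b.reshape(-1, 1)])
b_ub = np.zeros(len(b))
A_eq = np.append(d, d0).reshape(1, -1)
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * 3)

y, t = res.x[:2], res.x[2]
x = y / t                                      # recover the original variables
print("optimal x =", x, " ratio value =", (c @ x + c0) / (d @ x + d0))
```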

    Probabilistic Black-Box Checking via Active MDP Learning

    We introduce a novel methodology for testing stochastic black-box systems, frequently encountered in embedded systems. Our approach enhances the established black-box checking (BBC) technique to address stochastic behavior. Traditional BBC primarily involves iteratively identifying an input that breaches the system's specifications by executing the following three phases: the learning phase to construct an automaton approximating the black box's behavior, the synthesis phase to identify a candidate counterexample from the learned automaton, and the validation phase to validate the obtained candidate counterexample and the learned automaton against the original black-box system. Our method, ProbBBC, refines the conventional BBC approach by (1) employing an active Markov Decision Process (MDP) learning method during the learning phase, (2) incorporating probabilistic model checking in the synthesis phase, and (3) applying statistical hypothesis testing in the validation phase. ProbBBC uniquely integrates these techniques rather than merely substituting each method in the traditional BBC; for instance, the statistical hypothesis testing and the MDP learning procedure exchange information about the black-box system's observations with one another. The experimental results suggest that ProbBBC outperforms an existing method, especially for systems with limited observation. Comment: Accepted to EMSOFT 202
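
    As an illustration of the validation phase's statistical hypothesis testing, the sketch below runs a Wald sequential probability ratio test (SPRT) against a stub black box. The test setup, thresholds, and error bounds are generic textbook choices, not ProbBBC's actual implementation.

```python
# Sketch of a Wald sequential probability ratio test (SPRT), the kind of
# statistical hypothesis test used in a validation phase.  The black box
# below is a stand-in stub; names, thresholds, and error bounds are made up.
import math
import random

def sprt(sample, p0, p1, alpha=0.01, beta=0.01, max_runs=100_000):
    """Test H0: p >= p0 against H1: p <= p1 (p1 < p0) for a Bernoulli source
    `sample()`; alpha and beta bound the two error probabilities."""
    upper = math.log((1 - beta) / alpha)     # crossing it -> accept H1
    lower = math.log(beta / (1 - alpha))     # crossing it -> accept H0
    llr = 0.0                                # log-likelihood ratio
    for _ in range(max_runs):
        if sample():                         # this execution satisfied the property
            llr += math.log(p1 / p0)
        else:                                # this execution violated the property
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1: probability is at most p1"
        if llr <= lower:
            return "H0: probability is at least p0"
    return "undecided within max_runs"

# Hypothetical black box: the property holds on about 70% of executions,
# while the learned model predicted 0.75; the test will typically report H1.
black_box = lambda: random.random() < 0.70
print(sprt(black_box, p0=0.75, p1=0.70))
```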

    Lazy Probabilistic Model Checking without Determinisation

    The bottleneck in the quantitative analysis of Markov chains and Markov decision processes against specifications given in LTL or as some form of nondeterministic Büchi automata is the inclusion of a determinisation step for the automaton under consideration. In this paper, we show that full determinisation can be avoided: subset and breakpoint constructions suffice. We have implemented our approach---both explicit and symbolic versions---in a prototype tool. Our experiments show that our prototype can compete with mature tools like PRISM. Comment: 38 pages. Updated version introducing the following changes: general improvement of the paper presentation; extension of the approach to avoid full determinisation; added proofs for this extension; added case studies; old case studies updated to reflect the added extension.
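
    For concreteness, the sketch below applies the plain subset construction to the transition structure of a tiny hypothetical nondeterministic automaton. The paper's contribution is to combine such subset states lazily with breakpoint constructions inside the product with the Markov model; that part is not shown here.

```python
# Sketch of the basic subset construction on a tiny hypothetical automaton.
from itertools import chain

# Nondeterministic transitions: (state, letter) -> set of successors.
delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
    ("q2", "a"): {"q2"},
    ("q2", "b"): {"q2"},
}
alphabet = {"a", "b"}
initial = frozenset({"q0"})

def subset_construction(initial, alphabet, delta):
    """Determinise the transition structure: states become sets of states."""
    states, transitions, worklist = {initial}, {}, [initial]
    while worklist:
        current = worklist.pop()
        for letter in alphabet:
            succ = frozenset(
                chain.from_iterable(delta.get((q, letter), ()) for q in current))
            transitions[(current, letter)] = succ
            if succ not in states:
                states.add(succ)
                worklist.append(succ)
    return states, transitions

states, transitions = subset_construction(initial, alphabet, delta)
for (src, letter), dst in sorted(transitions.items(), key=str):
    print(sorted(src), f"--{letter}-->", sorted(dst))
print(len(states), "subset states reachable from", sorted(initial))
```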

    IST Austria Technical Report

    We consider Markov decision processes (MDPs), which are a standard model for probabilistic systems. We focus on qualitative properties for MDPs that can express that desired behaviors of the system arise almost-surely (with probability 1) or with positive probability. We introduce a new simulation relation to capture the refinement relation of MDPs with respect to qualitative properties, and present discrete graph-theoretic algorithms with quadratic complexity to compute the simulation relation. We present an automated technique for assume-guarantee style reasoning for compositional analysis of MDPs with qualitative properties by giving a counterexample-guided abstraction-refinement approach to compute our new simulation relation. We have implemented our algorithms and show that the compositional analysis leads to significant improvements.
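
    As an example of the qualitative analyses involved, the sketch below implements the classical nested-fixpoint algorithm for almost-sure reachability in an MDP (states from which a target is reached with probability 1). It is a representative qualitative computation, not the paper's simulation relation or assume-guarantee algorithm; the MDP is made up.

```python
# Sketch of the classical nested-fixpoint algorithm for almost-sure
# reachability in an MDP.  Only the support of each distribution matters
# for qualitative questions, so the input is purely graph-like.

def almost_sure_reach(post, target):
    """States from which some strategy reaches `target` with probability 1.
    `post[s][a]` is the set of possible successors of action a in state s."""
    Y = set(post)                          # greatest-fixpoint variable
    while True:
        # Least fixpoint: states with an action that stays inside Y and has
        # a positive-probability successor already known to reach the target.
        X = set(target) & Y
        changed = True
        while changed:
            changed = False
            for s in Y - X:
                if any(succ <= Y and succ & X for succ in post[s].values()):
                    X.add(s)
                    changed = True
        if X == Y:
            return Y
        Y = X

# Hypothetical MDP: from s0, action 'safe' either stays in s0 or moves to
# goal (so goal is reached almost surely), while action 'risky' may fall
# into a trap from which goal is unreachable.
post = {
    "s0":   {"safe": {"s0", "goal"}, "risky": {"goal", "trap"}},
    "goal": {"stay": {"goal"}},
    "trap": {"stay": {"trap"}},
}
print(sorted(almost_sure_reach(post, {"goal"})))   # -> ['goal', 's0']
```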