    Bayesian Verification under Model Uncertainty

    Machine learning enables systems to build and update domain models based on runtime observations. In this paper, we study statistical model checking and runtime verification for systems with this ability. Two challenges arise: (1) models built from limited runtime data carry uncertainty that must be dealt with, and (2) there is no definition of satisfaction with respect to uncertain hypotheses. We propose such a definition of subjective satisfaction based on recently introduced satisfaction functions. We also propose the BV algorithm as a Bayesian solution to runtime verification of subjective satisfaction under model uncertainty. BV provides user-definable stochastic bounds on type I and II errors. We discuss empirical results from an example application to illustrate our ideas. Comment: Accepted at SEsCPS @ ICSE 201
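    A minimal sketch of the kind of sequential Bayesian test this suggests, assuming a Bernoulli satisfaction probability p, a uniform Beta(1, 1) prior, a threshold theta and a Bayes-factor bound T (all illustrative choices, not the paper's BV algorithm):

        import random

        from scipy.stats import beta

        def bayes_factor_test(sample, theta=0.9, T=100.0, max_n=10_000):
            """Sequentially test H0: p >= theta against H1: p < theta.

            sample() returns one Bernoulli verdict (True iff a simulated trace
            satisfies the property). Stopping when the Bayes factor leaves the
            interval (1/T, T) keeps both error probabilities near 1/T, which
            mimics user-definable type I and II bounds.
            """
            k = n = 0
            prior_odds = (1.0 - theta) / theta      # prior mass ratio H0 : H1
            while n < max_n:
                k += bool(sample())
                n += 1
                post = beta(1 + k, 1 + n - k)       # posterior over p
                mass_h1 = post.cdf(theta)           # posterior mass on p < theta
                mass_h0 = 1.0 - mass_h1
                bf = (mass_h0 / max(mass_h1, 1e-12)) / prior_odds
                if bf > T:
                    return "accept H0: p >= theta", n
                if bf < 1.0 / T:
                    return "accept H1: p < theta", n
            return "undecided", n

        # Toy system whose traces satisfy the property with probability 0.95.
        print(bayes_factor_test(lambda: random.random() < 0.95))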

    Stacked Thompson Bandits

    We introduce Stacked Thompson Bandits (STB) for efficiently generating plans that are likely to satisfy a given bounded temporal logic requirement. STB uses simulation to evaluate plans and takes a Bayesian approach to using the resulting information to guide its search. In particular, we show that stacking multi-armed bandits and using Thompson sampling to guide the action selection process of each bandit enables STB to generate plans that satisfy the requirement with high probability while searching only a fraction of the search space. Comment: Accepted at SEsCPS @ ICSE 201
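    A minimal sketch of the stacking idea, assuming a fixed action set, a fixed plan horizon and a Boolean simulator (illustrative choices; the paper's STB may differ in detail):

        import random

        def stb(simulate, actions, horizon, episodes=2000):
            """One Bernoulli bandit per plan step, with a Beta(1, 1) posterior
            kept per (step, action) arm.

            simulate(plan) returns True iff the simulated trace of `plan`
            satisfies the bounded temporal-logic requirement.
            """
            post = [{a: [1, 1] for a in actions} for _ in range(horizon)]
            for _ in range(episodes):
                # Thompson sampling: draw from each arm's posterior, take the argmax.
                plan = [max(actions, key=lambda a: random.betavariate(*post[t][a]))
                        for t in range(horizon)]
                ok = simulate(plan)
                for t, a in enumerate(plan):        # same Bernoulli reward at every step
                    post[t][a][0 if ok else 1] += 1
            # Return the plan that is greedy with respect to the posterior means.
            return [max(actions, key=lambda a: post[t][a][0] / sum(post[t][a]))
                    for t in range(horizon)]

        # Toy requirement: only the plan "a", "b", "a", "b" satisfies it.
        print(stb(lambda p: p == ["a", "b", "a", "b"], ["a", "b"], 4))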

    Cross-entropy optimisation of importance sampling parameters for statistical model checking

    Statistical model checking avoids the exponential growth of states associated with probabilistic model checking by estimating properties from multiple executions of a system and by giving results within confidence bounds. Rare properties are often very important but pose a particular challenge for simulation-based approaches; a key objective under these circumstances is therefore to reduce the number and length of the simulations needed to reach a given level of confidence. Importance sampling is a well-established technique that achieves this; however, to maintain the advantages of statistical model checking it is necessary to find good importance sampling distributions without considering the entire state space. Motivated by the above, we present a simple algorithm that uses the notion of cross-entropy to find the optimal parameters for an importance sampling distribution. In contrast to previous work, our algorithm uses a low-dimensional vector of parameters to define this distribution and thus avoids the often intractable explicit representation of a transition matrix. We show that our parametrisation leads to a unique optimum and can produce many orders of magnitude improvement in simulation efficiency. We demonstrate the efficacy of our methodology by applying it to models from reliability engineering and biochemistry. Comment: 16 pages, 8 figures, LNCS style
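    A minimal sketch of a cross-entropy iteration for a one-parameter family, here estimating the rare event {X >= gamma} for X ~ Exp(1); the toy model, the elite fraction rho and the closed-form update are textbook choices used for illustration, not the paper's parametrisation:

        import math
        import random

        def ce_importance_sampling(gamma=20.0, n=10_000, rho=0.1, rounds=20):
            """Tilt the sampling rate lam towards the rare event {x >= gamma}.

            Each round raises an intermediate level gamma_t (the (1 - rho)-
            quantile) towards gamma and re-fits lam by weighted maximum
            likelihood, which is closed form for an exponential, so no
            transition matrix is ever represented explicitly.
            """
            lam = 1.0                                # start at the original rate
            for _ in range(rounds):
                xs = sorted(random.expovariate(lam) for _ in range(n))
                gamma_t = min(gamma, xs[int((1 - rho) * n)])    # adaptive level
                # Likelihood ratio of the original Exp(1) density against the
                # tilted Exp(lam) density, restricted to the current event.
                ws = [math.exp(-x) / (lam * math.exp(-lam * x)) if x >= gamma_t
                      else 0.0 for x in xs]
                lam = sum(ws) / sum(w * x for w, x in zip(ws, xs))  # weighted MLE
                if gamma_t >= gamma:
                    break
            return lam, sum(ws) / n                  # tilted rate, IS estimate

        print(ce_importance_sampling())              # exact value: exp(-20) ≈ 2.1e-9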

    BioDiVinE: A Framework for Parallel Analysis of Biological Models

    In this paper a novel tool, BioDiVinE, for parallel analysis of biological models is presented. The tool allows analysis of biological models specified in terms of a set of chemical reactions. Chemical reactions are transformed into a system of multi-affine differential equations. BioDiVinE employs techniques for finite discrete abstraction of the continuous state space. At that level, parallel analysis algorithms based on model checking are provided. In the paper, the key tool features are described and their application is demonstrated by means of a case study.
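    A minimal sketch of how a reaction set induces a multi-affine vector field under mass-action kinetics; the two reactions are hypothetical, and this is not BioDiVinE's input syntax. The finite abstraction then partitions each variable's axis by thresholds and model-checks the resulting rectangular state space.

        # Each reaction: (rate constant, reactant stoichiometry, product stoichiometry).
        reactions = [
            (0.5, {"A": 1, "B": 1}, {"C": 1}),   # A + B -> C
            (0.1, {"C": 1}, {"A": 1, "B": 1}),   # C -> A + B
        ]

        def rhs(x):
            """Mass-action vector field dx/dt; with stoichiometries of at most
            one per species, every rate is multi-affine in the state x."""
            dx = {s: 0.0 for s in x}
            for k, ins, outs in reactions:
                rate = k
                for s, m in ins.items():
                    rate *= x[s] ** m            # product of concentrations
                for s, m in ins.items():
                    dx[s] -= m * rate            # consumed reactants
                for s, m in outs.items():
                    dx[s] += m * rate            # produced species
            return dx

        print(rhs({"A": 2.0, "B": 1.0, "C": 0.5}))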

    Reliable sequential testing for statistical model checking

    We introduce a framework for comparing statistical model checking (SMC) techniques and propose a new, more reliable SMC technique. Statistical model checking has recently been implemented in tools like UPPAAL and PRISM to handle models that are too complex for numerical analysis. However, these techniques turn out to have shortcomings, most notably that the validity of their outcomes depends on parameters that must be chosen a priori. Our new technique does not have this problem; we prove its correctness and numerically compare its performance to existing techniques.
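    The a-priori parameter dependence mentioned above can be seen in Wald's classic sequential probability ratio test (SPRT), sketched here with an assumed indifference half-width delta; this illustrates the class of existing techniques being compared, not the new one:

        import math
        import random

        def sprt(sample, theta=0.5, delta=0.01, alpha=0.05, beta=0.05):
            """Wald's SPRT for H0: p >= theta + delta vs H1: p <= theta - delta.

            delta, alpha and beta must all be fixed before any sampling; a poor
            choice of delta silently invalidates the verdict, which is the kind
            of shortcoming a comparison framework has to make explicit.
            """
            p0, p1 = theta + delta, theta - delta
            lo = math.log(beta / (1 - alpha))    # accept H0 at or below this
            hi = math.log((1 - beta) / alpha)    # accept H1 at or above this
            llr, n = 0.0, 0
            while lo < llr < hi:
                x = bool(sample())
                n += 1
                llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            return ("accept H1: p <= theta" if llr >= hi
                    else "accept H0: p >= theta"), n

        print(sprt(lambda: random.random() < 0.6))   # true p = 0.6, expect H0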

    A Study of the PDGF Signaling Pathway with PRISM

    In this paper, we apply the probabilistic model checker PRISM to the analysis of a biological system -- the Platelet-Derived Growth Factor (PDGF) signaling pathway -- demonstrating in detail how this pathway can be analyzed in PRISM. We show that quantitative verification can yield a better understanding of the PDGF signaling pathway. Comment: In Proceedings CompMod 2011, arXiv:1109.104
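    Quantitative verification of such a pathway ultimately reduces to numerical analysis of a continuous-time Markov chain; a minimal sketch of that computation on an illustrative three-state chain (not the paper's PDGF model):

        import numpy as np
        from scipy.linalg import expm

        # Generator matrix of a toy receptor chain: inactive -> bound -> active.
        Q = np.array([[-0.8,  0.8,  0.0],     # rows sum to zero
                      [ 0.2, -1.2,  1.0],
                      [ 0.0,  0.0,  0.0]])    # 'active' is absorbing
        p0 = np.array([1.0, 0.0, 0.0])        # start in the inactive state

        # Transient distribution p(t) = p0 * exp(Q t): the core computation
        # behind a time-bounded PRISM query such as P=? [ F<=t "active" ].
        for t in (1.0, 5.0, 20.0):
            pt = p0 @ expm(Q * t)
            print(f"t={t:5.1f}  P(active) = {pt[2]:.4f}")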

    On minimising the maximum expected verification time

    Cyber-Physical Systems (CPSs) consist of hardware and software components. To verify that the whole (i.e., software + hardware) system meets the given specifications, exhaustive simulation-based approaches (Hardware In the Loop Simulation, HILS) can be effectively used by first generating all relevant simulation scenarios (i.e., sequences of disturbances) and then actually simulating all of them (the verification phase). Considering the whole verification activity, this verification phase is repeated until no error is found. Accordingly, to minimise the time taken by the whole verification activity, each verification phase should ideally start by simulating scenarios that witness errors (counterexamples). Of course, knowing the set of such scenarios beforehand is not feasible. In this paper we show how to select scenarios so as to minimise the Worst Case Expected Verification Time.
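    Under a simplified model in which scenario i independently exposes an error with estimated probability p_i and takes time t_i to simulate, a standard exchange argument shows that running scenarios in decreasing order of p_i / t_i minimises the expected time to the first counterexample. A minimal sketch with hypothetical estimates (the paper's worst-case expected-time criterion is more refined than this):

        def order_scenarios(scenarios):
            """Order (name, p, t) scenarios to reduce the expected time until
            the first counterexample is found; sorting by p / t descending is
            optimal for independent scenarios by a pairwise exchange argument.
            """
            return sorted(scenarios, key=lambda s: s[1] / s[2], reverse=True)

        # Hypothetical estimates: (scenario id, error probability, sim time in s).
        queue = order_scenarios([("s1", 0.01, 5.0), ("s2", 0.20, 60.0),
                                 ("s3", 0.05, 2.0)])
        print([s[0] for s in queue])             # ['s3', 's2', 's1']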