
    Comparator automata in quantitative verification

    Full text link
    The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values. We show that aggregate functions that can be represented by Büchi automata yield comparators that are finite-state and also accept by the Büchi condition. Such ω-regular comparators further lead to generic algorithms for a number of well-studied problems, including quantitative inclusion and winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples to quantitative inclusion. We study comparators for two aggregate functions: discounted-sum and limit-average. We prove that the discounted-sum comparator is ω-regular iff the discount factor is an integer. Not every aggregate function, however, has an ω-regular comparator. Specifically, we show that the language of sequence pairs for which the limit-average aggregate exists is neither ω-regular nor ω-context-free. Given this result, we introduce the notion of prefix-average as a relaxation of limit-average aggregation, and show that it admits ω-context-free comparators.
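
    To make the discounted-sum case concrete, here is a minimal sketch (not the paper's automaton construction) of the bounded-carry idea behind why an integer discount factor d gives a finite-state comparator: the integer carry x_k = d·x_{k-1} + (a_k − b_k) fixes the sign of DS(A) − DS(B) as soon as it leaves a bounded window, so only finitely many carry values ever matter. The stream names, weight bound mu, and step cap below are illustrative assumptions.

```python
from itertools import islice, repeat, chain

def compare_discounted_sums(a_stream, b_stream, d, mu, max_steps=1000):
    """Online comparison of DS(A) = sum_i a_i / d**i against DS(B), assuming an
    integer discount factor d >= 2 and integer weights in [0, mu].
    The carry x_k = d*x_{k-1} + (a_k - b_k) satisfies: once |x_k| > mu/(d-1),
    the sign of DS(A) - DS(B) is fixed no matter how the sequences continue.
    Until a decision is reached the carry stays in a bounded integer window,
    which is the intuition behind a finite-state (omega-regular) comparator."""
    threshold = mu / (d - 1)
    x = 0
    for step, (a, b) in enumerate(islice(zip(a_stream, b_stream), max_steps)):
        x = d * x + (a - b)
        if x > threshold:
            return ">", step
        if x < -threshold:
            return "<", step
    # Ties (or sequences that stay inside the window forever) are where the
    # Büchi acceptance condition of an actual comparator automaton takes over.
    return "undecided", max_steps

# A = 3,0,0,... has DS(A) = 3; B = 1,1,1,... has DS(B) = 2 (d = 2, mu = 3).
print(compare_discounted_sums(chain([3], repeat(0)), repeat(1), d=2, mu=3))
# A = 2,0,0,... and B = 1,1,1,... both have discounted sum 2: never decided.
print(compare_discounted_sums(chain([2], repeat(0)), repeat(1), d=2, mu=2))
```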

    On the Number of Bootstrap Repetitions for BC_a Confidence Intervals

    Get PDF
    This paper considers the problem of choosing the number of bootstrap repetitions B to use with the BC_a bootstrap confidence intervals introduced by Efron (1987). Because the simulated random variables are ancillary, we seek a choice of B that yields a confidence interval that is close to the ideal bootstrap confidence interval for which B = infinity. We specify a three-step method of choosing B that ensures that the lower and upper lengths of the confidence interval deviate from those of the ideal bootstrap confidence interval by at most a small percentage with high probability.
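
    For reference, a minimal sketch of computing a BC_a interval for one fixed B with SciPy's bootstrap routine; the toy data, the statistic, and the choice B = 2000 are illustrative assumptions, and the paper's three-step rule for selecting B is not reproduced here.

```python
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=2.0, size=200)   # toy data

# BC_a interval for the mean using B = 2000 bootstrap repetitions.
res = bootstrap((sample,), np.mean, n_resamples=2000,
                confidence_level=0.95, method='BCa')
print(res.confidence_interval)   # lower and upper endpoints
```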

    Balancing Scalability and Uniformity in SAT Witness Generator

    Full text link
    Constrained-random simulation is the predominant approach used in the industry for functional verification of complex digital designs. The effectiveness of this approach depends on two key factors: the quality of constraints used to generate test vectors, and the randomness of solutions generated from a given set of constraints. In this paper, we focus on the second problem, and present an algorithm that significantly improves the state of the art of (almost-)uniform generation of solutions of large Boolean constraints. Our algorithm provides strong theoretical guarantees on the uniformity of generated solutions and scales to problems involving hundreds of thousands of variables. Comment: This is a full version of the DAC 2014 paper.
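
    The abstract does not spell out the algorithm, but the hashing-based paradigm such samplers build on can be sketched: add random XOR constraints to split the solution space into small cells, enumerate one cell, and pick a solution from it at random. This is only an illustrative sketch, not the paper's algorithm; the solver choice, XOR sizes, and pivot are arbitrary assumptions, and it requires the python-sat package.

```python
import random
from pysat.solvers import Glucose3   # pip install python-sat

def xor_to_cnf(variables, parity):
    """Encode XOR(variables) = parity as CNF by forbidding every assignment
    of the wrong parity (exponential in len(variables); tiny XORs only)."""
    clauses = []
    for bits in range(2 ** len(variables)):
        assignment = [(bits >> i) & 1 for i in range(len(variables))]
        if sum(assignment) % 2 != parity:
            clauses.append([v if a == 0 else -v
                            for v, a in zip(variables, assignment)])
    return clauses

def sample_solution(cnf, num_vars, num_xors=3, pivot=8):
    """Hash the solution space with a few random XORs, enumerate up to
    `pivot` solutions in the resulting cell, and return one at random.
    Real tools size the XORs from an (approximate) model count."""
    solver = Glucose3(bootstrap_with=cnf)
    for _ in range(num_xors):
        vs = random.sample(range(1, num_vars + 1), min(3, num_vars))
        solver.append_formula(xor_to_cnf(vs, random.randint(0, 1)))
    models = []
    while len(models) < pivot and solver.solve():
        model = solver.get_model()
        models.append(model)
        solver.add_clause([-lit for lit in model])   # block this solution
    solver.delete()
    return random.choice(models) if models else None  # empty cell: retry with fewer XORs

# Toy formula: (x1 or x2) and (not x1 or x3)
print(sample_solution([[1, 2], [-1, 3]], num_vars=3, num_xors=1))
```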

    Multi-Objective Model Checking of Markov Decision Processes

    Get PDF
    We study and provide efficient algorithms for multi-objective model checking problems for Markov Decision Processes (MDPs). Given an MDP M, multiple linear-time (ω-regular or LTL) properties φ_i, and probabilities r_i ∈ [0,1], i = 1,...,k, we ask whether there exists a strategy σ for the controller such that, for all i, the probability that a trajectory of M controlled by σ satisfies φ_i is at least r_i. We provide an algorithm that decides whether there exists such a strategy and, if so, produces it, and which runs in time polynomial in the size of the MDP. Such a strategy may require the use of both randomization and memory. We also consider more general multi-objective ω-regular queries, which we motivate with an application to assume-guarantee compositional reasoning for probabilistic systems. Note that there can be trade-offs between different properties: satisfying property φ_1 with high probability may necessitate satisfying φ_2 with low probability. Viewing this as a multi-objective optimization problem, we want information about the "trade-off curve" or Pareto curve for maximizing the probabilities of different properties. We show that one can compute an approximate Pareto curve with respect to a set of ω-regular properties in time polynomial in the size of the MDP. Our quantitative upper bounds use LP methods. We also study qualitative multi-objective model checking problems, and we show that these can be analysed by purely graph-theoretic methods, even though the strategies may still require both randomization and memory. Comment: 21 pages, 2 figures.
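
    The LP methods mentioned at the end can be illustrated on a toy instance. The sketch below is only a guess at the flavour of the approach, restricted to reachability of absorbing targets rather than general ω-regular properties: it checks whether some randomized memoryless strategy meets two probability thresholds by solving a feasibility LP over action occupancies. The MDP, thresholds, and variable names are made up.

```python
from scipy.optimize import linprog

# Toy MDP: one transient state s0 with actions a, b; absorbing targets t1, t2.
#   P(t1 | s0, a) = 0.9,  P(t2 | s0, a) = 0.1
#   P(t1 | s0, b) = 0.2,  P(t2 | s0, b) = 0.8
# Variables y_a, y_b: probability mass with which each action is played in s0.
r1, r2 = 0.5, 0.4                 # required Pr[reach t1] and Pr[reach t2]

A_eq = [[1.0, 1.0]]               # y_a + y_b = 1 (all initial mass leaves s0)
b_eq = [1.0]
A_ub = [[-0.9, -0.2],             # 0.9*y_a + 0.2*y_b >= r1
        [-0.1, -0.8]]             # 0.1*y_a + 0.8*y_b >= r2
b_ub = [-r1, -r2]

res = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])
if res.success:
    y_a, y_b = res.x              # a randomized memoryless strategy
    print(f"feasible: play a with prob {y_a:.3f}, b with prob {y_b:.3f}")
else:
    print("no strategy meets both thresholds")
```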

    A Continuous-Discontinuous Second-Order Transition in the Satisfiability of Random Horn-SAT Formulas

    Full text link
    We compute the probability of satisfiability of a class of random Horn-SAT formulae, motivated by a connection with the nonemptiness problem of finite tree automata. In particular, when the maximum clause length is 3, this model displays a curve in its parameter space along which the probability of satisfiability is discontinuous, ending in a second-order phase transition where it becomes continuous. This is the first case in which a phase transition of this type has been rigorously established for a random constraint satisfaction problem.
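
    The object of study can be explored empirically. Below is a small sketch that estimates the satisfiability probability of random Horn formulas by forward chaining on the definite clauses and then checking the purely negative clauses against the least model; the random model used here (clause lengths up to 3, a positive head with probability 1/2) is an assumption and need not match the paper's parameterization.

```python
import random

def horn_sat(clauses):
    """A clause is (body, head): body is a set of variables (negated literals),
    head is a variable or None. Compute the least model of the definite
    clauses by forward chaining, then require every all-negative clause to
    have some body variable left false."""
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head in clauses:
            if head is not None and head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return all(not body <= derived for body, head in clauses if head is None)

def random_horn_formula(n, m, max_len=3):
    """m random Horn clauses over n variables, length <= max_len,
    with a positive head half of the time (illustrative model only)."""
    clauses = []
    for _ in range(m):
        vs = random.sample(range(n), random.randint(1, max_len))
        if random.random() < 0.5:
            clauses.append((frozenset(vs[1:]), vs[0]))
        else:
            clauses.append((frozenset(vs), None))
    return clauses

# Crude Monte Carlo estimate of Pr[satisfiable] at one point in parameter space.
n, m, trials = 50, 120, 200
hits = sum(horn_sat(random_horn_formula(n, m)) for _ in range(trials))
print(f"estimated P(sat) ≈ {hits / trials:.2f}")
```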