
    Quantifier Elimination over Finite Fields Using Gröbner Bases

    We give an algebraic quantifier elimination algorithm for the first-order theory over any given finite field using Gröbner basis methods. The algorithm relies on the strong Nullstellensatz and properties of elimination ideals over finite fields. We analyze the theoretical complexity of the algorithm and show its application in the formal analysis of a biological controller model. Comment: A shorter version is to appear in the International Conference on Algebraic Informatics 201
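
    The abstract only names the ingredients (lex Gröbner bases, field equations, elimination ideals), so the following is just a minimal sketch of the elimination-ideal step using SymPy, on a made-up one-variable query: for which y in F_3 does some x in F_3 satisfy x^2 = y?

        from sympy import symbols, groebner

        # Hypothetical example, not taken from the paper: decide for which y in F_3
        # there exists x in F_3 with x^2 = y (i.e. y is a quadratic residue).
        p = 3
        x, y = symbols('x y')
        constraints = [x**2 - y]
        field_eqs = [x**p - x, y**p - y]   # restrict solutions to the finite field F_p

        # Lex Groebner basis with x ordered before y, coefficients taken mod p.
        G = groebner(constraints + field_eqs, x, y, order='lex', modulus=p)

        # Basis elements free of x generate the elimination ideal; their common
        # zeros in F_p are exactly the values of y that admit a witness x.
        eliminated = [g for g in G.exprs if x not in g.free_symbols]
        print(eliminated)   # expected: a polynomial in y vanishing exactly on {0, 1}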

    Learning Probabilistic Systems from Tree Samples

    We consider the problem of learning a non-deterministic probabilistic system consistent with a given finite set of positive and negative tree samples. Consistency is defined with respect to strong simulation conformance. We propose learning algorithms that use traditional and a new "stochastic" state-space partitioning, the latter resulting in the minimum number of states. We then use them to solve the problem of "active learning", which uses a knowledgeable teacher to generate samples as counterexamples to simulation equivalence queries. We show that the problem is undecidable in general, but that it becomes decidable under a suitable condition on the teacher which arises naturally from the way samples are generated from failed simulation checks. The latter problem is shown to be undecidable if we additionally require the learner to always conjecture a "minimum state" hypothesis. We therefore propose a semi-algorithm using stochastic partitions. Finally, we apply the proposed (semi-)algorithms to infer intermediate assumptions in an automated assume-guarantee verification framework for probabilistic systems. Comment: 14 pages, conference paper with full proof

    Compositional Synthesis via a Convex Parameterization of Assume-Guarantee Contracts

    We develop an assume-guarantee framework for control of large-scale linear (time-varying) systems from finite-time reach-avoid or infinite-time invariance specifications. The contracts describe the admissible sets of states and controls for individual subsystems. A set of contracts composes correctly if mutual assumptions and guarantees match in a way that we formalize. We propose a rich parameterization of contracts such that the set of parameters that compose correctly is convex. Moreover, we design a potential function of the parameters that describes the distance of the contracts from a correct composition. Thus, verification and synthesis for the aggregate system are broken down into small convex programs for individual subsystems, and correctness is ultimately achieved in a compositional way. Illustrative examples demonstrate the scalability of our method.
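
    The contract parameterization and potential function above are specific to the paper; as a toy illustration of the underlying idea of matching assumptions and guarantees through a small convex program, the sketch below (using CVXPY, an assumption) searches for invariant bounds for two coupled stable scalar subsystems, where each subsystem's guarantee serves as the other's assumption.

        import cvxpy as cp

        # Hypothetical coupled scalar dynamics (not from the paper):
        #   x1+ = 0.5*x1 + 0.2*x2      x2+ = 0.6*x2 + 0.1*x1
        # Contract of subsystem i: guarantee |x_i| <= g_i, assuming |x_j| <= g_j.
        g1 = cp.Variable(nonneg=True)
        g2 = cp.Variable(nonneg=True)

        constraints = [
            0.5 * g1 + 0.2 * g2 <= g1,   # under assumption |x2| <= g2, |x1| <= g1 is invariant
            0.6 * g2 + 0.1 * g1 <= g2,   # under assumption |x1| <= g1, |x2| <= g2 is invariant
            g1 >= 1.0,                   # bounds must cover the initial sets
            g2 >= 1.0,
        ]

        # Any feasible point is a pair of contracts that composes correctly;
        # the objective plays the role of a potential pushing toward one.
        problem = cp.Problem(cp.Minimize(g1 + g2), constraints)
        problem.solve()
        print(problem.status, float(g1.value), float(g2.value))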

    Model checking: Algorithmic verification and debugging

    Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis, working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model (representing, for example, a hardware or software design) satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result, many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. -Moshe Y. Vardi, Editor-in-Chief
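
    As a concrete, minimal illustration of the core idea (not of any particular tool mentioned above), the sketch below checks an invariant "no reachable state is bad" on a hand-written finite-state model and extracts a counterexample execution when the property fails.

        from collections import deque

        # Hypothetical finite-state model: transitions and a set of bad states.
        transitions = {
            's0': ['s1', 's2'],
            's1': ['s1'],
            's2': ['s3'],
            's3': ['s3'],
        }
        bad_states = {'s3'}
        initial = 's0'

        def check_invariant(initial, transitions, bad_states):
            """Breadth-first reachability: returns (True, None) if no bad state is
            reachable, otherwise (False, counterexample path from the initial state)."""
            parent = {initial: None}
            queue = deque([initial])
            while queue:
                s = queue.popleft()
                if s in bad_states:
                    path = []
                    while s is not None:      # walk parents back to the initial state
                        path.append(s)
                        s = parent[s]
                    return False, list(reversed(path))
                for t in transitions.get(s, []):
                    if t not in parent:
                        parent[t] = s
                        queue.append(t)
            return True, None

        print(check_invariant(initial, transitions, bad_states))
        # -> (False, ['s0', 's2', 's3'])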

    Verifying Temporal Regular Properties of Abstractions of Term Rewriting Systems

    Tree automaton completion is an algorithm used for proving safety properties of systems that can be modeled by a term rewriting system. This representation and verification technique works well for proving properties of infinite systems like cryptographic protocols or, more recently, Java Bytecode programs. The algorithm computes a tree automaton which represents a (regular) over-approximation of the set of terms reachable by rewriting from the initial terms. This approach is limited by the lack of information about the rewriting relation between terms. Indeed, terms related by rewriting fall into the same equivalence class: they are recognized by the same state in the tree automaton. Our objective is to produce an automaton embedding an abstraction of the rewriting relation sufficient to prove temporal properties of the term rewriting system. We propose to extend the algorithm to produce an automaton having more equivalence classes, so as to distinguish a term or a subterm from its successors w.r.t. rewriting. While ground transitions are used to recognize equivalence classes of terms, epsilon-transitions represent the rewriting relation between terms. From the completed automaton, it is possible to automatically build a Kripke structure abstracting the rewriting sequence. The states of the Kripke structure are the states of the tree automaton, and the transition relation is given by the set of epsilon-transitions. Each state of the Kripke structure is labelled by the set of terms recognized using ground transitions. On this Kripke structure, we define Regular Linear Temporal Logic (R-LTL) for expressing properties. Such properties can then be checked using standard model checking algorithms. The only difference between LTL and R-LTL is that predicates are replaced by regular sets of acceptable terms.
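
    A small sketch of the construction described above, on a made-up completed automaton: the Kripke states are the automaton states, the transition relation comes from the epsilon-transitions, and each state is labelled with the terms it recognizes using ground transitions only.

        from itertools import product

        # Hypothetical completed tree automaton (not from the paper).
        # Ground transitions: (symbol, (argument states...)) -> state
        ground = {
            ('zero', ()): 'q0',
            ('succ', ('q0',)): 'q1',
            ('succ', ('q1',)): 'q2',
        }
        # Epsilon-transitions abstract the rewriting steps between recognized terms.
        epsilon = {('q0', 'q1'), ('q1', 'q2')}

        def state_labels(ground):
            """Label each state with the terms it recognizes via ground transitions
            only (bottom-up fixpoint; finite here because the example is acyclic)."""
            terms = {q: set() for q in ground.values()}
            changed = True
            while changed:
                changed = False
                for (symbol, args), q in ground.items():
                    for combo in product(*(terms[a] for a in args)):
                        t = symbol if not args else '%s(%s)' % (symbol, ', '.join(combo))
                        if t not in terms[q]:
                            terms[q].add(t)
                            changed = True
            return terms

        # The abstract Kripke structure: states, transition relation, labelling.
        kripke_states = set(ground.values())
        kripke_transitions = epsilon
        kripke_labels = state_labels(ground)
        print(kripke_labels)   # q0: {zero}, q1: {succ(zero)}, q2: {succ(succ(zero))}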

    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions from the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve its efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed to run in parallel, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
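
    As a minimal sketch of the estimation step, the code below treats each simulation-plus-monitor run as a Bernoulli trial (the stochastic simulation and the on-the-fly LTL monitor are stubbed out as a biased coin, an assumption) and keeps sampling until the Wilson score interval around the estimated probability is tight enough.

        import math
        import random

        def wilson_interval(successes, n, z=1.96):
            """Wilson score confidence interval for a Bernoulli proportion."""
            if n == 0:
                return 0.0, 1.0
            p_hat = successes / n
            denom = 1.0 + z * z / n
            centre = (p_hat + z * z / (2 * n)) / denom
            half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
            return max(0.0, centre - half), min(1.0, centre + half)

        def run_satisfies_property():
            """Stub for 'simulate the network and monitor the LTL property P on the
            fly'; here just a coin whose bias stands in for the unknown probability."""
            return random.random() < 0.3

        def estimate_probability(half_width=0.02, z=1.96, max_samples=200000):
            successes = 0
            n = 0
            while n < max_samples:
                successes += run_satisfies_property()
                n += 1
                lo, hi = wilson_interval(successes, n, z)
                if hi - lo <= 2 * half_width:   # stop once the interval is tight enough
                    break
            return successes / n, (lo, hi), n

        print(estimate_probability())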

    Differentially Testing Soundness and Precision of Program Analyzers

    In the last decades, numerous program analyzers have been developed by both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing the soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
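
    As a schematic sketch of the classification step only (program generation and real analyzers are out of scope here, and everything below is hypothetical), the code uses stub analyzer verdicts and a per-program dynamic oracle, and flags disagreements as soundness or possible precision issues.

        # Each generated "program" is reduced to two facts for this sketch:
        # whether some concrete execution violates its assertion (the oracle),
        # and the verdict each stub analyzer reports ('safe' or 'warning').
        programs = [
            {'name': 'p1', 'oracle_fails': False, 'verdicts': {'A1': 'safe', 'A2': 'warning'}},
            {'name': 'p2', 'oracle_fails': True,  'verdicts': {'A1': 'safe', 'A2': 'warning'}},
            {'name': 'p3', 'oracle_fails': True,  'verdicts': {'A1': 'warning', 'A2': 'warning'}},
        ]

        def classify(program):
            """Compare each analyzer's verdict against the dynamic oracle."""
            issues = []
            for analyzer, verdict in program['verdicts'].items():
                if program['oracle_fails'] and verdict == 'safe':
                    issues.append((analyzer, 'soundness issue'))            # missed a real violation
                if not program['oracle_fails'] and verdict == 'warning':
                    issues.append((analyzer, 'possible precision issue'))   # warning never witnessed
            return issues

        for p in programs:
            print(p['name'], classify(p))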