
    Model Checking Linear Logic Specifications

    The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider in this paper is based on the linear logic programming language LO enriched with universally quantified goal formulas. Although LO was originally introduced as a theoretical foundation for extensions of logic programming languages, it can also be viewed as a very general language for specifying a wide range of infinite-state concurrent systems. Our approach is based on the relation between backward reachability and provability highlighted in our previous work on propositional LO programs. Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas. The theory of well quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of non-trivial fragments of first-order linear logic. Comment: 53 pages, 12 figures. Under consideration for publication in Theory and Practice of Logic Programming.
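    To make the connection between backward reachability and symbolic fixpoint evaluation concrete, the sketch below shows the classical backward coverability procedure for a Petri net / vector addition system, where upward-closed sets of markings are represented by their finite sets of minimal elements and termination follows from Dickson's lemma (a well quasi-ordering). It is only an analogy for the paper's bottom-up evaluation of LO specifications, not the procedure itself; all names are illustrative.

```python
# Hedged sketch: backward coverability over markings in N^k, not the paper's
# LO evaluation procedure.  Upward-closed sets are represented symbolically by
# their minimal elements; Dickson's lemma guarantees the fixpoint terminates.

def leq(a, b):
    """Componentwise order on markings (tuples of naturals)."""
    return all(x <= y for x, y in zip(a, b))

def minimize(markings):
    """Keep only minimal elements: they generate the same upward-closed set."""
    return {m for m in markings
            if not any(n != m and leq(n, m) for n in markings)}

def pre_basis(m, transitions):
    """Minimal markings from which one transition reaches a marking covering m."""
    result = set()
    for take, put in transitions:           # a transition consumes `take`, produces `put`
        result.add(tuple(t + max(x - p, 0) for x, t, p in zip(m, take, put)))
    return result

def backward_coverability(init, target, transitions):
    """Is some marking covering `target` reachable from `init`?"""
    basis = {target}
    while True:
        new = minimize(basis | {p for m in basis for p in pre_basis(m, transitions)})
        if new == basis:                    # upward-closed sets stabilized (wqo argument)
            break
        basis = new
    return any(leq(m, init) for m in basis)

# Example: one place, one transition producing a token; (3,) is coverable from (0,).
print(backward_coverability((0,), (3,), [((0,), (1,))]))   # True
```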

    Well Structured Transition Systems with History

    We propose a formal model of concurrent systems in which the history of a computation is explicitly represented as a collection of events that provide a view of a sequence of configurations. In our model, events generated by transitions become part of the system configurations, leading to an operational semantics with historical data. This model allows us to formalize what is usually done in symbolic verification algorithms. Indeed, search algorithms often use meta-information, e.g., names of fired transitions, selected processes, etc., to reconstruct (error) traces from symbolic state exploration. The other interesting point of the proposed model is related to a possible new application of the theory of well-structured transition systems (WSTS). In our setting, WSTS theory can be applied to formally extend the class of properties that can be verified using coverability to take into consideration (ordered and unordered) historical data. This can be done by using different types of representations of collections of events and by combining them with WSTS using closure properties of well-quasi-orderings. Comment: In Proceedings GandALF 2015, arXiv:1509.0685
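    As a rough illustration of how unordered historical data can be added while preserving a well-quasi-ordering, the Python sketch below pairs each configuration with a bag of recorded event names and orders such pairs by the product of the original ordering and sub-multiset inclusion; `leq_conf` and the `transition` interface are assumptions for the example, not the paper's formalization.

```python
# Hypothetical sketch of a WSTS extended with history.  If `leq_conf` is a wqo
# and the event alphabet is finite, the product with sub-multiset inclusion is
# again a wqo (Dickson's lemma), so coverability-style algorithms still apply.

from collections import Counter

def leq_bag(small, big):
    """Sub-multiset inclusion on event histories."""
    return all(big[e] >= n for e, n in small.items())

def leq_hist(c1, c2, leq_conf):
    """Product ordering on (configuration, history) pairs."""
    (conf1, hist1), (conf2, hist2) = c1, c2
    return leq_conf(conf1, conf2) and leq_bag(hist1, hist2)

def step(conf, hist, transition):
    """Firing a transition records its name in the history component."""
    new_conf = transition.apply(conf)          # assumed transition interface
    return new_conf, hist + Counter([transition.name])
```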

    Introducing the Concept of Activation and Blocking of Rules in the General Framework for Regulated Rewriting in Sequential Grammars

    We introduce new possibilities to control the application of rules based on the preceding application of rules, which can be defined for a general model of sequential grammars, and we show some similarities to other control mechanisms such as graph-controlled grammars and matrix grammars with and without applicability checking, as well as grammars with random context conditions and ordered grammars. Using both activation and blocking of rules, in the string and in the multiset case we can show computational completeness of context-free grammars equipped with the control mechanism of activation and blocking of rules even when using only two nonterminal symbols.
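    The toy Python simulator below is one possible reading of activation and blocking in a sequential string grammar: a rule can fire only if it is currently activated and not blocked, and firing it determines which rules are activated and blocked at the next step. It is a hedged sketch, not the paper's formal definition; the rule names and the `Rule`/`derive` interface are invented for the example.

```python
# Illustrative only: sequential string rewriting under activation/blocking control.

class Rule:
    def __init__(self, name, lhs, rhs, activates=(), blocks=()):
        self.name, self.lhs, self.rhs = name, lhs, rhs
        self.activates, self.blocks = set(activates), set(blocks)

def derive(word, rules, active, steps):
    """Apply `steps` rewriting steps; a rule must be activated and not blocked."""
    blocked = set()
    for _ in range(steps):
        usable = [r for r in rules
                  if r.name in active and r.name not in blocked and r.lhs in word]
        if not usable:
            break
        r = usable[0]                              # pick one applicable rule
        word = word.replace(r.lhs, r.rhs, 1)       # rewrite the leftmost occurrence
        active = r.activates                       # rules activated for the next step
        blocked = r.blocks                         # rules blocked for the next step
    return word

# Example: p1 and p2 activate each other, forcing strict alternation.
rules = [Rule("p1", "A", "aA", activates={"p2"}),
         Rule("p2", "A", "bA", activates={"p1"})]
print(derive("A", rules, active={"p1"}, steps=4))   # ababA
```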

    CoLoR: a Coq library on well-founded rewrite relations and its application to the automated verification of termination certificates

    Termination is an important property of programs; it is notably required for programs formulated in proof assistants. It is a very active subject of research in the Turing-complete formalism of term rewriting systems, where many methods and tools have been developed over the years to address this problem. Ensuring the reliability of those tools is therefore an important issue. In this paper we present a library formalizing important results of the theory of well-founded (rewrite) relations in the proof assistant Coq. We also present its application to the automated verification of termination certificates, as produced by termination tools.
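    CoLoR itself is a Coq development, but the idea of checking a termination certificate can be illustrated in miniature: for a string rewriting system, a weight function on letters certifies termination if every rule strictly decreases total weight, since each rewrite step then strictly decreases a natural number. The Python sketch below, with made-up names and data, only conveys this flavour and is not part of the library.

```python
# Toy certificate check (assumption for illustration): accept a weight function
# as a termination certificate for a string rewriting system only if every rule
# l -> r satisfies weight(l) > weight(r).

def weight(word, w):
    return sum(w[c] for c in word)

def check_weight_certificate(rules, w):
    """Sound: each rewrite step then decreases the total weight of the string."""
    return all(weight(l, w) > weight(r, w) for l, r in rules)

# Example system {aa -> a, ab -> b} with unit weights: certificate accepted.
rules = [("aa", "a"), ("ab", "b")]
print(check_weight_certificate(rules, {"a": 1, "b": 1}))   # True
```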

    Inductive-data-type Systems

    In a previous work ("Abstract Data Type Systems", TCS 173(2), 1997), the last two authors presented a combined language made of a (strongly normalizing) algebraic rewrite system and a typed lambda-calculus enriched by pattern-matching definitions following a certain format, called the "General Schema", which generalizes the usual recursor definitions for natural numbers and similar "basic inductive types". This combined language was shown to be strongly normalizing. The purpose of this paper is to reformulate and extend the General Schema in order to make it easily extensible, to capture a more general class of inductive types, called "strictly positive", and to ease the strong normalization proof of the resulting system. This result provides a computation model for the combination of an algebraic specification language based on abstract data types and of a strongly typed functional language with strictly positive inductive types. Comment: Theoretical Computer Science (2002).
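    As a loose illustration of structural recursion over a strictly positive inductive type (far weaker than the General Schema itself), the sketch below defines rose trees, whose recursive occurrences appear only positively, together with a recursor that terminates on every finite tree; the Python encoding is an assumption for exposition only.

```python
# Illustrative only: a strictly positive inductive type and its recursor.

from dataclasses import dataclass
from typing import Callable, List, TypeVar

A = TypeVar("A")

@dataclass
class Tree:
    label: int
    children: List["Tree"]       # recursive occurrences appear only positively

def tree_rec(t: Tree, step: Callable[[int, List[A]], A]) -> A:
    """Recursor: structural recursion, guaranteed to terminate on finite trees."""
    return step(t.label, [tree_rec(c, step) for c in t.children])

# Example: summing labels via the recursor rather than general recursion.
t = Tree(1, [Tree(2, []), Tree(3, [Tree(4, [])])])
print(tree_rec(t, lambda label, results: label + sum(results)))   # 10
```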

    Further Results on the Power of Generating APCol Systems

    In this paper we continue our investigations into APCol systems (Automaton-like P colonies), variants of P colonies where the environment of the agents is given by a string and the functioning of the system resembles the functioning of a standard finite automaton. We first deal with the concept of determinism in these systems and compare deterministic APCol systems with deterministic register machines. Then we focus on generating non-deterministic APCol systems with only one agent. We show that these systems are as powerful as type-0 grammars, i.e., they generate any recursively enumerable language. If the APCol system is non-erasing, then any context-sensitive language can be generated by a non-deterministic APCol system with only one agent.
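    The sketch below is a drastic simplification intended only to convey the flavour of an agent rewriting symbols in a string environment in an automaton-like fashion; the program format and all names are assumptions, not the actual definition of APCol systems.

```python
# Very rough, hypothetical illustration of an APCol-style step: one agent,
# programs of the form (have, read, write, become) that rewrite one environment
# symbol and update the agent's internal object.

def run(agent_objects, environment, programs, max_steps=20):
    env = list(environment)
    for _ in range(max_steps):
        fired = False
        for have, read, write, become in programs:
            if have in agent_objects and read in env:
                i = env.index(read)
                env[i] = write                 # rewrite the environment symbol
                agent_objects.remove(have)
                agent_objects.add(become)      # update the agent's internal object
                fired = True
                break
        if not fired:
            break
    return "".join(env), agent_objects

# Example: an agent sweeping a's into b's while keeping its internal object.
print(run({"q"}, "aaa", [("q", "a", "b", "q")]))    # ('bbb', {'q'})
```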

    Knowledge Flow Analysis for Security Protocols

    Knowledge flow analysis offers a simple and flexible way to find flaws in security protocols. A protocol is described by a collection of rules constraining the propagation of knowledge amongst principals. Because this characterization corresponds closely to informal descriptions of protocols, it allows a succinct and natural formalization; because it abstracts away message ordering, and handles communications between principals and applications of cryptographic primitives uniformly, it is readily represented in a standard logic. A generic framework in the Alloy modelling language is presented, and instantiated for two standard protocols and a new key management scheme. Comment: 20 pages.
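    The paper's framework is formulated in Alloy; the Python sketch below only illustrates the underlying idea of a knowledge-closure computation, in which a principal's knowledge is saturated under rules such as projection of pairs and decryption with a known key, independently of message ordering. The term representation and rule names are assumptions for the example.

```python
# Hedged illustration of knowledge propagation: compute the smallest set of
# terms closed under unpairing and decryption with a known key.

def close(knowledge):
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == "pair":
                new = {t[1], t[2]}                # both components become known
            elif isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                new = {t[1]}                      # knowing the key reveals the payload
            else:
                new = set()
            if not new <= known:
                known |= new
                changed = True
    return known

# Example: holding enc(secret, k) reveals the secret only together with k.
msg = ("enc", "secret", "k")
print("secret" in close({msg, "k"}))    # True
print("secret" in close({msg}))         # False
```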