8 research outputs found

    Unification in Permutative Equational Theories is Undecidable

    An equational theory E is permutative if in every valid equation s =E t the terms s and t have the same symbols with the same number of occurrences. The class of permutative equational theories includes associativity and commutativity and is hence important for unification theory, for term rewriting modulo equational theories, and for the corresponding completion procedures. It is shown in this research note that there is no algorithm that decides E-unifiability of terms for all permutative theories. The proof technique is to provide, for every Turing machine M, a permutative theory with a confluent term rewriting system such that narrowing on certain terms simulates the Turing machine M.

    Function definitions in term rewriting and applicative programming

    The frameworks of unconditional and conditional Term Rewriting and Applicative systems are explored with the objective of using them for defining functions. In particular, a new operational semantics, Tue-Reduction, is elaborated for conditional term rewriting systems. For each framework, the concept of evaluation of terms invoking defined functions is formalized. We then discuss how it may be ensured that a function definition in each of these frameworks is meaningful, by defining restrictions that may be imposed to guarantee termination, unambiguity, and completeness of definition. The three frameworks are then compared, studying when a definition may be translated from one formalism to another.

    Basic Narrowing Revisited

    In this paper we study basic narrowing as a method for solving equations in the initial algebra specified by a ground confluent and terminating term rewriting system. Since we are interested in equation solving, we don’t study basic narrowing as a reduction relation on terms but immediately consider its reformulation as an equation solving rule. This reformulation leads to a technically simpler presentation and reveals that the essence of basic narrowing can be captured without recourse to term unification. We present an equation solving calculus that features three classes of rules. Resolution rules, whose application is don’t care nondeterministic, are the basic rules and suffice for a complete solution procedure. Failure rules detect inconsistent parts of the search space. Simplification rules, whose application is also don’t care nondeterministic, enhance the power of the failure rules and reduce the number of necessary don’t know steps. Three of the presented simplification rules are new. The rewriting rule allows for don’t care nondeterministic rewriting and thus yields a marriage of basic and normalizing narrowing. The safe blocking rule is specific to basic narrowing and is particularly useful in conjunction with the rewriting rule. Finally, the unfolding rule allows for a variety of search strategies that reduce the number of don’t know alternatives that need to be explored.
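    The narrowing step underlying this calculus can be illustrated with a small sketch. The following Python encoding is illustrative only (the Peano rules, names, and root-position restriction are assumptions made for the example, not the paper's calculus; the occurs check is omitted for brevity): it solves add(X, s(0)) = s(s(0)) with two narrowing steps, yielding the answer X = s(0).

```python
# Toy narrowing on Peano numerals: variables are strings, compound
# terms are (symbol, args) tuples. Illustrative only; occurs check
# omitted for brevity.

def is_var(t):
    return isinstance(t, str)

def apply(sub, t):
    """Apply a substitution, following chains of bindings."""
    if is_var(t):
        return apply(sub, sub[t]) if t in sub else t
    return (t[0], tuple(apply(sub, a) for a in t[1]))

def unify(s, t, sub):
    """Plain syntactic unification (no occurs check)."""
    s, t = apply(sub, s), apply(sub, t)
    if s == t:
        return sub
    if is_var(s):
        return {**sub, s: t}
    if is_var(t):
        return {**sub, t: s}
    if s[0] != t[0] or len(s[1]) != len(t[1]):
        return None
    for a, b in zip(s[1], t[1]):
        sub = unify(a, b, sub)
        if sub is None:
            return None
    return sub

def narrow(goal, rule):
    """One narrowing step at the root: unify the goal with the rule's
    left-hand side and return (unifier, instantiated right side)."""
    lhs, rhs = rule
    sub = unify(goal, lhs, {})
    return sub, apply(sub, rhs)

zero = ('0', ())
s_ = lambda t: ('s', (t,))
add = lambda a, b: ('add', (a, b))

# Rules: add(0, M) -> M  and  add(s(N), M) -> s(add(N, M))
r1 = (add(zero, 'M'), 'M')
r2 = (add(s_('N'), 'M'), s_(add('N', 'M')))

# Solve add(X, s(0)) = s(s(0)) by two narrowing steps:
sub1, g1 = narrow(add('X', s_(zero)), r2)  # binds X -> s(N)
sub2, g2 = narrow(g1[1][0], r1)            # binds N -> 0
print(apply(sub2, sub1['X']))              # the answer X = s(0)
```

    In a full implementation the step applies at any non-variable position and the unifiers of successive steps are composed; the two hand-picked root steps above are only meant to show how narrowing instantiates goal variables while rewriting.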

    Higher Order Unification via Explicit Substitutions

    Higher order unification is equational unification for βη-conversion. But it is not first order equational unification, as substitution has to avoid capture. Thus, the methods for equational unification (such as narrowing) built upon grafting (i.e., substitution without renaming) cannot be used for higher order unification, which needs specific algorithms. Our goal in this paper is to reduce higher order unification to first order equational unification in a suitable theory. This is achieved by replacing substitution by grafting, but this replacement is not straightforward as it raises two major problems. First, some unification problems have solutions with grafting but no solution with substitution. Second, equational unification algorithms rest upon the fact that grafting and reduction commute. But grafting and βη-reduction do not commute in λ-calculus, and reducing an equation may change the set of its solutions. This difficulty comes from the interaction between the substitutions initiated by βη-reduction and the ones initiated by the unification process. Two kinds of variables are involved: those of βη-conversion and those of unification. So, we need to set up a calculus which distinguishes between these two kinds of variables and in which reduction and grafting commute. For this purpose, the application of a substitution of a reduction variable to a unification one must be delayed until this variable is instantiated. Such a separation and delay are provided by a calculus of explicit substitutions. Unification in such a calculus can be performed by well-known algorithms such as narrowing, but we present a specialised algorithm for greater efficiency. Finally, we show how to relate unification in λ-calculus and in a calculus with explicit substitutions. Thus, we come up with a new higher order unification algorithm which eliminates some burdens of the previous algorithms, in particular the functional handling of scopes.
Huet's algorithm can be seen as a specific strategy for our algorithm, since each of its steps can be decomposed into elementary ones, leading to a more atomic description of the unification process. Also, solved forms in λ-calculus can easily be computed from solved forms in λσ-calculus.
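    The difference between grafting and capture-avoiding substitution that motivates this construction can be made concrete. The sketch below uses a naive named-variable encoding (an assumption for illustration; it is not the λσ-calculus of the paper): grafting y for X under the binder λy captures the variable, while substitution first renames the binder.

```python
# Toy lambda terms: ('var', x), ('lam', x, body), ('app', f, a).
# Illustrative encoding only.

counter = [0]

def fresh(x):
    """Generate a fresh variable name."""
    counter[0] += 1
    return f"{x}_{counter[0]}"

def graft(term, x, s):
    """First-order replacement: no renaming, so s can be captured."""
    kind = term[0]
    if kind == 'var':
        return s if term[1] == x else term
    if kind == 'lam':
        return ('lam', term[1], graft(term[2], x, s))
    return ('app', graft(term[1], x, s), graft(term[2], x, s))

def subst(term, x, s, free_in_s):
    """Capture-avoiding substitution: rename binders that clash."""
    kind = term[0]
    if kind == 'var':
        return s if term[1] == x else term
    if kind == 'lam':
        y, body = term[1], term[2]
        if y == x:
            return term                   # x is shadowed here
        if y in free_in_s:                # would capture: rename y
            y2 = fresh(y)
            body = subst(body, y, ('var', y2), {y2})
            y = y2
        return ('lam', y, subst(body, x, s, free_in_s))
    return ('app', subst(term[1], x, s, free_in_s),
                   subst(term[2], x, s, free_in_s))

t = ('lam', 'y', ('var', 'X'))            # the term  lambda y. X
print(graft(t, 'X', ('var', 'y')))        # y is captured: lam y. y
print(subst(t, 'X', ('var', 'y'), {'y'})) # binder renamed: lam y_1. y
```

    The equational unification machinery (narrowing) works with the first behaviour, grafting, which is why the paper must separate the two kinds of variables before higher order unification can be reduced to first order equational unification.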

    Complete Sets of Transformations for General E-Unification

    This paper is concerned with E-unification in arbitrary equational theories. We extend the method of transformations on systems of terms, developed by Martelli-Montanari for standard unification, to E-unification by giving two sets of transformations, BT and T, which are proved to be sound and complete in the sense that a complete set of E-unifiers for any equational theory E can be enumerated by either of these sets. The set T is an improvement of BT, in that many E-unifiers produced by BT will be weeded out by T. In addition, we show that a generalization of surreduction (also called narrowing) combined with the computation of critical pairs is complete. A new representation of equational proofs as certain kinds of trees is used to prove the completeness of the set BT in a rather direct fashion that parallels the completeness of the transformations in the case of (standard) unification. The completeness of T and the generalization of surreduction is proved by a method inspired by the concept of unfailing completion, using an abstract (and simpler) notion of the completion of a set of equations.

    Rule-Based Software Verification and Correction

    The increasing complexity of software systems has led to the development of sophisticated formal methodologies for verifying and correcting data and programs. In general, establishing whether a program behaves correctly w.r.t. the original programmer's intention, or checking the consistency and correctness of a large set of data, are not trivial tasks, as witnessed by the many case studies that occur in the literature. In this dissertation, we face two challenging problems of verification and correction: specifically, the verification and correction of declarative programs, and the verification and correction of Web sites (i.e. large collections of semistructured data). Firstly, we propose a general correction scheme for automatically correcting declarative, rule-based programs which exploits a combination of bottom-up as well as top-down inductive learning techniques. Our hybrid methodology is able to infer program corrections that are hard, or even impossible, to obtain with a simpler, automatic top-down or bottom-up learner. Moreover, the scheme is also particularized to some well-known declarative programming paradigms: namely, the functional logic and the functional programming paradigms. Secondly, we formalize a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site, and then automatically check whether these conditions are fulfilled. We provide a rule-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which detects both incorrect/forbidden patterns as well as lack of information, that is, incomplete/missing Web pages. Useful information is gathered during the verification process which can be used to repair the Web site. So, after a verification phase, one can also infer semi-automatically some possible corrections in order to fix the Web site.
    The methodology is based on a novel rewriting-based technique.
    Ballis, D. (2005). Rule-Based Software Verification and Correction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/194

    Formal Testing of Object-Oriented Software: from the Method to the Tool

    This thesis presents a method and a tool for test set selection, dedicated to object-oriented applications and based on formal specifications. Testing is one method to increase the quality of today's extraordinary complex software. The aim is to find program errors with respect to given criteria of correctness. In the case of formal testing, the criterion of correctness is the formal specification of the tested application: program behaviors are compared to those required by the specification. In this context, the difficulty of testing object-oriented software arises from the fact that the behavior of an object does not only depend on the input values of the parameters of its operations, but also on its current state, and generally on the current states of other related objects. This combinatorial explosion requires carefully selecting pertinent test sets of reasonable size. This thesis proposes a formal testing method which takes this issue into account. Our approach is based on two different formalisms: a specification language well adapted to the expression of system properties from the specifier's point of view, and a test language well adapted to the description of test sets from the tester's point of view. Specifications are written in an object-oriented language, CO-OPN (Concurrent Object-Oriented Petri Nets), based on synchronized algebraic Petri nets and devoted to the specification of concurrent systems. Test sets are expressed using a very simple temporal logic, HML (Hennessy-Milner Logic), whose logic formulas can be executed by a program. There exists a full agreement, shown in this thesis, between the CO-OPN and HML satisfaction relationships: the program satisfies its specification if and only if it satisfies the exhaustive test set derived from this specification. The exhaustive test set expresses all the specification properties. The exhaustive test set is generally infinite. Its size is reduced by applying hypotheses to the program behavior. 
These hypotheses define test selection strategies and reflect common test practices. The quality of the test sets thus selected only depends on the pertinence of the hypotheses. Concretely, the reduction is achieved by associating with each hypothesis applied to the program a constraint on the test set. Our method proposes a set of elementary constraints: syntactic constraints on the structure of the tests and semantic constraints which allow us to instantiate the test variables so as to cover the different classes of behaviors induced by the specification (subdomain decomposition). Elementary constraints can be combined to form complex constraints. Finally, the constraint system defined on the exhaustive test set is solved, and the solution leads to a pertinent test set of reasonable size. Thanks to the CO-OPN semantics, which makes it possible to compute all the correct and incorrect behaviors induced by a specification, our method is able to test, on the one hand, that a program does possess the correct behaviors and, on the other hand, that it does not possess incorrect behaviors. An advantage of this approach is to provide, through the tests, an observational description of valid and invalid implementations. Our testing method exhibits the advantage of being formal, and thus allows a semi-automation of the test selection process. A new tool, called CO-OPNTEST, is presented in this thesis. This tool assists the tester during the construction of constraints to apply to the exhaustive test set; afterward it automatically generates a test set satisfying these constraints. The CO-OPNTEST architecture is composed of a PROLOG kernel and a Java graphical interface. The kernel is an equational resolution procedure based on logic programming. It includes control mechanisms for subdomain decomposition. The graphical interface allows a user-friendly definition of the test constraints.
The CO-OPNTEST tool has generated test sets for several case studies in a simple, rapid and efficient way. In particular, it has generated test sets for an industrial case study of realistic size: the control program of a production cell [Lewerentz 95]. CO-OPNTEST and its application to significant examples demonstrate the pertinence of our approach.

    Unification in conditional-equational theories

    No full text
    TIB: RA 7759(8502) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek / SIGLE / DE / German