Canonical Abstract Syntax Trees
This paper presents Gom, a language for describing abstract syntax trees and
generating a Java implementation for those trees. Gom includes features
allowing the user to specify and modify the interface of the data structure.
These features provide in particular the capability to maintain the internal
representation of data in canonical form with respect to a rewrite system. This
explicitly guarantees that the client program manipulates only normal forms for
this rewrite system, a property which remains merely implicit in many
implementations.
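As a minimal sketch of the idea (written in plain Python rather than Gom's generated Java, with illustrative names that are not Gom's API), construction functions can normalize terms on the fly, so client code only ever holds normal forms of the rewrite rules plus(x, zero) -> x and plus(zero, x) -> x:

```python
# Terms are tagged tuples; the "smart constructor" plus applies the
# rewrite rules at construction time, keeping every term canonical.

def zero():
    return ("zero",)

def plus(left, right):
    # Rewrite rules applied eagerly: plus(zero, x) -> x, plus(x, zero) -> x.
    if left == ("zero",):
        return right
    if right == ("zero",):
        return left
    return ("plus", left, right)

# A client can only build terms through these functions, so it can
# never observe a redex of the rewrite system.
t = plus(zero(), plus(plus(zero(), zero()), zero()))
assert t == ("zero",)
```

This is the invariant the abstract describes: normality is maintained by the data-structure interface itself instead of being re-established by ad hoc normalization calls scattered through client code.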
Termination of rewrite relations on λ-terms based on Girard's notion of reducibility
In this paper, we show how to extend the notion of reducibility introduced by
Girard for proving the termination of β-reduction in the polymorphic
λ-calculus, to prove the termination of various kinds of rewrite
relations on λ-terms, including rewriting modulo some equational theory
and rewriting with matching modulo βη, by using the notion of
computability closure. This provides a powerful termination criterion for
various higher-order rewriting frameworks, including Klop's Combinatory
Reduction Systems with simple types and Nipkow's Higher-order Rewrite Systems.
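For readers unfamiliar with the technique, the standard Girard-style reducibility (computability) predicate for plain β-reduction is defined by induction on types; the notation below is generic and not taken from the paper:

```latex
\mathcal{R}_{B} \;=\; \{\, t \mid t \text{ is strongly normalizing} \,\}
  \qquad (B \text{ a base type})
\\[4pt]
\mathcal{R}_{A \to B} \;=\; \{\, t \mid \forall u \in \mathcal{R}_{A},\; t\,u \in \mathcal{R}_{B} \,\}
```

Termination then follows by showing that every well-typed term belongs to the predicate at its type; the computability closure generalizes this argument to terms headed by defined function symbols.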
Tableaux Modulo Theories Using Superdeduction
We propose a method that allows us to develop tableaux modulo theories using
the principles of superdeduction, in which the theory is used to enrich the
deduction system with new deduction rules. This method is presented in the
framework of the Zenon automated theorem prover, and is applied to the set
theory of the B method. This allows us to provide another prover to Atelier B,
which can be used in particular to verify B proof rules. We also propose some
benchmarks, in which this prover is able to automatically verify a part of the
rules coming from the database maintained by Siemens IC-MOL. Finally, we
describe another extension of Zenon with superdeduction, which is able to deal
with any first-order theory, and provide a benchmark coming from the TPTP
library, which contains a large set of first-order problems.
Comment: arXiv admin note: substantial text overlap with arXiv:1501.0117
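To illustrate the principle of superdeduction (with an example chosen here for exposition, not taken from the paper's B benchmark): a set-theoretic axiom of the form x ∈ A ∩ B ⇔ (x ∈ A ∧ x ∈ B) can be compiled into dedicated left and right sequent rules, so the proof search never has to unfold the defining formula:

```latex
\frac{\Gamma \vdash x \in A \qquad \Gamma \vdash x \in B}
     {\Gamma \vdash x \in A \cap B}\;{\cap}\text{-R}
\qquad
\frac{\Gamma,\; x \in A,\; x \in B \vdash \Delta}
     {\Gamma,\; x \in A \cap B \vdash \Delta}\;{\cap}\text{-L}
```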
Inductive-data-type Systems
In a previous work ("Abstract Data Type Systems", TCS 173(2), 1997), the last
two authors presented a combined language made of a (strongly normalizing)
algebraic rewrite system and a typed lambda-calculus enriched by
pattern-matching definitions following a certain format, called the "General
Schema", which generalizes the usual recursor definitions for natural numbers
and similar "basic inductive types". This combined language was shown to be
strongly normalizing. The purpose of this paper is to reformulate and extend
the General Schema in order to make it easily extensible, to capture a more
general class of inductive types, called "strictly positive", and to ease the
strong normalization proof of the resulting system. This result provides a
computation model for the combination of an algebraic specification language
based on abstract data types and of a strongly typed functional language with
strictly positive inductive types.
Comment: Theoretical Computer Science (2002)
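A hedged sketch of what a strictly positive inductive type looks like (in Python with tagged tuples; the names and encoding are illustrative, not the paper's): Brouwer ordinals have a constructor Lim taking a functional recursive argument of type Nat -> Ord, and their recursor makes recursive calls only on direct subterms, including the values of that function:

```python
# Brouwer ordinals: Zero | Suc Ord | Lim (Nat -> Ord).
# The position of Ord in the Lim argument is strictly positive.

def zero():
    return ("Zero",)

def suc(o):
    return ("Suc", o)

def lim(f):
    return ("Lim", f)  # f maps a natural number to an ordinal

def rec(o, z, s, l):
    """Structural recursor: recursive calls go only to direct subterms,
    including f(n) under a Lim constructor (the usual recursor pattern
    that the General Schema generalizes)."""
    tag = o[0]
    if tag == "Zero":
        return z
    if tag == "Suc":
        return s(o[1], rec(o[1], z, s, l))
    f = o[1]  # Lim case: recursive result is the function n |-> rec(f(n))
    return l(f, lambda n: rec(f(n), z, s, l))

def to_int(o):
    # Interpret finite ordinals as integers (Lim branch just samples f(0)).
    return rec(o, 0, lambda _, r: r + 1, lambda f, rf: rf(0))

assert to_int(suc(suc(zero()))) == 2
```

Termination of `rec` is exactly the kind of fact the General Schema certifies: every recursive call is on a structurally smaller argument, even when that argument is reached through a function stored in a constructor.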
Computability Closure: Ten Years Later
The notion of computability closure has been introduced for proving the
termination of higher-order rewriting with first-order matching by Jean-Pierre
Jouannaud and Mitsuhiro Okada in a 1997 draft which later served as a basis for
the author's PhD. In this paper, we show how this notion can also be used for
dealing with beta-normalized rewriting with matching modulo beta-eta (on
patterns à la Miller), rewriting with matching modulo some equational theory,
and higher-order data types (types with constructors having functional
recursive arguments). Finally, we show how the computability closure can easily
be turned into a reduction ordering which, in the higher-order case, contains
Jean-Pierre Jouannaud and Albert Rubio's higher-order recursive path ordering
and, in the first-order case, is equal to the usual first-order recursive path
ordering.
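Since the abstract states that the derived ordering coincides with the usual first-order recursive path ordering, here is a hedged sketch of that textbook ordering on ground terms (not the paper's computability-closure construction; lexicographic status is assumed for every symbol, a simplification):

```python
# Terms are (symbol, [subterms]); prec ranks function symbols.

def rpo_gt(s, t, prec):
    """Textbook RPO on ground terms, lexicographic status everywhere."""
    fs, ss = s
    ft, ts = t
    # (1) some immediate subterm of s is >= t
    if any(si == t or rpo_gt(si, t, prec) for si in ss):
        return True
    # (2) head of s has strictly higher precedence and s dominates
    #     every immediate subterm of t
    if prec[fs] > prec[ft]:
        return all(rpo_gt(s, tj, prec) for tj in ts)
    # (3) equal heads: s must dominate every subterm of t, and the
    #     argument tuples must decrease lexicographically
    if fs == ft and all(rpo_gt(s, tj, prec) for tj in ts):
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return rpo_gt(si, ti, prec)
    return False

# With prec(mul) > prec(add) > prec(zero), a ground instance of
# distributivity decreases: mul(x, add(y, z)) > add(mul(x,y), mul(x,z)).
z = ("zero", [])
prec = {"zero": 0, "add": 1, "mul": 2}
lhs = ("mul", [z, ("add", [z, z])])
rhs = ("add", [("mul", [z, z]), ("mul", [z, z])])
assert rpo_gt(lhs, rhs, prec)
```

Orienting every rule of a rewrite system left-to-right in such an ordering proves its termination, which is what makes turning the computability closure into a reduction ordering useful in practice.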
Towards Static Analysis of Functional Programs using Tree Automata Completion
This paper presents the first step of a wider research effort to apply tree
automata completion to the static analysis of functional programs. Tree
Automata Completion is a family of techniques for computing or approximating
the set of terms reachable by a rewriting relation. The completion algorithm we
focus on is parameterized by a set E of equations controlling the precision of
the approximation and influencing its termination. For completion to be used as
a static analysis, the first step is to guarantee its termination. In this
work, we thus give a sufficient condition on E and T(F) for the completion
algorithm to always terminate. In the particular setting of functional
programs, this condition can be relaxed into a condition on E and T(C) (terms
built on the set of constructors) that is closer to what is done in the field
of static analysis, where abstractions are performed on data.
Comment: Proceedings of WRLA'14. 201
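To make the object being approximated concrete: tree automata completion over-approximates the set R*(I) of terms reachable from an initial set I by a rewriting relation. The sketch below (a brute-force exact computation on a finite ground example, not the completion algorithm itself) shows what that set is:

```python
# Terms are nested tuples (symbol, subterm, ...); rules are ground
# (lhs, rhs) pairs.

def rewrite_once(term, rules):
    """Yield every term obtained by one rule application, at the root
    or inside a subterm."""
    for lhs, rhs in rules:
        if term == lhs:
            yield rhs
    sym, *args = term
    for i, a in enumerate(args):
        for a2 in rewrite_once(a, rules):
            yield (sym, *args[:i], a2, *args[i + 1:])

def reachable(init, rules):
    """Exact R*(init) by exhaustive search (only terminates when the
    reachable set is finite -- the situation completion approximates)."""
    seen, todo = set(init), list(init)
    while todo:
        t = todo.pop()
        for t2 in rewrite_once(t, rules):
            if t2 not in seen:
                seen.add(t2)
                todo.append(t2)
    return seen

# Ground rules a -> b, b -> c, under a unary symbol f:
rules = [(("a",), ("b",)), (("b",), ("c",))]
init = [("f", ("a",))]
assert reachable(init, rules) == {("f", ("a",)), ("f", ("b",)), ("f", ("c",))}
```

Completion replaces this enumeration by a tree automaton recognizing a superset of R*(I); the equation set E controls how aggressively states are merged, trading precision for termination of the completion itself.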
Deduction modulo theory
This paper is a survey on Deduction modulo theory.
Higher-Order Termination: from Kruskal to Computability
Termination is a major question in both logic and computer science. In logic,
termination is at the heart of proof theory where it is usually called strong
normalization (of cut elimination). In computer science, termination has always
been an important issue for showing programs correct. In the early days of
logic, strong normalization was usually shown by assigning ordinals to
expressions in such a way that eliminating a cut would yield an expression with
a smaller ordinal. In the early days of verification, computer scientists used
similar ideas, interpreting the arguments of a program call by a natural
number, such as their size. Showing the size of the arguments to decrease for
each recursive call gives a termination proof of the program, which is however
rather weak since it can only yield quite small ordinals. In the sixties, Tait
invented a new method for showing cut elimination of natural deduction, based
on a predicate over the set of terms, such that the membership of an expression
to the predicate implied the strong normalization property for that expression.
The predicate being defined by induction on types, or even as a fixpoint, this
method could yield much larger ordinals. Later generalized by Girard under the
name of reducibility or computability candidates, it proved very effective in
proving the strong normalization property of typed lambda-calculi.
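The size-measure argument described above, on a deliberately tiny example: each recursive call of `length` receives a strictly shorter list, so the natural-number measure len(xs) decreases at every call and the program terminates.

```python
def length(xs):
    """Termination argument: the measure len(xs) is a natural number
    and strictly decreases at the single recursive call."""
    if not xs:
        return 0
    # xs[1:] has size len(xs) - 1 < len(xs)
    return 1 + length(xs[1:])

assert length([3, 1, 2]) == 3
```

As the paragraph notes, this style of proof only reaches small ordinals; reducibility-based methods are needed for the stronger normalization results of typed lambda-calculi.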
The exp-log normal form of types
Lambda calculi with algebraic data types lie at the core of functional
programming languages and proof assistants, but conceal at least two
fundamental theoretical problems already in the presence of the simplest
non-trivial data type, the sum type. First, we do not know of an explicit and
implemented algorithm for deciding the beta-eta-equality of terms---and this in
spite of the first decidability results proven two decades ago. Second, it is
not clear how to decide when two types are essentially the same, i.e.
isomorphic, in spite of the meta-theoretic results on decidability of the
isomorphism.
In this paper, we present the exp-log normal form of types, derived from the
representation of exponential polynomials via the unary exponential and
logarithmic functions, to which any type built from arrows, products, and sums
can be isomorphically mapped. The type normal form can be used as a simple
heuristic for deciding type isomorphism, thanks to the fact that it is a
systematic application of the high-school identities.
We then show that the type normal form allows us to reduce the standard
beta-eta equational theory of the lambda calculus to a specialized version of
itself,
while preserving the completeness of equality on terms. We end by describing an
alternative representation of normal terms of the lambda calculus with sums,
together with a Coq-implemented converter into/from our new term calculus. The
difference with the only other previously implemented heuristic for deciding
interesting instances of eta-equality by Balat, Di Cosmo, and Fiore, is that we
exploit the type information of terms substantially and this often allows us to
obtain a canonical representation of terms without performing sophisticated
term analyses.
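A naive sketch of normalizing types with high-school identities (this is an illustration of the idea, not the paper's algorithm: it applies only a few of the identities, ignores the unit and empty types, and makes no claim of producing the exp-log normal form exactly):

```python
# Types are tagged tuples: ("base", name), ("arrow", a, b),
# ("prod", a, b), ("sum", a, b).  Reading A -> B as B^A, the rules are
# the high-school identities for exponentials and distribution.

def norm(t):
    tag = t[0]
    if tag == "base":
        return t
    _, a, b = t
    a, b = norm(a), norm(b)
    if tag == "arrow":
        if b[0] == "prod":    # (B x C)^A  =  B^A x C^A
            return norm(("prod", ("arrow", a, b[1]), ("arrow", a, b[2])))
        if a[0] == "sum":     # C^(A + B)  =  C^A x C^B
            return norm(("prod", ("arrow", a[1], b), ("arrow", a[2], b)))
        if b[0] == "arrow":   # (C^B)^A    =  C^(A x B)   (currying)
            return norm(("arrow", ("prod", a, b[1]), b[2]))
        return ("arrow", a, b)
    if tag == "prod":
        if a[0] == "sum":     # (A + B) x C = A x C + B x C
            return norm(("sum", ("prod", a[1], b), ("prod", a[2], b)))
        if b[0] == "sum":     # A x (B + C) = A x B + A x C
            return norm(("sum", ("prod", a, b[1]), ("prod", a, b[2])))
        return ("prod", a, b)
    return ("sum", a, b)

X, Y, Z = ("base", "X"), ("base", "Y"), ("base", "Z")
# X -> (Y x Z) is mapped to (X -> Y) x (X -> Z)
assert norm(("arrow", X, ("prod", Y, Z))) == \
    ("prod", ("arrow", X, Y), ("arrow", X, Z))
```

Comparing normal forms (up to reordering of products) then serves as the kind of cheap isomorphism heuristic the abstract describes: two types whose normal forms match are isomorphic, while a full decision procedure must also account for the remaining permutative identities.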