Modularity of Convergence and Strong Convergence in Infinitary Rewriting
Properties of Term Rewriting Systems are called modular iff they are
preserved under (and reflected by) disjoint union, i.e. when combining two Term
Rewriting Systems with disjoint signatures. Convergence is the property of
Infinitary Term Rewriting Systems that all reduction sequences converge to a
limit. Strong Convergence requires in addition that redex positions in a
reduction sequence move arbitrarily deep. In this paper it is shown that both
Convergence and Strong Convergence are modular properties of non-collapsing
Infinitary Term Rewriting Systems, provided (for convergence) that the term
metrics are granular. This generalises known modularity results beyond metric
infinitary rewriting.
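A standard illustration of the two notions (not drawn from the paper itself): the non-collapsing rule $a \to f(a)$ gives the reduction $a \to f(a) \to f(f(a)) \to \cdots$, which converges strongly to the infinite term $f^\omega$ because the contracted redex moves arbitrarily deep, whereas the collapsing rule $g(x) \to x$ rewrites the infinite term $g^\omega = g(g(g(\ldots)))$ to itself at the root, so the resulting constant reduction converges but is not strongly convergent.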
Weak convergence and uniform normalization in infinitary rewriting
We study infinitary term rewriting systems containing finitely many rules. For these, we show that if a weakly convergent reduction is not strongly convergent, it contains a term that reduces to itself in one step (but the step itself need not be part of the reduction). Using this result, we prove
the starkly surprising result that, for any orthogonal system with finitely many rules, the system is weakly normalizing under weak convergence iff it is strongly normalizing under weak convergence iff it is weakly normalizing under strong convergence iff it is strongly normalizing under strong convergence.
As further corollaries, we derive a number of new results for weakly convergent rewriting: systems with finitely many rules enjoy unique normal forms, and acyclic orthogonal systems are confluent. Our results suggest that it may be possible to recover some of the positive results for strongly convergent rewriting in the setting of weak convergence, if systems with finitely many rules are considered. Finally, we give a number of counterexamples showing failure of most of the results when infinite sets of rules are allowed.
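A minimal illustration (not taken from the paper): in the orthogonal system with the single rule $f(x) \to f(f(x))$, the root-step reduction $f(a) \to f(f(a)) \to f(f(f(a))) \to \cdots$ converges weakly to $f^\omega$ but is not strongly convergent, since every redex is contracted at depth $0$; and indeed the limit term $f^\omega$ reduces to itself in one step, although that step is not part of the reduction.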
Type classes for efficient exact real arithmetic in Coq
Floating point operations are fast, but require continuous effort on the part
of the user in order to ensure that the results are correct. This burden can be
shifted away from the user by providing a library of exact analysis in which
the computer handles the error estimates. Previously, we [Krebbers/Spitters
2011] provided a fast implementation of the exact real numbers in the Coq proof
assistant. Our implementation improved on an earlier implementation by O'Connor
by using type classes to describe an abstract specification of the underlying
dense set from which the real numbers are built. In particular, we used dyadic
rationals built from Coq's machine integers, which already yields a 100-fold speed-up of the basic operations. This article is a substantially expanded version of [Krebbers/Spitters 2011] in which the implementation is extended in various ways. First, we implement and verify the sine and cosine functions.
Secondly, we create an additional implementation of the dense set based on
Coq's fast rational numbers. Thirdly, we extend the hierarchy to capture order on undecidable structures, whereas it was previously limited to decidable structures.
This hierarchy, based on type classes, allows us to share theory on the
naturals, integers, rationals, dyadics, and reals in a convenient way. Finally,
we obtain another dramatic speed-up by avoiding evaluation of termination
proofs at runtime.
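As a rough illustration of the idea of an abstractly specified dense set instantiated by dyadic rationals, here is a minimal sketch in Haskell (the actual development is in Coq and carries correctness proofs; the class name, its operations, and the rounding scheme below are illustrative assumptions, not the authors' interface):

  -- Abstract "approximate rationals": a dense subset of the rationals with
  -- exact ring operations and a precision-bounded compression operation.
  class AppRationals q where
    azero    :: q
    aplus    :: q -> q -> q
    amult    :: q -> q -> q
    anegate  :: q -> q
    -- round to precision 2^k, introducing an error of at most 2^k
    compress :: Int -> q -> q

  -- Dyadic rationals m * 2^e as one concrete instance.
  data Dyadic = Dyadic { mantissa :: Integer, scale :: Int }

  instance AppRationals Dyadic where
    azero = Dyadic 0 0
    aplus (Dyadic m1 e1) (Dyadic m2 e2)
      | e1 >= e2  = Dyadic (m1 * 2 ^ (e1 - e2) + m2) e2
      | otherwise = Dyadic (m1 + m2 * 2 ^ (e2 - e1)) e1
    amult (Dyadic m1 e1) (Dyadic m2 e2) = Dyadic (m1 * m2) (e1 + e2)
    anegate (Dyadic m e) = Dyadic (negate m) e
    compress k (Dyadic m e)
      | e >= k    = Dyadic m e                      -- already precise enough
      | otherwise = Dyadic (m `div` 2 ^ (k - e)) k  -- drop low-order bits

Exact reals would then be represented on top of any such instance, e.g. as functions from a requested precision to an approximation, which is roughly the completion construction the abstract refers to.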
Lambda-calculus and formal language theory
Formal and symbolic approaches have offered computer science many application fields. The rich and fruitful connection between logic, automata and algebra is one such approach. It has been used to model natural languages as well as in program verification. In the mathematics of language it is able to model phenomena ranging from syntax to phonology, while in verification it gives model-checking algorithms for a wide family of programs. This thesis extends this approach to the simply typed lambda-calculus by providing a natural extension of recognizability to programs that are representable by simply typed terms. This notion is then applied to both the mathematics of language and program verification. In the case of the mathematics of language, it is used to generalize parsing algorithms and to propose high-level methods to describe languages. Concerning program verification, it is used to describe methods for verifying the behavioral properties of higher-order programs. In both cases, the link that is drawn between finite-state methods and denotational semantics provides the means to mix powerful tools coming from the two worlds.
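For orientation, the notion of recognizability for simply typed terms is usually phrased denotationally (a standard formulation, which may differ in detail from the thesis): a set $R$ of terms of type $A$ is recognizable when there is a finite standard model $\mathcal{M}$ and a subset $S \subseteq [\![A]\!]_{\mathcal{M}}$ with $R = \{ t \mid [\![t]\!]_{\mathcal{M}} \in S \}$; on Church encodings of strings and trees this coincides with ordinary recognizability by finite automata.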
Variant-Based Satisfiability
Although different satisfiability decision procedures
can be combined by algorithms such as those of Nelson-Oppen or
Shostak, current tools typically can only support a finite number of
theories to use in such combinations. To make SMT solving more
widely applicable, generic satisfiability algorithms that can
allow a potentially infinite number of decidable theories to be
user-definable, instead of needing to be built in by the
implementers, are highly desirable. This work studies how
folding variant narrowing, a generic
unification algorithm that offers
good extensibility in unification theory, can be extended to
a generic variant-based satisfiability algorithm for the initial
algebras of its user-specified input theories when such theories
satisfy Comon-Delaune's finite variant property (FVP) and some
extra conditions. Several, increasingly larger infinite classes of
theories whose initial algebras enjoy decidable variant-based satisfiability
are identified, and a method based on descent maps to bring other theories
into these classes and to improve the generic
algorithm's efficiency is proposed and illustrated with examples. Partially supported by NSF Grant CNS 13-19109.
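To make the finite variant property concrete, a small standard example (illustrative, not necessarily one of the paper's): for the convergent rules $p(s(x)) \to x$ and $s(p(x)) \to x$, folding variant narrowing computes for $p(x)$ exactly two most general variants, $(p(x), \mathit{id})$ and $(y, \{x \mapsto s(y)\})$, and every term likewise has a finite set of most general variants, so the theory has the FVP; variant-based satisfiability builds decision procedures for the initial algebra of such theories on top of these finite variant computations.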