Termination of Rewriting with and Automated Synthesis of Forbidden Patterns
We introduce a modified version of the well-known dependency pair framework
that is suitable for the termination analysis of rewriting under forbidden
pattern restrictions. By attaching contexts to dependency pairs that represent
the calling contexts of the corresponding recursive function calls, it is
possible to incorporate the forbidden pattern restrictions in the (adapted)
notion of dependency pair chains, thus yielding a sound and complete approach
to termination analysis. Building upon this contextual dependency pair
framework we introduce a dependency pair processor that simplifies problems by
analyzing the contextual information of the dependency pairs. Moreover, we show
how this processor can be used to synthesize forbidden patterns suitable for a
given term rewriting system on-the-fly during the termination analysis.
Comment: In Proceedings IWS 2010, arXiv:1012.533
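As a minimal illustration of the kind of restriction at stake (our example, not the paper's formalism): the stream rule inf(x) -> cons(x, inf(s(x))) admits no terminating unrestricted reduction, yet every reduction terminates once rewriting below the second argument of cons is forbidden, which is exactly the sort of position a forbidden pattern can designate. A Haskell sketch of rewriting under this single hard-coded restriction:

    -- A toy rewriter (our sketch): one rule, inf(x) -> cons(x, inf(s(x))),
    -- and one forbidden position, the second argument of cons.
    data Term = V String | F String [Term] deriving (Eq, Show)

    -- Apply the rule at the root, if possible.
    rootStep :: Term -> Maybe Term
    rootStep (F "inf" [x]) = Just (F "cons" [x, F "inf" [F "s" [x]]])
    rootStep _             = Nothing

    -- Rewrite anywhere except below the second argument of cons
    -- (the forbidden position); Nothing means no allowed redex remains.
    step :: Term -> Maybe Term
    step t = case rootStep t of
      Just t' -> Just t'
      Nothing -> case t of
        F "cons" [a, b] -> (\a' -> F "cons" [a', b]) <$> step a   -- b is frozen
        F f args        -> F f <$> stepFirst args
        _               -> Nothing
      where
        stepFirst []       = Nothing
        stepFirst (u : us) = case step u of
          Just u' -> Just (u' : us)
          Nothing -> (u :) <$> stepFirst us

    normalize :: Term -> Term
    normalize t = maybe t normalize (step t)

    -- Terminates with cons(x, inf(s(x))): the inner inf sits at the
    -- forbidden position, so no further step is allowed.
    main :: IO ()
    main = print (normalize (F "inf" [V "x"]))

Replacing the frozen branch by a recursive call on b restores the infinite reduction, which is why the restriction is essential for termination here.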
(Leftmost-Outermost) Beta Reduction is Invariant, Indeed
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other with polynomial overhead in time. Is
lambda-calculus a reasonable machine? Is there a way to measure the
computational complexity of a lambda-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of lambda-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating lambda-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modeled after
linear logic proof nets and admitting a decomposition of leftmost-outermost
derivations with the desired property. Thus, the LSC is invariant with respect
to, say, random access machines. The second step is to show that LSC is
invariant with respect to the lambda-calculus. The size explosion problem seems
to imply that this is not possible: having the same notions of normal form,
evaluation in the LSC is exponentially longer than in the lambda-calculus. We
solve such an impasse by introducing a new form of shared normal form and
shared reduction, deemed useful. Useful evaluation avoids those steps that only
unshare the output without contributing to beta-redexes, i.e. the steps that
cause the blow-up in size. The main technical contribution of the paper is
indeed the definition of useful reductions and the thorough analysis of their
properties.
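The size explosion phenomenon is easy to exhibit; here is a standard family (our illustration, not necessarily the family used in the paper), written in LaTeX:

    % A duplicator: one beta step doubles its argument.
    \[ \delta := \lambda x.\lambda y.\, y\,x\,x
       \qquad\qquad
       \delta\,u \to_\beta \lambda y.\, y\,u\,u \]
    % Chaining n duplicators gives a term of size O(n):
    \[ t_n := \delta(\delta(\cdots\delta(z)\cdots))
       \quad (n \text{ occurrences of } \delta) \]
    % Reducing innermost, each of the n steps doubles the current result:
    \[ t_n \to_\beta^{\,n} u_n, \qquad
       u_1 = \lambda y.\, y\,z\,z, \qquad
       u_{k+1} = \lambda y.\, y\,u_k\,u_k, \]
    % so the normal form u_n has size Theta(2^n): linearly many steps,
    % exponentially large output.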
Distilling Abstract Machines (Long Version)
It is well-known that many environment-based abstract machines can be seen as
strategies in lambda calculi with explicit substitutions (ES). Recently,
graphical syntaxes and linear logic led to the linear substitution calculus
(LSC), a new approach to ES that is halfway between big-step calculi and
traditional calculi with ES. This paper studies the relationship between the
LSC and environment-based abstract machines. While traditional calculi with ES
simulate abstract machines, the LSC rather distills them: some transitions are
simulated while others vanish, as they map to a notion of structural
congruence. The distillation process unveils that abstract machines in fact
implement weak linear head reduction, a notion of evaluation having a central
role in the theory of linear logic. We show that such a pattern applies
uniformly in call-by-name, call-by-value, and call-by-need, catching many
machines in the literature. We start by distilling the KAM, the CEK, and the
ZINC, and then provide simplified versions of the SECD, the lazy KAM, and
Sestoft's machine. Along the way we also introduce some new machines with
global environments. Moreover, we show that distillation preserves the time
complexity of the executions, i.e. the LSC is a complexity-preserving
abstraction of abstract machines.
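For concreteness, here is a textbook Krivine Abstract Machine (KAM), the first machine distilled in the paper, as a Haskell sketch (our rendering, with de Bruijn indices, closures, and an argument stack; the paper's presentation differs):

    -- Textbook KAM: weak head evaluation with closures (our sketch).
    data Tm = Var Int | Lam Tm | App Tm Tm deriving Show

    data Clo = Clo Tm Env deriving Show   -- a term paired with its environment
    type Env = [Clo]                      -- one closure per de Bruijn index

    -- State: (code, environment, argument stack).
    kam :: Tm -> Env -> [Clo] -> Maybe Clo
    kam (App t u) e stk     = kam t e (Clo u e : stk)   -- push the argument
    kam (Lam t) e (c : stk) = kam t (c : e) stk         -- bind the argument
    kam (Var n) e stk
      | n < length e        = let Clo t e' = e !! n     -- look up and jump
                              in kam t e' stk
    kam t e []              = Just (Clo t e)            -- weak head normal form
    kam _ _ _               = Nothing                   -- stuck open term

    -- (\x. x) (\y. y) evaluates to the closure of \y. y.
    main :: IO ()
    main = print (kam (App (Lam (Var 0)) (Lam (Var 0))) [] [])

In the paper's terminology, some of these transitions are simulated by the LSC while the others are absorbed by structural congruence.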
A Strong Distillery
Abstract machines for the strong evaluation of lambda-terms (that is, under
abstractions) are a mostly neglected topic, despite their use in the
implementation of proof assistants and higher-order logic programming
languages. This paper introduces a machine for the simplest form of strong
evaluation, leftmost-outermost (call-by-name) evaluation to normal form,
proving it correct and complete, and bounding its overhead. Such a machine, deemed
Strong Milner Abstract Machine, is a variant of the KAM computing normal forms
and using just one global environment. Its properties are studied via a special
form of decoding, called a distillation, into the Linear Substitution Calculus,
neatly reformulating the machine as a standard micro-step strategy for explicit
substitutions, namely linear leftmost-outermost reduction, i.e., the extension
to normal form of linear head reduction. Additionally, the overhead of the
machine is shown to be linear both in the number of steps and in the size of
the initial term, validating its design. The study highlights two distinguishing
features of strong machines, namely backtracking phases and their interactions
with abstractions and environments.
Comment: Accepted at APLAS 201
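The strategy the Strong MAM implements, leftmost-outermost evaluation to full normal form, can be specified independently of any machine; below is a naive substitution-based normalizer for it (our sketch, with de Bruijn indices; the machine exists precisely to avoid the costly substitutions this version performs):

    -- Leftmost-outermost (normal order) reduction to normal form.
    data Tm = Var Int | Lam Tm | App Tm Tm deriving Show

    -- shift d c t: add d to every index >= c (the free variables of t).
    shift :: Int -> Int -> Tm -> Tm
    shift d c (Var k)   = Var (if k >= c then k + d else k)
    shift d c (Lam t)   = Lam (shift d (c + 1) t)
    shift d c (App t u) = App (shift d c t) (shift d c u)

    -- subst j s t: capture-avoiding replacement of index j by s in t.
    subst :: Int -> Tm -> Tm -> Tm
    subst j s (Var k)   = if k == j then s else Var k
    subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
    subst j s (App t u) = App (subst j s t) (subst j s u)

    -- One leftmost-outermost beta step, if any redex remains.
    step :: Tm -> Maybe Tm
    step (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
    step (App t u)       = case step t of
                             Just t' -> Just (App t' u)
                             Nothing -> App t <$> step u
    step (Lam t)         = Lam <$> step t   -- go under abstractions (strong)
    step (Var _)         = Nothing

    normalize :: Tm -> Tm
    normalize t = maybe t normalize (step t)

    -- (\x. \y. x) z w ==> z, with free variables z = Var 0 and w = Var 1.
    main :: IO ()
    main = print (normalize (App (App (Lam (Lam (Var 1))) (Var 0)) (Var 1)))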
Rewriting Modulo beta in the lambda-Pi-Calculus Modulo
The lambda-Pi-calculus Modulo is a variant of the lambda-calculus with
dependent types where beta-conversion is extended with user-defined rewrite
rules. It is an expressive logical framework and has been used to encode logics
and type systems in a shallow way. Basic properties such as subject reduction
or uniqueness of types do not hold in general in the lambda-Pi-calculus Modulo.
However, they hold if the rewrite system generated by the rewrite rules
together with beta-reduction is confluent. But this is too restrictive. To
handle the case where non-confluence comes from the interference between
beta-reduction and rewrite rules with lambda-abstractions on their left-hand
side, we introduce a notion of rewriting modulo beta for the lambda-Pi-calculus
Modulo. We prove that confluence of rewriting modulo beta is enough to ensure
subject reduction and uniqueness of types. We achieve our goal by encoding the
lambda-Pi-calculus Modulo into Higher-Order Rewrite Systems (HRSs). As a
consequence, we also make the confluence results for HRSs available for the
lambda-Pi-calculus Modulo.
Comment: In Proceedings LFMTP 2015, arXiv:1507.0759
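To see the interference concretely (the rule below is our illustration, not one from the paper): a rewrite rule with a lambda-abstraction on its left-hand side matches only syntactically, while beta-steps can create or destroy such matches; rewriting modulo beta instead matches up to beta-conversion.

    % A rule with a lambda-abstraction on its left-hand side (illustrative):
    \[ \mathsf{D}\,(\lambda x.\,\sin x) \;\to\; \lambda x.\,\cos x \]
    % The term D((\lambda y.\lambda x.\sin x)\,t) is not a syntactic instance
    % of the left-hand side, but its beta-reduct D(\lambda x.\sin x) is;
    % matching modulo beta makes the rewrite step available on both terms.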
Beta Reduction is Invariant, Indeed (Long Version)
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other with polynomial overhead in time. Is
lambda-calculus a reasonable machine? Is there a way to measure the
computational complexity of a lambda-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of lambda-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating lambda-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modelled
after linear logic and proof-nets and admitting a decomposition of
leftmost-outermost derivations with the desired property. Thus, the LSC is
invariant with respect to, say, random access machines. The second step is to
show that LSC is invariant with respect to the lambda-calculus. The size
explosion problem seems to imply that this is not possible: having the same
notions of normal form, evaluation in the LSC is exponentially longer than in
the lambda-calculus. We solve such an impasse by introducing a new form of
shared normal form and shared reduction, deemed useful. Useful evaluation
avoids those steps that only unshare the output without contributing to
beta-redexes, i.e., the steps that cause the blow-up in size.
Automated Synthesis of a Finite Complexity Ordering for Saturation
We present in this paper a new procedure to saturate a set of clauses with
respect to a well-founded ordering on ground atoms such that A < B implies
Var(A) ⊆ Var(B) for all atoms A and B. This condition is satisfied
by any atom ordering compatible with a lexicographic, recursive, or multiset
path ordering on terms. Our saturation procedure is based on a priori ordered
resolution and its main novelty is the on-the-fly construction of a finite
complexity atom ordering. In contrast with the usual notion of redundancy, we
give a new one and prove that, during saturation, every non-redundant
inference by a priori ordered resolution is also an inference by a posteriori
ordered resolution. We also prove that if a set S of clauses is saturated with
respect to an atom ordering as described above, then the problem of whether a
clause C is entailed by S is decidable.
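As a reminder of the mechanism (a toy ground example of ours, on which the a priori and a posteriori variants coincide): ordered resolution only resolves on literals whose atoms are maximal in their clauses.

    % Atom ordering: P(a) < R(a) < Q(a); on ground atoms the condition
    % "A < B implies Var(A) ⊆ Var(B)" holds vacuously.
    \[ \frac{P(a) \lor Q(a) \qquad \neg Q(a) \lor R(a)}{P(a) \lor R(a)} \]
    % The inference is allowed because Q(a) is maximal in both premises.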
Subsumption Demodulation in First-Order Theorem Proving
Motivated by applications of first-order theorem proving to software
analysis, we introduce a new inference rule, called subsumption demodulation,
to improve support for reasoning with conditional equalities in
superposition-based theorem proving. We show that subsumption demodulation is a
simplification rule that does not require radical changes to the underlying
superposition calculus. We implemented subsumption demodulation in the theorem
prover Vampire, by extending Vampire with a new clause index and adapting its
multi-literal matching component. Our experiments, using the TPTP and SMT-LIB
repositories, show that subsumption demodulation in Vampire can solve many new
problems that could so far not be solved by state-of-the-art reasoners.
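Schematically (our example and notation): ordinary demodulation rewrites with unit equalities only, whereas subsumption demodulation may use the equality literal of a non-unit clause provided its remaining literals already occur in, i.e. are subsumed by, the clause being rewritten.

    \[ \frac{f(a) \simeq a \lor Q \qquad P(f(a)) \lor Q \lor R}
            {P(a) \lor Q \lor R} \]
    % The equality clause is not a unit, so ordinary demodulation does not
    % apply; but its side literal Q occurs in the second premise, so
    % subsumption demodulation rewrites f(a) to a, and the conclusion
    % replaces (and makes redundant) the second premise.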