Free-cut elimination in linear logic and an application to a feasible arithmetic
We prove a general form of 'free-cut elimination' for first-order theories in linear logic, yielding normal forms of proofs where cuts are anchored to nonlogical steps. To demonstrate the usefulness of this result, we consider a version of arithmetic in linear logic, based on a previous axiomatisation by Bellantoni and Hofmann. We prove a witnessing theorem for a fragment of this arithmetic via the 'witness function method', showing that the provably convergent functions are precisely the polynomial-time functions. The programs extracted are implemented in the framework of 'safe' recursive functions, due to Bellantoni and Cook, where the ! modality of linear logic corresponds to normal inputs of a safe recursive program.
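The 'safe' recursion discipline of Bellantoni and Cook can be illustrated with a small sketch (my own illustration, not code from the paper): arguments are split into normal ones, which may drive recursion, and safe ones, which may only be copied into safe positions.

```python
# Sketch of Bellantoni-Cook safe recursion on binary notation (illustrative;
# the normal/safe split is the framework's, the concrete function and names
# are this sketch's own).

def s0(x):  # append binary digit 0
    return 2 * x

def s1(x):  # append binary digit 1
    return 2 * x + 1

def concat(x, a):
    """concat(x; a): append the binary digits of NORMAL input x to SAFE input a.

    Recursion runs only on the normal argument x, and the recursive result is
    used only in a safe position (inside s0/s1), so the output grows by at
    most |x| digits -- the growth discipline that keeps safe recursion
    polynomial-time.
    """
    if x == 0:
        return a
    if x % 2 == 0:
        return s0(concat(x // 2, a))
    return s1(concat(x // 2, a))
```

Recursing on a safe argument is forbidden: feeding one step's result back into the recursion position would let the output length double at every step, giving exponential growth. In the linear-logic arithmetic of the abstract, the ! modality marks exactly the normal inputs.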
Making proofs without Modus Ponens: An introduction to the combinatorics and complexity of cut elimination
This paper is intended to provide an introduction to cut elimination which is
accessible to a broad mathematical audience. Gentzen's cut elimination theorem
is not as well known as it deserves to be, and it is tied to a lot of
interesting mathematical structure. In particular we try to indicate some
dynamical and combinatorial aspects of cut elimination, as well as its
connections to complexity theory. We discuss two concrete examples where one
can see the structure of short proofs with cuts, one concerning feasible
numbers and the other concerning "bounded mean oscillation" from real analysis.
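The combinatorial interest of cut elimination comes from its cost. A standard upper-bound shape (a toy rendering, not a result quoted from this paper) is that eliminating one layer of nested cuts can exponentiate proof size, so a proof with several nested cuts can blow up to a tower of exponentials:

```python
def cut_free_size_bound(size, depth):
    """Toy upper-bound shape for cut elimination: if removing one layer of
    nested cuts can exponentiate proof size, a proof of the given size with
    `depth` nested cuts may blow up to a tower 2^(2^(...^size))."""
    for _ in range(depth):
        size = 2 ** size
    return size
```

This is why short proofs with cuts, like those in the "feasible numbers" example, can certify statements whose cut-free proofs are astronomically large.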
Invariant Generation through Strategy Iteration in Succinctly Represented Control Flow Graphs
We consider the problem of computing numerical invariants of programs, for
instance bounds on the values of numerical program variables. More
specifically, we study the problem of performing static analysis by abstract
interpretation using template linear constraint domains. Such invariants can be
obtained by Kleene iterations that are, in order to guarantee termination,
accelerated by widening operators. In many cases, however, applying this form
of extrapolation leads to invariants that are weaker than the strongest
inductive invariant that can be expressed within the abstract domain in use.
Another well-known source of imprecision of traditional abstract interpretation
techniques stems from their use of join operators at merge nodes in the control
flow graph. The mentioned weaknesses may prevent these methods from proving
safety properties. The technique we develop in this article addresses both of
these issues: contrary to Kleene iterations accelerated by widening operators,
it is guaranteed to yield the strongest inductive invariant that can be
expressed within the template linear constraint domain in use. It also eschews
join operators by distinguishing all paths of loop-free code segments. Formally
speaking, our technique computes the least fixpoint within a given template
linear constraint domain of a transition relation that is succinctly expressed
as an existentially quantified linear real arithmetic formula. In contrast to
previously published techniques that rely on quantifier elimination, our
algorithm is proved to have optimal complexity: we prove that the decision
problem associated with our fixpoint problem is in the second level of the
polynomial-time hierarchy.
Comment: 35 pages, conference version published at ESOP 2011, this version is a CoRR version of our submission to Logical Methods in Computer Science.
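The imprecision of widening that the abstract contrasts against can be seen in a miniature interval analysis of `x = 0; while x < 100: x += 1` (my own toy example, in the interval instance of template constraint domains; the paper's actual max-strategy-iteration algorithm is not shown here):

```python
# Toy contrast: Kleene iteration with widening vs. the least fixpoint in the
# interval domain, for the loop  x = 0; while x < 100: x += 1.

INF = float('inf')

def post(iv):
    """One abstract step at the loop head: strongest post of the body under
    the guard x < 100, joined with the entry state x = 0."""
    lo, hi = iv
    glo, ghi = lo, min(hi, 99)          # intersect with guard x < 100
    if glo > ghi:
        return (0, 0)                   # body unreachable: only entry state
    return (min(0, glo + 1), max(0, ghi + 1))

def widen(a, b):
    """Standard interval widening: jump any unstable bound to infinity."""
    return (a[0] if b[0] >= a[0] else -INF,
            a[1] if b[1] <= a[1] else INF)

# Kleene iteration accelerated by widening: terminates fast, but imprecise.
x = (0, 0)
while True:
    nxt = widen(x, post(x))
    if nxt == x:
        break
    x = nxt
# x == (0, inf): the upper bound on x is lost.

# Plain Kleene iteration to the least fixpoint in the domain: precise here.
y = (0, 0)
while post(y) != y:
    y = post(y)
# y == (0, 100): the strongest interval invariant at the loop head.
```

Widening yields [0, +inf] while the least fixpoint in the same domain is [0, 100]; the technique in the paper is guaranteed to compute the latter, without the step-by-step iteration used in this sketch.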
Short Proofs for Slow Consistency
Let $\operatorname{Con}(T)\restriction x$ denote the finite consistency statement "there are no proofs of contradiction in $T$ with at most $x$ symbols". For a large class of natural theories $T$, Pudl\'ak has shown that the lengths of the shortest proofs of $\operatorname{Con}(T)\restriction n$ in the theory $T$ itself are bounded by a polynomial in $n$. At the same time he conjectures that $T$ does not have polynomial proofs of the finite consistency statements $\operatorname{Con}(T+\operatorname{Con}(T))\restriction n$. In contrast we show that Peano arithmetic ($\mathbf{PA}$) has polynomial proofs of $\operatorname{Con}(\mathbf{PA}+\operatorname{Con}^{*}(\mathbf{PA}))\restriction n$, where $\operatorname{Con}^{*}(\mathbf{PA})$ is the slow consistency statement for Peano arithmetic, introduced by S.-D. Friedman, Rathjen and Weiermann. We also obtain a new proof of the result that the usual consistency statement $\operatorname{Con}(\mathbf{PA})$ is equivalent to $\varepsilon_0$ iterations of slow consistency. Our argument is proof-theoretic, while previous investigations of slow consistency relied on non-standard models of arithmetic.
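For orientation, the slow consistency statement of S.-D. Friedman, Rathjen and Weiermann can be rendered as follows (my reconstruction of their definition, with $F_{\varepsilon_0}$ the fast-growing function at level $\varepsilon_0$ and $\mathbf{I}\Sigma_x$ the fragment of Peano arithmetic with $\Sigma_x$-induction):

```latex
\[
\operatorname{Con}^{*}(\mathbf{PA}) \;:\equiv\;
\forall x\,\bigl( F_{\varepsilon_0}(x)\!\downarrow \;\rightarrow\; \operatorname{Con}(\mathbf{I}\Sigma_x) \bigr)
\]
```

Since $F_{\varepsilon_0}$ is not provably total in $\mathbf{PA}$, this statement is strictly weaker than full consistency, which is what makes the polynomial proofs above possible.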
The Small-Is-Very-Small Principle
The central result of this paper is the small-is-very-small principle for
restricted sequential theories. The principle says roughly that whenever the
given theory shows that a property has a small witness, i.e. a witness in every
definable cut, then it shows that the property has a very small witness: i.e. a
witness below a given standard number.
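Stated schematically (my paraphrase, not the paper's exact formulation), for a restricted sequential theory $U$ and a suitable formula $\varphi$:

```latex
\[
\text{if } U \vdash \exists x\,{\in}\,I\;\varphi(x)\ \text{for every $U$-definable cut } I,
\quad\text{then}\quad
U \vdash \exists x\,{<}\,\underline{n}\;\varphi(x)\ \text{for some standard } n.
\]
```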
We draw various consequences from the central result. For example (in rough
formulations): (i) Every restricted, recursively enumerable sequential theory has a finitely axiomatized extension that is conservative w.r.t. formulas of complexity $\leq n$. (ii) Every sequential model has, for any $n$, an extension that is elementary for formulas of complexity $\leq n$, in which the intersection of all definable cuts is the natural numbers. (iii) We have reflection for $\Sigma^0_2$-sentences with sufficiently small witness in any consistent restricted theory $U$. (iv) Suppose $U$ is recursively enumerable and sequential. Suppose further that every recursively enumerable and sequential $V$ that locally interprets $U$, globally interprets $U$. Then, $U$ is mutually globally interpretable with a finitely axiomatized sequential theory.
The paper contains some careful groundwork developing partial satisfaction
predicates in sequential theories for the complexity measure depth of
quantifier alternations.
Bounded Linear Logic
A typed, modular paradigm for polynomial time computation is proposed.