Transformation-Based Bottom-Up Computation of the Well-Founded Model
We present a framework for expressing bottom-up algorithms to compute the
well-founded model of non-disjunctive logic programs. Our method is based on
the notion of conditional facts and elementary program transformations studied
by Brass and Dix for disjunctive programs. However, even if we restrict their
framework to non-disjunctive programs, their residual program can grow to
exponential size, whereas for function-free programs our program remainder is
always polynomial in the size of the extensional database (EDB).
We show that particular orderings of our transformations (we call them
strategies) correspond to well-known computational methods like the alternating
fixpoint approach, the well-founded magic sets method and the magic alternating
fixpoint procedure. However, due to the confluence of our calculi, we obtain
computations of the well-founded model that are provably better than these
methods.
In contrast to other approaches, our transformation method treats magic-set-transformed
programs correctly, i.e. it always computes a relevant part of the
well-founded model of the original program.
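As a concrete illustration of the kind of computation discussed here, below is a minimal Haskell sketch of the classical alternating-fixpoint construction of the well-founded model for propositional normal programs. The names (Rule, gamma, wellFounded) and the propositional restriction are ours, not the paper's; the paper's transformation-based strategies are precisely about improving on this naive scheme.

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

type Atom = String
-- A normal rule: head <- positive body atoms, negated body atoms.
data Rule = Rule { rHead :: Atom, rPos :: [Atom], rNeg :: [Atom] }

-- gamma prog i: least model of the reduct of prog w.r.t. i, i.e. a rule
-- may fire only if none of its negated atoms is in i.
gamma :: [Rule] -> Set Atom -> Set Atom
gamma prog i = iterateUp Set.empty
  where
    iterateUp t
      | t' == t   = t
      | otherwise = iterateUp t'
      where
        t' = Set.fromList
               [ rHead r | r <- prog
                         , all (`Set.member` t) (rPos r)
                         , not (any (`Set.member` i) (rNeg r)) ]

-- Iterate gamma . gamma from the empty set: the limit w is the set of
-- true atoms, and gamma prog w is the set of non-false (true or
-- undefined) atoms.
wellFounded :: [Rule] -> (Set Atom, Set Atom)
wellFounded prog = go Set.empty
  where
    go t | t' == t   = (t, gamma prog t)
         | otherwise = go t'
      where t' = gamma prog (gamma prog t)
```

For the two-rule program p :- not q. q :- not p., wellFounded returns the pair (empty set, {p, q}): neither atom is true or false, so both are undefined, as expected.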
Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq
This paper shows equivalence of several versions of applicative similarity
and contextual approximation, and hence also of applicative bisimilarity and
contextual equivalence, in LR, the deterministic call-by-need lambda calculus
with letrec extended by data constructors, case-expressions and Haskell's
seq-operator. LR models an untyped version of the core language of Haskell. The
use of bisimilarities simplifies equivalence proofs in calculi and opens a way
for more convenient correctness proofs for program transformations. The proof
is by a fully abstract and surjective transfer into a call-by-name calculus,
which is an extension of Abramsky's lazy lambda calculus. In the latter
calculus equivalence of our similarities and contextual approximation can be
shown by Howe's method. Similarity is transferred back to LR on the basis of an
inductively defined similarity. The translation from the call-by-need letrec
calculus into the extended call-by-name lambda calculus is the composition of
two translations. The first translation replaces the call-by-need strategy by a
call-by-name strategy; its correctness is shown by exploiting the infinite
trees that emerge from unfolding the letrec expressions. The second translation
encodes letrec expressions using multi-fixpoint combinators; its correctness is
shown syntactically by comparing reductions in both calculi. A
further result of this paper is an isomorphism between the mentioned calculi,
which is also an identity on letrec-free expressions.
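One reason seq is semantically significant, and hence worth including in LR, is that it breaks eta-equivalence: f and \x -> f x can be separated by a seq-context. The following self-contained Haskell snippet is our own illustration of this point, not code from the paper.

```haskell
import Control.Exception (SomeException, evaluate, try)

f :: Int -> Int
f = undefined            -- a diverging function value

etaF :: Int -> Int
etaF = \x -> f x         -- its eta-expansion, already a lambda (in WHNF)

main :: IO ()
main = do
  -- Forcing the eta-expanded version succeeds: a lambda is a value.
  r1 <- try (evaluate (etaF `seq` "eta-expanded: converges"))
  print (r1 :: Either SomeException String)   -- Right "eta-expanded: ..."
  -- Forcing f itself hits undefined, so this seq-context diverges.
  r2 <- try (evaluate (f `seq` "unexpanded: converges"))
  print (r2 :: Either SomeException String)   -- Left <exception>
```

Any proof technique for contextual equivalence in LR therefore has to account for contexts built from seq, which is one motivation for the transfer argument the abstract describes.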
Formal Derivation of Concurrent Garbage Collectors
Concurrent garbage collectors are notoriously difficult to implement
correctly. Previous approaches to producing correct collectors have mainly
been based on posit-and-prove verification or on the application of
domain-specific templates and transformations. We show how to derive the upper
reaches of a family of concurrent garbage collectors by refinement from a
formal specification, emphasizing the application of domain-independent design
theories and transformations. A key contribution is an extension to the
classical lattice-theoretic fixpoint theorems to account for the dynamics of
concurrent mutation and collection.
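For readers unfamiliar with the lattice-theoretic view this abstract builds on, here is a toy Haskell sketch (ours, with hypothetical names such as liveSet) of the sequential starting point: the set of live nodes is the least fixpoint of F(S) = roots ∪ successors(S), computed by Kleene iteration from the bottom element. The paper's contribution is extending such fixpoint theorems to concurrent mutation, which this sketch does not handle.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)
import qualified Data.Set as Set
import Data.Set (Set)

type Node = Int
type Heap = Map Node [Node]   -- adjacency: each node lists its references

-- Kleene iteration: least fixpoint of f above the given start element.
lfp :: Eq a => (a -> a) -> a -> a
lfp f x = let x' = f x in if x' == x then x else lfp f x'

-- Live nodes = least fixpoint of step s = roots `union` successors s.
liveSet :: Heap -> Set Node -> Set Node
liveSet heap roots = lfp step Set.empty
  where
    step s = roots `Set.union`
             Set.fromList (concat [ Map.findWithDefault [] n heap
                                  | n <- Set.toList s ])

-- Example heap in which node 4 is unreachable garbage.
demo :: Set Node
demo = liveSet (Map.fromList [(1,[2]),(2,[3]),(3,[]),(4,[1])])
               (Set.fromList [1])    -- ==> fromList [1,2,3]
```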
Theorem proving support in programming language semantics
We describe several views of the semantics of a simple programming language
as formal documents in the calculus of inductive constructions that can be
verified by the Coq proof system. Covered aspects are natural semantics,
denotational semantics, axiomatic semantics, and abstract interpretation.
Descriptions as recursive functions are also provided whenever suitable, thus
yielding a verification condition generator and a static analyser that can be
run inside the theorem prover for use in reflective proofs. Extraction of an
interpreter from the denotational semantics is also described. All different
aspects are formally proved sound with respect to the natural semantics
specification.
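The "descriptions as recursive functions" mentioned above can be pictured, in Haskell rather than Coq (an illustrative analogue only; the paper's development is entirely in the calculus of inductive constructions), as a definitional interpreter for a tiny imperative language:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type Env = Map String Int
data AExp = Lit Int | Var String | Add AExp AExp
data Com  = Skip | Assign String AExp | Seq Com Com
          | While AExp Com   -- loop while the test is nonzero

aeval :: Env -> AExp -> Int
aeval _ (Lit n)   = n
aeval e (Var x)   = Map.findWithDefault 0 x e
aeval e (Add a b) = aeval e a + aeval e b

-- Big-step execution as a recursive function; While makes it partial,
-- mirroring the fact that natural semantics only speaks about
-- terminating runs.
exec :: Env -> Com -> Env
exec e Skip         = e
exec e (Assign x a) = Map.insert x (aeval e a) e
exec e (Seq c1 c2)  = exec (exec e c1) c2
exec e w@(While b c)
  | aeval e b /= 0  = exec (exec e c) w
  | otherwise       = e
```

Proving such a function sound with respect to an inductively defined natural semantics is exactly the kind of statement the abstract says is verified inside the theorem prover.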
On the Verification of a WiMax Design Using Symbolic Simulation
In top-down multi-level design methodologies, design descriptions at higher
levels of abstraction are incrementally refined to the final realizations.
Simulation-based techniques have traditionally been used to verify that such
model refinements do not change the design functionality. Unfortunately,
computer simulation cannot completely check that a design transformation is
correct in a reasonable amount of time, as the number of test patterns required
to do so increases exponentially with the number of system state variables. In
this paper, we propose a methodology for the verification
of conformance of models generated at higher levels of abstraction in the
design process to the design specifications. We model the system behavior using
sequences of recurrence equations. We then use symbolic simulation together with
equivalence checking and property checking techniques for design verification.
Using our proposed method, we have verified the equivalence of three WiMax
system models at different levels of design abstraction, and the correctness of
various system properties on those models. Our symbolic modeling and
verification experiments show that the proposed verification methodology
provides a performance advantage over its numerical counterpart.
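To convey the flavour of symbolic simulation over recurrence equations, here is a toy Haskell sketch (entirely ours; the paper's models and checker are far richer): states range over expression trees instead of numbers, so a single symbolic run covers all concrete inputs, and checking that two models agree reduces to comparing normal forms of their symbolic outputs.

```haskell
import Data.List (sort)

-- Symbolic values: free input/state symbols and sums over them.
data Expr = SymVar String | AddE Expr Expr deriving (Eq, Show)

-- Two hypothetical models of one accumulator step at different
-- abstraction levels: same function, different association order.
stepAlg, stepRTL :: Expr -> Expr -> Expr -> Expr
stepAlg x a b = AddE x (AddE a b)      -- specification-level order
stepRTL x a b = AddE (AddE x a) b      -- refined, "pipelined" order

-- Toy normal form for sums: the sorted multiset of symbolic leaves.
leaves :: Expr -> [String]
leaves (SymVar s) = [s]
leaves (AddE l r) = leaves l ++ leaves r

equivalent :: Expr -> Expr -> Bool
equivalent e1 e2 = sort (leaves e1) == sort (leaves e2)

demo :: Bool
demo = equivalent (stepAlg x a b) (stepRTL x a b)   -- ==> True
  where x = SymVar "x0"; a = SymVar "a0"; b = SymVar "b0"
```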
Goal-Driven Query Answering for Existential Rules with Equality
Inspired by the magic sets for Datalog, we present a novel goal-driven
approach for answering queries over terminating existential rules with equality
(aka TGDs and EGDs). Our technique improves the performance of query answering
by pruning the consequences that are not relevant for the query. This is
challenging in our setting because equalities can potentially affect all
predicates in a dataset. We address this problem by combining the existing
singularization technique with two new ingredients: an algorithm for
identifying the rules relevant to a query and a new magic sets algorithm. We
show empirically that our technique can significantly improve the performance
of query answering, and that it can mean the difference between answering a
query in a few seconds or not being able to process the query at all.
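One of the two ingredients named above, identifying the rules relevant to a query, can be approximated by a backward closure over the head-to-body predicate dependency graph. The Haskell sketch below is our simplification: it ignores equality, which is precisely the complication that the paper's combination with singularization addresses.

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

type Pred = String
-- Only the predicate-level skeleton of a rule matters for relevance.
data DepRule = DepRule { headPred :: Pred, bodyPreds :: [Pred] }

-- Keep the rules whose head predicate is backward-reachable from the
-- query predicate through rule bodies.
relevantRules :: [DepRule] -> Pred -> [DepRule]
relevantRules rules query =
  [ r | r <- rules, headPred r `Set.member` reach ]
  where
    reach = go (Set.singleton query)
    go s | s' == s   = s
         | otherwise = go s'
      where s' = s `Set.union` Set.fromList
                   (concat [ bodyPreds r | r <- rules
                                         , headPred r `Set.member` s ])
```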
Mechanized semantics
The goal of this lecture is to show how modern theorem provers---in this
case, the Coq proof assistant---can be used to mechanize the specification of
programming languages and their semantics, and to reason over individual
programs and over generic program transformations, as typically found in
compilers. The topics covered include: operational semantics (small-step,
big-step, definitional interpreters); a simple form of denotational semantics;
axiomatic semantics and Hoare logic; generation of verification conditions,
with application to program proof; compilation to virtual machine code and its
proof of correctness; an example of an optimizing program transformation (dead
code elimination) and its proof of correctness.
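As a taste of the small-step style covered in the lecture, rendered in Haskell rather than the lecture's Coq (an illustrative analogue, not the lecture's own code):

```haskell
data Exp = N Int | Plus Exp Exp deriving Show

-- One reduction step, or Nothing for values (normal forms).
step :: Exp -> Maybe Exp
step (N _)              = Nothing
step (Plus (N a) (N b)) = Just (N (a + b))          -- contract a redex
step (Plus v@(N _) e)   = Plus v <$> step e         -- left arg is a value
step (Plus e1 e2)       = (`Plus` e2) <$> step e1   -- reduce leftmost first

-- Reflexive-transitive closure of step: run to a normal form.
eval :: Exp -> Exp
eval e = maybe e eval (step e)
-- eval (Plus (N 1) (Plus (N 2) (N 3)))  ==>  N 6
```

The equivalence of this small-step relation with a big-step judgment and with a definitional interpreter is a typical first mechanization exercise in such a course.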
Proving Correctness and Completeness of Normal Programs - a Declarative Approach
We advocate a declarative approach to proving properties of logic programs.
Total correctness can be separated into correctness, completeness and clean
termination; the latter includes non-floundering. Only clean termination
depends on the operational semantics, in particular on the selection rule. We
show how to deal with correctness and completeness in a declarative way,
treating programs only from the logical point of view. Specifications used in
this approach are interpretations (or theories). We point out that
specifications for correctness may differ from those for completeness, as
usually there are answers which are neither considered erroneous nor required
to be computed.
We present proof methods for correctness and completeness for definite
programs and generalize them to normal programs. For normal programs we use the
3-valued completion semantics; this is a standard semantics corresponding to
negation as finite failure. The proof methods employ solely the classical
2-valued logic. We use a 2-valued characterization of the 3-valued completion
semantics which may be of separate interest. The presented methods are compared
with an approach based on operational semantics. We also employ the ideas of
this work to generalize a known method of proving termination of normal
programs.
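The declarative correctness condition for definite programs can be stated without any reference to the operational semantics: P is correct with respect to a specification S (an interpretation) iff S is a pre-fixpoint of the immediate-consequence operator, that is, T_P(S) ⊆ S. The following toy, finite-domain Haskell sketch (ours, not the paper's formulation) checks this condition for an even/1 program.

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

type GAtom = (String, [Int])    -- ground atoms over a finite domain
type GRule = (GAtom, [GAtom])   -- a ground rule instance: (head, body)

-- Immediate-consequence operator T_P on ground programs.
tP :: [GRule] -> Set GAtom -> Set GAtom
tP prog i = Set.fromList [ h | (h, body) <- prog
                             , all (`Set.member` i) body ]

-- Correctness w.r.t. a specification: T_P(spec) is contained in spec.
correctWrt :: [GRule] -> Set GAtom -> Bool
correctWrt prog spec = tP prog spec `Set.isSubsetOf` spec

-- Example: even/1, grounded over the domain {0..6}.
evenProg :: [GRule]
evenProg = (("even", [0]), [])
         : [ (("even", [n + 2]), [("even", [n])]) | n <- [0 .. 4] ]

specEven :: Set GAtom
specEven = Set.fromList [ ("even", [n]) | n <- [0, 2, 4, 6] ]

-- correctWrt evenProg specEven  ==>  True
```

Completeness is the converse inclusion (every atom the specification requires is computed), and it is exactly this asymmetry that lets correctness and completeness use different specifications, as the abstract points out.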