Fifty years of Hoare's Logic
We present a history of Hoare's logic. Comment: 79 pages. To appear in Formal Aspects of Computing.
Mechanized semantics
The goal of this lecture is to show how modern theorem provers---in this
case, the Coq proof assistant---can be used to mechanize the specification of
programming languages and their semantics, and to reason over individual
programs and over generic program transformations, as typically found in
compilers. The topics covered include: operational semantics (small-step,
big-step, definitional interpreters); a simple form of denotational semantics;
axiomatic semantics and Hoare logic; generation of verification conditions,
with application to program proof; compilation to virtual machine code and its
proof of correctness; an example of an optimizing program transformation (dead
code elimination) and its proof of correctness
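The notions the lecture covers can be illustrated outside Coq as well. As a rough sketch (in Python, not taken from the lecture material), a small-step operational semantics for a toy arithmetic language is just a one-step reduction function, and big-step evaluation is obtained by iterating it to a value:

```python
# Minimal sketch (illustrative only) of small-step operational semantics
# for a toy expression language with constants and addition.
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    n: int

@dataclass(frozen=True)
class Plus:
    left: object
    right: object

def step(e):
    """Perform one reduction step; return None if e is already a value."""
    if isinstance(e, Const):
        return None                                  # values do not step
    if isinstance(e, Plus):
        if isinstance(e.left, Const) and isinstance(e.right, Const):
            return Const(e.left.n + e.right.n)       # rule: n1 + n2 -> n
        if isinstance(e.left, Const):
            return Plus(e.left, step(e.right))       # left is a value: reduce right
        return Plus(step(e.left), e.right)           # reduce leftmost redex
    raise ValueError("unknown expression")

def eval_small_step(e):
    """Big-step result via iterated small steps."""
    while not isinstance(e, Const):
        e = step(e)
    return e.n
```

In a proof assistant such as Coq, `step` would be an inductively defined relation and `eval_small_step` its reflexive-transitive closure, with termination and determinism proved rather than assumed.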
Compilation of extended recursion in call-by-value functional languages
This paper formalizes and proves correct a compilation scheme for
mutually-recursive definitions in call-by-value functional languages. This
scheme supports a wider range of recursive definitions than previous methods.
We formalize our technique as a translation scheme to a lambda-calculus
featuring in-place update of memory blocks, and prove the translation to be
correct. Comment: 62 pages, uses pi
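The general idea behind such in-place-update schemes can be sketched informally (in Python, with `Block` and `letrec` as illustrative names, not the paper's formal translation): pre-allocate a memory block for each mutually recursive definition, then patch the blocks in place once the right-hand sides have been evaluated:

```python
# Rough sketch (not the paper's formalism) of compiling mutual recursion
# via pre-allocated blocks updated in place ("backpatching").

class Block:
    """A one-slot mutable memory block standing in for a closure."""
    def __init__(self):
        self.fn = None            # patched later, in place
    def __call__(self, *args):
        return self.fn(*args)

def letrec(*makers):
    """Tie mutually recursive knots: each maker receives all the blocks."""
    blocks = [Block() for _ in makers]
    for block, make in zip(blocks, makers):
        block.fn = make(*blocks)  # in-place update of the pre-allocated block
    return blocks

# even/odd mutual recursion, routed through the blocks
even, odd = letrec(
    lambda even, odd: (lambda n: True if n == 0 else odd(n - 1)),
    lambda even, odd: (lambda n: False if n == 0 else even(n - 1)),
)
```

Because every recursive reference goes through a block that exists before any right-hand side runs, definitions may mention each other freely; this is the flexibility that in-place update buys over purely substitution-based encodings.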
Research in mathematical theory of computation
Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (Logic for Computable Functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first-order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special-purpose and domain-independent proof procedures, with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first-order checker
On the Relation of Interaction Semantics to Continuations and Defunctionalization
In game semantics and related approaches to programming language semantics,
programs are modelled by interaction dialogues. Such models have recently been
used in the design of new compilation methods, e.g. for hardware synthesis or
for programming with sublinear space. This paper relates such semantically
motivated non-standard compilation methods to more standard techniques in the
compilation of functional programming languages, namely continuation passing
and defunctionalization. We first show for the linear λ-calculus that
interpretation in a model of computation by interaction can be described as a
call-by-name CPS-translation followed by a defunctionalization procedure that
takes into account control-flow information. We then establish a relation
between these two compilation methods for the simply-typed λ-calculus
and end by considering recursion
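The CPS-then-defunctionalize pipeline mentioned in the abstract can be illustrated on a first-order example (a hedged Python sketch, not the paper's linear-calculus construction): each continuation lambda introduced by the CPS translation becomes a data constructor, and an `apply` function interprets those constructors:

```python
# Sketch of CPS followed by Reynolds-style defunctionalization, on factorial.
from dataclasses import dataclass

# CPS version: the continuation is a first-class function.
def fact_cps(n, k):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

# Defunctionalized continuations: one constructor per lambda above.
@dataclass
class Halt:            # the initial (identity) continuation
    pass

@dataclass
class MulBy:           # stands for `lambda r: k(n * r)`
    n: int
    k: object

def apply_k(k, r):
    """Interpret a defunctionalized continuation applied to result r."""
    if isinstance(k, Halt):
        return r
    return apply_k(k.k, k.n * r)

def fact_defun(n, k=None):
    k = Halt() if k is None else k
    if n == 0:
        return apply_k(k, 1)
    return fact_defun(n - 1, MulBy(n, k))
```

The paper's contribution is, roughly, that an interaction-semantics interpretation coincides with this kind of pipeline once control-flow information is taken into account; the sketch above shows only the two standard ingredients, not that correspondence.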
Lambda Calculus in Core Aldwych
Core Aldwych is a simple model for concurrent computation, involving the concept of agents which communicate through shared variables. Each variable will have exactly one agent that can write to it, and its value can never be changed once written, but a value can contain further variables which are written to later. A key aspect is that the reader of a value may become the writer of variables in it. In this paper we show how this model can be used to encode lambda calculus. Individual function applications can be explicitly encoded as lazy or not, as required. We then show how this encoding can be extended to cover functions which manipulate mutable variables, but with the underlying Core Aldwych implementation still using only immutable variables. The ordering of function applications then becomes an issue, with Core Aldwych able to model either the enforcement of an ordering or the retention of indeterminate ordering, which allows parallel execution
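The single-writer, write-once variable discipline described above can be mimicked loosely with threads (a Python sketch under liberal assumptions; Core Aldwych's actual syntax and semantics are not reproduced here, and `WriteOnce`, `agent`, and `apply_agent` are invented names):

```python
# Loose analogy for Core Aldwych's model: agents (threads) communicating
# through variables that have exactly one writer and are written once.
import threading

class WriteOnce:
    """A variable with one writer; readers block until it is written."""
    def __init__(self):
        self._ev = threading.Event()
        self._val = None
    def write(self, v):
        assert not self._ev.is_set(), "single writer, written exactly once"
        self._val = v
        self._ev.set()
    def read(self):
        self._ev.wait()
        return self._val

def agent(fn):
    """Spawn an agent that runs fn, communicating via WriteOnce variables."""
    t = threading.Thread(target=fn)
    t.start()
    return t

def apply_agent(f, x):
    """Encode an application f(x): an agent writes the result variable."""
    out = WriteOnce()
    agent(lambda: out.write(f(x.read())))
    return out

x = WriteOnce()
y = apply_agent(lambda n: n + 1, x)   # the agent blocks until x is written
x.write(41)
```

Note how the reader of `x` (the spawned agent) becomes the writer of `y`, echoing the paper's key point that the reader of a value may become the writer of variables inside it.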
A Step-indexed Semantics of Imperative Objects
Step-indexed semantic interpretations of types were proposed as an
alternative to purely syntactic proofs of type safety using subject reduction.
The types are interpreted as sets of values indexed by the number of
computation steps for which these values are guaranteed to behave like proper
elements of the type. Building on work by Ahmed, Appel and others, we introduce
a step-indexed semantics for the imperative object calculus of Abadi and
Cardelli. Providing a semantic account of this calculus using more
`traditional', domain-theoretic approaches has proved challenging due to the
combination of dynamically allocated objects, higher-order store, and an
expressive type system. Here we show that, using step-indexing, one can
interpret a rich type discipline with object types, subtyping, recursive and
bounded quantified types in the presence of state
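The step-indexing idea itself can be conveyed with a toy fuel-bounded evaluator (a Python sketch, not the paper's model of the imperative object calculus): a term inhabits the interpretation of a type at index k when it cannot "go wrong" within k evaluation steps, so membership is downward closed in k:

```python
# Toy illustration of step-indexed type interpretation: numerals built from
# ('zero',) and ('suc', e); anything else is a stuck ("ill-typed") term.

def eval_fuel(expr, fuel):
    """Evaluate expr, spending one fuel unit per step.
    Returns None when fuel runs out ("still safely running")."""
    if fuel == 0:
        return None
    tag = expr[0]
    if tag == 'zero':
        return 0
    if tag == 'suc':
        inner = eval_fuel(expr[1], fuel - 1)
        return None if inner is None else inner + 1
    raise RuntimeError('stuck term')     # goes wrong

def in_nat(expr, k):
    """Membership in the step-indexed interpretation of Nat at index k:
    the term does not go wrong within k evaluation steps."""
    try:
        eval_fuel(expr, k)
        return True
    except RuntimeError:
        return False
```

For example, a term with a stuck subterm buried one step deep is in the interpretation at index 1 but not at index 2; handling higher-order store, as the paper does, requires indexing the heap typing as well, which this sketch omits.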