687 research outputs found
A Transformation-Based Foundation for Semantics-Directed Code Generation
Interpreters and compilers are two different ways of implementing
programming languages. An interpreter directly executes its program
input. It is a concise definition of the semantics of a programming
language and is easily implemented. A compiler translates its program
input into another language. It is more difficult to construct, but
the code that it generates runs faster than interpreted code.
In this dissertation, we propose a transformation-based foundation for
deriving compilers from semantic specifications in the form of four
rules. These rules give a priori advice for staging, and allow
explicit compiler derivation that would be less succinct with partial
evaluation. When applied, these rules turn an interpreter that
directly executes its program input into a compiler that emits the
code that the interpreter would have executed.
We formalize the language syntax and semantics to be used for the
interpreter and the compiler, and also specify a notion of equality.
It is then possible to precisely state the transformation rules and to
prove both local and global correctness theorems. And although the
transformation rules were developed so as to apply to an interpreter
written in a denotational style, we consider how to modify
non-denotational interpreters so that the rules apply. Finally, we
illustrate these ideas by considering a larger example: a Prolog
implementation
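To make the staging idea concrete, the following OCaml fragment is an illustrative sketch only, not the dissertation's four rules: a toy interpreter for an invented arithmetic language, and a "compiler" in the weak sense of a staged interpreter that performs all syntax dispatch before run-time input arrives and returns a closure rather than emitted code.

```ocaml
(* A minimal sketch (not the dissertation's four rules): staging a toy
   interpreter so that all syntax dispatch happens before run time.
   The language, constructor names and types are invented for illustration. *)

type exp =
  | Lit of int
  | Var of string
  | Add of exp * exp

type env = (string * int) list

(* Interpreter: re-examines the syntax tree on every run. *)
let rec eval (e : exp) (env : env) : int =
  match e with
  | Lit n -> n
  | Var x -> List.assoc x env
  | Add (e1, e2) -> eval e1 env + eval e2 env

(* Staged "compiler": traverses the syntax tree once and returns residual
   code (here, an OCaml closure) in which the dispatch has already been paid. *)
let rec compile (e : exp) : env -> int =
  match e with
  | Lit n -> fun _ -> n
  | Var x -> fun env -> List.assoc x env
  | Add (e1, e2) ->
      let c1 = compile e1 and c2 = compile e2 in
      fun env -> c1 env + c2 env

let () =
  let prog = Add (Var "x", Lit 1) in
  assert (eval prog [("x", 41)] = compile prog [("x", 41)]);
  print_endline (string_of_int (compile prog [("x", 41)]))   (* prints 42 *)
```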
Classical logic, continuation semantics and abstract machines
One of the goals of this paper is to demonstrate that denotational semantics is useful for operational issues like implementation of functional languages by abstract machines. This is exemplified in a tutorial way by studying the case of extensional untyped call-by-name λ-calculus with Felleisen's control operator 𝒞. We derive the transition rules for an abstract machine from a continuation semantics which appears as a generalization of the ¬¬-translation known from logic. The resulting abstract machine appears as an extension of Krivine's machine implementing head reduction. Though the result, namely Krivine's machine, is well known, our method of deriving it from continuation semantics is new and applicable to other languages (e.g. call-by-value variants). A further new result is that Scott's D∞-models are all instances of continuation models. Moreover, we extend our continuation semantics to Parigot's λμ-calculus, from which we derive an extension of Krivine's machine for λμ-calculus. The relation between continuation semantics and the abstract machines is made precise by proving computational adequacy results employing an elegant method introduced by Pitts
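For reference, the machine derived in the paper has the following standard presentation. This OCaml rendering (de Bruijn indices, without the control operator 𝒞 and without the λμ extension) is a textbook version of Krivine's machine, not code taken from the paper.

```ocaml
(* A compact rendering of Krivine's machine for pure call-by-name
   lambda-calculus with de Bruijn indices (the control operator C and the
   lambda-mu extension studied in the paper are omitted).  Names are mine. *)

type term =
  | Var of int            (* de Bruijn index *)
  | Lam of term
  | App of term * term

type closure = Clo of term * env
and env = closure list

(* One machine state: the term under focus, its environment,
   and a stack of pending (call-by-name) argument closures. *)
let rec run (t : term) (e : env) (s : closure list) : term * env =
  match t, e, s with
  | App (t1, t2), _, _          -> run t1 e (Clo (t2, e) :: s)  (* push argument *)
  | Lam t1, _, c :: s'          -> run t1 (c :: e) s'           (* pop into env  *)
  | Var 0, Clo (t', e') :: _, _ -> run t' e' s                  (* look up       *)
  | Var n, _ :: e', _           -> run (Var (n - 1)) e' s       (* skip binding  *)
  | _                           -> (t, e)                       (* weak head normal form *)

(* (\x. x) (\y. y)  reduces to  \y. y *)
let () =
  match run (App (Lam (Var 0), Lam (Var 0))) [] [] with
  | Lam _, _ -> print_endline "whnf reached"
  | _ -> assert false
```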
A consistent extension of the lambda-calculus as a base for functional programming languages
Church's lambda-calculus is modified by introducing a new mechanism, the lambda-bar operator #, which neutralizes the effect of one preceding lambda binding. This operator can be used in such a way that renaming of bound variables in any reduction sequence can be avoided, with the effect that efficient interpreters with comparatively simple machine organization can be designed. It is shown that any semantic model of the pure λ-calculus also serves as a model of this modified reduction calculus, which guarantees smooth semantic theories. The Berkling Reduction Language (BRL) is a new functional programming language based upon this modification
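One possible reading of the mechanism (an assumption on my part, not taken from the paper) is that an occurrence of x under k lambda-bar markers refers to the (k+1)-th enclosing binding of x, so an interpreter only needs an environment lookup that skips same-named bindings and never has to rename. A sketch under that assumption:

```ocaml
(* A speculative sketch of one reading of the lambda-bar operator #:
   an occurrence #...#x (k markers) refers to the (k+1)-th enclosing
   binding of x, so lookup skips k bindings of the same name and no
   alpha-renaming is ever needed.  This is my reading, not the paper's code. *)

(* Environment as a list of (name, value) pairs, innermost binding first. *)
let rec lookup (x : string) (skip : int) (env : (string * 'a) list) : 'a =
  match env with
  | [] -> failwith ("unbound variable " ^ x)
  | (y, v) :: rest ->
      if y = x then
        if skip = 0 then v else lookup x (skip - 1) rest
      else lookup x skip rest

let () =
  (* In  \x. \x. #x,  the occurrence #x (skip = 1) sees the outer x. *)
  let env = [("x", "inner"); ("x", "outer")] in
  print_endline (lookup "x" 1 env)   (* prints "outer" *)
```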
How Is a Knowledge Representation System Like a Piano?
The research reported here was supported by National Institutes of Health Grant No. 1 P41 RR 01096-02 from the Division of Research Resources, and was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. In the summer of 1978 a decision was made to devote a special issue of the SIGART newsletter to the subject of knowledge representation research. To assist in ascertaining the current state of people's thinking on this topic, the editors (Ron Brachman and myself) decided to circulate an informal questionnaire among the representation community. What was originally planned as a simple list of questions eventually developed into the current document, and we have decided to issue it as a report on its own merits. The questionnaire is offered here as a potential aid both for understanding knowledge representation research, and for analysing the philosophical foundations on which that research is based.
The questionnaire consists of two parts. Part I focuses first on specific details, but moves gradually towards more abstract and theoretical questions regarding assumptions about what knowledge representation is; about the role played by the computational metaphor; about the relationships among model, theory, and program; etc. In Part II, in a more speculative vein, we set forth for consideration nine hypotheses about various open issues in representation research.
In and Out of SSA: A Denotational Specification
We present non-standard denotational specifications of the SSA form and of its conversion processes from and to imperative programming languages. Thus, we provide a strong mathematical foundation for this intermediate code representation language used in modern compilers such as GCC or Intel CC. More specifically, we provide (1) a new functional approach to SSA, the Static Single Assignment form, together with its denotational semantics, (2) a collecting denotational semantics for a simple imperative language Imp, (3) a non-standard denotational semantics specifying the conversion of Imp to SSA and (4) a non-standard denotational semantics for the reverse SSA to Imp conversion process. These translations are proven correct, ensuring that the structure of the memory states manipulated by imperative constructs is preserved in compilers' middle ends that use the SSA form as control-flow data representation. Interestingly, as unexpected by-products of our conversion procedures, we offer (1) a new proof of the reducibility of the RAM computing model to the domain of Kleene's partial recursive functions, to which SSA is strongly related, and, on a more practical note, (2) a new algorithm to perform program slicing in imperative programming languages. All these specifications have been prototyped using GNU Common Lisp. These fundamental results prove that the widely used SSA technology is sound. Our formal denotational framework further suggests that the SSA form could become a target of choice for other optimization analysis techniques such as abstract interpretation or partial evaluation. Indeed, since the SSA form is language-independent, the resulting optimizations would be automatically enabled for any source language supported by compilers such as GCC
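As a rough illustration of the Imp-to-SSA direction on straight-line code only (no branches, hence no phi-nodes), the following OCaml sketch versions each assignment and redirects reads to the latest version; the representation and names are mine, not the paper's denotational specification.

```ocaml
(* A minimal illustration of the Imp-to-SSA direction on straight-line code
   only (no control flow, hence no phi-nodes).  Representation and names
   are mine, not the paper's specification. *)

type expr =
  | Const of int
  | Ref of string                      (* read of a source-level variable *)
  | Plus of expr * expr

type stmt = Assign of string * expr    (* x := e *)

(* In SSA every assignment defines a fresh version x_0, x_1, ... and every
   read refers to the latest version defined so far. *)
let to_ssa (prog : stmt list) : stmt list =
  let version = Hashtbl.create 16 in
  let current x =
    match Hashtbl.find_opt version x with
    | Some n -> x ^ "_" ^ string_of_int n
    | None -> failwith ("use of " ^ x ^ " before definition")
  in
  let fresh x =
    let n = 1 + (match Hashtbl.find_opt version x with Some n -> n | None -> -1) in
    Hashtbl.replace version x n;
    x ^ "_" ^ string_of_int n
  in
  let rec rename = function
    | Const n -> Const n
    | Ref x -> Ref (current x)
    | Plus (a, b) -> Plus (rename a, rename b)
  in
  List.map (fun (Assign (x, e)) -> let e' = rename e in Assign (fresh x, e')) prog

(* x := 1; x := x + 1   becomes   x_0 := 1; x_1 := x_0 + 1 *)
let () =
  match to_ssa [Assign ("x", Const 1); Assign ("x", Plus (Ref "x", Const 1))] with
  | [Assign ("x_0", _); Assign ("x_1", Plus (Ref "x_0", _))] -> print_endline "ok"
  | _ -> assert false
```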
From Interpreter to Compiler and Virtual Machine: A Functional Derivation
We show how to derive a compiler and a virtual machine from a compositional interpreter. We first illustrate the derivation with two evaluation functions and two normalization functions. We obtain Krivine's machine, Felleisen et al.'s CEK machine, and a generalization of these machines performing strong normalization, which is new. We observe that several existing compilers and virtual machines--e.g., the Categorical Abstract Machine (CAM), Schmidt's VEC machine, and Leroy's Zinc abstract machine--are already in derived form and we present the corresponding interpreter for the CAM and the VEC machine. We also consider Hannan and Miller's CLS machine and Landin's SECD machine. We derived Krivine's machine via a call-by-name CPS transformation and the CEK machine via a call-by-value CPS transformation. These two derivations hold both for an evaluation function and for a normalization function. They provide a non-trivial illustration of Reynolds's warning about the evaluation order of a meta-language
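The call-by-value end point of such a derivation has the familiar CEK shape: an evaluation function, a continuation that remembers whether the operator or the argument is still to be evaluated, and an explicit environment. The following OCaml rendering is a standard textbook version, not the paper's own code.

```ocaml
(* A standard CEK-style machine for the pure call-by-value lambda-calculus:
   a textbook rendering in OCaml, not code taken from the paper. *)

type term = Var of string | Lam of string * term | App of term * term

type value = Closure of string * term * env
and env = (string * value) list

type cont =
  | Done
  | EvalArg of term * env * cont   (* argument still to be evaluated *)
  | Apply of value * cont          (* operator value waiting for its argument *)

let rec eval (t : term) (e : env) (k : cont) : value =
  match t with
  | Var x -> continue k (List.assoc x e)
  | Lam (x, b) -> continue k (Closure (x, b, e))
  | App (t1, t2) -> eval t1 e (EvalArg (t2, e, k))

and continue (k : cont) (v : value) : value =
  match k with
  | Done -> v
  | EvalArg (t2, e, k') -> eval t2 e (Apply (v, k'))
  | Apply (Closure (x, b, e'), k') -> eval b ((x, v) :: e') k'

(* (\x. x) (\y. y)  evaluates to the closure for  \y. y *)
let () =
  match eval (App (Lam ("x", Var "x"), Lam ("y", Var "y"))) [] Done with
  | Closure ("y", Var "y", _) -> print_endline "ok"
  | _ -> assert false
```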
The programming language Jigsaw: mixins, modularity and multiple inheritance
This dissertation provides a framework for modularity in programming languages. In this framework, known as Jigsaw, inheritance is understood to be an essential linguistic mechanism for module manipulation. In Jigsaw, the roles of classes in existing languages are "unbundled" by providing a suite of operators independently controlling such effects as combination, modification, encapsulation, name resolution and sharing, all on the single notion of module. All module operators are forms of inheritance. Thus, inheritance is not in conflict with modularity in this system but is indeed its foundation. This allows a previously unobtainable spectrum of features to be combined in a cohesive manner, including multiple inheritance, mixins, encapsulation and strong typing. Jigsaw has a rigorous semantics based upon a denotational model of inheritance. Jigsaw provides a notion of modularity independent of a particular computational paradigm; it can therefore be applied to a wide variety of languages, especially special-purpose languages where the effort of designing specific mechanisms for modularity is difficult to justify but which could still benefit from such mechanisms. The framework is used to derive an extension of Modula-3 that supports the new operations. An efficient implementation strategy is developed for this extension. The performance of this scheme is on a par with the methods employed by the highest-performance object-oriented language processors currently available
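The denotational model of inheritance alluded to here is usually presented with generators and fixpoints: a module maps a late-bound self to a record of methods, a mixin maps one generator to another, and instantiation takes a fixpoint. A small OCaml sketch under that reading, with invented names:

```ocaml
(* The usual generator/fixpoint model of inheritance: a module/class is a
   generator mapping "self" to a record of methods, a mixin maps a (super)
   generator to a new generator, and instantiation takes a fixpoint.
   The toy record type and names are mine, not Jigsaw's operators. *)

type obj = { describe : unit -> string; size : unit -> int }
type generator = (unit -> obj) -> obj     (* self is passed lazily *)
type mixin = generator -> generator

(* A base module. *)
let point : generator = fun _self ->
  { describe = (fun () -> "point"); size = (fun () -> 1) }

(* A mixin: overrides describe, using both super and the late-bound self. *)
let bordered : mixin = fun super -> fun self ->
  let sup = super self in
  { sup with
    describe = (fun () -> "bordered " ^ sup.describe ()
                          ^ " of size " ^ string_of_int ((self ()).size ())) }

(* Another mixin: overrides size only. *)
let doubled : mixin = fun super -> fun self ->
  let sup = super self in
  { sup with size = (fun () -> 2 * sup.size ()) }

(* Instantiation = fixpoint of the generator. *)
let new_ (g : generator) : obj =
  let rec self () = g self in
  self ()

let () =
  let o = new_ (bordered (doubled point)) in
  print_endline (o.describe ())   (* "bordered point of size 2" *)
```

Because mixins are just functions between generators, composing them is ordinary function composition, which is the sense in which all module operators can be seen as forms of inheritance.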
Michael John Caldwell Gordon (FRS 1994), 28 February 1948 -- 22 August 2017
Michael Gordon was a pioneer in the field of interactive theorem proving and
hardware verification. In the 1970s, he had the vision of formally verifying
system designs, proving their correctness using mathematics and logic. He
demonstrated his ideas on real-world computer designs. His students extended
the work to such diverse areas as the verification of floating-point
algorithms, the verification of probabilistic algorithms and the verified
translation of source code to correct machine code. He was elected to the Royal
Society in 1994, and he continued to produce outstanding research until
retirement.
His achievements include his work at Edinburgh University helping to create
Edinburgh LCF, the first interactive theorem prover of its kind, and the ML
family of functional programming languages. He adopted higher-order logic as a
general formalism for verification, showing that it could specify hardware
designs from the gate level right up to the processor level. It turned out to
be an ideal formalism for many problems in computer science and mathematics.
His tools and techniques have exerted a huge influence across the field of
formal verification
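The gate-level specification style mentioned above is conventionally illustrated as follows (my rendering of the standard textbook example, not a formula from the memoir): devices are relations on their port values, composition is conjunction, and internal wires are hidden by existential quantification.

```latex
% Standard Gordon-style hardware specification in higher-order logic
% (a rendering of the usual textbook example, not taken from the memoir).
\[
\begin{aligned}
\mathit{NOT}(i,o) \;&\equiv\; (o = \lnot i) \\
\mathit{AND}(a,b,o) \;&\equiv\; (o = a \land b) \\
\mathit{NAND}(a,b,o) \;&\equiv\; \exists w.\; \mathit{AND}(a,b,w) \land \mathit{NOT}(w,o) \\[2pt]
\text{Correctness:}\quad & \vdash\; \mathit{NAND}(a,b,o) \implies (o = \lnot(a \land b))
\end{aligned}
\]
```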
A Rational Deconstruction of Landin's SECD Machine
Landin's SECD machine was the first abstract machine for the lambda-calculus viewed as a programming language. Both theoretically as a model of computation and practically as an idealized implementation, it has set the tone for the subsequent development of abstract machines for functional programming languages. However, and even though variants of the SECD machine have been presented, derived, and invented, the precise rationale for its architecture and modus operandi has remained elusive. In this article, we deconstruct the SECD machine into a lambda-interpreter, i.e., an evaluation function, and we reconstruct lambda-interpreters into a variety of SECD-like machines. The deconstruction and reconstructions are transformational: they are based on equational reasoning and on a combination of simple program transformations--mainly closure conversion, transformation into continuation-passing style, and defunctionalization. The evaluation function underlying the SECD machine provides a precise rationale for its architecture: it is an environment-based eval-apply evaluator with a callee-save strategy for the environment, a data stack of intermediate results, and a control delimiter. Each of the components of the SECD machine (stack, environment, control, and dump) is therefore rationalized and so are its transitions. The deconstruction and reconstruction method also applies to other abstract machines and other evaluation functions. It makes it possible to systematically extract the denotational content of an abstract machine in the form of a compositional evaluation function, and the (small-step) operational content of an evaluation function in the form of an abstract machine
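Of the program transformations named in the abstract, defunctionalization is perhaps the least familiar. The following self-contained OCaml example shows the move on a toy function (not on the paper's evaluator): the function space of continuations is replaced by a first-order data type together with an apply function.

```ocaml
(* Defunctionalization illustrated on a toy example (not the paper's
   evaluator): each continuation lambda becomes a constructor recording
   its free variables, interpreted by an apply function. *)

(* 1. Continuation-passing style, with higher-order continuations. *)
let rec fac_cps (n : int) (k : int -> int) : int =
  if n = 0 then k 1 else fac_cps (n - 1) (fun r -> k (n * r))

(* 2. Defunctionalized continuations. *)
type cont =
  | Id                          (* the initial continuation, fun r -> r *)
  | Mul of int * cont           (* fun r -> apply k (n * r); free vars n, k *)

let rec apply (k : cont) (r : int) : int =
  match k with
  | Id -> r
  | Mul (n, k') -> apply k' (n * r)

let rec fac_def (n : int) (k : cont) : int =
  if n = 0 then apply k 1 else fac_def (n - 1) (Mul (n, k))

let () =
  assert (fac_cps 5 (fun r -> r) = 120);
  assert (fac_def 5 Id = 120);
  print_endline "defunctionalization agrees"
```

Applying the same move to the continuations of a CPS-transformed evaluator is what turns them into the explicit, first-order control components of machines such as the SECD.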
- …