    Abstract interpretation and optimising transformations for applicative programs

    This thesis describes methods for transforming applicative programs with the aim of improving their efficiency. The general justification for these techniques is presented via the concept of abstract interpretation. The work can be seen as providing mechanisms to optimise applicative programs for sequential von Neumann machines. The chapters address the following subjects. Chapter 1 gives an overview of and gentle introduction to the following technical chapters. Chapter 2 gives an introduction to and motivation for the concept of abstract interpretation necessary for a detailed understanding of the rest of the work. It includes certain theoretical developments, of which I believe the most important is the incorporation of the concept of partial functions into our notion of abstract interpretation. This is done by associating non-standard denotations with functions, just as denotational semantics gives the standard denotations. Chapter 3 gives an example of the ease with which we can talk about function objects within abstract interpretive schemes. It uses this to show how a simple language using call-by-need semantics can be augmented with a system that annotates places in a program at which call-by-value can be used without violating the call-by-need semantics. Chapter 4 extends the work of chapter 3 by showing that, under a sequentiality restriction, the substitution of call-by-value for call-by-need can be made complete, in the sense that the resulting program will possess only strict functions apart from the conditional. Chapter 5 is an attempt to apply the concepts of abstract interpretation to a completely different problem: that of incorporating destructive operators into an applicative program, in order to increase the efficiency of implementation without violating the applicative semantics. Finally, chapter 6 contains a discussion of the implications of such techniques for real languages, and in particular presents arguments that applicative languages should be seen as whole systems and not merely the applicative subset of some larger language.
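
    The strictness analysis of chapters 3 and 4 can be pictured with a minimal Haskell sketch over a two-point abstract domain (Bot for "definitely undefined", Top for "possibly defined"): a function is strict in an argument exactly when an abstract Bot for that argument forces Bot for the whole body, so annotating call-by-value there is safe. The datatype and names below are illustrative, not the thesis's own formulation.

        -- Two-point abstract domain: Bot = definitely undefined, Top = possibly defined.
        data Abs = Bot | Top deriving (Eq, Show)

        -- Join (for merging branches) and meet (for strict operators).
        lub, glb :: Abs -> Abs -> Abs
        lub Bot x = x
        lub Top _ = Top
        glb Bot _ = Bot
        glb Top x = x

        -- A tiny expression language with a strict primitive and a conditional.
        data Expr = Var String | Lit Int | Add Expr Expr | If Expr Expr Expr

        -- Abstract evaluation: unknown variables are conservatively Top.
        absEval :: [(String, Abs)] -> Expr -> Abs
        absEval env (Var x)    = maybe Top id (lookup x env)
        absEval _   (Lit _)    = Top
        absEval env (Add a b)  = absEval env a `glb` absEval env b
        absEval env (If c t e) = absEval env c `glb` (absEval env t `lub` absEval env e)

        -- Safe to use call-by-value for x if an undefined x dooms the whole body.
        strictIn :: String -> Expr -> Bool
        strictIn x body = absEval [(x, Bot)] body == Bot

        main :: IO ()
        main = print (strictIn "x" (If (Var "x") (Lit 1) (Lit 2)))  -- True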

    Efficient execution in an automated reasoning environment

    We describe a method that permits the user of a mechanized mathematical logic to write elegant logical definitions while allowing sound and efficient execution. In particular, the features supporting this method allow the user to install, in a logically sound way, alternative executable counterparts for logically defined functions. These alternatives are often much more efficient than the logically equivalent terms they replace. These features have been implemented in the ACL2 theorem prover, and we discuss several applications of the features in ACL2.

    Ministerio de Educación y Ciencia TIN2004–0388
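
    The executable-counterpart idea can be sketched outside ACL2 as well. The Haskell fragment below (illustrative only, not ACL2's actual mechanism) pairs an elegant logical definition with a faster equivalent; the equivalence is exactly the obligation the prover would discharge before soundly running the fast version in place of the slow one.

        -- Logical definition: clear but exponential-time.
        fibLogic :: Integer -> Integer
        fibLogic n | n < 2     = n
                   | otherwise = fibLogic (n - 1) + fibLogic (n - 2)

        -- Executable counterpart: logically equivalent, linear-time.
        fibExec :: Integer -> Integer
        fibExec n = go n 0 1
          where
            go 0 a _ = a
            go k a b = go (k - 1) b (a + b)

        -- The guard obligation: for all n >= 0, fibLogic n == fibExec n.
        -- Once proved, calls to fibLogic may be executed as fibExec.
        propEquiv :: Integer -> Bool
        propEquiv n = fibLogic (abs n) == fibExec (abs n)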

    An Extensible Theorem Proving Frontend

    Interactive theorem provers are software tools for computer-assisted proving, i.e. they can both verify suitably encoded proofs of logical statements and assist in constructing them. In recent years, far-reaching formalization projects in mathematics and program verification have been carried out using such theorem provers. The theorem prover Lean in particular has been used successfully not only to verify long-known mathematical theorems but also to support current mathematical research. The goal of the Lean project is nothing less than to fundamentally change the way mathematicians work, by making proofs formalized on a computer a practical alternative to pen-and-paper proofs. Laborious manual refereeing of the correctness of proofs would thus become obsolete, while at the same time it would be guaranteed that all necessary proof steps are captured exactly instead of being left to the reader's interpretation and background knowledge. To reach this goal, however, further advances in the efficiency and usability of theorem provers are needed. As a step towards this goal, this dissertation describes a new, fully extensible theorem prover user interface ("frontend") in the context of Lean 4, the next version of Lean. The task of this frontend is to describe and accept proof input textually, in a syntax that has to balance several partly conflicting goals: compactness, readability for human users, and unambiguous interpretation by the theorem prover. Since written mathematics contains an extensive set of different notations that grows from year to year and can at the same time differ between fields, authors, or even individual papers, such an interface must allow users to extend it at any time with new, expressive notations and to assign meaning to them with flexible rules. This desire for flexibility in the input language recurs at the level of individual proof steps ("tactics") as well as at higher levels of proof and program organization. I have realized the core of this desired extensibility with an expressive macro system for Lean, in which both simple syntax transformations ("syntactic sugar") and complex, type-driven translations into the prover's core language can be expressed. The macro system rests on a novel algorithm for macro hygiene, based on that of the Lisp language Racket and adapted by me to the specific requirements of theorem provers, whose task is to ensure that lexical scoping of identifiers works as intuitively expected even for complex macros. In designing the macro system, I took particular care to make it easily accessible by providing several levels of abstraction that differ in expressive power but rest on the same fundamental principles, such as the aforementioned macro hygiene. As an application example of the macro system, I describe an extension of the do-notation known from Haskell with further imperative language features.
    The extended syntax has been incorporated into Lean 4 and has fundamentally changed the way both developers and users write monadic as well as pure code. The macro system forms the "heart" of the extensible frontend, but is at the same time closely interlinked with, and dependent on, other software components of the user interface. I present the entire frontend and the surrounding Lean system, focusing on the parts to which I contributed substantially. Finally, I describe an efficient reference-counting scheme for functional programming that made a reimplementation of Lean in Lean itself, and with it the extensible frontend, possible in the first place. Specific optimizations in it for reusing allocations combine, much like the extended do-notation, the advantages of imperative and purely functional programming in a new paradigm that I call "pure imperative programming".
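
    For context on the extended do-notation: Lean 4's version builds on the do-notation known from Haskell and layers imperative constructs such as local mutable bindings and early return on top of it. The baseline being extended looks as follows in plain Haskell (this snippet is ordinary Haskell, not Lean 4 syntax).

        import Data.IORef

        -- Ordinary Haskell do-notation: monadic sequencing with explicit
        -- mutable cells. Lean 4's extended do-notation adds constructs such
        -- as `let mut` and early `return` over this style.
        sumToN :: Int -> IO Int
        sumToN n = do
          acc <- newIORef 0
          mapM_ (\i -> modifyIORef' acc (+ i)) [1 .. n]
          readIORef acc

        main :: IO ()
        main = sumToN 10 >>= print  -- prints 55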

    Heap Abstractions for Static Analysis

    Heap data is potentially unbounded and seemingly arbitrary. As a consequence, unlike stack and static memory, heap memory cannot be abstracted directly in terms of a fixed set of source variable names appearing in the program being analysed. This makes it an interesting topic of study, and there is an abundance of literature employing heap abstractions. Although most studies have addressed similar concerns, their formulations and formalisms often seem dissimilar and sometimes even unrelated. Thus, the insights gained in one description of heap abstraction may not directly carry over to some other description. This survey is a result of our quest for a unifying theme in the existing descriptions of heap abstractions. In particular, our interest lies in the abstractions and not in the algorithms that construct them. In our search for a unified theme, we view a heap abstraction as consisting of two features: a heap model to represent the heap memory and a summarization technique for bounding the heap representation. We classify the models as storeless, store based, and hybrid. We describe various summarization techniques based on k-limiting, allocation sites, patterns, variables, other generic instrumentation predicates, and higher-order logics. This approach allows us to compare the insights of a large number of seemingly dissimilar heap abstractions and also paves the way for creating new abstractions by mix-and-match of models and summarization techniques.

    Comment: 49 pages, 20 figures
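
    As a concrete instance of one model/summarization pairing from this classification, the sketch below encodes a store-based heap model summarized by allocation sites in Haskell: every concrete object is merged into the abstract node of the site that allocated it, and field writes become weak updates. The encoding and names are illustrative, not the survey's formalism.

        import qualified Data.Map.Strict as M
        import qualified Data.Set as S

        type Var  = String
        type Site = Int  -- label of an allocation site

        -- Store-based abstract heap: variables and fields of summary nodes
        -- point to sets of allocation-site nodes.
        data AbsHeap = AbsHeap
          { env  :: M.Map Var (S.Set Site)
          , heap :: M.Map (Site, String) (S.Set Site)
          } deriving Show

        -- x = new_s():  x points to exactly the summary node for site s.
        alloc :: Var -> Site -> AbsHeap -> AbsHeap
        alloc x s h = h { env = M.insert x (S.singleton s) (env h) }

        -- x.f = y:  weak update, since a summary node stands for many objects.
        store :: Var -> String -> Var -> AbsHeap -> AbsHeap
        store x f y h = h { heap = foldr add (heap h) (S.toList targets) }
          where
            targets = M.findWithDefault S.empty x (env h)
            ys      = M.findWithDefault S.empty y (env h)
            add s m = M.insertWith S.union (s, f) ys m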

    Program Improvement by Automatic Redistribution of Intermediate Results

    This paper was originally a Ph.D. thesis proposal. The problem of automatically improving the performance of computer programs has many facets. A common source of program inefficiency is the use of abstraction techniques in program design: general tools used in a specific context often do unnecessary or redundant work. Examples include needless copy operations, redundant subexpressions, multiple traversals of the same data structure and maintenance of overly complex data invariants. I propose to focus on one broadly applicable way of improving a program's performance: redistributing intermediate results so that computation can be avoided. I hope to demonstrate that this is a basic principle of optimization from which many of the current approaches to optimization may be derived. I propose to implement a system that automatically finds and exploits opportunities for redistribution in a given program. In addition to the program source, the system will accept an explanation of the correctness and purpose of the code. Beyond the specific task of program improvement, I anticipate that the research will contribute to our understanding of the design and explanatory structure of programs. Major results will include (1) definition and manipulation of a representation of the correctness and purpose of a program's implementation, and (2) definition, construction, and use of a representation of a program's dynamic behavior.

    MIT Artificial Intelligence Laboratory
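
    The kind of redistribution targeted here can be seen in a small Haskell example (illustrative only, not the proposed system): the average of a list naively traverses the list twice, and tupling the two folds redistributes the intermediate sums so that one traversal suffices.

        -- Two traversals: sum and length each walk the list.
        avgTwoPass :: [Double] -> Double
        avgTwoPass xs = sum xs / fromIntegral (length xs)

        -- One traversal: both intermediate results are computed together
        -- and redistributed to the final division.
        avgOnePass :: [Double] -> Double
        avgOnePass xs = s / fromIntegral n
          where
            (s, n) = foldl step (0, 0 :: Int) xs
            step (acc, k) x = (acc + x, k + 1)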

    Interactive program verification using virtual programs

    This thesis is concerned with ways of proving the correctness of computer programs. The first part of the thesis presents a new method for doing this. The method, called continuation induction, is based on the ideas of symbolic execution, the description of a given program by a virtual program, and the demonstration that these two programs are equivalent whenever the given program terminates. The main advantage of continuation induction over other methods is that it enables programs using a wide variety of programming constructs such as recursion, iteration, non-determinism, procedures with side-effects and jumps out of blocks to be handled in a natural and uniform way. In the second part of the thesis a program verifier which uses both this method and Floyd's inductive assertion method is described. The significance of this verifier is that it is designed to be extensible, and to this end the user can declare new functions and predicates to be used in giving a natural description of the program's intention. Rules describing these new functions can then be used when verifying the program. To actually prove the verification conditions, the system employs automatic simplification, a relatively clever matcher, a simple natural deduction system and, most importantly, the user's advice. A large number of commands are provided for the user in guiding the system to a proof of the program's correctness. The system has been used to verify various programs, including two sorting programs and a program to invert a permutation 'in place'; the proofs of the sorting programs included a proof of the fact that the final array was a permutation of the original one. Finally, some observations and suggestions are made concerning the continued development of such interactive verification systems.
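
    The relationship between a given program and its virtual program can be pictured with a toy Haskell pair (illustrative only; the thesis works over a far richer language with side effects and jumps): a concrete accumulator loop is described by a simpler recursive "virtual" definition, and the method establishes that the two agree whenever the loop terminates.

        -- Given program: iterative factorial with an accumulator.
        factLoop :: Integer -> Integer
        factLoop n = go n 1
          where
            go k acc | k <= 0    = acc
                     | otherwise = go (k - 1) (acc * k)

        -- Virtual program: the obvious recursive description.
        factSpec :: Integer -> Integer
        factSpec n | n <= 0    = 1
                   | otherwise = n * factSpec (n - 1)

        -- The equivalence to be demonstrated on terminating runs.
        propAgree :: Integer -> Bool
        propAgree n = factLoop n == factSpec n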

    The use of proof plans in tactic synthesis

    We undertake a programme of tactic synthesis. We first formalize the notion of a tactic as a rewrite rule, then give a correctness criterion for this by means of a reflection mechanism in the constructive type theory OYSTER. We further formalize the notion of a tactic specification, given as a synthesis goal and a decidability goal. We use a proof planner, CLAM, to guide the search for inductive proofs of these, and are able to successfully synthesize several tactics in this fashion. This involves two extensions to existing methods: context-sensitive rewriting and higher-order wave rules. Further, we show that from a proof of the decidability goal one may compile a pseudo-tactic to a Prolog program, which may be run to efficiently simulate the input/output behaviour of the synthetic tactic.
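
    A tactic in the sense used here maps a goal to residual subgoals or fails. A minimal Haskell model of that reading (illustrative, not the OYSTER formalization) is sketched below, together with the two combinators needed to compose rewrite-rule tactics.

        -- A goal, and a tactic as a partial map from a goal to subgoals.
        newtype Goal = Goal String deriving (Eq, Show)

        type Tactic = Goal -> Maybe [Goal]

        -- Try the first tactic; fall back to the second on failure.
        orElse :: Tactic -> Tactic -> Tactic
        orElse t u g = maybe (u g) Just (t g)

        -- Run a tactic, then apply a second one to every subgoal produced.
        andThen :: Tactic -> Tactic -> Tactic
        andThen t u g = do
          gs <- t g
          concat <$> mapM u gs

        -- A one-rule rewrite tactic: "0 + n = n" is closed outright.
        addZero :: Tactic
        addZero (Goal "0 + n = n") = Just []
        addZero _                  = Nothing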

    A Transformation-Based Foundation for Semantics-Directed Code Generation

    Interpreters and compilers are two different ways of implementing programming languages. An interpreter directly executes its program input. It is a concise definition of the semantics of a programming language and is easily implemented. A compiler translates its program input into another language. It is more difficult to construct, but the code that it generates runs faster than interpreted code. In this dissertation, we propose a transformation-based foundation for deriving compilers from semantic specifications in the form of four rules. These rules give a priori advice for staging, and allow explicit compiler derivation that would be less succinct with partial evaluation. When applied, these rules turn an interpreter that directly executes its program input into a compiler that emits the code that the interpreter would have executed. We formalize the language syntax and semantics to be used for the interpreter and the compiler, and also specify a notion of equality. It is then possible to precisely state the transformation rules and to prove both local and global correctness theorems. Although the transformation rules were developed so as to apply to an interpreter written in a denotational style, we consider how to modify non-denotational interpreters so that the rules apply. Finally, we illustrate these ideas by considering a larger example: a Prolog implementation.
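
    The staging at work here can be previewed with the standard toy example in Haskell (illustrative; the dissertation's own rules and language are richer): the interpreter re-examines the program text on every run, while the transformed version traverses the program once and returns residual code, represented below as a closure.

        import qualified Data.Map.Strict as M

        data Expr = Lit Int | Var String | Add Expr Expr

        type Env = M.Map String Int

        -- Interpreter: consults the program on every evaluation.
        eval :: Expr -> Env -> Int
        eval (Lit n)   _   = n
        eval (Var x)   env = env M.! x
        eval (Add a b) env = eval a env + eval b env

        -- "Compiled" form: all dispatch on the program happens once, up
        -- front; the residual closure is the code the interpreter would
        -- have executed.
        compile :: Expr -> (Env -> Int)
        compile (Lit n)   = \_ -> n
        compile (Var x)   = \env -> env M.! x
        compile (Add a b) = let ca = compile a
                                cb = compile b
                            in \env -> ca env + cb env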

    Functional programming and graph algorithms

    This thesis is an investigation of graph algorithms in the non-strict purely functional language Haskell. Emphasis is placed on the importance of achieving an asymptotic complexity as good as with conventional languages. This is achieved by using the monadic model for including actions on the state. Work on the monadic model was carried out at Glasgow University by Wadler, Peyton Jones, and Launchbury in the early nineties and has opened up many diverse application areas. One area is the ability to express data structures that require sharing. Although graphs are not presented in this style, the data structures that the graph algorithms use are expressed in this style. Several examples of stateful algorithms are given, including union/find for disjoint sets and the linear-time sort binsort. The graph algorithms presented are not new, but are traditional algorithms recast in a functional setting. Examples include strongly connected components, biconnected components, Kruskal's minimum-cost spanning tree, and Dijkstra's shortest paths. The presentation is lucid, giving more insight than usual. The functional setting allows for complete calculational-style correctness proofs, which is demonstrated with many examples. The benefits of using a functional language for expressing graph algorithms are quantified by looking at the issues of execution times, asymptotic complexity, correctness, and clarity, in comparison with traditional approaches. The intention is to be as objective as possible, pointing out both the weaknesses and the strengths of using a functional language.
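
    The stateful style relied on here can be illustrated with a Haskell union/find in the ST monad (a simplified sketch without ranks or path compression): the mutable array gives the conventional imperative behaviour inside runST, while the exported function remains pure.

        import Control.Monad.ST
        import Data.Array.ST

        -- Chase parent pointers to a representative (no path compression).
        find :: STUArray s Int Int -> Int -> ST s Int
        find parent i = do
          p <- readArray parent i
          if p == i then return i else find parent p

        -- Merge the equivalence classes of i and j.
        union :: STUArray s Int Int -> Int -> Int -> ST s ()
        union parent i j = do
          ri <- find parent i
          rj <- find parent j
          writeArray parent ri rj

        -- Connected components of a graph on vertices 0..n-1: pure outside.
        components :: Int -> [(Int, Int)] -> Int
        components n edges = runST $ do
          parent <- newListArray (0, n - 1) [0 .. n - 1]
          mapM_ (uncurry (union parent)) edges
          roots  <- mapM (find parent) [0 .. n - 1]
          return (length (filter id (zipWith (==) roots [0 .. n - 1])))

        main :: IO ()
        main = print (components 4 [(0, 1), (2, 3)])  -- prints 2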