    Abstract Interpretation of Polymorphic Higher-Order Functions

    This thesis describes several abstract interpretations of polymorphic functions. In all the interpretations, information about any instance of a polymorphic function is obtained from that of the smallest instance, thus avoiding computing the instance directly. This is useful in the case of recursive functions, because it avoids the expensive computation of finding fixed points of functionals corresponding to complex instances. We define an explicitly typed polymorphic language with the Hindley-Milner type system to illustrate our ideas, and provide two semantics of polymorphism that relate separate instances of any polymorphic function. The choice of which semantics to use depends on the particular program analysis we want to study. For studying strictness analysis and binding-time analysis, we introduce a semantics based on embedding-closure pairs. We see how the abstract function of the smallest instance of a polymorphic function is used in building an approximation to that of any instance. Furthermore, we extend the language to include lists, and describe both strictness analysis and binding-time analysis of lists. This work thus extends previous work by others, on analyses of polymorphic first-order functions and of monomorphic higher-order functions, to polymorphic higher-order functions. In relating distinct instances of a polymorphic function, the approximate abstract function is expressed as the greatest lower bound of a set of functions. This can be expensive to compute; however, there are often ways of obtaining the same result by considering a smaller set of functions. Another issue concerns how close the approximations are to the exact values. In the first-order case, it is shown that the approximate values coincide with the exact values. In general this is not the case, but experimental results on strictness analysis indicate that good approximations are obtained. Embedding-projection pairs are used to provide a semantics that is convenient for termination analysis of polymorphic functions. We show that the abstract interpretation of an instance can be approximated by the least upper bound of a set of functions that are built from that of the smallest instance.
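
    To make the per-instance cost concrete, the following is a minimal Haskell sketch of the kind of fixed-point computation involved: naive Kleene iteration for strictness analysis over the two-point domain. The example function and its abstract functional are illustrative assumptions, not taken from the thesis.

        -- Two-point strictness domain: Bot = definitely undefined,
        -- Top = possibly defined.
        data Two = Bot | Top deriving (Eq, Show)

        lub, glb :: Two -> Two -> Two
        lub Top _ = Top
        lub Bot x = x
        glb Bot _ = Bot
        glb Top x = x

        -- Abstract two-argument functions, compared pointwise on the finite domain.
        type AbsF = Two -> Two -> Two

        sameF :: AbsF -> AbsF -> Bool
        sameF f g = and [ f a b == g a b | a <- [Bot, Top], b <- [Bot, Top] ]

        -- Kleene iteration from the everywhere-Bot function; the domain is
        -- finite, so the iteration reaches the least fixed point.
        fixAbs :: (AbsF -> AbsF) -> AbsF
        fixAbs phi = go (\_ _ -> Bot)
          where go f = let f' = phi f in if sameF f f' then f else go f'

        -- Abstract functional for:  f x y = if x == 0 then y else f (x - 1) (y + 1)
        -- (a conditional becomes: test `glb` (branch1 `lub` branch2))
        phiF :: AbsF -> AbsF
        phiF f x y = x `glb` (y `lub` f x y)

        -- fixAbs phiF Bot Top == Bot, so f is strict in its first argument.

    Repeating such an iteration for every complex instance of a polymorphic function is exactly the cost that approximation from the smallest instance is meant to avoid.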

    Static analysis of Martin-Löf's intuitionistic type theory

    Martin-Löf's intuitionistic type theory has been under investigation in recent years as a potential source for future functional programming languages. This is due to properties that greatly aid the derivation of provably correct programs. These include the Curry-Howard correspondence (whereby logical formulas may be seen as specifications, and proofs of logical formulas as programs) and strong normalisation (i.e. the evaluation of every proof/program must terminate). Unfortunately, a corollary of these properties is that programs may contain computationally irrelevant proof objects: proofs which are not to be printed as part of the result of a program. We show how a series of static analyses may be used to improve the efficiency of type theory as a lazy functional programming language. In particular, we show how variants of abstract interpretation may be used to eliminate unnecessary computations in the object code that results from a type-theoretic program. After an informal treatment of the application of abstract interpretation to type theory (where we discuss the features of type theory which make it particularly amenable to such an approach), we give formal proofs of correctness of our abstract interpretation techniques with regard to the semantics of type theory. We subsequently describe how we have implemented abstract interpretation techniques within the Ferdinand functional language compiler. Ferdinand was developed as a lazy functional programming system by Andrew Douglas at the University of Kent at Canterbury. Finally, we show how other static analysis techniques may be applied to type theory; some of these techniques use the abstract interpretation mechanisms previously discussed.
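
    As a toy illustration of a computationally irrelevant proof object, here is a hypothetical Haskell sketch (Haskell rather than type theory, and not Ferdinand's actual representation): under lazy evaluation the certificate below is never demanded, so an analysis of the kind described could erase it from the object code.

        -- Hypothetical example: a value paired with a "proof object"
        -- (a String stands in for a real proof term).
        data Certified a = Certified a String

        double :: Int -> Certified Int
        double n = Certified (2 * n) ("proof that " ++ show (2 * n) ++ " is even")

        -- Only the value is ever used, so the proof component is never
        -- forced and is a candidate for erasure.
        useResult :: Int -> Int
        useResult n = case double n of Certified v _ -> v + 1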

    Static analysis of functional languages

    Static analysis is the name given to a number of compile-time analysis techniques used to automatically generate information which can lead to improvements in the execution performance of functional languages. This thesis provides an introduction to these techniques and their implementation. The abstract interpretation framework is an example of a technique used to extract information from a program by providing the program with an alternative semantics and evaluating the program over a non-standard domain, whose elements represent certain properties of interest. This framework is examined in detail, as well as various extensions and variants of it. The use of binary logical relations and program logics as alternative formulations of the framework, and of partial equivalence relations as an extension to it, is also examined. The projection analysis framework determines how much of a sub-expression can be evaluated by examining the context in which the expression is to be evaluated, and provides an elegant method for extracting particular types of information from data structures; this framework is also examined. The most costly operation in implementing an analysis is the computation of fixed points, and methods developed to make this process more efficient are discussed. The final chapter highlights the dependencies and relationships between the different frameworks and their mathematical disciplines.
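
    As a sketch of one standard way of making fixed-point computation more efficient, the following Haskell fragment implements worklist-based chaotic iteration over a finite lattice, re-evaluating an equation only when one of its inputs changes. The solver and the example equations are illustrative assumptions, not the thesis's algorithms.

        import qualified Data.Map as M

        type Var = String
        type Env = M.Map Var Bool                 -- two-point lattice: False ⊑ True
        type Equation = (Var, Env -> Bool, [Var]) -- (variable, right-hand side, its dependencies)

        -- Worklist solver for a system x_i = f_i(env) of monotone equations.
        solve :: [Equation] -> Env
        solve eqs = go initial [ v | (v, _, _) <- eqs ]
          where
            initial   = M.fromList [ (v, False) | (v, _, _) <- eqs ]
            rhsOf v   = head [ f | (w, f, _) <- eqs, w == v ]
            usersOf v = [ w | (w, _, ds) <- eqs, v `elem` ds ]
            go env []       = env
            go env (v : wl)
              | new == env M.! v = go env wl
              | otherwise        = go (M.insert v new env) (wl ++ usersOf v)
              where new = rhsOf v env

        -- Example:  x = y || x,  y = True
        -- solve [("x", \e -> e M.! "y" || e M.! "x", ["y", "x"]), ("y", const True, [])]
        --   == M.fromList [("x", True), ("y", True)]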

    Partitioning non-strict languages for multi-threaded code generation

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. By Satyan R. Coorg. Includes bibliographical references (p. 107-109).

    Efficient Late Binding of Dynamic Function Compositions

    Adaptive software becomes more and more important as computing is increasingly context-dependent. Runtime adaptability can be achieved by dynamically selecting and applying context-specific code. Role-oriented programming has been proposed as a paradigm to enable runtime-adaptive software by design. Roles change the objects' behavior at runtime and thus allow adapting the software to a given context. However, this increased variability and expressiveness has a direct impact on performance and memory consumption, and we found a high overhead in the steady-state performance of executing compositions of adaptations. This paper presents a new approach that uses run-time information to construct a dispatch plan that can be executed efficiently by the JVM; the concept of late binding is thereby extended to dynamic function compositions. We evaluated the implementation with a benchmark for role-oriented programming languages that leverages context-dependent role semantics, achieving a mean speedup of 2.79× over the regular implementation.
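
    The following is a conceptual Haskell sketch of the core idea only: the adaptations applicable in a context are composed once into a cached "dispatch plan", so the composition cost is not paid on every call. The paper's JVM-level mechanism is not modelled here, and the contexts and adaptations are made up for illustration.

        import qualified Data.Map as M
        import Data.IORef

        type Context = String
        type Adaptation = Int -> Int   -- hypothetical behaviour type

        -- Hypothetical role adaptations active in each context.
        adaptationsFor :: Context -> [Adaptation]
        adaptationsFor "discount" = [subtract 5, (* 2)]
        adaptationsFor _          = [id]

        -- A dispatcher that builds the composed dispatch plan for a
        -- context on first use and caches it for later calls.
        newDispatcher :: IO (Context -> Int -> IO Int)
        newDispatcher = do
          cache <- newIORef (M.empty :: M.Map Context Adaptation)
          pure $ \ctx x -> do
            plans <- readIORef cache
            plan <- case M.lookup ctx plans of
              Just f  -> pure f
              Nothing -> do
                let f = foldr (.) id (adaptationsFor ctx)   -- compose once per context
                modifyIORef' cache (M.insert ctx f)
                pure f
            pure (plan x)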

    A Sparse Program Dependence Graph For Object Oriented Programming Languages

    The Program Dependence Graph (PDG) has achieved widespread acceptance as a useful tool for software engineering, program analysis, and automated compiler optimizations. This thesis presents the Sparse Object Oriented Program Dependence Graph (SOOPDG), a formalism that contains elements of traditional PDGs adapted to compactly represent programs written in object-oriented languages such as Java. This formalism is called sparse because, in contrast to other OO and Java-specific adaptations of PDGs, it introduces few node types and no new edge types beyond those used in traditional dependence-based representations. This results in correct program representations using smaller graph structures and simpler semantics when compared to other OO formalisms. We introduce the Single Flow to Use (SFU) property, which requires that exactly one definition of each variable be available for each use. We demonstrate that the SOOPDG, with its support for the SFU property coupled with a higher-order rewriting semantics, is sufficient to represent static Java-like programs and dynamic program behavior. We present algorithms for creating SOOPDG representations from program text, and describe graph rewriting semantics. We also present algorithms for common static analysis techniques such as program slicing, inheritance analysis, and call chain analysis. We contrast the SOOPDG with two previously published OO graph structures, the Java System Dependence Graph and the Java Software Dependence Graph. The SOOPDG results in comparatively smaller static representations of programs, cleaner graph semantics, and potentially more accurate program analysis. Finally, we introduce the Simulation Dependence Graph (SDG), a related representation developed specifically to represent simulation systems but extensible to more general component-based software design paradigms. The SDG allows formal reasoning about issues such as component composition, a property critical to the creation and analysis of complex simulation systems and component-based design systems.
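
    As an illustration of the slicing such graphs support, here is a small Haskell sketch of backward slicing as reachability over a toy dependence graph; the graph representation is an assumption for illustration, not the SOOPDG itself.

        import qualified Data.Map as M
        import qualified Data.Set as S

        type Node = String
        type DepGraph = M.Map Node [Node]   -- each node maps to the nodes it depends on

        -- A backward slice from a node is everything it transitively
        -- depends on (via data or control dependence edges).
        backwardSlice :: DepGraph -> Node -> S.Set Node
        backwardSlice g = go S.empty
          where
            go seen n
              | n `S.member` seen = seen
              | otherwise         = foldl go (S.insert n seen) (M.findWithDefault [] n g)

        -- Example: for  x = 1; y = x; z = x + y  the slice for "z" is {x, y, z}:
        -- backwardSlice (M.fromList [("x", []), ("y", ["x"]), ("z", ["x", "y"])]) "z"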

    Projection-Based Program Analysis

    Projection-based program analysis techniques are remarkable for their ability to give highly detailed and useful information not obtainable by other methods. The first proposed projection-based analysis techniques were those of Wadler and Hughes for strictness analysis, and Launchbury for binding-time analysis; both techniques are restricted to analysis of first-order monomorphic languages. Hughes and Launchbury generalised the strictness analysis technique, and Launchbury the binding-time analysis technique, to handle polymorphic languages, again restricted to first order. Other than a general approach to higher-order analysis suggested by Hughes, and an ad hoc implementation of higher-order binding-time analysis by Mogensen, neither of which had any formal notion of correctness, there has been no successful generalisation to higher-order analysis. We present a complete redevelopment of monomorphic projection-based program analysis from first principles, starting by considering the analysis of functions (rather than programs) to establish bounds on the intrinsic power of projection-based analysis, showing also that projection-based analysis can capture interesting termination properties. The development of program analysis proceeds in two distinct steps: first for first-order, then higher order. Throughout we maintain a rigorous notion of correctness and prove that our techniques satisfy their correctness conditions. Our higher-order strictness analysis technique is able to capture various so-called data-structure-strictness properties such as head strictness-the fact that a function may be safely assumed to evaluate the head of every cons cell in a list for which it evaluates the cons cell. Our technique, and Hunt's PER-based technique (originally proposed at about the same time as ours), are the first techniques of any kind to capture such properties at higher order. Both the first-order and higher-order techniques are the first projection-based techniques to capture joint strictness properties-for example, the fact that a function may be safely assumed to evaluate at least one of several arguments. The first-order binding-time analysis technique is essentially the same as Launchbury's; the higher-order technique is the first such formally-based higher-order generalisation. Ours are the first projection-based termination analysis techniques, and are the first techniques of any kind that are able to detect termination properties such as head termination-the fact that termination of a cons cell implies termination of the head. A notable feature of the development is the method by which the first-order analysis semantics are generalised to higher-order: except for the fixed-point constant the higher-order semantics are all instances of a higher-order semantics parameterised by the constants defining the various first-order semantics

    Programmiersprachen und Rechenkonzepte

    Since 1984, the GI special interest group "Programmiersprachen und Rechenkonzepte" has held a workshop each spring at the Physikzentrum Bad Honnef. The meeting serves primarily for getting acquainted, exchanging experience, discussion, and deepening mutual contacts. In this forum, talks and demonstrations of both completed and ongoing work are presented, on topics including (but not limited to): languages and language paradigms; correctness of design and implementation; tools; software/hardware architectures; specification and design; validation and verification; implementation and integration; safety and security; embedded systems; and low-level, hardware-oriented programming. This technical report compiles some of the presented contributions.