656 research outputs found

    Program representation size in an intermediate language with intersection and union types

    The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than those incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity. Funding: National Science Foundation (CCR-9417382, CISE/CCR ESS 9806747); Sun grant (EDUD-7826-990410-US); Faculty Fellowship of the Carroll School of Management, Boston College; U.K. Engineering and Physical Sciences Research Council (GR/L 36963, GR/L 15685)
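    The tractability result above rests on sharing duplicated structure. The OCaml sketch below (not the CIL implementation itself; the type constructors and the hashcons helper are hypothetical) illustrates the standard hash-consing idea the abstract refers to: structurally equal type trees, such as the copies produced where intersection types are introduced, collapse to a single shared node.

```ocaml
(* Minimal sketch, assuming a tiny type language with intersection and union.
   Hash-consing stores one physical node per distinct shape, so duplicated
   type trees cost almost no extra space. *)

type ty =
  | Int
  | Arrow of ty * ty
  | Inter of ty list      (* intersection type: all components hold *)
  | Union of ty list      (* union type: one component holds *)

(* One table mapping a structural key to its unique representative. *)
let table : (ty, ty) Hashtbl.t = Hashtbl.create 97

let hashcons (t : ty) : ty =
  match Hashtbl.find_opt table t with
  | Some shared -> shared            (* reuse the existing node *)
  | None -> Hashtbl.add table t t; t

(* Smart constructors route every node through the table, so copies
   introduced at intersection-introduction sites share structure. *)
let arrow a b = hashcons (Arrow (a, b))
let inter ts  = hashcons (Inter ts)

let () =
  let f1 = arrow Int Int in
  let f2 = arrow Int Int in
  (* Physically equal after hash-consing, so the duplicate is free. *)
  assert (f1 == f2);
  ignore (inter [f1; f2])
```

    A side effect of the smart constructors is that physical equality (==) can stand in for structural equality on types, which is what keeps comparing and storing the many duplicated copies cheap.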

    Initial Semantics for Reduction Rules

    We give an algebraic characterization of the syntax and operational semantics of a class of simply-typed languages, such as the language PCF: we characterize simply-typed syntax with variable binding and equipped with reduction rules via a universal property, namely as the initial object of some category of models. For this purpose, we employ techniques developed in two previous works: in the first work we model syntactic translations between languages over different sets of types as initial morphisms in a category of models. In the second work we characterize untyped syntax with reduction rules as the initial object of a category of models. In the present work, we combine the techniques used earlier in order to characterize simply-typed syntax with reduction rules as the initial object of a category. The universal property yields an operator that allows one to specify translations---semantically faithful by construction---between languages over possibly different sets of types. As an example, we upgrade a translation from PCF to the untyped lambda calculus, given in previous work, to account for reduction in the source and target. Specifically, we specify a reduction semantics in the source and target languages through suitable rules. By equipping the untyped lambda calculus with the structure of a model of PCF, initiality yields a translation from PCF to the lambda calculus that is faithful with respect to the reduction semantics specified by the rules. This paper is an extended version of an article published in the proceedings of WoLLIC 2012. Comment: Extended version of arXiv:1206.4547; proves a variant of a result of the PhD thesis arXiv:1206.455
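    For readers unfamiliar with the PCF-to-lambda-calculus translation mentioned above, the OCaml sketch below shows a hand-written map of the kind the initiality property generates automatically; the datatypes, constructor names, and Church-numeral encoding are illustrative assumptions, not the paper's categorical construction, and no faithfulness claim is made for this particular encoding.

```ocaml
(* A hand-written sketch of a PCF-to-untyped-lambda-calculus translation.
   The paper derives such a map from a universal property; here it is
   written directly for illustration. *)

type pcf =
  | Var of string
  | Lam of string * pcf
  | App of pcf * pcf
  | Zero
  | Succ of pcf
  | Fix of pcf                      (* PCF fixed-point operator *)

type lam =
  | LVar of string
  | LLam of string * lam
  | LApp of lam * lam

(* Church numerals and a fixed-point combinator in the target language. *)
let church_zero = LLam ("f", LLam ("x", LVar "x"))
let church_succ =
  LLam ("n", LLam ("f", LLam ("x",
    LApp (LVar "f", LApp (LApp (LVar "n", LVar "f"), LVar "x")))))
let y_comb =
  let half = LLam ("x", LApp (LVar "f", LApp (LVar "x", LVar "x"))) in
  LLam ("f", LApp (half, half))     (* Y = \f.(\x. f (x x)) (\x. f (x x)) *)

let rec translate : pcf -> lam = function
  | Var x -> LVar x
  | Lam (x, b) -> LLam (x, translate b)
  | App (f, a) -> LApp (translate f, translate a)
  | Zero -> church_zero
  | Succ n -> LApp (church_succ, translate n)
  | Fix f -> LApp (y_comb, translate f)
```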

    Proof-Theoretic Methods for Analysis of Functional Programs

    We investigate how, in a natural deduction setting, we can specify concisely a wide variety of tasks that manipulate programs as data objects. This study will provide us with a better understanding of various kinds of manipulations of programs and also an operational understanding of numerous features and properties of a rich functional programming language. We present a technique, inspired by structural operational semantics and natural semantics, for specifying properties of, or operations on, programs. Specifications of this sort are presented as sets of inference rules and are encoded as clauses in a higher-order, intuitionistic meta-logic. Program properties are then proved by constructing proofs in this meta-logic. We argue the following points regarding these specifications and their proofs: (i) the specifications are clear and concise and they provide intuitive descriptions of the properties being described; (ii) a wide variety of program analysis tools can be specified in a single unified framework, and thus we can investigate and understand the relationship between various tools; (iii) proof theory provides a well-established and formal setting in which to examine meta-theoretic properties of these specifications; and (iv) the meta-logic we use can be implemented naturally in an extended logic programming language and thus we can produce experimental implementations of the specifications. We expect that our efforts will provide new perspectives and insights for many program manipulation tasks.
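    As a rough illustration of the kind of specification the abstract describes, the OCaml sketch below writes a big-step (natural semantics) evaluation judgment for a tiny functional language, with one match arm per inference rule. The expression language and constructor names are assumptions made for this example; the paper encodes such rules as clauses of a higher-order intuitionistic meta-logic rather than as ML code.

```ocaml
(* Minimal sketch: a natural-semantics evaluation judgment  e => v  for a
   small functional language, with each match arm mirroring one rule. *)

type expr =
  | Num of int
  | Plus of expr * expr
  | Fun of string * expr
  | App of expr * expr
  | Var of string

type value =
  | VNum of int
  | VClosure of string * expr * env
and env = (string * value) list

let rec eval (env : env) : expr -> value = function
  | Num n -> VNum n                          (* numerals evaluate to themselves *)
  | Plus (e1, e2) ->                         (* evaluate both premises, then add *)
      (match eval env e1, eval env e2 with
       | VNum n1, VNum n2 -> VNum (n1 + n2)
       | _ -> failwith "plus expects numbers")
  | Fun (x, body) -> VClosure (x, body, env) (* abstractions are values *)
  | Var x -> List.assoc x env                (* look up the variable's value *)
  | App (f, a) ->                            (* evaluate function, argument, then body *)
      (match eval env f with
       | VClosure (x, body, cenv) -> eval ((x, eval env a) :: cenv) body
       | _ -> failwith "application of a non-function")
```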