
    Tagungsband zum 21. Kolloquium Programmiersprachen und Grundlagen der Programmierung

    The 21st Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS 2021) continues a long-standing series of workshops initiated in 1980 by the research groups of professors Friedrich L. Bauer (TU München), Klaus Indermark (RWTH Aachen), and Hans Langmaack (CAU Kiel). The event is an open forum for all interested German-speaking researchers for the informal exchange of new ideas and results in the areas of programming language design and implementation as well as the foundations and methodology of programming. These proceedings contain the scientific contributions presented at the 21st colloquium of this series, which took place in Kiel from 27 to 29 September 2021 and was organized by the Programming Languages and Compiler Construction group (Arbeitsgruppe Programmiersprachen und Übersetzerkonstruktion) of Christian-Albrechts-Universität zu Kiel.

    Programming language abstractions for mobile code

    Scala is a general-purpose programming language developed at EPFL. It combines the most important concepts found in object-oriented and functional languages. Scala is a statically typed language; in particular, it features an advanced type system and supports local type inference. Furthermore, it integrates well with the Java and .NET platforms: their libraries are accessible without glue code, and the Scala compiler generates code for both execution environments. The Scala programming language has several features that make it desirable as a language for distributed application programming. In particular, it supports first-class functions, which are useful in relation to the notions of distributed scope and code mobility. In that context, the missing support for run-time types is one important drawback of the Java run-time environment as a target platform. This thesis focuses on the realisation of a new concept that combines essential notions from functional and distributed programming and extends the notion of lexical scoping to the distributed context. In short, we claim that the notion of lambda abstraction provides an elegant way of dealing with the dynamic rebinding of local references in a distributed execution environment. The key ideas presented in this work have been implemented in our Scala compiler. This allowed us to evaluate the techniques used, in particular their impact on the reliability and performance of distributed programs. So far, most research related to this subject has focused on functional programming languages, in particular on the ML language family.
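
    As a rough illustration of the thesis' central claim, the following is a minimal, single-process Scala sketch; the names Site, LocalRef and moveTo are invented for this example and are not the thesis' API. A mobile computation is an ordinary lambda abstraction over the site it runs at, and a LocalRef is resolved against the current site rather than the site where the closure was created, so "migrating" the closure rebinds its local references.

```scala
object MobileClosures {

  // A site with its own local console resource (hypothetical example).
  final case class Site(name: String) {
    val console: String => Unit = msg => println(s"[$name] $msg")
  }

  // A reference that is resolved against the *current* site rather than
  // the site where the enclosing closure was created.
  final class LocalRef[A](select: Site => A) {
    def get(implicit here: Site): A = select(here)
  }

  val stdout = new LocalRef[String => Unit](site => site.console)

  // The "mobile" computation is a plain lambda abstraction over the site.
  val task: Site => Unit = { implicit here =>
    val out = stdout.get          // rebound to the site where the task runs
    out("hello from a migrated closure")
  }

  // Simulated migration: running the closure at another site rebinds stdout.
  def moveTo(dest: Site)(body: Site => Unit): Unit = body(dest)

  def main(args: Array[String]): Unit = {
    val home   = Site("home")
    val remote = Site("remote")
    task(home)                    // printed through home's console
    moveTo(remote)(task)          // printed through remote's console
  }
}
```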

    Getting to the Point. Index Sets and Parallelism-Preserving Autodiff for Pointful Array Programming

    We present a novel programming language design that attempts to combine the clarity and safety of high-level functional languages with the efficiency and parallelism of low-level numerical languages. We treat arrays as eagerly-memoized functions on typed index sets, allowing abstract function manipulations, such as currying, to work on arrays. In contrast to composing primitive bulk-array operations, we argue for an explicit nested indexing style that mirrors application of functions to arguments. We also introduce a fine-grained typed effects system which affords concise and automatically-parallelized in-place updates. Specifically, an associative accumulation effect allows reverse-mode automatic differentiation of in-place updates in a way that preserves parallelism. Empirically, we benchmark against the Futhark array programming language, and demonstrate that aggressive inlining and type-driven compilation allows array programs to be written in an expressive, "pointful" style with little performance penalty. Comment: 31 pages with appendix, 11 figures; a conference submission is still under review.
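
    The following hedged Scala sketch illustrates the core idea in the abstract, not the paper's actual syntax; Fin, Table and tabulate are invented names. An array is an eagerly memoized total function on a finite index set, so a two-dimensional array is just a table of tables (currying works), and elementwise code is written in an explicitly indexed, "pointful" style.

```scala
object IndexSets {

  // A typed index set with exactly `size` elements, 0 .. size-1.
  final case class Fin(size: Int)

  // An "array" is a total function from a finite index set, memoized eagerly.
  final class Table[A](val ix: Fin, f: Int => A) {
    private val cells: Vector[A] = Vector.tabulate(ix.size)(f)
    def apply(i: Int): A = cells(i)            // indexing = function application
  }

  def tabulate[A](ix: Fin)(f: Int => A): Table[A] = new Table(ix, f)

  def main(args: Array[String]): Unit = {
    val n = Fin(3); val m = Fin(4)

    // A "2-D array" is a table of tables: currying for free.
    val matrix: Table[Table[Int]] =
      tabulate(n)(i => tabulate(m)(j => i * 10 + j))

    // Explicit nested indexing mirrors nested function application.
    println(matrix(2)(3))                      // 23

    // Pointful elementwise operation: index explicitly rather than using a
    // bulk-array map primitive.
    val rowSums: Table[Int] =
      tabulate(n)(i => (0 until m.size).map(j => matrix(i)(j)).sum)
    println((0 until n.size).map(i => rowSums(i)).toList)  // List(6, 46, 86)
  }
}
```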

    Fast Offline Partial Evaluation of Logic Programs

    One of the most important challenges in partial evaluation is the design of automatic methods for ensuring the termination of the process. In this work, we introduce sufficient conditions for the strong (i.e., independent of a computation rule) termination and quasi-termination of logic programs which rely on the construction of size-change graphs. We then present a fast binding-time analysis that takes the output of the termination analysis and annotates logic programs so that partial evaluation terminates. In contrast to previous approaches, the new binding-time analysis is conceptually simpler and considerably faster, scaling to medium-sized or even large examples. This work has been partially supported by the Spanish Ministerio de Ciencia e Innovación under grant TIN2008-06622-C03-02 and by the Generalitat Valenciana under grant PROMETEO/2011/052. Leuschel, M.; Vidal Oriola, G.F. (2014). Fast Offline Partial Evaluation of Logic Programs. Information and Computation, 235:70-97. https://doi.org/10.1016/j.ic.2014.01.005
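
    For readers unfamiliar with size-change graphs, here is a compact Scala sketch of the classical size-change termination criterion (Lee, Jones and Ben-Amram), which the termination analysis above builds on; it is illustrative only and is not the paper's analyzer. A size-change graph records which callee arguments are strictly or weakly smaller than caller arguments; the test closes the graphs under composition and requires every idempotent self-loop graph to have a strictly decreasing argument.

```scala
object SizeChange {

  // An arc: callee argument `to` is strictly smaller (strict = true) or not
  // larger (strict = false) than caller argument `from`.
  final case class Arc(from: Int, to: Int, strict: Boolean)

  // A size-change graph for a call from predicate `src` to predicate `dst`.
  final case class SCG(src: String, dst: String, arcs: Set[Arc]) {
    def compose(that: SCG): SCG = {
      require(dst == that.src)
      val composed = for {
        a <- arcs; b <- that.arcs if a.to == b.from
      } yield Arc(a.from, b.to, a.strict || b.strict)
      // keep only the strongest arc between each pair of argument positions
      val strongest = composed.groupBy(x => (x.from, x.to)).values
        .map(g => Arc(g.head.from, g.head.to, g.exists(_.strict))).toSet
      SCG(src, that.dst, strongest)
    }
  }

  // Close the initial graphs under composition (finite, so this terminates).
  def closure(init: Set[SCG]): Set[SCG] = {
    var acc = init
    var changed = true
    while (changed) {
      val next = for { g <- acc; h <- acc if g.dst == h.src } yield g.compose(h)
      val grown = acc ++ next
      changed = grown.size != acc.size
      acc = grown
    }
    acc
  }

  // Size-change termination: every idempotent self-graph has a strict i -> i arc.
  def terminates(init: Set[SCG]): Boolean =
    closure(init).forall { g =>
      !(g.src == g.dst && g.compose(g) == g) ||
        g.arcs.exists(a => a.from == a.to && a.strict)
    }

  def main(args: Array[String]): Unit = {
    // ackermann(M, N): every recursive call decreases M, or keeps M and decreases N.
    val graphs = Set(
      SCG("ack", "ack", Set(Arc(0, 0, strict = true))),
      SCG("ack", "ack", Set(Arc(0, 0, strict = false), Arc(1, 1, strict = true))),
      SCG("ack", "ack", Set(Arc(0, 0, strict = true)))
    )
    println(terminates(graphs))   // true
  }
}
```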

    Prioritized Garbage Collection: Explicit GC Support for Software Caches

    Programmers routinely trade space for time to increase performance, often in the form of caching or memoization. In managed languages like Java or JavaScript, however, this space-time tradeoff is complex. Using more space translates into higher garbage collection costs, especially at the limit of available memory. Existing runtime systems provide limited support for space-sensitive algorithms, forcing programmers into difficult and often brittle choices about provisioning. This paper presents prioritized garbage collection, a cooperative programming language and runtime solution to this problem. Prioritized GC provides an interface similar to soft references, called priority references, which identify objects that the collector can reclaim eagerly if necessary. The key difference is an API for defining the policy that governs when priority references are cleared and in what order. Application code specifies a priority value for each reference and a target memory bound. The collector reclaims references, lowest priority first, until the total memory footprint of the cache fits within the bound. We use this API to implement a space-aware least-recently-used (LRU) cache, called a Sache, that is a drop-in replacement for existing caches, such as Google's Guava library. The garbage collector automatically grows and shrinks the Sache in response to available memory and workload with minimal provisioning information from the programmer. Using a Sache, it is almost impossible for an application to experience a memory leak, memory pressure, or an out-of-memory crash caused by software caching. Comment: to appear in OOPSLA 201
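
    A library-level approximation of the priority-reference idea can be sketched in a few lines of Scala; the real system makes these decisions inside the garbage collector, and PriorityCache, weight and targetWeight below are hypothetical names used only for illustration. Entries carry a priority, and the cache evicts lowest-priority entries first until it fits the target bound; plugging in a monotonically increasing access time as the priority yields an LRU-style policy in the spirit of the Sache described above.

```scala
import scala.collection.mutable

// Hypothetical, library-level sketch of a priority-based cache: evict the
// lowest-priority entries first until the total weight fits the target bound.
final class PriorityCache[K, V](targetWeight: Long, weight: V => Long) {
  private case class Entry(value: V, priority: Long, w: Long)
  private val entries = mutable.LinkedHashMap.empty[K, Entry]
  private var total = 0L

  def put(key: K, value: V, priority: Long): Unit = {
    remove(key)                               // replacing an entry resets its weight
    val e = Entry(value, priority, weight(value))
    entries(key) = e
    total += e.w
    shrinkToTarget()
  }

  def get(key: K): Option[V] = entries.get(key).map(_.value)

  def remove(key: K): Unit =
    entries.remove(key).foreach(e => total -= e.w)

  // Analogue of the collector clearing priority references, lowest first,
  // until the footprint fits the bound.
  private def shrinkToTarget(): Unit =
    while (total > targetWeight && entries.nonEmpty) {
      val (victim, _) = entries.minBy { case (_, e) => e.priority }
      remove(victim)
    }
}

object PriorityCacheDemo {
  def main(args: Array[String]): Unit = {
    // LRU-like policy: a monotonically increasing access time is the priority.
    var clock = 0L
    val cache = new PriorityCache[String, Array[Byte]](
      targetWeight = 2048, weight = (a: Array[Byte]) => a.length.toLong)
    def cachePage(name: String): Unit = {
      clock += 1; cache.put(name, new Array[Byte](1024), clock)
    }
    cachePage("a"); cachePage("b"); cachePage("c")                 // "a" has lowest priority
    println((cache.get("a").isDefined, cache.get("c").isDefined))  // (false,true): "a" evicted
  }
}
```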

    Contract-Based Resource Verification for Higher-Order Functions with Memoization

    We present a new approach for specifying and verifying resource utilization of higher-order functional programs that use lazy evaluation and memoization. In our approach, users can specify the desired resource bound as templates with numerical holes, e.g. as steps ≤ ? * size(l) + ?, in the contracts of functions. They can also express invariants necessary for establishing the bounds that may depend on the state of memoization. Our approach operates in two phases: first generating an instrumented first-order program that accurately models the higher-order control flow and the effects of memoization on resources using sets, algebraic datatypes and mutual recursion, and then verifying the contracts of the first-order program by producing verification conditions of the form ∃∀ using an extended assume/guarantee reasoning. We use our approach to verify precise bounds on resources such as evaluation steps and number of heap-allocated objects on 17 challenging data structures and algorithms. Our benchmarks, comprising 5K lines of functional Scala code, include lazy mergesort, Okasaki's real-time queue and deque data structures that rely on aliasing of references to first-class functions; lazy data structures based on numerical representations such as the conqueue data structure of Scala's data-parallel library, cyclic streams, as well as dynamic programming algorithms such as knapsack and Viterbi. Our evaluations show that, when averaged over all benchmarks, the actual runtime resource consumption is 80% of the value inferred by our tool when estimating the number of evaluation steps, and is 88% for the number of heap-allocated objects.
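
    To make the template notation concrete, here is a hedged, self-contained Scala illustration of what a bound of the shape steps ≤ ? * size(l) + ? means once the holes are filled; the explicit step counter and the constants below are invented for this example and are not produced by the paper's tool, which infers the hole values statically.

```scala
object StepBounds {
  // Pair a result with the number of evaluation steps spent computing it.
  final case class Instrumented[A](value: A, steps: Long)

  // A list-length function instrumented with an explicit step counter.
  def length[A](l: List[A]): Instrumented[Int] = l match {
    case Nil => Instrumented(0, steps = 1)                    // one step for the base case
    case _ :: xs =>
      val r = length(xs)
      Instrumented(r.value + 1, steps = r.steps + 1)          // one step per element
  }

  def main(args: Array[String]): Unit = {
    val l = List.fill(100)(0)
    val r = length(l)
    // Filled-in template (constants chosen by hand): steps <= 1 * size(l) + 1
    assert(r.steps <= 1L * l.size + 1, "resource contract violated")
    println(s"length = ${r.value}, steps = ${r.steps}")
  }
}
```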

    Subheap-Augmented Garbage Collection

    Automated memory management avoids the tedium and danger of manual techniques. However, as no programmer input is required, no widely available interface exists to permit principled control over sometimes unacceptable performance costs. This dissertation explores the idea that performance-oriented languages should give programmers greater control over where and when the garbage collector (GC) expends effort. We describe an interface and implementation to expose heap partitioning and collection decisions without compromising type safety. We show that our interface allows the programmer to encode a form of reference counting using Hayes' notion of key objects. Preliminary experimental data suggests that our proposed mechanism can avoid high overheads suffered by tracing collectors in some scenarios, especially with tight heaps. However, for other applications, the costs of applying subheaps, in human effort and runtime overheads, remain daunting.
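
    The dissertation's interface is part of a modified runtime; the sketch below is only a hypothetical Scala rendering of what such a programmer-facing API could look like (Subheap, SubheapRuntime and runPhase are invented names, and the stub implementation merely logs), meant to make the idea of programmer-directed heap partitioning concrete.

```scala
object SubheapSketch {
  // Hypothetical interface: no JVM exposes this; a real implementation would
  // live in the runtime, not in library code.
  trait Subheap {
    def activate(): Unit     // subsequent allocations land in this subheap
    def deactivate(): Unit   // revert to the default heap
    def collect(): Unit      // ask the GC to spend effort only on this subheap
  }
  trait SubheapRuntime { def create(name: String): Subheap }

  // No-op stand-in so the sketch runs.
  object LoggingRuntime extends SubheapRuntime {
    def create(name: String): Subheap = new Subheap {
      def activate(): Unit   = println(s"[$name] now receiving allocations")
      def deactivate(): Unit = println(s"[$name] detached")
      def collect(): Unit    = println(s"[$name] collected independently")
    }
  }

  // Confine a phase's temporary allocations to one subheap and reclaim them
  // eagerly in a single, bounded collection.
  def runPhase(rt: SubheapRuntime)(phase: () => Unit): Unit = {
    val scratch = rt.create("scratch")
    scratch.activate()
    try phase() finally { scratch.deactivate(); scratch.collect() }
  }

  def main(args: Array[String]): Unit =
    runPhase(LoggingRuntime)(() => println("phase work"))
}
```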

    A Concurrent IFDS Dataflow Analysis Algorithm Using Actors

    There has recently been a resurgence of interest in techniques for effective programming of multi-core computers. Most programmers find general-purpose concurrent programming to be extremely difficult. This difficulty severely limits the number of applications that currently benefit from multi-core computers. Many concurrent solutions already exist for the class of regular applications, which includes various algorithms for linear algebra. For the class of irregular applications, which operate on dynamic, pointer- and graph-based structures, efficient concurrent solutions have so far remained elusive. Dataflow analysis applications, which are often found in compilers and program analysis tools, have received particularly little attention with regard to execution on multi-core machines. Operating on the theory that the Actor model, which structures computations as systems of asynchronously-communicating entities, is a more appropriate way of representing irregular algorithms than the shared-memory model, this work presents a concurrent Actor-based formulation of the IFDS (Interprocedural Finite Distributive Subset) dataflow analysis algorithm. The implementation of this algorithm is done using the Scala language and its Actors library. This algorithm achieves significant speedup on multi-core machines without using any optimistic execution. This work contributes to Actor research by showing how the Actor model can be practically applied to a dataflow analysis problem. It contributes to static analysis research by showing how a dataflow analysis algorithm can effectively make use of multi-core machines, opening the possibility of faster and more precise analyses.
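
    The thesis implements the full IFDS algorithm on top of Scala's (since retired) Actors library; the sketch below is not that implementation. It is a hand-rolled, mailbox-per-node illustration (the node names, flow functions and the crude quiescence check are made up for this example) of the underlying idea: each supergraph node is an asynchronously scheduled entity that keeps the set of dataflow facts it has seen, applies a distributive flow function to each new fact, and forwards the results to its successors, so the per-node fact sets converge to the fixpoint without shared-memory locking of the analysis state.

```scala
import java.util.concurrent.atomic.{AtomicBoolean, AtomicLong}
import java.util.concurrent.{ConcurrentHashMap, ConcurrentLinkedQueue, Executors, TimeUnit}
import scala.jdk.CollectionConverters._

// One "actor" per supergraph node: a mailbox of incoming dataflow facts, the
// set of facts already seen, and a distributive flow function whose results
// are forwarded asynchronously to successor nodes.
final class NodeActor(val name: String,
                      flow: String => Set[String],
                      pending: AtomicLong,
                      pool: java.util.concurrent.ExecutorService) {
  @volatile var successors: List[NodeActor] = Nil
  private val mailbox = new ConcurrentLinkedQueue[String]
  private val seen    = ConcurrentHashMap.newKeySet[String]()
  private val running = new AtomicBoolean(false)

  def send(fact: String): Unit = {
    pending.incrementAndGet()          // count in-flight messages for quiescence
    mailbox.add(fact)
    schedule()
  }

  private def schedule(): Unit =
    if (running.compareAndSet(false, true)) pool.execute(() => drain())

  private def drain(): Unit = {
    var fact = mailbox.poll()
    while (fact != null) {
      if (seen.add(fact))              // each fact is processed at most once per node
        for (out <- flow(fact); s <- successors) s.send(out)
      pending.decrementAndGet()
      fact = mailbox.poll()
    }
    running.set(false)
    if (!mailbox.isEmpty) schedule()   // re-arm if a send raced with the reset
  }

  def facts: Set[String] = seen.asScala.toSet
}

object ActorDataflowDemo {
  def main(args: Array[String]): Unit = {
    val pending = new AtomicLong(0)
    val pool    = Executors.newFixedThreadPool(4)

    val id   = (f: String) => Set(f)
    val gen  = (f: String) => if (f == "x") Set(f, "y") else Set(f)        // "a" generates y from x
    val kill = (f: String) => if (f == "y") Set.empty[String] else Set(f)  // "b" kills y

    val entry = new NodeActor("entry", id,   pending, pool)
    val a     = new NodeActor("a",     gen,  pending, pool)
    val b     = new NodeActor("b",     kill, pending, pool)
    val exit  = new NodeActor("exit",  id,   pending, pool)
    entry.successors = List(a)
    a.successors     = List(b)
    b.successors     = List(a, exit)             // back edge: seen-sets give the fixpoint

    entry.send("x")                              // seed fact at the entry node
    while (pending.get() != 0) Thread.sleep(5)   // crude quiescence detection for the demo
    pool.shutdown(); pool.awaitTermination(5, TimeUnit.SECONDS)

    List(entry, a, b, exit).foreach(n => println(s"${n.name}: ${n.facts.toList.sorted}"))
  }
}
```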

    Verifying Resource Bounds of Programs with Lazy Evaluation and Memoization

    We present a new approach for specifying and verifying resource utilization of higher-order functional programs that use lazy evaluation and memoization. In our approach, users can specify the desired resource bound as templates with numerical holes, e.g. as steps ≤ ? * size(l) + ?, in the contracts of functions. They can also express invariants necessary for establishing the bounds that may depend on the state of memoization. Our approach operates in two phases: first generating an instrumented first-order program that accurately models the higher-order control flow and the effects of memoization on resources using sets, algebraic datatypes and mutual recursion, and then verifying the contracts of the first-order program by producing verification conditions of the form ∃∀ using an extended assume/guarantee reasoning. We use our approach to verify precise bounds on resources such as evaluation steps and number of heap-allocated objects on 17 challenging data structures and algorithms. Our benchmarks, comprising 5K lines of functional Scala code, include lazy mergesort, Okasaki's real-time queue and deque data structures that rely on aliasing of references to first-class functions; lazy data structures based on numerical representations such as the conqueue data structure of Scala's data-parallel library, cyclic streams, as well as dynamic programming algorithms such as knapsack and Viterbi. Our evaluations show that, when averaged over all benchmarks, the actual runtime resource consumption is 80% of the value inferred by our tool when estimating the number of evaluation steps, and is 88% for the number of heap-allocated objects.