
    Adaptive Lock-Free Data Structures in Haskell: A General Method for Concurrent Implementation Swapping

    A key part of implementing high-level languages is providing built-in and default data structures. Yet selecting good defaults is hard. A mutable data structure's workload is not known in advance, and it may shift over its lifetime - e.g., between read-heavy and write-heavy, or from heavy contention by multiple threads to single-threaded or low-frequency use. One idea is to switch implementations adaptively, but it is nontrivial to switch the implementation of a concurrent data structure at runtime. Performing the transition requires a concurrent snapshot of data structure contents, which normally demands special engineering in the data structure's design. However, in this paper we identify and formalize a relevant property of lock-free algorithms. Namely, lock-freedom is sufficient to guarantee that freezing memory locations in an arbitrary order will result in a valid snapshot. Several functional languages have data structures that freeze and thaw, transitioning between mutable and immutable, such as Haskell vectors and Clojure transients, but these enable only single-threaded writers. We generalize this approach to augment an arbitrary lock-free data structure with the ability to gradually freeze and optionally transition to a new representation. This augmentation doesn't require changing the algorithm or code for the data structure, only replacing its datatype for mutable references with a freezable variant. In this paper, we present an algorithm for lifting a plain data structure to an adaptive one, and prove that the resulting hybrid data structure is itself lock-free, linearizable, and simulates the original. We also perform an empirical case study in the context of heating up and cooling down concurrent maps.

    Comment: To be published in ACM SIGPLAN Haskell Symposium 201
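
    A minimal Haskell sketch of the freezable-reference idea described above (the names FreezableRef, writeFreezableRef, and freezeRef are illustrative, not the paper's library): every write goes through an atomic operation that fails permanently once the cell is frozen, so cells can be frozen in an arbitrary order while lock-free readers and writers are still running.

        import Data.IORef

        -- One mutable cell that can be frozen; illustrative names, assumed API.
        data Slot a = Val a | Frozen a

        newtype FreezableRef a = FreezableRef (IORef (Slot a))

        newFreezableRef :: a -> IO (FreezableRef a)
        newFreezableRef x = FreezableRef <$> newIORef (Val x)

        -- Attempt a write; returns False once the cell is frozen, telling the
        -- caller that the structure is mid-transition.
        writeFreezableRef :: FreezableRef a -> a -> IO Bool
        writeFreezableRef (FreezableRef r) x =
          atomicModifyIORef' r $ \s -> case s of
            Val _    -> (Val x, True)
            Frozen v -> (Frozen v, False)

        -- Freeze the cell at its current value; idempotent, so an arbitrary
        -- freezing order across cells is safe.
        freezeRef :: FreezableRef a -> IO a
        freezeRef (FreezableRef r) =
          atomicModifyIORef' r $ \s -> case s of
            Val v    -> (Frozen v, v)
            Frozen v -> (Frozen v, v)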

    Conjoined Events

    Many existing synchronous message-passing systems support choice: engaging in one event XOR another. This paper introduces the AND operator that allows a process to engage in multiple events together (one AND one more AND another; all conjoined), engaging in each event only if it can atomically engage in all the conjoined events. We demonstrate using several examples that this operator supports new, more flexible models of programming. We show that the AND operator allows the behaviour of processes to be expressed in local rules rather than system-wide constructs. We give an optimised implementation of the AND operator and explore the performance effect on standard communications of supporting this new operator.
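
    Software transactional memory gives a compact way to sketch the semantics of conjunction, though this is only an analogy, not the paper's optimised channel implementation. Here an event is any STM action that retries until it can fire; orElse gives the familiar choice, and applicative composition gives the AND: both events commit atomically, or neither does.

        import Control.Concurrent.STM

        -- An "event" is any STM action that blocks (retries) until it can fire.
        type Event a = STM a

        -- Choice (the usual XOR): commit to whichever event can fire.
        choose :: Event a -> Event a -> Event a
        choose = orElse

        -- Conjunction (the AND of the paper): fire both atomically, or neither.
        conjoin :: Event a -> Event b -> Event (a, b)
        conjoin ea eb = (,) <$> ea <*> eb

    For instance, atomically (conjoin (takeTMVar leftFork) (takeTMVar rightFork)) acquires both forks of a dining philosopher together or not at all - a local rule rather than a system-wide construct.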

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.

    Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming"
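
    As a loose analogue of the search-problem style described above, here is a small Haskell sketch in the list monad (Oz itself uses first-class computation spaces; nothing below is Oz code): each bind introduces a choice point, and guard prunes branches that violate the constraint.

        import Control.Monad (guard)

        -- A toy search problem: Pythagorean triples up to a bound.
        pythagorean :: Int -> [(Int, Int, Int)]
        pythagorean n = do
          a <- [1 .. n]               -- choice point
          b <- [a .. n]               -- symmetry breaking: a <= b
          c <- [b .. n]
          guard (a*a + b*b == c*c)    -- constraint: prune failing branches
          pure (a, b, c)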

    Benchmarking implementations of functional languages with ‘Pseudoknot’, a float-intensive benchmark

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important consideration is how the program can be modified and tuned to obtain maximal performance on each language implementation. With few exceptions, the compilers take a significant amount of time to compile this program, though most compilers were faster than the then current GNU C compiler (GCC version 2.5.8). Compilers that generate C or Lisp are often slower than those that generate native code directly: the cost of compiling the intermediate form is normally a large fraction of the total compilation time. There is no clear distinction between the runtime performance of eager and lazy implementations when appropriate annotations are used: lazy implementations have clearly come of age when it comes to implementing largely strict applications, such as the Pseudoknot program. The speed of C can be approached by some implementations, but to achieve this performance, special measures such as strictness annotations are required by non-strict implementations. The benchmark results have to be interpreted with care. Firstly, a benchmark based on a single program cannot cover a wide spectrum of ‘typical’ applications. Secondly, the compilers vary in the kind and level of optimisations offered, so the effort required to obtain an optimal version of the program is similarly varied.
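
    The strictness annotations mentioned above look roughly like the following GHC sketch (illustrative, not Pseudoknot code): without the bang pattern on the accumulator, a lazy implementation builds a chain of thunks; with it, the compiler can produce a tight loop over unboxed doubles.

        {-# LANGUAGE BangPatterns #-}

        -- A float-intensive loop with a strictness annotation on the
        -- accumulator, forcing it at each step instead of building thunks.
        sumSquares :: Int -> Double
        sumSquares n = go 0 1
          where
            go :: Double -> Int -> Double
            go !acc i
              | i > n     = acc
              | otherwise = go (acc + x * x) (i + 1)
              where x = fromIntegral i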

    Adequacy of compositional translations for observational semantics

    We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
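
    Spelled out (the notation below is ours, not the paper's), adequacy is the reflection of observational equivalence by a compositional translation \tau from a language L_1 to a language L_2:

        \[ \forall e_1, e_2 \in L_1.\;\; \tau(e_1) \simeq_{L_2} \tau(e_2) \implies e_1 \simeq_{L_1} e_2 \]

    where \simeq identifies programs with the same may- and must-convergence behaviour in all contexts.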

    Extending monads with pattern matching

    Sequencing of effectful computations can be neatly captured using monads and elegantly written using do notation. In practice such monads often allow additional ways of composing computations, which have to be written explicitly using combinators. We identify joinads, an abstract notion of computation that is stronger than monads and captures many such ad-hoc extensions. In particular, joinads are monads with three additional operations: one of type m a → m b → m (a, b) that captures various forms of parallel composition, one of type m a → m a → m a that is inspired by choice, and one of type m a → m (m a) that captures aliasing of computations. Algebraically, the first two operations form a near-semiring with commutative multiplication. We introduce docase notation that can be viewed as a monadic version of case. Joinad laws imply various syntactic equivalences of programs written using docase that are analogous to equivalences about case. Examples of joinads that benefit from the notation include speculative parallelism, waiting for a combination of user interface events, and encoding validation rules using the intersection of parsers.
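
    The three extra operations translate directly into a Haskell type class; the class and method names below are illustrative, not the paper's exact encoding. Maybe gives a small instance: parallel composition succeeds only when both computations do, choice is left-biased, and aliasing is trivial.

        -- Joinads as monads with three extra operations (illustrative names).
        class Monad m => Joinad m where
          mzip    :: m a -> m b -> m (a, b)  -- parallel composition
          mchoose :: m a -> m a -> m a       -- choice
          malias  :: m a -> m (m a)          -- aliasing of computations

        -- A simple instance: both-or-nothing zipping, left-biased choice,
        -- and trivial aliasing.
        instance Joinad Maybe where
          mzip (Just a) (Just b) = Just (a, b)
          mzip _        _        = Nothing
          mchoose (Just a) _     = Just a
          mchoose Nothing  mb    = mb
          malias                 = Just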