
    PAEAN: portable and scalable runtime support for parallel Haskell dialects

    Over time, several competing approaches to parallel Haskell programming have emerged. Different approaches support parallelism at various scales, ranging from small multicores to massively parallel high-performance computing systems. They also provide varying degrees of control, ranging from completely implicit approaches to ones providing full programmer control. Most current designs assume a shared memory model at the programmer, implementation and hardware levels. This is, however, becoming increasingly divorced from the reality at the hardware level. It also imposes significant unwanted runtime overheads, such as garbage collection synchronisation. What is needed is an easy way to abstract over the implementation and hardware levels, while presenting a simple parallelism model to the programmer. The PArallEl shAred Nothing (PAEAN) runtime system design aims to provide a portable and high-level shared-nothing implementation platform for parallel Haskell dialects. It abstracts over major issues such as work distribution and data serialisation, consolidating existing, successful designs into a single framework. It also provides an optional virtual shared-memory programming abstraction for (possibly) shared-nothing parallel machines, such as modern multicore/manycore architectures or cluster/cloud computing systems. It builds on, unifies and extends existing, well-developed support for shared-memory parallelism provided by the widely used GHC Haskell compiler. This paper summarises the state of the art in shared-nothing parallel Haskell implementations, introduces the PArallEl shAred Nothing abstractions, shows how they can be used to implement three distinct parallel Haskell dialects, and demonstrates that good scalability can be obtained on recent parallel machines.
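
    For context, the existing GHC shared-memory support that the design above builds on is exposed through spark-based evaluation strategies. The snippet below is a minimal sketch of that GHC-level interface (Control.Parallel.Strategies), not of the PAEAN abstractions themselves.

        import Control.Parallel.Strategies (runEval, rpar, rseq)

        -- Evaluate f x as a spark (potentially on another core) while
        -- evaluating f y on the current capability, then wait for both.
        parPair :: (a -> b) -> a -> a -> (b, b)
        parPair f x y = runEval $ do
          fx <- rpar (f x)
          fy <- rseq (f y)
          _  <- rseq fx
          return (fx, fy)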

    The HdpH DSLs for scalable reliable computation

    The statelessness of functional computations facilitates both parallelism and fault recovery. Faults and non-uniform communication topologies are key challenges for emergent large scale parallel architectures. We report on HdpH and HdpH-RS, a pair of Haskell DSLs designed to address these challenges for irregular task-parallel computations on large distributed-memory architectures. Both DSLs share an API combining explicit task placement with sophisticated work stealing. HdpH focuses on scalability by making placement and stealing topology aware, whereas HdpH-RS delivers reliability by means of fault-tolerant work stealing. We present operational semantics for both DSLs and investigate conditions for semantic equivalence of HdpH and HdpH-RS programs, that is, conditions under which topology awareness can be transparently traded for fault tolerance. We detail how the DSL implementations realise topology awareness and fault tolerance. We report an initial evaluation of scalability and fault tolerance on a 256-core cluster and on up to 32K cores of an HPC platform.
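
    The distinction between explicit task placement and work stealing can be pictured with a toy scheduler. The types and names below are purely illustrative and do not reproduce the actual HdpH/HdpH-RS API.

        import qualified Data.Map.Strict as Map
        import           Data.Map.Strict (Map)

        type Node = Int

        data Placement = Stealable      -- left in a pool for idle nodes to steal
                       | PlacedOn Node  -- eagerly pushed to a specific node

        data Task = Task { taskName :: String, taskPlacement :: Placement }

        -- Toy scheduler: placed tasks go where they were pushed; stealable
        -- tasks are handed round-robin to (notionally idle) nodes.
        schedule :: [Node] -> [Task] -> Map Node [String]
        schedule nodes tasks = foldr assign Map.empty (zip tasks (cycle nodes))
          where
            assign (Task name (PlacedOn n), _)    m = Map.insertWith (++) n    [name] m
            assign (Task name Stealable,    idle) m = Map.insertWith (++) idle [name] m

        main :: IO ()
        main = print $ schedule [0, 1, 2]
          [ Task "bigStep" (PlacedOn 2)  -- topology-aware explicit placement
          , Task "smallA"  Stealable     -- left to the work-stealing layer
          , Task "smallB"  Stealable
          ]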

    On the performance and programming of reversible molecular computers

    If the 20th century was known for the computational revolution, what will the 21st be known for? Perhaps the recent strides in the nascent fields of molecular programming and biological computation will help bring about the ‘Coming Era of Nanotechnology’ promised in Drexler’s ‘Engines of Creation’. Though there is still far to go, there is much reason for optimism. This thesis examines the underlying principles needed to realise the computational aspects of such ‘engines’ in a performant way. Its main body focusses on the ways in which thermodynamics constrains the operation and design of such systems, and it ends with the proposal of a model of computation appropriate for exploiting these constraints. These thermodynamic constraints are approached from three different directions. The first considers the maximum possible aggregate performance of a system of computers of given volume, V, with a given supply of free energy. From this perspective, reversible computing is imperative in order to circumvent the Landauer limit. A result of Frank is refined and strengthened, showing that the performance of adiabatic-regime reversible computers is the best possible for any computer, quantum or classical. This establishes a universal scaling law of ~V^(5/6) for the performance of compact computers, compared to ~V^(2/3) for conventional computers. For the case of molecular computers, it is shown how to attain this bound. The second direction extends this performance analysis to the case where individual computational particles or sub-units can interact with one another. The third extends it to interactions with shared, non-computational parts of the system. It is found that accommodating these interactions in molecular computers imposes a performance penalty that undermines the earlier scaling result. Nonetheless, scaling superior to that of irreversible computers can be preserved, and appropriate mitigations and considerations are discussed. These analyses are framed in a context of molecular computation, but where possible more general computational systems are considered. The proposed model, the א-calculus, is appropriate for programming reversible molecular computers while taking these constraints into account. A variety of examples and mathematical analyses accompany it. Moreover, abstract sketches of potential molecular implementations are provided. Developing these into viable schemes suitable for experimental validation will be a focus of future work.
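
    For orientation, a back-of-the-envelope version of the heat-dissipation argument behind the two quoted exponents runs roughly as follows, with N ~ V devices each stepping at frequency f, aggregate rate R = Nf, per-operation dissipation ε, and total dissipated power P limited by the surface area ~V^(2/3). This is only a sketch of the standard Frank-style reasoning under those assumptions; the thesis gives the precise statements.

        \begin{align*}
          &\text{Heat must leave through the surface:}\quad P \;\lesssim\; V^{2/3}, \qquad R = N f,\ \ N \sim V.\\
          &\text{Irreversible (Landauer): } \varepsilon \ge kT\ln 2 \ \text{per op}
            \;\Rightarrow\; P \sim N f \;\Rightarrow\; R \;\lesssim\; V^{2/3}.\\
          &\text{Adiabatic reversible: } \varepsilon \propto f
            \;\Rightarrow\; P \sim N f^{2} \lesssim V^{2/3}
            \;\Rightarrow\; f \lesssim V^{-1/6},\quad R \sim V\cdot V^{-1/6} = V^{5/6}.
        \end{align*}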

    Abstraction for web programming

    This thesis considers several instances of abstraction that arose in the design and implementation of the web programming language Links. The first concerns user interfaces, specified using HTML forms. We wish to construct forms from existing form fragments without introducing dependencies on the implementation details of those fragments. Surprisingly, many existing web systems do not support this simple scenario. We present a library which captures the essence of form abstraction, and extend it with more practical facilities, such as validation of the HTML a program produces and of the input a user submits. An important part of our library is a simple semantics, given as the composition of three primitive “idioms”, an interface to computation introduced by McBride and Paterson. In order to justify this approach we present a comparison of idioms with the related notions of monads and arrows, refining the informal claims in the literature. Our library forms part of the Links framework for stateless web interactions. We describe a related aspect of this system, a preprocessor that derives generic instances of functions, which we use to serialise server state between client requests. The abstraction in this case involves the shape of datatypes: the serialisation operation is specified independently of the particular types involved. Our final instance of abstraction involves abstract types. Functional programming languages typically offer one of two styles of abstract type: the abstraction boundary may be drawn using a private data constructor, or using a type signature. We show that there is a pair of semantics-preserving translations between these two styles. In the light of this, we revisit the decision of the Haskell designers to offer the constructor style, and define a library that supports signature-style definitions in Haskell by translation into the constructor style.
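
    The form-abstraction idea is easiest to see in code. The sketch below is a minimal, hypothetical formlet-style library in the spirit of the one described above, built as an applicative functor (“idiom”); the names and representation are illustrative and are not the Links library’s actual code.

        type Env = [(String, String)]

        -- A form both renders HTML and parses the submitted environment;
        -- the Int counter generates fresh field names, so fragments compose
        -- without clashing over names.
        newtype Form a = Form { runForm :: Int -> (String, Env -> Maybe a, Int) }

        instance Functor Form where
          fmap f (Form g) = Form $ \n ->
            let (html, collect, n') = g n in (html, fmap f . collect, n')

        instance Applicative Form where
          pure x = Form $ \n -> ("", const (Just x), n)
          Form gf <*> Form gx = Form $ \n ->
            let (h1, cf, n1) = gf n
                (h2, cx, n2) = gx n1
            in (h1 ++ h2, \env -> cf env <*> cx env, n2)

        -- A text input whose field name is generated freshly from the counter.
        input :: Form String
        input = Form $ \n ->
          let name = "f" ++ show n
          in ("<input name=\"" ++ name ++ "\"/>", lookup name, n + 1)

        -- Two independent fragments composed without knowing each other's names.
        data Date = Date { month :: String, day :: String } deriving Show

        dateForm :: Form Date
        dateForm = Date <$> input <*> input

        main :: IO ()
        main = do
          let (html, collect, _) = runForm dateForm 0
          putStrLn html                                   -- rendered fragment
          print (collect [("f0", "June"), ("f1", "21")])  -- parsed submission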

    S-Net for multi-memory multicores

    S-Net is a declarative coordination language and component technology aimed at modern multi-core/many-core architectures and systems-on-chip. It builds on the concept of stream processing to structure dynamically evolving networks of communicating asynchronous components. Components themselves are implemented using a conventional language suitable for the application domain. This two-level software architecture maintains a familiar sequential development environment for large parts of an application and offers a high-level declarative approach to component coordination. In this paper we present a conservative language extension for the placement of components and component networks in a multi-memory environment, i.e. architectures that associate individual compute cores or groups thereof with private memories. We describe a novel distributed runtime system layer that complements our existing multithreaded runtime system for shared-memory multicores. Particular emphasis is put on efficient management of data communication. Last but not least, we present preliminary experimental data.
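
    The two-level architecture can be caricatured in a few lines of Haskell. The model below is purely illustrative (it is not S-Net syntax) and treats placement as a mere annotation, whereas the real runtime system ships components between memories and manages data communication.

        -- An asynchronous component, modelled as a lazy stream transformer.
        type Component a b = [a] -> [b]

        -- A compute core (or group) with its own private memory.
        newtype MemoryNode = NodeId Int

        -- Coordination layer: serial composition plus a placement annotation.
        compose :: Component a b -> Component b c -> Component a c
        compose f g = g . f

        placeOn :: MemoryNode -> Component a b -> Component a b
        placeOn _ c = c   -- in this toy model, placement is only an annotation

        -- Example network: square each item on one node, filter on another.
        network :: Component Int Int
        network = placeOn (NodeId 0) (map (^ 2)) `compose` placeOn (NodeId 1) (filter even)

        main :: IO ()
        main = print (network [1 .. 5])   -- ==> [4,16]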

    Generic programming with C++ concepts and Haskell type classes—a comparison

    Earlier studies have introduced a list of high-level evaluation criteria to assess how well a language supports generic programming. Languages that meet all criteria include Haskell, because of its type classes, and C++ with the concept feature. We refine these criteria into a taxonomy that captures commonalities and differences between type classes in Haskell and concepts in C++, and discuss which differences are incidental and which ones are due to other language features. The taxonomy allows for an improved understanding of language support for generic programming, and the comparison is useful for the ongoing discussions among language designers and users of both languages.
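
    For reference, the Haskell side of the comparison looks like this: a generic algorithm is written once against a type class constraint and reused at any instance. This is a standalone illustration, not an example taken from the paper.

        -- A user-defined class describing what generic code may assume.
        class Container f where
          empty   :: f a
          insert  :: a -> f a -> f a
          toListC :: f a -> [a]

        -- Lists satisfy the constraint.
        instance Container [] where
          empty   = []
          insert  = (:)
          toListC = id

        -- A generic algorithm, written once, constrained only by the class.
        fromList :: Container f => [a] -> f a
        fromList = foldr insert empty

        main :: IO ()
        main = print (toListC (fromList "abc" :: [Char]))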

    A review of the state of the art in Machine Learning on the Semantic Web: Technical Report CSTR-05-003
