21 research outputs found

    Transparent fault tolerance for scalable functional computation

    Get PDF
    Reliability is set to become a major concern on emergent large-scale architectures. While there are many parallel languages, and indeed many parallel functional languages, very few address reliability. The notable exception is the widely emulated Erlang distributed actor model, which provides explicit supervision and recovery of actors with isolated state. We investigate scalable, transparent, fault-tolerant functional computation with automatic supervision and recovery of tasks. We do so by developing HdpH-RS, a variant of the Haskell distributed parallel Haskell (HdpH) DSL with Reliable Scheduling. Extending the distributed work-stealing protocol of HdpH for task supervision and recovery is challenging. To eliminate elusive concurrency bugs, we validate the HdpH-RS work-stealing protocol using the SPIN model checker. HdpH-RS differs from the actor model in that its principal entities are tasks, i.e. independent stateless computations, rather than isolated stateful actors. Thanks to statelessness, fault recovery can be performed automatically and entirely hidden in the HdpH-RS runtime system. Statelessness is also key for proving a crucial property of the semantics of HdpH-RS: fault recovery does not change the result of the program, akin to deterministic parallelism. HdpH-RS provides a simple distributed fork/join-style programming model, with minimal exposure of fault tolerance at the language level, and a library of higher-level abstractions such as algorithmic skeletons. In fact, the HdpH-RS DSL is exactly the same as the HdpH DSL, so users can opt in or out of fault-tolerant execution without any refactoring. Computations in HdpH-RS are always as reliable as the root node, no matter how many nodes and cores are actually used. We benchmark HdpH-RS on conventional clusters and an HPC platform: all benchmarks survive Chaos Monkey random fault injection; the system scales well, e.g. up to 1,400 cores on the HPC platform; and reliability and recovery overheads are consistently low even at scale.
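    The fork/join-with-futures model the abstract describes can be sketched in Python (this is only an analogy, not the actual Haskell HdpH-RS API): tasks are stateless, so a task lost to a node failure could simply be re-executed without changing the program's result.

```python
# Sketch of a fork/join future model: spawn forks a stateless task and
# returns a future; result() joins on it. Names here (spawn, fib) are
# illustrative, not HdpH-RS primitives.
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    # A stateless task: the same input always yields the same output,
    # so re-running it after a fault cannot change the program's result.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def spawn(pool, fn, *args):
    """Fork: schedule a task, returning a future for its result."""
    return pool.submit(fn, *args)

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [spawn(pool, fib, n) for n in range(10)]
    results = [f.result() for f in futures]   # join: block on each future
```

    Determinism of the joined results, regardless of scheduling or recovery, is the property the paper proves for HdpH-RS semantics.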

    Giant right coronary artery aneurysm presenting with non-ST elevation myocardial infarction and severe mitral regurgitation: a case report

    Get PDF
    Introduction: Coronary artery aneurysms are seen in 1.5-5% of patients presenting for coronary angiography, but giant aneurysms, defined as being greater than 2 cm in diameter, are rare. Given the paucity of cases and limited experience in diagnosis and management of the disease, each case is a learning tool in itself.
    Case presentation: We report the rare case of a 78-year-old Caucasian man who presented to a peripheral emergency department with chest pain and was subsequently found to have a giant right coronary artery aneurysm. Following initial investigation and treatment he was referred to our hospital for definitive management.
    Conclusion: The case described illustrates one of the varied presentations and subsequent management of an ill-defined and heterogeneous disease process. Given the limited experience with giant aneurysms in the coronary circulation, this case provides valuable insight into the clinical presentation of the disease and gives an example of the management of the most recent such case at our hospital.

    JIT-Based cost analysis for dynamic program transformations

    Get PDF
    Tracing JIT compilation generates units of compilation that are easy to analyse and are known to execute frequently. The AJITPar project investigates whether the information in JIT traces can be used to dynamically transform programs for a specific parallel architecture. Hence a lightweight cost model is required for JIT traces. This paper presents the design and implementation of a system for extracting JIT trace information from the Pycket JIT compiler. We define three increasingly parametric cost models for Pycket traces, determine the best weights for the cost model parameters using linear regression, and evaluate the effectiveness of the cost models for predicting the relative costs of transformed programs.
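    The shape of such a linear trace cost model can be sketched as follows (the operation categories and weights below are hypothetical, not Pycket's actual trace-operation classes or the paper's fitted values): each trace's cost is a weighted sum of its operation counts, and the ratio of costs predicts whether a transformation pays off.

```python
# Hypothetical weights for hypothetical operation categories; in the
# paper these weights are fitted to benchmark timings by linear regression.
WEIGHTS = {"guard": 2.0, "arith": 1.0, "alloc": 5.0, "call": 3.0}

def trace_cost(op_counts, weights=WEIGHTS):
    """Cost of one JIT trace: weighted sum over operation-category counts."""
    return sum(weights.get(op, 0.0) * n for op, n in op_counts.items())

# Illustrative operation counts for a trace before and after a transformation.
original    = {"guard": 10, "arith": 40, "alloc": 2, "call": 5}
transformed = {"guard": 12, "arith": 40, "alloc": 1, "call": 5}

# Relative cost: < 1.0 suggests the transformed trace is cheaper.
ratio = trace_cost(transformed) / trace_cost(original)
```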

    A Possible Alignment Between the Orbits of Planetary Systems and their Visual Binary Companions

    Get PDF
    Astronomers do not have a complete picture of the effects of wide-binary companions (semimajor axes greater than 100 au) on the formation and evolution of exoplanets. We investigate these effects using new data from Gaia Early Data Release 3 and the Transiting Exoplanet Survey Satellite mission to characterize wide-binary systems with transiting exoplanets. We identify a sample of 67 systems of transiting exoplanet candidates (with well-determined, edge-on orbital inclinations) that reside in wide visual binary systems. We derive limits on orbital parameters for the wide-binary systems and measure the minimum difference in orbital inclination between the binary and planet orbits. We determine that there is a statistically significant difference in the inclination distribution of wide-binary systems with transiting planets compared to a control sample, with the probability that the two distributions are the same being 0.0037. This implies that there is an overabundance of planets in binary systems whose orbits are aligned with those of the binary. The overabundance of aligned systems appears to be concentrated at binary semimajor axes less than 700 au. We investigate some effects that could cause the alignment and conclude that a torque exerted on the protoplanetary disk by a misaligned binary companion is the most promising explanation.
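    The "minimum difference in orbital inclination" the abstract refers to can be illustrated with a short calculation (the values below are made up, not the paper's data): a transiting planet's inclination is close to 90 degrees, and with the binary's node longitude unknown, the absolute difference of the two inclinations is a lower bound on the true mutual inclination.

```python
# Lower bound on planet-binary mutual inclination, in degrees.
# i_planet defaults to 90 because transiting planets are seen edge-on.
def min_mutual_inclination(i_binary_deg, i_planet_deg=90.0):
    d = abs(i_binary_deg - i_planet_deg) % 360.0
    return min(d, 360.0 - d)   # angles wrap around, take the short way

# Hypothetical examples: a near-edge-on binary gives a small lower bound
# (consistent with alignment); an inclined binary gives a large one.
aligned    = min_mutual_inclination(85.0)
misaligned = min_mutual_inclination(30.0)
```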

    Pretty Printing with Lazy Dequeues

    No full text
    There are several Haskell libraries for converting tree-structured data into indented text, but they all make use of some backtracking. Over twenty years ago Oppen published a more efficient imperative implementation of a pretty printer without backtracking. We show that the same efficiency is also obtainable without destructive updates by developing a similar but purely functional Haskell implementation with the same complexity bounds. At its heart lie two lazy double-ended queues.
    1 Pretty Printing
    Pretty printing is the task of converting tree-structured data into text, such that the indentation of lines reflects the tree structure. Furthermore, to minimise the number of lines of the text, substructures are put on a single line as far as possible within a given line-width limit. Here is the result of pretty printing an expression within a width of 35 characters: if True then if True then True else True else False
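    The pretty-printing task itself can be sketched in a few lines of Python (a naive illustration of the problem, not the paper's dequeue-based algorithm): a tree node is laid out on one line if it fits within the width, and otherwise broken with indentation reflecting the tree structure.

```python
def flat(node):
    """One-line layout of a (label, children) tree."""
    label, children = node
    return " ".join([label] + [flat(c) for c in children])

def pretty(node, width, indent=0):
    """Naive layout: flat() is recomputed at every level, which is
    precisely the backtracking-style cost the paper's algorithm avoids."""
    one_line = flat(node)
    if indent + len(one_line) <= width:
        return " " * indent + one_line        # fits: single line
    label, children = node                    # doesn't fit: break and indent
    return "\n".join([" " * indent + label] +
                     [pretty(c, width, indent + 2) for c in children])

expr = ("f", [("g", [("x", []), ("y", [])]), ("z", [])])
wide   = pretty(expr, 35)   # everything fits on one line
narrow = pretty(expr, 8)    # outer node breaks, inner subtree still fits
```

    The repeated width measurement in `pretty` is what makes naive implementations inefficient; the paper achieves the same output in better complexity bounds using two lazy double-ended queues.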

    Final report on the GRASP project

    No full text
    This is the final report on the GRASP project, carried out under SERC grants GR/F34671 and GR/F98444 at Glasgow University. The project supported two Principal Investigators (Simon Peyton Jones and Phil Wadler) and three Research Assistants (Kevin Hammond, Cordelia Hall and Will Partain). Four research students have worked in close association with the project.
    1 Summary
    The purpose of GRASP was to help get the technology of functional programming out of the lab and into the hands of practitioners, by producing robust and usable compilers and profilers for these languages, on both sequential and parallel systems. The main achievements of the project are as follows:
    - We have played a key role in the development of a common international non-strict functional language, Haskell (Hudak et al. [1992]). This standardisation effort has led directly to a considerable focussing of the international research community in these languages. More details about Haskell are given in Sect..

    Epidural anaesthesia and analgesia in major surgery

    No full text
    John Rigg, Konrad Jamrozik, Paul Myles, Brendan Silbert and Phil Peyton

    Parallelising a Large Functional Program, Or: Keeping LOLITA Busy

    No full text
    Abstract. In this paper we report on the ongoing parallelisation of LOLITA, a natural language engineering system. Although LOLITA currently exhibits only modest parallelism, we believe that it is the largest parallel functional program ever, comprising more than 47,000 lines of Haskell. LOLITA has the following interesting features common to real-world applications of lazy languages:
    - the code was not specifically designed for parallelism;
    - laziness is essential for efficiency in LOLITA;
    - LOLITA interfaces to data structures outside the Haskell heap, using
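    Why laziness matters for efficiency in a system like LOLITA can be illustrated with Python generators as a stand-in for Haskell's lazy evaluation (the `parses` function and its numbering are hypothetical, purely for illustration): a pipeline ranges over an enormous space of candidate analyses, but only the demanded prefix is ever computed.

```python
import itertools

def parses(sentence):
    """Hypothetical: lazily yield candidate analyses of a sentence."""
    for n in itertools.count():      # unbounded search space of analyses
        yield (sentence, n)          # each one produced only on demand

# Only the first three analyses are ever computed, however large the
# space of candidates is; an eager version would never terminate here.
best = list(itertools.islice(parses("the cat sat"), 3))
```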