GUMSMP: a scalable parallel Haskell implementation
The most widely available high performance platforms today are hierarchical,
with shared memory leaves, e.g. clusters of multi-cores, or NUMA with multiple
regions. The Glasgow Haskell Compiler (GHC) provides a number of parallel
Haskell implementations targeting different parallel architectures. In particular,
GHC-SMP supports shared memory architectures, and GHC-GUM supports
distributed memory machines. Both implementations use different, but related,
runtime system (RTS) mechanisms and achieve good performance. A specialised
RTS for the ubiquitous hierarchical architectures is lacking.
This thesis presents the design, implementation, and evaluation of a new
parallel Haskell RTS, GUMSMP, that combines shared and distributed memory
mechanisms to exploit hierarchical architectures more effectively. The design
explores a range of choices, aiming to efficiently combine scalable
distributed memory parallelism, using a virtual shared heap over the
hierarchical architecture, with low-overhead shared memory parallelism on
shared memory nodes. Key design objectives in realising this system are to
prefer local work and to rely mostly on passive load distribution with
pre-fetching.
Systematic performance evaluation shows that the automatic hierarchical load
distribution policies must be carefully tuned to obtain good performance. We
investigate the impact of several policies including work pre-fetching, favouring
inter-node work distribution, and spark segregation with different export and
select policies. We present the performance results for GUMSMP, demonstrating
good scalability for a set of benchmarks on up to 300 cores. Moreover, our policies
provide performance improvements of up to a factor of 1.5 compared to GHC-
GUM.
The thesis provides a performance evaluation of distributed and shared heap
implementations of parallel Haskell on a state-of-the-art physical shared memory
NUMA machine. The evaluation exposes bottlenecks in memory management,
which limit scalability beyond 25 cores. We demonstrate that GUMSMP, which
combines both distributed and shared heap abstractions, consistently
outperforms the shared memory GHC-SMP on seven benchmarks by a factor of 3.3
on average. Specifically, we show that the best results are obtained when
sharing memory only within a single NUMA region and using distributed memory
abstractions across the regions.
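To give a flavour of the semi-explicit parallelism these runtime systems schedule, the sketch below uses the standard `par` and `pseq` combinators from GHC's `parallel` package. Sparks created by `par` are the units of work that policies such as pre-fetching and spark segregation distribute; the divide-and-conquer Fibonacci shown here is a common textbook illustration, not one of the thesis's benchmarks.

```haskell
-- A minimal sketch, assuming GHC with the `parallel` package installed
-- (compile with `ghc -threaded`, run with `+RTS -N`).
import Control.Parallel (par, pseq)

-- Divide-and-conquer with a sequential cutoff: below the threshold the
-- work stays local (one of the thesis's stated objectives); above it,
-- `par` creates a spark that an idle core, or a remote node under a
-- GUM-style runtime, may pick up.
parFib :: Int -> Integer
parFib n
  | n < 20    = seqFib n
  | otherwise = x `par` (y `pseq` x + y)
  where
    x = parFib (n - 1)
    y = parFib (n - 2)

seqFib :: Int -> Integer
seqFib n
  | n < 2     = fromIntegral n
  | otherwise = seqFib (n - 1) + seqFib (n - 2)

main :: IO ()
main = print (parFib 30)
```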
RELEASE: A High-level Paradigm for Reliable Large-scale Server Software
Erlang is a functional language with a much-emulated model for building reliable distributed systems. This paper outlines the RELEASE project and describes the progress in the first six months. The project aim is to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on massively parallel machines. Currently Erlang has inherently scalable computation and reliability models, but in practice scalability is constrained by aspects of the language and virtual machine. We are working at three levels to address these challenges: evolving the Erlang virtual machine so that it can work effectively on large-scale multicore systems; evolving the language to Scalable Distributed (SD) Erlang; and developing a scalable Erlang infrastructure to integrate multiple, heterogeneous clusters. We are also developing state-of-the-art tools that allow programmers to understand the behaviour of massively parallel SD Erlang programs. We will demonstrate the effectiveness of the RELEASE approach using demonstrators and two large case studies on a Blue Gene.
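RELEASE itself targets Erlang; purely as an illustration, and keeping to Haskell as used elsewhere in this listing, the sketch below mimics the actor-style pattern the abstract describes: isolated lightweight processes that share nothing and interact only by message passing. Erlang's supervision and distribution semantics are not modelled here, and the types and names are ours.

```haskell
-- Illustrative only: an Erlang-style receive loop using GHC's lightweight
-- threads and channels (not RELEASE's code).
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

data Msg = Ping (Chan Msg)  -- carries the sender's reply channel
         | Pong

-- A server process: block on the inbox, handle one message, recurse,
-- much like an Erlang `receive` loop.
server :: Chan Msg -> IO ()
server inbox = do
  msg <- readChan inbox
  case msg of
    Ping replyTo -> writeChan replyTo Pong >> server inbox
    Pong         -> server inbox

main :: IO ()
main = do
  inbox <- newChan
  _ <- forkIO (server inbox)
  self <- newChan
  writeChan inbox (Ping self)   -- send, including our reply address
  reply <- readChan self
  case reply of
    Pong -> putStrLn "got pong"
    _    -> putStrLn "unexpected message"
```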
The End of a Myth: Distributed Transactions Can Scale
The common wisdom is that distributed transactions do not scale. But what if
distributed transactions could be made scalable using the next generation of
networks and a redesign of distributed databases? Developers would no longer
need to worry about co-partitioning schemes to achieve decent performance.
Application development would become easier, as data placement would no
longer determine how scalable an application is. Hardware provisioning would
be simplified, as the system administrator could expect a linear scale-out
when adding more machines, rather than some complex, highly
application-specific sub-linear function.
In this paper, we present the design of our novel scalable database system
NAM-DB and show that distributed transactions with the very common Snapshot
Isolation guarantee can indeed scale using the next generation of RDMA-enabled
network technology without any inherent bottlenecks. Our experiments with the
TPC-C benchmark show that our system scales linearly to over 6.5 million
new-order (14.5 million total) distributed transactions per second on 56
machines.
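The guarantee NAM-DB provides, Snapshot Isolation, has a simple core worth spelling out. The sketch below is our own illustrative Haskell, not NAM-DB's code (names such as `visibleVersion` are hypothetical): a transaction reads the newest version committed at or before its snapshot timestamp, and a write conflicts if any committed version of the same key is newer than that snapshot. NAM-DB's contribution is making timestamp management and version access scale over RDMA, which this sketch does not attempt.

```haskell
-- A minimal sketch of Snapshot Isolation's visibility and conflict rules.
-- Timestamps are plain Ints here; a real system distributes them.
import Data.List (maximumBy)
import Data.Ord (comparing)

data Version v = Version { commitTs :: Int, value :: v }

-- Read rule: the newest version committed at or before the snapshot.
visibleVersion :: Int -> [Version v] -> Maybe v
visibleVersion snap versions =
  case filter ((<= snap) . commitTs) versions of
    [] -> Nothing
    vs -> Just (value (maximumBy (comparing commitTs) vs))

-- Write rule (first committer wins): abort if a newer version committed.
writeConflicts :: Int -> [Version v] -> Bool
writeConflicts snap = any ((> snap) . commitTs)

main :: IO ()
main = do
  let versions = [Version 10 "a", Version 20 "b", Version 30 "c"]
  print (visibleVersion 25 versions)  -- Just "b"
  print (writeConflicts 25 versions)  -- True: the version at ts 30 is newer
```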
Computational Strategies for Scalable Genomics Analysis.
The revolution in next-generation DNA sequencing technologies is leading to explosive data growth in genomics, posing a significant challenge to the computing infrastructure and software algorithms for genomics analysis. Various big data technologies have been explored to scale up/out current bioinformatics solutions to mine big genomics data. In this review, we survey some of these exciting developments in the applications of parallel distributed computing and special hardware to genomics. We comment on the pros and cons of each strategy in the context of ease of development, robustness, scalability, and efficiency. Although this review is written for an audience from the genomics and bioinformatics fields, it may also be informative for a computer science audience with an interest in genomics applications.
Scalable data abstractions for distributed parallel computations
The ability to express a program as a hierarchical composition of parts is an
essential tool in managing the complexity of software and a key abstraction
this provides is to separate the representation of data from the computation.
Many current parallel programming models use a shared memory model to provide
data abstraction, but this does not scale well to large numbers of cores, due
to non-determinism and access latency. This paper proposes a simple
programming model that allows scalable parallel programs to be expressed with
distributed representations of data, giving the programmer the flexibility to
employ shared or distributed styles of data-parallelism where applicable. It
is capable of an efficient implementation and, given a small set of primitive
capabilities in the hardware, it can be compiled to operate directly on the
hardware, in the same way that stack-based allocation operates for
subroutines on sequential machines.
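The separation the paper argues for, a computation written once against a data representation that may be chunked across cores or nodes, can be sketched with GHC's evaluation strategies. This is our own minimal illustration under that reading, not the paper's model.

```haskell
-- A minimal sketch: the computation (`sumOfSquares`) is written against a
-- chunked representation and is oblivious to where the chunks live; the
-- strategy (`parList rdeepseq`) supplies the parallel evaluation.
import Control.Parallel.Strategies (parList, rdeepseq, using)

-- A hypothetical distributed-style representation: data split into chunks.
chunk :: Int -> [a] -> [[a]]
chunk _ [] = []
chunk n xs = take n xs : chunk n (drop n xs)

sumOfSquares :: [[Int]] -> Int
sumOfSquares chunks =
  sum (map (sum . map (^ 2)) chunks `using` parList rdeepseq)

main :: IO ()
main = print (sumOfSquares (chunk 1000 [1 .. 100000]))
```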