Chunks and Tasks: a programming model for parallelization of dynamic algorithms
We propose Chunks and Tasks, a parallel programming model built on
abstractions for both data and work. The application programmer specifies how
data and work can be split into smaller pieces, chunks and tasks, respectively.
The Chunks and Tasks library maps the chunks and tasks to physical resources.
In this way we seek to combine user friendliness with high performance. An
application programmer can express a parallel algorithm using a few simple
building blocks, defining data and work objects and their relationships. No
explicit communication calls are needed; the distribution of both work and data
is handled by the Chunks and Tasks library. This simplifies the efficient
implementation of complex applications that require dynamic distribution of
work and data. At the same time, Chunks and Tasks imposes restrictions on data
access and task dependencies that facilitate the development of
high-performance parallel back ends. We discuss the fundamental abstractions
underlying the
programming model, as well as performance and fault resilience considerations.
We also present a pilot C++ library implementation for clusters of multicore
machines and demonstrate its performance for sparse blocked matrix-matrix
multiplication.

Comment: This manuscript was submitted to Parallel Computing (Elsevier) for
the special issue devoted to the conference Parallel Matrix Algorithms and
Applications (PMAA 2012). A presentation of this work was given at PMAA 2012
on June 29, 2012.
Fault-Tolerant Communication Runtime Support for Data-Centric Programming Models
Abstract—The largest supercomputers in the world today consist of hundreds of thousands of processing cores and many other hardware components. At such scales, hardware faults are commonplace, necessitating fault-resilient software systems. While different fault-resilient models are available, most focus on allowing the computational processes to survive faults. On the other hand, we have recently started investigating fault resilience techniques for data-centric programming models such as the partitioned global address space (PGAS) models. The primary difference in data-centric models is the decoupling of computation and data locality. That is, data placement is decoupled from the executing processes, allowing us to view process failure (a physical node hosting a process is dead) separately from data failure (a physical node hosting data is dead). In this paper, we take a first step toward data-centric fault resilience by designing and implementing a fault-resilient, one-sided communication runtime framework using Global Arrays and its communication system, ARMCI. The framework consists of a fault-resilient process manager; a low-overhead, network-assisted remote-node fault detection module; non-data-moving collective communication primitives; and failure semantics and error codes for one-sided communication runtime systems. Our performance evaluation indicates that the framework incurs little overhead compared to state-of-the-art designs and provides a fundamental framework of fault resiliency for PGAS models.