Fibers are not (P)Threads: The Case for Loose Coupling of Asynchronous Programming Models and MPI Through Continuations
Asynchronous programming models (APMs) are gaining increasing traction,
allowing applications to expose the available concurrency to a runtime system
tasked with coordinating the execution. While MPI has long provided support for
multi-threaded communication and non-blocking operations, it falls short of
adequately supporting APMs as correctly and efficiently handling MPI
communication in different models is still a challenge. Meanwhile, new
low-level implementations of light-weight, cooperatively scheduled execution
contexts (fibers, aka user-level threads (ULT)) are meant to serve as a basis
for higher-level APMs and their integration in MPI implementations has been
proposed as a replacement for traditional POSIX thread support to alleviate
these challenges.
In this paper, we first establish a taxonomy in an attempt to clearly
distinguish different concepts in the parallel software stack. We argue that
the proposed tight integration of fiber implementations with MPI is neither
warranted nor beneficial and instead is detrimental to the goal of MPI being a
portable communication abstraction. We propose MPI Continuations as an
extension to the MPI standard to provide callback-based notifications on
completed operations, leading to a clear separation of concerns by providing a
loose coupling mechanism between MPI and APMs. We show that this interface is
flexible and interacts well with different APMs, namely OpenMP detached tasks,
OmpSs-2, and Argobots.
Comment: 12 pages, 7 figures. Published in proceedings of EuroMPI/USA '20,
September 21-24, 2020, Austin, TX, US.
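To make the continuation idea concrete: the proposal attaches a user callback to a non-blocking operation, and the callback fires once the operation completes, so the tasking runtime rather than MPI decides what runs next. The plain-Python sketch below illustrates only that control flow; FakeCommRuntime and isend are illustrative stand-ins, not the proposed MPI Continuations API:

```python
from concurrent.futures import ThreadPoolExecutor

completed = []

class FakeCommRuntime:
    """Stand-in for a communication library: runs a 'transfer' asynchronously
    and fires the user-supplied continuation when the operation completes."""
    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=2)

    def isend(self, data, on_complete):
        # Start the transfer without blocking the caller ...
        future = self.pool.submit(lambda: data)
        # ... and register a continuation: the tasking layer, not the
        # communication library, decides what runs on completion -- the
        # loose coupling the paper argues for.
        future.add_done_callback(lambda f: on_complete(f.result()))

rt = FakeCommRuntime()
rt.isend("payload", on_complete=completed.append)
rt.pool.shutdown(wait=True)   # wait for the background transfer to finish
print(completed)              # -> ['payload']
```

The point of the pattern is that the caller never blocks inside the communication library; the continuation is scheduled by whatever tasking layer the application already uses.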
CRAFT: A library for easier application-level Checkpoint/Restart and Automatic Fault Tolerance
In order to efficiently use the future generations of supercomputers, fault
tolerance and power consumption are two of the prime challenges anticipated by
the High Performance Computing (HPC) community. Checkpoint/Restart (CR) has
been and still is the most widely used technique to deal with hard failures.
Application-level CR is the most effective CR technique in terms of overhead
efficiency, but it requires significant implementation effort. This work presents the
implementation of our C++ based library CRAFT (Checkpoint-Restart and Automatic
Fault Tolerance), which serves two purposes. First, it provides an extendable
library that significantly eases the implementation of application-level
checkpointing. The most basic and frequently used checkpoint data types are
already part of CRAFT and can be directly used out of the box. The library can
be easily extended to add more data types. As a means of overhead reduction, the
library offers a built-in asynchronous checkpointing mechanism and also
supports the Scalable Checkpoint/Restart (SCR) library for node level
checkpointing. Second, CRAFT provides an easier interface for User-Level
Failure Mitigation (ULFM) based dynamic process recovery, which significantly
reduces the complexity and effort of implementing failure detection and
communication recovery mechanisms. By utilizing both functionalities together, applications
can write application-level checkpoints and recover dynamically from process
failures with very limited programming effort. This work presents the design
and use of our library in detail. The associated overheads are thoroughly
analyzed using several benchmarks.
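As an illustration of the application-level CR idea (not CRAFT's actual C++ interface), a toy checkpointer only needs to persist the data the application registers and to restart from the last complete checkpoint; all names below are hypothetical:

```python
import os
import pickle
import tempfile

class Checkpointer:
    """Toy application-level checkpoint/restart: the application checkpoints
    only the data it needs to survive a failure -- the reason application-level
    CR has low overhead compared to system-level approaches."""
    def __init__(self, path):
        self.path = path

    def write(self, step, state):
        # Write to a temp file first, then rename: an interrupted checkpoint
        # never clobbers the last good one.
        tmp = self.path + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump({"step": step, "state": state}, f)
        os.replace(tmp, self.path)

    def restart(self):
        if not os.path.exists(self.path):
            return 0, None                      # fresh start
        with open(self.path, "rb") as f:
            cp = pickle.load(f)
        return cp["step"], cp["state"]

ckpt = Checkpointer(os.path.join(tempfile.mkdtemp(), "app.ckpt"))
step, state = ckpt.restart()                    # fresh run: (0, None)
total = state or 0
for step in range(step, 5):
    total += step
    ckpt.write(step + 1, total)                 # checkpoint after every step

step, state = ckpt.restart()                    # as if resuming after a failure
print(step, state)                              # -> 5 10
```

A real library like CRAFT additionally handles distributed data types, asynchronous writes, and integration with node-level checkpointing, which this sketch omits.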
Pipes and Connections
This document describes the low-level Pipe and ConnectionManager objects of the MeshRouter
system. The overall MeshRouter framework provides a general scheme for interest-limited
communications among a number of client processes. This generality is achieved by a carefully
factorized, object-oriented software implementation. Within this framework, the Pipe and
ConnectionManager (base) classes defined in this note specify the interfaces for i) actual
'bits on the wire' communications and ii) dynamic client insertions during overall system
execution. Two specific implementations of the Pipe class are described in detail: a
'MemoryPipe' linking objects instanced on a single processor and a more general 'rtisPipe'
providing inter-processor communications built entirely from the standard RTI-s library used
in current JSAF applications. Initialization procedures within the overall MeshRouter system
are discussed, with particular attention given to dynamic management of inter-processor
connections. Prototype RTI-s router processes are discussed, and simple extensions of the
standard system configuration data files are presented.
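The Pipe/MemoryPipe split described above amounts to an abstract transport interface with an in-process implementation. The Python below is a guess at the shape of that design, not the MeshRouter code itself:

```python
from abc import ABC, abstractmethod
from collections import deque

class Pipe(ABC):
    """Interface for 'bits on the wire': concrete subclasses pick the transport."""
    @abstractmethod
    def send(self, msg): ...
    @abstractmethod
    def recv(self): ...

class MemoryPipe(Pipe):
    """Links objects instanced on a single processor: messages pass through
    an in-memory queue, with no network transport involved."""
    def __init__(self):
        self._queue = deque()

    def send(self, msg):
        self._queue.append(msg)

    def recv(self):
        # Non-blocking receive: None when nothing is pending.
        return self._queue.popleft() if self._queue else None

pipe = MemoryPipe()
pipe.send(b"interest update")
print(pipe.recv())   # -> b'interest update'
```

An 'rtisPipe'-style implementation would satisfy the same interface while moving bytes over the RTI-s library, which is what lets the rest of the framework stay transport-agnostic.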
An efficient MPI/OpenMP parallelization of the Hartree-Fock method for the second generation of Intel Xeon Phi processor
Modern OpenMP threading techniques are used to convert the MPI-only
Hartree-Fock code in the GAMESS program to a hybrid MPI/OpenMP algorithm. Two
separate implementations are considered, differing in whether key data
structures, the density and Fock matrices, are shared or replicated among
threads. All implementations are benchmarked on a supercomputer with 3,000 Intel Xeon Phi
processors. With 64 cores per processor, scaling numbers are reported on up to
192,000 cores. The hybrid MPI/OpenMP implementation reduces the memory
footprint by approximately 200 times compared to the legacy code. The
MPI/OpenMP code was shown to run up to six times faster than the original for a
range of molecular system sizes.
Comment: SC17 conference paper, 12 pages, 7 figures.
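The shared-versus-replicated tradeoff the paper benchmarks can be shown in miniature: one shared Fock-like array updated under a lock versus per-thread private copies reduced at the end. This toy Python (not GAMESS code) demonstrates that both variants produce the same result while differing in memory footprint and synchronization cost:

```python
import threading

N, NTHREADS = 64, 4
# Fake "integral batches": (index, contribution) pairs, split across threads.
contributions = [(i % N, 1.0) for i in range(10_000)]
chunks = [contributions[t::NTHREADS] for t in range(NTHREADS)]

# Shared variant: one array, one lock -> minimal memory, contended updates.
fock_shared = [0.0] * N
lock = threading.Lock()
def work_shared(chunk):
    for idx, val in chunk:
        with lock:
            fock_shared[idx] += val

# Replicated variant: a private array per thread -> no locking, but
# NTHREADS times the memory plus a final reduction.
partials = []
def work_replicated(chunk):
    local = [0.0] * N
    for idx, val in chunk:
        local[idx] += val
    partials.append(local)   # list.append is atomic in CPython

for target in (work_shared, work_replicated):
    threads = [threading.Thread(target=target, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

fock_replicated = [sum(col) for col in zip(*partials)]
print(fock_shared == fock_replicated)   # -> True
```

At 64 cores per processor, the replication factor (and hence the memory pressure the paper reports reducing) grows with the thread count, which is why the choice between these two designs matters on many-core hardware.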
One-Sided Communication for High Performance Computing Applications
Thesis (Ph.D.) - Indiana University, Computer Sciences, 2009
Parallel programming presents a number of critical challenges to application developers. Traditionally, message passing, in which a process explicitly sends data and another explicitly receives the data, has been used to program parallel applications. With the recent growth in multi-core processors, the level of parallelism necessary for next generation machines is cause for concern in the message passing community. The one-sided programming paradigm, in which only one of the two processes involved in communication actively participates in message transfer, has seen increased interest as a potential replacement for message passing.
One-sided communication does not carry the heavy per-message overhead associated with modern message passing libraries. The paradigm offers lower synchronization costs and advanced data manipulation techniques such as remote atomic arithmetic and synchronization operations. These combine to present an appealing interface for applications with random communication patterns, which traditionally present message passing implementations with difficulties.
This thesis presents a taxonomy of both the one-sided paradigm and of applications which are ideal for the one-sided interface. Three case studies, based on real-world applications, are used to motivate both taxonomies and verify the applicability of the MPI one-sided communication and Cray SHMEM one-sided interfaces to real-world problems. While our results show a number of shortcomings with existing implementations, they also suggest that a number of applications could benefit from the one-sided paradigm. Finally, an implementation of the MPI one-sided interface within Open MPI is presented, which provides a number of unique performance features necessary for efficient use of the one-sided programming paradigm.
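For readers unfamiliar with the paradigm, its semantics can be modeled in a few lines: the origin process reads and writes an exposed memory window directly, and the target never posts a matching receive. The Window class below is an illustrative single-process model, not the MPI or SHMEM API:

```python
class Window:
    """Toy model of a one-sided communication window: the origin reads and
    writes the exposed memory directly; the target never posts a receive."""
    def __init__(self, size):
        self.buf = bytearray(size)

    def put(self, offset, data):
        # One-sided write into the target's exposed memory (MPI_Put-like).
        self.buf[offset:offset + len(data)] = data

    def get(self, offset, length):
        # One-sided read from the exposed memory (MPI_Get-like).
        return bytes(self.buf[offset:offset + length])

    def fetch_and_add(self, offset, value):
        # Remote atomic arithmetic, in the spirit of MPI_Fetch_and_op:
        # returns the old value and adds in place.
        old = self.buf[offset]
        self.buf[offset] = (old + value) % 256
        return old

win = Window(16)
win.put(0, b"hi")
print(win.get(0, 2))            # -> b'hi'
print(win.fetch_and_add(8, 5))  # -> 0
```

What this model deliberately omits is exactly where the thesis's difficulties lie: synchronization epochs, memory consistency, and progress rules, all of which a real one-sided implementation must get right.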