Hierarchical Dynamic Loop Self-Scheduling on Distributed-Memory Systems Using an MPI+MPI Approach
Computationally-intensive loops are the primary source of parallelism in
scientific applications. Such loops are often irregular and a balanced
execution of their loop iterations is critical for achieving high performance.
However, several factors, such as problem characteristics and algorithmic and
systemic variations, may lead to load imbalance during execution. Dynamic loop
self-scheduling (DLS) techniques are devised to mitigate these factors, and
consequently, improve application performance. On distributed-memory systems,
DLS techniques can be implemented using a hierarchical master-worker execution
model and are, therefore, called hierarchical DLS techniques. These techniques
self-schedule loop iterations at two levels of hardware parallelism: across and
within compute nodes. Hybrid programming approaches that combine the message
passing interface (MPI) with open multi-processing (OpenMP) dominate the
implementation of hierarchical DLS techniques. The MPI-3 standard includes the
feature of sharing memory regions among MPI processes. This feature introduced
the MPI+MPI approach that simplifies the implementation of parallel scientific
applications. The present work designs and implements hierarchical DLS
techniques by exploiting the MPI+MPI approach. Four well-known DLS techniques
are considered in the evaluation proposed herein. The results indicate certain
performance advantages of the proposed approach compared to the hybrid
MPI+OpenMP approach.
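To make the MPI+MPI idea concrete, the sketch below shows the MPI-3 shared-memory mechanism such techniques build on: ranks on one node form a node-local communicator, allocate a shared window, and self-schedule iterations by atomically advancing a shared counter. This is a minimal illustration only; a fixed chunk size stands in for the DLS techniques evaluated in the paper, and N_ITERS and CHUNK are invented values.

    /* Minimal sketch (not the paper's code): node-level self-scheduling of
     * loop iterations through an MPI-3 shared-memory window. */
    #include <mpi.h>
    #include <stdio.h>

    #define N_ITERS 1000   /* illustrative loop size  */
    #define CHUNK   16     /* illustrative chunk size */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        /* Group the ranks that can share memory (i.e., on the same node). */
        MPI_Comm node;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node);
        int nrank;
        MPI_Comm_rank(node, &nrank);

        /* Rank 0 on the node hosts the shared iteration counter. */
        long *counter;
        MPI_Win win;
        MPI_Win_allocate_shared(nrank == 0 ? sizeof(long) : 0, sizeof(long),
                                MPI_INFO_NULL, node, &counter, &win);
        MPI_Aint wsize;
        int disp;
        MPI_Win_shared_query(win, 0, &wsize, &disp, &counter);
        if (nrank == 0) *counter = 0;
        MPI_Barrier(node);

        /* Each rank atomically grabs the next chunk of iterations. */
        long chunk = CHUNK, start, done = 0;
        MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
        for (;;) {
            MPI_Fetch_and_op(&chunk, &start, MPI_LONG, 0, 0, MPI_SUM, win);
            MPI_Win_flush(0, win);
            if (start >= N_ITERS) break;
            long end = start + chunk < N_ITERS ? start + chunk : N_ITERS;
            for (long i = start; i < end; ++i) ++done; /* loop body goes here */
        }
        MPI_Win_unlock_all(win);

        printf("node-local rank %d executed %ld iterations\n", nrank, done);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }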
DART-MPI: An MPI-based Implementation of a PGAS Runtime System
A Partitioned Global Address Space (PGAS) approach treats a distributed
system as if the memory were shared on a global level. Given such a global view
on memory, the user may program applications much as on a shared-memory
system. This greatly simplifies the task of developing parallel applications,
because no explicit communication has to be specified in the program for data
exchange between different computing nodes. In this paper we present DART, a
runtime environment, which implements the PGAS paradigm on large-scale
high-performance computing clusters. A specific feature of our implementation
is the use of one-sided communication of the Message Passing Interface (MPI)
version 3 (i.e. MPI-3) as the underlying communication substrate. We evaluated
the performance of the implementation with several low-level kernels in order
to determine overheads and limitations in comparison to the underlying MPI-3.
Comment: 11 pages, International Conference on Partitioned Global Address Space Programming Models (PGAS14)
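As background, the sketch below shows the MPI-3 one-sided primitives that serve as the communication substrate here, not DART's own API: each rank exposes a buffer through a window, and a passive-target epoch lets rank 0 write into rank 1's memory with no receive call on rank 1 (run with at least two ranks).

    /* Stand-alone sketch of the MPI-3 one-sided substrate (not DART's API):
     * rank 0 writes into rank 1's window without rank 1 participating. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *buf;
        MPI_Win win;
        MPI_Win_allocate(4 * sizeof(double), sizeof(double), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &buf, &win);
        for (int i = 0; i < 4; ++i) buf[i] = rank;

        /* Passive-target epoch: the target runs no communication code. */
        if (rank == 0) {
            double src[4] = {1, 2, 3, 4};
            MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
            MPI_Put(src, 4, MPI_DOUBLE, 1, 0, 4, MPI_DOUBLE, win);
            MPI_Win_unlock(1, win);   /* completes the transfer */
        }
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 1) {
            MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);  /* local read epoch */
            printf("buf[0..3] = %g %g %g %g\n", buf[0], buf[1], buf[2], buf[3]);
            MPI_Win_unlock(1, win);
        }
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }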
POSH: Paris OpenSHMEM: A High-Performance OpenSHMEM Implementation for Shared Memory Systems
In this paper we present the design and implementation of POSH, an
Open-Source implementation of the OpenSHMEM standard. We present a model for
its communications, and prove some properties of the memory model defined in
the OpenSHMEM specification. We present performance measurements of the
communication library provided by POSH and compare them with an existing
one-sided communication library. POSH can be downloaded from
\url{http://www.lipn.fr/~coti/POSH}.
Comment: This is an extended version (featuring the full proofs) of a paper accepted at ICCS'1
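For orientation, a minimal OpenSHMEM program follows. It uses only calls from the OpenSHMEM standard that POSH implements, nothing POSH-specific: symmetric heap allocation plus a one-sided put into a neighboring PE.

    /* Minimal standard OpenSHMEM program (generic, not POSH-specific):
     * each PE writes its id into the symmetric buffer of the next PE. */
    #include <shmem.h>
    #include <stdio.h>

    int main(void) {
        shmem_init();
        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric allocation: same layout on every PE's symmetric heap. */
        long *dest = shmem_malloc(sizeof(long));
        *dest = -1;
        shmem_barrier_all();

        long src = me;
        shmem_long_put(dest, &src, 1, (me + 1) % npes);  /* one-sided write */
        shmem_barrier_all();

        printf("PE %d received %ld\n", me, *dest);
        shmem_free(dest);
        shmem_finalize();
        return 0;
    }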
An efficient MPI/OpenMP parallelization of the Hartree-Fock method for the second generation of Intel Xeon Phi processor
Modern OpenMP threading techniques are used to convert the MPI-only
Hartree-Fock code in the GAMESS program to a hybrid MPI/OpenMP algorithm. Two
separate implementations that differ by the sharing or replication of key data
structures among threads are considered, density and Fock matrices. All
implementations are benchmarked on a super-computer of 3,000 Intel Xeon Phi
processors. With 64 cores per processor, scaling numbers are reported on up to
192,000 cores. The hybrid MPI/OpenMP implementation reduces the memory
footprint by approximately 200 times compared to the legacy code. The
MPI/OpenMP code was shown to run up to six times faster than the original for a
range of molecular system sizes.
Comment: SC17 conference paper, 12 pages, 7 figures
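The sketch below illustrates, under stated assumptions, the trade-off the abstract describes: in a shared scheme all threads of a rank update one copy of a matrix with atomic updates, while a replicated scheme gives each thread a private copy that is reduced afterwards, trading memory for less contention. The dimension N, the task count, and contract() are placeholders, not GAMESS code.

    /* Hedged sketch of the shared-vs-replicated thread design (not GAMESS
     * code): one MPI rank per node with OpenMP threads inside it. */
    #include <mpi.h>
    #include <stdlib.h>

    #define N 256                 /* placeholder matrix dimension */

    /* Placeholder for one task's contribution to the shared matrix. */
    static void contract(double *fock, long task) {
        #pragma omp atomic        /* shared scheme: atomic update */
        fock[task % (N * N)] += 1.0;
    }

    int main(int argc, char **argv) {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        /* Shared scheme: one Fock matrix per rank instead of one per
         * thread, cutting the memory footprint by the thread count. */
        double *fock = calloc(N * N, sizeof(double));
        #pragma omp parallel for schedule(dynamic)
        for (long task = 0; task < 10000; ++task)
            contract(fock, task);

        /* The replicated scheme would instead give each thread a private
         * copy and reduce them afterwards, e.g. with
         *   #pragma omp parallel for reduction(+ : fock[0:N * N])
         * trading memory for fewer atomic updates. */

        /* Combine the ranks' partial matrices, as in an MPI-only code. */
        MPI_Allreduce(MPI_IN_PLACE, fock, N * N, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        free(fock);
        MPI_Finalize();
        return 0;
    }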
One-Sided Communication for High Performance Computing Applications
Thesis (Ph.D.), Indiana University, Computer Sciences, 2009.
Parallel programming presents a number of critical challenges to application developers. Traditionally, message passing, in which a process explicitly sends data and another explicitly receives the data, has been used to program parallel applications. With the recent growth in multi-core processors, the level of parallelism necessary for next generation machines is cause for concern in the message passing community. The one-sided programming paradigm, in which only one of the two processes involved in communication actively participates in message transfer, has seen increased interest as a potential replacement for message passing.
One-sided communication does not carry the heavy per-message overhead associated with modern message passing libraries. The paradigm offers lower synchronization costs and advanced data manipulation techniques such as remote atomic arithmetic and synchronization operations. These combine to present an appealing interface for applications with random communication patterns, which traditionally present message passing implementations with difficulties.
This thesis presents taxonomies of both the one-sided paradigm and of applications that are ideal for the one-sided interface. Three case studies, based on real-world applications, are used to motivate both taxonomies and verify the applicability of the MPI one-sided communication and Cray SHMEM one-sided interfaces to real-world problems. While our results show a number of shortcomings with existing implementations, they also suggest that a number of applications could benefit from the one-sided paradigm. Finally, an implementation of the MPI one-sided interface within Open MPI is presented, which provides a number of unique performance features necessary for efficient use of the one-sided programming paradigm.
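As an illustration of the remote atomic arithmetic the thesis highlights, the sketch below (not taken from the thesis) lets every rank atomically add its rank number into a counter hosted by rank 0, with a single call at the origin and no code at the target.

    /* Illustrative remote atomic arithmetic with MPI one-sided calls:
     * every rank atomically adds its rank into a counter on rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        long *counter;
        MPI_Win win;
        MPI_Win_allocate(rank == 0 ? sizeof(long) : 0, sizeof(long),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &counter, &win);
        if (rank == 0) *counter = 0;
        MPI_Barrier(MPI_COMM_WORLD);

        /* Remote atomic add; `old` returns the value before the update. */
        long add = rank, old;
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        MPI_Fetch_and_op(&add, &old, MPI_LONG, 0, 0, MPI_SUM, win);
        MPI_Win_unlock(0, win);

        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 0) {
            MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);  /* local read epoch */
            printf("sum of ranks = %ld\n", *counter);
            MPI_Win_unlock(0, win);
        }
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }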
Coarray-based Load Balancing on Heterogeneous and Many-Core Architectures
In order to reach challenging performance goals, computer architecture is expected to change significantly in the near future. Heterogeneous chips, equipped with different types of cores and memory, will force application developers to deal with irregular communication patterns, high levels of parallelism, and unexpected behavior.
Load balancing among the heterogeneous compute units will be a critical task in order to achieve effective usage of the computational power provided by such new architectures. In this highly dynamic scenario, Partitioned Global Address Space (PGAS) languages, like Coarray Fortran, appear to be a promising alternative to standard MPI programming based on two-sided communication, in particular because of the one-sided semantics and ease of programmability of PGAS. In this paper, we show how Coarray Fortran can be used to implement dynamic load balancing algorithms on an exascale compute node and how these algorithms can produce performance benefits for an Asian option pricing problem, running in symmetric mode on Intel Xeon Phi Knights Corner and Knights Landing architectures.
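The paper's implementation is in Coarray Fortran; purely as an analogous illustration of the one-sided semantics it relies on, the C/MPI sketch below lets a rank inspect every rank's remaining-task counter without any code running on the inspected side. The workload initialization is invented for the example.

    /* Analogous C/MPI sketch of one-sided load inspection (the paper
     * itself uses Coarray Fortran): a rank reads every rank's task
     * counter with MPI_Get and picks the most loaded victim. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank exposes how many tasks it still has (invented load). */
        long *remaining;
        MPI_Win win;
        MPI_Win_allocate(sizeof(long), sizeof(long), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &remaining, &win);
        *remaining = (rank + 1) * 10L;
        MPI_Barrier(MPI_COMM_WORLD);

        /* One-sided reads: no victim-side code is needed, which is the
         * programmability advantage the PGAS model provides. */
        long *counts = malloc(size * sizeof(long));
        MPI_Win_lock_all(0, win);
        for (int p = 0; p < size; ++p)
            MPI_Get(&counts[p], 1, MPI_LONG, p, 0, 1, MPI_LONG, win);
        MPI_Win_flush_all(win);
        int victim = 0;
        for (int p = 1; p < size; ++p)
            if (counts[p] > counts[victim]) victim = p;
        /* A real balancer would now atomically claim work from the victim,
         * e.g. with MPI_Fetch_and_op, and execute the stolen tasks. */
        MPI_Win_unlock_all(win);

        printf("rank %d would steal from rank %d\n", rank, victim);
        free(counts);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }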
A Parallel General Purpose Multi-Objective Optimization Framework, with Application to Beam Dynamics
Particle accelerators are invaluable tools for research in the basic and
applied sciences, in fields such as materials science, chemistry, the
biosciences, particle physics, nuclear physics and medicine. The design,
commissioning, and operation of accelerator facilities are non-trivial tasks,
due to the large number of control parameters and the complex interplay of
several conflicting design goals. We propose to tackle this problem by means of
multi-objective optimization algorithms which also facilitate a parallel
deployment. In order to compute solutions in a meaningful time frame a fast and
scalable software framework is required. In this paper, we present the
implementation of such a general-purpose framework for simulation-based
multi-objective optimization methods that allows the automatic investigation of
optimal sets of machine parameters. The implementation is based on a
master/slave paradigm, employing several masters that govern a set of slaves
executing simulations and performing optimization tasks. Using evolutionary
algorithms as the optimizer and OPAL as the forward solver, validation
experiments and results of multi-objective optimization problems in the domain
of beam dynamics are presented. The high charge beam line at the Argonne
Wakefield Accelerator Facility was used as the beam dynamics model. The 3D beam
size, transverse momentum, and energy spread were optimized.
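A schematic of the master/slave pattern described above is sketched below; the scalar parameter, the evaluate() stub, and the message tags are illustrative stand-ins for the framework's candidate machine-parameter sets and the OPAL forward solver.

    /* Schematic master/slave dispatch loop (illustrative, not the actual
     * framework): the master farms out candidates, slaves evaluate them. */
    #include <mpi.h>

    #define N_CANDIDATES 64
    #define TAG_WORK 1
    #define TAG_STOP 2

    /* Placeholder objective; the paper's forward solver is OPAL. */
    static double evaluate(double x) { return (x - 3.0) * (x - 3.0); }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                        /* master */
            int sent = 0, done = 0;
            for (int w = 1; w < size && sent < N_CANDIDATES; ++w, ++sent) {
                double x = sent;
                MPI_Send(&x, 1, MPI_DOUBLE, w, TAG_WORK, MPI_COMM_WORLD);
            }
            for (int w = sent + 1; w < size; ++w) {  /* unneeded slaves */
                double stop = 0;
                MPI_Send(&stop, 1, MPI_DOUBLE, w, TAG_STOP, MPI_COMM_WORLD);
            }
            while (done < sent) {
                double f;
                MPI_Status st;
                MPI_Recv(&f, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                ++done;                         /* EA would archive (x, f) */
                if (sent < N_CANDIDATES) {      /* keep the slave busy */
                    double x = sent++;
                    MPI_Send(&x, 1, MPI_DOUBLE, st.MPI_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD);
                } else {
                    double stop = 0;
                    MPI_Send(&stop, 1, MPI_DOUBLE, st.MPI_SOURCE, TAG_STOP,
                             MPI_COMM_WORLD);
                }
            }
        } else {                                /* slave */
            for (;;) {
                double x;
                MPI_Status st;
                MPI_Recv(&x, 1, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD,
                         &st);
                if (st.MPI_TAG == TAG_STOP) break;
                double f = evaluate(x);         /* run the forward solver */
                MPI_Send(&f, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }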
Introduction to StarNEig -- A Task-based Library for Solving Nonsymmetric Eigenvalue Problems
In this paper, we present the StarNEig library for solving dense
non-symmetric (generalized) eigenvalue problems. The library is built on top of
the StarPU runtime system and targets both shared and distributed memory
machines. Some components of the library support GPUs. The library is currently
in an early beta state and only real arithmetic is supported. Support for
complex data types is planned for a future release. This paper is aimed at
potential users of the library. We describe the design choices and capabilities
of the library, and contrast them to existing software such as ScaLAPACK.
StarNEig implements a ScaLAPACK compatibility layer that should make it easy
for a new user to transition to StarNEig. We demonstrate the performance of the
library with a small set of computational experiments.
Comment: 10 pages, 4 figures (10 when counting sub-figures), 2 tex-files. Submitted to PPAM 2019, 13th International Conference on Parallel Processing and Applied Mathematics, September 8-11, 2019. Proceedings will be published after the conference by Springer in the LNCS series. Second author's first name is "Carl Christian" and last name "Kjelgaard Mikkelsen".
Exploring Fully Offloaded GPU Stream-Aware Message Passing
Modern heterogeneous supercomputing systems are composed of CPUs, GPUs, and
high-speed network interconnects. Communication libraries supporting efficient
data transfers involving memory buffers from the GPU memory typically require
the CPU to orchestrate the data transfer operations. A new offload-friendly
communication strategy, stream-triggered (ST) communication, was explored to
allow offloading the synchronization and data movement operations from the CPU
to the GPU. A Message Passing Interface (MPI) one-sided active target
synchronization based implementation was used as an exemplar to illustrate the
proposed strategy. A latency-sensitive nearest neighbor microbenchmark was used
to explore the various performance aspects of the implementation. The offloaded
implementation shows significant on-node performance advantages over standard
MPI active RMA (36%) and point-to-point (61%) communication. The current
multi-node improvement is smaller (23% faster than standard active RMA but 11%
slower than point-to-point), but work is in progress to pursue further
improvements.
Comment: 12 pages, 17 figures
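For reference, the sketch below shows the conventional host-driven form of MPI one-sided active target synchronization (post/start/complete/wait) on a ring standing in for the nearest-neighbor pattern; the paper's contribution is offloading exactly these synchronization and data-movement steps to the GPU stream.

    /* Generic host-driven sketch of MPI active target synchronization
     * (post/start/complete/wait); ring neighbors stand in for the
     * nearest-neighbor exchange. Not the paper's offloaded code. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *halo;
        MPI_Win win;
        MPI_Win_allocate(sizeof(double), sizeof(double), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &halo, &win);
        *halo = -1.0;

        int left = (rank - 1 + size) % size, right = (rank + 1) % size;
        MPI_Group world, from, to;
        MPI_Comm_group(MPI_COMM_WORLD, &world);
        MPI_Group_incl(world, 1, &left,  &from);  /* who writes to me */
        MPI_Group_incl(world, 1, &right, &to);    /* whom I write to  */

        MPI_Win_post(from, 0, win);               /* expose my window    */
        MPI_Win_start(to, 0, win);                /* open access epoch   */
        double val = rank;
        MPI_Put(&val, 1, MPI_DOUBLE, right, 0, 1, MPI_DOUBLE, win);
        MPI_Win_complete(win);                    /* my put is finished  */
        MPI_Win_wait(win);                        /* neighbor's put done */

        printf("rank %d got %g from rank %d\n", rank, *halo, left);

        MPI_Group_free(&world);
        MPI_Group_free(&from);
        MPI_Group_free(&to);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }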