Preparing HPC Applications for the Exascale Era: A Decoupling Strategy
Production-quality parallel applications are often a mixture of diverse
operations, such as computation- and communication-intensive, regular and
irregular, tightly and loosely coupled operations. In the conventional
construction of parallel applications, each process performs all of these
operations, which can be inefficient and can seriously limit scalability,
especially at large scale. We propose a decoupling strategy to improve the
scalability of applications running on large-scale systems.
Our strategy separates application operations onto groups of processes and
enables a dataflow processing paradigm among the groups. This mechanism
reduces the impact of load imbalance and increases parallel efficiency by
pipelining multiple operations. We provide a proof-of-concept
implementation using MPI, the de-facto programming system on current
supercomputers. We demonstrate the effectiveness of this strategy by decoupling
the reduce, particle communication, halo exchange and I/O operations in a set
of scientific and data-analytics applications. A performance evaluation on
8,192 processes of a Cray XC40 supercomputer shows that the proposed approach
can achieve up to 4x performance improvement.
Comment: The 46th International Conference on Parallel Processing (ICPP 2017)
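As a hedged illustration of the decoupling idea (not the paper's actual code), the sketch below splits MPI_COMM_WORLD into a large compute group and a small I/O group, with compute ranks streaming results to I/O ranks in a dataflow fashion; the 8:1 group ratio and buffer shape are assumptions.

    /* Hedged sketch of the decoupling strategy, not the paper's code:
     * split MPI_COMM_WORLD into compute and I/O groups and stream data
     * between them. The 8:1 group ratio and buffer size are assumed. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int n_io = size / 8 > 0 ? size / 8 : 1;   /* assumed ratio */
        int is_io = rank >= size - n_io;          /* last ranks do I/O */

        MPI_Comm group;                           /* per-group communicator */
        MPI_Comm_split(MPI_COMM_WORLD, is_io, rank, &group);

        double buf[1024] = {0};
        int n_compute = size - n_io;
        if (!is_io) {
            /* Compute ranks: do the tightly coupled work within `group`,
             * then ship results to a fixed I/O rank (a dataflow edge). */
            int dst = n_compute + rank % n_io;
            MPI_Send(buf, 1024, MPI_DOUBLE, dst, 0, MPI_COMM_WORLD);
        } else {
            /* I/O ranks: drain results from their compute partners,
             * overlapping I/O with the next compute phase. */
            int me = rank - n_compute;
            int expected = n_compute / n_io + (me < n_compute % n_io);
            for (int i = 0; i < expected; ++i)
                MPI_Recv(buf, 1024, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Comm_free(&group);
        MPI_Finalize();
        return 0;
    }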
Adaptive control in rollforward recovery for extreme scale multigrid
With the increasing number of compute components, failures in future
exascale computer systems are expected to become more frequent. This motivates
the study of novel resilience techniques. Here, we extend a recently proposed
algorithm-based recovery method for multigrid iterations by introducing an
adaptive control. After a fault, the healthy part of the system continues the
iterative solution process, while the solution in the faulty domain is
reconstructed by an asynchronous online recovery. The computations in both
the faulty and healthy subdomains must be carefully coordinated: in
particular, both under- and over-solving must be avoided, as both waste
computational resources and therefore increase the overall
time-to-solution. To control the local recovery and guarantee an optimal
re-coupling, we introduce a stopping criterion based on a mathematical error
estimator. It involves hierarchically weighted sums of residuals on
uniformly refined meshes and is well suited to parallel high-performance
computing. The re-coupling process is steered by
local contributions of the error estimator. We propose and compare two criteria
which differ in their weights. Failure scenarios when solving up to …
unknowns on more than 245,766 parallel processes are reported on a
state-of-the-art petascale supercomputer, demonstrating the robustness of
the method.
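The abstract does not give the estimator's formula; as a purely illustrative sketch, a hierarchically weighted residual estimator and re-coupling test could take the form

    \eta_{F} \;=\; \Bigl(\sum_{\ell=0}^{L} \omega_{\ell}\,\lVert r_{\ell}\rVert_{F}^{2}\Bigr)^{1/2},
    \qquad \text{re-couple once}\quad \eta_{F} \le \theta\,\eta_{H},

where r_\ell is the residual on refinement level \ell restricted to the faulty subdomain F (resp. the healthy subdomain H), the level weights \omega_\ell are where the two proposed criteria would differ, and \theta is a re-coupling tolerance. All symbols here are placeholders, not the paper's notation.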
Performance Evaluation and Modeling of HPC I/O on Non-Volatile Memory
HPC applications pose high demands on I/O performance and storage capability.
The emerging non-volatile memory (NVM) techniques offer low latency, high
bandwidth, and persistence for HPC applications. However, the existing I/O
stack is designed and optimized under the assumption of disk-based storage.
To use NVM effectively, we must re-examine the existing HPC I/O subsystem
and properly integrate NVM into it. With NVM as fast storage, the previous
assumption of inferior storage performance (e.g., hard drives) is no longer
valid. The performance problem caused by
slow storage may be mitigated; the existing mechanisms to narrow the
performance gap between storage and CPU may be unnecessary and result in large
overhead. Thus, fully understanding the impact of introducing NVM into the HPC
software stack demands a thorough performance study.
In this paper, we analyze and model the performance of I/O intensive HPC
applications with NVM as a block device. We study the performance from three
perspectives: (1) the impact of NVM on the performance of traditional page
cache; (2) a performance comparison between MPI individual I/O and POSIX I/O;
and (3) the impact of NVM on the performance of collective I/O. We reveal the
diminishing effect of the page cache, the minor performance difference between
MPI individual I/O and POSIX I/O, and the performance disadvantage of
collective I/O on NVM due to unnecessary data shuffling. We also model the
performance of MPI
collective I/O and study the complex interaction between data shuffling,
storage performance, and I/O access patterns.
Comment: 10 pages
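To make concrete what perspective (2) compares, here is a hedged sketch, not the paper's benchmark: the file paths, the /mnt/nvm mount point, and the per-rank sizes are assumptions. It issues the same contiguous per-rank write through POSIX and through MPI individual (non-collective) I/O.

    /* Hedged sketch: the same contiguous write issued through POSIX and
     * through MPI individual I/O, the two paths whose performance the
     * paper compares on an NVM block device. Paths and sizes assumed. */
    #include <mpi.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t n = 1 << 20;                /* 1 Mi doubles per rank */
        double *buf = calloc(n, sizeof *buf);
        MPI_Offset off = (MPI_Offset)rank * n * sizeof *buf;

        /* POSIX individual I/O: one pwrite per rank into a shared file. */
        int fd = open("/mnt/nvm/posix.dat", O_CREAT | O_WRONLY, 0644);
        pwrite(fd, buf, n * sizeof *buf, (off_t)off);
        close(fd);

        /* MPI individual (non-collective) I/O: same pattern via MPI-IO. */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "/mnt/nvm/mpi.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at(fh, off, buf, (int)n, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        /* The collective variant (MPI_File_write_at_all) would add the
         * data shuffling whose overhead the paper shows can hurt on NVM. */
        free(buf);
        MPI_Finalize();
        return 0;
    }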
Multi-Architecture Monte-Carlo (MC) Simulation of Soft Coarse-Grained Polymeric Materials: SOft coarse grained Monte-carlo Acceleration (SOMA)
Multi-component polymer systems are important for the development of new
materials because of their ability to phase-separate or self-assemble into
nano-structures. The Single-Chain-in-Mean-Field (SCMF) algorithm in conjunction
with a soft, coarse-grained polymer model is an established technique to
investigate these soft-matter systems. Here we present an implementation of
this method: SOft coarse grained Monte-carlo Acceleration (SOMA). It is
suitable for simulating large systems with up to billions of particles, yet
versatile enough to study the properties of different kinds of molecular
architectures and interactions. We achieve efficient simulations by
employing accelerators such as GPUs on both workstations and
supercomputers. The implementation remains flexible and maintainable
because it is written in a scientific programming language enhanced by
OpenACC pragmas for the accelerators. We present implementation details and
features of the program package, investigate the scalability of our
implementation, SOMA, and discuss two applications that cover system sizes
difficult to reach with other common particle-based simulation methods.
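The abstract's key implementation claim is portability via OpenACC pragmas on plain loops. The sketch below shows that pattern in generic form; the function name, energy model, and RNG are toy placeholders, not SOMA's actual API.

    /* Hedged sketch of the offload pattern the abstract describes, not
     * SOMA's code: a Monte-Carlo trial-move loop over particles,
     * annotated with OpenACC so the same C code targets GPUs or CPUs. */
    #include <math.h>

    void mc_sweep(float *x, const float *field, int n, unsigned seed)
    {
        #pragma acc parallel loop copy(x[0:n]) copyin(field[0:n])
        for (int i = 0; i < n; ++i) {
            unsigned s = (seed ^ (unsigned)i) * 1664525u + 1013904223u;
            float trial = x[i] + ((s >> 8) / 16777216.0f - 0.5f);
            s = s * 1664525u + 1013904223u;          /* advance toy RNG */
            float u = (s >> 8) / 16777216.0f;        /* uniform in [0,1) */
            float dE = field[i] * (trial - x[i]);    /* toy energy change */
            if (dE <= 0.0f || u < expf(-dE))         /* Metropolis test */
                x[i] = trial;
        }
    }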