A Multilevel Approach to Topology-Aware Collective Operations in Computational Grids
The efficient implementation of collective communication operations has
received much attention. Initial efforts produced "optimal" trees based on
network communication models that assumed equal point-to-point latencies
between any two processes. This assumption is violated in most practical
settings, however, particularly in heterogeneous systems such as clusters of
SMPs and wide-area "computational Grids," with the result that collective
operations perform suboptimally. In response, more recent work has focused on
creating topology-aware trees for collective operations that minimize
communication across slower channels (e.g., a wide-area network). While these
efforts have significant communication benefits, they all limit their view of
the network to only two layers. We present a strategy based upon a multilayer
view of the network. By creating multilevel topology-aware trees we take
advantage of communication cost differences at every level in the network. We
used this strategy to implement topology-aware versions of several MPI
collective operations in MPICH-G2, the Globus Toolkit(TM)-enabled version of
the popular MPICH implementation of the MPI standard. Using information about
topology provided by MPICH-G2, we construct these multilevel topology-aware
trees automatically during execution. We present results demonstrating the
advantages of our multilevel approach by comparing it to the default
(topology-unaware) implementation provided by MPICH and a topology-aware
two-layer implementation.
Comment: 16 pages, 8 figures
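The multilevel idea can be sketched with plain MPI calls. The code below is an
illustration, not the MPICH-G2 implementation: the colors array is a
hypothetical stand-in for the topology information that MPICH-G2 obtains from
Globus, and for brevity the broadcast root is assumed to hold local rank 0 in
every group it belongs to (a full implementation re-roots each stage).

#include <mpi.h>

#define NLEVELS 3   /* assumed: 0 = site (WAN), 1 = cluster (LAN), 2 = SMP node */

/* colors[l] identifies which level-l group this process belongs to. */
void multilevel_bcast(void *buf, int count, MPI_Datatype dtype,
                      MPI_Comm comm, const int colors[NLEVELS])
{
    MPI_Comm level_comm = comm;

    for (int lvl = 0; lvl < NLEVELS; lvl++) {
        /* Processes sharing a color at this level form one subgroup. */
        MPI_Comm sub;
        MPI_Comm_split(level_comm, colors[lvl], 0, &sub);

        int sub_rank;
        MPI_Comm_rank(sub, &sub_rank);

        /* The representatives (local rank 0) of all subgroups exchange the
         * data first, i.e. over the slowest links at this level. */
        MPI_Comm reps;
        MPI_Comm_split(level_comm,
                       sub_rank == 0 ? 0 : MPI_UNDEFINED, 0, &reps);
        if (reps != MPI_COMM_NULL) {
            MPI_Bcast(buf, count, dtype, 0, reps);
            MPI_Comm_free(&reps);
        }

        /* Descend: the next iteration repeats the scheme inside each
         * subgroup, over the faster links of the next level. */
        if (level_comm != comm) MPI_Comm_free(&level_comm);
        level_comm = sub;
    }

    /* Innermost stage, e.g. shared memory inside an SMP node. */
    MPI_Bcast(buf, count, dtype, 0, level_comm);
    if (level_comm != comm) MPI_Comm_free(&level_comm);
}

At every level only the group representatives communicate over the slow
channel, so the wide-area network carries one copy of the message per site
instead of one per process.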
Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?
Dense Multi-GPU systems have recently gained a lot of attention in the HPC
arena. Traditionally, MPI runtimes have been primarily designed for clusters
with a large number of nodes. However, with the advent of MPI+CUDA applications
and CUDA-Aware MPI runtimes like MVAPICH2 and OpenMPI, it has become important
to address efficient communication schemes for such dense Multi-GPU nodes. This,
coupled with new application workloads brought forward by Deep Learning
frameworks like Caffe and Microsoft CNTK, poses additional design constraints due
to the very large GPU-buffer messages communicated during the training phase.
In this context, special-purpose libraries like NVIDIA NCCL have been proposed
for GPU-based collective communication on dense GPU systems. In this paper, we
propose a pipelined chain (ring) design for the MPI_Bcast collective operation
along with an enhanced collective tuning framework in MVAPICH2-GDR that enables
efficient intra-/inter-node multi-GPU communication. We present an in-depth
performance landscape for the proposed MPI_Bcast schemes along with a
comparative analysis of NVIDIA NCCL Broadcast and NCCL-based MPI_Bcast. The
proposed designs for MVAPICH2-GDR enable up to 14X and 16.6X improvement,
compared to NCCL-based solutions, for intra- and inter-node broadcast latency,
respectively. In addition, the proposed designs provide up to 7% improvement
over NCCL-based solutions for data parallel training of the VGG network on 128
GPUs using Microsoft CNTK.
Comment: 8 pages, 3 figures
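A pipelined chain broadcast can be sketched with ordinary MPI point-to-point
calls, as below. This is a sketch of the general technique under stated
assumptions, not the tuned MVAPICH2-GDR design: the 4 MB chunk size and the
rank-ordered chain are illustrative, and a CUDA-aware MPI is assumed so that
buf may be a GPU device pointer.

#include <mpi.h>
#include <stddef.h>

#define CHUNK_BYTES (4 * 1024 * 1024)   /* pipeline granularity (assumed) */

/* Broadcast `total` bytes from `root` along a chain of ranks, chunk by chunk. */
void chain_bcast_bytes(char *buf, size_t total, int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Position in the chain relative to the root, and chain neighbours. */
    int pos  = (rank - root + size) % size;
    int prev = (pos == 0)        ? MPI_PROC_NULL : (rank - 1 + size) % size;
    int next = (pos == size - 1) ? MPI_PROC_NULL : (rank + 1) % size;

    for (size_t off = 0; off < total; off += CHUNK_BYTES) {
        size_t rem = total - off;
        int n = (int)(rem < CHUNK_BYTES ? rem : CHUNK_BYTES);

        /* Receive the chunk from the predecessor (a no-op at the root),
         * then forward it to the successor; while this chunk travels down
         * the chain, the next chunk already follows it. */
        MPI_Recv(buf + off, n, MPI_BYTE, prev, 0, comm, MPI_STATUS_IGNORE);
        MPI_Send(buf + off, n, MPI_BYTE, next, 0, comm);
    }
}

Because each rank forwards chunk k while its predecessor already injects chunk
k+1, all links of the chain stay busy once the pipeline is filled, which is
what makes chain designs attractive for the very large messages produced during
training.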
Efficient Broadcast for Multicast-Capable Interconnection Networks
The broadcast function MPI_Bcast() from the MPI-1.1 standard is one of the most
heavily used collective operations in the message passing programming paradigm.
This diploma thesis makes use of a feature called "Multicast", which is
supported by several network technologies (such as Ethernet or InfiniBand), to
create an efficient MPI_Bcast() implementation, especially for large
communicators and small messages.
A preceding analysis of existing real-world applications leads to an algorithm
that not only performs well on synthetic benchmarks but performs even better
for a wide class of parallel applications. The resulting broadcast has been
implemented for the open-source MPI library "Open MPI" using IP multicast.
The results show that the new broadcast is consistently faster than existing
point-to-point implementations once the number of MPI processes exceeds 8
nodes. The performance gain reaches a factor of 4.9 on 342 nodes, because the
new algorithm scales practically independently of the number of involved
processes.
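To illustrate the transport this approach builds on (not the Open MPI component
itself), the following sketch shows a bare IP-multicast send and receive over
UDP sockets. The group address and port are arbitrary example values, and the
acknowledgement and retransmission protocol that a reliable MPI_Bcast()
additionally requires is deliberately omitted.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

#define MCAST_GROUP "239.0.0.1"   /* assumed administratively scoped group */
#define MCAST_PORT  5000          /* assumed port */

/* Root: send the payload once; the network replicates it to all receivers. */
static int mcast_send(const void *buf, size_t len)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in grp = {0};
    grp.sin_family = AF_INET;
    grp.sin_port = htons(MCAST_PORT);
    inet_pton(AF_INET, MCAST_GROUP, &grp.sin_addr);
    ssize_t n = sendto(fd, buf, len, 0, (struct sockaddr *)&grp, sizeof grp);
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}

/* Non-root: join the multicast group and wait for the datagram. */
static int mcast_recv(void *buf, size_t len)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in any = {0};
    any.sin_family = AF_INET;
    any.sin_port = htons(MCAST_PORT);
    any.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&any, sizeof any);

    struct ip_mreq mreq;
    inet_pton(AF_INET, MCAST_GROUP, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    ssize_t n = recv(fd, buf, len, 0);
    close(fd);
    return n >= 0 ? 0 : -1;
}

The key property is that the root transmits the payload only once, regardless
of the number of receivers, which is why the broadcast cost can stay nearly
independent of the process count.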
Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms
Matrix multiplication is a very important computation kernel both in its own
right as a building block of many scientific applications and as a popular
representative of other scientific applications. Cannon's algorithm, which dates
back to 1969, was the first efficient algorithm for parallel matrix
multiplication with theoretically optimal communication cost. However, this
algorithm requires a square number of processors. In the mid-1990s the SUMMA
algorithm was introduced; it overcomes this shortcoming of Cannon's algorithm,
as it can also be used on a non-square number of processors. Since then the
number of processors in HPC platforms has increased by two orders of magnitude,
making the contribution of communication to the overall execution time more
significant. Therefore, state-of-the-art parallel matrix multiplication
algorithms should be revisited to further reduce the communication cost. This
paper introduces a new parallel matrix multiplication algorithm, Hierarchical
SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the
communication cost of SUMMA by introducing a two-level virtual hierarchy into
the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P
demonstrate a reduction of the communication cost by up to 2.08 times on 2048
cores and up to 5.89 times on 16384 cores.
Comment: 9 pages
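The two-level hierarchy can be sketched with communicator splitting, as below.
The group count and the choice of the lowest-ranked process as group leader are
illustrative assumptions rather than the tuned HSUMMA parameters, and the
handling of the broadcast root is simplified.

#include <mpi.h>

/* Split `comm` into `num_groups` contiguous groups (assumes num_groups divides
 * the number of processes) and build the two communicators of the hierarchy:
 * one inside each group, and one connecting the group leaders. */
void build_hierarchy(MPI_Comm comm, int num_groups,
                     MPI_Comm *intra, MPI_Comm *inter)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int group = rank / (size / num_groups);      /* index of this rank's group */
    MPI_Comm_split(comm, group, rank, intra);    /* intra-group communicator */

    int lrank;
    MPI_Comm_rank(*intra, &lrank);
    MPI_Comm_split(comm, lrank == 0 ? 0 : MPI_UNDEFINED,
                   rank, inter);                 /* leaders only; others get MPI_COMM_NULL */
}

/* Two-stage broadcast of a pivot panel; for brevity the panel owner is assumed
 * to be the leader of the first group. */
void hierarchical_bcast(double *panel, int count, MPI_Comm intra, MPI_Comm inter)
{
    if (inter != MPI_COMM_NULL)
        MPI_Bcast(panel, count, MPI_DOUBLE, 0, inter);   /* among group leaders */
    MPI_Bcast(panel, count, MPI_DOUBLE, 0, intra);       /* inside each group */
}

Replacing one broadcast over all p processes by a broadcast among the G group
leaders followed by broadcasts within each group of p/G processes is what
reduces the communication cost; the best G depends on the platform.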