Canary: Congestion-Aware In-Network Allreduce Using Dynamic Trees
The allreduce operation is an essential building block for many distributed
applications, ranging from the training of deep learning models to scientific
computing. In an allreduce operation, data from multiple hosts is aggregated
and then broadcast to every host participating in the operation.
Allreduce performance can be improved by a factor of two by aggregating the
data directly in the network. Switches aggregate data coming from multiple
ports before forwarding the partially aggregated result to the next hop. In all
existing solutions, each switch needs to know the ports from which it will
receive the data to aggregate. However, this forces packets to traverse a
predefined set of switches, making these solutions prone to congestion. For
this reason, we design Canary, the first congestion-aware in-network allreduce
algorithm. Canary uses load balancing algorithms to forward packets on the
least congested paths. Because switches do not know from which ports they will
receive the data to aggregate, they use timeouts to aggregate the data in a
best-effort way. We develop a P4 Canary prototype and evaluate it on a Tofino
switch. We then validate Canary through simulations on large networks, showing
performance improvements of up to 40% over the state of the art.
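To make the operation concrete, below is a minimal Python sketch of allreduce semantics (aggregate, then broadcast); the element-wise sum and the three-host data are illustrative assumptions, not Canary's protocol.

```python
# Minimal sketch of allreduce semantics: every host contributes a vector,
# the vectors are reduced element-wise, and every host receives the result.
# The reduction operator (addition) and the host data are illustrative.

def allreduce(host_buffers):
    """Reduce all per-host vectors element-wise, then 'broadcast' the result."""
    n = len(host_buffers[0])
    reduced = [0] * n
    for buf in host_buffers:                      # aggregation phase
        for i, value in enumerate(buf):
            reduced[i] += value
    return [list(reduced) for _ in host_buffers]  # broadcast phase: one copy per host

hosts = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # data held by three hosts
print(allreduce(hosts))  # every host ends up with [12, 15, 18]
```

In-network aggregation moves the summation loop into the switches, so partial sums are combined hop by hop instead of at the hosts.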
GraphStep: A System Architecture for Sparse-Graph Algorithms
Many important applications are organized around
long-lived, irregular sparse graphs (e.g., data and knowledge
bases, CAD optimization, numerical problems, simulations). The
graph structures are large, and the applications need regular
access to a large, data-dependent portion of the graph for each
operation (e.g., the algorithm may need to walk the graph, visiting
all nodes, or propagate changes through many nodes in the
graph). On conventional microprocessors, the graph structures
exceed on-chip cache capacities, making main-memory bandwidth
and latency the key performance limiters. To avoid this
“memory wall,” we introduce a concurrent system architecture
for sparse graph algorithms that places graph nodes in small
distributed memories paired with specialized graph processing
nodes interconnected by a lightweight network. This gives us a
scalable way to map these applications so that they can exploit
the high-bandwidth and low-latency capabilities of embedded
memories (e.g., FPGA Block RAMs). On typical spreading activation
queries on the ConceptNet Knowledge Base, a sample
application, this translates into an order of magnitude speedup
per FPGA compared to a state-of-the-art Pentium processor.
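As a rough illustration of the compute model described above (not the FPGA implementation), the Python sketch below runs one spreading-activation step as message passing between graph nodes; the decay factor, edge weights, and toy graph are assumptions.

```python
# Toy sketch of one spreading-activation step in a GraphStep-like model:
# each active node sends a weighted message along its out-edges, and every
# node accumulates incoming messages. The decay factor and the graph itself
# are illustrative assumptions.

DECAY = 0.5

def spread_step(edges, activation):
    """edges: {node: [(neighbor, weight), ...]}, activation: {node: float}."""
    incoming = {node: 0.0 for node in edges}
    for node, level in activation.items():        # every node fires concurrently
        for neighbor, weight in edges.get(node, []):
            incoming[neighbor] += DECAY * weight * level
    # accumulate messages into the existing activation levels
    return {node: activation.get(node, 0.0) + incoming[node] for node in incoming}

graph = {"dog": [("animal", 1.0), ("pet", 0.8)], "animal": [], "pet": [("animal", 0.6)]}
levels = {"dog": 1.0, "animal": 0.0, "pet": 0.0}
print(spread_step(graph, levels))  # activation flows one hop outward
```

In the architecture above, each node's adjacency list and activation level would live in a small embedded memory next to its processing node, so every node can fire in the same step without touching main memory.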
A Fast Compiler for NetKAT
High-level programming languages play a key role in a growing number of
networking platforms, streamlining application development and enabling precise
formal reasoning about network behavior. Unfortunately, current compilers only
handle "local" programs that specify behavior in terms of hop-by-hop forwarding
behavior, or modest extensions such as simple paths. To encode richer "global"
behaviors, programmers must add extra state -- something that is tricky to get
right and makes programs harder to write and maintain. Making matters worse,
existing compilers can take tens of minutes to generate the forwarding state
for the network, even on relatively small inputs. This forces programmers to
waste time working around performance issues or even revert to using
hardware-level APIs.
This paper presents a new compiler for the NetKAT language that handles rich
features including regular paths and virtual networks, and yet is several
orders of magnitude faster than previous compilers. The compiler uses symbolic
automata to calculate the extra state needed to implement "global" programs,
and an intermediate representation based on binary decision diagrams to
dramatically improve performance. We describe the design and implementation of
three essential compiler stages: from virtual programs (which specify behavior
in terms of virtual topologies) to global programs (which specify network-wide
behavior in terms of physical topologies), from global programs to local
programs (which specify behavior in terms of single-switch behavior), and from
local programs to hardware-level forwarding tables. We present results from
experiments on real-world benchmarks that quantify performance in terms of
compilation time and forwarding table size.
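To give a feel for a decision-diagram intermediate representation (a simplified stand-in, not the compiler's actual data structure), the sketch below evaluates packets against a small diagram whose internal nodes test packet fields and whose leaves carry forwarding actions; the field names and actions are assumptions.

```python
# Simplified sketch of a decision-diagram intermediate representation for
# forwarding policies: internal nodes test a packet field, leaves hold the
# actions to apply. Field names and actions here are illustrative.

class Test:
    def __init__(self, field, value, hi, lo):
        self.field, self.value = field, value
        self.hi, self.lo = hi, lo        # branches for test true / false

class Leaf:
    def __init__(self, actions):
        self.actions = actions           # e.g. [("port", 2)] means "output on port 2"

def evaluate(node, packet):
    """Walk the diagram, following the branch chosen by each field test."""
    while isinstance(node, Test):
        node = node.hi if packet.get(node.field) == node.value else node.lo
    return node.actions

# dst=10.0.0.1 -> port 1; any other dst on vlan 20 -> port 2; else drop
diagram = Test("dst", "10.0.0.1", Leaf([("port", 1)]),
               Test("vlan", 20, Leaf([("port", 2)]), Leaf([])))
print(evaluate(diagram, {"dst": "10.0.0.1"}))              # [('port', 1)]
print(evaluate(diagram, {"dst": "10.0.0.9", "vlan": 20}))  # [('port', 2)]
```

Because each path through such a diagram fixes a set of field tests and actions, the paths map naturally onto match-action rows in hardware forwarding tables.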
Partial aggregation for collective communication in distributed memory machines
High Performance Computing (HPC) systems interconnect a large number of Processing Elements (PEs) in high-bandwidth networks to simulate complex scientific problems. The increasing scale of HPC systems poses great challenges for algorithm designers. As the average distance between PEs increases, data movement across hierarchical memory subsystems introduces high latency. Minimizing latency is particularly challenging in collective communications, where many PEs may interact in complex communication patterns. Although collective communications can be optimized for network-level parallelism, occasional synchronization delays due to dependencies in the communication pattern degrade application performance.
To reduce the performance impact of communication and synchronization costs, parallel algorithms are designed with sophisticated latency hiding techniques. The principle is to interleave computation with asynchronous communication, which increases the overall occupancy of compute cores. However, collective communication primitives abstract away parallelism, which limits the integration of latency hiding techniques. Approaches to work around these limitations either modify the algorithmic structure of application codes or replace collective primitives with verbose low-level communication calls. While these approaches give fine-grained control for latency hiding, implementing collective communication algorithms is challenging and requires expert knowledge of HPC network topologies.
A collective communication pattern is commonly described as a Directed Acyclic Graph (DAG) in which a set of PEs, represented as vertices, resolve data dependencies through communication along the edges. Our approach improves latency hiding in collective communication through partial aggregation. Based on the mathematical properties of binary operations and homomorphisms, we expose data parallelism in the respective DAG to overlap computation with communication. The proposed concepts are implemented and evaluated with a subset of collective primitives in the Message Passing Interface (MPI), an established communication standard in scientific computing. An experimental analysis with communication-bound microbenchmarks shows considerable performance benefits for the evaluated collective primitives. A detailed case study with a large-scale distributed sort algorithm demonstrates how partial aggregation significantly improves performance in data-intensive scenarios. Besides better latency hiding capabilities with collective communication primitives, our approach enables further optimizations of their implementations within MPI libraries.
The many asynchronous programming models actively studied in the HPC community benefit from partial aggregation in collective communication patterns. Future work can use partial aggregation to improve the interaction of MPI collectives with accelerator architectures and to design more efficient communication algorithms.
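The core idea of partial aggregation, folding each contribution of an associative reduction into an accumulator as it arrives rather than waiting for all inputs, can be sketched in a few lines; the arrival order, the operator, and the work hook below are illustrative assumptions.

```python
# Sketch of partial aggregation for an associative reduction: instead of
# waiting until every contribution has arrived, fold each message into the
# accumulator as soon as it is available, leaving the core free to overlap
# other work between arrivals. Arrival order and operator are illustrative.

import operator
from functools import reduce

def eager_reduce(arrivals, op, identity, do_useful_work):
    """arrivals yields partial contributions in whatever order they land."""
    acc = identity
    for message in arrivals:
        acc = op(acc, message)   # partial aggregation: O(1) state, any arrival order
        do_useful_work()         # computation overlapped between messages
    return acc

contributions = iter([3, 1, 4, 1, 5])          # out-of-order network arrivals
total = eager_reduce(contributions, operator.add, 0, lambda: None)
assert total == reduce(operator.add, [3, 1, 4, 1, 5])
print(total)  # 14
```

Associativity (and commutativity, for out-of-order arrival) of the operator is what licenses this reordering; it is the same algebraic property the abstract above appeals to for exposing data parallelism in the DAG.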
An improved generalization of mesh-connected computers with multiple buses
Mesh-connected computers (MCCs) are an important class of parallel architectures owing to their simple and regular interconnections. However, their performance is restricted by their large diameter. Various augmenting mechanisms have been proposed to enhance the communication efficiency of MCCs. One major approach is to add nonconfigurable buses for improved broadcasting. A typical example is the mesh-connected computer with multiple buses (MMB). We propose a new class of generalized MMBs, the improved generalized MMBs (IMMBs). We compare IMMBs with MMBs and with a class of previously proposed generalized MMBs (GMMBs). We show the power of IMMBs by considering semigroup and prefix computations. Specifically, as our main result we show that for any constant 0 < ε < 1, one can construct an N^(1/2) × N^(1/2) square IMMB on which semigroup and prefix computations on N operands can be carried out in O(N^ε) time, while maintaining O(1) broadcasting time. Compared with the previous best complexities of O(N^(1/8)) and O(N^(1/16)), achieved on a rectangular MMB and GMMB, respectively, for the same computations, our results show that IMMBs are more powerful than MMBs and GMMBs.
Yi Pan; Zheng, S.Q.; Keqin Li; Hong Shen
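For reference, the prefix computation the IMMB accelerates has simple sequential semantics: given N operands and an associative operator, produce every running partial result. A minimal sketch, with addition as an assumed example of the semigroup operation:

```python
# Semantics of the prefix computation discussed above: given operands
# x1..xN and an associative operator, produce x1, x1+x2, ..., x1+...+xN.
# The operator (addition) is an illustrative choice of semigroup operation.

def prefix(xs, op):
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

print(prefix([2, 7, 1, 8], lambda a, b: a + b))  # [2, 9, 10, 18]
# the final element, 18, is also the semigroup (reduction) result
```

The architectural question in the abstract is how fast this can run in parallel: buses provide O(1) broadcast, and the IMMB layout brings the N-operand computation down to O(N^ε) time.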
An Algorithmic Taxonomy of Production System Machines
This paper presents a survey of computer architectures designed to execute production systems. After a brief description of production systems and production system languages, the paper summarizes match algorithms, particularly the Rete algorithm, and outlines suggested parallelizations. Most parallel production system algorithms have as their unit of sequential computation a single production's left-hand side, activations of a single Rete node, a single activation of a Rete node, or a single comparison in a Rete node. The paper discusses a number of proposed production system machine architectures in terms of the parallel and sequential computations performed in the algorithms suggested for each machine. A taxonomy of parallel production system algorithms, describing in detail the distribution and replication of data and computations, concludes the paper.
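As a toy illustration of the match phase discussed above (alpha-style condition tests only, not a full Rete network with joins and shared state across productions), the sketch below collects production activations; the rules and facts are assumptions.

```python
# Toy sketch of the match phase in a production system: each production's
# left-hand side is a condition over working-memory elements (here, dicts).
# A real Rete network would cache and share these tests across productions;
# the rules and facts below are illustrative.

def matches(condition, fact):
    return all(fact.get(k) == v for k, v in condition.items())

def match_phase(productions, working_memory):
    """Return (rule name, fact) activations for single-condition rules."""
    return [(name, fact)
            for name, condition in productions
            for fact in working_memory
            if matches(condition, fact)]

rules = [("stop-line", {"color": "red", "kind": "light"})]
facts = [{"kind": "light", "color": "red"}, {"kind": "light", "color": "green"}]
print(match_phase(rules, facts))  # [('stop-line', {'kind': 'light', 'color': 'red'})]
```

The granularities surveyed in the paper correspond to slicing this loop nest at different points: per production, per node, per activation, or per comparison.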
A bibliography on parallel and vector numerical algorithms
This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming languages, and other topics of interest to scientific computing. Certain conference proceedings and anthologies that have been published in book form are also listed.