Recursive Algorithms for Distributed Forests of Octrees
The forest-of-octrees approach to parallel adaptive mesh refinement and
coarsening (AMR) has recently been demonstrated in the context of a number of
large-scale PDE-based applications. Although linear octrees, which store only
leaf octants, have an underlying tree structure by definition, this structure is
rarely exploited in previously published mesh-related algorithms. This is because the
branches are not explicitly stored, and because the topological relationships
in meshes, such as the adjacency between cells, introduce dependencies that do
not respect the octree hierarchy. In this work we combine hierarchical and
topological relationships between octree branches to design efficient recursive
algorithms.
We present three important algorithms with recursive implementations. The
first is a parallel search for leaves matching any of a set of multiple search
criteria. The second is a ghost layer construction algorithm that handles
arbitrarily refined octrees, which are not covered by previous algorithms that
require a 2:1 balance condition between neighboring leaves. The third is a universal
mesh topology iterator. This iterator visits every cell in a domain partition,
as well as every interface (face, edge and corner) between these cells. The
iterator calculates the local topological information for every interface that
it visits, taking into account the nonconforming interfaces that increase the
complexity of describing the local topology. To demonstrate the utility of the
topology iterator, we use it to compute the numbering and encoding of
higher-order nodal basis functions.
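As a rough illustration of the first of these algorithms, consider a recursive
point-location search over a linear octree: only the leaves are stored, the
branches are recreated on the fly while descending, and each search criterion is
a predicate that can prune whole subtrees. The Python sketch below is our
illustration under assumed conventions, not the paper's implementation; the
(level, x, y, z) octant encoding and all helper names are hypothetical.
```python
def children(octant):
    """Enumerate the eight children of an octant keyed as (level, x, y, z)."""
    level, x, y, z = octant
    return [(level + 1, 2 * x + dx, 2 * y + dy, 2 * z + dz)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

def search_leaves(octant, leaves, criteria):
    """Recursively visit the implicit branches of a linear octree (only the
    set `leaves` is stored) and report which criteria each leaf matches."""
    # Pruning step: keep only the criteria whose predicate admits this subtree.
    active = {name: pred for name, pred in criteria.items() if pred(octant)}
    if not active:
        return                      # every criterion pruned this branch
    if octant in leaves:
        for name in active:
            yield name, octant      # a leaf matching this criterion
        return
    for child in children(octant):  # descend into the (implicit) branch
        yield from search_leaves(child, leaves, active)

def contains(point):
    """Criterion: does an octant's cube contain the given unit-cube point?"""
    def pred(octant):
        level, *coords = octant
        scale = 1 << level          # 2**level cells per axis at this level
        return all(c <= p * scale < c + 1 for c, p in zip(coords, point))
    return pred

root = (0, 0, 0, 0)
leaves = set(children(root))        # a root refined exactly once: 8 leaves
print(list(search_leaves(root, leaves, {"pt": contains((0.75, 0.25, 0.25))})))
# -> [('pt', (1, 1, 0, 0))]
```
Because the criteria dictionary shrinks as the recursion descends, a subtree is
visited only while at least one criterion still admits it, which is the source
of the efficiency the abstract describes.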
We analyze the complexity of the new recursive algorithms theoretically, and
assess their performance, both in terms of single-processor efficiency and in
terms of parallel scalability, demonstrating good weak and strong scaling up to
458k cores of the JUQUEEN supercomputer.
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects for middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, MTC applications are by definition structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
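To make the MTC pattern concrete, here is a toy dispatcher (our illustration,
not taken from the report) that runs a graph of short, discrete tasks as soon
as their explicit input dependencies are satisfied; the name `run_dag` and the
thread-pool backend are illustrative assumptions, and the dependency graph is
assumed to be acyclic.
```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dag(tasks, deps, workers=8):
    """Dispatch each task once the results of its named inputs are available.
    `tasks` maps a name to a callable; `deps` maps a name to its input names.
    Assumes the dependency graph is acyclic."""
    results, running, pending = {}, {}, set(tasks)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pending or running:
            # Dispatch every task whose inputs have all completed.
            for name in [t for t in pending
                         if all(d in results for d in deps.get(t, []))]:
                pending.discard(name)
                args = [results[d] for d in deps.get(name, [])]
                running[pool.submit(tasks[name], *args)] = name
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for future in done:
                results[running.pop(future)] = future.result()
    return results

tasks = {"a": lambda: 1, "b": lambda: 2, "c": lambda x, y: x + y}
deps = {"c": ["a", "b"]}     # "a" and "b" run concurrently, then "c"
print(run_dag(tasks, deps))  # {'a': 1, 'b': 2, 'c': 3} (key order may vary)
```
Minimizing the per-task overhead of exactly this dispatch step, and supporting
the communication it implies, is the engineering constraint the report
attributes to MTC workloads.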
Dynamic Control Flow in Large-Scale Machine Learning
Many recent machine learning models rely on fine-grained dynamic control flow
for training and inference. In particular, models based on recurrent neural
networks and on reinforcement learning depend on recurrence relations,
data-dependent conditional execution, and other features that call for dynamic
control flow. These applications benefit from the ability to make rapid
control-flow decisions across a set of computing devices in a distributed
system. For performance, scalability, and expressiveness, a machine learning
system must support dynamic control flow in distributed and heterogeneous
environments.
This paper presents a programming model for distributed machine learning that
supports dynamic control flow. We describe the design of the programming model,
and its implementation in TensorFlow, a distributed machine learning system.
Our approach extends the use of dataflow graphs to represent machine learning
models, offering several distinctive features. First, the branches of
conditionals and bodies of loops can be partitioned across many machines to run
on a set of heterogeneous devices, including CPUs, GPUs, and custom ASICs.
Second, programs written in our model support automatic differentiation and
distributed gradient computations, which are necessary for training machine
learning models that use control flow. Third, our choice of non-strict
semantics enables multiple loop iterations to execute in parallel across
machines, and to overlap compute and I/O operations.
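As a small illustration of in-graph dynamic control flow, the sketch below (our
example, not one from the paper) expresses a loop containing a data-dependent
branch as part of a single dataflow graph, using TensorFlow's public tf.cond
and tf.while_loop operators; it assumes the TensorFlow 2.x API, which postdates
the paper's presentation.
```python
import tensorflow as tf

@tf.function  # trace the Python below into a single dataflow graph
def clipped_power(x, n):
    def body(i, acc):
        # Data-dependent branch inside the loop: rescale if acc grows large.
        acc = tf.cond(tf.abs(acc) > 1.0e6,
                      lambda: acc / 2.0,
                      lambda: acc * x)
        return i + 1, acc

    # The loop condition and body become graph operators, so iterations can
    # be partitioned across devices and overlapped under non-strict semantics.
    _, result = tf.while_loop(lambda i, acc: i < n, body,
                              (tf.constant(0), tf.ones_like(x)))
    return result

print(clipped_power(tf.constant(3.0), tf.constant(10)))
# tf.Tensor(59049.0, shape=(), dtype=float32)
```
Because both the condition and the branch bodies are graph nodes rather than
host-side Python, the runtime is free to place them on different devices and to
differentiate through them.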
We have done our work in the context of TensorFlow, and it has been used
extensively in research and production. We evaluate it using several real-world
applications, and demonstrate its performance and scalability.
Towards an abstract parallel branch and bound machine
Many (parallel) branch and bound algorithms look very different from each other at first
glance. They exploit, however, the same underlying computational model. This observation
can be used to define branch and bound algorithms in terms of a set of basic rules that
are applied in a specific (predefined) order.
In the sequential case, the specification of Mitten's rules turns out to be sufficient for
the development of branch and bound algorithms. In the parallel case, the situation is a
bit more complicated. We have to consider extra parameters such as work distribution and
knowledge sharing. Here, the implementation of parallel branch and bound algorithms can be
seen as a tuning of the parameters combined with the specification of Mitten's rules.
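To fix ideas, here is a hypothetical sequential skeleton in the spirit of such
a rule-based formulation: the branching, bounding, and elimination rules are
supplied as callables and applied in a fixed order. The rule names and the
best-bound-first selection rule are our illustrative choices, not a
specification taken from the literature.
```python
import heapq

def branch_and_bound(root, branch, bound, value, is_leaf):
    """Maximize `value` over leaves reachable from `root` via `branch`.
    `bound` must be an optimistic (upper-bound) estimate for a subproblem."""
    best, incumbent = float("-inf"), None
    counter = 0                               # tie-breaker for the heap
    heap = [(-bound(root), counter, root)]    # selection rule: best bound first
    while heap:
        neg_bound, _, node = heapq.heappop(heap)
        if -neg_bound <= best:
            continue                          # elimination rule: prune by bound
        if is_leaf(node):                     # a complete solution
            if value(node) > best:
                best, incumbent = value(node), node
        else:
            for child in branch(node):        # branching rule
                if bound(child) > best:
                    counter += 1
                    heapq.heappush(heap, (-bound(child), counter, child))
    return best, incumbent
```
In a parallel setting, the extra parameters mentioned above then determine how
the pool of open subproblems (the heap here) is partitioned across processors
and how the incumbent value is shared between them.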
These observations lead to generic systems, where the user provides the specifications of
the problem to be solved, and the system generates a branch and bound algorithm running on
a specific architecture. We will discuss some proposals that appeared in the literature.
Next, we raise the question of whether the proposed models are flexible enough. We analyze
the design decisions to be taken when implementing a parallel branch and bound algorithm.
This analysis results in a classification model, which is validated by checking whether it
captures existing branch and bound implementations.
Finally, we return to the issue of flexibility of existing systems, and propose to add an
abstract machine model to the generic framework. The model defines a virtual parallel
branch and bound machine in terms of which the design decisions can be expressed. We
will outline some ideas on which the machine may be based, and present directions for
future work.