A parallel priority queue with fast updates for GPU architectures
The high computational throughput of modern graphics processing units (GPUs)
makes them the de facto architecture for high-performance computing
applications. However, to achieve peak performance, GPUs require highly
parallel workloads, as well as memory access patterns that exhibit good
locality of reference. As a result, many state-of-the-art algorithms and data
structures designed for GPUs sacrifice work-optimality to achieve the necessary
parallelism. Furthermore, some abstract data types are avoided completely due
to there being no corresponding data structure that performs well on the GPU.
One such abstract data type is the priority queue. Many well-known algorithms
rely on priority queue operations as a building block. While various priority
queue structures have been developed that are parallel, cache-aware, or
cache-oblivious, none has been shown to be efficient on GPUs. In this paper, we
present the parBucketHeap, a parallel, cache-efficient data structure designed
for modern GPU architectures that supports standard priority queue operations,
as well as bulk update. We analyze the structure in several well-known
computational models and show that it provides both optimal parallelism and is
cache-efficient. We implement the parBucketHeap and, using it, we solve the
single-source shortest path (SSSP) problem. Experimental results indicate that,
for sufficiently large, dense graphs with high diameter, we outperform current
state-of-the-art SSSP algorithms on the GPU by up to a factor of 5. Unlike
existing GPU SSSP algorithms, our approach is work-optimal and places
significantly less load on the GPU, reducing power consumption.
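To make the role of the priority queue in SSSP concrete, here is a minimal sequential sketch of priority-queue-driven Dijkstra with lazy deletion. It is not the parBucketHeap itself; under the paper's approach, a GPU structure with bulk update would take the place of std::priority_queue and consume each round's relaxations as a single batched operation.

```cpp
// Minimal sequential sketch of priority-queue-driven SSSP (Dijkstra with lazy
// deletion). Illustrative only: the paper's parBucketHeap would replace
// std::priority_queue and ingest each round's relaxations as one bulk update.
#include <cstdint>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; uint32_t w; };

std::vector<uint64_t> sssp(const std::vector<std::vector<Edge>>& adj, int src) {
    const uint64_t INF = std::numeric_limits<uint64_t>::max();
    std::vector<uint64_t> dist(adj.size(), INF);
    using Item = std::pair<uint64_t, int>;  // (tentative distance, vertex)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;

    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d != dist[u]) continue;  // stale entry, skip (lazy deletion)
        // The relaxations below form the batch that a bulk-update priority
        // queue could process in one parallel operation per round.
        for (const Edge& e : adj[u]) {
            if (d + e.w < dist[e.to]) {
                dist[e.to] = d + e.w;
                pq.push({dist[e.to], e.to});
            }
        }
    }
    return dist;
}
```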
Gunrock: A High-Performance Graph Processing Library on the GPU
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs have been two
significant challenges for developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We evaluate Gunrock on five key graph
primitives and show that Gunrock has on average at least an order of magnitude
speedup over Boost and PowerGraph, comparable performance to the fastest GPU
hardwired primitives, and better performance than any other GPU high-level
graph library.
Comment: 14 pages, accepted by PPoPP'16 (removed the text repetition in the previous version v5).
Gunrock: GPU Graph Analytics
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs, have presented two
significant challenges to developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We characterize the performance of
various optimization strategies and evaluate Gunrock's overall performance on
different GPU architectures on a wide range of graph primitives that span from
traversal-based algorithms and ranking algorithms, to triangle counting and
bipartite-graph-based algorithms. The results show that on a single GPU,
Gunrock has on average at least an order of magnitude speedup over Boost and
PowerGraph, comparable performance to the fastest GPU hardwired primitives and
CPU shared-memory graph libraries such as Ligra and Galois, and better
performance than any other GPU high-level graph library.
Comment: 52 pages, invited paper to ACM Transactions on Parallel Computing (TOPC), an extended version of the PPoPP'16 paper "Gunrock: A High-Performance Graph Processing Library on the GPU".
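As a rough illustration of the frontier-centric, bulk-synchronous model described above, the following CPU sketch performs a BFS-style "advance" over a CSR graph: every edge leaving the input frontier is visited, and newly reached vertices form the output frontier. This is not Gunrock's actual API or implementation, only the data-centric pattern the abstract describes.

```cpp
// Generic frontier-centric "advance" step in the bulk-synchronous style: an
// input frontier of vertices produces an output frontier of newly reached
// vertices. CSR layout and names are illustrative assumptions, not Gunrock's API.
#include <vector>

struct CsrGraph {
    std::vector<int> row_offsets;  // size |V| + 1
    std::vector<int> col_indices;  // size |E|
};

std::vector<int> advance(const CsrGraph& g,
                         const std::vector<int>& frontier,
                         std::vector<char>& visited) {
    std::vector<int> next;
    for (int u : frontier) {  // on a GPU, parallel over the frontier's edges
        for (int e = g.row_offsets[u]; e < g.row_offsets[u + 1]; ++e) {
            int v = g.col_indices[e];
            if (!visited[v]) {
                visited[v] = 1;
                next.push_back(v);
            }
        }
    }
    return next;  // becomes the next iteration's input frontier
}
```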
Optimisation and parallelism in synchronous digital circuit simulators
Digital circuit simulation often requires a large amount of computation, resulting in long run times. We consider several techniques for optimising a brute-force synchronous circuit simulator: an algorithm using an event queue that avoids recalculating quiescent parts of the circuit, a marking algorithm that is similar to the event queue but avoids a central data structure, and a lazy algorithm that avoids calculating signals whose values are not needed. Two target architectures for the simulator are used: a sequential CPU and a parallel GPGPU. The interactions between the different optimisations are discussed, and the performance is measured while the algorithms simulate a simple but realistic scalable circuit.
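A minimal sketch of the event-queue idea, assuming a simple gate-level netlist (the gate types, fanout lists, and boolean signal values are illustrative assumptions, not the paper's simulator): a gate is re-evaluated only when one of its inputs changes, so quiescent parts of the circuit cost nothing.

```cpp
// Event-queue gate evaluation sketch: only gates whose inputs changed are
// re-evaluated; unchanged (quiescent) parts of the circuit are never touched.
#include <queue>
#include <vector>

enum class Op { And, Or, Not };

struct Gate {
    Op op;
    int in0, in1;             // indices of driving signals (in1 unused for Not)
    std::vector<int> fanout;  // gates that read this gate's output
};

void simulate(const std::vector<Gate>& gates, std::vector<bool>& value,
              const std::vector<int>& initially_dirty) {
    std::queue<int> events;
    for (int g : initially_dirty) events.push(g);

    while (!events.empty()) {
        int g = events.front();
        events.pop();
        const Gate& gate = gates[g];
        bool out = false;
        switch (gate.op) {
            case Op::And: out = value[gate.in0] && value[gate.in1]; break;
            case Op::Or:  out = value[gate.in0] || value[gate.in1]; break;
            case Op::Not: out = !value[gate.in0]; break;
        }
        if (out != value[g]) {  // output changed: schedule the fanout gates
            value[g] = out;
            for (int f : gate.fanout) events.push(f);
        }
    }
}
```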
A minimalistic approach for fast computation of geodesic distances on triangular meshes
The computation of geodesic distances is an important research topic in
Geometry Processing and 3D Shape Analysis as it is a basic component of many
methods used in these areas. In this work, we present a minimalistic parallel
algorithm based on front propagation to compute approximate geodesic distances
on meshes. Our method is practical and simple to implement and does not require
any heavy pre-processing. The convergence of our algorithm depends on the
number of discrete level sets around the source points from which distance
information propagates. To appropriately implement our method on GPUs taking
into account memory coalescence problems, we take advantage of a graph
representation based on a breadth-first search traversal that works
harmoniously with our parallel front propagation approach. We report
experiments that show how our method scales with the size of the problem. We
compare the mean error and processing time of our method with those obtained
by other methods. Our method produces results in
competitive times with almost the same accuracy, especially for large meshes.
We also demonstrate its use for solving two classical geometry processing
problems: the regular sampling problem and the Voronoi tessellation on meshes.
Comment: Preprint submitted to Computers & Graphics.
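As a rough sketch of front propagation from source points, the following treats the mesh simply as an edge-weighted graph over its vertices and expands one discrete level set per iteration; the paper's method and its GPU-oriented, coalescing-friendly graph layout are more involved than this illustrative CPU version.

```cpp
// Level-set-style front propagation for approximate geodesic distances,
// treating the mesh as an edge-weighted graph over its vertices. A sketch of
// the general propagation pattern only, not the paper's GPU implementation.
#include <limits>
#include <vector>

struct Neighbor { int v; float len; };  // adjacent vertex and edge length

std::vector<float> geodesic_approx(const std::vector<std::vector<Neighbor>>& adj,
                                   const std::vector<int>& sources) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> dist(adj.size(), INF);
    std::vector<int> front;
    for (int s : sources) { dist[s] = 0.0f; front.push_back(s); }

    // Each iteration expands one discrete "level set" around the sources;
    // on a GPU, all vertices in the current front are processed in parallel.
    while (!front.empty()) {
        std::vector<int> next;
        for (int u : front) {
            for (const Neighbor& n : adj[u]) {
                float cand = dist[u] + n.len;
                if (cand < dist[n.v]) {
                    dist[n.v] = cand;
                    next.push_back(n.v);
                }
            }
        }
        front.swap(next);
    }
    return dist;
}
```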
Lock-free Concurrent Data Structures
Concurrent data structures are the data-sharing side of parallel programming.
Data structures give a program the means to store data and provide operations
for accessing and manipulating that data. These operations are implemented
through algorithms that have to be efficient. In the sequential setting, data
structures are crucially important for the performance of the respective
computation. In the parallel programming setting, their importance becomes even
greater because of the increased use of data and resource sharing for
exploiting parallelism.
The first and main goal of this chapter is to provide sufficient background
and intuition to help the interested reader navigate the complex research
area of lock-free data structures. The second goal is to offer the programmer
enough familiarity with the subject to use truly concurrent methods.
Comment: To appear in "Programming Multi-core and Many-core Computing Systems", eds. S. Pllana and F. Xhafa, Wiley Series on Parallel and Distributed Computing.
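As a concrete instance of the kind of structure the chapter surveys, here is the classic lock-free (Treiber) stack in C++: push and pop synchronize through a single atomic head pointer and compare-and-swap, with no locks. Nodes are intentionally leaked to sidestep the memory-reclamation and ABA issues that a production implementation, and the chapter, must address.

```cpp
// Classic lock-free (Treiber) stack: all synchronization goes through one
// atomic head pointer updated with compare-and-swap. Nodes are never freed
// here, deliberately avoiding the reclamation/ABA problems of real designs.
#include <atomic>
#include <optional>

template <typename T>
class LockFreeStack {
    struct Node { T value; Node* next; };
    std::atomic<Node*> head{nullptr};

public:
    void push(T v) {
        Node* n = new Node{std::move(v), head.load(std::memory_order_relaxed)};
        // Retry until our new node becomes the head; on failure, n->next is
        // refreshed with the current head by compare_exchange_weak.
        while (!head.compare_exchange_weak(n->next, n,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {}
    }

    std::optional<T> pop() {
        Node* n = head.load(std::memory_order_acquire);
        // Retry until we unlink the current head (or observe an empty stack).
        while (n && !head.compare_exchange_weak(n, n->next,
                                                std::memory_order_acquire,
                                                std::memory_order_relaxed)) {}
        if (!n) return std::nullopt;
        return std::move(n->value);  // node intentionally not freed (see above)
    }
};
```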