The Adaptive Priority Queue with Elimination and Combining
Priority queues are fundamental abstract data structures, often used to
manage limited resources in parallel programming. Several proposed parallel
priority queue implementations are based on skiplists, harnessing the potential
for parallelism of the add() operations. In addition, methods such as Flat
Combining have been proposed to reduce contention by batching together multiple
operations to be executed by a single thread. While this technique can decrease
lock-switching overhead and the number of pointer changes required by the
removeMin() operations in the priority queue, it can also create a sequential
bottleneck and limit parallelism, especially for non-conflicting add()
operations.
In this paper, we describe a novel priority queue design, harnessing the
scalability of parallel insertions in conjunction with the efficiency of
batched removals. Moreover, we present a new elimination algorithm suitable for
a priority queue, which further increases concurrency on balanced workloads
with similar numbers of add() and removeMin() operations. We implement and
evaluate our design using a variety of techniques including locking, atomic
operations, hardware transactional memory, as well as employing adaptive
heuristics given the workload.
Comment: Accepted at DISC'14 - this is the full version with appendices, including more algorithm details.
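The elimination idea in this design pairs an add() whose value is at least as high-priority as the current minimum with a pending removeMin(), so neither operation has to touch the shared skiplist. A minimal single-threaded sketch of that matching rule (class and method names are hypothetical; the paper's lock-free exchange protocol is far more involved):

```python
import heapq

class EliminationPQ:
    """Simplified, sequential sketch of elimination for a priority queue:
    an add(v) whose value would become the new minimum can be handed
    directly to a waiting removeMin, bypassing the shared structure."""

    def __init__(self):
        self._heap = []
        self._pending_removes = 0   # stand-in for waiting removeMin ops

    def register_remove(self):
        # In the real design, a removeMin that finds no ready batch
        # publishes itself and waits; here we just count waiters.
        self._pending_removes += 1

    def add(self, v):
        # Elimination: hand v straight to a waiting removeMin if it
        # would become the new minimum anyway.
        if self._pending_removes and (not self._heap or v <= self._heap[0]):
            self._pending_removes -= 1
            return ('eliminated', v)
        heapq.heappush(self._heap, v)
        return ('inserted', v)

    def remove_min(self):
        return heapq.heappop(self._heap)
```

The point of the matching rule is correctness: handing over a value no larger than the current minimum is linearizable, because that value is exactly what removeMin() would have returned.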
Non-blocking Priority Queue based on Skiplists with Relaxed Semantics
Priority queues are data structures that store information in an orderly fashion. They are of tremendous importance because they are an integral part of many applications, like Dijkstra’s shortest path algorithm, MST algorithms, priority schedulers, and so on.
Since priority queues by nature have high contention on the delete_min operation, the design of an efficient priority queue should involve an intelligent choice of the underlying data structure as well as relaxation bounds on its semantics. Lock-free data structures provide higher scalability and stronger progress guarantees than lock-based data structures, which is another factor to be considered in the priority queue design.
We present a relaxed non-blocking priority queue based on skiplists. We address all the design issues mentioned above in our priority queue. Use of skiplists allows multiple threads to concurrently access different parts of the skiplist quickly, whereas relaxing the priority queue delete_min operation distributes contention over the skiplist instead of just at the front. Furthermore, a non-blocking implementation guarantees that the system will make progress even when some process fails.
Our priority queue is internally composed of several priority queues: one local to each thread and one shared priority queue common to all threads. Each thread selects the better value from its local priority queue and the shared priority queue and returns that value. In case a thread is unable to delete an item, it tries to spy items from other threads' local priority queues.
We experimentally and theoretically show the correctness of our data structure. We also compare the performance of our data structure with other variations, such as priority queues based on coarse-grained skiplists, for both relaxed and non-relaxed semantics.
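A sequential sketch of the composition described above — per-thread local heaps plus one shared heap, with spying as the fallback — might look like the following (names and the placement policy are illustrative, not the paper's):

```python
import heapq
import random

class RelaxedPQ:
    """Toy model of a relaxed priority queue: each thread owns a local
    heap and all threads share one common heap. delete_min takes the
    better of the two fronts; when both are empty, the thread "spies"
    (steals) from another thread's local heap."""

    def __init__(self, num_threads):
        self.shared = []
        self.local = [[] for _ in range(num_threads)]

    def insert(self, tid, v):
        # Illustrative placement policy: alternate randomly between the
        # caller's local heap and the shared heap to spread contention.
        target = self.local[tid] if random.random() < 0.5 else self.shared
        heapq.heappush(target, v)

    def delete_min(self, tid):
        candidates = [h for h in (self.local[tid], self.shared) if h]
        if not candidates:
            # Spy: take the minimum of some other non-empty local heap.
            others = [h for h in self.local if h]
            if not others:
                return None
            return heapq.heappop(others[0])
        best = min(candidates, key=lambda h: h[0])
        return heapq.heappop(best)
```

Relaxation shows up in the spying path: a thread may return an element that is not the global minimum, which is exactly the bounded-inaccuracy trade the abstract describes in exchange for distributing contention.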
Using Nesting to Push the Limits of Transactional Data Structure Libraries
Transactional data structure libraries (TDSL) combine the ease-of-programming of transactions with the high performance and scalability of custom-tailored concurrent data structures. They can be very efficient thanks to their ability to exploit data structure semantics in order to reduce overhead, aborts, and wasted work compared to general-purpose software transactional memory. However, TDSLs were not previously used for complex use-cases involving long transactions and a variety of data structures.
In this paper, we boost the performance and usability of a TDSL, towards allowing it to support complex applications. A key idea is nesting. Nested transactions create checkpoints within a longer transaction, so as to limit the scope of abort, without changing the semantics of the original transaction. We build a Java TDSL with built-in support for nested transactions over a number of data structures. We conduct a case study of a complex network intrusion detection system that invests a significant amount of work to process each packet. Our study shows that our library outperforms publicly available STMs twofold without nesting, and by up to 16x when nesting is used.
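The checkpointing idea behind nested transactions can be sketched with a simple undo log: beginning a nested transaction records a mark in the parent's log, and aborting it rolls back only past that mark. A toy Python illustration (the actual library is a Java TDSL tracking per-data-structure read and write sets, not a flat undo log):

```python
class NestedTxn:
    """Toy undo-log model of nesting: a nested transaction is a
    checkpoint in its parent's log, so aborting the child undoes only
    the child's writes instead of the whole transaction."""

    def __init__(self, store):
        self.store = store          # shared map being modified
        self.undo = []              # list of (key, old_value) pairs
        self.checkpoints = []       # undo-log lengths at nesting points

    def write(self, key, value):
        self.undo.append((key, self.store.get(key)))
        self.store[key] = value

    def begin_nested(self):
        self.checkpoints.append(len(self.undo))

    def commit_nested(self):
        # Child's writes simply remain in the parent's log.
        self.checkpoints.pop()

    def abort_nested(self):
        mark = self.checkpoints.pop()
        while len(self.undo) > mark:    # roll back only the child's part
            key, old = self.undo.pop()
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
```

This is why nesting pays off for long transactions: a conflict detected inside the nested scope costs only the work since the checkpoint, not the entire packet-processing transaction.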
VBR: Version Based Reclamation
Safe lock-free memory reclamation is a difficult problem. Existing solutions follow three basic methods (or their combinations): epoch-based reclamation, hazard pointers, and optimistic reclamation. Epoch-based methods are fast, but do not guarantee lock-freedom. Hazard pointer solutions are lock-free but typically do not provide high performance. Optimistic methods are lock-free and fast, but previous optimistic methods did not go all the way: while reads were executed optimistically, writes were protected by hazard pointers. In this work we present a new reclamation scheme called version based reclamation (VBR), which provides a fully optimistic solution to lock-free memory reclamation, obtaining lock-freedom and high efficiency. Speculative execution is known as a fundamental tool for improving performance in various areas of computer science, and indeed evaluation with a lock-free linked-list, hash-table, and skip-list shows that VBR outperforms existing state-of-the-art solutions.
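The core version check can be illustrated in a few lines: recycled nodes carry a version number that is bumped on reclamation, and an optimistic reader validates the version after reading. A simplified sketch (VBR itself ties versions to a global epoch and protects writes as well; the names here are hypothetical):

```python
class VersionedNode:
    """Sketch of the version-based idea: nodes are recycled rather than
    freed, and each carries a version bumped on reclamation. A reader
    snapshots the version, reads optimistically, then re-checks the
    version; a mismatch means the node was reclaimed underneath it and
    the read must retry."""

    def __init__(self):
        self.version = 0
        self.value = None

    def reclaim_and_reuse(self, new_value):
        self.version += 1          # invalidates in-flight optimistic reads
        self.value = new_value

def checked_read(node):
    v = node.version
    value = node.value             # optimistic, unprotected read
    ok = (node.version == v)       # validation step; retry if False
    return value, ok
```

Because validation replaces per-read announcement (as hazard pointers would require), readers stay cheap, which is the source of the scheme's efficiency claim.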
Concurrent Access Algorithms for Different Data Structures: A Research Review
Algorithms for concurrent data structures have gained attention in recent years as multi-core processors have become ubiquitous. Several features of shared-memory multiprocessors make concurrent data structures significantly more difficult to design and to verify as correct than their sequential counterparts. The primary source of this additional difficulty is concurrency. This paper provides an overview of some concurrent access algorithms for different data structures.
The Power of Choice in Priority Scheduling
Consider the following random process: we are given n queues, into which
elements of increasing labels are inserted uniformly at random. To remove an
element, we pick two queues at random, and remove the element of lower label
(higher priority) among the two. The cost of a removal is the rank of the
removed label among the labels still present in any of the queues, that is,
the distance from the optimal choice at each step. Variants of this strategy
are prevalent in state-of-the-art concurrent priority queue implementations.
Nonetheless, it is not known whether such implementations provide any rank
guarantees, even in a sequential model.
We answer this question, showing that this strategy provides surprisingly
strong guarantees: although the single-choice process, where we always insert
into and remove from a single randomly chosen queue, has degrading cost, going
to infinity as we increase the number of steps, in the two-choice process the
expected rank of a removed element is O(n), while the expected worst-case cost
is O(n log n). These bounds are tight, and hold irrespective of the number of
steps for which we run the process.
The argument is based on a new technical connection between "heavily loaded"
balls-into-bins processes and priority scheduling.
Our analytic results inspire a new concurrent priority queue implementation,
which improves upon the state of the art in terms of practical performance.
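The two-choice process described above is easy to simulate directly; the following sketch (with illustrative parameters, not those of the paper) measures the rank cost of each removal:

```python
import random

def two_choice_ranks(n_queues, n_removals, seed=0):
    """Simulate the process: elements with increasing labels are placed
    into uniformly random queues; each removal picks two distinct queues
    and removes the lower-labelled (higher-priority) front element.
    Returns the rank cost of each removal, where rank 1 is optimal."""
    rng = random.Random(seed)
    queues = [[] for _ in range(n_queues)]
    # Pre-fill: labels arrive in increasing order, so every queue stays
    # sorted and its front element is its minimum.
    for label in range(n_queues * 50):
        rng.choice(queues).append(label)
    ranks = []
    for _ in range(n_removals):
        a, b = rng.sample([q for q in queues if q], 2)
        chosen = min(a[0], b[0])
        # Rank of the chosen label among all labels still present.
        present = sorted(x for q in queues for x in q)
        ranks.append(present.index(chosen) + 1)
        (a if a[0] < b[0] else b).pop(0)
    return ranks
```

Running this with a growing number of removals for the single-choice variant (sampling one queue instead of two) makes the contrast in the abstract visible: the single-choice rank costs drift upward, while the two-choice costs stay bounded.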
A semantically aware transactional concurrency control for GPGPU computing
PhD ThesisThe increased parallel nature of the GPU a ords an opportunity for
the exploration of multi-thread computing. With the availability of
GPU has recently expanded into the area of general purpose program-
ming, a concurrency control is required to exploit parallelism as well
as maintaining the correctness of program. Transactional memory,
which is a generalised solution for concurrent con
ict, meanwhile allow
application programmers to develop concurrent code in a more intu-
itive manner, is superior to pessimistic concurrency control for general
use. The most important component in any transactional memory
technique is the policy to solve con
icts on shared data, namely the
contention management policy.
The work presented in this thesis aims to develop a software trans-
actional memory model which can solve both concurrent con
ict and
semantic con
ict at the same time for the GPU. The technique di ers
from existing CPU approaches on account of the di erent essential ex-
ecution paths and hardware basis, plus much more parallel resources
are available on the GPU. We demonstrate that both concurrent con-
icts and semantic con
icts can be resolved in a particular contention
management policy for the GPU, with a di erent application of locks
and priorities from the CPU.
The basic problem and a software transactional memory solution idea
is proposed rst. An implementation is then presented based on the
execution mode of this model. After that, we extend this system to re-
solve semantic con
ict at the same time. Results are provided nally,
which compare the performance of our solution with an established
GPU software transactional memory and a famous CPU transactional
memory, at varying levels of concurrent and semantic con
icts
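The kind of priority-based contention management the thesis discusses can be illustrated with a toy policy in which the lower-priority transaction loses a lock conflict and must retry (all names are illustrative; this is not the thesis's GPU implementation):

```python
class PriorityContentionManager:
    """Toy sketch of a priority-based contention management policy:
    when two transactions conflict on a lock, the lower-priority one
    is aborted (and would later retry) rather than blocking, so the
    higher-priority transaction always makes progress."""

    def __init__(self):
        self.owner = {}    # lock name -> (txn_id, priority)

    def acquire(self, lock, txn_id, priority):
        holder = self.owner.get(lock)
        if holder is None or holder[0] == txn_id:
            self.owner[lock] = (txn_id, priority)
            return 'acquired'
        if priority > holder[1]:
            # Conflict: abort the lower-priority holder, take the lock.
            self.owner[lock] = (txn_id, priority)
            return 'acquired-after-abort'
        return 'aborted'   # requester loses and should retry later
```

On a GPU, where thousands of transactions may conflict at once, such a deterministic winner rule matters more than on a CPU: it avoids livelock without the per-thread blocking that pessimistic locking would impose.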