
    Brief Announcement: Fast and Scalable Group Mutual Exclusion

    The group mutual exclusion (GME) problem is a generalization of the classical mutual exclusion problem in which every critical section is associated with a type or session. Critical sections belonging to the same session can execute concurrently, whereas critical sections belonging to different sessions must be executed serially. The well-known read-write mutual exclusion problem is a special case of the group mutual exclusion problem. In a shared memory system, locks based on traditional mutual exclusion or its variants are commonly used to manage contention among processes. In concurrent algorithms based on fine-grained synchronization, a single lock is used to protect access to a small number of shared objects (e.g., a lock for every tree node) so as to minimize the contention window. Evidently, a large number of shared objects in the system would translate into a large number of locks. Also, when fine-grained synchronization is used, most lock accesses are expected to be uncontended in practice. Most existing algorithms for solving the GME problem have high space-complexity per lock. Further, all algorithms except for one have high step-complexity in the uncontended case. This makes them unsuitable for use in concurrent algorithms based on fine-grained synchronization. In this work, we present a novel GME algorithm for an asynchronous shared-memory system that has O(1) space-complexity per GME lock when the system contains a large number of GME locks, as well as O(1) step-complexity when the system contains no conflicting requests.
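    The announcement does not include code, but the session semantics described above can be made concrete with a small sketch. The following C++ class (illustrative only; the GroupLock name is hypothetical, and this is not the paper's O(1) algorithm) admits any number of requests for the currently active session and makes requests for other sessions wait until the lock drains:

```cpp
// Minimal sketch of group mutual exclusion semantics, not the paper's
// algorithm: same-session requests run concurrently, different sessions
// are serialized. Session ids and the GroupLock name are illustrative.
#include <condition_variable>
#include <mutex>

class GroupLock {
    std::mutex m_;
    std::condition_variable cv_;
    int active_session_ = -1;   // -1 means no session is active
    int in_cs_ = 0;             // processes currently in the critical section

public:
    void enter(int session) {
        std::unique_lock<std::mutex> lk(m_);
        // Proceed once the lock is free or already serving this session.
        cv_.wait(lk, [&] { return in_cs_ == 0 || active_session_ == session; });
        active_session_ = session;
        ++in_cs_;
    }

    void exit() {
        std::lock_guard<std::mutex> lk(m_);
        if (--in_cs_ == 0) {
            active_session_ = -1;   // lock drained; any session may enter next
            cv_.notify_all();
        }
    }
};
```

    The read-write lock mentioned above is the special case in which all readers share one session and each writer uses its own. Achieving fairness across sessions and the O(1) space- and step-complexity claimed in the paper requires considerably more machinery than this sketch.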

    Synthesis of Parametric Programs using Genetic Programming and Model Checking

    Formal methods apply algorithms based on mathematical principles to enhance the reliability of systems. It would only be natural to try to progress from verification, model checking, or testing a system against its formal specification to constructing it automatically. Classical algorithmic synthesis theory provides interesting algorithms, but also alarmingly high complexity and undecidability results. The use of genetic programming, in combination with model checking and testing, provides a powerful heuristic to synthesize programs. The method is not completely automatic, as it is fine-tuned by a user who sets up the specification and parameters. It is also not guaranteed to always succeed and converge to a solution that satisfies all the required properties. However, we applied it successfully to quite nontrivial examples and managed to find solutions to hard programming challenges, as well as to improve and correct code. We describe here several versions of our method for synthesizing sequential and concurrent systems.
    Comment: In Proceedings INFINITY 2013, arXiv:1402.661
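    As a rough illustration of the loop described above (candidate programs are scored against the specification and evolved until one satisfies every property), here is a hedged C++ sketch. Candidate, checkProperties (standing in for a model checker or test harness), mutate, and crossover are hypothetical placeholders, not part of the authors' tool:

```cpp
// Generic genetic-programming synthesis loop guided by verification.
// All types and operators below are placeholder stubs for illustration.
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

struct Candidate {
    // Program representation (e.g., a syntax tree) elided for brevity.
    double fitness = 0.0;  // fraction of specification properties satisfied
};

// Stubs standing in for a model checker / test harness and the genetic
// operators; a real tool would implement these.
double checkProperties(const Candidate&) { return 0.0; }
Candidate mutate(Candidate c, std::mt19937&) { return c; }
Candidate crossover(const Candidate& a, const Candidate&, std::mt19937&) { return a; }

// Assumes a population of at least two candidates.
Candidate synthesize(std::vector<Candidate> population, int generations) {
    std::mt19937 rng(42);
    for (int g = 0; g < generations; ++g) {
        // Score every candidate against the formal specification.
        for (auto& c : population) c.fitness = checkProperties(c);

        // Selection: keep the fitter half, sorted best-first.
        std::sort(population.begin(), population.end(),
                  [](const Candidate& a, const Candidate& b) {
                      return a.fitness > b.fitness;
                  });
        if (population.front().fitness == 1.0)  // all properties satisfied
            return population.front();

        // Refill the weaker half with mutated crossovers of surviving parents.
        std::size_t half = population.size() / 2;
        std::uniform_int_distribution<std::size_t> pick(0, half - 1);
        for (std::size_t i = half; i < population.size(); ++i)
            population[i] = mutate(crossover(population[pick(rng)],
                                             population[pick(rng)], rng), rng);
    }
    return population.front();  // best candidate found; may be imperfect
}
```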

    Exclusion and Object Tracking in a Network of Processes

    This paper concerns two fundamental problems in distributed computing: mutual exclusion and mobile object tracking. For a variant of the mutual exclusion problem where the network topology is taken into account, all existing distributed solutions make use of tokens. It turns out that these token-based solutions for mutual exclusion can also be adapted for object tracking, as the token behaves very much like a mobile object. To handle objects with replication, we go further and consider the more general k-exclusion problem, which has not been as well studied in a network setting. A strong fairness property for k-exclusion requires that a process trying to enter the critical section will eventually succeed even if up to k-1 processes stay in the critical section indefinitely. We present a comparative survey of existing token-based mutual exclusion algorithms, which have provided much inspiration for later k-exclusion algorithms. We then propose two solutions to the k-exclusion problem, the second of which meets the strong fairness requirement. Fault-tolerance issues are also discussed, along with the suggestion of a third algorithm that is also strongly fair. The performance of the three algorithms is compared by simulation. Finally, we show how the various exclusion algorithms can be adapted for tracking mobile objects.
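    The algorithms in the paper are token-based and operate over a network. As a minimal shared-memory illustration of the k-exclusion property itself (at most k processes inside the critical section at once), consider the following sketch; the KExclusion name is hypothetical and the sketch makes no fairness claims:

```cpp
// Counting-semaphore-style sketch of k-exclusion: at most k threads may
// hold a slot at once. This only illustrates the interface semantics, not
// the paper's token-based network algorithms.
#include <atomic>
#include <thread>

class KExclusion {
    std::atomic<int> slots_;

public:
    explicit KExclusion(int k) : slots_(k) {}

    void enter() {
        for (;;) {
            int s = slots_.load(std::memory_order_relaxed);
            // Try to claim one of the k slots with compare-and-swap.
            if (s > 0 && slots_.compare_exchange_weak(
                             s, s - 1, std::memory_order_acquire))
                return;
            std::this_thread::yield();  // busy-wait; no fairness guarantee
        }
    }

    void exit() {
        slots_.fetch_add(1, std::memory_order_release);  // release a slot
    }
};
```

    Note that this busy-waiting sketch does not provide the strong fairness property discussed above; guaranteeing eventual entry even while up to k-1 processes stay in the critical section indefinitely requires queueing or token circulation of the kind the paper studies.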

    Efficient concurrent data structure access parallelism techniques for increasing scalability

    Multi-core processors have revolutionised the way data structures are designed by bringing parallelism to mainstream computing. Concurrent data structures are key to exploiting the hardware parallelism available in multi-core processors. However, some concurrent data structure abstractions are inherently sequential and incapable of harnessing the parallelism of multi-core processors. Designing and implementing concurrent data structures that harness hardware parallelism is challenging due to the requirements of correctness, efficiency, and practicability under various application constraints. In this thesis, our research contribution is towards improving concurrent data structure access parallelism to increase data structure performance. We propose new design frameworks that improve the access parallelism of existing concurrent data structure designs, as well as new concurrent data structure designs with significant performance improvements. To give an insight into the interplay between hardware and concurrent data structure access parallelism, we give a detailed analysis and model the performance scalability with varying parallelism.

    In the first part of the thesis, we focus on data structure semantic relaxation. By relaxing the semantics of a data structure, a bigger design space is unveiled, one that allows weaker synchronization and more useful parallelism. Investigating new data structure designs capable of trading semantics for better performance in a monotonic way is a major challenge in the area. We address this challenge algorithmically in this part of the thesis. We present an efficient, lock-free, concurrent data structure design framework for out-of-order semantic relaxation, and introduce a new two-dimensional algorithmic design that uses multiple instances of a given data structure to improve access parallelism.

    In the second part of the thesis, we propose an efficient priority queue that improves access parallelism by reducing the number of synchronization points per operation. Priority queues are fundamental abstract data types, often used to manage limited resources in parallel systems. Typical parallel priority queue implementations are based on heaps or skip lists, and in the recent literature skip lists have been shown to be the most efficient design choice for implementing priority queues. Though numerous intricate skip-list-based queue implementations have been proposed, their performance is constrained by the high number of global atomic updates per operation and the high memory consumption, both of which are proportional to the number of sub-lists in the queue. In this part of the thesis, we propose an alternative approach for designing lock-free linearizable priority queues that significantly improves memory efficiency and throughput by reducing the number of global atomic updates and the memory consumption compared to skip-list-based queues. To achieve this, our new design combines two structures, a search tree and a linked list, forming what we call a Tree Search List Queue (TSLQueue).

    Subsequently, we analyse and introduce a model for lock-free concurrent data structure access parallelism. The major impediment to scaling concurrent data structures is memory contention when accessing shared data structure access points, which leads to thread serialisation and hinders parallelism. Aiming to address this challenge, a significant amount of work in the literature has proposed multi-access techniques that improve concurrent data structure parallelism. However, there is little work on analysing and modelling the execution behaviour of concurrent multi-access data structures, especially in a shared-memory setting. In this part of the thesis, we analyse and model the general execution behaviour of concurrent multi-access data structures in the shared-memory setting. We study the behaviour of the two popular random access patterns, shared (remote) and exclusive (local) access, and of the two atomic primitives most commonly used for designing lock-free data structures: Compare-and-Swap and Fetch-and-Add.
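    The abstract closes by naming the two atomic primitives most commonly used in lock-free designs. A minimal sketch (not taken from the thesis) of both, applied to a shared counter, makes the contrast concrete: Fetch-and-Add completes in a single step, while Compare-and-Swap may have to retry under contention, which is one source of the serialisation discussed above.

```cpp
// Illustration of the two atomic primitives on a shared counter.
#include <atomic>

std::atomic<long> counter{0};

// Fetch-and-Add: a single hardware instruction, wait-free per operation.
long increment_faa() {
    return counter.fetch_add(1, std::memory_order_relaxed);
}

// Compare-and-Swap: read, compute, and retry until no other thread has
// changed the value in between; lock-free, but it may loop under contention.
long increment_cas() {
    long old = counter.load(std::memory_order_relaxed);
    while (!counter.compare_exchange_weak(old, old + 1,
                                          std::memory_order_relaxed)) {
        // 'old' was refreshed by compare_exchange_weak; just retry.
    }
    return old;
}
```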