
    Scalable Range Locks for Scalable Address Spaces and Beyond

    Range locks are a synchronization construct designed to provide multiple threads (or processes) with concurrent access to disjoint parts of a shared resource. Originally conceived in the file system context, range locks are gaining increasing interest in the Linux kernel community as a way to alleviate bottlenecks in the virtual memory management subsystem. The existing implementation of range locks in the kernel, however, uses an internal spin lock to protect the underlying tree structure that keeps track of acquired and requested ranges. This spin lock becomes a point of contention in its own right when the range lock is frequently acquired. Furthermore, where and exactly how specific (refined) ranges can be locked remains an open question. In this paper, we make two independent but related contributions. First, we propose an alternative approach for building range locks based on linked lists. The lists are easy to maintain in a lock-less fashion, and in fact, our range locks do not use any internal locks in the common case. Second, we show how the range of the lock can be refined in the mprotect operation through a speculative mechanism. This refinement, in turn, allows concurrent execution of mprotect operations on non-overlapping memory regions. We implement our new algorithms and demonstrate their effectiveness in user-space and kernel-space, achieving up to 9× speedup compared to the stock version of the Linux kernel. Beyond the virtual memory management subsystem, we discuss other applications of range locks in parallel software. As a concrete example, we show how range locks can be used to facilitate the design of scalable concurrent data structures, such as skip lists.
    Comment: 17 pages, 9 figures, Eurosys 202
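    To make the interface concrete, the following is a minimal sketch of a list-based range lock. It is not the paper's algorithm: the paper keeps the list lock-less, while this sketch protects it with a single mutex and condition variable (reintroducing exactly the internal-lock contention the paper removes). All names (`range_lock`, `range_lock_acquire`, etc.) are illustrative assumptions, not the authors' API; the sketch only shows the semantics being provided: a thread blocks until its interval overlaps no acquired interval.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/* Simplified, mutex-protected sketch of a list-based range lock.
   The real design in the paper maintains this list lock-free. */

typedef struct range_node {
    unsigned long start, end;        /* locked interval [start, end) */
    struct range_node *next;
} range_node;

typedef struct {
    pthread_mutex_t list_lock;       /* stand-in for the lock-less list */
    pthread_cond_t  released;        /* signaled when a range is released */
    range_node *head;
} range_lock;

static bool overlaps(const range_node *n, unsigned long s, unsigned long e) {
    return n->start < e && s < n->end;
}

void range_lock_init(range_lock *rl) {
    pthread_mutex_init(&rl->list_lock, NULL);
    pthread_cond_init(&rl->released, NULL);
    rl->head = NULL;
}

/* Block until [start, end) overlaps no acquired range, then record it. */
void range_lock_acquire(range_lock *rl, unsigned long start, unsigned long end) {
    pthread_mutex_lock(&rl->list_lock);
retry:
    for (range_node *n = rl->head; n; n = n->next) {
        if (overlaps(n, start, end)) {
            pthread_cond_wait(&rl->released, &rl->list_lock);
            goto retry;              /* re-scan: the list may have changed */
        }
    }
    range_node *node = malloc(sizeof *node);
    node->start = start; node->end = end;
    node->next = rl->head;
    rl->head = node;
    pthread_mutex_unlock(&rl->list_lock);
}

void range_lock_release(range_lock *rl, unsigned long start, unsigned long end) {
    pthread_mutex_lock(&rl->list_lock);
    for (range_node **pp = &rl->head; *pp; pp = &(*pp)->next) {
        if ((*pp)->start == start && (*pp)->end == end) {
            range_node *dead = *pp;
            *pp = dead->next;
            free(dead);
            break;
        }
    }
    pthread_cond_broadcast(&rl->released);  /* wake waiters to re-scan */
    pthread_mutex_unlock(&rl->list_lock);
}
```

    Disjoint ranges, such as two non-overlapping memory regions in an mprotect workload, can be acquired concurrently; only overlapping requests serialize.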

    Lock Holder Preemption Avoidance via Transactional Lock Elision

    Abstract: In this short paper we show that hardware-based transactional lock elision can provide benefit by reducing the incidence of lock holder preemption, decreasing lock hold times and promoting improved scalability.
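    The elision pattern behind this result can be sketched as follows. Real transactional lock elision runs the critical section inside a hardware transaction (e.g. Intel TSX via `_xbegin`/`_xend`), falling back to the actual lock on abort; here the transactional path is a stub that always fails, so only the shape of the pattern is shown. When the hardware path succeeds, no thread ever holds the lock, so a preempted thread cannot block others, which is the lock-holder-preemption benefit the paper measures.

```c
#include <pthread.h>
#include <stdbool.h>

/* Stubbed transaction primitives: a real implementation would use HTM
   (e.g. _xbegin/_xend on x86). The stub always fails, forcing the
   fallback path, so this sketch is portable but never actually elides. */
static bool tx_begin(void) { return false; }
static void tx_end(void)   { }

typedef struct {
    pthread_mutex_t fallback;   /* taken only when elision fails */
    int lock_taken;             /* read inside the transaction */
} elided_lock;

static int shared_counter = 0;

void elided_increment(elided_lock *l) {
    if (tx_begin()) {
        /* Transactional path: subscribe to the lock word so the
           transaction aborts if another thread takes the real lock. */
        if (l->lock_taken) { tx_end(); goto fallback; }
        shared_counter++;
        tx_end();
        return;
    }
fallback:
    pthread_mutex_lock(&l->fallback);
    l->lock_taken = 1;
    shared_counter++;
    l->lock_taken = 0;
    pthread_mutex_unlock(&l->fallback);
}
```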

    Accelerating Native Calls using Transactional Memory

    Abstract: Transitioning between executing managed code and executing native code requires synchronization to ensure that invariants used by the managed runtime are maintained or restored after the execution of the native code. We describe how transactional memory can be used to accelerate the execution of native methods by reducing the need for such synchronization. We also present the results of a simple experiment that, although preliminary, suggests that this approach may have a significant effect on performance. Unlike most of the work on exploiting transactional memory, this approach does not depend on concurrency or on improving scalability. Indeed, the experiment presented uses a single-threaded benchmark.
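    For readers unfamiliar with the transition being accelerated, a hypothetical sketch of the conventional (non-transactional) path follows. The names (`managed_thread`, `STATE_IN_NATIVE`, etc.) are illustrative, not any real runtime's API. Today each transition publishes the thread's state with atomic stores so that the runtime (e.g. a garbage collector) can observe it; the paper's idea is that executing the call inside a hardware transaction could let the runtime skip some of this per-call synchronization, since the whole call would appear atomic to other threads.

```c
#include <stdatomic.h>

/* Illustrative thread states for a managed runtime (hypothetical names). */
enum { STATE_IN_MANAGED, STATE_IN_NATIVE };

typedef struct {
    _Atomic int state;   /* observed by the runtime, e.g. during GC */
} managed_thread;

/* Conventional path: publish the state transition around the native call.
   These atomic stores are the per-call overhead TM could help avoid. */
int call_native(managed_thread *t, int (*native_fn)(int), int arg) {
    atomic_store(&t->state, STATE_IN_NATIVE);   /* runtime may now scan us */
    int r = native_fn(arg);
    atomic_store(&t->state, STATE_IN_MANAGED);  /* invariants restored */
    return r;
}

/* A trivial stand-in native method, used only for illustration. */
static int sample_native(int x) { return x + 1; }
```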