
    Nested, but Separate: Isolating Unrelated Critical Sections in Real-Time Nested Locking

    Prior work has produced multiprocessor real-time locking protocols that ensure asymptotically optimal bounds on priority inversion, that support fine-grained nesting of critical sections, or that are independence-preserving under clustered scheduling. However, while several protocols achieve two of these three desirable properties, no protocol to date accomplishes all three. Motivated by this gap, this paper introduces the Group Independence-Preserving Protocol (GIPP), the first protocol to support fine-grained nested locking, guarantee a notion of independence preservation for fine-grained nested locking, and ensure asymptotically optimal priority-inversion bounds. As a stepping stone, the paper further presents the Clustered k-Exclusion Independence-Preserving Protocol (CKIP), the first asymptotically optimal independence-preserving k-exclusion lock for clustered scheduling. Both the GIPP and the CKIP rely on allocation inheritance (a.k.a. migratory priority inheritance) as the key mechanism for accomplishing independence preservation.
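
    The grouping idea can be pictured with a small C++ sketch. This is our own toy model under assumed names (ResourceGroup, with_nested), not the GIPP itself: resources are partitioned into groups, nesting stays within a single group, and intra-group locks are always taken in a fixed index order, so tasks that touch disjoint groups never delay one another.

        // Toy model of group isolation (illustrative only, not the GIPP).
        #include <algorithm>
        #include <array>
        #include <cstdio>
        #include <mutex>
        #include <thread>
        #include <vector>

        constexpr int kResourcesPerGroup = 4;

        struct ResourceGroup {
            std::array<std::mutex, kResourcesPerGroup> locks;

            // Acquire a nested set of resources from this group only, always
            // in ascending index order -- a simple deadlock-avoidance rule.
            template <typename Fn>
            void with_nested(std::vector<int> ids, Fn&& critical_section) {
                std::sort(ids.begin(), ids.end());
                for (int id : ids) locks[id].lock();
                critical_section();
                for (auto it = ids.rbegin(); it != ids.rend(); ++it)
                    locks[*it].unlock();
            }
        };

        int main() {
            ResourceGroup audio, video;  // unrelated groups: nesting never crosses them
            std::thread t1([&] { audio.with_nested({0, 2}, [] { std::puts("audio work"); }); });
            std::thread t2([&] { video.with_nested({1, 3}, [] { std::puts("video work"); }); });
            t1.join();
            t2.join();
        }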

    Sharing Non-Processor Resources in Multiprocessor Real-Time Systems

    Computing devices are increasingly being leveraged in cyber-physical systems, in which they sense, control, and interact with the physical world. Many such real-world interactions carry strict timing constraints that, if unsatisfied, can lead to catastrophic consequences. Modern examples of such timing constraints are prevalent in automotive systems, such as airbag controllers, anti-lock brakes, and new autonomous features. In all of these examples, a failure to respond to an event correctly and in a timely fashion could lead to a crash, damage, injury, or even loss of life. Systems with such imperative timing constraints are called real-time systems and are, broadly, the subject of this dissertation.

    Much previous work on real-time systems and scheduling theory assumes that computing tasks are independent, i.e., that the only resource they share is the platform upon which they are executed. In practice, however, tasks share many resources, ranging from overt ones, such as shared-memory objects, to less overt ones, including data buses and other hardware and I/O devices. Accesses to some such resources must be synchronized to ensure safety, i.e., logical correctness, while other resources may exhibit better run-time performance if accesses are explicitly synchronized.

    The goal of this dissertation was to develop new synchronization algorithms and associated analysis techniques that can be used to synchronize access to many classes of resources while improving overall resource utilization, specifically as measured by real-time schedulability. Toward that goal, the Real-Time Nested Locking Protocol (RNLP), the first multiprocessor real-time locking protocol that supports lock nesting (i.e., fine-grained locking), is proposed and analyzed. The RNLP is then extended to support reader/writer locking as well as k-exclusion locking. All presented RNLP variants are proven optimal, and experimental results demonstrate the schedulability-related benefits of the RNLP. Additionally, three new synchronization algorithms are presented, specifically motivated by the need to manage shared hardware resources for improved real-time predictability. Two new classes of shared resources are also defined, together with the first synchronization algorithms for them. To analyze these new algorithms, a novel analysis technique called idleness analysis is presented, which can be used to incorporate the effects of blocking into schedulability analysis.
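
    The k-exclusion notion that the RNLP is extended to support (at most k tasks hold one of k resource replicas at a time) can be prototyped with a C++20 counting semaphore. This minimal sketch illustrates only the k-exclusion semantics; it provides none of the priority-inversion bounds of the protocols analyzed in the dissertation.

        // Minimal k-exclusion prototype: a counting semaphore admits at most
        // kReplicas holders at once (requires C++20).
        #include <cstdio>
        #include <semaphore>
        #include <thread>
        #include <vector>

        constexpr int kReplicas = 3;                    // k identical resource copies
        std::counting_semaphore<kReplicas> gate(kReplicas);

        void task(int id) {
            gate.acquire();                             // wait until one of k replicas is free
            std::printf("task %d holds a replica\n", id);
            gate.release();                             // return the replica
        }

        int main() {
            std::vector<std::thread> workers;
            for (int i = 0; i < 8; ++i) workers.emplace_back(task, i);
            for (auto& w : workers) w.join();
        }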

    A summary of research in system software and concurrency at the University of Malta: multithreading

    Multithreading has emerged as a leading paradigm for the development of applications with demanding performance requirements. This can be attributed to the benefits reaped through overlapping I/O with computation, with the added bonus of speedup when multiprocessors are employed. However, the use of multithreading brings with it new challenges. Cache utilisation is often very poor in multithreaded applications, owing to the loss of data-access locality incurred by frequent context switching. This problem is compounded on shared-memory multiprocessors when dynamic load balancing is introduced, as thread migration also disrupts cache content. Moreover, contention for shared data within a thread scheduler for shared-memory multiprocessors has an adverse effect on efficiency when handling fine-grained threads. Over the past few years, the System Software Research Group at the University of Malta has conducted research into the effective design of user-level thread schedulers, identifying several weaknesses in conventional designs and subsequently proposing a radical overhaul of the status quo to overcome these deficiencies. Various results have been published in academic conferences and journals [1–4]; this brief report highlights the principal findings. The related problem of communication and I/O bottlenecks in multithreaded systems, and in contemporary computer systems in general, is discussed elsewhere in these proceedings [5].

    Supporting Nested Resources in MrsP

    The original MrsP proposal presented a new multiprocessor resource-sharing protocol based on the properties and behaviour of the Priority Ceiling Protocol, supported by a novel helping mechanism. While this approach proved to be as simple and elegant as the single-processor protocol, its implications for nested resources were identified as requiring further clarification. In this work we present a complete approach to nested-resource behaviour and analysis for the MrsP protocol.
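
    The ceiling rule that MrsP inherits from the Priority Ceiling Protocol can be sketched in a few lines. The model below is our own simplification (no spinning, no helping mechanism, and hypothetical ceiling_lock/ceiling_unlock helpers): while a task holds a resource, its effective priority is raised to the resource's ceiling.

        // Simplified immediate-ceiling model (illustrative; not MrsP itself).
        #include <algorithm>
        #include <cstdio>

        // Precomputed offline: a resource's ceiling is the highest priority
        // of any task that may ever lock it.
        struct Resource { int ceiling; };
        struct Task     { int base_prio; int eff_prio; };

        void ceiling_lock(Task& t, const Resource& r) {
            t.eff_prio = std::max(t.eff_prio, r.ceiling);  // boost holder to the ceiling
            // ... critical section executes at the boosted priority ...
        }

        void ceiling_unlock(Task& t, const Resource&) {
            // Single-resource simplification: with nesting, a stack of prior
            // priorities would be needed -- exactly the kind of subtlety the
            // paper's nested-resource analysis addresses.
            t.eff_prio = t.base_prio;
        }

        int main() {
            Resource r{9};                 // also used by a priority-9 task
            Task low{2, 2};                // low-priority task acquires it
            ceiling_lock(low, r);
            std::printf("effective priority while holding: %d\n", low.eff_prio);
            ceiling_unlock(low, r);
        }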

    Towards Scalable Parallel Fibonacci Heap Implementation

    With the advancement of multiprocessor hardware, sequential algorithms are increasingly being revisited and gradually replaced by concurrent equivalents that effectively exploit the parallel architecture. Parallel algorithms speed up performance by dividing a task among a number of processes (or threads) that can be scheduled and executed simultaneously on independent processing units. Many well-known basic algorithms and data structures have been examined for efficient parallel counterparts, which have been published as popular libraries. Advanced data structures and algorithms, however, have not seen similar investigation, mainly because they involve many optimization steps backed by substantial internal state, so finding a safe and efficient parallel implementation is not an easy endeavor. Safety is of utmost importance for shared-memory parallel implementations, as it provides the basis for the consistency of any data structure and algorithm. Well-known tools such as locks, semaphores, and atomic operations assist in building safe parallel implementations, but using them effectively, with well-defined synchronization, is a key factor in the overall performance of any data structure or algorithm. This paper explores an advanced data structure, the Fibonacci heap, and its operations, evaluating implementations that use two different synchronization mechanisms: coarse-grained and fine-grained. The analysis in this paper shows that a fine-grained synchronized Fibonacci heap implementation with certain relaxed semantics scales better with a growing degree of concurrency than the coarse-grained synchronized implementation.
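
    The coarse-grained baseline the paper compares against amounts to a single mutex serializing every heap operation. In the sketch below, std::priority_queue stands in for a Fibonacci heap purely for brevity; a fine-grained variant would instead lock individual nodes of the root list so that non-conflicting operations can proceed in parallel.

        // Coarse-grained synchronization: one global lock per data structure.
        #include <cstdio>
        #include <mutex>
        #include <optional>
        #include <queue>
        #include <thread>
        #include <vector>

        class CoarseGrainedHeap {
            std::priority_queue<int> heap_;
            std::mutex m_;                          // single global lock
        public:
            void insert(int key) {
                std::lock_guard<std::mutex> g(m_);  // whole structure locked
                heap_.push(key);
            }
            std::optional<int> extract_max() {
                std::lock_guard<std::mutex> g(m_);
                if (heap_.empty()) return std::nullopt;
                int top = heap_.top();
                heap_.pop();
                return top;
            }
        };

        int main() {
            CoarseGrainedHeap h;
            std::vector<std::thread> ts;
            for (int i = 0; i < 4; ++i)
                ts.emplace_back([&h, i] {
                    for (int j = 0; j < 100; ++j) h.insert(i * 100 + j);
                });
            for (auto& t : ts) t.join();
            std::printf("max = %d\n", h.extract_max().value_or(-1));
        }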
