    Nested, but Separate: Isolating Unrelated Critical Sections in Real-Time Nested Locking

    Prior work has produced multiprocessor real-time locking protocols that ensure asymptotically optimal bounds on priority inversion, that support fine-grained nesting of critical sections, or that are independence-preserving under clustered scheduling. However, while several protocols achieve two of these three desirable features, no protocol to date accomplishes all three. Motivated by this gap in capabilities, this paper introduces the Group Independence-Preserving Protocol (GIPP), the first protocol to support fine-grained nested locking, guarantee a notion of independence preservation for fine-grained nested locking, and ensure asymptotically optimal priority-inversion bounds. As a stepping stone, this paper further presents the Clustered k-Exclusion Independence-Preserving Protocol (CKIP), the first asymptotically optimal independence-preserving k-exclusion lock for clustered scheduling. The GIPP and the CKIP rely on allocation inheritance (a.k.a. migratory priority inheritance) as a key mechanism to accomplish independence preservation.
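    To make the allocation-inheritance mechanism more concrete, the sketch below models only its priority-inheritance core: a lock records which task currently holds it, and while higher-priority tasks wait, the holder's effective priority is raised to the maximum waiting priority. This is a minimal illustration in Python under assumed names (InheritanceLock, Task), not the GIPP or CKIP; real allocation inheritance additionally lets the holder run on a blocked task's processor allocation, which is not modeled here.

        import threading

        class InheritanceLock:
            """Mutex whose holder inherits the highest priority among its waiters
            (a sketch of the idea behind allocation inheritance, not the GIPP)."""

            def __init__(self):
                self._mutex = threading.Lock()     # the actual critical-section lock
                self._state = threading.Lock()     # protects the bookkeeping below
                self.holder = None
                self.waiter_prios = []

            def effective_priority(self, task):
                # The holder runs at max(its base priority, priorities it blocks).
                with self._state:
                    if task is self.holder and self.waiter_prios:
                        return max(task.base_prio, max(self.waiter_prios))
                return task.base_prio

            def acquire(self, task):
                with self._state:
                    self.waiter_prios.append(task.base_prio)
                self._mutex.acquire()              # may block; holder inherits priority meanwhile
                with self._state:
                    self.waiter_prios.remove(task.base_prio)
                    self.holder = task

            def release(self, task):
                with self._state:
                    self.holder = None
                self._mutex.release()

        class Task:
            def __init__(self, name, base_prio):
                self.name, self.base_prio = name, base_prio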

    Comparing two-phase locking and optimistic concurrency control protocols in multiprocessor real-time databases

    Previous studies (Haritsa et al., 1990) have shown that optimistic concurrency control (OCC) generally performs better than lock-based protocols in disk-based real-time database systems (RTDBS). We compare the two types of concurrency control protocol in both disk-based and memory-resident multiprocessor RTDBS. Based on their performance characteristics, a new lock-based protocol, called two-phase locking-lock write all (2PL-LW), is proposed. The results of our performance evaluation experiments show that the different characteristics of the two environments indeed have a great impact on the protocols' performance. We identify such system characteristics and show that our new lock-based protocol, 2PL-LW, is better than OCC in meeting transaction deadlines in both disk-based and memory-resident RTDBS.
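    As a point of reference for the OCC side of the comparison, the following sketch shows a generic backward-validation step: a committing transaction is rejected if it read any item written by a transaction that committed after it began, in which case it would be restarted. The class and method names are illustrative assumptions; this is not the 2PL-LW protocol proposed in the paper.

        class Transaction:
            def __init__(self, start_ts):
                self.start_ts = start_ts      # logical time at which the transaction began
                self.read_set = set()
                self.write_set = set()

        class OCCValidator:
            """Backward validation: reject a committing transaction if it read an
            item written by a transaction that committed after it started."""

            def __init__(self):
                self.committed = []           # history of (commit_ts, write_set)
                self.clock = 0

            def begin(self):
                self.clock += 1
                return Transaction(self.clock)

            def try_commit(self, txn):
                for commit_ts, writes in self.committed:
                    if commit_ts > txn.start_ts and txn.read_set & writes:
                        return False          # conflict: caller restarts the transaction
                self.clock += 1
                self.committed.append((self.clock, set(txn.write_set)))
                return True

    Under two-phase locking, the same conflict would instead surface as blocking at lock-acquisition time rather than as a restart at commit time, which is the behavioral difference such comparisons quantify.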

    Rethinking State-Machine Replication for Parallelism

    State-machine replication, a fundamental approach to designing fault-tolerant services, requires commands to be executed in the same order by all replicas. Moreover, command execution must be deterministic: each replica must produce the same output upon executing the same sequence of commands. These requirements usually result in single-threaded replicas, which hinders service performance. This paper introduces Parallel State-Machine Replication (P-SMR), a new approach to parallelism in state-machine replication. P-SMR scales better than previous proposals since no component plays a centralizing role in the execution of independent commands, that is, commands that can be executed concurrently, as defined by the service. The paper introduces P-SMR, describes a "commodified architecture" to implement it, and compares its performance to other proposals using a key-value store and a networked file system.
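    The core idea, executing independent commands concurrently while keeping dependent ones ordered, can be sketched as follows for a key-value store in which commands on different keys are independent. The sharding function, queue layout, and operation names are assumptions made for illustration; in P-SMR itself the mapping of commands to execution threads is arranged without any centralizing component.

        import queue, threading

        NUM_WORKERS = 4

        def shard(cmd):
            # Commands on the same key are dependent and must stay ordered;
            # commands on different keys are independent and may run in parallel.
            return hash(cmd["key"]) % NUM_WORKERS

        class Replica:
            def __init__(self):
                self.parts = [{} for _ in range(NUM_WORKERS)]            # per-worker partitions
                self.queues = [queue.Queue() for _ in range(NUM_WORKERS)]
                for i in range(NUM_WORKERS):
                    threading.Thread(target=self._run, args=(i,), daemon=True).start()

            def deliver(self, cmd):
                # Called in the agreed total order; same-key commands land on the
                # same queue and therefore execute in delivery order.
                self.queues[shard(cmd)].put(cmd)

            def _run(self, i):
                while True:
                    cmd = self.queues[i].get()
                    if cmd["op"] == "put":
                        self.parts[i][cmd["key"]] = cmd["value"]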

    Sharing Non-Processor Resources in Multiprocessor Real-Time Systems

    Computing devices are increasingly being leveraged in cyber-physical systems, in which computing devices sense, control, and interact with the physical world. Associated with many such real-world interactions are strict timing constraints, which, if unsatisfied, can lead to catastrophic consequences. Modern examples of such timing constraints are prevalent in automotive systems, such as airbag controllers, anti-lock brakes, and new autonomous features. In all of these examples, a failure to correctly respond to an event in a timely fashion could lead to a crash, damage, injury, or even loss of life. Systems with imperative timing constraints are called real-time systems, and are broadly the subject of this dissertation. Much previous work on real-time systems and scheduling theory assumes that computing tasks are independent, i.e., the only resource they share is the platform upon which they are executed. In practice, however, tasks share many resources, ranging from more overt resources such as shared memory objects, to less overt ones, including data buses and other hardware and I/O devices. Accesses to some such resources must be synchronized to ensure safety, i.e., logical correctness, while other resources may exhibit better run-time performance if accesses are explicitly synchronized. The goal of this dissertation was to develop new synchronization algorithms and associated analysis techniques that can be used to synchronize access to many classes of resources, while improving the overall resource utilization, specifically as measured by real-time schedulability. Towards that goal, the Real-Time Nested Locking Protocol (RNLP), the first multiprocessor real-time locking protocol that supports lock nesting (fine-grained locking), is proposed and analyzed. The RNLP is then extended to support reader/writer locking, as well as k-exclusion locking. All presented RNLP variants are proven optimal, and experimental results demonstrate the schedulability-related benefits of the RNLP. Additionally, three new synchronization algorithms are presented, which are specifically motivated by the need to manage shared hardware resources to improve real-time predictability. Furthermore, two new classes of shared resources are defined, and the first synchronization algorithms for them are proposed. To analyze these new algorithms, a novel analysis technique called idleness analysis is presented, which can be used to incorporate the effects of blocking into schedulability analysis.
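    The RNLP's queue structure and blocking analysis are beyond the scope of an abstract, but the baseline problem it addresses, nesting critical sections without deadlock, can be illustrated with the classic rule of acquiring resources only in a fixed global order. The resource names below are hypothetical; the sketch shows the ordering rule itself, not the RNLP, and says nothing about the priority-inversion bounds the RNLP provides.

        import threading

        # One lock per shared resource, with a fixed global index; issuing nested
        # requests only in increasing index order keeps fine-grained nesting
        # deadlock-free.
        ORDER = {"buf_a": 0, "buf_b": 1, "bus": 2}
        locks = {name: threading.Lock() for name in ORDER}

        def nested_update():
            with locks["buf_a"]:                   # outer critical section (index 0)
                # ... work that needs only buf_a ...
                with locks["bus"]:                 # nested request, index 2 > 0: allowed
                    pass                           # ... work that needs buf_a and the bus ...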

    Scheduling Issues in Real-Time Systems

    The most important objective of real-time systems is to fulfill time-critical missions by satisfying their application requirements and timing constraints. Software utilities can analyze real-time tasks and extract their characteristics and requirements to assist the system in guaranteeing schedulability. Real-time scheduling is the core of real-time system design. It should allow real-time systems to exhibit predictable timing correctness regardless of possible uncertainty in run-time environments. In this dissertation, we study the problem of scheduling real-time tasks with resource and fault-tolerance requirements. For tasks with resource requirements, two types of platforms are examined: multiprocessor hard real-time systems and real-time database systems; for tasks with fault-tolerance requirements, we focus on hard real-time systems. We investigate preemptive priority-based scheduling for tasks with resource requirements in the context of hard real-time systems. Rate-monotonic and earliest-deadline-first priority assignment strategies can meet deadlines if the schedulability conditions are satisfied. We propose resource control protocols for these scheduling strategies, based on the concepts of priority inheritance and priority ceiling, and describe schedulability conditions for meeting deadlines. Real-time database systems have different objectives for transaction scheduling; minimizing the miss ratio is usually the major concern. We study the significance of the knowledge of execution time in system performance and propose a class of optimistic concurrency control protocols that use the knowledge of execution time. Our simulation results indicate that the knowledge of execution time substantially improves system performance. Fault tolerance is the ability to maintain the system in a safe and stable state such that the real-time application functions correctly and its timing constraints are satisfied even in the presence of faults. We develop a scheduling algorithm which attempts to build as many fault-tolerant tasks as possible into a schedule. We approximate system reliability by Markov chain models and illustrate the applicability of the proposed reliability models. We compare the proposed fault-tolerance scheduling approach with basic fault-tolerance scheduling schemes, and the simulation results show that our method provides better reliability than the basic scheduling schemes. (Also cross-referenced as UMIACS-TR-95-73)
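    The priority-ceiling idea mentioned above can be stated compactly: a task may lock a free resource only if its priority is strictly higher than the ceiling of every resource currently locked by other tasks, where a resource's ceiling is the highest priority of any task that may ever use it. The sketch below encodes just that admission rule (in its uniprocessor form) with hypothetical names; it is an illustration, not the protocols proposed in the dissertation.

        class Resource:
            def __init__(self, name, ceiling):
                self.name = name
                self.ceiling = ceiling          # highest priority of any task that may use it
                self.locked_by = None

        def may_lock(task, task_prio, resource, all_resources):
            """Priority-ceiling admission rule: grant the request only if the resource
            is free and task_prio exceeds the ceiling of every resource currently
            locked by some other task; otherwise the requester blocks."""
            if resource.locked_by is not None:
                return False
            held_by_others = [r for r in all_resources if r.locked_by not in (None, task)]
            return all(task_prio > r.ceiling for r in held_by_others)

        # Example: with R1 (ceiling 5) held by tau_1, a priority-3 task must wait
        # for R2, while a priority-6 task may proceed.
        r1, r2 = Resource("R1", ceiling=5), Resource("R2", ceiling=3)
        r1.locked_by = "tau_1"
        assert may_lock("tau_2", 3, r2, [r1, r2]) is False
        assert may_lock("tau_3", 6, r2, [r1, r2]) is True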

    Parallel network protocol stacks using replication

    Computing applications demand good performance from networking systems. This includes high-bandwidth communication using protocols with sophisticated features such as ordering, reliability, and congestion control. Much of this protocol processing occurs in software, both on desktop systems and servers. Multi-processing is a requirement on today's computer architectures because their design does not allow for increased processor frequencies. At the same time, network bandwidths continue to increase. In order to meet application demand for throughput, protocol processing must be parallel to leverage the full capabilities of multi-processor or multi-core systems. Existing parallelization strategies have performance difficulties that limit their scalability and their application to single, high-speed data streams. This dissertation introduces a new approach to parallelizing network protocol processing without the need for locks or global state. Rather than maintaining global state, each processor maintains its own copy of protocol state. Therefore, updates are local and do not require fine-grained locks or explicit synchronization. State-management work is replicated, but logically independent work is parallelized. Along with the approach, this dissertation describes Dominoes, a new framework for implementing replicated processing systems. Dominoes organizes the state information into Domains and the communication into Channels. These two abstractions provide a powerful but flexible model for testing the replication approach. This dissertation uses Dominoes to build a replicated network protocol system. The performance of common protocols, such as TCP/IP, is increased by multiprocessing single connections. On commodity hardware, throughput increases by 15-300% depending on the type of communication. Most gains are possible when communicating with unmodified peer implementations, such as Linux. In addition to quantitative results, protocol behavior is studied as it relates to the replication approach.
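    The replication approach can be sketched as follows: every worker owns a private copy of the protocol state (a Domain, in the dissertation's terms) and receives each state-changing event over its own queue (a Channel), so no worker ever touches another worker's copy and no fine-grained locks are needed. The event format and the toy sequence-number update below are assumptions made for illustration; this is a sketch of the idea, not the Dominoes framework itself.

        import queue, threading

        NUM_REPLICAS = 4

        class Domain:
            """Private copy of protocol state, owned by exactly one worker thread."""
            def __init__(self):
                self.next_seq = 0                   # e.g., next expected sequence number

            def apply(self, event):
                if event["type"] == "segment" and event["seq"] == self.next_seq:
                    self.next_seq += event["len"]

        channels = [queue.Queue() for _ in range(NUM_REPLICAS)]   # one channel per replica

        def worker(domain, channel):
            # Only this thread ever reads or writes `domain`, so no locks are needed.
            while True:
                domain.apply(channel.get())

        def publish(event):
            # State-management work is replicated: every replica sees every event and
            # updates its own copy; logically independent work can be split instead.
            for ch in channels:
                ch.put(event)

        for i in range(NUM_REPLICAS):
            threading.Thread(target=worker, args=(Domain(), channels[i]), daemon=True).start()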