
    Response-time analysis of DAG tasks supporting heterogeneous computing

    Hardware platforms are evolving towards parallel and heterogeneous architectures to meet the increasing performance demands of the real-time domain. Parallel programming models are fundamental to exploiting the performance capabilities of these architectures. This paper proposes a novel response-time analysis (RTA) for verifying the schedulability of DAG tasks supporting heterogeneous computing. It analyzes the impact of executing part of the DAG on the accelerator device. As a result, the response-time upper bound of the system is more precise than the one provided by existing RTA techniques targeting homogeneous architectures. This work is supported by the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P. Peer reviewed. Postprint (published version).
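    The paper's heterogeneous analysis itself is not reproduced in this listing, but the homogeneous baseline such analyses refine is the classic Graham-style bound R <= L + (W - L)/m for a single DAG task on m identical cores (L: critical-path length, W: total work). A minimal sketch of that baseline; the dict-based DAG format is illustrative, not the paper's notation:

        # Minimal sketch of the classic Graham-style response-time bound
        # R <= L + (W - L) / m for one DAG task on m identical cores,
        # the homogeneous baseline that heterogeneous analyses refine.
        # The DAG format (node -> (wcet, successors)) is illustrative.
        from functools import lru_cache

        def dag_response_time_bound(dag, m):
            """dag: {node: (wcet, [successors])}; m: number of cores."""
            total_work = sum(wcet for wcet, _ in dag.values())

            @lru_cache(maxsize=None)
            def longest_path_from(node):
                wcet, succs = dag[node]
                return wcet + max((longest_path_from(s) for s in succs), default=0)

            critical_path = max(longest_path_from(n) for n in dag)
            return critical_path + (total_work - critical_path) / m

        # Example: fork-join DAG with a source, two parallel nodes, a sink.
        dag = {"src": (1, ["a", "b"]), "a": (3, ["sink"]),
               "b": (2, ["sink"]), "sink": (1, [])}
        print(dag_response_time_bound(dag, m=2))  # 5 + (7 - 5) / 2 = 6.0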

    An Execution Model for Fine-Grained Parallelism in Ada

    20th International Conference on Reliable Software Technologies - Ada-Europe 2015 (Ada-Europe 2015), 25-29 June 2015, Madrid, Spain. Best Paper Award. This paper extends the authors' earlier proposal for providing Ada with support for fine-grained parallelism with an execution model based on the concept of abstract executors, detailing the progress guarantees that these executors must provide and how these can be assured even in the presence of potentially blocking operations. The paper also describes how this execution model can be applied to real-time systems.

    Global EDF Scheduling for Parallel Real-Time Tasks

    As multicore processors become ever more prevalent, it is important for real-time programs to take advantage of intra-task parallelism in order to support computation-intensive applications with tight deadlines. In this thesis, we consider the Global Earliest Deadline First (GEDF) scheduling policy for task sets consisting of parallel tasks. Each task can be represented by a directed acyclic graph (DAG) where nodes represent computational work and edges represent dependences between nodes. In this model, we prove that GEDF provides a capacity augmentation bound of 4-2/m and a resource augmentation bound of 2-1/m. The capacity augmentation bound acts as a linear-time schedulability test, since it guarantees that any task set with total utilization of at most m/(4-2/m), where each task's critical-path length is at most 1/(4-2/m) of its deadline, is schedulable on m cores under GEDF. In addition, we present a pseudo-polynomial-time fixed-point schedulability test for GEDF; this test uses a carry-in work calculation based on the proof for the capacity bound. Finally, we present and evaluate a prototype platform -- called PGEDF -- for scheduling parallel tasks using GEDF. PGEDF is built by combining the GNU OpenMP runtime system and the LITMUS^RT operating system. This platform allows programmers to write parallel OpenMP tasks and specify real-time parameters such as deadlines for tasks. We perform two kinds of experiments to evaluate the performance of GEDF for parallel tasks: (1) we run numerical simulations for DAG tasks, and (2) we execute randomly generated tasks using PGEDF. Both sets of experiments indicate that GEDF performs surprisingly well and outperforms an existing scheduling technique that involves task decomposition.
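    The capacity augmentation bound translates directly into the linear-time test stated above. A minimal sketch of that check; the task-tuple layout is an assumption for illustration, not PGEDF's input format:

        # Linear-time schedulability test implied by the capacity
        # augmentation bound b = 4 - 2/m from the abstract: the task set is
        # schedulable under GEDF on m cores if total utilization is at most
        # m / b and every task's critical-path length is at most D_i / b.
        def gedf_capacity_test(tasks, m):
            """tasks: list of (work, critical_path, deadline, period) tuples."""
            b = 4 - 2 / m
            total_util = sum(work / period for work, _, _, period in tasks)
            if total_util > m / b:
                return False
            return all(cp <= deadline / b for _, cp, deadline, _ in tasks)

        # Two parallel tasks on m = 4 cores (b = 3.5): utilization
        # 0.5 + 0.25 is under 4 / 3.5, and both critical paths fit D_i / 3.5.
        tasks = [(20, 2, 10, 40), (10, 2, 8, 40)]
        print(gedf_capacity_test(tasks, m=4))  # True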

    Operating System Support for Redundant Multithreading

    Failing hardware is a fact, and trends in microprocessor design indicate that the fraction of hardware suffering from permanent and transient faults will continue to increase in future chip generations. Researchers have proposed various solutions to this issue, each with different downsides: specialized hardware components make hardware more expensive to produce and consume additional energy at runtime; fault-tolerant algorithms and libraries enforce specific programming models on the developer; compiler-based fault tolerance requires the source code of all applications to be available for recompilation. In this thesis I present ASTEROID, an operating system architecture that integrates applications with different reliability needs. ASTEROID is built on top of the L4/Fiasco.OC microkernel and extends the system with Romain, an operating system service that transparently replicates user applications. Romain supports single- and multi-threaded applications without requiring access to the application's source code. Romain replicates applications and their resources completely and thereby does not rely on hardware extensions such as ECC-protected memory. In my thesis I describe how to efficiently implement replication as a form of redundant multithreading in software. I develop mechanisms to manage replica resources and to make multi-threaded programs behave deterministically for replication. I furthermore present an approach to handle applications that use shared-memory channels with other programs. My evaluation shows that Romain provides 100% error detection and more than 99.6% error correction for single-bit flips in memory and general-purpose registers. At the same time, Romain's execution time overhead is below 14% for single-threaded applications running in triple-modular redundant mode. The last part of my thesis acknowledges that software-implemented fault tolerance methods often rely on the correct functioning of a certain set of hardware and software components, the Reliable Computing Base (RCB). I introduce the concept of the RCB and discuss what constitutes the RCB of the ASTEROID system and other fault tolerance mechanisms. Thereafter I present three case studies that evaluate approaches to protecting RCB components, aiming for a software stack that is fully protected against hardware errors.
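    As a rough illustration of the error-correction step at the heart of triple-modular-redundant execution: three replicas reach a synchronization point, their externally visible state is compared, and a divergent replica is out-voted by the majority. This is a sketch only; Romain's actual mechanism compares replica state at the kernel interface of L4/Fiasco.OC:

        # Minimal majority-vote sketch for triple-modular redundancy.
        # The "state" values stand in for whatever is compared at a
        # replica synchronization point (illustrative, not Romain's API).
        from collections import Counter

        def majority_vote(replica_states):
            """Return (majority_state, indices_of_divergent_replicas)."""
            counts = Counter(replica_states)
            majority, votes = counts.most_common(1)[0]
            if votes < 2:
                raise RuntimeError("no majority: uncorrectable divergence")
            divergent = [i for i, s in enumerate(replica_states) if s != majority]
            return majority, divergent

        # A single-bit flip in replica 1 is detected and out-voted.
        states = (0x1234, 0x1235, 0x1234)
        good, bad = majority_vote(states)
        print(hex(good), bad)  # 0x1234 [1]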

    Memory Efficient Scheduling for Multicore Real-time Systems

    Modern real-time systems are becoming increasingly complex and require significant computational power to meet their demands. Since the increase in uniprocessor speed has slowed down in the last decade, multicore processors are now the preferred way to supply the increased performance demand of real-time systems. A significant amount of work in the real-time community has focused on scheduling solutions for multicore processors, for both sequential and parallel real-time tasks. Even though such solutions are able to provide strict timing guarantees on the overall response time of real-time tasks, they rely on the assumption that the worst-case execution time (WCET) of each individual task is known. However, physical shared resources such as main memory and I/O are heavily employed in multicore processors. These resources are limited and therefore subject to contention. In fact, the execution time of a task when run in parallel with other tasks can be significantly larger than the execution time of the same task when run in isolation. In addition, the presence of shared resources increases timing unpredictability due to the conflicts generated by multiple cores. As a result, the adoption of multicore processors for real-time systems depends on resolving such sources of unpredictability. In this dissertation, we investigate memory bus contention. In particular, two main problems are associated with memory contention: (1) unpredictable behavior and (2) hindrance of performance. We show how to mitigate these two problems through scheduling. Scheduling is an attractive tool that can be easily integrated into the system without the need for hardware modifications. We adopt an execution model that exposes memory as a resource to the scheduling algorithm. Thus, the theory of real-time multiprocessor scheduling, which has seen significant advances in recent years, can be utilized to schedule both processor cores and memory. Since the real-time workload on multicore processors can be modeled as sequential or parallel tasks, we also study parallel task scheduling by taking memory time into account.
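    As a toy illustration of the execution model described above -- exposing memory as a resource to the scheduler -- one can treat the shared bus as a single extra server on which all memory phases are serialized. The check below is an illustrative necessary condition under that assumption, not the dissertation's analysis:

        # Toy "memory as a schedulable resource" model: each task has a
        # processor demand and a memory-bus demand per period, and the
        # single shared bus serializes all memory phases. Illustrative
        # necessary conditions only: cores and bus must not be overloaded.
        def memory_aware_feasibility(tasks, m):
            """tasks: list of (cpu_time, mem_time, period); m: cores."""
            cpu_util = sum((c + mem) / t for c, mem, t in tasks)  # core demand
            bus_util = sum(mem / t for _, mem, t in tasks)        # bus demand
            return cpu_util <= m and bus_util <= 1

        tasks = [(4, 2, 20), (6, 1, 30), (3, 3, 15)]
        print(memory_aware_feasibility(tasks, m=2))  # True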

    Toward Efficient Scheduling for Parallel Real-Time Tasks on Multiprocessors

    Modern real-time applications are becoming more demanding computationally, while their temporal requirements, dictated by the physical world, often remain unchanged. This, coupled with the increasing prevalence of multiprocessors in real-time systems, necessitates that highly computation-demanding real-time tasks be parallelized to exploit the parallelism offered by the underlying hardware, in order to satisfy their temporal constraints. Scheduling parallel real-time tasks, however, introduces a new layer of complexity due to the allowance for intra-task parallelism. This dissertation addresses the problem of scheduling parallel real-time tasks in which tasks may (or may not) access shared non-processor resources, such as in-memory buffers or data structures. Specifically, for independent tasks, we propose new scheduling algorithms and schedulability analyses for parallel tasks with these characteristics, under federated and global scheduling. Experimental results show that the proposed algorithms and analyses improve on previously introduced methods. For parallel tasks that may access shared non-processor resources, we present a blocking analysis for two different types of spinlocks; through evaluations, we make a recommendation for a preferable ordering of locks. We also study practical runtime parallel scheduler designs for soft real-time applications and present a design that is more suitable for soft real-time systems.

    Parallel Real-Time Scheduling for Latency-Critical Applications

    In order to provide safety guarantees or quality-of-service guarantees, many of today's systems consist of latency-critical applications, i.e., applications with timing constraints. The problem of scheduling multiple latency-critical jobs on a multiprocessor or multicore machine has been extensively studied for sequential (non-parallelizable) jobs, and different system models and objectives have been considered. However, the computational requirement of a single job is still limited by the capacity of a single core. To provide increasingly complex application functionality and to complete higher computational demands within the same or even more stringent timing constraints, we must exploit the internal parallelism of jobs, where individual jobs are parallel programs that can potentially utilize more than one core in parallel. However, there is little work considering scheduling multiple parallel jobs that are latency-critical. This dissertation focuses on developing new scheduling strategies, analysis tools, and practical platform design techniques to enable efficient and scalable parallel real-time scheduling for latency-critical applications on multicore systems. In particular, the research is focused on two types of systems: (1) static real-time systems for tasks with deadlines, where the temporal properties of the tasks that need to execute are known a priori and the goal is to guarantee the temporal correctness of the tasks prior to their execution; and (2) online systems for latency-critical jobs, where multiple jobs arrive over time and the goal is to optimize a performance objective of the jobs during execution. For static real-time systems with parallel tasks, several scheduling strategies, including global earliest deadline first, global rate monotonic and a novel federated scheduling, are proposed, analyzed and implemented. These scheduling strategies have the best known theoretical performance for parallel real-time tasks under any global strategy, any fixed-priority scheduling and any scheduling strategy, respectively. In addition, federated scheduling is generalized to systems with multiple criticality levels and systems with stochastic tasks. Both numerical and empirical experiments show that federated scheduling and its variations have good schedulability performance and are efficient in practice. For online systems with multiple latency-critical jobs, different online scheduling strategies are proposed and analyzed for different objectives, including maximizing the number of jobs meeting a target latency, maximizing the profit of jobs, minimizing the maximum latency and minimizing the average latency. For example, a simple First-In-First-Out scheduler is proven to be scalable for minimizing the maximum latency. Based on this theoretical intuition, a more practical work-stealing scheduler is developed, analyzed and implemented. Empirical evaluations indicate that, on both real-world and synthetic workloads, this work-stealing implementation performs almost as well as an optimal scheduler.
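    A minimal sketch of the core-allocation rule commonly used for federated scheduling in the parallel real-time literature: a high-utilization task (work C_i greater than deadline D_i) receives ceil((C_i - L_i) / (D_i - L_i)) dedicated cores, which suffices for any work-conserving scheduler, while the remaining tasks share the leftover cores as sequential tasks. The function and task layout below are illustrative, not the dissertation's implementation:

        # Federated core allocation (implicit-deadline sketch): dedicate
        # n_i = ceil((C_i - L_i) / (D_i - L_i)) cores to each
        # high-utilization task; low-utilization tasks run sequentially
        # on whatever cores remain.
        from math import ceil

        def federated_allocation(tasks, m):
            """tasks: list of (work, critical_path, deadline) tuples."""
            dedicated = 0
            for work, cp, deadline in tasks:
                if cp > deadline:
                    return None  # infeasible: critical path alone misses the deadline
                if work > deadline:  # high-utilization task: gets its own cores
                    dedicated += ceil((work - cp) / (deadline - cp))
            return dedicated if dedicated <= m else None

        tasks = [(50, 10, 20), (30, 5, 15), (8, 3, 10)]  # third is low-utilization
        print(federated_allocation(tasks, m=8))  # ceil(40/10) + ceil(25/10) = 7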

    Fehlertolerante Mehrkernprozessoren für gemischt-kritische Echtzeitsysteme (Fault-Tolerant Multicore Processors for Mixed-Criticality Real-Time Systems)

    Current and future computing systems must be appropriately designed to cope with random hardware faults in order to provide a dependable service and correct functionality. Dependability has many facets to be addressed when designing a system, and this is especially challenging in mixed-critical real-time systems, where safety standards play an important role and where responding in time can be as important as responding correctly, or even responding at all. The thesis addresses the dependability of mixed-critical real-time systems, considering three important requirements: integrity, resilience and real-time. More specifically, it looks into the architectural and performance aspects of achieving dependability, concentrating on error detection and handling in hardware -- more specifically in the Network-on-Chip (NoC), the backbone of modern MPSoCs -- and on the performance of error handling and recovery in software. The thesis starts by looking at the impact of random hardware faults on the NoC and on the system, with special focus on soft errors. It then addresses the uncovered weaknesses in the NoC by proposing a resilient NoC for mixed-critical real-time systems that is able to provide a highly reliable service with transparent protection for the applications. A formal communication-time analysis is provided, with common ARQ protocols modeled for NoCs, including a novel ARQ-based protocol optimized for DMAs. After addressing the efficient use of ARQ-based protocols in NoCs, the thesis proposes the Advanced Integrity Q-service (AIQ), a low-overhead mechanism to achieve integrity and real-time guarantees for NoC transactions on an End-to-End (E2E) basis. Inspired by transactions in distributed systems, the mechanism differs from the previous approach in that it does not provide error recovery in hardware but delegates the task to software, making use of existing functionality in cross-layer fault-tolerance solutions. Finally, the thesis addresses error handling in software as seen in cross-layer approaches, focusing on the performance of replicated software execution on many-core platforms. Replicated software execution protects the system against random hardware faults; it relies on hardware-supported error detection and on error handling in software. Replica-aware co-scheduling is proposed to achieve high performance with replicated execution, which is not possible with standard real-time schedulers.
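    As background for the ARQ protocols modeled in the thesis, a minimal stop-and-wait sketch, the simplest member of that protocol family: the sender retransmits a packet until it is acknowledged, bounding the effect of transient link errors. The channel model and retry budget below are illustrative assumptions:

        # Stop-and-wait ARQ sketch: retransmit until acknowledged or the
        # retry budget is exhausted. The channel callable abstracts
        # delivery plus acknowledgment (illustrative, not the thesis's NoC).
        import random

        def send_with_arq(packet, channel, max_retries=4):
            """Return the number of attempts on success, or None."""
            for attempt in range(1, max_retries + 1):
                if channel(packet):  # delivery and ACK succeeded
                    return attempt
                # NACK or timeout: retransmit the same packet
            return None              # exceeded retry budget

        # Toy channel dropping 30% of packets to soft errors.
        channel = lambda pkt: random.random() > 0.3
        print(send_with_arq(b"flit-payload", channel))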