8 research outputs found

    An Architecture for Declarative Real-Time Scheduling on Linux

    This paper proposes a novel framework and programming model for real-time applications, supporting declarative access to the real-time CPU scheduling features available in an operating system. The core idea is to let applications declare their temporal characteristics and/or requirements on CPU allocation: for example, some may require real-time POSIX priorities, whilst others might need resource reservations through SCHED_DEADLINE. The framework handles such a set of heterogeneous requirements by configuring the underlying multi-core platform so as to exploit the various scheduling disciplines available in the kernel, matching the applications' requirements. The framework is realized as a modular architecture in which different plugins independently handle specific real-time scheduling features of the underlying kernel, easing the customization of its behavior to support other schedulers or operating systems by adding further plugins.
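    The framework's plugin interface itself is not reproduced in this record. Purely as an illustration of the kind of kernel facility such a plugin would drive on Linux, the sketch below shows a thread requesting a SCHED_DEADLINE reservation through the sched_setattr() system call; the budget and period values are arbitrary, and struct sched_attr is declared locally because not every libc exports it.

        /* Hedged sketch: handing the calling thread a SCHED_DEADLINE reservation
         * via sched_setattr(); illustrative values, not the paper's framework. */
        #define _GNU_SOURCE
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        #ifndef SCHED_DEADLINE
        #define SCHED_DEADLINE 6
        #endif

        /* Mirrors the kernel UAPI layout; older libcs do not export it. */
        struct sched_attr {
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;
            uint32_t sched_priority;
            uint64_t sched_runtime;    /* budget per period (ns)  */
            uint64_t sched_deadline;   /* relative deadline (ns)  */
            uint64_t sched_period;     /* reservation period (ns) */
        };

        static int set_deadline_reservation(pid_t pid, uint64_t runtime_ns,
                                            uint64_t deadline_ns, uint64_t period_ns)
        {
            struct sched_attr attr = {
                .size           = sizeof(attr),
                .sched_policy   = SCHED_DEADLINE,
                .sched_runtime  = runtime_ns,
                .sched_deadline = deadline_ns,
                .sched_period   = period_ns,
            };
            return syscall(SYS_sched_setattr, pid, &attr, 0);
        }

        int main(void)
        {
            /* 2 ms of CPU time every 10 ms for the calling thread (needs privilege). */
            if (set_deadline_reservation(0, 2000000, 10000000, 10000000) < 0)
                perror("sched_setattr");
            return 0;
        }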

    Design and development of deadline based scheduling mechanisms for multiprocessor systems

    Multiprocessor systems are nowadays the de facto standard for both personal computers and server workstations. The benefits of multicore technology will reach embedded devices and cellular phones in the next few years as well. Linux, as a General Purpose Operating System (GPOS), must support many different hardware platforms, from workstations to mobile devices. Unfortunately, Linux was not designed to be a Real-Time Operating System (RTOS). As a consequence, time-sensitive (e.g. audio/video players) or simply real-time interactive applications may suffer degradations in their QoS. In this thesis we extend the implementation of the "Earliest Deadline First" algorithm in the Linux kernel from single-processor to multicore systems, allowing process migration among the CPUs. We also discuss the design choices and present experimental results that show the potential of our work.
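    The thesis implementation is not included in this record; as a hedged illustration of the migration logic such an extension needs, the fragment below sketches only the push decision of a global-EDF scheduler: a newly woken task displaces the running task with the latest absolute deadline, if any. The identifiers and the fixed CPU count are illustrative, not the thesis code.

        /* Sketch of the push-migration decision under global EDF: send a woken
         * task to an idle CPU, otherwise to the CPU whose current task has the
         * latest (least urgent) deadline, provided it is later than the new one. */
        #include <stdint.h>
        #include <stdbool.h>

        #define NR_CPUS 4

        struct cpu_state {
            uint64_t curr_deadline;   /* absolute deadline of the running task */
            bool     idle;            /* no deadline task currently running    */
        };

        /* Returns the CPU to push the woken task to, or -1 if every CPU is
         * already running something more urgent. */
        static int pick_target_cpu(const struct cpu_state cpus[NR_CPUS],
                                   uint64_t new_deadline)
        {
            int best = -1;
            uint64_t latest = new_deadline;

            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                if (cpus[cpu].idle)
                    return cpu;                    /* an idle CPU always wins  */
                if (cpus[cpu].curr_deadline > latest) {
                    latest = cpus[cpu].curr_deadline;
                    best = cpu;                    /* preempt the least urgent */
                }
            }
            return best;
        }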

    Advance Reservations for Distributed Real-Time Workflows with Probabilistic Service Guarantees

    This paper addresses the problem of optimally allocating distributed real-time workflows with probabilistic service guarantees over a Grid of physical resources made available by a provider. The discussion focuses on how such a problem may be mathematically formalised, in terms of both the constraints and the objective function to be optimised, which also accounts for possible business rules regulating the deployment of the workflows. The resulting formal problem constitutes a probabilistic admission-control test that a provider may run in order to decide whether or not it is worthwhile to admit new workflows into the system, and to determine the optimum allocation of the workflows to the available resources. Various options are presented which may be plugged into the formal problem description, depending on the specific needs of individual workflows.
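    The paper's exact formulation is not reproduced in this abstract; the following is only a generic sketch of how such an admission and allocation decision can be written as a 0-1 program, with hypothetical symbols: x_{w,r} places workflow w on resource r, U_w is its reserved bandwidth, C_r the capacity of r, and g_w the business value of admitting w.

        % Generic sketch, not the paper's model: admit and place workflows so as
        % to maximise the value of the admitted set without overcommitting any
        % resource; a probabilistic variant would replace C_r with a quantile.
        \max_{x}\; \sum_{w} g_w \sum_{r} x_{w,r}
        \quad \text{s.t.} \quad
        \sum_{r} x_{w,r} \le 1 \;\; \forall w, \qquad
        \sum_{w} U_w \, x_{w,r} \le C_r \;\; \forall r, \qquad
        x_{w,r} \in \{0,1\}.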

    GEDF Tardiness: Open Problems Involving Uniform Multiprocessors and Affinity Masks Resolved

    Prior work has shown that the global earliest-deadline-first (GEDF) scheduler is soft real-time (SRT)-optimal for sporadic task systems in a variety of contexts, meaning that bounded deadline tardiness can be guaranteed under it for any task system that does not cause platform overutilization. However, one particularly compelling context has remained elusive: multiprocessor platforms in which tasks have affinity masks that determine the processors where they may execute. Actual GEDF implementations, such as the SCHED_DEADLINE class in Linux, have dealt with this unresolved question by forgoing SRT guarantees once affinity masks are set. This unresolved question, as it pertains to SCHED_DEADLINE, was included by Peter Zijlstra in a list of important open problems affecting Linux in his keynote talk at ECRTS 2017. In this paper, this question is resolved along with another open problem that at first blush seems unrelated but is in fact closely connected. Specifically, both problems are closed by establishing two results. First, a proof strategy used previously to establish GEDF tardiness bounds that are exponential in size on heterogeneous uniform multiprocessors is generalized to show that polynomial bounds exist on a wider class of platforms. Second, both uniform multiprocessors and identical multiprocessors with affinities are shown to be within this class. These results yield the first polynomial GEDF tardiness bounds for the uniform case and the first such bounds of any kind for the identical-with-affinities case.
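    For context, the "platform overutilization" referenced above has a standard meaning in the identical-multiprocessor, no-affinities setting: for n sporadic tasks with worst-case execution times C_i and minimum inter-arrival times T_i running on m identical processors, bounded tardiness under GEDF is guaranteed whenever

        % Standard SRT-feasibility condition (identical processors, no affinities):
        U_i = \frac{C_i}{T_i}, \qquad
        \sum_{i=1}^{n} U_i \le m, \qquad
        U_i \le 1 \;\; \text{for all } i.

    The paper's contribution is extending guarantees of this kind, with polynomial tardiness bounds, to uniform multiprocessors and to identical multiprocessors with affinity masks.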

    Improving Responsiveness of Time-Sensitive Applications by Exploiting Dynamic Task Dependencies

    In this paper, a mechanism is presented for reducing priority inversion in multi-programmed computing systems. Contrary to well-known approaches from the literature, this paper tackles cases where the dependency relationships among tasks cannot be known in advance by the operating system (OS). The presented mechanism allows tasks to explicitly declare said relationships, enabling the OS scheduler to take advantage of this information and trigger priority inheritance, resulting in reduced priority inversion. We present a prototype implementation of the concept within the Linux kernel, in the form of modifications to the standard POSIX condition variables code, along with an extensive evaluation that includes a quantitative assessment of the benefits for applications making use of the technique, as well as comprehensive overhead measurements. We also present an associated technique for the theoretical schedulability analysis of a system using the new mechanism, useful to determine whether all tasks can meet their deadlines, in the specific scenario of tasks interacting only through remote procedure calls and under partitioned scheduling.
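    The modified condition-variable interface is not quoted in this abstract; the sketch below only illustrates the usage pattern described, with declare_wait_dependency() standing in as a hypothetical name for whatever call the prototype actually exposes.

        /* Hypothetical usage sketch: before blocking, the waiter names the task
         * it depends on, so the scheduler can let that task inherit its priority.
         * declare_wait_dependency() is NOT the paper's API; it is a placeholder. */
        #include <pthread.h>
        #include <stdbool.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
        static bool reply_ready = false;   /* set by the helper before signalling */

        /* Hypothetical: tell the OS that helper_tid must run for 'done' to be
         * signalled, enabling priority inheritance towards it. */
        extern void declare_wait_dependency(pthread_cond_t *cv, pthread_t helper_tid);

        void wait_for_reply(pthread_t helper_tid)
        {
            pthread_mutex_lock(&lock);
            declare_wait_dependency(&done, helper_tid);   /* hypothetical call   */
            while (!reply_ready)
                pthread_cond_wait(&done, &lock);          /* standard POSIX wait */
            pthread_mutex_unlock(&lock);
        }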

    Design, testing and performance analysis of efficient lock-free solutions for multi-core Linux scheduler

    Multiprocessor systems are nowadays the de facto standard for both personal computers and server workstations. The benefits of multi-core technology have recently been brought to embedded devices and cellular phones as well. Linux was not originally designed to be a Real-Time Operating System (RTOS), but a new scheduling class, named SCHED_DEADLINE, has recently been added to it. SCHED_DEADLINE is an implementation of the well-known Earliest Deadline First algorithm. In this thesis we first present PRACTISE, a tool for developing, debugging, testing and analysing real-time scheduling data structures in user space. Unlike other similar tools, PRACTISE executes code in parallel, allowing the performance of the code to be tested and analysed in a realistic multiprocessor scenario. We also show an implementation of a skiplist, realized with the help of the tool above. This implementation is intended to be used for process migration among the CPUs in SCHED_DEADLINE. To effectively manage concurrent accesses to the data structure, we use a revised version of the flat-combining framework.
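    As a rough illustration of the data structure involved (not the thesis code, which additionally handles concurrent access through flat combining), a deadline-ordered skiplist node and its expected O(log n) lookup might look as follows.

        /* Minimal, single-threaded sketch of a deadline-ordered skiplist of the
         * kind used to support push/pull migration decisions; concurrency
         * control (lock-free access / flat combining) is deliberately omitted. */
        #include <stdint.h>
        #include <stddef.h>

        #define MAX_LEVEL 8

        struct dl_node {
            uint64_t deadline;                 /* key: absolute deadline        */
            int      cpu;                      /* payload: CPU holding the task */
            struct dl_node *next[MAX_LEVEL];   /* forward pointers per level    */
        };

        /* Return the first node whose deadline is >= key (or NULL), starting
         * from a sentinel head node; expected O(log n) comparisons. */
        static struct dl_node *dl_search(struct dl_node *head, uint64_t key)
        {
            struct dl_node *n = head;
            for (int lvl = MAX_LEVEL - 1; lvl >= 0; lvl--)
                while (n->next[lvl] && n->next[lvl]->deadline < key)
                    n = n->next[lvl];
            return n->next[0];
        }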

    CARTOS: A Common API for Real-Time Operating Systems

    Many researchers have devoted themselves to implementing real-time services such as real-time scheduling, admission control and jitter control on various operating systems. Most real-time service implementations do not have satisfactory portability across diversified operating systems. Therefore, a portable real-time framework is proposed in this thesis to ease the development of real-time services. Based on this real-time framework, we present a common API for real-time operating systems. We implemented our API on Linux 2.4.18 and 2.6.18 and conducted a performance evaluation. The experimental results show that our implementations improve the real-time performance of Linux 2.4.18 and 2.6.18. Our API can be implemented on various operating systems as well and can help programmers save the time and effort spent on real-time service implementations.
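    CARTOS's actual interface names are not given in this record; the header-style fragment below is only a hedged sketch of what a portable real-time API of this kind could look like, and every identifier in it is hypothetical.

        /* Hypothetical sketch of a portable real-time task API; none of these
         * names are CARTOS's. A backend would map the declared parameters onto
         * whatever the host OS offers (fixed priorities, reservations, ...). */
        #include <stdint.h>

        typedef struct rt_task rt_task_t;          /* opaque task handle        */

        typedef struct {
            uint64_t period_us;                    /* activation period         */
            uint64_t wcet_us;                      /* worst-case execution time */
            uint64_t deadline_us;                  /* relative deadline         */
        } rt_params_t;

        /* Admission-test and create a periodic real-time task running body(arg). */
        int  rt_task_create(rt_task_t **task, const rt_params_t *params,
                            void (*body)(void *), void *arg);

        /* Block the task until its next periodic activation. */
        int  rt_task_wait_next_period(rt_task_t *task);

        void rt_task_destroy(rt_task_t *task);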