163 research outputs found

    Using Imprecise Computing for Improved Real-Time Scheduling

    Conventional hard real-time scheduling is often overly pessimistic due to worst-case execution time estimation. The pessimism can be mitigated by exploiting imprecise computing in applications where occasional small errors are acceptable. A few previous works investigate this opportunity, but they are restricted to the preemptive case. We study how to make use of imprecise computing in uniprocessor non-preemptive real-time scheduling, which is known to be more difficult than its preemptive counterpart. Several heuristic algorithms are developed for periodic tasks with independent or cumulative errors due to imprecision. Simulation results show that the proposed techniques can significantly improve task schedulability and achieve the desired accuracy–schedulability tradeoff. The benefit of considering imprecise computing is further confirmed by a prototype implementation in a Linux system.

    The mixed-criticality system model is popular for reducing pessimism in real-time scheduling while providing guarantees for critical tasks in the presence of unexpected overruns. However, it is controversial due to two drawbacks. First, all low-criticality tasks are dropped in high-criticality mode, although they are still needed. Second, a single high-criticality job overrun forces all high-criticality tasks into the pessimistic high-criticality mode, so resource utilization becomes inefficient. We attempt to tackle these two limitations simultaneously in multiprocessor scheduling, whereas several recent works address them mostly for uniprocessor scheduling. We study how to achieve graceful degradation of low-criticality tasks by continuing their execution with imprecise computing, or even precise computing if there is sufficient utilization slack. Schedulability conditions under this Variable-Precision Mixed-Criticality (VPMC) system model are investigated for partitioned scheduling and global fpEDF-VD scheduling. A deferred switching protocol is introduced so that the chance of switching to high-criticality mode is significantly reduced. Moreover, we develop a precision optimization approach that maximizes precise computing of low-criticality tasks through a 0-1 knapsack formulation. Experiments are performed through both software simulation and Linux prototyping with consideration of overhead. The proposed methods improve the Quality-of-Service of low-criticality tasks while guaranteeing that all deadline constraints are satisfied, and the precision optimization largely reduces computing errors compared to constantly executing low-criticality tasks with imprecise computing in high-criticality mode.
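
    The 0-1 knapsack formulation mentioned above lends itself to a short illustration. Below is a minimal sketch, assuming a simplified model in which each low-criticality task needs some extra utilization to run precisely instead of imprecisely and yields some error reduction in return; the task names, fields, and discretization grid are hypothetical, not the dissertation's actual formulation.

```python
# Hedged sketch: choose which low-criticality tasks to run precisely by
# solving a 0-1 knapsack over the available utilization slack.
from dataclasses import dataclass

@dataclass
class LoTask:
    name: str
    u_imprecise: float      # utilization when run with imprecise computing
    u_precise: float        # utilization when run precisely
    error_reduction: float  # benefit of running this task precisely

def choose_precise_tasks(tasks, slack, grid=1000):
    """Maximize total error reduction subject to the extra utilization
    (u_precise - u_imprecise) of the chosen tasks fitting in the slack."""
    weights = [max(0, round((t.u_precise - t.u_imprecise) * grid)) for t in tasks]
    cap = round(slack * grid)
    best = [0.0] * (cap + 1)                 # best value for each budget
    keep = [set() for _ in range(cap + 1)]   # chosen task names for each budget
    for t, w in zip(tasks, weights):
        for c in range(cap, w - 1, -1):
            if best[c - w] + t.error_reduction > best[c]:
                best[c] = best[c - w] + t.error_reduction
                keep[c] = keep[c - w] | {t.name}
    return keep[cap]

# Example: 0.15 utilization slack remains after guaranteeing all deadlines.
tasks = [LoTask("logger", 0.05, 0.12, 0.4),
         LoTask("filter", 0.10, 0.18, 0.9),
         LoTask("render", 0.08, 0.20, 0.5)]
print(choose_precise_tasks(tasks, slack=0.15))  # {'logger', 'filter'} (set order may vary)
```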

    MC-Fluid: Fluid Model-Based Mixed-Criticality Scheduling on Multiprocessors

    A mixed-criticality system consists of multiple components with different criticalities. While mixed-criticality scheduling has been extensively studied for the uniprocessor case, the problem of efficient scheduling for the multiprocessor case has largely remained open. We design a fluid model-based multiprocessor mixed-criticality scheduling algorithm, called MC-Fluid, in which each task is executed in proportion to its criticality-dependent rate. We propose an exact schedulability condition for MC-Fluid and an optimal assignment algorithm for criticality-dependent execution rates with polynomial-time complexity. Since MC-Fluid cannot be implemented directly on real hardware platforms, we propose another scheduling algorithm, called MC-DP-Fair, which can be implemented while preserving the same schedulability properties as MC-Fluid. We show that MC-Fluid has a speedup factor of (1 + √5)/2 (≈ 1.618), the best known for multiprocessor MC scheduling, and simulation results show that MC-DP-Fair outperforms all existing algorithms.
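
    As a point of reference for the fluid model used above, the feasibility condition for a fluid platform itself is simple: a rate assignment is feasible on m unit-speed processors exactly when no task's rate exceeds 1 and the rates sum to at most m. The sketch below only checks this basic condition with placeholder rates; MC-Fluid's optimal criticality-dependent rate assignment is considerably more involved.

```python
# Hedged sketch: basic fluid-platform feasibility check; the rate values
# are illustrative placeholders, not MC-Fluid's computed rates.
def fluid_feasible(rates, m):
    """Feasible iff each per-task rate is at most 1 and the rates sum to <= m."""
    return all(0.0 <= r <= 1.0 for r in rates) and sum(rates) <= m

# Check LO-mode and HI-mode rate vectors separately on m = 2 processors.
lo_rates = [0.7, 0.9, 0.4]   # rates of all tasks before a mode switch
hi_rates = [0.95, 1.0]       # rates of the HI tasks after a mode switch
print(fluid_feasible(lo_rates, 2), fluid_feasible(hi_rates, 2))   # True True
```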

    A Survey of Research into Mixed Criticality Systems

    This survey covers research into mixed criticality systems that has been published since Vestal’s seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single processor analysis (including fixed priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards.

    Resource-Efficient Scheduling Of Multiprocessor Mixed-Criticality Real-Time Systems

    Timing guarantees are critical to ensure the correctness of embedded software systems that interact with the physical environment. As modern embedded real-time systems evolve, they face three challenges: resource constraints, mixed criticality, and multiprocessors. This dissertation focuses on resource-efficient scheduling techniques for mixed-criticality systems on multiprocessor platforms.

    While mixed-criticality (MC) scheduling has been extensively studied on uniprocessor platforms, the problem on multiprocessor platforms has been largely open. Multiprocessor algorithms are broadly classified into two categories: global and partitioned. Global scheduling approaches use a global run-queue and migrate tasks among processors for improved schedulability. Partitioned scheduling approaches use per-processor run-queues and can reduce preemption/migration overheads in real implementations. Existing global scheduling schemes for MC systems have suffered from low schedulability. Our goal in the first work is to improve the schedulability of MC scheduling algorithms. Inspired by the fluid scheduling model in the regular (non-MC) domain, we have developed the MC-Fluid scheduling algorithm, which executes each task with criticality-dependent rates. We have evaluated MC-Fluid in terms of the processor speedup factor: MC-Fluid is a multiprocessor MC scheduling algorithm with a speedup factor of 4/3, which is known to be optimal. In other words, MC-Fluid can schedule any feasible mixed-criticality task system if each processor is sped up by a factor of 4/3.

    Although MC-Fluid is speedup-optimal, it is not directly implementable on platforms of real processors due to the fractional-processor assumption, under which multiple tasks may execute on one processor at the same time. In the second work, we have considered the characteristic of a real processor (executing only one task at a time) and have developed the MC-Discrete scheduling algorithm for regular (non-fluid) scheduling platforms. We have shown that MC-Discrete is also speedup-optimal.

    While our previous two works consider global scheduling approaches, our last work considers partitioned scheduling approaches, which are widely used in practice because of their low implementation overheads. In addition to partitioned scheduling, this work considers the limitation of conventional MC scheduling algorithms that drop all low-criticality tasks when a certain threshold of actual execution times is violated. In practice, the system designer wants to execute the tasks as much as possible. To address this issue, we have developed the MC-ADAPT scheduling framework on uniprocessor platforms to drop as few low-criticality tasks as possible. Extending the framework to partitioned multiprocessor platforms, we further reduce the dropping of low-criticality tasks by allowing migration of low-criticality tasks at the moment of a criticality switch. We have evaluated the quality of the task-dropping solution in terms of the speedup factor. In existing work, the speedup factor has been used to evaluate MC scheduling algorithms in terms of schedulability under the worst-case scheduling scenario. In this work, we apply the speedup factor to evaluate MC scheduling algorithms in terms of the quality of their task-dropping solutions under various MC scheduling scenarios. We have derived that MC-ADAPT has a speedup factor of 1.618 for its task-dropping solution.
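
    The processor speedup factor used throughout the abstract has a standard meaning worth spelling out; the statement below is the usual definition and is not specific to this dissertation.

```latex
% Standard definition: an algorithm A has speedup factor s >= 1 if, for any
% task system tau that is feasible on m unit-speed processors,
\[
  \tau \ \text{feasible on } m \text{ speed-}1 \text{ processors}
  \;\Longrightarrow\;
  \tau \ \text{schedulable by } A \text{ on } m \text{ speed-}s \text{ processors.}
\]
% The claim that MC-Fluid has speedup factor 4/3 is the case s = 4/3;
% equivalently, shrinking every execution-time parameter of a feasible
% system by a factor of 3/4 makes it MC-Fluid-schedulable on unit-speed
% processors.
```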

    Overhead Based Cluster Scheduling of Mixed Criticality Systems on Multicore Platform

    The cluster-based technique is gaining attention for scheduling tasks of mixed-criticality (MC) real-time multicore systems. In this technique, the cores of the MC system are distributed into groups known as clusters. Once all cores are distributed into clusters, the tasks are partitioned among the clusters and scheduled on the cores within each cluster using a global approach. In this study, a cluster-based technique is adopted for scheduling tasks of real-time mixed-criticality systems (MCS). The Decreasing Criticality Decreasing Utilization with worst-fit (DCDU-WF) technique is used for partitioning tasks to clusters, whereas a novel mixed-criticality cluster-based boundary-fair (MC-Bfair) scheduling approach is used for scheduling tasks on the cores within each cluster. The MC-Bfair scheduling algorithm reduces the number of context switches and task migrations, which minimizes the overhead of mixed-criticality tasks. Migration and context-switch overhead time is charged to a task at each migration and context switch, respectively: in low-criticality mode, the low-mode context-switch and migration overheads are added to the task's execution time, while in high-criticality mode the high-mode overheads are added. The experimental results show better schedulability performance of the proposed cluster-based technique compared to the cluster-based fixed priority (CB-FP), MC-EKG-VD-1, global, and partitioned scheduling techniques; e.g., for target utilization U=0.6, the proposed technique schedules 66.7% of task sets, while MC-EKG-VD-1, CB-FP, partitioned, and global techniques schedule 50%, 33.3%, 16.7%, and 0% of task sets, respectively.
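
    The per-mode overhead accounting described above can be summarized in a few lines. The sketch below is a hedged illustration: the overhead magnitudes and the switch/migration counts are assumed for the example, and the paper's exact bounding of those counts may differ.

```python
# Hedged sketch: inflate a task's execution time with mode-specific
# context-switch and migration overheads before running the analysis.
def inflated_wcet(wcet, n_ctx, n_mig, ctx_overhead, mig_overhead):
    """Charge each context switch and each migration to the task's budget."""
    return wcet + n_ctx * ctx_overhead + n_mig * mig_overhead

# A task analysed separately with LO-mode and HI-mode overhead figures.
c_lo = inflated_wcet(wcet=4.0, n_ctx=3, n_mig=1, ctx_overhead=0.05, mig_overhead=0.20)
c_hi = inflated_wcet(wcet=6.0, n_ctx=5, n_mig=2, ctx_overhead=0.08, mig_overhead=0.30)
print(c_lo, c_hi)   # inflated LO- and HI-mode execution budgets
```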

    Mixed-criticality real-time task scheduling with graceful degradation

    Mixed-criticality real-time systems implement functionalities of different degrees of importance (or criticalities) upon a shared platform. In traditional mixed-criticality systems, after a switch to hi mode, only hi-criticality tasks are considered for execution and no service guarantee is provided to lo-criticality tasks. However, with a careful, less pessimistic design, a certain degree of service guarantee can be provided to lo-criticality tasks upon a mode switch. This concept is broadly known as graceful degradation. Guaranteed graceful degradation provides better quality of service and utilizes system resources more efficiently. In this thesis, we study two efficient techniques for graceful degradation. First, we study a mixed-criticality scheduling technique where graceful degradation is provided in the form of minimum cumulative completion rates. We present two easy-to-implement admission-control algorithms to determine which lo-criticality jobs to complete in hi mode. Scheduling follows deadline virtualization, and two heuristics are given for setting the virtual deadlines. We further study the schedulability analysis and the backward mode-switch conditions, which are proposed and proved in (Guo et al., 2018). Next, we present a probabilistic scheduling technique for mixed-criticality tasks on multiprocessor systems where a system-wide permitted failure probability is known. Schedulability conditions are derived along with the processor allocation scheme. The work extends (Guo et al., 2015), where the probabilistic model was first introduced for independent task scheduling on a uniprocessor platform. We further consider failure dependency between tasks while scheduling on multiprocessor platforms, and provide theoretical analysis to show the correctness of our work. To show the effectiveness of the proposed techniques, we conduct a detailed experimental evaluation under different circumstances. --Abstract, page iii
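
    As a concrete reading of the minimum cumulative completion rate guarantee, the sketch below shows one simple admission rule: complete a lo-criticality job in hi mode only when skipping it would push the cumulative completion ratio below the promised rate. The rule, counters, and the 3/5 rate are illustrative assumptions, not the thesis's actual admission-control algorithms.

```python
# Hedged sketch: admit only the lo-criticality jobs needed to keep the
# cumulative completion rate at or above the promised minimum.
class LoTaskMonitor:
    def __init__(self, min_rate):
        self.min_rate = min_rate   # promised fraction of jobs to complete
        self.released = 0
        self.completed = 0

    def admit_next_job(self):
        """Return True if the next released job must be completed."""
        self.released += 1
        must_admit = (self.completed / self.released) < self.min_rate
        if must_admit:
            self.completed += 1
        return must_admit

mon = LoTaskMonitor(min_rate=3 / 5)
print([mon.admit_next_job() for _ in range(10)])
# -> [True, True, False, True, False, True, True, False, True, False]
```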

    Improving the Schedulability and Quality of Service for Federated Scheduling of Parallel Mixed-Criticality Tasks on Multiprocessors

    This paper presents a federated scheduling algorithm, called MCFQ, for a set of parallel mixed-criticality tasks on multiprocessors. The main feature of the MCFQ algorithm is that it computes different alternatives for assigning each high-utilization, high-criticality task to the processors. Given these alternatives, we carefully select one alternative for each such task so that all the other tasks can be successfully assigned to the remaining processors. Such flexibility in choosing the right alternative has two benefits. First, it gives a higher likelihood of satisfying the total resource requirement of all the tasks while ensuring schedulability. Second, computational slack becomes available by intelligently selecting the alternatives so that the total resource requirement of all the tasks is minimized. Such slack can then be used to improve the QoS of the system (i.e., some low-criticality tasks are never discarded). Our experimental results using randomly generated parallel mixed-criticality task sets show that MCFQ can schedule a much higher number of task sets and can improve the QoS of the system significantly in comparison to the state of the art.
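
    The selection among alternatives can be pictured with a toy example. The sketch below greedily picks the cheapest processor assignment for each heavy high-criticality task and then applies a simplistic utilization check for the remaining light tasks; MCFQ's actual selection and schedulability analysis are more sophisticated, and all numbers here are illustrative.

```python
# Hedged sketch: pick one assignment alternative (core count) per heavy
# HI task, then check the leftover cores against the light-task load.
def select_alternatives(alternatives_per_task, m, light_utilization):
    chosen = [min(alts) for alts in alternatives_per_task]  # cores per heavy task
    leftover = m - sum(chosen)
    feasible = leftover >= 0 and light_utilization <= leftover
    return feasible, chosen, leftover

# Two heavy HI tasks, each with two alternatives (3 or 4 cores, 2 or 3
# cores), on an 8-core platform carrying 2.5 units of light-task load.
print(select_alternatives([[3, 4], [2, 3]], m=8, light_utilization=2.5))
# -> (True, [3, 2], 3)
```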

    Mixed-Criticality Scheduling to Minimize Makespan

    In the mixed-criticality job model, each job is characterized by two execution time parameters, representing a smaller (less conservative) estimate and a larger (more conservative) estimate on its actual, unknown, execution time. Each job is further classified as being either less critical or more critical. The desired execution semantics are that all jobs execute correctly provided every job completes when allowed to execute for up to the smaller of its execution time estimates, whereas if some jobs need to execute beyond their smaller execution time estimates (but not beyond their larger estimates), then only the jobs classified as more critical are required to execute correctly. The scheduling of collections of such mixed-criticality jobs upon identical multiprocessor platforms in order to minimize the makespan is considered here.
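
    For context, a common non-MC baseline for the makespan objective is greedy list scheduling (Graham's rule). The sketch below computes the makespan of such a schedule using only the smaller (less conservative) execution estimates; it is merely a reference point, not the mixed-criticality algorithm studied in this paper.

```python
# Hedged sketch: longest-processing-time-first list scheduling on m
# identical processors, using the LO (smaller) execution estimates.
import heapq

def list_schedule_makespan(exec_times, m):
    loads = [0.0] * m                 # current finish time of each processor
    heapq.heapify(loads)
    for c in sorted(exec_times, reverse=True):
        earliest = heapq.heappop(loads)
        heapq.heappush(loads, earliest + c)   # place job on least-loaded core
    return max(loads)

print(list_schedule_makespan([4, 3, 3, 2, 2, 1], m=2))   # 8.0
```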

    Precise energy efficient scheduling of mixed-criticality tasks & sustainable mixed-criticality scheduling

    In this thesis, the imprecise mixed-criticality (IMC) model is extended to precise scheduling of tasks and integrated with the dynamic voltage and frequency scaling (DVFS) technique to enable energy minimization. The challenge in precise scheduling of MC systems is to simultaneously guarantee the timing correctness of all tasks, hi and lo, under both pessimistic and optimistic (less pessimistic) assumptions. To the best of our knowledge, this is the first work to address the integration of DVFS energy-conserving techniques with precise scheduling of the lo-tasks of the MC model. In this thesis, utilization-based schedulability tests and sufficient conditions for such systems under the Earliest Deadline First with Virtual Deadlines (EDF-VD) scheduling policy are presented. Quantitative results in the form of a speedup bound and an approximation ratio are also proved for the unified model. Extensive experimental studies are conducted to verify the theoretical results as well as the effectiveness of the proposed algorithm.

    In safety-critical systems, it is essential to perform schedulability analysis prior to run-time. Parameters characterizing the run-time workload are generated by pessimistic techniques; hence, adopting conservative estimates may result in systems performing much better than anticipated at run-time. This thesis also addresses the following questions associated with the better performance of the task system: (i) How does a parameter change affect the schedulability of a task set (system)? (ii) If a mixed-criticality system design is deemed schedulable and a specific part or parts of the system are reassigned to be of low criticality, is the system still safe to run? (iii) If a system is presumed to be non-schedulable, is it always beneficial to reduce the criticality of some task? To answer these questions, we not only study the property of sustainability with regard to criticality levels, but also revisit the sustainability of several uniprocessor and multiprocessor scheduling policies with respect to other parameters. --Abstract, page iii
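
    As background for the EDF-VD policy named above, the sketch below implements the classical utilization-based EDF-VD test for implicit-deadline mixed-criticality tasks on a uniprocessor (due to Baruah et al.); the thesis's tests for the precise-scheduling and DVFS extensions differ from this baseline, and the task set shown is made up for illustration.

```python
# Hedged sketch: classical EDF-VD schedulability test (uniprocessor,
# implicit deadlines). Returns (schedulable, deadline-scaling factor x).
def edf_vd_test(tasks):
    """tasks: dicts with keys 'crit' ('LO'/'HI'), 'c_lo', 'c_hi', 'period'."""
    u_lo_lo = sum(t['c_lo'] / t['period'] for t in tasks if t['crit'] == 'LO')
    u_hi_lo = sum(t['c_lo'] / t['period'] for t in tasks if t['crit'] == 'HI')
    u_hi_hi = sum(t['c_hi'] / t['period'] for t in tasks if t['crit'] == 'HI')
    if u_lo_lo + u_hi_hi <= 1:        # plain EDF with HI budgets already works
        return True, None
    if u_lo_lo >= 1:
        return False, None
    x = u_hi_lo / (1 - u_lo_lo)       # scaling factor for HI tasks' deadlines
    ok = x < 1 and x * u_lo_lo + u_hi_hi <= 1
    return ok, (x if ok else None)

tasks = [{'crit': 'LO', 'c_lo': 1, 'c_hi': 1, 'period': 4},
         {'crit': 'HI', 'c_lo': 1, 'c_hi': 2, 'period': 5},
         {'crit': 'HI', 'c_lo': 2, 'c_hi': 4, 'period': 10}]
print(edf_vd_test(tasks))   # (True, 0.533...)
```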
