    Resource-Efficient Scheduling Of Multiprocessor Mixed-Criticality Real-Time Systems

    Timing guarantees are critical to ensure the correctness of embedded software systems that interact with the physical environment. As modern embedded real-time systems evolve, they face three challenges: resource constraints, mixed criticality, and multiprocessors. This dissertation focuses on resource-efficient scheduling techniques for mixed-criticality systems on multiprocessor platforms. While Mixed-Criticality (MC) scheduling has been extensively studied on uniprocessor platforms, the problem on multiprocessor platforms has been largely open. Multiprocessor algorithms are broadly classified into two categories: global and partitioned. Global scheduling approaches use a global run-queue and migrate tasks among processors for improved schedulability. Partitioned scheduling approaches use per-processor run-queues and can reduce preemption/migration overheads in real implementations. Existing global scheduling schemes for MC systems have suffered from low schedulability. Our goal in the first work is to improve the schedulability of MC scheduling algorithms. Inspired by the fluid scheduling model in the regular (non-MC) domain, we have developed the MC-Fluid scheduling algorithm, which executes each task at criticality-dependent rates. We have evaluated MC-Fluid in terms of the processor speedup factor: MC-Fluid is a multiprocessor MC scheduling algorithm with a speedup factor of 4/3, which is known to be optimal. In other words, MC-Fluid can schedule any feasible mixed-criticality task system if each processor is sped up by a factor of 4/3. Although MC-Fluid is speedup-optimal, it is not directly implementable on multiprocessor platforms of real processors because of its fractional-processor assumption, under which multiple tasks can execute on one processor at the same time. In the second work, we have considered the characteristic of a real processor (executing only one task at a time) and have developed the MC-Discrete scheduling algorithm for regular (non-fluid) scheduling platforms. We have shown that MC-Discrete is also speedup-optimal. While our previous two works consider global scheduling approaches, our last work considers partitioned scheduling approaches, which are widely used in practice because of their low implementation overheads. In addition to partitioned scheduling, this work addresses a limitation of conventional MC scheduling algorithms, which drop all low-criticality tasks once actual execution times exceed a certain threshold. In practice, the system designer wants to execute these tasks as much as possible. To address this issue, we have developed the MC-ADAPT scheduling framework for uniprocessor platforms to drop as few low-criticality tasks as possible. Extending the framework to partitioned multiprocessor platforms, we further reduce the dropping of low-criticality tasks by allowing migration of low-criticality tasks at the moment of a criticality switch. We have evaluated the quality of the task-dropping solution in terms of the speedup factor. In existing work, the speedup factor has been used to evaluate MC scheduling algorithms in terms of schedulability under the worst-case scheduling scenario. In this work, we apply the speedup factor to evaluate MC scheduling algorithms in terms of the quality of their task-dropping solutions under various MC scheduling scenarios. We have derived that MC-ADAPT has a speedup factor of 1.618 for its task-dropping solution.
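
    To make the fluid-scheduling idea concrete, the sketch below is a minimal Python illustration of a dual-criticality fluid-rate feasibility check: each task executes at a constant fractional rate, per-mode rate sums must fit within the m unit-speed processors, and a high-criticality task must still finish its larger budget if the criticality switch happens at the worst moment. The MCTask/fluid_rates_feasible names and the specific per-task condition are illustrative assumptions based on this description, not the MC-Fluid rate-assignment algorithm from the dissertation.

        from dataclasses import dataclass

        @dataclass
        class MCTask:
            """Implicit-deadline sporadic task with per-criticality budgets."""
            c_lo: float      # budget assumed in low-criticality mode
            c_hi: float      # budget assumed in high-criticality mode (>= c_lo for HI tasks)
            period: float
            is_hi: bool      # True if the task is high-criticality

            @property
            def u_lo(self):
                return self.c_lo / self.period

            @property
            def u_hi(self):
                return self.c_hi / self.period

        def fluid_rates_feasible(tasks, rates_lo, rates_hi, m):
            """Check a candidate fluid-rate assignment on m unit-speed processors.

            rates_lo[i] is task i's execution rate before the criticality switch,
            rates_hi[i] its rate afterwards (only meaningful for HI tasks).
            This is a simplified sufficient check, not the MC-Fluid optimality argument.
            """
            # No rate may exceed the speed of a single processor.
            if any(r > 1.0 for r in rates_lo) or any(r > 1.0 for r in rates_hi):
                return False
            # Total demanded rate may not exceed the platform capacity in either mode.
            if sum(rates_lo) > m:
                return False
            if sum(r for t, r in zip(tasks, rates_hi) if t.is_hi) > m:
                return False
            for t, r_lo, r_hi in zip(tasks, rates_lo, rates_hi):
                if not t.is_hi:
                    # A LO task only has to keep up with its LO-mode utilisation.
                    if r_lo < t.u_lo:
                        return False
                else:
                    # Worst case for a HI task: the switch happens just as it finishes
                    # c_lo.  Time for c_lo at r_lo plus time for (c_hi - c_lo) at r_hi
                    # must fit inside one period.
                    if t.u_lo / r_lo + (t.u_hi - t.u_lo) / r_hi > 1.0:
                        return False
            return True

        # Example: two HI tasks and one LO task on m = 2 processors.
        tasks = [MCTask(2, 4, 10, True), MCTask(3, 6, 12, True), MCTask(4, 4, 20, False)]
        print(fluid_rates_feasible(tasks, [0.4, 0.5, 0.2], [0.6, 0.7, 0.0], m=2))  # True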

    Adaptive Mid-term and Short-term Scheduling of Mixed-criticality Systems

    A mixed-criticality real-time system is a real-time system with multiple tasks classified according to their criticality. Research on mixed-criticality systems began as a way to provide an effective and cost-efficient a priori verification process for safety-critical systems. The higher the criticality of a task within a system, the more strongly the system should guarantee its required level of service. However, such a model poses new challenges with respect to scheduling and fault tolerance within real-time systems. Current mixed-criticality scheduling protocols severely degrade lower-criticality tasks in case of resource shortage in order to provide the required level of service for the most critical ones. The key research challenge in this field is to devise robust scheduling protocols that minimise the impact on less critical tasks. This dissertation introduces two approaches, one short-term and the other medium-term, to appropriately allocate computing resources to tasks within mixed-criticality systems on both uniprocessor and multiprocessor platforms. The short-term strategy consists of a protocol named Lazy Bailout Protocol (LBP) for scheduling mixed-criticality task sets on single-core architectures. Scheduling decisions are made about tasks that are active in the ready queue and have to be dispatched to the CPU. LBP minimises the service degradation of lower-criticality tasks by giving them background execution during system idle time. I then refined LBP with variants that further increase the service level provided to lower-criticality tasks; however, this comes at the cost of either additional offline analysis or increased runtime complexity. The second approach, named Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), decides at runtime which tasks to allocate to the active cores according to the available resources. ATMP optimises overall system utility by tuning the system workload when computing capacity runs short at runtime. Unlike the majority of current mixed-criticality approaches, ATMP also allows higher-criticality tasks to be degraded smoothly so that lower-criticality tasks can remain allocated.
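
    As a rough illustration of the background-execution idea behind LBP (lower-criticality work only runs when the core would otherwise idle), here is a toy single-core dispatcher. The class and queue structure are hypothetical and omit the actual bailout accounting that LBP performs; they only show the dispatch preference described above.

        import heapq

        class IdleTimeDispatcher:
            """Toy single-core dispatcher: guaranteed jobs first, background jobs
            only when the core would otherwise idle (illustrative, not LBP itself)."""

            def __init__(self):
                self._guaranteed = []   # (deadline, job) min-heap, EDF order
                self._background = []   # degraded lower-criticality jobs, FIFO

            def release(self, job, deadline, guaranteed):
                if guaranteed:
                    heapq.heappush(self._guaranteed, (deadline, job))
                else:
                    self._background.append(job)

            def pick_next(self):
                """Return the job to run next, or None if nothing is ready."""
                if self._guaranteed:
                    return heapq.heappop(self._guaranteed)[1]   # earliest deadline first
                if self._background:
                    return self._background.pop(0)              # idle time -> background work
                return None

        # Usage: lower-criticality work is released as non-guaranteed and is only
        # dispatched while no guaranteed job is pending.
        d = IdleTimeDispatcher()
        d.release("hi_job", deadline=10, guaranteed=True)
        d.release("lo_job", deadline=15, guaranteed=False)
        print(d.pick_next(), d.pick_next(), d.pick_next())   # hi_job lo_job None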

    Preemptive Uniprocessor Scheduling of Mixed-Criticality Sporadic Task Systems

    Hierarchical Scheduling for Real-Time Periodic Tasks in Symmetric Multiprocessing

    In this paper, we present a new hierarchical scheduling framework for periodic tasks on symmetric multiprocessor (SMP) platforms. Partitioned and global scheduling are the two main approaches used by SMP-based systems, where global scheduling is recommended for overall performance and partitioned scheduling is recommended for hard real-time performance. Our approach combines the global and partitioned approaches of traditional SMP-based schedulers to provide hard real-time performance guarantees for critical tasks and improved response times for soft real-time tasks. We implemented the framework as part of VxWorks and confirmed the results using a real-time benchmark application, in which response times for soft real-time tasks were improved while hard real-time performance was still provided.
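
    The combination of partitioned queues for critical hard real-time tasks and a global queue for soft real-time tasks could be sketched roughly as follows. This is an assumed, simplified dispatch structure for illustration, not the VxWorks implementation evaluated in the paper.

        from collections import deque

        class HybridScheduler:
            """Per-core queues for hard real-time tasks, one shared queue for
            soft real-time tasks (illustrative sketch of the hybrid approach)."""

            def __init__(self, num_cores):
                self.partitioned = [deque() for _ in range(num_cores)]  # hard RT, pinned
                self.global_queue = deque()                             # soft RT, any core

            def add_hard(self, task, core):
                self.partitioned[core].append(task)   # admission test omitted here

            def add_soft(self, task):
                self.global_queue.append(task)

            def next_for_core(self, core):
                """A core always serves its own hard real-time work first and only
                pulls from the global soft real-time queue when its partition is empty."""
                if self.partitioned[core]:
                    return self.partitioned[core].popleft()
                if self.global_queue:
                    return self.global_queue.popleft()
                return None

        sched = HybridScheduler(num_cores=2)
        sched.add_hard("ctrl_loop", core=0)
        sched.add_soft("logger")
        print(sched.next_for_core(1))   # logger (core 1 has no hard work pending)
        print(sched.next_for_core(0))   # ctrl_loop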

    MCFlow: Middleware for Mixed-Criticality Distributed Real-Time Systems

    Traditional fixed-priority scheduling analysis for periodic/sporadic task sets is based on the assumption that all tasks are equally critical to the correct operation of the system. Therefore, every task has to be schedulable under the scheduling policy, and estimates of tasks' worst-case execution times must be conservative in case a task runs longer than usual. To address the significant under-utilization of a system's resources under normal operating conditions that can arise from these assumptions, several mixed-criticality scheduling approaches have been proposed. However, to date there has been no quantitative comparison of system schedulability or run-time overhead for the different approaches. In this dissertation, we present what is, to our knowledge, the first side-by-side implementation and evaluation of those approaches, for periodic and sporadic mixed-criticality tasks on uniprocessor or distributed systems, under a mixed-criticality scheduling model that is common to all of them. To make a fair evaluation of mixed-criticality scheduling, we also address some previously open issues and propose modifications to improve the schedulability and correctness of particular approaches. To facilitate the development and evaluation of mixed-criticality applications, we have designed and developed a distributed real-time middleware, called MCFlow, for mixed-criticality end-to-end tasks running on multi-core platforms. The research presented in this dissertation provides the following contributions to the state of the art in real-time middleware: (1) an efficient component model through which dependent subtask graphs can be configured flexibly for execution within a single core, across cores of a common host, or spanning multiple hosts; (2) support for optimizations to inter-component communication that reduce data copying without sacrificing the ability to execute subtasks in parallel; (3) a strict separation of timing and functional concerns so that they can be configured independently; (4) an event dispatching architecture that uses lock-free algorithms where possible to reduce memory contention, CPU context switching, and priority inversion; and (5) empirical evaluations of MCFlow itself and of different mixed-criticality scheduling approaches, both on a single host and end-to-end across multiple hosts. The results of our evaluation show that, in terms of basic distributed real-time behavior, MCFlow performs comparably to the state-of-the-art TAO real-time object request broker when only one core is used and outperforms TAO when multiple cores are involved. We also identify and categorize the use cases under which different mixed-criticality scheduling approaches are preferable.
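
    To give a flavour of what an end-to-end task configured as a graph of dependent subtasks with flexible placement might look like, here is a small hypothetical description of such a graph. The field names and the placement model (a core on a named host) are assumptions made for illustration and are not MCFlow's actual component or configuration API.

        from dataclasses import dataclass, field

        @dataclass
        class Subtask:
            name: str
            wcet_ms: float        # worst-case execution time estimate
            host: str             # which machine runs this subtask
            core: int             # core on that host (placement is configurable)
            criticality: str      # e.g. "HI" or "LO"

        @dataclass
        class EndToEndTask:
            """An end-to-end task as a graph of dependent subtasks that may run
            on one core, across cores, or across hosts (hypothetical sketch)."""
            name: str
            period_ms: float
            subtasks: list = field(default_factory=list)
            edges: list = field(default_factory=list)   # (producer, consumer) pairs

            def spans_hosts(self):
                return len({s.host for s in self.subtasks}) > 1

        sensor_path = EndToEndTask(
            name="sensor_to_actuator",
            period_ms=50.0,
            subtasks=[
                Subtask("sample", 2.0, host="nodeA", core=0, criticality="HI"),
                Subtask("filter", 5.0, host="nodeA", core=1, criticality="HI"),
                Subtask("log",    3.0, host="nodeB", core=0, criticality="LO"),
            ],
            edges=[("sample", "filter"), ("filter", "log")],
        )
        print(sensor_path.spans_hosts())   # True: communication crosses hosts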

    Utilization-Based Scheduling of Flexible Mixed-Criticality Real-Time Tasks

    Mixed-criticality models are an emerging paradigm for the design of real-time systems because of their significantly improved resource efficiency. However, formal mixed-criticality models have traditionally been characterized by two impractical assumptions: once any high-criticality task overruns, all low-criticality tasks are suspended and all other high-criticality tasks are assumed to exhibit high-criticality behavior at the same time. In this paper, we propose a more realistic mixed-criticality model, called the flexible mixed-criticality (FMC) model, in which these two issues are addressed in a combined manner. In this new model, only the overrunning task itself is assumed to exhibit high-criticality behavior, while other high-criticality tasks remain in the same mode as before. The guaranteed service levels of low-criticality tasks are gracefully degraded with the overruns of high-criticality tasks. We derive a utilization-based technique to analyze the schedulability of this new mixed-criticality model under EDF-VD scheduling. During runtime, the proposed test condition serves as an important criterion for dynamic service-level tuning, by means of which the maximum available execution budget for low-criticality tasks can be determined directly with minimal overhead while guaranteeing mixed-criticality schedulability. Experiments demonstrate the effectiveness of the FMC scheme compared with state-of-the-art techniques.
    Comment: This paper has been submitted to IEEE Transactions on Computers (TC) on Sept-09th-201
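
    For context, the classical utilization-based EDF-VD test that this line of work builds on can be written down compactly, as in the sketch below: pick the smallest virtual-deadline scaling factor x that keeps the low-criticality mode schedulable, then check the high-criticality mode with the scaled deadlines. FMC's own per-task, gracefully degrading condition generalizes this; the code shows only the classical dual-criticality test, not the paper's exact condition.

        def edf_vd_schedulable(tasks):
            """Classical EDF-VD sufficient test on one processor.

            tasks: list of (u_lo, u_hi, is_hi) with per-mode utilizations.
            Returns (schedulable, x) where x is the virtual-deadline scaling
            factor applied to high-criticality tasks (x = 1 means plain EDF suffices).
            """
            u_lo_lo = sum(u_lo for u_lo, _, is_hi in tasks if not is_hi)   # LO tasks, LO mode
            u_hi_lo = sum(u_lo for u_lo, _, is_hi in tasks if is_hi)       # HI tasks, LO mode
            u_hi_hi = sum(u_hi for _, u_hi, is_hi in tasks if is_hi)       # HI tasks, HI mode

            # If everything fits even with the HI budgets, plain EDF is enough.
            if u_lo_lo + u_hi_hi <= 1.0:
                return True, 1.0
            if u_lo_lo >= 1.0:
                return False, None
            # Smallest scaling factor keeping the LO mode schedulable:
            #   u_lo_lo + u_hi_lo / x <= 1
            x = u_hi_lo / (1.0 - u_lo_lo)
            # HI-mode condition with virtual deadlines x * T_i:
            #   x * u_lo_lo + u_hi_hi <= 1
            if 0.0 < x <= 1.0 and x * u_lo_lo + u_hi_hi <= 1.0:
                return True, x
            return False, None

        # Example task set: (u_lo, u_hi, is_hi)
        print(edf_vd_schedulable([(0.3, 0.3, False), (0.2, 0.5, True), (0.1, 0.3, True)]))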

    Mixed-Criticality Job Models: A Comparison

    The Vestal model is widely used in the real-time scheduling community for representing mixed-criticality real-time workloads. This model requires that multiple WCET estimates, one for each criticality level in the system, be obtained for each task. Burns suggests that being required to obtain too many WCET estimates may place an undue burden on system developers, and proposes a simplification of the Vestal model that makes do with just two WCET estimates per task. Burns makes a convincing case in favor of adopting this simplified model; here, we report on our attempts at comparing the two models, Vestal's original model and Burns' simplification, with regard to expressiveness as well as schedulability and the tractability of determining schedulability.
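
    The modelling difference at issue can be made concrete with a small sketch: a Vestal-style task carries one WCET estimate per criticality level up to its own, while the simplified model keeps only two estimates per task. Which two estimates are retained, and the collapse mapping below, are assumptions made purely for illustration; the precise definitions are in the models being compared.

        from dataclasses import dataclass
        from typing import Dict

        # Criticality levels from lowest to highest assurance (example ordering).
        LEVELS = ["D", "C", "B", "A"]

        @dataclass
        class VestalTask:
            level: str                  # the task's own criticality level
            period: float
            wcet: Dict[str, float]      # one WCET estimate per level up to its own,
                                        # non-decreasing with assurance level

        @dataclass
        class TwoEstimateTask:
            """Two-estimate simplification: only two WCET values are kept per task.
            Which two are kept is an assumption made here for illustration."""
            level: str
            period: float
            wcet_normal: float          # least rigorous estimate
            wcet_rigorous: float        # estimate at the task's own level

        def collapse(task: VestalTask) -> TwoEstimateTask:
            lowest = min(task.wcet, key=LEVELS.index)     # least rigorous level present
            return TwoEstimateTask(task.level, task.period,
                                   wcet_normal=task.wcet[lowest],
                                   wcet_rigorous=task.wcet[task.level])

        t = VestalTask(level="B", period=100.0, wcet={"D": 6.0, "C": 7.0, "B": 9.0})
        print(collapse(t))   # a level-B task keeps 6.0 and 9.0, drops the estimate at C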