
    Precise energy efficient scheduling of mixed-criticality tasks & sustainable mixed-criticality scheduling

    In this thesis, the imprecise mixed-criticality (IMC) model is extended to precise scheduling of tasks and integrated with the dynamic voltage and frequency scaling (DVFS) technique to enable energy minimization. The challenge in precise scheduling of MC systems is to simultaneously guarantee timing correctness for all tasks, hi and lo, under both pessimistic and optimistic (less pessimistic) assumptions. To the best of our knowledge, this is the first work to address the integration of DVFS energy-conserving techniques with precise scheduling of lo-tasks in the MC model. Utilization-based schedulability tests and sufficient conditions for such systems under the Earliest Deadline First with Virtual Deadlines (EDF-VD) scheduling policy are presented. Quantitative results, in the form of a speedup bound and an approximation ratio, are also proved for the unified model. Extensive experimental studies are conducted to verify the theoretical results as well as the effectiveness of the proposed algorithm. In safety-critical systems, it is essential to perform schedulability analysis prior to run-time. Parameters characterizing the run-time workload are generated by pessimistic techniques; hence, with such conservative estimates, systems may perform much better at run-time than anticipated. This thesis also addresses the following questions associated with this better-than-anticipated behavior: (i) How does a parameter change affect the schedulability of a task set (system)? (ii) If a mixed-criticality system design is deemed schedulable and specific parts of the system are reassigned to low criticality, is the system still safe to run? (iii) If a system is deemed non-schedulable, is it invariably beneficial to reduce the criticality of some task? To answer these questions, we not only study the property of sustainability with regard to criticality levels, but also revisit the sustainability of several uniprocessor and multiprocessor scheduling policies with respect to other parameters. --Abstract, page iii
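
    A minimal sketch of the classic EDF-VD utilization test that this line of work builds on, for dual-criticality implicit-deadline sporadic tasks; the thesis's extensions (precise lo-task scheduling and DVFS-based energy minimization) are not modeled here, and the task tuples in the example are illustrative.

        def edf_vd_test(lo_tasks, hi_tasks):
            """Each task is (c_lo, c_hi, period); returns (schedulable, deadline-scaling factor x)."""
            u_lo_lo = sum(c / t for c, _, t in lo_tasks)   # lo-tasks, lo-mode budgets
            u_hi_lo = sum(c / t for c, _, t in hi_tasks)   # hi-tasks, lo-mode budgets
            u_hi_hi = sum(c / t for _, c, t in hi_tasks)   # hi-tasks, hi-mode budgets

            if u_lo_lo + u_hi_hi <= 1.0:                   # plain EDF already suffices
                return True, 1.0
            if u_lo_lo >= 1.0:
                return False, None
            x = u_hi_lo / (1.0 - u_lo_lo)                  # virtual deadlines x*T for hi-tasks
            if x * u_lo_lo + u_hi_hi <= 1.0:               # sufficient hi-mode condition
                return True, x
            return False, None

        # Example: one lo-task (C_lo=3, T=10) and one hi-task (C_lo=3, C_hi=8, T=10)
        print(edf_vd_test(lo_tasks=[(3, 3, 10)], hi_tasks=[(3, 8, 10)]))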

    Preemptive uniprocessor scheduling of dual-criticality implicit-deadline sporadic tasks

    Many reactive systems must be designed and analyzed prior to deployment in the presence of considerable epistemic uncertainty: the precise nature of the external environment the system will encounter, as well as the run-time behavior of the platform upon which it is implemented, cannot be predicted with complete certainty prior to deployment. The widely-studied Vestal model for mixed-criticality workloads addresses uncertainties in estimating the worst-case execution time (WCET) of real-time code. Different estimations, at different levels of assurance, are made about these WCET values; it is required that all functionalities execute correctly if the less conservative assumptions hold, while only the more critical functionalities are required to execute correctly in the (presumably less likely) event that the less conservative assumptions fail to hold but the more conservative assumptions do. A generalization of the Vestal model is considered here, in which a degraded (but non-zero) level of service is required for the less critical functionalities even in the event of only the more conservative assumptions holding. An algorithm is derived for scheduling dual-criticality implicit-deadline sporadic task systems specified in this more general model upon preemptive uniprocessor platforms, and proved to be speedup-optimal.
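
    A small sketch of the generalized dual-criticality task model described in this abstract, where lo-criticality tasks keep a degraded but non-zero budget even under the more conservative assumptions; the field names and example values are illustrative, not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class MCTask:
            period: float            # implicit deadline equals the period
            c_lo: float              # budget under the less conservative WCET estimate
            c_hi: float              # budget under the more conservative estimate (hi-tasks)
            is_hi: bool              # criticality level of the task
            c_degraded: float = 0.0  # reduced, non-zero budget a lo-task keeps in hi mode

        # In the classic Vestal model c_degraded is effectively 0; in the generalized
        # model it is strictly positive, so lo-tasks still receive degraded service.
        tasks = [
            MCTask(period=20, c_lo=4, c_hi=4, is_hi=False, c_degraded=2),
            MCTask(period=10, c_lo=2, c_hi=5, is_hi=True),
        ]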

    Mixed-criticality real-time task scheduling with graceful degradation

    Mixed-criticality real-time systems implement functionalities of different degrees of importance (or criticalities) upon a shared platform. In traditional mixed-criticality systems, after a switch to hi mode, only hi-criticality tasks are considered for execution and no service guarantee is provided to lo-criticality tasks. However, with a careful, more optimistic design, a certain degree of service can still be guaranteed to lo-criticality tasks after a mode switch. This concept is broadly known as graceful degradation. Guaranteed graceful degradation provides a better quality of service and utilizes system resources more efficiently. In this thesis, we study two efficient graceful-degradation techniques. First, we study a mixed-criticality scheduling technique where graceful degradation is provided in the form of minimum cumulative completion rates. We present two easy-to-implement admission-control algorithms to determine which lo-criticality jobs to complete in hi mode. Scheduling is done via deadline virtualization, and two heuristics for setting the virtual deadlines are shown. We further study the schedulability analysis and the backward mode-switch conditions, which are proposed and proved in (Guo et al., 2018). Next, we present a probabilistic scheduling technique for mixed-criticality tasks on multiprocessor systems where a system-wide permitted failure probability is known. Schedulability conditions are derived along with a processor-allocation scheme. This work extends (Guo et al., 2015), where the probabilistic model was first introduced for independent-task scheduling on a uniprocessor platform. We further consider failure dependency between tasks when scheduling on multiprocessor platforms, and provide theoretical analysis to show the correctness of our work. To show the effectiveness of the proposed techniques, we conduct a detailed experimental evaluation under different circumstances. --Abstract, page iii
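
    A hypothetical sketch of admission control enforcing a minimum cumulative completion rate for a lo-criticality task after a hi-mode switch; it only illustrates the concept, not the two algorithms, the deadline-virtualization scheme, or the heuristics from the thesis, and the rate and slack parameters are made up.

        import math

        class RateAdmission:
            """Admit enough lo-jobs that at least min_rate of all released jobs complete."""
            def __init__(self, min_rate):
                self.min_rate = min_rate
                self.released = 0
                self.completed = 0

            def admit(self, slack_available):
                """Decide whether the next released lo-job is executed in hi mode."""
                self.released += 1
                required = math.ceil(self.min_rate * self.released)
                if self.completed < required:   # must admit to keep the cumulative guarantee
                    self.completed += 1
                    return True
                if slack_available:             # optionally admit extra jobs when slack exists
                    self.completed += 1
                    return True
                return False

        ac = RateAdmission(min_rate=0.5)
        print([ac.admit(slack_available=False) for _ in range(6)])  # [True, False, True, False, True, False]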

    Combining Task-level and System-level Scheduling Modes for Mixed Criticality Systems

    Different scheduling algorithms for mixed-criticality systems have recently been proposed. Their common denominator is to discard low-criticality tasks whenever high-criticality tasks lack computation resources, which is achieved by switching the scheduling mode from Normal to Critical. We distinguish two main categories of these algorithms: system-level mode switch and task-level mode switch. System-level mode-switch algorithms allow low-criticality (LC) tasks to execute only in Normal mode. Task-level mode-switch algorithms allow an individual high-criticality (HC) task to switch its mode from low (LO) to high (HI) and thereby obtain priority over all LC tasks. This paper investigates an online scheduling algorithm for mixed-criticality systems that supports dynamic mode switches at both the task level and the system level. When an HC task job overruns its LC budget, only that particular job is switched to HI mode. If the job still cannot be accommodated, the system switches to Critical mode. To free up resources for the HC jobs, the LC tasks are degraded by stretching their periods until the job that triggered Critical mode completes its execution; stretching is applied until sufficient resources become available. We have mechanized and implemented the proposed algorithm using Uppaal. To study the efficiency of our scheduling algorithm, we examine a case study and compare our results to state-of-the-art algorithms.
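
    A rough, self-contained sketch of the two-stage mode switch described above: an HC job that overruns its LC budget is first switched to HI mode on its own, and only if the processor cannot accommodate it does the system enter Critical mode and stretch LC task periods. The utilization-based feasibility check and the stretch factor are placeholders, not the paper's analysis or its Uppaal model.

        from dataclasses import dataclass

        @dataclass
        class Task:
            c_lo: float      # LC budget
            c_hi: float      # HI budget (meaningful for HC tasks)
            period: float
            is_hc: bool
            mode: str = "LO"

        def utilization(tasks):
            return sum((t.c_hi if (t.is_hc and t.mode == "HI") else t.c_lo) / t.period
                       for t in tasks)

        def on_budget_overrun(task, tasks, system):
            task.mode = "HI"                 # task-level switch: only this task's job
            if utilization(tasks) <= 1.0:    # placeholder feasibility check
                return
            system["mode"] = "Critical"      # system-level switch as a fallback
            hc_util = utilization([t for t in tasks if t.is_hc])
            while hc_util < 1.0 and utilization(tasks) > 1.0:
                for t in tasks:              # degrade LC tasks by stretching their periods
                    if not t.is_hc:
                        t.period *= 1.25     # illustrative stretch factor
            # periods would be restored once the overrunning job completes (not shown)

        tasks = [Task(2, 6, 10, True), Task(4, 4, 10, False), Task(3, 3, 10, False)]
        system = {"mode": "Normal"}
        on_budget_overrun(tasks[0], tasks, system)
        print(system["mode"], [round(t.period, 2) for t in tasks])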

    Software Fault Tolerance in Real-Time Systems: Identifying the Future Research Questions

    Tolerating hardware faults in modern architectures is becoming a prominent problem due to the miniaturization of hardware components, their increasing complexity, and the necessity to reduce costs. Software-Implemented Hardware Fault Tolerance approaches have been developed to improve system dependability against hardware faults without resorting to custom hardware solutions. However, these approaches come at the expense of making the timing constraints of the applications/activities harder to satisfy from a scheduling standpoint. This paper surveys the current state of the art of fault-tolerance approaches used in the context of real-time systems, identifying the main challenges and the cross-links between these two topics. We propose a joint scheduling-failure analysis model that highlights the formal interactions between software fault-tolerance mechanisms and timing properties. This model allows us to present and discuss many open research questions, with the final aim of spurring future research activities.

    An Optimization Based Design for Integrated Dependable Real-Time Embedded Systems

    Moving away from the traditional federated design paradigm, the integration of mixed-criticality software components onto common computing platforms is increasingly being adopted by the automotive, avionics and control industries. This approach faces new challenges, such as integrating varied functionalities (dependability, responsiveness, power consumption, etc.) under platform resource constraints and preventing error propagation. Based on the principles of model-driven architecture and platform-based design, we present a systematic mapping process for such integration that adheres to a transformation-based design methodology. Our aim is to transform initial platform-independent application specifications into post-integration platform-specific models. In this paper, a heuristic-based resource-allocation approach is presented for the consolidated mapping of safety-critical and non-safety-critical applications onto a common computing platform, meeting in particular dependability/fault-tolerance and real-time requirements. We develop a supporting tool suite for the proposed framework, in which VIATRA (VIsual Automated model TRAnsformations) is used as the transformation tool at different design steps. We validate the process and provide experimental results to show the effectiveness, performance and robustness of the approach.

    Improving the Schedulability and Quality of Service for Federated Scheduling of Parallel Mixed-Criticality Tasks on Multiprocessors

    This paper presents a federated scheduling algorithm, called MCFQ, for a set of parallel mixed-criticality tasks on multiprocessors. The main feature of the MCFQ algorithm is that different alternatives for assigning each high-utilization, high-criticality task to the processors are computed. Given the different alternatives, we carefully select one alternative for each such task so that all the other tasks can be successfully assigned on the remaining processors. Such flexibility in choosing the right alternative has two benefits. First, it gives a higher likelihood of satisfying the total resource requirement of all the tasks while ensuring schedulability. Second, computational slack becomes available by intelligently selecting the alternatives so that the total resource requirement of all the tasks is minimized. This slack can then be used to improve the QoS of the system (i.e., to avoid discarding some low-criticality tasks). Our experimental results using randomly generated parallel mixed-criticality task sets show that MCFQ can schedule a much higher number of task sets and can improve the QoS of the system significantly in comparison to the state of the art.
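
    A minimal sketch of the core-assignment rule that federated scheduling applies to a high-utilization parallel (DAG) task: with total work C, critical-path length L and deadline D, the task needs ceil((C - L) / (D - L)) dedicated cores. MCFQ's actual contribution, computing several assignment alternatives per high-criticality task and picking the combination that frees the most slack, is only hinted at here by evaluating the rule under assumed lo and hi budgets.

        import math

        def cores_needed(work, span, deadline):
            """Dedicated cores for one high-utilization parallel task (classic federated rule)."""
            if span > deadline:
                return None              # infeasible: the critical path alone exceeds the deadline
            if work <= deadline:
                return 0                 # low-utilization task: runs on shared cores instead
            return math.ceil((work - span) / (deadline - span))

        # Two "alternatives" for one hi-criticality task: its lo-mode and hi-mode work estimates.
        print(cores_needed(work=22, span=4, deadline=10))   # lo budget -> 3 dedicated cores
        print(cores_needed(work=40, span=4, deadline=10))   # hi budget -> 6 dedicated cores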

    Using Imprecise Computing for Improved Real-Time Scheduling

    Conventional hard real-time scheduling is often overly pessimistic due to worst-case execution time estimation. The pessimism can be mitigated by exploiting imprecise computing in applications where occasional small errors are acceptable. This opportunity has been investigated in a few previous works, which are restricted to preemptive cases. We study how to make use of imprecise computing in uniprocessor non-preemptive real-time scheduling, which is known to be more difficult than its preemptive counterpart. Several heuristic algorithms are developed for periodic tasks with independent or cumulative errors due to imprecision. Simulation results show that the proposed techniques can significantly improve task schedulability and achieve the desired accuracy-schedulability tradeoff. The benefit of considering imprecise computing is further confirmed by a prototype implementation in Linux. The mixed-criticality system is a popular model for reducing pessimism in real-time scheduling while providing guarantees for critical tasks in the presence of unexpected overruns. However, it is controversial due to some drawbacks. First, all low-criticality tasks are dropped in high-criticality mode, although they are still needed. Second, a single high-criticality job overrun triggers the pessimistic high-criticality mode for all high-criticality tasks, and consequently resource utilization becomes inefficient. We attempt to tackle these two limitations of the mixed-criticality system simultaneously in multiprocessor scheduling, whereas several recent works address them mostly in uniprocessor scheduling. We study how to achieve graceful degradation of low-criticality tasks by continuing their execution with imprecise computing, or even precise computing if there is sufficient utilization slack. Schedulability conditions under this Variable-Precision Mixed-Criticality (VPMC) system model are investigated for partitioned scheduling and global fpEDF-VD scheduling, and a deferred switching protocol is introduced so that the chance of switching to high-criticality mode is significantly reduced. Moreover, we develop a precision optimization approach that maximizes precise computing of low-criticality tasks through a 0-1 knapsack formulation. Experiments are performed through both software simulations and Linux prototyping with consideration of overhead. Schedulability of the proposed methods is studied so that the quality of service for low-criticality tasks is improved while all deadline constraints are guaranteed to be satisfied. The proposed precision optimization can largely reduce computing errors compared to constantly executing low-criticality tasks with imprecise computing in high-criticality mode.
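
    A hedged sketch of the 0-1 knapsack view of the precision optimization mentioned above: each low-criticality task can be upgraded from imprecise to precise execution at the cost of extra utilization, and upgrades are chosen to maximize the total error reduction within the available slack. The discretization granularity, costs and values are illustrative; the thesis's exact formulation may differ.

        def select_precise_tasks(tasks, slack, scale=100):
            """tasks: list of (extra_utilization, error_reduction); slack: spare utilization."""
            cap = int(slack * scale)
            best = [0.0] * (cap + 1)
            picked = [[False] * len(tasks) for _ in range(cap + 1)]
            for i, (du, gain) in enumerate(tasks):
                w = int(round(du * scale))
                for c in range(cap, w - 1, -1):      # classic 0-1 knapsack DP (reverse scan)
                    if best[c - w] + gain > best[c]:
                        best[c] = best[c - w] + gain
                        picked[c] = picked[c - w][:]
                        picked[c][i] = True
            return [i for i, chosen in enumerate(picked[cap]) if chosen], best[cap]

        # Three lo-tasks: (extra utilization to run precisely, error reduction achieved)
        print(select_precise_tasks([(0.10, 3.0), (0.20, 4.0), (0.15, 5.0)], slack=0.30))
        # -> ([0, 2], 8.0): run tasks 0 and 2 precisely within 0.30 spare utilization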

    Scheduling Mixed-Criticality Real-Time Systems

    This dissertation addresses the following question in the design of scheduling policies and resource allocation mechanisms for contemporary embedded systems implemented on integrated computing platforms: in a multitasking system where it is hard to estimate a task's worst-case execution time, how do we assign task priorities so that 1) the safety-critical tasks are guaranteed to complete within a specified length of time, and 2) the non-critical tasks are also guaranteed to complete within a predictable length of time if no task actually exhibits its worst-case behavior? This dissertation answers this question based on the mixed-criticality real-time system model, which defines multiple worst-case execution scenarios and requires a scheduling policy to provide provable timing guarantees to tasks at each criticality level with respect to each type of scenario. Two scheduling algorithms are proposed to serve this model. The OCBP algorithm is aimed at discrete one-shot tasks with an arbitrary number of criticality levels. The EDF-VD algorithm is aimed at recurrent tasks with two criticality levels (safety-critical and non-critical). Both algorithms are proved to optimally minimize the percentage of computational resource waste within two criticality levels. More in-depth investigations into the relationship among the computational resource requirements of different criticality levels are also provided for both algorithms. Doctor of Philosophy
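
    A simplified sketch of OCBP (Own Criticality Based Priority) assignment for a set of one-shot jobs, assuming for brevity that every job is released at time 0 (the full algorithm also handles arbitrary release times). A job may take the lowest remaining priority if it meets its deadline when every remaining job is budgeted at that job's own criticality level; the procedure then recurses. The job set at the end is made up for illustration.

        def ocbp_priorities(jobs):
            """jobs: dict name -> (deadline, criticality, {level: wcet}).
            Returns job names ordered from lowest to highest priority, or None."""
            remaining = dict(jobs)
            lowest_first = []
            while remaining:
                for name, (deadline, crit, _) in remaining.items():
                    # demand if every remaining job runs for its budget at this
                    # candidate's criticality level and the candidate finishes last
                    demand = sum(w.get(crit, 0.0) for _, _, w in remaining.values())
                    if demand <= deadline:
                        lowest_first.append(name)
                        del remaining[name]
                        break
                else:
                    return None          # no job can take the lowest priority: give up
            return lowest_first

        jobs = {
            "A": (10, 1, {1: 3, 2: 3}),  # lo-criticality job
            "B": (12, 2, {1: 2, 2: 6}),  # hi-criticality job
        }
        print(ocbp_priorities(jobs))     # ['A', 'B']: A gets the lowest priority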