145 research outputs found
Scheduling policies and system software architectures for mixed-criticality computing
The mixed-criticality model of computation is being increasingly adopted in timing-sensitive systems. The model not only ensures that the most critical tasks in a system never fail, but also aims for better system resource utilization under normal conditions. In this report, we describe the widely used mixed-criticality task model and fixed-priority scheduling algorithms for the model on uniprocessors. Because the mixed-criticality task model and its scheduling policies demand it, isolation among tasks, both temporal and spatial, is one of the main requirements from the system design point of view. Different virtualization techniques have been used to design system software architectures with isolation as the goal. We discuss a few such system software architectures that are being used, or can be used, for the mixed-criticality model of computation.
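As a rough illustration of the task model and fixed-priority policies the report surveys, the sketch below encodes Vestal-style tasks with one WCET estimate per criticality level and a single LO-to-HI mode switch that drops LO-criticality tasks after a budget overrun. The class names, fields, and the simplified switch rule are ours, not the report's.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: float
    priority: int      # smaller number = higher fixed priority
    criticality: str   # 'LO' or 'HI'
    wcet: dict         # one WCET estimate per criticality level

class FixedPriorityMCScheduler:
    """Uniprocessor fixed-priority scheduling with a single LO -> HI
    mode switch, in the spirit of Vestal's model (simplified)."""

    def __init__(self, tasks):
        self.tasks = sorted(tasks, key=lambda t: t.priority)
        self.mode = 'LO'

    def eligible(self):
        # After the switch, only the most critical tasks stay eligible,
        # which is how the model protects them from overruns elsewhere.
        if self.mode == 'HI':
            return [t for t in self.tasks if t.criticality == 'HI']
        return list(self.tasks)

    def observe(self, task, executed):
        # A HI-criticality job exceeding its optimistic LO-level budget
        # triggers the system-wide mode switch.
        if task.criticality == 'HI' and executed > task.wcet['LO']:
            self.mode = 'HI'

tasks = [
    Task('flight_ctrl', period=10, priority=1, criticality='HI',
         wcet={'LO': 2.0, 'HI': 4.0}),
    Task('telemetry', period=20, priority=2, criticality='LO',
         wcet={'LO': 3.0, 'HI': 3.0}),
]
sched = FixedPriorityMCScheduler(tasks)
sched.observe(tasks[0], executed=2.5)                  # LO budget overrun
print(sched.mode, [t.name for t in sched.eligible()])  # HI ['flight_ctrl']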
ATMP: An Adaptive Tolerance-based Mixed-criticality Protocol for Multi-core Systems
The challenge of mixed-criticality scheduling is to keep tasks of higher criticality running in case of resource shortages caused by faults. Traditionally, mixed-criticality scheduling has focused on methods to handle faults where tasks overrun their optimistic worst-case execution time (WCET) estimate. In this paper we present the Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), which generalises the concept of mixed-criticality scheduling to also handle faults of a different nature, such as the failure of cores in a multi-core system. ATMP is an adaptation method triggered by resource shortage at runtime. The first step of ATMP is to re-partition the tasks onto the available cores, and the second step is to optimise the utility at each core using the tolerance-based real-time computing model (TRTCM). The evaluation shows that the utility optimisation of ATMP can achieve a smoother degradation of service compared to simply abandoning tasks.
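The two ATMP steps named in the abstract can be pictured roughly as follows; the worst-fit partitioning heuristic, the stretch factor, and the task fields are illustrative placeholders, not the TRTCM utility optimisation the paper actually defines.

def atmp_adapt(tasks, alive_cores):
    """Sketch of the two ATMP steps: re-partition tasks onto the cores that
    are still alive, then degrade (rather than drop) tasks on overloaded
    cores by stretching their periods."""
    # Step 1: partition by utilisation, most critical tasks placed first
    # (a worst-fit heuristic assumed here for illustration).
    assignment = {core: [] for core in alive_cores}
    load = {core: 0.0 for core in alive_cores}
    for t in sorted(tasks, key=lambda t: (-t['crit'], -t['wcet'] / t['period'])):
        core = min(load, key=load.get)              # least-loaded core
        assignment[core].append(t)
        load[core] += t['wcet'] / t['period']

    # Step 2: on overloaded cores, stretch the periods of the least critical
    # tasks first, trading utility for feasibility instead of dropping tasks.
    for core, ts in assignment.items():
        for t in sorted(ts, key=lambda t: t['crit']):
            if load[core] <= 1.0:
                break
            old_u = t['wcet'] / t['period']
            t['period'] *= 1.5                      # placeholder stretch factor
            load[core] += t['wcet'] / t['period'] - old_u
    return assignment

tasks = [
    {'name': 'ctrl', 'crit': 2, 'wcet': 4.0, 'period': 10.0},
    {'name': 'nav',  'crit': 1, 'wcet': 6.0, 'period': 12.0},
    {'name': 'log',  'crit': 0, 'wcet': 5.0, 'period': 10.0},
]
print(atmp_adapt(tasks, alive_cores=['core0']))     # one core left after a failure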
Combining Task-level and System-level Scheduling Modes for Mixed Criticality Systems
Different scheduling algorithms for mixed-criticality systems have recently been proposed. The common denominator of these algorithms is to discard low-criticality tasks whenever high-criticality tasks lack computation resources. This is achieved upon a switch of the scheduling mode from Normal to Critical. We distinguish two main categories of algorithms: system-level mode switch and task-level mode switch. System-level mode-switch algorithms allow low-criticality (LC) tasks to execute only in Normal mode. Task-level mode-switch algorithms switch the mode of an individual high-criticality (HC) task from low (LO) to high (HI) so that it obtains priority over all LC tasks. This paper investigates an online scheduling algorithm for mixed-criticality systems that supports dynamic mode switches at both the task level and the system level. When an HC job overruns its LC budget, only that particular job is switched to HI mode. If the job cannot be accommodated, the system switches to Critical mode. To free resources for the HC jobs, the LC tasks are degraded by stretching their periods until the job that triggered Critical mode completes its execution. The stretching is carried out until sufficient resources are available. We have mechanized and implemented the proposed algorithm using Uppaal. To study the efficiency of our scheduling algorithm, we examine a case study and compare our results to state-of-the-art algorithms.
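The combined rule can be sketched as: promote the overrunning HC job to HI mode on its own first, and only if it still cannot be accommodated switch the whole system to Critical mode and stretch the LC periods. The admission test, field names, and stretch factor below are simplified placeholders, not the analysis the authors mechanized in Uppaal.

def utilisation(tasks):
    return sum(t['wcet'] / t['period'] for t in tasks)

def on_lc_budget_overrun(job, tasks, system):
    """Task-level switch first; system-level switch only as a fallback.
    'job' is assumed to be an entry of 'tasks'."""
    job['mode'] = 'HI'                    # promote only the overrunning job
    job['wcet'] = job['wcet_hi']          # its budget grows to the HI estimate
    if utilisation(tasks) <= 1.0:         # placeholder admission test
        return system                     # the job was accommodated

    # Otherwise switch the whole system to Critical mode and degrade the LC
    # tasks by stretching their periods until enough capacity is freed.
    system['mode'] = 'Critical'
    lc_tasks = [t for t in tasks if t['crit'] == 'LC']
    while lc_tasks and utilisation(tasks) > 1.0:
        for t in lc_tasks:
            t['period'] *= 1.25           # placeholder stretch factor
    # The stretched periods would be restored once the job that triggered
    # Critical mode completes (not shown here).
    return system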
A Survey of Research into Mixed Criticality Systems
This survey covers research into mixed criticality systems that has been published since Vestal's seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single processor analysis (including fixed priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards.
Adaptive Mid-term and Short-term Scheduling of Mixed-criticality Systems
A mixed-criticality real-time system is a real-time system with multiple tasks classified according to their criticality. Research on mixed-criticality systems started in order to provide an effective and cost-efficient a priori verification process for safety-critical systems. The higher the criticality of a task within a system, the more strongly the system should guarantee its required level of service. However, such a model poses new challenges with respect to scheduling and fault tolerance within real-time systems. Currently, mixed-criticality scheduling protocols severely degrade lower-criticality tasks in case of resource shortage in order to provide the required level of service for the most critical ones. The current research challenge in this field is to devise robust scheduling protocols that minimise the impact on less critical tasks.
This dissertation introduces two approaches, one short-term and the other medium-term, to appropriately allocate computing resources to tasks within mixed-criticality systems on both uniprocessor and multiprocessor platforms.
The short-term strategy consists of a protocol named Lazy Bailout Protocol (LBP) for scheduling mixed-criticality task sets on single-core architectures. Scheduling decisions are made about tasks that are active in the ready queue and have to be dispatched to the CPU. LBP minimises the service degradation for lower-criticality tasks by giving them background execution during system idle time. I then refined LBP with variants that aim to further increase the service level provided to lower-criticality tasks; however, this is achieved at the cost of either more offline analysis or more complexity at runtime.
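The idea behind LBP's treatment of lower-criticality work can be pictured as a two-queue dispatcher in which demoted jobs are parked rather than abandoned, and run only when the processor would otherwise idle. The names and the dispatch rule below are illustrative, not the protocol's actual rules.

from collections import deque

class LazyBailoutQueue:
    """Lower-criticality jobs hit by a bailout are parked for background
    execution instead of being abandoned (illustrative sketch)."""

    def __init__(self):
        self.ready = deque()        # jobs scheduled normally
        self.background = deque()   # demoted LC jobs, served only at idle time

    def bailout(self, lc_job):
        self.background.append(lc_job)   # demote rather than discard

    def pick_next(self):
        if self.ready:
            return self.ready.popleft()
        if self.background:              # CPU would otherwise be idle
            return self.background.popleft()
        return None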
The second approach, named Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), decides at runtime which tasks are allocated to the active cores according to the available resources. ATMP makes it possible to optimise the overall system utility by tuning the system workload when computing capacity runs short at runtime. Unlike the majority of current mixed-criticality approaches, ATMP also allows higher-criticality tasks to be degraded smoothly in order to keep lower-criticality ones allocated.
Considerations on the Least Upper Bound for Mixed-Criticality Real-Time Systems
5th Brazilian Symposium on Computing Systems Engineering (SBESC 2015), 3-6 November 2015, Foz do Iguaçu, Brazil.
Real-time mixed-criticality systems (MCS) are designed so that tasks with different criticality levels share the same computing platform. Scheduling mechanisms must ensure that high-criticality tasks are safe independently of lower-criticality tasks' behaviour. In this paper we provide theoretical schedulability properties for MCS by showing that: (a) the least upper bound on processor utilisation of MCS is in general null for both uniprocessor and multiprocessor platforms; (b) this bound lies in the interval [ln 2, 2(√2 − 1)] if higher-criticality tasks do not have periods larger than lower-criticality ones; and (c) if the tasks of these uniprocessor systems have harmonic periods, the least upper bound reaches 1.
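Restated in symbols, writing U_lub for the least upper bound on processor utilisation, the three claims read as follows (the notation is ours, the claims are the abstract's):

\begin{align*}
U_{\mathrm{lub}} &= 0 && \text{in general, on uniprocessor and multiprocessor platforms;}\\
U_{\mathrm{lub}} &\in \bigl[\ln 2,\; 2(\sqrt{2}-1)\bigr] && \text{if no higher-criticality task has a longer period than a lower-criticality one;}\\
U_{\mathrm{lub}} &= 1 && \text{if, in addition, the uniprocessor task set has harmonic periods.}
\end{align*}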
Robust Mixed-Criticality Systems
Certification authorities require correctness and survivability. In the temporal domain this requires a convincing argument that all deadlines will be met under error-free conditions, and that when certain defined errors occur the behaviour of the system is still predictable and safe. This means that occasional execution-time overruns should be tolerated, and that where more severe errors occur, levels of graceful degradation should be supported. With mixed-criticality systems, fault tolerance must be criticality aware, i.e. some tasks should degrade less than others. In this paper a quantitative notion of robustness is defined, and it is shown how fixed-priority task scheduling can be structured to maximise the likelihood of a system remaining fail operational or fail robust (the latter implying that an occasional job may be skipped if all other deadlines are met). Analysis is developed for fail-operational and fail-robust behaviour, optimal priority ordering is addressed, and an experimental evaluation is described. Overall, the approach presented allows robustness to be balanced against schedulability. A designer would thus be able to explore the design space so defined.
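The paper's quantitative robustness notion and its criticality-aware analysis are more involved, but the flavour can be sketched with standard fixed-priority response-time analysis: measure how large an execution-time overrun one task can suffer before any deadline is missed. The field names and the overrun model below are our simplification, not the paper's definitions.

import math

def schedulable_with_overrun(tasks, i, extra):
    """Response-time analysis (tasks sorted by descending priority) when
    task i overruns its execution-time budget by 'extra'."""
    for k in range(len(tasks)):
        c = tasks[k]['wcet'] + (extra if k == i else 0.0)
        r = c
        while True:
            interference = sum(
                math.ceil(r / tasks[j]['period']) *
                (tasks[j]['wcet'] + (extra if j == i else 0.0))
                for j in range(k))
            r_next = c + interference
            if r_next > tasks[k]['deadline']:
                return False               # some deadline is missed
            if r_next == r:
                break                      # fixed point: task k meets its deadline
            r = r_next
    return True

def robustness(tasks, i, step=0.5):
    """Illustrative robustness of task i: the largest overrun it can incur
    while every deadline is still met."""
    extra = 0.0
    while schedulable_with_overrun(tasks, i, extra + step):
        extra += step
    return extra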
Mixed-Criticality Scheduling on Multiprocessors using Task Grouping
Real-time systems increasingly run a mix of tasks with different criticality levels: for instance, an unmanned aerial vehicle has multiple software functions with different safety criticality levels but runs them on a single, shared computational platform. In addition, these systems are increasingly deployed on multiprocessor platforms because this can help to reduce their cost, space, weight, and power consumption. To assure the safety of such systems, several mixed-criticality scheduling algorithms have been developed that can provide mixed-criticality timing guarantees. However, most existing algorithms have two important limitations: they do not guarantee strong isolation among the high-criticality tasks, and they offer poor real-time performance for the low-criticality tasks.