    Mixed-criticality real-time task scheduling with graceful degradation

    “Mixed-criticality real-time systems implement functionalities of different degrees of importance (or criticalities) upon a shared platform. In traditional mixed-criticality systems, after a switch to hi mode no service guarantee is made to lo-criticality tasks: only hi-criticality tasks are considered for execution. However, with a careful, optimistic design, a certain degree of service guarantee can be provided to lo-criticality tasks after a mode switch. This concept is broadly known as graceful degradation. Guaranteed graceful degradation provides better quality of service and uses system resources more efficiently. In this thesis, we study two efficient graceful-degradation techniques. First, we study a mixed-criticality scheduling technique where graceful degradation is provided in the form of minimum cumulative completion rates. We present two easy-to-implement admission-control algorithms to determine which lo-criticality jobs to complete in hi mode. Scheduling is done via deadline virtualization, and two heuristics are presented for setting the virtual deadlines. We further study the schedulability analysis and the backward mode-switch conditions proposed and proved in (Guo et al., 2018). Next, we present a probabilistic scheduling technique for mixed-criticality tasks on multiprocessor systems where a system-wide permitted failure probability is known. The schedulability conditions are derived along with the processor allocation scheme. The work extends (Guo et al., 2015), where the probabilistic model was first introduced for independent task scheduling on a uniprocessor platform. We further consider the failure dependency between tasks while scheduling on multiprocessor platforms. We provide related theoretical analysis to show the correctness of our work. To show the effectiveness of our proposed techniques, we conduct a detailed experimental evaluation under different circumstances”--Abstract, page iii
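    The abstract does not spell out the deadline-virtualization heuristics themselves, so as a reference point the sketch below shows the classic EDF-VD scaling rule for implicit-deadline task sets: hi-criticality tasks receive a virtual relative deadline shrunk by a factor x computed from the lo- and hi-mode utilizations. It is a hypothetical stand-in, not the thesis's own algorithms.

```python
# Minimal sketch of the classic EDF-VD virtual-deadline rule (not the thesis's
# heuristics or admission-control algorithms) for implicit-deadline tasks.

def edf_vd_virtual_deadlines(tasks):
    """tasks: list of dicts with 'name', 'C_lo', 'C_hi', 'T', 'crit' ('LO'/'HI').
    Returns (x, {name: virtual relative deadline}) or None if the test fails."""
    u_lo_lo = sum(t['C_lo'] / t['T'] for t in tasks if t['crit'] == 'LO')
    u_hi_lo = sum(t['C_lo'] / t['T'] for t in tasks if t['crit'] == 'HI')
    u_hi_hi = sum(t['C_hi'] / t['T'] for t in tasks if t['crit'] == 'HI')

    if u_lo_lo >= 1.0 or u_lo_lo + u_hi_lo > 1.0:
        return None                      # not even lo-mode feasible under EDF
    x = u_hi_lo / (1.0 - u_lo_lo)        # deadline-scaling factor for HI tasks
    if x * u_lo_lo + u_hi_hi > 1.0:      # hi-mode schedulability condition
        return None
    return x, {t['name']: (x * t['T'] if t['crit'] == 'HI' else t['T'])
               for t in tasks}
```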

    Managing Uncertainty: A Case for Probabilistic Grid Scheduling

    Grid technology is evolving into a global, service-oriented architecture, a universal platform for delivering future high-demand computational services. Strong adoption of the Grid and the utility computing concept is leading to an increasing number of Grid installations running a wide range of applications of different size and complexity. In this paper we address the problem of delivering deadline/economy-based scheduling in a heterogeneous application environment using statistical properties of jobs' historical executions and their associated meta-data. This approach is motivated by a study of six months of computational load generated by Grid applications in a multi-purpose Grid cluster serving a community of twenty e-Science projects. The observed job statistics, resource utilisation and user behaviour are discussed in the context of the management approaches and models most suitable for supporting a probabilistic and autonomous scheduling architecture.
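    The abstract stops short of the scheduling model itself; purely as an illustration (function and parameter names below are hypothetical), one way to use job history for deadline-aware decisions is to estimate the probability of finishing in time from an empirical distribution of past runtimes and pick the cheapest resource that clears a confidence threshold.

```python
# Hypothetical illustration of history-driven, deadline-aware resource
# selection; not the paper's actual model.
from bisect import bisect_right

def p_meet_deadline(history, time_budget):
    """history: sorted list of past runtimes (seconds) for this job class on a
    given resource; returns the empirical P(runtime <= time_budget)."""
    if not history:
        return 0.0
    return bisect_right(history, time_budget) / len(history)

def choose_resource(resources, time_budget, confidence=0.9):
    """resources: list of (name, cost, sorted_runtime_history). Returns the
    cheapest resource whose history suggests the deadline is met with at least
    the requested confidence, or None if no resource qualifies."""
    feasible = [(cost, name) for name, cost, hist in resources
                if p_meet_deadline(hist, time_budget) >= confidence]
    return min(feasible)[1] if feasible else None
```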

    Meta-QoS performance of earliest-deadline-first and rate-monotonic scheduling of smoothed video data in a client-server environment

    In this paper we present an extensive performance study of two modified EDF and RM scheduling algorithms that are enhanced to provide quality-of-service (QoS) guarantees for smoothed video data. With a probabilistic definition of QoS, we incorporate admission-control conditions into the two algorithms. Furthermore, we include a counter-based scheduling module as the core scheduling mechanism, which adaptively adjusts the actual QoS levels assigned to requests. Our theoretical analysis of the two enhanced algorithms, called QEDF and QRM, shows that the QRM algorithm is more robust than the QEDF algorithm under different workload and utilization conditions. We also propose a new metric called meta-QoS to quantify the overall performance of a packet scheduler given a set of simultaneous requests. In our experiments, we find that the QRM algorithm can sustain a rather stable level of meta-QoS even when the workload and utilization levels are increased. On the other hand, the QEDF algorithm is found to be less desirable for a high level of utilization and a large number of requests.
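    The exact admission-control conditions and counter rules are not given in the abstract; the following is only a plausible sketch of counter-based QoS bookkeeping, in which each stream carries a credit counter that drifts below zero when its on-time fraction falls under its probabilistic QoS target, flagging it for priority service.

```python
# Illustrative counter-based QoS bookkeeping (class and fields are hypothetical,
# not the paper's QEDF/QRM module). Each stream has a probabilistic QoS target
# q: the long-run fraction of frames that must meet their deadlines.

class QosCounter:
    def __init__(self, q_target):
        self.q = q_target       # required fraction of on-time frames, e.g. 0.95
        self.credit = 0.0       # > 0: ahead of target, < 0: behind target

    def record(self, met_deadline):
        # Meeting a deadline earns (1 - q) credit, missing one costs q, so the
        # counter hovers near zero exactly when the on-time rate equals q.
        self.credit += (1.0 - self.q) if met_deadline else -self.q

    def needs_service(self):
        # Streams that have fallen behind their QoS target get priority.
        return self.credit < 0.0
```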

    Generalizing List Scheduling for Stochastic Soft Real-time Parallel Applications

    Advanced architecture processors provide features such as caches and branch prediction that result in improved, but variable, execution time of software. Hard real-time systems require tasks to complete within timing constraints. Consequently, hard real-time systems are typically designed conservatively through the use of tasks' worst-case execution times (WCETs) in order to compute deterministic schedules that guarantee tasks' execution within given time constraints. This use of pessimistic execution time assumptions provides real-time guarantees at the cost of decreased performance and resource utilization. In soft real-time systems, however, meeting deadlines is not an absolute requirement (i.e., missing a few deadlines does not severely degrade system performance or cause catastrophic failure). In such systems, a guaranteed minimum probability of completing by the deadline is sufficient. Therefore, there is considerable latitude in such systems for improving resource utilization and performance as compared with hard real-time systems, through the use of more realistic execution time assumptions. Given probability distribution functions (PDFs) representing tasks' execution time requirements, and tasks' communication and precedence requirements represented as a directed acyclic graph (DAG), this dissertation proposes and investigates algorithms for constructing non-preemptive stochastic schedules. New PDF manipulation operators developed in this dissertation are used to compute tasks' start and completion time PDFs during schedule construction. PDFs of the schedules' completion times are also computed and used to systematically trade the probability of meeting end-to-end deadlines for schedule length and jitter in task completion times. Because of the NP-hard nature of the non-preemptive DAG scheduling problem, the new stochastic scheduling algorithms extend traditional heuristic list scheduling and genetic list scheduling algorithms for DAGs by using PDFs instead of fixed time values for task execution requirements. The stochastic scheduling algorithms also account for delays caused by communication contention, typically ignored in prior DAG scheduling research. Extensive experimental results demonstrate the efficacy of the new algorithms in constructing stochastic schedules. Results also show that through the use of the techniques developed in this dissertation, the probability of meeting deadlines can be usefully traded for performance and jitter in soft real-time systems.
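    The dissertation's own PDF manipulation operators are not reproduced in the abstract; the sketch below shows the two textbook operations a stochastic list scheduler of this kind needs over discretized execution-time PDFs, assuming independence: convolution to sum random times, and a CDF product to take the max over predecessor finish times.

```python
# Minimal discrete-PDF operators (index = time slot, value = probability),
# under an independence assumption; not the dissertation's own operators,
# and communication contention is ignored here.
import numpy as np

def add_pdfs(p, q):
    """PDF of X + Y for independent X, Y: discrete convolution."""
    return np.convolve(p, q)

def max_pdfs(p, q):
    """PDF of max(X, Y) for independent X, Y, via the product of CDFs."""
    n = max(len(p), len(q))
    p = np.pad(p, (0, n - len(p)))
    q = np.pad(q, (0, n - len(q)))
    cdf = np.cumsum(p) * np.cumsum(q)    # P(max <= t) = P(X <= t) * P(Y <= t)
    return np.diff(cdf, prepend=0.0)

# Example: a task's finish-time PDF is the max over its predecessors' finish
# times (data-ready time) convolved with its own execution-time PDF.
exec_pdf   = np.array([0.0, 0.7, 0.3])   # needs 1 slot w.p. 0.7, 2 slots w.p. 0.3
ready_pdf  = max_pdfs(np.array([0.0, 0.5, 0.5]), np.array([0.0, 0.0, 1.0]))
finish_pdf = add_pdfs(ready_pdf, exec_pdf)
```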

    Real-Time Guarantees For Wireless Networked Sensing And Control

    Wireless networks are increasingly being explored for mission-critical sensing and control in emerging domains such as connected and automated vehicles, Industry 4.0, and smart cities. In wireless networked sensing and control (WSC) systems, reliable and real-time delivery of sensed data plays a crucial role in control decisions, since out-of-date information is often irrelevant and can even have negative effects on the system. Since WSC differs dramatically from traditional real-time (RT) systems due to its wireless nature, new design objectives and perspectives are necessary to achieve real-time guarantees. First, we proposed the Optimal Node Activation Multiple Access (ONAMA) scheduling protocol, which activates as many nodes as possible while ensuring transmission reliability (in terms of packet delivery ratio). We implemented and tested ONAMA on two testbeds, each with 120+ sensor nodes. Second, we proposed algorithms to address the problem of clustering heterogeneous reliability requirements into a limited set of service levels. Our solutions are optimal, and they also provide guaranteed reliability, which is critical for wireless sensing and control. Third, we proposed a probabilistic real-time wireless communication framework that effectively integrates real-time scheduling theory with wireless communication. The per-packet probabilistic real-time QoS is formally modeled. Through R3 mapping, the upper-layer requirement and the lower-layer link reliability are translated into the number of transmission opportunities needed. Through optimal real-time communication scheduling together with admission testing and traffic period optimization, system utilization is maximized while schedulability is maintained. Finally, we further investigated the problem of minimizing delay variation (i.e., jitter) while ensuring that packets are delivered by their deadlines.
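    As a rough illustration of the R3-style translation (not the framework's actual mapping), the snippet below computes how many transmission opportunities are needed so that, under an independent-loss assumption, the probability of at least one successful transmission meets a per-packet reliability requirement.

```python
# Back-of-the-envelope mapping from a per-packet reliability requirement R and
# a per-transmission success probability p to a number of transmission
# opportunities n, assuming independent attempts: smallest n with
# 1 - (1 - p)^n >= R. Not the framework's actual R3 mapping.
import math

def tx_opportunities(req_reliability, link_success_prob):
    if not (0.0 < link_success_prob <= 1.0) or not (0.0 <= req_reliability < 1.0):
        raise ValueError("probabilities out of range")
    if link_success_prob == 1.0:
        return 1
    return math.ceil(math.log(1.0 - req_reliability) /
                     math.log(1.0 - link_success_prob))

# Example: a 99.9% delivery requirement over a link with 50% per-try success
# needs tx_opportunities(0.999, 0.5) == 10 transmission opportunities.
```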