32,414 research outputs found

    Multi-Mode Virtualization for Soft Real-Time Systems

    Real-time virtualization is an emerging technology for embedded systems integration and latency-sensitive cloud applications. Earlier real-time virtualization platforms require offline configuration of the scheduling parameters of virtual machines (VMs) based on their worst-case workloads, but this static approach results in pessimistic resource allocation when the workloads in the VMs change dynamically. Here, we present Multi-Mode-Xen (M2-Xen), a real-time virtualization platform for dynamic real-time systems where VMs can operate in modes with different CPU resource requirements at run-time. M2-Xen has three salient capabilities: (1) dynamic allocation of CPU resources among VMs in response to their mode changes, (2) overload avoidance at both the VM and host levels during mode transitions, and (3) fast mode transitions between different modes. M2-Xen has been implemented within Xen 4.8 using the real-time deferrable server (RTDS) scheduler. Experimental results show that M2-Xen maintains real-time performance in different modes, avoids overload during mode changes, and performs fast mode transitions.
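
    As a rough illustration of the first two capabilities, the sketch below recomputes RTDS-style (budget, period) reservations when a VM requests a mode change and rejects the request if the host would be pushed past a utilization cap. The 95% cap, the mode parameters, and all names are illustrative assumptions, not values or interfaces from M2-Xen or Xen's RTDS scheduler.

        # Rough sketch, not M2-Xen's implementation: recompute per-VM (budget, period)
        # reservations on a mode change and reject transitions that would overload the
        # host. The 95% cap and the mode parameters below are illustrative assumptions.

        HOST_CAP = 0.95  # assumed maximum total CPU utilization on one physical core

        def total_utilization(reservations):
            # Each reservation is (budget_us, period_us); its utilization is budget/period.
            return sum(budget / period for budget, period in reservations.values())

        def request_mode_change(reservations, vm, new_reservation):
            """Admit the new (budget, period) only if the host stays under its cap."""
            trial = dict(reservations)
            trial[vm] = new_reservation
            if total_utilization(trial) > HOST_CAP:
                return False, reservations   # reject: the transition would overload the host
            return True, trial               # accept: parameters would then be applied by the scheduler

        # Three VMs (budgets/periods in microseconds); vm2 asks for a heavier mode and is turned away.
        rsv = {"vm1": (4000, 10000), "vm2": (2000, 10000), "vm3": (3000, 10000)}
        accepted, rsv = request_mode_change(rsv, "vm2", (5000, 10000))
        print(accepted, round(total_utilization(rsv), 2))   # False 0.9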

    Performance guarantee for online deadline scheduling in the presence of overload

    Earliest deadline first (EDF) is a widely-used online algorithm for scheduling jobs with deadlines in real-time systems. Yet, existing results on the performance guarantee of EDF are limited to underloaded systems [6,12,14]. This paper initiates the study of EDF for overloaded systems, attaining similar performance guarantees as in the underloaded setting. Specifically, we show that EDF with a simple form of admission control is optimal for scheduling on both uniprocessors and multiprocessors when moderately faster processors are available (our analysis actually admits a tradeoff between speed and extra processors). This is the first result attaining optimality under overload. Another contribution of this paper is an improved analysis of the competitiveness of weighted deadline scheduling.
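
    The sketch below captures the flavour of EDF combined with a simple admission test on a single processor: at release, a job is admitted only if it and all previously admitted jobs can still meet their deadlines when served in deadline order, otherwise it is rejected immediately. This is a generic illustration under simplifying assumptions (all admitted work is already available), not the exact admission policy or speed-augmentation analysis of the paper.

        # Generic sketch of "EDF with admission control" on one processor, under the
        # simplifying assumption that all admitted work is already released. A job is
        # (deadline, remaining_work); it is admitted only if serving all admitted jobs
        # plus the newcomer in deadline order meets every deadline.

        def feasible(jobs, now):
            finish = now
            for deadline, work in sorted(jobs):   # earliest deadline first
                finish += work
                if finish > deadline:
                    return False
            return True

        def try_admit(admitted, now, job):
            if feasible(admitted + [job], now):
                admitted.append(job)
                return True
            return False   # rejected at release, so no time is wasted on a hopeless job

        # At time 0, two jobs fit; a third that would cause a miss is turned away.
        admitted = []
        print(try_admit(admitted, 0, (5, 3)))   # True
        print(try_admit(admitted, 0, (8, 4)))   # True  (3 + 4 = 7 <= 8)
        print(try_admit(admitted, 0, (9, 3)))   # False (finish would be 10 > 9)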

    Flexible Scheduling in Middleware for Distributed Rate-Based Real-Time Applications - Doctoral Dissertation, May 2002

    Distributed rate-based real-time systems, such as process control and avionics mission computing systems, have traditionally been scheduled statically. Static scheduling provides assurance of schedulability prior to run-time and incurs little run-time overhead. However, static scheduling is brittle in the face of unanticipated overload, and treats invocation-to-invocation variations in resource requirements inflexibly. As a consequence, processing resources are often under-utilized in the average case, and the resulting systems are hard to adapt to meet new real-time processing requirements. Dynamic scheduling offers relief from the limitations of static scheduling. However, dynamic scheduling often has a high run-time cost because scheduling decisions must be made on-line. Furthermore, under conditions of overload, tasks can be scheduled dynamically that may never be dispatched, or that upon dispatch would miss their deadlines. We review the implications of these factors for rate-based distributed systems, and posit the necessity of combining static and dynamic approaches to exploit the strengths of each and compensate for the weaknesses of either approach in isolation. We present a general hybrid approach to real-time scheduling and dispatching in middleware that can employ both static and dynamic components. This approach provides (1) feasibility assurance for the most critical tasks, (2) the ability to extend this assurance incrementally to operations in successively lower criticality equivalence classes, (3) the ability to trade off bounds on feasible utilization and dispatching overhead in cases where, for example, execution jitter is a factor or rates are not harmonically related, and (4) overall flexibility to make better use of scarce computing resources and to enforce a wider range of application-specified execution requirements. This approach also meets additional constraints of an increasingly important class of rate-based systems: those with requirements for robust management of real-time performance in the face of rapidly and widely changing operating conditions. To support these requirements, we present a middleware framework that implements the hybrid scheduling and dispatching approach described above, and also provides support for (1) adaptive re-scheduling of operations at run-time and (2) reflective alternation among several scheduling strategies to improve real-time performance in the face of changing operating conditions. Adaptive re-scheduling must be performed whenever operating conditions exceed the ability of the scheduling and dispatching infrastructure to meet the critical real-time requirements of the system under the currently specified rates and execution times of operations. Adaptive re-scheduling relies on the ability to change the rates of execution of at least some operations, and may occur under the control of a higher-level middleware resource manager. Different rates of execution may be specified under different operating conditions, and the number of such possible combinations may be arbitrarily large. Furthermore, adaptive re-scheduling may in turn require notification of rate-sensitive application components. It is therefore desirable to handle variations in operating conditions entirely within the scheduling and dispatching infrastructure when possible.
A rate-based distributed real-time application, or a higher-level resource manager, could thus fall back on adaptive re-scheduling only when it cannot achieve acceptable real-time performance through self-adaptation. Reflective alternation among scheduling heuristics offers a way to tune real-time performance internally, and we offer foundational support for this approach. In particular, run-time observable information such as that provided by our metrics-feedback framework makes it possible to detect that the current scheduling heuristic is underperforming the level of service another could provide; this forms the basis for guided adaptation. Furthermore, we present empirical results for our framework in a realistic avionics mission computing environment. This dissertation makes five contributions in support of flexible and adaptive scheduling and dispatching in middleware. First, we provide a middleware scheduling framework that supports arbitrary and fine-grained composition of static and dynamic scheduling, to assure critical timeliness constraints while improving non-critical performance under a range of conditions. Second, we provide a flexible dispatching infrastructure framework composed of fine-grained primitives, and describe how appropriate configurations can be generated automatically based on the output of the scheduling framework. Third, we describe algorithms to reduce the overhead and duration of adaptive re-scheduling, based on sorting for rate selection and priority assignment. Fourth, we provide timely and efficient performance information through an optimized metrics-feedback framework, to support higher-level reflection and adaptation decisions. Fifth, we present the results of empirical studies to quantify and evaluate the performance of alternative canonical scheduling heuristics across a range of load and load-jitter conditions. These studies were conducted within an avionics mission computing applications framework running on realistic middleware and embedded hardware. The results obtained from these studies (1) demonstrate the potential benefits of reflective alternation among distinct scheduling heuristics at run-time, and (2) suggest performance factors of interest for future work on adaptive control policies and mechanisms using this framework.
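
    A minimal sketch of the hybrid static/dynamic idea is given below: operations in critical classes are queued in statically assigned priority lanes (e.g. rate-monotonic priorities), while non-critical operations in a lower band are ordered dynamically by earliest deadline. The class and method names are illustrative assumptions, not the dissertation's actual middleware API.

        import heapq

        # Minimal sketch of hybrid static/dynamic dispatching: critical operations are
        # served from statically assigned priority lanes, which always preempt a
        # dynamic lane that orders non-critical operations by earliest deadline.
        # All names here are illustrative, not the dissertation's framework API.

        class Dispatcher:
            def __init__(self):
                self.static_lanes = {}   # priority -> FIFO list of critical operations
                self.dynamic_lane = []   # heap of (deadline, op) for non-critical work

            def enqueue_critical(self, priority, op):
                self.static_lanes.setdefault(priority, []).append(op)

            def enqueue_noncritical(self, deadline, op):
                heapq.heappush(self.dynamic_lane, (deadline, op))

            def next_op(self):
                # Critical lanes (lower number = higher priority) always go first.
                for prio in sorted(self.static_lanes):
                    if self.static_lanes[prio]:
                        return self.static_lanes[prio].pop(0)
                if self.dynamic_lane:
                    return heapq.heappop(self.dynamic_lane)[1]
                return None

        d = Dispatcher()
        d.enqueue_noncritical(deadline=40, op="telemetry_update")
        d.enqueue_critical(priority=0, op="flight_control_20hz")
        print(d.next_op())   # critical work first, then EDF among the rest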

    Deadline Missing Prediction Through the Use of Milestones

    The distributed real-time thread is an important concept for distributed real-time systems. Distributed threads are schedulable entities with an end-to-end deadline that traverse nodes, carrying their scheduling context with them. In each node, the thread is locally scheduled according to a local deadline, which is defined by a deadline-partitioning algorithm. Mechanisms for predicting deadline misses are fundamental when corrective actions are incorporated to improve system quality of service. In this work, a mechanism for predicting deadline misses is proposed and evaluated through simulation. To illustrate the main characteristics of the proposed mechanism, experiments are presented that take into account different scenarios of normal load and overload. Simulations show that the proposed deadline-miss prediction mechanism yields good results for improving the overall performance and availability of distributed systems.
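
    The sketch below illustrates one plausible combination of deadline partitioning and milestone-based miss prediction: the end-to-end deadline is divided among nodes in proportion to their estimated processing times, and a likely miss is flagged as soon as a milestone is reached after its cumulative local deadline. The proportional rule and the names used are assumptions for illustration; the paper's partitioning algorithm and prediction test may differ.

        # Illustrative sketch of milestone-based deadline-miss prediction for a
        # distributed thread. The end-to-end deadline is split across nodes in
        # proportion to their estimated processing times (one simple partitioning
        # rule), and each milestone's arrival time is checked against its
        # cumulative local deadline.

        def partition_deadline(end_to_end_deadline, estimated_costs):
            total = sum(estimated_costs)
            cumulative, local_deadlines = 0.0, []
            for cost in estimated_costs:
                cumulative += end_to_end_deadline * cost / total
                local_deadlines.append(cumulative)
            return local_deadlines   # cumulative deadline at each milestone

        def predict_miss(elapsed_at_milestone, milestone_index, local_deadlines):
            """Flag a likely end-to-end miss as soon as a milestone is reached late."""
            return elapsed_at_milestone > local_deadlines[milestone_index]

        # Example: 100 ms end-to-end deadline over three nodes costing 20, 30, 50 ms.
        deadlines = partition_deadline(100.0, [20.0, 30.0, 50.0])
        print(deadlines)                          # [20.0, 50.0, 100.0]
        print(predict_miss(27.5, 0, deadlines))   # True: first milestone reached late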

    Overload handling in soft real-time systems: a case study using ROOM/ObjecTime

    In real-time systems, deadlines are imposed on response times. Missing a deadline in a hard real-time system is equivalent to system failure, while in soft real-time systems occasional minor delays in responding to events are acceptable; only when the delays are frequent or considerable does performance degrade, up to the point of system malfunction. Developers of real-time systems must therefore pay special attention to system performance, and the scheduling of tasks becomes critically important as it directly affects that performance. While real-time system developers have traditionally used low-level software programming paradigms, the rising complexity of real-time software is creating demand for CASE tools that support development through a combination of visual modeling and design augmented with code segments. One such tool is ObjecTime Developer, based on the ROOM (Real-Time Object-Oriented Modeling) development methodology. In this thesis we study the usability and effectiveness of this tool for building a soft real-time system, with special attention to the performance and behaviour of the system under overload conditions. As a working example we develop a radar simulator system, which observes air targets whose speeds, number, and distances are constantly changing, creating a constantly varying load on the entire system with dynamically appearing objects. The major contribution of this thesis is the development of custom overload-handling policies to improve performance and an illustration of how they may be implemented within the framework of the tool. The native scheduling policy of ObjecTime, the classical priority-based policy, and our own policy based on the period of execution are studied.
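
    Purely as an illustration of such policies, the sketch below orders pending messages either by a static priority or by the sending actor's execution period (one reading of the period-of-execution policy above), and optionally sheds messages whose deadlines have already passed when the system is overloaded. The field names and the shedding rule are assumptions, not part of ObjecTime's actual run-time system.

        import time

        # Illustrative only: order pending messages by a configurable key (static
        # priority or the sender's execution period) and, under overload, shed
        # messages that are already past their deadlines. Names and the shedding
        # rule are assumptions, not ObjecTime's API.

        def dispatch(pending, key="priority", now=None, shed_late=False):
            now = time.monotonic() if now is None else now
            if shed_late:
                pending = [m for m in pending if m["deadline"] > now]   # drop stale work
            order = (lambda m: m["priority"]) if key == "priority" else (lambda m: m["period"])
            return sorted(pending, key=order)

        msgs = [
            {"id": "track_update", "priority": 2, "period": 0.1, "deadline": 5.0},
            {"id": "ui_refresh",   "priority": 1, "period": 1.0, "deadline": 2.0},
        ]
        print([m["id"] for m in dispatch(msgs, key="period", now=3.0, shed_late=True)])
        # ['track_update']: the stale ui_refresh message is shed under overload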

    Statistical Rate Monotonic Scheduling

    In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling of periodic tasks with highly variable execution times and statistical QoS requirements. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test ensures that, using the SRMS scheduling algorithm, a given periodic task set can share a given resource (e.g. a processor, communication medium, or switching device) without violating any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed-priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it allows for job admission control, so that jobs that are not guaranteed to finish by their deadlines can be rejected as soon as they are released, enabling the system to take necessary compensating actions; admission control also preserves resources, since no time is spent on jobs that would miss their deadlines anyway. Third, SRMS integrates reservation-based and best-effort resource scheduling seamlessly: reservation-based scheduling ensures delivery of the minimal requested QoS, while best-effort scheduling ensures that unused reserved bandwidth is not wasted but is instead used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has constant overhead, in the sense that the complexity of the scheduler does not depend on the number of tasks in the system. We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g. RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient yet optimal scheduler for task sets with harmonic periods.
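
    For reference, the classical Liu and Layland result that SRMS generalizes states that a set of n independent periodic tasks with worst-case execution times C_i and periods T_i is schedulable under RMS if

        U \;=\; \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right),

    a bound that tends to \ln 2 \approx 0.693 as n grows. SRMS relaxes the fixed worst-case C_i assumption, characterizing each task's demand statistically and guaranteeing a statistically specified QoS (e.g. a requested fraction of deadlines met) instead.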