44 research outputs found

    End-to-End Scheduling Strategies for Aperiodic Tasks in Middleware

    Many mission-critical distributed real-time applications must handle aperiodic tasks with hard end-to-end deadlines. Existing middleware such as RT-CORBA lacks schedulability analysis and run-time scheduling mechanisms that can provide real-time guarantees to aperiodic tasks. This paper makes the following contributions to the state of the art for end-to-end aperiodic scheduling in middleware. First, we compare two approaches to aperiodic scheduling, the deferrable server and the aperiodic utilization bound, using representative workloads. Numerical results show that the deferrable server analysis is less pessimistic than the aperiodic utilization bounds when applied offline. Second, we propose a practical approach to tuning deferrable servers for end-to-end tasks. Third, we describe deferrable server mechanisms we have developed for TAO's federated event channel. Finally, we present empirical results from a Linux testbed that demonstrate the efficiency of those deferrable server mechanisms.
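
    A minimal sketch may help make the deferrable server idea above concrete (Python, hypothetical names; not the paper's TAO mechanisms): the server holds a budget that is replenished to full capacity at every period boundary and is consumed only while aperiodic requests execute, so unused capacity remains available for requests arriving later in the period.

        # Minimal deferrable-server sketch (illustrative only): a server with
        # budget Q and period T serves aperiodic requests; the budget is reset
        # to Q at every period boundary and consumed only while serving.
        class DeferrableServer:
            def __init__(self, budget, period):
                self.capacity = budget        # Q: maximum budget per period
                self.period = period          # T: replenishment period
                self.budget = budget          # remaining budget this period
                self.next_replenish = period  # time of the next replenishment

            def advance_to(self, t):
                """Replenish the budget at every period boundary up to time t."""
                while t >= self.next_replenish:
                    self.budget = self.capacity
                    self.next_replenish += self.period

            def serve(self, t, demand):
                """Serve an aperiodic request arriving at time t that needs
                `demand` units; return how much the current budget allows."""
                self.advance_to(t)
                granted = min(demand, self.budget)
                self.budget -= granted
                return granted

        # Example: budget 2 every 10 time units.
        server = DeferrableServer(budget=2, period=10)
        print(server.serve(t=3, demand=1))   # -> 1, one unit of budget left
        print(server.serve(t=5, demand=3))   # -> 1, budget exhausted until t=10
        print(server.serve(t=12, demand=2))  # -> 2, replenished at t=10

    Tuning a deferrable server then amounts to choosing the budget and period so that the aperiodic end-to-end deadlines can be met without invalidating the guarantees given to the periodic load, which is the choice the paper's analysis is meant to justify.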

    Real-Time Systems: An Introduction and the State-of-the-Art

    This encyclopedia article gives an overview of the broad area of real-time systems. This task is daunting because real-time systems are everywhere, and yet no generally accepted definition differentiates real-time systems from non-real-time systems.

    Towards achieving execution time predictability in web services middleware

    Web services middleware is typically designed and optimised for throughput: requests are accepted unconditionally and no differentiation is made in how they are processed. Many implementations use the thread-pool pattern to execute requests in parallel with processor sharing, and clusters hosting web services dispatch requests only to balance the load among the executors. Such throughput optimisations work against the predictability of execution. Processor sharing causes execution times to grow with the number of concurrent requests, making it impossible to predict or control the execution of a request. Existing work fails to address the need for predictability in web service execution: some approaches achieve a level of differentiated processing but do not treat predictability as their main quality attribute; some give probabilistic guarantees on service levels, which are inconsistent from a predictability perspective; and a few achieve predictable execution times, though only in closed systems where request properties are known at design time. Web services, however, operate on the Internet, where request properties are largely unknown in advance. This thesis investigates the problem of achieving predictable execution times in web services. We introduce the notion of a processing deadline for service execution, which the web services engine must adhere to in order to complete each request in a repeatable and consistent manner. Meeting these deadlines is made possible by three main features. First, a deadline-based scheduling algorithm ensures that processing deadlines are followed. Second, a laxity-based analytical model and the admission control algorithm built on it select requests for execution, resulting in a wider range of laxities so that more requests with overlapping executions can be scheduled together. Third, a real-time scheduler component introduced into the server uses a priority model to schedule the execution of requests by controlling the individual worker threads in custom-made thread pools. Predictability of execution in cluster-based deployments is further supported by four dispatching algorithms that take the request deadlines and the laxity property into account during dispatching. A performance model derived for a similar system approximates the waiting times, where requests with smaller deadlines (and hence higher priority) experience shorter waits than requests with longer deadlines. These techniques are implemented in web services middleware in standalone and cluster-based configurations and are evaluated against their unmodified counterparts and against techniques such as round-robin and class-based dispatching to measure the gain in predictability. Empirical evidence indicates that the enhancements enable the middleware to meet more than 90% of the deadlines while accepting at least 20% of the requests in high-traffic conditions, and that they prevent the middleware from becoming overloaded under heavy traffic while maintaining throughput comparable to the unmodified versions. Analytical and simulation results for the performance model confirm that deadline-based preemptive scheduling yields a better balance of waiting times, in which high-priority requests experience lower waiting times while lower-priority requests are not overly starved, compared to techniques such as static priority ordering, First-Come-First-Served, Round-Robin, and non-preemptive deadline-based scheduling.
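
    As a rough illustration of the laxity-based admission control and deadline-based scheduling described above (a simplified sketch with hypothetical names, not the thesis' actual model or algorithms): a request is admitted only if its deadline leaves room for its own estimated execution time plus the work already admitted ahead of it, and admitted requests are then dispatched earliest-deadline-first.

        import heapq

        # Illustrative sketch: requests carry an execution-time estimate and an
        # absolute processing deadline. Admission keeps the laxity non-negative;
        # dispatch is earliest-deadline-first over the admitted requests.
        class Request:
            def __init__(self, name, exec_time, deadline):
                self.name = name
                self.exec_time = exec_time
                self.deadline = deadline

        class AdmissionController:
            def __init__(self):
                self.queue = []     # min-heap of (deadline, name, request)
                self.backlog = 0.0  # admitted but not yet dispatched work

            def try_admit(self, req, now):
                # Laxity once the already-admitted work is accounted for.
                laxity = req.deadline - now - self.backlog - req.exec_time
                if laxity < 0:
                    return False    # would miss its deadline: reject
                self.backlog += req.exec_time
                heapq.heappush(self.queue, (req.deadline, req.name, req))
                return True

            def next_request(self):
                # Earliest-deadline-first dispatch; completion tracking omitted.
                if not self.queue:
                    return None
                _, _, req = heapq.heappop(self.queue)
                self.backlog -= req.exec_time
                return req

        ac = AdmissionController()
        print(ac.try_admit(Request("a", exec_time=2.0, deadline=5.0), now=0))  # True
        print(ac.try_admit(Request("b", exec_time=4.0, deadline=5.0), now=0))  # False
        print(ac.next_request().name)                                          # "a"

    In the middleware described above, logic of this kind sits in front of custom-made thread pools, with the scheduler component mapping deadlines onto a priority model for the individual worker threads rather than running requests one at a time as the sketch does.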

    Multi-resource management in embedded real-time systems

    This thesis addresses the problem of online multi-resource management in embedded real-time systems. It focuses on three research questions. The first question concentrates on how to design an efficient hierarchical scheduling framework that supports independent development and analysis of component-based systems and provides temporal isolation between components. The second question investigates how to change the mapping of resources to tasks and components during run-time efficiently and predictably, and how to analyze the latency of such a system mode change in systems comprised of several scalable components. The third question deals with the scheduling and analysis of a set of parallel tasks with real-time constraints which require simultaneous access to several different resources. To provide temporal isolation we chose a reservation-based approach. We first focused on processor reservations, where timed events play an important role; common examples are task deadlines, periodic release of tasks, budget replenishment and budget depletion. Efficient timer management is therefore essential. We investigated the overheads in traditional timer management techniques and presented a mechanism called Relative Timed Event Queues (RELTEQ), which provides an expressive set of primitives at a low processor and memory overhead. We then leveraged RELTEQ to create an efficient, modular and extensible design for enhancing a real-time operating system with periodic tasks, polling, idling periodic and deferrable servers, and a two-level fixed-priority Hierarchical Scheduling Framework (HSF). The HSF design provides temporal isolation and supports independent development of components by separating the global and local scheduling and allowing each server to define a dedicated scheduler. Furthermore, the design addresses the system overheads inherent to an HSF and prevents undesirable interference between components. It limits the interference of inactive servers on the system level by means of wakeup events and a combination of inactive server queues with a stopwatch queue. Our implementation is modular and requires only a few modifications of the underlying operating system. We then investigated scalable components operating in a memory-constrained system. We first showed how to reduce the memory requirements in a streaming multimedia application, based on a particular priority assignment of the different components along the processing chain. Then we investigated adapting the resource provisions to tasks during run-time, referred to as mode changes. We presented a novel mode change protocol called Swift Mode Changes, which relies on Fixed Priority with Deferred Preemption Scheduling to reduce the mode change latency bound compared to existing protocols based on Fixed Priority Preemptive Scheduling. We then presented a new partitioned parallel-task scheduling algorithm called Parallel-SRP (PSRP), which generalizes MSRP for multiprocessors, together with the corresponding schedulability analysis for the problem of multi-resource scheduling of parallel tasks with real-time constraints. We showed that the algorithm is deadlock-free, derived a maximum bound on blocking, and used this bound as a basis for a schedulability test. We then demonstrated how PSRP can exploit the inherent parallelism of a platform comprised of multiple heterogeneous resources. Finally, we presented Grasp, a visualization toolset that aims to provide insight into the behavior of complex real-time systems. Its flexible plugin infrastructure allows for easy extension with custom visualization and analysis techniques for automatic trace verification. Its capabilities include the visualization of hierarchical multiprocessor systems, including partitioned and global multiprocessor scheduling with migrating tasks and jobs, communication between jobs via shared memory and message passing, and hierarchical scheduling in combination with multiprocessor scheduling. For tracing distributed systems with asynchronous local clocks, Grasp also supports the synchronization of traces from different processors during visualization and analysis.
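
    The RELTEQ mechanism mentioned above can be sketched roughly as follows (Python, illustrative only, not the original implementation): each queued event stores its expiration time relative to the preceding event, so advancing time and firing expired events only ever touches the head of the queue.

        # Rough sketch of a relative timed event queue: deltas are stored
        # relative to the predecessor, so the absolute time of event i is the
        # sum of the first i+1 deltas.
        class RelativeEventQueue:
            def __init__(self):
                self.events = []  # list of [delta, label]

            def insert(self, absolute_time, label, now=0):
                remaining = absolute_time - now
                i = 0
                # Walk the queue, subtracting deltas, to find the insertion point.
                while i < len(self.events) and remaining >= self.events[i][0]:
                    remaining -= self.events[i][0]
                    i += 1
                self.events.insert(i, [remaining, label])
                # The event that now follows becomes relative to the new one.
                if i + 1 < len(self.events):
                    self.events[i + 1][0] -= remaining

            def pop_due(self, elapsed):
                """Advance time by `elapsed` ticks and return expired labels."""
                expired = []
                while self.events and self.events[0][0] <= elapsed:
                    elapsed -= self.events[0][0]
                    expired.append(self.events.pop(0)[1])
                if self.events:
                    self.events[0][0] -= elapsed
                return expired

        q = RelativeEventQueue()
        q.insert(10, "budget depletion")
        q.insert(4, "task release")
        q.insert(7, "deadline check")
        print(q.pop_due(8))  # -> ['task release', 'deadline check']

    Storing relative times keeps the stored values small and limits the per-tick work to the head event, which is in the spirit of the low processor and memory overhead claimed above.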

    Flexible multi-policy scheduling based on CPU inheritance

    Traditional processor scheduling mechanisms in operating systems are fairly rigid, often supporting only one fixed scheduling policy, or, at most, a few "scheduling classes" whose implementations are closely tied together in the OS kernel. This paper presents CPU inheritance scheduling, a novel processor scheduling framework in which arbitrary threads can act as schedulers for other threads. Widely different scheduling policies can be implemented under the framework, and many different policies can coexist in a single system, providing much greater scheduling flexibility. Modular, hierarchical control can be provided over the processor utilization of arbitrary administrative domains, such as processes, jobs, users, and groups, and the CPU resources consumed can be accounted for and attributed accurately. Applications as well as the OS can implement customized local scheduling policies; the framework ensures that all the different policies work together logically and predictably. As a side effect, the framework also cleanly addresses priority inversion by providing a generalized form of priority inheritance that automatically works within and among multiple diverse scheduling policies. CPU inheritance scheduling extends naturally to multiprocessors, and supports processor management techniques such as processor affinity [7] and scheduler activations [1]. Experimental results and simulations indicate that this framework can be provided with negligible overhead in typical situations, and fairly small (5-10%) performance degradation even in scheduling-intensive situations.
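
    A rough sketch of the donation idea behind CPU inheritance scheduling (Python, hypothetical classes; the actual framework also handles blocking, preemption, accounting, and multiprocessors): scheduler threads donate the CPU to one of their clients, a client may itself be a scheduler, and dispatching follows the donation chain down to an ordinary runnable thread, which is how different policies coexist in one hierarchy.

        # Illustrative sketch: "picking" a thread walks the donation chain from
        # the root scheduler down to a leaf thread that actually gets the CPU.
        class Thread:
            def __init__(self, name):
                self.name = name

            def pick(self):
                return self  # an ordinary thread runs itself

        class RoundRobinScheduler(Thread):
            """A scheduler thread that donates the CPU to its clients in turn."""
            def __init__(self, name, clients):
                super().__init__(name)
                self.clients = list(clients)

            def pick(self):
                client = self.clients.pop(0)
                self.clients.append(client)
                return client.pick()  # donate; the client may also be a scheduler

        class FixedPriorityScheduler(Thread):
            """A scheduler thread that donates to its highest-priority client."""
            def __init__(self, name, clients_by_priority):
                super().__init__(name)
                self.clients = clients_by_priority  # ordered, highest first

            def pick(self):
                return self.clients[0].pick()

        # A fixed-priority root whose top-priority client is itself a
        # round-robin scheduler over two interactive threads.
        interactive = RoundRobinScheduler("rr", [Thread("shell"), Thread("editor")])
        root = FixedPriorityScheduler("root", [interactive, Thread("batch")])
        for _ in range(3):
            print(root.pick().name)  # shell, editor, shell

    The generalized priority inheritance mentioned in the abstract arises from the same mechanism: a thread that blocks on a resource can pass the CPU donated to it on to the thread holding that resource.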

    Operating System Concepts for Reconfigurable Computing: Review and Survey

    One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. This article gives a historical review and a summary of ideas and key concepts for including reconfigurable computing aspects in operating systems. It also presents an overview of published and available operating systems targeting the area of reconfigurable computing. The purpose of this article is to identify and summarize common patterns among those systems that can be seen as a de facto standard. Furthermore, open problems not covered by these already available systems are identified.