
    Imprecise Computation Model, Synchronous Periodic Real-time Task Sets and Total Weighted Error

    This paper proposes two scheduling approaches, one-level and two-level scheduling, for synchronous periodic real-time task sets based on the Imprecise Computation Model. The defining requirement of a real-time system is to react to an event within a limited amount of time. Sometimes the available time and resources are not enough for the computations to complete within their deadlines, but they are still enough to produce approximate results. The Imprecise Computation Model is motivated by this idea, and gives the flexibility to trade off precision for timeliness. In this model a task is logically decomposed into a mandatory and an optional subtask. Only the mandatory subtask is required to complete by its deadline, while the optional subtask may be left unfinished. Usually, different scheduling policies are used for the mandatory and optional subtasks. In both proposed approaches, the earliest deadline first and rate monotonic scheduling algorithms are used for scheduling the mandatory subtasks, whereas the optional subtasks are scheduled so that the total weighted error is minimized. The basic idea of one-level scheduling is to extend the mandatory execution times, while in two-level scheduling the mandatory and optional subtasks are scheduled separately. A single preemptive processor model is assumed.
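    The trade-off described above can be sketched as a toy allocation problem: given the spare processor time left after all mandatory subtasks are served, grant optional execution time so that the total weighted error is minimised. The task tuples and the greedy policy below are illustrative assumptions, not the paper's scheduling algorithm:

```python
def allocate_optional(tasks, spare):
    """Grant optional execution time from the spare capacity so that the
    total weighted error sum(w * (optional - granted)) is minimised.
    For a linear error function, serving the largest weights first is
    optimal. tasks: list of (name, optional_time, weight)."""
    granted = {name: 0.0 for name, _, _ in tasks}
    for name, optional, weight in sorted(tasks, key=lambda t: -t[2]):
        give = min(optional, spare)
        granted[name] = give
        spare -= give
        if spare <= 0.0:
            break
    error = sum(w * (opt - granted[name]) for name, opt, w in tasks)
    return granted, error
```

    With tasks A (3 optional units, weight 2) and B (2 units, weight 5) and 4 spare units, B is served fully and A partially, leaving a weighted error of 2.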

    Adaptive Mid-term and Short-term Scheduling of Mixed-criticality Systems

    A mixed-criticality real-time system is a real-time system with multiple tasks classified according to their criticality. Research on mixed-criticality systems began as a way to provide an effective and cost-efficient a priori verification process for safety-critical systems. The higher the criticality of a task within a system, the more strongly the system should guarantee its required level of service. However, such a model poses new challenges with respect to scheduling and fault tolerance within real-time systems. Current mixed-criticality scheduling protocols severely degrade lower-criticality tasks in case of resource shortage in order to provide the required level of service for the most critical ones. The main research challenge in this field is to devise robust scheduling protocols that minimise the impact on less critical tasks. This dissertation introduces two approaches, one short-term and the other medium-term, to appropriately allocate computing resources to tasks within mixed-criticality systems on both uniprocessor and multiprocessor systems. The short-term strategy is a protocol named Lazy Bailout Protocol (LBP) for scheduling mixed-criticality task sets on single-core architectures. Scheduling decisions are made about tasks that are active in the ready queue and have to be dispatched to the CPU. LBP minimises the service degradation for lower-criticality tasks by letting them execute in the background during system idle time. I then refined LBP with variants that further increase the service level provided to lower-criticality tasks, at the cost of either additional offline analysis or additional runtime complexity. The second approach, named Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), decides at runtime which tasks to allocate to the active cores according to the available resources. ATMP optimises the overall system utility by tuning the system workload when computing capacity runs short at runtime. Unlike the majority of current mixed-criticality approaches, ATMP also allows higher-criticality tasks to be degraded smoothly in order to keep lower-criticality tasks allocated.
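    The background-execution idea behind LBP can be sketched in a few lines. The job representation and mode handling below are assumptions for illustration; the real protocol also performs bailout accounting:

```python
def dispatch(ready, idle_queue, mode):
    """Pick the next job on a single core. In HI mode, LO-criticality
    jobs are moved to a background queue instead of being dropped, so
    they can still run during idle time; ready jobs run earliest
    deadline first. Jobs are plain dicts purely for illustration."""
    if mode == "HI":
        for job in [j for j in ready if j["crit"] == "LO"]:
            ready.remove(job)
            idle_queue.append(job)
    if ready:
        return min(ready, key=lambda j: j["deadline"])
    if idle_queue:
        return idle_queue.pop(0)  # background service during idle time
    return None
```

    In HI mode a LO job is demoted rather than abandoned, which is the degradation-minimising point made above.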

    Efficient schedulability tests for real-time embedded systems with urgent routines

    Task scheduling is one of the key mechanisms for ensuring timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make a low processor utilization available to the system. Moreover, this availability usually decreases as the number of tasks grows. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model in which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 − 1). Seven polynomial-time tests for checking system timeliness are derived and proved correct. The proposed tests are compared against each other and against an exact but exponential running-time test.
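    As an illustration of the model, a hypothetical sufficient test in the spirit of the bound quoted above checks whether the combined utilisation of the urgent routine and the EDF tasks stays within 2(√2 − 1) ≈ 0.828. This sketch is an assumption for illustration; the paper's seven tests are sharper:

```python
import math

def sufficient_test(c_urgent, t_urgent, edf_tasks):
    """Accept the task set if the utilisation of the urgent routine plus
    that of the EDF-scheduled tasks stays within the two-task RM bound
    2(sqrt(2) - 1) ~ 0.828. edf_tasks: list of (cost, period) pairs."""
    total = c_urgent / t_urgent + sum(c / t for c, t in edf_tasks)
    return total <= 2.0 * (math.sqrt(2.0) - 1.0)
```

    A set at 50% utilisation passes; one at 90% is rejected by this conservative check even though EDF alone could schedule it.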

    Machine learning regression to boost scheduling performance in hyper-scale cloud-computing data centres

    Data centres grow in size and complexity due to the increasing amount of heterogeneous workloads and patterns to be served. Such a mix of workloads with various purposes makes it difficult to optimise resource management systems according to temporal or application-level patterns. Data centre operators have developed multiple resource-management models to improve scheduling performance in controlled scenarios. However, the constant evolution of the workloads makes the use of only one resource-management model sub-optimal in some scenarios. In this work, we propose: (a) a machine learning regression model based on gradient boosting to predict the time a resource manager needs to schedule incoming jobs for a given period; and (b) a resource management model, Boost, that takes advantage of this regression model to predict the scheduling time of a catalogue of resource managers so that the most performant can be used for a time span. The benefits of the proposed resource-management model are analysed by comparing its scheduling-performance KPIs to those provided by the two most popular resource-management models: two-level, used by Apache Mesos, and shared-state, employed by Google Borg. These gains are empirically evaluated by simulating a hyper-scale data centre that executes a realistic, synthetically generated workload that follows real-world trace patterns. Ministerio de Ciencia e Innovación RTI2018-098062-A-I0
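    The selection step of Boost can be sketched without the learning machinery: given one scheduling-time predictor per resource manager (stubbed here with plain functions standing in for the gradient-boosting regressors), pick the manager with the lowest predicted scheduling time for the next time span:

```python
def pick_manager(predictors, features):
    """Query each resource manager's scheduling-time predictor on the
    features of the upcoming workload and return the name of the
    manager expected to schedule it fastest."""
    predicted = {name: model(features) for name, model in predictors.items()}
    return min(predicted, key=predicted.get)
```

    With a light workload one manager may win, while a heavier one flips the decision, which is exactly the per-time-span switching described above.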

    CO-DESIGN OF DYNAMIC REAL-TIME SCHEDULING AND COOPERATIVE CONTROL FOR HUMAN-AGENT COLLABORATION SYSTEMS BASED ON MUTUAL TRUST

    Mutual trust is a key factor in human-human collaboration. Inspired by this social interaction, we analyze human-agent mutual trust in the collaboration between one human and (semi)autonomous multi-agent systems. In this thesis, we derive time-series human-agent mutual trust models based on results from human factors engineering. To avoid both over-trust and under-trust, we set up dynamic timing models for the multi-agent scheduling problem and develop necessary and sufficient conditions to test the schedulability of the human multi-agent collaborative task. Furthermore, we extend the collaboration between one human and multiple agents to the collaboration between a multi-human network and a swarm-based agent network. To measure the collaboration between these two networks, we propose a novel measure, called fitness, which allows us to reduce multi-human and swarm collaboration to one-human and swarm collaboration. Cooperative control is incorporated into the swarm systems to enable several large-scale agent teams to simultaneously reach navigational goals and avoid collisions. Our simulation results show that the proposed algorithm can be applied to human-agent collaboration systems and guarantees effective real-time scheduling of collaboration systems while ensuring a proper level of human-agent mutual trust.
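    A time-series trust model of the kind mentioned above might look like the following first-order update. The coefficients and the performance/fault inputs are invented for illustration and are not the thesis's identified model:

```python
def trust_update(trust, performance, faults, a=0.8, b=0.3, c=0.4):
    """One step of a first-order trust dynamic: trust decays with rate a,
    grows with observed agent performance, shrinks with faults, and is
    clamped to [0, 1] to rule out over- and under-trust extremes."""
    new_trust = a * trust + b * performance - c * faults
    return max(0.0, min(1.0, new_trust))
```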

    Combined Scheduling of Time-Triggered Plans and Priority Scheduled Task Sets

    © Owner/Author (2016). This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM SIGAda Ada Letters, 36(1), 68-76, http://dx.doi.org/10.1145/2971571.2971580. Preemptive, priority-based scheduling on the one hand, and time-triggered scheduling on the other, are the two major techniques in use for the development of real-time and embedded software. Each has its advantages and drawbacks with respect to the other, and they are commonly adopted in a mutually exclusive way. In a previous paper, we proposed a software architecture that enables the combined and controlled execution of time-triggered plans and priority-scheduled tasks. The goal was to take advantage of the best of both approaches by providing deterministic, jitter-controlled execution of time-triggered tasks (e.g., control tasks) coexisting with a set of priority-scheduled tasks with less demanding jitter requirements. In this paper, we briefly describe the approach, in which the time-triggered plan is executed at the highest priority level, controlled by scheduling decisions taken only at particular points in time, signalled by recurrent timing events. The remaining priority levels are used by a set of concurrent tasks scheduled by static or dynamic priorities. We also discuss several open issues such as schedulability analysis, use of the approach in multiprocessor architectures, usability in mixed-criticality systems, and the changes needed to make this approach Ravenscar compliant. This work has been partly supported by the Spanish Government's project M2C2 (TIN2014-56158-C4-1-P-AR) and the European Commission's project EMC2 (ARTEMIS-JU Call 2013 AIPP-5, Contract 621429).
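    The combined scheme can be sketched as a single dispatching rule: at each timing event, a due slot of the time-triggered plan pre-empts everything; otherwise the priority-scheduled task set runs. The data shapes are assumptions for illustration; the actual implementation uses Ada timing events:

```python
def next_to_run(now, plan, ready):
    """One scheduling decision: a time-triggered slot due at `now` runs
    at the highest priority; otherwise the highest-priority ready task
    runs (lower number = higher priority); otherwise the CPU idles."""
    if now in plan:
        return plan[now]
    if ready:
        return min(ready, key=lambda task: task[1])[0]
    return None
```

    Between plan slots, the priority-scheduled tasks fill the slack, which is how jitter-controlled control tasks coexist with ordinary tasks.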

    The Impact of Intelligent Aiding for Multiple Unmanned Aerial Vehicle Schedule Management

    There is increasing interest in designing systems such that the current many-to-one ratio of operators to unmanned vehicles (UVs) can be inverted. Instead of the lower-level tasks performed by today's UV teams, the sole operator would focus on high-level supervisory control tasks. A key challenge in the design of such single-operator systems is the need to minimize periods of excessive workload that arise when critical tasks for several UVs occur simultaneously. Thus, some kind of decision support is needed that facilitates an operator's ability to evaluate different action alternatives for managing a multiple-UV mission schedule in real time. This paper describes two decision-support experiments that attempted to provide UAV operators with multivariate scheduling assistance, with mixed results. The automated decision-support tools that provided more local, as opposed to global, visual recommendations produced superior performance, suggesting that meta-information displays could saturate operators and reduce performance. This research was sponsored by Boeing Phantom Works and Mitre, Inc.

    Using hierarchical scheduling to support soft real-time applications in general-purpose operating systems

    The CPU schedulers in general-purpose operating systems are designed to provide fast response time for interactive applications and high throughput for batch applications. The heuristics used to achieve these goals do not lend themselves to scheduling real-time applications, nor do they meet other scheduling requirements such as coordinating scheduling across several processors or machines, or enforcing isolation between applications, users, and administrative domains. Extending the scheduling subsystems of general-purpose operating systems in an ad hoc manner is time consuming and requires considerable expertise as well as source code to the operating system. Furthermore, once extended, the new scheduler may be as inflexible as the original. The thesis of this dissertation is that extending a general-purpose operating system with a general, heterogeneous scheduling hierarchy is feasible and useful. A hierarchy of schedulers generalizes the role of CPU schedulers by allowing them to schedule other schedulers in addition to scheduling threads. A general, heterogeneous scheduling hierarchy is one that allows arbitrary (or nearly arbitrary) scheduling algorithms throughout the hierarchy. In contrast, most of the previous work on hierarchical scheduling has imposed restrictions on the schedulers used in part or all of the hierarchy. This dissertation describes the Hierarchical Loadable Scheduler (HLS) architecture, which permits schedulers to be dynamically composed in the kernel of a general-purpose operating system. The most important characteristics of HLS, and the ones that distinguish it from previous work, are that it has demonstrated that a hierarchy of nearly arbitrary schedulers can be efficiently implemented in a general-purpose operating system, and that the behavior of a hierarchy of soft real-time schedulers can be reasoned about in order to provide guaranteed scheduling behavior to application threads.
    The flexibility afforded by HLS permits scheduling behavior to be tailored to meet complex requirements without encumbering users who have modest requirements with the performance and administrative costs of a complex scheduler. Contributions of this dissertation include the following. (1) The design, prototype implementation, and performance evaluation of HLS in Windows 2000. (2) A system of guarantees for scheduler composition that permits reasoning about the scheduling behavior of a hierarchy of soft real-time schedulers. Guarantees assure users that application requirements can be met throughout the lifetime of the application, and also provide application developers with a model of CPU allocation to which they can program. (3) The design, implementation, and evaluation of two augmented CPU reservation schedulers, which provide increased scheduling predictability when low-level operating system activity steals time from applications.
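    The core compositional idea of HLS, schedulers scheduling other schedulers, can be sketched in a few lines. The class shape is an illustrative assumption, not the Windows 2000 kernel implementation:

```python
class Scheduler:
    """A node in the scheduling hierarchy. Its policy picks one of its
    children; a child is either another Scheduler or a thread (a plain
    string here), so heterogeneous policies can be composed freely."""

    def __init__(self, policy, children):
        self.policy = policy      # callable: children -> chosen child
        self.children = children

    def pick(self):
        chosen = self.policy(self.children)
        # Recurse until the decision bottoms out at a thread.
        return chosen.pick() if isinstance(chosen, Scheduler) else chosen
```

    A root that always favours its first child (a fixed-priority stand-in) can delegate to a child scheduler with a different policy, mirroring the nearly arbitrary compositions described above.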

    Scheduling Manufacturing Systems With Work-in-Process Inventory Control: Single-Part-Type Systems

    Get PDF
    In this paper, a real-time feedback control algorithm is developed for scheduling single-part-type production lines in which there are three important classes of activities: operations, failures, and starvation or blockage. The scheduling objectives are to keep actual production as close to demand as possible, and to keep the level of work-in-process (WIP) inventory as low as possible. By relating starvation and blockage to the system capacity, the buffer sizes and the target buffer levels are chosen according to the demands and machine parameters. The processing time for each operation is deterministic; failure and repair times are random. Whenever a machine fails or is starved or blocked, the scheduling system recalculates short-term production rates. To begin with, we study a very simple case, a two-machine, one-part-type system, to gain insight into buffer effects and production control policies. Using the relationship between system capacity and starvation or blockage, we find desirable buffer levels and buffer sizes. The production control policy is determined to meet the system performance requirements concerning low WIP inventory and tardiness. The results from the simple case are extended to N-machine, one-part-type systems.
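    Feedback laws of the kind described above are often written as hedging-point policies; the sketch below is a generic illustration under that assumption, not necessarily the paper's exact control law:

```python
def production_rate(surplus, hedging_point, max_rate, demand_rate):
    """Feedback law: produce at full speed while the surplus (cumulative
    production minus demand) is below the target buffer level, track the
    demand exactly at the target, and stop above it. This bounds WIP
    while catching up on demand after failures."""
    if surplus < hedging_point:
        return max_rate
    if surplus == hedging_point:
        return demand_rate
    return 0.0
```

    After a failure drives the surplus negative, the machine runs flat out until the buffer recovers, then settles into tracking demand.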