
    Reservation-Based Federated Scheduling for Parallel Real-Time Tasks

    This paper considers the scheduling of parallel real-time tasks with arbitrary deadlines. Each job of a parallel task is described as a directed acyclic graph (DAG). In contrast to prior work in this area, where decomposition-based scheduling algorithms are proposed based on the DAG structure and inter-task interference is analyzed as self-suspending behavior, this paper generalizes the federated scheduling approach. We propose a reservation-based algorithm, called reservation-based federated scheduling, that dominates federated scheduling. We provide general constraints for the design of such systems and prove that reservation-based federated scheduling has a constant speedup factor with respect to any optimal DAG task scheduler. Furthermore, the presented algorithm can be used in conjunction with any scheduler and scheduling analysis suitable for ordinary arbitrary-deadline sporadic task sets, i.e., without parallelism.
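    The paper's own reservation-design constraints are not reproduced in this listing; as a rough illustration, the sketch below sizes the processor (or reservation) count for a heavy DAG task using the classic federated-scheduling bound based on total work, critical-path length, and deadline, which is an assumption rather than the reservation-based algorithm proposed here.

```python
from math import ceil

def federated_processors(work, span, deadline):
    """Classic federated-style allocation for a heavy DAG task (assumed
    formula, not the paper's reservation-based variant).

    work:     total execution time of all DAG nodes
    span:     critical-path length of the DAG
    deadline: relative deadline (must exceed the span to be feasible)

    Any work-conserving scheduler on m dedicated processors finishes the DAG
    within span + (work - span) / m, so m is chosen to push that bound
    below the deadline.
    """
    if deadline <= span:
        raise ValueError("infeasible: deadline must exceed the critical path")
    return max(1, ceil((work - span) / (deadline - span)))

# Example: work = 20, span = 4, deadline = 8  ->  ceil(16 / 4) = 4 processors
print(federated_processors(20, 4, 8))
```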

    Semi-Federated Scheduling of Parallel Real-Time Tasks on Multiprocessors

    Federated scheduling is a promising approach to schedule parallel real-time tasks on multi-cores, where each heavy task exclusively executes on a number of dedicated processors, while light tasks are treated as sequential sporadic tasks and share the remaining processors. However, federated scheduling suffers from resource waste, since a heavy task with processing capacity requirement x + ε (where x is an integer and 0 < ε < 1) needs x + 1 dedicated processors. In the extreme case, almost half of the processing capacity is wasted. In this paper we propose the semi-federated scheduling approach, which only grants x dedicated processors to a heavy task with processing capacity requirement x + ε, and schedules the remaining ε part together with light tasks on shared processors. Experiments with randomly generated task sets show that the semi-federated scheduling approach significantly outperforms not only federated scheduling, but also all existing approaches for scheduling parallel real-time tasks on multi-cores.
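    A small arithmetic sketch of the allocation difference described above (illustrative only): federated scheduling rounds a heavy task's capacity requirement up to whole processors, while semi-federated scheduling keeps only the integer part dedicated and shifts the fractional part to the shared processors.

```python
from math import ceil, floor

def allocation(capacity_requirement):
    """Compare processor grants for a heavy task needing x + eps capacity,
    with x an integer and 0 < eps < 1 (illustrative sketch).

    Federated scheduling rounds up to x + 1 dedicated processors; the
    semi-federated approach grants only x dedicated processors and sends
    the fractional eps part to the shared (light-task) processors.
    """
    x = floor(capacity_requirement)
    eps = capacity_requirement - x
    federated = ceil(capacity_requirement)   # x + 1 dedicated processors
    semi_dedicated = x                       # x dedicated processors
    return federated, semi_dedicated, round(eps, 3)

# Requirement 2.1: federated uses 3 cores and wastes 0.9 of a core;
# semi-federated uses 2 dedicated cores plus a 0.1 share on shared cores.
print(allocation(2.1))
```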

    Configurable Strategies for Work-stealing

    Work-stealing systems are typically oblivious to the nature of the tasks they are scheduling. For instance, they do not know or take into account how long a task will take to execute or how many subtasks it will spawn. Moreover, the actual task execution order is typically determined by the underlying task storage data structure, and cannot be changed. There are thus possibilities for optimizing task parallel executions by providing information on specific tasks and their preferred execution order to the scheduling system. We introduce scheduling strategies to enable applications to dynamically provide hints to the task-scheduling system on the nature of specific tasks. Scheduling strategies can be used to independently control both local task execution order and steal order. In contrast to conventional scheduling policies that are normally global in scope, strategies allow the scheduler to apply optimizations to individual tasks. This flexibility greatly improves composability, as it allows the scheduler to apply different, specific scheduling choices for different parts of applications simultaneously. We present a number of benchmarks that highlight diverse, beneficial effects that can be achieved with scheduling strategies. Some benchmarks (branch-and-bound, single-source shortest path) show that prioritization of tasks can reduce the total amount of work compared to standard work-stealing execution order. For other benchmarks (triangle strip generation), qualitatively better results can be achieved in shorter time. Other optimizations, such as dynamic merging of tasks or stealing half the work instead of half the tasks, are also shown to improve performance. Composability is demonstrated by examples that combine different strategies, both within the same kernel (prefix sum) as well as when scheduling multiple kernels (prefix sum and unbalanced tree search).
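    As a rough sketch of the idea of per-task strategies (hypothetical API, not the system's actual interface), a worker's task pool could take two user-supplied priority functions: one that orders local execution and one that decides which task a thief removes first.

```python
import random

class StrategyPool:
    """Per-worker task pool whose local execution order and steal order are
    controlled by user-supplied priority functions (hypothetical names;
    a sketch of the concept, not the paper's implementation)."""

    def __init__(self, local_priority, steal_priority):
        self._local_priority = local_priority   # smallest value runs first locally
        self._steal_priority = steal_priority   # smallest value is stolen first
        self._tasks = []

    def push(self, task):
        self._tasks.append(task)

    def pop_local(self):
        if not self._tasks:
            return None
        task = min(self._tasks, key=self._local_priority)
        self._tasks.remove(task)
        return task

    def steal(self):
        if not self._tasks:
            return None
        task = min(self._tasks, key=self._steal_priority)
        self._tasks.remove(task)
        return task

# Branch-and-bound flavour: run deepest subproblems first locally (depth-first
# keeps memory bounded), but let thieves take the most promising (lowest-bound)
# subtrees so prioritization can cut the total amount of work.
pool = StrategyPool(local_priority=lambda t: -t["depth"],
                    steal_priority=lambda t: t["bound"])
for depth in range(5):
    pool.push({"depth": depth, "bound": random.random()})
print(pool.pop_local(), pool.steal())
```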

    Joint Computation Offloading and Prioritized Scheduling in Mobile Edge Computing

    With the rapid development of smart phones, enormous amounts of data are generated and usually require intensive and real-time computation. Nevertheless, quality of service (QoS) is hard to meet due to the tension between resource-limited (battery, CPU power) devices and computation-intensive applications. Mobile-edge computing (MEC) is emerging as a promising technique to cope with the stringent requirements of mobile applications. By offloading computationally intensive workloads to an edge server and applying efficient task scheduling, the energy cost of mobile devices can be significantly reduced, which greatly improves QoS, e.g., latency. This paper proposes a joint computation offloading and prioritized task scheduling scheme in a multi-user mobile-edge computing system. We investigate an energy-minimizing task offloading strategy on mobile devices and develop an effective priority-based task scheduling algorithm on the edge server. Execution time, energy consumption, execution cost, and bonus score, evaluated against both task data sizes and latency requirements, are adopted as the performance metrics. Performance evaluation results show that the proposed algorithm significantly reduces task completion time and edge server VM usage cost, and improves QoS in terms of bonus score. Moreover, dynamic prioritized task scheduling is also discussed herein; results show that dynamic threshold setting realizes optimal task scheduling. We believe that this work is significant to the emerging mobile-edge computing paradigm, and can be applied to other Internet of Things (IoT)-Edge applications.
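    The abstract does not give the offloading criterion or the priority rule in detail; the sketch below is a minimal, assumed model in which a device offloads a task whenever transmitting its input costs less energy than executing it locally, and the edge server then serves offloaded tasks with the tightest latency requirement first. The energy model, rates, and field names are illustrative assumptions.

```python
import heapq

def local_energy(cpu_cycles, energy_per_cycle=1e-9):
    """Energy (J) to execute the task on the mobile CPU (assumed model)."""
    return cpu_cycles * energy_per_cycle

def offload_energy(data_bits, tx_power_w=0.5, uplink_bps=5e6):
    """Energy (J) to transmit the task input to the edge server (assumed model)."""
    return tx_power_w * (data_bits / uplink_bps)

def offload_and_schedule(tasks):
    """Offload a task when transmission costs less energy than local execution,
    then order offloaded tasks by latency requirement (an assumed priority rule)."""
    edge_queue, kept_local = [], []
    for t in tasks:
        if offload_energy(t["bits"]) < local_energy(t["cycles"]):
            heapq.heappush(edge_queue, (t["latency_req_s"], t["name"]))
        else:
            kept_local.append(t["name"])
    edge_order = [heapq.heappop(edge_queue)[1] for _ in range(len(edge_queue))]
    return edge_order, kept_local

tasks = [
    {"name": "face_detect", "bits": 2e6, "cycles": 4e9, "latency_req_s": 0.05},
    {"name": "sensor_log",  "bits": 8e6, "cycles": 1e7, "latency_req_s": 1.0},
]
# face_detect is cheaper to offload (0.2 J vs 4 J locally); sensor_log stays local.
print(offload_and_schedule(tasks))
```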

    Replica determinism and flexible scheduling in hard real-time dependable systems

    Fault-tolerant real-time systems are typically based on active replication, where replicated entities are required to deliver their outputs in an identical order within a given time interval. Distributed scheduling of replicated tasks, however, violates this requirement if on-line scheduling, preemptive scheduling, or scheduling of dissimilar replicated task sets is employed. This problem of inconsistent task outputs has been solved previously by coordinating the decisions of the local schedulers such that replicated tasks are executed in an identical order. Global coordination results either in an extremely high communication effort to agree on each scheduling decision or in an overly restrictive execution model where on-line scheduling, arbitrary preemptions, and non-identically replicated task sets are not allowed. To overcome these restrictions, a new method, called timed messages, is introduced. Timed messages guarantee deterministic operation by presenting consistent message versions to the replicated tasks. This approach is based on simulated common knowledge and a sparse time base. Timed messages are very effective since they neither require communication between the local schedulers nor restrict the use of on-line flexible scheduling, preemptions, and non-identically replicated task sets.
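    A minimal sketch of the timed-message idea as described above: a message version becomes visible only after a validity delay on a sparse global time base, so every replica that reads at the same tick sees the same version without any coordination between local schedulers. The class and delay below are illustrative assumptions, not the paper's actual mechanism.

```python
class TimedMessageStore:
    """Each published version carries a validity tick; readers only see
    versions whose validity tick has passed, so all replicas reading at the
    same global tick observe an identical version (illustrative sketch)."""

    def __init__(self, validity_delay_ticks=2):
        self._versions = []                    # list of (valid_from_tick, value)
        self._validity_delay = validity_delay_ticks

    def publish(self, value, send_tick):
        # The version may only be consumed once its validity tick is reached.
        self._versions.append((send_tick + self._validity_delay, value))
        self._versions.sort()

    def read(self, now_tick):
        # Newest version that is already valid at this tick.
        visible = [v for t, v in self._versions if t <= now_tick]
        return visible[-1] if visible else None

store = TimedMessageStore(validity_delay_ticks=2)
store.publish("setpoint=10", send_tick=5)      # visible from tick 7
store.publish("setpoint=12", send_tick=6)      # visible from tick 8
# All replicas reading at tick 7 see "setpoint=10", regardless of their local
# preemption order or on-line scheduling decisions; at tick 8 all see the update.
print(store.read(7), store.read(8))
```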

    Evaluation of scheduling techniques for payload activity planning

    Two tasks related to payload activity planning and scheduling were performed. The first task involved comparing space mission activity scheduling problems with production scheduling problems. The second task consisted of a statistical analysis of the output of runs of the Experiment Scheduling Program (ESP). Details of the work performed on these two tasks are presented.

    The Epistemology of scheduling problems

    Scheduling is a knowledge-intensive task spanning many activities in day-to-day life. It deals with the temporally-bound assignment of jobs to resources. Although scheduling has been extensively researched in the AI community for the past 30 years, efforts have primarily focused on specific applications, algorithms, or 'scheduling shells', and no comprehensive analysis exists on the nature of scheduling problems which provides a formal account of what scheduling is, independently of the way scheduling problems can be approached. Research on KBS development by reuse makes use of ontologies to provide knowledge-level specifications of reusable KBS components. In this paper we describe a task ontology which formally characterises the nature of scheduling problems, independently of particular application domains and independently of how the problems can be solved. Our results provide a comprehensive, domain-independent and formally specified reference model for scheduling applications. This can be used as the basis for further analyses of the class of scheduling problems and also as a concrete reusable resource to support knowledge acquisition and system development in scheduling applications.
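    To make the paper's working definition concrete (the temporally-bound assignment of jobs to resources), the sketch below encodes a domain-independent scheduling problem as data: jobs, resources, time-stamped assignments, and constraints over complete schedules. The type names are illustrative and are not the vocabulary of the ontology itself.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Job:
    name: str
    duration: int                 # abstract time units

@dataclass
class Resource:
    name: str
    capacity: int = 1

@dataclass
class Assignment:
    job: Job
    resource: Resource
    start: int                    # temporally bound: every assignment carries a time

@dataclass
class SchedulingProblem:
    """Domain-independent reading of scheduling: assign jobs to resources over
    time, subject to constraints (illustrative names, not the ontology's)."""
    jobs: List[Job]
    resources: List[Resource]
    constraints: List[Callable[[List[Assignment]], bool]] = field(default_factory=list)

    def is_valid(self, schedule: List[Assignment]) -> bool:
        return all(check(schedule) for check in self.constraints)

# Example constraint: no job may start before time 0.
problem = SchedulingProblem(
    jobs=[Job("weld", 3)], resources=[Resource("robot_arm")],
    constraints=[lambda s: all(a.start >= 0 for a in s)],
)
print(problem.is_valid([Assignment(problem.jobs[0], problem.resources[0], start=1)]))
```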