
    Replica determinism and flexible scheduling in hard real-time dependable systems

    Fault-tolerant real-time systems are typically based on active replication, where replicated entities are required to deliver their outputs in an identical order within a given time interval. Distributed scheduling of replicated tasks, however, violates this requirement if on-line scheduling, preemptive scheduling, or scheduling of dissimilar replicated task sets is employed. This problem of inconsistent task outputs has previously been solved by coordinating the decisions of the local schedulers such that replicated tasks are executed in an identical order. Global coordination results either in an extremely high communication effort to agree on each scheduling decision or in an overly restrictive execution model where on-line scheduling, arbitrary preemptions, and non-identically replicated task sets are not allowed. To overcome these restrictions, a new method, called timed messages, is introduced. Timed messages guarantee deterministic operation by presenting consistent message versions to the replicated tasks. This approach is based on simulated common knowledge and a sparse time base. Timed messages are very effective since they neither require communication between the local schedulers nor restrict the use of on-line flexible scheduling, preemptions, and non-identically replicated task sets.
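    The timed-messages mechanism is only sketched above, so the following Python fragment is a minimal illustration of the idea as I read it, not the paper's interface: each message version carries a validity instant on the sparse global time base, chosen so that the version has reached every replica by then (simulated common knowledge), and each replica reads the version valid at its activation tick. The class name, the tick granularity, and the fixed worst-case delay are illustrative assumptions.

from bisect import bisect_right

class TimedMessage:
    """One logical message with time-stamped versions (illustrative sketch)."""

    def __init__(self):
        self._valid_from = []   # sorted validity ticks on the sparse time base
        self._versions = []     # message values, one per validity tick

    def publish(self, value, send_tick, max_delay_ticks=2):
        # A version becomes valid only once it is guaranteed to have reached
        # every replica; max_delay_ticks is an assumed worst-case bound.
        self._valid_from.append(send_tick + max_delay_ticks)
        self._versions.append(value)

    def read(self, activation_tick):
        # Replicas activated at the same tick get the same version, no matter
        # in which order their local schedulers happen to run them.
        i = bisect_right(self._valid_from, activation_tick) - 1
        return self._versions[i] if i >= 0 else None

msg = TimedMessage()
msg.publish("v1", send_tick=10)
assert msg.read(activation_tick=12) == "v1"   # identical on every replica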

    Fair Resource Sharing for Dynamic Scheduling of Workflows on Heterogeneous Systems

    Scheduling independent workflows on shared resources in a way that satisfies users' Quality of Service requirements is a significant challenge. In this study, we describe methodologies for off-line scheduling, where a schedule is generated for a set of known workflows, and on-line scheduling, where users can submit workflows at any moment in time. We consider the on-line scheduling problem in more detail and present performance comparisons of state-of-the-art algorithms for a realistic model of a heterogeneous system.
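    The abstract does not name the scheduling algorithms it compares, so the fragment below is only a generic on-line baseline of the kind such studies typically include, under my own assumptions about the cost model: each ready workflow task is mapped to the heterogeneous processor that would finish it earliest, given per-processor speeds and current availability.

def assign_earliest_finish(task_work, proc_speed, proc_available, now):
    """Map one ready task to the processor that finishes it earliest
    (generic on-line heuristic; the cost model is an assumption)."""
    best_p, best_finish = None, float("inf")
    for p, speed in enumerate(proc_speed):
        start = max(now, proc_available[p])      # wait if the processor is busy
        finish = start + task_work / speed       # heterogeneous execution time
        if finish < best_finish:
            best_p, best_finish = p, finish
    return best_p, best_finish

# Example: three processors of different speeds, one task of 8 work units.
print(assign_earliest_finish(task_work=8.0,
                             proc_speed=[1.0, 2.0, 0.5],
                             proc_available=[5.0, 9.0, 0.0],
                             now=4.0))           # -> (0, 13.0)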

    Bandwidth-Aware On-Line Scheduling in SMT Multicores

    The memory hierarchy plays a critical role in the performance of current chip multiprocessors. Main memory is shared by all the running processes, which can cause important bandwidth contention. In addition, when the processor implements SMT cores, the L1 bandwidth becomes shared among the threads running on each core. In such a case, bandwidth-aware schedulers emerge as an interesting approach to mitigate the contention. This work investigates the performance degradation that processes suffer due to memory bandwidth constraints. Experiments show that main memory and L1 bandwidth contention negatively impact process performance; in both cases, performance degradation can grow up to 40 percent for some of the applications. To deal with contention, we devise a scheduling algorithm that consists of two policies guided by the bandwidth consumption gathered at runtime. The process selection policy balances the number of memory requests over the execution time to address main memory bandwidth contention. The process allocation policy tackles L1 bandwidth contention by balancing the L1 accesses among the L1 caches. The proposal is evaluated on a Xeon E5645 platform using a wide set of multiprogrammed workloads, achieving performance benefits of up to 6.7 percent with respect to the Linux scheduler. This work was supported by the Spanish Ministerio de Economia y Competitividad (MINECO) and by FEDER funds under Grant TIN2012-38341-C04-01, and by the Intel Early Career Faculty Honor Program Award.
    Feliu-Pérez, J.; Sahuquillo Borrás, J.; Petit Martí, S.V.; Duato Marín, J.F. (2016). Bandwidth-Aware On-Line Scheduling in SMT Multicores. IEEE Transactions on Computers, 65(2), 422-434. https://doi.org/10.1109/TC.2015.2428694
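    The two policies are described above only qualitatively, so the following is a hedged sketch under simplifying assumptions (a fixed scheduling quantum and per-process bandwidth figures already gathered; the real scheduler works from run-time hardware counters). The greedy rules and all names are illustrative, not the paper's exact heuristics.

def select_processes(procs, mem_bw, slots, target_bw):
    """Process selection sketch: pick `slots` processes whose combined
    main-memory bandwidth stays close to a per-quantum target."""
    chosen, total = [], 0.0
    remaining = list(procs)
    while remaining and len(chosen) < slots:
        # Greedily add the process keeping the running total nearest the target.
        best = min(remaining, key=lambda p: abs(total + mem_bw[p] - target_bw))
        chosen.append(best)
        total += mem_bw[best]
        remaining.remove(best)
    return chosen

def allocate_to_cores(selected, l1_bw, n_cores):
    """Process allocation sketch: balance L1 accesses across the per-core L1s."""
    cores = [[] for _ in range(n_cores)]
    load = [0.0] * n_cores
    for p in sorted(selected, key=lambda q: l1_bw[q], reverse=True):
        c = load.index(min(load))       # least-loaded L1 cache so far
        cores[c].append(p)
        load[c] += l1_bw[p]
    return cores

procs = ["A", "B", "C", "D"]
mem = {"A": 3.0, "B": 1.0, "C": 2.5, "D": 0.5}   # made-up bandwidth figures
l1 = {"A": 9.0, "B": 4.0, "C": 7.0, "D": 2.0}
picked = select_processes(procs, mem, slots=4, target_bw=7.0)
print(allocate_to_cores(picked, l1, n_cores=2))   # -> [['A', 'D'], ['C', 'B']]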

    On-line scheduling to minimize average completion time revisited


    When Queueing Meets Coding: Optimal-Latency Data Retrieving Scheme in Storage Clouds

    In this paper, we study the problem of reducing the delay of downloading data from cloud storage systems by leveraging multiple parallel threads, assuming that the data has been encoded and stored in the clouds using fixed-rate forward error correction (FEC) codes with parameters (n, k). That is, each file is divided into k equal-sized chunks, which are then expanded into n chunks such that any k chunks out of the n are sufficient to successfully restore the original file. The model can be depicted as a multiple-server queue with arrivals of data-retrieving requests and a server corresponding to a thread. However, this is not a typical queueing model because a server can terminate its operation, depending on when other servers complete their service (due to the redundancy that is spread across the threads). Hence, to the best of our knowledge, the analysis of this queueing model remains quite uncharted. Recent traces from Amazon S3 show that the time to retrieve a fixed-size chunk is random and can be approximated as a constant delay plus an i.i.d. exponentially distributed random variable. For the tractability of the theoretical analysis, we assume that the chunk downloading time is i.i.d. exponentially distributed. Under this assumption, we show that any work-conserving scheme is delay-optimal among all on-line scheduling schemes when k = 1. When k > 1, we find that a simple greedy scheme, which allocates all available threads to the head-of-line request, is delay-optimal among all on-line scheduling schemes. We also provide some numerical results that point to the limitations of the exponential assumption, and suggest further research directions.
    Comment: Original version accepted by IEEE Infocom 2014, 9 pages. Some statements in the Infocom paper are corrected.
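    As an illustration of the greedy scheme described above, here is a small Python sketch of its allocation rule under my own data-structure assumptions: every free thread is offered to the head-of-line request until that request has n coded chunks finished or in flight, and only then do later requests receive threads.

from collections import deque

def greedy_allocate(free_threads, queue, n, k):
    """Offer every free thread to the earliest request that can still use one."""
    assignments = []                         # (thread_id, request_id) pairs
    for t in free_threads:
        for req in queue:                    # FIFO: head-of-line request first
            if req["done"] < k and req["in_flight"] + req["done"] < n:
                req["in_flight"] += 1        # start downloading one more coded chunk
                assignments.append((t, req["id"]))
                break
    return assignments

queue = deque([{"id": 0, "in_flight": 0, "done": 0},
               {"id": 1, "in_flight": 0, "done": 0}])
print(greedy_allocate(free_threads=[0, 1, 2, 3, 4], queue=queue, n=4, k=2))
# With n = 4 coded chunks per file, at most four threads can serve request 0,
# so the fifth thread spills over to request 1.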

    Trade-offs between speed and processor in hard-deadline scheduling

    This paper revisits the problem of on-line scheduling of sequential jobs with hard deadlines in a preemptive, multiprocessor setting. An on-line scheduling algorithm is said to be optimal if it can schedule any set of jobs to meet their deadlines whenever it is feasible in the off-line sense. It is known that the earliest-deadline-first strategy (EDF) is optimal in a one-processor setting, and that there is no optimal on-line algorithm in an m-processor setting where m ≥ 2. Recent work, however, reveals that if the on-line algorithm is given faster processors, EDF is actually optimal for all m (e.g., when m = 2, it suffices to use processors 1.5 times as fast). This paper initiates the study of the trade-off between increasing the speed and using more processors in deriving optimal on-line scheduling algorithms. Several upper-bound and lower-bound results are presented. For example, the speed requirement of EDF can be reduced to 2 - (1 + p)/(m + p) when it is given p ≥ 0 extra processors. The main result is a new on-line algorithm which demands less speedy processors so as to attain optimality (e.g., when m = 2, the speed requirement is 4/3) and admits a better speed-processor trade-off than EDF (e.g., when m = 2 and p = 1, the speed requirement is 1.2). In general, no optimal algorithm exists when the speed factor is less than 1/(2√2 + p/m - 2).
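    The EDF bound quoted above, 2 - (1 + p)/(m + p), is my reading of a formula that arrived flattened in the source, so the short check below only evaluates that reading numerically: with p = 0 it reduces to the classical 2 - 1/m, i.e. the 1.5 figure the abstract quotes for m = 2, and extra processors trade off against speed.

def edf_speed(m, p):
    # Reconstructed bound from the abstract: speed needed by EDF on m + p
    # uniformly faster processors (my reading of the flattened formula).
    return 2 - (1 + p) / (m + p)

for m, p in [(2, 0), (2, 1), (4, 0), (4, 2)]:
    print(f"m={m}, p={p}: EDF speed requirement {edf_speed(m, p):.3f}")
# m=2, p=0 -> 1.500, matching the 1.5x figure quoted for EDF above.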

    Optimal On-Line Scheduling of Multiple Control Tasks: A Case Study

    We study the problem of dynamically scheduling a set of state-feedback control tasks controlling a set of linear plants. We consider an on-line, non-preemptive scheduling policy that is optimal in the sense that it minimizes a quadratic performance criterion for the overall system. The optimal scheduling decision at each point in time is a function of the states of the controlled plants. To be able to solve the scheduling problem for realistic examples, we use the technique of relaxed dynamic programming to compute suboptimal solutions with error bounds. The approach is compared to earlier approaches in a case study involving simultaneous control of one ball-and-beam process and two DC-servo processes. We also show how the scheduling policy can be modified to allow background tasks to execute when the need for control is small.
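    As a rough illustration of state-based dispatch of control tasks (not the paper's relaxed-dynamic-programming policy, which computes suboptimal value functions with error bounds), the sketch below greedily runs the control task of the plant whose current quadratic cost x'Px is largest; the weighting matrices and states are made-up assumptions.

import numpy as np

def pick_plant(states, P_matrices):
    """Return the index of the plant with the largest quadratic cost x'Px."""
    costs = [float(x @ P @ x) for x, P in zip(states, P_matrices)]
    return int(np.argmax(costs)), costs

# One "ball-and-beam-like" plant and two "servo-like" plants with made-up
# weighting matrices and states; the dispatcher would run plant 1's task next.
P = [np.diag([10.0, 1.0]), np.eye(2), np.eye(2)]
x = [np.array([0.2, 0.0]), np.array([1.0, 0.5]), np.array([0.1, 0.1])]
idx, costs = pick_plant(x, P)
print(idx, [round(c, 3) for c in costs])   # -> 1 [0.4, 1.25, 0.02]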