
    Least space-time first scheduling algorithm : scheduling complex tasks with hard deadline on parallel machines

    Both time constraints and logical correctness are essential to real-time systems, and failure to specify and observe a time constraint may result in disaster. Two orthogonal issues arise in the design and analysis of real-time systems: one is the specification of the system and the semantic model describing the properties of real-time programs; the other is the scheduling and allocation of resources that may be shared by real-time program modules. This work considers the problem of scheduling tasks with precedence and timing constraints onto a set of processors so as to minimize maximum tardiness. A new scheduling heuristic, Least Space Time First (LSTF), is proposed for this NP-complete problem. Basic properties of LSTF are explored; for example, it is shown that (1) LSTF dominates Earliest-Deadline-First (EDF) for scheduling a set of tasks on a single processor (i.e., if a set of tasks is schedulable under EDF, it is also schedulable under LSTF), and (2) LSTF is more effective than EDF for scheduling a set of independent simple tasks on multiple processors. Within an idealized framework, theoretical bounds on maximum tardiness for scheduling algorithms in general, and tighter bounds for LSTF in particular, are proven for worst-case behavior. Furthermore, simulation benchmarks are developed to compare the performance of LSTF with other scheduling disciplines for average-case behavior. Several techniques are introduced to integrate overhead (for example, scheduler and context-switch costs) and more realistic assumptions (such as inter-processor communication cost) into various execution models. A workload generator and symbolic simulator have been implemented for comparing the performance of LSTF (and a variant, LSTF+) with that of several standard scheduling algorithms. LSTF's execution model, basic theory, and overhead considerations have been defined and developed. Based upon this evidence, it is proposed that LSTF is a good and practical scheduling algorithm for building predictable, analyzable, and reliable complex real-time systems. Some open issues remain, such as relaxing current restrictions and discovering further properties and theorems of LSTF under different models. We strongly believe that LSTF can be a practical scheduling algorithm in the near future.
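
    The abstract does not spell out how "space time" is computed; a common slack-style reading is the time remaining until a task's deadline minus its remaining execution requirement. The Python sketch below contrasts an EDF pick with an LSTF-style least-slack pick on a single processor under that assumed definition; it is illustrative only and not the paper's formal model.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        remaining: float   # remaining execution time
        deadline: float    # absolute deadline

    def edf_pick(ready, now):
        # Earliest-Deadline-First: run the ready task with the smallest deadline.
        return min(ready, key=lambda t: t.deadline)

    def lstf_pick(ready, now):
        # LSTF-style pick under the slack interpretation assumed above:
        # run the task with the least "space time" (deadline - now - remaining),
        # i.e. the task that can least afford to wait.
        return min(ready, key=lambda t: t.deadline - now - t.remaining)

    if __name__ == "__main__":
        ready = [Task("A", remaining=8.0, deadline=10.0),
                 Task("B", remaining=1.0, deadline=6.0)]
        print(edf_pick(ready, 0.0).name)    # B: earliest deadline (6 < 10)
        print(lstf_pick(ready, 0.0).name)   # A: least slack (10-8=2 vs 6-1=5)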

    Scheduling with processing set restrictions : a survey


    OStrich: Fair Scheduling for Multiple Submissions

    Campaign Scheduling is characterized by multiple job submissions issued by multiple users over time. This model suits today's systems well, since most available parallel environments have multiple users sharing a common infrastructure. When scheduling the jobs submitted by various users individually, one crucial issue is to ensure fairness. This work presents a new fair scheduling algorithm called OStrich, whose principle is to maintain a virtual time-sharing schedule in which the same share of processors is assigned to each user. The completion times in the virtual schedule determine the execution order on the physical processors; the campaigns are thus interleaved by OStrich in a fair way. For independent sequential jobs, we show that OStrich guarantees the stretch of a campaign to be proportional to the campaign's size and the total number of users. The stretch measures by what factor a workload is slowed down relative to the time it would take on an unloaded system. The theoretical performance of our solution is assessed by simulation, comparing OStrich to the classical FCFS algorithm on synthetic workload traces generated from two different user profiles, which demonstrates how OStrich benefits both types of users, in contrast to FCFS.
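
    As a rough illustration of the virtual fair-sharing idea described above, the Python sketch below gives each user an equal share of the m processors in a virtual schedule and orders campaigns by their virtual completion times; the names and the single-arrival-time simplification are assumptions of this sketch, not the paper's algorithm.

    def virtual_completion_times(campaigns, m):
        # Each user conceptually receives an equal share of the m processors.
        # A campaign's virtual completion time is its total work divided by that
        # share (all campaigns assumed to arrive at time 0 for simplicity).
        share = m / len(campaigns)
        return {user: sum(jobs) / share for user, jobs in campaigns.items()}

    def fair_interleaving(campaigns, m):
        # Execute jobs in the order of their owners' virtual completion times,
        # a coarse approximation of the OStrich principle described above.
        vct = virtual_completion_times(campaigns, m)
        return [(user, job)
                for user in sorted(campaigns, key=vct.get)
                for job in campaigns[user]]

    if __name__ == "__main__":
        campaigns = {"alice": [3, 3, 3], "bob": [2]}     # job processing times
        print(fair_interleaving(campaigns, m=4))         # bob's short campaign first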

    Shop Scheduling In The Presence Of Batching, Sequence-dependent Setups And Incompatible Job Families Minimizing Earliness And Tardiness Penalties

    The motivation for this research stems from a particular job-shop production environment at a large international communications and information technology company in which electro-mechanical assemblies (EMAs) are produced. The production environment of the EMAs includes the continuous arrival at the job shop of the EMAs (generally called jobs), with distinct due dates, degrees of importance, and routing sequences through the production workstations. Jobs are processed in batches at the workstations, and there are incompatible families of jobs, where jobs from different product families cannot be processed together in the same batch. In addition, there are sequence-dependent setups between batches at the workstations. Most importantly, it is imperative that all product deliveries arrive on time to their customers (internal and external) within their respective delivery time windows. Delivery is allowed outside a time window, but at the expense of a penalty: completing and delivering a job before the start of its time window incurs an inventory holding cost, while delivering a job after its time window incurs a delay or emergency shipping cost. This presents a unique scheduling problem in which a composite earliness-tardiness objective is considered. This research approaches the problem by decomposing the complex job-shop scheduling environment into bottleneck and non-bottleneck resources, with the primary focus on effectively scheduling the bottleneck resource. Specifically, the problem of scheduling jobs with unique due dates on a single workstation under the conditions of batching, sequence-dependent setups, and incompatible job families, in order to minimize weighted earliness and tardiness, is formulated as an integer linear program. This scheduling problem, even in its simplest form, is NP-hard: no polynomial-time algorithm is known to solve it to optimality, especially as the number of jobs increases. As a result, the computational time required to arrive at optimal solutions is not of practical use in industrial settings, where production scheduling decisions need to be made quickly. Therefore, this research explores and proposes new heuristic algorithms to solve this scheduling problem. The heuristics use order review and release strategies in combination with priority dispatching rules, a popular and commonly used class of scheduling algorithms in real-world industrial settings. A computational study is conducted to assess the quality of the solutions generated by the proposed heuristics. The computational results show that, in general, the proposed heuristics produce solutions that are competitive with the optimal solutions, yet in a fraction of the time. The results also show that the proposed heuristics are superior in quality to a set of benchmark algorithms within this same class of heuristics.
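
    The abstract does not give the dispatching rules themselves; the Python sketch below only shows the general shape of such a decision at a single batching workstation with incompatible families, sequence-dependent setups, and an earliness/tardiness penalty. The scoring rule and all names are illustrative assumptions, not the authors' heuristics.

    def pick_next_batch(queues, now, last_family, setup, batch_cap,
                        earliness_w=1.0, tardiness_w=5.0):
        # queues: family -> list of (processing_time, due_date) jobs waiting.
        # setup:  (from_family, to_family) -> sequence-dependent setup time.
        # Batches contain jobs of one family only (incompatible families); the
        # batch is assumed here to finish when its longest job finishes.
        best = (None, float("inf"), [])
        for family, jobs in queues.items():
            if not jobs:
                continue
            batch = sorted(jobs, key=lambda j: j[1])[:batch_cap]   # most urgent first
            finish = now + setup.get((last_family, family), 0.0) + max(p for p, _ in batch)
            # penalize expected earliness and (more heavily) expected tardiness
            score = sum(earliness_w * max(0.0, d - finish) +
                        tardiness_w * max(0.0, finish - d) for _, d in batch)
            if score < best[1]:
                best = (family, score, batch)
        return best[0], best[2]

    if __name__ == "__main__":
        queues = {"F1": [(2.0, 10.0), (3.0, 12.0)], "F2": [(4.0, 6.0)]}
        setup = {("F1", "F2"): 1.5, ("F2", "F1"): 1.5}
        print(pick_next_batch(queues, 0.0, "F1", setup, batch_cap=2))   # picks family F2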

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys and Tutorials.)
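
    As a small, generic illustration of one of the techniques listed above (strict prioritization across traffic classes such as interactive, deadline, and long-running traffic), the Python sketch below drains per-class queues in priority order; the class names and structure are assumptions of this sketch, not a mechanism from the paper.

    from collections import deque

    PRIORITY = ["interactive", "deadline", "long_running"]   # illustrative class names

    class StrictPriorityScheduler:
        # Strict priority: a lower-priority class is served only when every
        # higher-priority queue is empty.
        def __init__(self):
            self.queues = {cls: deque() for cls in PRIORITY}

        def enqueue(self, cls, packet):
            self.queues[cls].append(packet)

        def dequeue(self):
            for cls in PRIORITY:
                if self.queues[cls]:
                    return cls, self.queues[cls].popleft()
            return None   # nothing to send

    if __name__ == "__main__":
        sched = StrictPriorityScheduler()
        sched.enqueue("long_running", "backup-chunk-1")
        sched.enqueue("interactive", "rpc-reply-42")
        print(sched.dequeue())   # ('interactive', 'rpc-reply-42')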

    Cross-dock Scheduling with Known Shipment Unloading Order

    Cross-docking is a logistics strategy that is widely used these days in many industries. Cross-docking takes place in a distribution docking centre and involves trucks and dock doors on the inbound and outbound sides. Products from suppliers are unloaded at inbound doors from incoming trucks, then consolidated, transferred, and loaded into outgoing trucks at outbound doors, with little or no storage in between. We study two scenarios of the cross-docking scheduling problem: scheduling the inbound side with a fixed outbound schedule, and scheduling both the inbound and outbound sides. In the former scenario, we introduce five mixed-integer programming models, with enhanced pre-processing and extensions, to minimize the total number of tardy products. In the latter scenario, we propose new linear mixed-integer programming models in which transportation times between dock doors are considered; the objective in this case is to minimize the maximum lateness of outgoing trucks. In both scenarios, we integrate the unloading order of shipments in incoming trucks into our models. Computational results show that taking advantage of this information helps improve the truck schedules and assess much more accurately the number of tardy products and the lateness.
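
    The models themselves are not given in the abstract; the Python sketch below only illustrates the second objective (maximum lateness of outgoing trucks) for a single inbound door, with the unloading order inside each incoming truck fixed. The timing model and all names are assumptions for illustration, not the paper's formulations.

    def max_outbound_lateness(inbound_trucks, transfer_time, unload_time=1.0):
        # inbound_trucks: trucks in their docking sequence; each truck is a list of
        # (outbound_truck, due_date) shipments in their fixed unloading order.
        # A shipment becomes available unload_time after the previous one and then
        # needs transfer_time to reach its outbound door; an outbound truck departs
        # once its last shipment arrives.
        clock = 0.0
        depart = {}   # outbound truck -> (latest shipment arrival, due date)
        for truck in inbound_trucks:
            for dest, due in truck:
                clock += unload_time
                arrival = clock + transfer_time
                depart[dest] = (max(arrival, depart.get(dest, (0.0, due))[0]), due)
        return max(arr - due for arr, due in depart.values())

    if __name__ == "__main__":
        trucks = [[("out1", 5.0), ("out2", 4.0)],   # unloading order is fixed per truck
                  [("out1", 5.0)]]
        print(max_outbound_lateness(trucks, transfer_time=1.0))   # -1.0: nothing is late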