12 research outputs found

    Complexity of scheduling multiprocessor tasks with prespecified processor allocations

    We investigate the computational complexity of scheduling multiprocessor tasks with prespecified processor allocations. We consider two criteria: minimizing schedule length and minimizing the sum of the task completion times. In addition, we investigate the complexity of problems when precedence constraints or release dates are involved.

    Multiprocessor task scheduling in multistage hybrid flowshops: a genetic algorithm approach

    This paper considers multiprocessor task scheduling in a multistage hybrid flow-shop environment. The objective is to minimize the makespan, that is, the completion time of all the tasks in the last stage. This problem is of practical interest in the textile and process industries. A genetic algorithm (GA) is developed to solve the problem. The GA is tested against a lower bound from the literature as well as against heuristic rules on a test bed comprising 400 problems with up to 100 jobs, 10 stages, and with up to five processors on each stage. For small problems, solutions found by the GA are compared to optimal solutions, which are obtained by total enumeration. For larger problems, optimum solutions are estimated by a statistical prediction technique. Computational results show that the GA is both effective and efficient for the current problem. Test problems are provided on a web site at www.benchmark.ibu.edu.tr/mpt-h; fsp
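    The GA approach described in this abstract can be sketched in miniature. The instance below is illustrative, not from the paper's 400-problem test bed, and the design choices (permutation chromosome, order crossover, swap mutation, a greedy earliest-idle-processors decoder) are common GA ingredients for this problem class rather than the authors' exact operators:

    ```python
    import random

    # Illustrative instance (not the paper's benchmark): 6 jobs, 2 stages.
    # proc[s][j] = processing time of job j at stage s
    # need[s][j] = processors job j needs simultaneously at stage s
    proc = [[4, 2, 6, 3, 5, 1],
            [3, 5, 2, 4, 1, 6]]
    need = [[2, 1, 1, 2, 1, 1],
            [1, 2, 1, 1, 2, 1]]
    machines = [3, 2]          # parallel processors available at each stage

    def place(free, k, p, ready):
        """Greedily grab the k processors that become idle earliest; the task
        starts once the job is ready and all k are free, holding them p units."""
        idx = sorted(range(len(free)), key=lambda i: free[i])[:k]
        start = max([ready] + [free[i] for i in idx])
        for i in idx:
            free[i] = start + p
        return start + p

    def makespan(perm):
        """Decode a chromosome: stage 1 in chromosome order, stage 2 in order
        of stage-1 completion. Returns the completion time of the last task."""
        free1 = [0] * machines[0]
        done1 = []
        for j in perm:
            done1.append((place(free1, need[0][j], proc[0][j], 0), j))
        free2 = [0] * machines[1]
        end = 0
        for ready, j in sorted(done1):
            end = max(end, place(free2, need[1][j], proc[1][j], ready))
        return end

    def order_crossover(p1, p2):
        """OX: keep a slice of parent 1, fill the rest in parent 2's order."""
        a, b = sorted(random.sample(range(len(p1)), 2))
        hole = set(p1[a:b])
        tail = [g for g in p2 if g not in hole]
        return tail[:a] + p1[a:b] + tail[a:]

    def ga(pop_size=30, generations=60, pmut=0.2):
        n = len(proc[0])
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=makespan)
            nxt = pop[:2]                              # elitism
            while len(nxt) < pop_size:
                p1, p2 = random.sample(pop[:10], 2)    # truncation selection
                child = order_crossover(p1, p2)
                if random.random() < pmut:             # swap mutation
                    i, j = random.sample(range(n), 2)
                    child[i], child[j] = child[j], child[i]
                nxt.append(child)
            pop = nxt
        return min(pop, key=makespan)

    best = ga()
    print(best, makespan(best))
    ```

    As in the paper, a sketch like this is judged by the gap between the makespan it finds and a lower bound; here a simple work-based bound (total processor-time per stage divided by the processors at that stage) would serve.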

    A Polynomial Time Approximation Scheme for General Multiprocessor Job Scheduling


    Preemptive open shop scheduling with multiprocessors: polynomial cases and applications

    This paper addresses a multiprocessor generalization of the preemptive open-shop scheduling problem. The set of processors is partitioned into two groups, and the operations of the jobs may require either single processors in either group or simultaneously all processors from the same group. We consider two variants, depending on whether preemptions are allowed at any fractional time point or only at integral time points. We show that the former problem can be solved in polynomial time, and provide sufficient conditions under which the latter problem is tractable. Applications to course scheduling and hypergraph edge coloring are also discussed.
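    For the classical preemptive open shop that this problem generalizes, a well-known result of Gonzalez and Sahni says that, with preemption at arbitrary time points, the optimal makespan equals the larger of the biggest machine load and the biggest job length. A minimal sketch on illustrative data (the matrix below is not from the paper):

    ```python
    # p[i][j] = processing time of job j's operation on machine i
    # (illustrative numbers, not from the paper)
    p = [[3, 0, 2],
         [1, 4, 0],
         [2, 2, 3]]

    machine_load = [sum(row) for row in p]        # total work on each machine
    job_length = [sum(col) for col in zip(*p)]    # total work of each job

    # With preemption at arbitrary time points this lower bound is
    # achievable, so it equals the optimal makespan C_max.
    c_max = max(max(machine_load), max(job_length))
    print(c_max)   # -> 7
    ```

    The integral-preemption variant is harder in general, which is why the abstract offers only sufficient conditions for its tractability.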

    General multiprocessor task scheduling.

    Abstract: Most papers in the scheduling field assume that a job can be processed by only one machine at a time; that is, they use a one-job-on-one-machine model. In many industry settings, this may not be an adequate model. Motivated by human resource planning, diagnosable microprocessor systems, berth allocation, and manufacturing systems that may require several resources simultaneously to process a job, we study the problem with a one-job-on-multiple-machine model. In our model, there are several alternatives that can be used to process a job. In each alternative, several machines must simultaneously process the assigned job. Our purpose is to select an alternative for each job and then to schedule jobs to minimize the completion time of all jobs. In this paper, we provide a pseudopolynomial algorithm to solve the two-machine problem optimally, and a combination of a fully polynomial scheme and a heuristic to solve the three-machine problem. We then extend the results to a general m-machine problem. Our algorithms also provide an effective lower-bounding scheme which lays the foundation for solving the general m-machine problem optimally. Furthermore, our algorithms can also be applied to solve a special case of the three-machine problem in pseudopolynomial time. Both pseudopolynomial algorithms (for the two-machine and three-machine problems) are much more efficient than those in the literature. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 57-74, 1999.
INTRODUCTION
Due to the popularity of just-in-time and total quality management concepts, scheduling has played an important role in satisfying customers' expectations for on-time delivery. In the last four decades, many papers have been published in the scheduling area. There is a common assumption in the scheduling literature of a one-job-on-one-machine pattern. That is, at a given time, each job can be processed on only one machine.
In many industry applications, this may not be an adequate model; namely, a job may be processed simultaneously by several machines. For example, in semiconductor circuit design workforce planning, a design project is to be processed by m persons (a team of people). The project contains n tasks, and each task can be worked on by one of a set of alternatives, where each alternative contains one or more persons working simultaneously on that particular task. For instance, Task 1 can be handled by person 1 and person 2 together, by person 1 and person 3 together, or by person 1 alone. The processing time of each task depends on the group assigned to handle it. A group is formed when a set of people is working on a particular task, but a person may not belong to a fixed group all the time. Our goal is to assign these n tasks to m persons so as to minimize the project finishing time. Other applications can be found in (i) berth allocation problems, where a large vessel may occupy several berths for loading and/or unloading; (ii) diagnosable microprocessor systems (Krawczyk and Kubale [20]), where a job must be performed on parallel processors in order to detect faults; (iii) manufacturing, where a job may need machines, tools, and people simultaneously; and (iv) scheduling a sequence of meetings, where each meeting requires a certain group of people (Dobson and Karmarkar [13]). In all the above examples, one job may need to be processed by several machines simultaneously. In the literature this is called multiprocessor task scheduling (Drozdowski [15]) or a one-job-on-multiple-machine problem (Lee and Cai [24]). We are interested in the problem with a one-job-on-multiple-machine pattern. To describe the problem concisely, we first introduce the notation.
There are n jobs to be processed on m machines. Let N(i) denote the number of alternative machine sets to which job Ji can be assigned, and let ti,I denote the processing time of job Ji if it is assigned to the processors in set I, where I is a set of machines (for example, ti,12 is the processing time of Ji assigned to Processors 1 and 2). In this paper, job and task are used interchangeably, as are machine and processor. For example, we may have four jobs to be processed by three machines. J1 can be processed by one of six alternatives (N(1) = 6): {M1, M2, M3}, {M1, M2}, {M1, M3}, {M2, M3}, {M2}, or {M3}, with corresponding processing times t1,123 = 2, t1,12 = 3, t1,13 = 3, t1,23 = 3, t1,2 = 6, and t1,3 = 8. Therefore, if J1 is processed by the first alternative, {M1, M2, M3}, then its processing time is t1,123 = 2. Our purpose is to schedule jobs with a particular objective function. In general, for the m-machine problem, the maximum possible number of alternative sets for each job is 2^m - 1. Actually, for notational convenience we can always assume that N(i) = 2^m - 1 for all i and let ti,I = ∞ for those I such that job i cannot be processed in parallel by the processors with indices in I. In the above example, t2,12, t2,13, and t2,3 are all equal to ∞. In this paper, we are particularly interested in m = 2 and 3. Hence, for all i, we have N(i) = 3 and 7 for m = 2 and 3, respectively. Two special cases have been studied in the scheduling literature. In the first special case, N(i) = 1 for all i; namely, a specific fixed set of machines is assigned to each job. We call this problem a fixed multiprocessor task scheduling problem.
The second class of problems assumes that each job requires a fixed number of processors working simultaneously, but the particular machines are not specified. In the example above, if only t1,12, t1,13, t1,23, t2,123, t3,1, t3,2, t3,3, t4,1, t4,2, and t4,3 are finite, then the problem is equivalent to one where J1 must be processed by two machines (any two machines simultaneously, with processing time 3), J2 by three machines simultaneously (with processing time 4), and jobs J3 and J4 by only one machine (with processing times 8 and 4, respectively). We call this problem a sized multiprocessor task scheduling problem (Blazewicz, Weglarz, and Drabowski). To refer to the problem under study more precisely, we follow the standard notation used in the scheduling literature. We use Pm|set_j|C_max to denote the general problem of minimizing the makespan of multiprocessor tasks on m parallel machines, where each job can be processed by a set of alternatives and each alternative contains one or more machines used simultaneously. Also, we use Pm|fix_j|C_max to denote the first special case, where the alternative assigned to each job is fixed in advance. The paper is organized in the following way. In Section 1, we study the problem characteristics and provide some optimality properties. Section 2 discusses the two-machine problem; in particular, we provide a pseudopolynomial algorithm with running time O(nT_0) to solve the problem optimally. Section 3 discusses the three-machine problem. We provide a pseudopolynomial algorithm with running time O(nT_0^2) to find an effective lower bound on the optimal makespan and to solve a special case of the problem optimally. Both of our pseudopolynomial algorithms significantly improve previous results in the literature.
We also provide a combination of a fully polynomial scheme and a heuristic method, with time complexity O(n^3/ε^2) and error bound (3/2)(1 + ε), to solve the general three-machine problem. Section 4 extends our results to the m-machine problem. Finally, Section 5 concludes with a summary and a discussion of some future research topics.
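The four-job, three-machine example in the text, in its sized interpretation (J1 needs any two machines for 3 time units, J2 all three machines for 4, J3 any single machine for 8, J4 any single machine for 4), is small enough to solve by exhaustive search over alternative selections and job orders. The list-schedule decoder below is a sketch, not the authors' algorithm; for this instance it attains makespan 12, which matches the obvious lower bound (every machine must carry J2's 4 units, and some machine must also carry J3's 8):

```python
from itertools import permutations, product

# Each job: list of (machine set, processing time) alternatives, taken
# from the sized example in the text.
jobs = [
    [({1, 2}, 3), ({1, 3}, 3), ({2, 3}, 3)],   # J1: any two machines
    [({1, 2, 3}, 4)],                          # J2: all three machines
    [({1}, 8), ({2}, 8), ({3}, 8)],            # J3: any single machine
    [({1}, 4), ({2}, 4), ({3}, 4)],            # J4: any single machine
]

def list_schedule(order, choice):
    """Nonpreemptive list schedule: each job starts as soon as every
    machine in its chosen set is free; returns the makespan."""
    free = {1: 0, 2: 0, 3: 0}       # time at which each machine becomes idle
    for j in order:
        machines, p = jobs[j][choice[j]]
        start = max(free[m] for m in machines)
        for m in machines:
            free[m] = start + p
    return max(free.values())

# Enumerate every alternative selection and every job order.
best = min(list_schedule(order, choice)
           for choice in product(*(range(len(alts)) for alts in jobs))
           for order in permutations(range(len(jobs))))
print(best)   # -> 12
```

List schedules are only a heuristic in general; here the lower-bound argument certifies that 12 is optimal.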

    Least space-time first scheduling algorithm: scheduling complex tasks with hard deadline on parallel machines

    Both time constraints and logical correctness are essential to real-time systems, and failure to specify and observe a time constraint may result in disaster. Two orthogonal issues arise in the design and analysis of real-time systems: one is the specification of the system, and the semantic model describing the properties of real-time programs; the other is the scheduling and allocation of resources that may be shared by real-time program modules. The problem of scheduling tasks with precedence and timing constraints onto a set of processors in a way that minimizes maximum tardiness is here considered. A new scheduling heuristic, Least Space-Time First (LSTF), is proposed for this NP-complete problem. Basic properties of LSTF are explored; for example, it is shown that (1) LSTF dominates Earliest-Deadline-First (EDF) for scheduling a set of tasks on a single processor (i.e., if a set of tasks is schedulable under EDF, it is also schedulable under LSTF); and (2) LSTF is more effective than EDF for scheduling a set of independent simple tasks on multiple processors. Within an idealized framework, theoretical bounds on maximum tardiness for scheduling algorithms in general, and tighter bounds for LSTF in particular, are proven for worst-case behavior. Furthermore, simulation benchmarks are developed, comparing the performance of LSTF with other scheduling disciplines for average-case behavior. Several techniques are introduced to integrate overhead (for example, scheduler and context switch) and more realistic assumptions (such as inter-processor communication cost) in various execution models. A workload generator and symbolic simulator have been implemented for comparing the performance of LSTF (and a variant -- LSTF+) with that of several standard scheduling algorithms. LSTF's execution model, basic theories, and overhead considerations have been defined and developed.
Based upon the evidence, it is proposed that LSTF is a good and practical scheduling algorithm for building predictable, analyzable, and reliable complex real-time systems. There remain some open issues to be explored, such as relaxing some current restrictions and discovering more properties and theorems of LSTF under different models. We strongly believe that LSTF can be a practical scheduling algorithm in the near future.
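The EDF baseline that LSTF is compared against is easy to state for independent jobs on a single machine: sequence by nondecreasing deadline (Jackson's rule), which minimizes maximum lateness. A minimal sketch on made-up data (LSTF itself, with precedence constraints and multiple processors, is beyond this fragment):

```python
# Independent jobs, all released at time 0, one machine:
# (processing_time, deadline) pairs — illustrative data.
jobs = [(3, 5), (2, 4), (4, 12), (1, 7)]

def max_tardiness(order):
    """Run jobs back to back; report the worst completion past its deadline."""
    t, worst = 0, 0
    for p, d in order:
        t += p
        worst = max(worst, t - d)   # clamps at 0 when every deadline is met
    return worst

# Earliest-Deadline-First: sort by deadline (Jackson's rule).
edf = sorted(jobs, key=lambda job: job[1])
print(max_tardiness(edf))                    # -> 0: the set is schedulable
print(max_tardiness(list(reversed(edf))))    # latest-deadline-first misses deadlines
```

The abstract's dominance claim then says that any task set schedulable by EDF on one processor, as above, is also schedulable by LSTF.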

    Task scheduling in VLSI circuit design: algorithm and bounds.

    by Lam Shiu-chung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 107-113). Abstracts in English and Chinese.
    Table of contents: List of Figures; List of Tables
    Chapter 1 Introduction: 1.1 Motivation; 1.2 Task Scheduling Problem and Lower Bound; 1.3 Organization of the Thesis
    Chapter 2 Teamwork-Task Scheduling Problem: 2.1 Problem Statement and Notations; 2.2 Classification of Scheduling; 2.3 Computational Complexity; 2.4 Literature Review (2.4.1 Unrelated Machines Scheduling Environment; 2.4.2 Multiprocessors Scheduling Problem; 2.4.3 Search Algorithms; 2.4.4 Lower Bounds); 2.5 Summary
    Chapter 3 Fundamentals of Genetic Algorithms: 3.1 Initial Inspiration; 3.2 An Elementary Genetic Algorithm (3.2.1 "Genes, Chromosomes and Representations"; 3.2.2 Population Pool; 3.2.3 Evaluation Module; 3.2.4 Reproduction Module; 3.2.5 Genetic Operators: Crossover and Mutation; 3.2.6 Parameters); 3.3 A Brief Note to the Background Theory; 3.4 Key Factors for the Success
    Chapter 4 Tasks Scheduling using Genetic Algorithms: 4.1 Details of Scheduling Problem; 4.2 Chromosome Coding (4.2.1 Job Priority Sequence; 4.2.2 Engineer Priority Sequence; 4.2.3 An Example Chromosome Interpretation); 4.3 Fitness Evaluation; 4.4 Parent Selection; 4.5 Genetic Operators and Reproduction (4.5.1 Job Priority Crossover (JOB-CRX); 4.5.2 Job Priority Mutation (JOB-MUT); 4.5.3 Engineer Priority Mutation (ENG-MUT); 4.5.4 Reproduction: New Population); 4.6 Replacement Strategy; 4.7 The Complete Genetic Algorithm
    Chapter 5 Lower Bound on Optimal Makespan: 5.1 Introduction; 5.2 Definitions and Assumptions (5.2.1 Task Graph; 5.2.2 Graph Partitioning; 5.2.3 Activity and Load Density; 5.2.4 Assumptions); 5.3 Concepts of Lower Bound on the Minimal Time (LBMT) (5.3.1 Previous Bound (LBMTF); 5.3.2 Bound in other form; 5.3.3 Improved Bound (LBMTJR)); 5.4 Lower bound: Task graph reconstruction + LBMTJR (5.4.1 Problem reduction and Assumptions; 5.4.2 Scenario I; 5.4.3 Scenario II; 5.4.4 An Example)
    Chapter 6 Computational Results and Discussions: 6.1 Parameterization of the GA; 6.2 Computational Results; 6.3 Performance Evaluation (6.3.1 Solution Quality; 6.3.2 Computational Complexity); 6.4 Effects of Machines Eligibility; 6.5 Future Direction
    Chapter 7 Conclusion
    Appendix A Tasks data of problem sets in section 6.2: A.1 Problem 1: 19 tasks; A.2 Problem 2: 21 tasks; A.3 Problem 3: 19 tasks; A.4 Problem 4: 23 tasks; A.5 Problem 5: 27 tasks
    Bibliography

    Generalized job shop scheduling: complexity and local search


    Scheduling multiprocessor tasks on three dedicated processors

    No full text
    Blazewicz, J.; Dell'Olmo, P.; Drozdowski, M.; Speranza, M.