
    A DECOMPOSITION-BASED HEURISTIC ALGORITHM FOR PARALLEL BATCH PROCESSING PROBLEM WITH TIME WINDOW CONSTRAINT

    Get PDF
    This study considers a parallel batch processing problem to minimize the makespan under constraints of arbitrary lot sizes, start time windows and incompatible families. We first formulate the problem as a mixed-integer programming model. Because the problem is NP-hard, we develop a decomposition-based heuristic (DH) to obtain near-optimal solutions for large-scale problems when computational time is a concern. A two-dimensional saving function is introduced to quantify the time and capacity space that a batching decision wastes. Computational experiments show that the proposed heuristic performs well and handles large-scale problems within a reasonable computational time. On small-size problems, the DH finds the optimal solution in 94.17% of instances, indicating that it is very effective on such problems. On large-scale problems, it outperforms an existing heuristic from the literature in terms of solution quality.
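    The saving function itself is not spelled out in the abstract; the following minimal Python sketch only illustrates the general idea of scoring a candidate batch by the time and capacity space it leaves unused (the Job fields, the weights and the batch_waste function are illustrative assumptions, not the paper's definition).

        # Hypothetical two-dimensional waste/saving score for a candidate batch.
        from dataclasses import dataclass

        @dataclass
        class Job:
            size: int          # lot size (capacity units)
            proc_time: float   # processing time of the job's family

        def batch_waste(batch, capacity, w_time=1.0, w_cap=1.0):
            """Weighted waste of one candidate batch (lower means a tighter batch).

            Time waste: jobs shorter than the batch's longest job sit idle in the batch.
            Capacity waste: unused capacity units over the batch's processing time.
            """
            batch_time = max(j.proc_time for j in batch)
            time_waste = sum(batch_time - j.proc_time for j in batch)
            cap_waste = (capacity - sum(j.size for j in batch)) * batch_time
            return w_time * time_waste + w_cap * cap_waste

        # Example: prefer the candidate batch with the smaller combined waste.
        b1 = [Job(4, 10.0), Job(3, 9.5)]
        b2 = [Job(4, 10.0), Job(2, 4.0)]
        print(batch_waste(b1, capacity=10), batch_waste(b2, capacity=10))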

    Serial-batch scheduling – the special case of laser-cutting machines

    Get PDF
    The dissertation deals with a problem in the field of short-term production planning, namely the scheduling of laser-cutting machines. The decisions to be made are the grouping of production orders into batches (batching) and the sequencing of these order groups on one or more machines (scheduling). This problem is known in the literature as the "batch scheduling problem" and belongs to the class of combinatorial optimization problems because of the interdependencies between the batching and the scheduling decisions. The concepts and methods used come mainly from production planning, operations research and machine learning.

    Learning Scheduling Algorithms for Data Processing Clusters

    Full text link
    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective, such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to a 2x improvement during periods of high cluster load.
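    As a rough illustration of the reinforcement-learning idea (not Decima itself, which learns over job dependency graphs with scalable neural models), the sketch below trains a one-parameter softmax policy with REINFORCE on a toy single-machine queue; the environment, the single feature and the hyperparameters are invented for illustration.

        # Toy REINFORCE sketch: learn which waiting job to run next so that the
        # total job completion time (the negative of the return) is minimized.
        import numpy as np

        rng = np.random.default_rng(0)
        theta = np.zeros(1)  # single weight on a single feature: job processing time

        def policy_probs(proc_times):
            logits = theta[0] * np.asarray(proc_times, dtype=float)
            logits -= logits.max()
            p = np.exp(logits)
            return p / p.sum()

        def run_episode(proc_times):
            """Schedule all jobs one by one; return summed log-prob gradients and total JCT."""
            jobs, t, total_jct, grads = list(proc_times), 0.0, 0.0, []
            while jobs:
                p = policy_probs(jobs)
                i = rng.choice(len(jobs), p=p)
                grads.append(jobs[i] - float(np.dot(p, jobs)))  # d/d_theta of log softmax
                t += jobs[i]
                total_jct += t
                jobs.pop(i)
            return sum(grads), total_jct

        baseline, lr = None, 1e-3
        for episode in range(2000):
            proc_times = rng.integers(1, 10, size=5).tolist()
            grad_sum, total_jct = run_episode(proc_times)
            ret = -total_jct
            baseline = ret if baseline is None else 0.9 * baseline + 0.1 * ret
            theta += lr * (ret - baseline) * grad_sum  # REINFORCE with a moving baseline
        print("learned weight (negative ~ shortest-job-first):", theta[0])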

    A survey of scheduling problems with setup times or costs

    Get PDF
    Author names used in this publication: C. T. Ng; T. C. E. Cheng. 2007-2008 > Academic research: refereed > Publication in refereed journal. Accepted Manuscript; Published

    A multi objective volleyball premier league algorithm for green scheduling identical parallel machines with splitting jobs

    Get PDF
    Parallel machine scheduling is one of the most commonly studied problems in recent years. In the plastic injection industry, however, where jobs can be split and molds are an important constraint, this classic optimization problem must balance two conflicting objectives: minimizing total tardiness and minimizing total waste. This paper proposes a mathematical model for scheduling parallel machines with job splitting and resource constraints. Two minimization objectives, total tardiness and total waste, are considered simultaneously. The resulting model is a bi-objective integer linear program that is shown to belong to the class of NP-hard optimization problems. A novel Multi-Objective Volleyball Premier League (MOVPL) algorithm is presented for solving this problem. The algorithm extends the Volleyball Premier League (VPL) metaheuristic that we recently introduced with the crowding-distance concept used in NSGA-II. The results are compared with six multi-objective metaheuristics: MOPSO, NSGA-II, MOGWO, MOALO, MOEA/D, and SPEA2. Using five standard metrics and ten test problems, the performance of the Pareto-based algorithms was investigated. The results demonstrate that, in general, the proposed algorithm outperforms the competing algorithms.
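    The crowding-distance measure borrowed from NSGA-II can be sketched as follows; this is the generic computation for one non-dominated front (for example with total tardiness and total waste as the two objectives), not the MOVPL implementation itself.

        # Crowding distance for one non-dominated front of solutions.
        import numpy as np

        def crowding_distance(objectives):
            """objectives: (n_solutions, n_objectives) values of one front."""
            objs = np.asarray(objectives, dtype=float)
            n, m = objs.shape
            dist = np.zeros(n)
            for k in range(m):
                order = np.argsort(objs[:, k])
                dist[order[0]] = dist[order[-1]] = np.inf  # boundary solutions are always kept
                span = objs[order[-1], k] - objs[order[0], k]
                if span == 0:
                    continue
                for i in range(1, n - 1):
                    dist[order[i]] += (objs[order[i + 1], k] - objs[order[i - 1], k]) / span
            return dist

        # Solutions with a larger crowding distance lie in less crowded regions of the
        # front and are preferred when the population is truncated.
        print(crowding_distance([(10, 5), (8, 7), (6, 9), (9, 6)]))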

    Minimizing Cumulative Batch Processing Time for an Industrial Oven Scheduling Problem

    Get PDF
    We introduce the Oven Scheduling Problem (OSP), a new parallel batch scheduling problem that arises in the area of electronic component manufacturing. Jobs need to be assigned to one of several ovens and may be processed simultaneously in one batch if they have compatible requirements. The schedule must respect several constraints concerning the eligibility and availability of ovens, release dates of jobs, setup times between batches, and oven capacities. Running the ovens is highly energy-intensive, and thus the main objective, besides finishing jobs on time, is to minimize the cumulative batch processing time across all ovens. This objective distinguishes the OSP from other batch processing problems, which typically minimize objectives related to makespan, tardiness or lateness. We propose to solve this NP-hard scheduling problem via constraint programming (CP) and integer linear programming (ILP) and present corresponding CP and ILP models. For an experimental evaluation, we introduce a multi-parameter random instance generator to provide a diverse set of problem instances. Using state-of-the-art solvers, we evaluate the quality and compare the performance of our CP and ILP models, which find optimal solutions for many instances. Furthermore, using our models we are able to provide upper bounds for the whole benchmark set, including large-scale instances.
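    A minimal sketch of the batching core of such a model, written here with Google OR-Tools CP-SAT as an assumed tool choice, assigns jobs to candidate batches on a single oven and minimizes the summed batch processing times; oven eligibility, release dates and setup times from the full OSP are deliberately left out.

        # Batching core only: jobs share a batch up to the oven capacity, a batch runs
        # as long as its longest job, and the summed batch processing time is minimized.
        from ortools.sat.python import cp_model

        sizes = [4, 3, 2, 5]   # job sizes (illustrative data)
        proc = [10, 10, 6, 8]  # job processing times
        capacity, n_batches = 8, len(sizes)

        model = cp_model.CpModel()
        x = {(j, b): model.NewBoolVar(f"x_{j}_{b}")
             for j in range(len(sizes)) for b in range(n_batches)}
        p = [model.NewIntVar(0, max(proc), f"p_{b}") for b in range(n_batches)]

        for j in range(len(sizes)):
            model.Add(sum(x[j, b] for b in range(n_batches)) == 1)  # each job in exactly one batch
        for b in range(n_batches):
            model.Add(sum(sizes[j] * x[j, b] for j in range(len(sizes))) <= capacity)
            for j in range(len(sizes)):
                model.Add(p[b] >= proc[j] * x[j, b])  # batch time >= time of every job it contains

        model.Minimize(sum(p))  # cumulative batch processing time
        solver = cp_model.CpSolver()
        if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            for b in range(n_batches):
                jobs = [j for j in range(len(sizes)) if solver.Value(x[j, b])]
                if jobs:
                    print(f"batch {b}: jobs {jobs}, time {solver.Value(p[b])}")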

    Lagrangian approach to minimize makespan of non-identical parallel batch processing machines

    Get PDF
    Advisors: Purushothaman Damodaran. Committee members: Omar Ghrayeb; Murali Krishnamurthi; Christine Nguyen.
    Batch Processing Machines (BPMs) are commonly used in electronics manufacturing, semiconductor manufacturing, and metal working, to name a few. Scheduling these machines is not an easy task; practical considerations and the exponential number of decision variables involved impede schedulers (or decision makers) from making good decisions. This research focuses on minimizing the makespan of a set of non-identical parallel batch processing machines. In order to schedule jobs on these machines, two decisions have to be made. The first is to group jobs into batches such that the machine capacity is not exceeded. The second is to sequence the batches formed on each machine such that the makespan is minimized. The two decisions are intertwined, as the processing time of a batch is determined by the composition of the jobs in it. The problem under study is shown to be NP-hard. A mathematical model from the literature is adopted to develop a solution approach that helps the decision maker make meaningful decisions. The Lagrangian Relaxation approach has been shown to be very effective in solving scheduling problems. Using this decomposition approach, the mathematical model is decomposed and a subgradient approach is used to update the multipliers; two sets of constraints were relaxed to obtain two Lagrangian Relaxation models. Experiments were conducted with data sets from the literature, and the solution quality of the proposed approach was compared with meta-heuristics published in the literature (Particle Swarm Optimization (PSO) and Random Key Genetic Algorithm (RKGA)) and a commercial solver (IBM ILOG CPLEX). On smaller instances (10 and 20 jobs), the proposed approach outperformed PSO and RKGA, while the proposed approach and CPLEX reported the same results. On larger instances (50, 100 and 200 jobs) with two and four machines, the proposed approach was better than PSO whenever the variability in the processing times was smaller, and it generally outperformed RKGA and CPLEX. Out of 200 experiments conducted, the proposed approach found new improved solutions on 34 instances and comparable solutions on 105 instances when compared to PSO, although PSO was much faster than all other approaches on larger problem instances. The experimental study clearly identifies the problem instances on which the proposed approach can find a better solution. The proposed Lagrangian Relaxation solution approach helps schedulers make more informed decisions, and minor modifications allow it to handle other practical considerations (e.g. job ready times, a tardiness objective, etc.). The main contribution of this research is the proposed solution approach, which is effective in solving a class of non-identical batch processing machine problems with better solution quality than existing meta-heuristics. M.S. (Master of Science)
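    The Lagrangian/subgradient mechanics described above can be sketched generically as follows; the relaxed subproblem solver is a placeholder and the toy example is invented, so this illustrates the method rather than the dissertation's actual models.

        # Generic subgradient loop: relaxed constraints g(x) <= 0 are priced into the
        # objective with multipliers that are updated from the observed violations.
        import numpy as np

        def subgradient_loop(solve_relaxed, n_constraints, iters=50, step0=2.0):
            """solve_relaxed(lmbda) -> (Lagrangian value, violations g(x)) at the subproblem optimum."""
            lmbda = np.zeros(n_constraints)
            best_lower_bound = -np.inf
            for k in range(iters):
                value, g = solve_relaxed(lmbda)
                best_lower_bound = max(best_lower_bound, value)  # dual value bounds the optimum from below
                step = step0 / (k + 1)                           # diminishing step size
                lmbda = np.maximum(0.0, lmbda + step * g)        # dual ascent, multipliers stay >= 0
            return best_lower_bound, lmbda

        # Toy usage: minimize x1 + 2*x2 over binary x with the relaxed constraint x1 + x2 >= 1.
        def toy_subproblem(lmbda):
            costs = np.array([1.0, 2.0])
            x = (costs - lmbda[0] < 0).astype(float)  # each x_i set independently once the constraint is priced in
            g = np.array([1.0 - x.sum()])             # violation of x1 + x2 >= 1, written as g(x) <= 0
            return float(costs @ x + lmbda[0] * g[0]), g

        print(subgradient_loop(toy_subproblem, n_constraints=1))  # lower bound reaches the optimum 1.0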

    Column generation for minimizing total completion time in a parallel-batching environment

    Get PDF
    This paper deals with the 1 | p-batch, s_j ≤ b | Σ C_j scheduling problem, where jobs are scheduled in batches on a single machine in order to minimize the total completion time. A size is given for each job, and the total size of each batch cannot exceed a fixed capacity b. A graph-based model is proposed for computing a very effective lower bound based on linear programming; the model, which has an exponential number of variables, is solved by column generation and embedded into both a heuristic price-and-branch algorithm and an exact branch-and-price algorithm. The same model handles parallel-machine problems such as Pm | p-batch, s_j ≤ b | Σ C_j very efficiently. Computational results show that the new lower bound strongly dominates the bounds currently available in the literature, and the proposed heuristic algorithm achieves high-quality solutions on large problems in a reasonable computation time. For the single-machine case, the exact branch-and-price algorithm solves all the tested instances with 30 jobs and a large share of the 40-job instances.
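    A schematic column-generation skeleton in the spirit of this approach is sketched below: a restricted master LP is re-solved, its dual prices are handed to a pricing routine, and columns with negative reduced cost are added until none remain. The master here is a generic set-partitioning LP solved with SciPy, and the pricing routine is a placeholder; the paper's graph-based pricing model is not reproduced.

        # Generic column-generation loop over 0/1 batch columns; each job ends up in exactly one column.
        import numpy as np
        from scipy.optimize import linprog

        def column_generation(initial_columns, initial_costs, rhs, price_out, max_rounds=50):
            """price_out(duals) returns (column, cost) with negative reduced cost, or None at optimality."""
            A = [np.asarray(c, dtype=float) for c in initial_columns]
            costs = list(initial_costs)
            for _ in range(max_rounds):
                res = linprog(c=costs, A_eq=np.column_stack(A), b_eq=rhs,
                              bounds=[(0, None)] * len(costs), method="highs")
                duals = res.eqlin.marginals  # sensitivities of the objective to b_eq, i.e. the dual prices
                new = price_out(duals)
                if new is None:
                    break
                column, cost = new
                A.append(np.asarray(column, dtype=float))
                costs.append(cost)
            return res, A, costs

        # Example: three jobs, singleton batches as initial columns, no pricing step.
        res, A, costs = column_generation(list(np.eye(3)), [3.0, 5.0, 4.0],
                                          rhs=np.ones(3), price_out=lambda duals: None)
        print(res.fun)  # cost of the restricted master over the initial columns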