5,954 research outputs found
A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems
Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job
shop scheduling problems (JSP) is proposed.
Design/methodology/approach: In the algorithm, a number of sub-problems are
constructed by iteratively decomposing the large-scale JSP according to the process route of
each job. The solution of the large-scale JSP is then obtained by iteratively solving the
sub-problems. To improve both the solving efficiency of the sub-problems and the solution
quality, a critical-path-based detection method for multi-bottleneck machines is proposed,
with which the unscheduled operations can be partitioned into bottleneck operations and
non-bottleneck operations. Following the principle of "the bottleneck leads the performance
of the whole manufacturing system" from the Theory of Constraints (TOC), the bottleneck
operations are scheduled by a genetic algorithm for high solution quality, while the
non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency.
Findings: During sub-problem construction, some operations from the previously scheduled
sub-problem are carried over into the successive sub-problem for re-optimization; this
strategy improves the solution quality of the algorithm. When solving the sub-problems,
evaluating each chromosome's fitness by predicting the global scheduling objective value
further improves solution quality.
Research limitations/implications: This research makes several assumptions that reduce the
complexity of the large-scale scheduling problem: the processing route of each job is
predetermined, the processing time of each operation is fixed, there are no machine
breakdowns, and no preemption of operations is allowed. These assumptions should be kept
in mind if the algorithm is applied in an actual job shop.
Originality/value: The research provides an efficient scheduling method for large-scale
job shops and should help the discrete manufacturing industry improve production
efficiency and effectiveness.
Peer Reviewed
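The bottleneck-detection idea can be illustrated with a minimal sketch: given the operations lying on the critical path of a partial schedule, the machines that appear most often on that path are flagged as bottlenecks. The data layout, the `detect_bottlenecks` helper, and the `top_k` cutoff are all illustrative assumptions; the paper's actual critical-path-based detection method is more involved.

```python
from collections import Counter

def detect_bottlenecks(critical_path_ops, top_k=2):
    """Flag as bottlenecks the machines that occur most frequently
    among the operations on the critical path.
    (Hypothetical helper sketching the idea, not the paper's method.)"""
    counts = Counter(op["machine"] for op in critical_path_ops)
    ranked = [m for m, _ in counts.most_common()]
    return set(ranked[:top_k])

# Toy critical path: machine M3 dominates it.
ops = [
    {"job": 1, "machine": "M1"}, {"job": 2, "machine": "M3"},
    {"job": 1, "machine": "M3"}, {"job": 3, "machine": "M3"},
    {"job": 2, "machine": "M2"},
]
print(detect_bottlenecks(ops, top_k=1))  # {'M3'}
```

Operations on the detected machines would then go to the genetic algorithm, and the rest to dispatching rules.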
Experimental Analysis of Algorithms for Coflow Scheduling
Modern data centers face new scheduling challenges in optimizing job-level
performance objectives, where a significant challenge is the scheduling of
highly parallel data flows with a common performance goal (e.g., the shuffle
operations in MapReduce applications). Chowdhury and Stoica introduced the
coflow abstraction to capture these parallel communication patterns, and
Chowdhury et al. proposed effective heuristics to schedule coflows efficiently.
In our previous paper, we considered the strongly NP-hard problem of minimizing
the total weighted completion time of coflows with release dates, and developed
the first polynomial-time scheduling algorithms with O(1)-approximation ratios.
In this paper, we carry out a comprehensive experimental analysis on a
Facebook trace and extensive simulated instances to evaluate the practical
performance of several algorithms for coflow scheduling, including the
approximation algorithms developed in our previous paper. Our experiments
suggest that simple algorithms provide effective approximations of the optimal,
and that the performance of our approximation algorithms is relatively robust,
near optimal, and always among the best compared with the other algorithms, in
both the offline and online settings.
Comment: 29 pages, 8 figures, 11 tables
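One of the simple ordering heuristics in this space can be sketched as follows: a coflow's effective length is set by its most loaded ingress or egress port, and coflows are sequenced by length scaled by weight, a Smith's-rule analogue. The data layout, function names, and unit rate below are illustrative assumptions, not the paper's implementation.

```python
def bottleneck_time(coflow, rate=1.0):
    """Effective length of a coflow: the most loaded port determines
    the earliest time all of its flows can finish (hypothetical units)."""
    in_load, out_load = {}, {}
    for (src, dst, size) in coflow["flows"]:
        in_load[src] = in_load.get(src, 0) + size
        out_load[dst] = out_load.get(dst, 0) + size
    return max(max(in_load.values()), max(out_load.values())) / rate

def weighted_shortest_coflow_first(coflows):
    """Order coflows by bottleneck length divided by weight
    (a Smith's-rule analogue for weighted completion time)."""
    return sorted(coflows, key=lambda c: bottleneck_time(c) / c["weight"])

cfs = [
    {"id": "A", "weight": 1.0, "flows": [(1, 2, 8), (1, 3, 2)]},  # bottleneck 10
    {"id": "B", "weight": 2.0, "flows": [(2, 3, 4)]},             # bottleneck 4
]
order = [c["id"] for c in weighted_shortest_coflow_first(cfs)]
print(order)  # ['B', 'A']
```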
Asymptotically Optimal Approximation Algorithms for Coflow Scheduling
Many modern datacenter applications involve large-scale computations composed
of multiple data flows that need to be completed over a shared set of
distributed resources. Such a computation completes when all of its flows
complete. A useful abstraction for modeling such scenarios is a {\em coflow},
which is a collection of flows (e.g., tasks, packets, data transmissions) that
all share the same performance goal.
In this paper, we present the first approximation algorithms for scheduling
coflows over general network topologies with the objective of minimizing total
weighted completion time. We consider two different models for coflows based on
the nature of individual flows: circuits, and packets. We design
constant-factor polynomial-time approximation algorithms for scheduling
packet-based coflows with or without given flow paths, and circuit-based
coflows with given flow paths. Furthermore, we give an O(log m / log log m)-approximation
polynomial-time algorithm for scheduling circuit-based coflows where flow paths are not
given (here m is the number of network edges).
We obtain our results by developing a general framework for coflow schedules,
based on interval-indexed linear programs, which may extend to other coflow
models and objective functions and may also yield improved approximation bounds
for specific network scenarios. We also present an experimental evaluation of our approach
for circuit-based coflows, which shows a performance improvement of at least 22% on
average over competing heuristics.
Comment: Fixed minor typo
Optimizing production scheduling of steel plate hot rolling for economic load dispatch under time-of-use electricity pricing
Time-of-Use (TOU) electricity pricing provides an opportunity for industrial
users to cut electricity costs. Although many methods for Economic Load
Dispatch (ELD) under TOU pricing in continuous industrial processing have been
proposed, there are still difficulties in batch-type processing since power
load units are not directly adjustable and nonlinearly depend on production
planning and scheduling. In this paper, for hot rolling, a typical batch-type and
energy-intensive process in the steel industry, a production scheduling optimization model
for ELD under TOU pricing is proposed, in which the objective is to minimize electricity
costs while accounting for penalties caused by jumps between adjacent slabs. An
NSGA-II-based multi-objective production scheduling algorithm is developed to obtain
Pareto-optimal solutions, and TOPSIS-based multi-criteria decision-making is then performed
to recommend an optimal solution to facilitate field operation. Experimental results and
analyses show that the proposed method cuts electricity costs in production, especially
when a penalty-score increase within a certain range is allowed. Further analyses show that
the proposed method also contributes to peak-load regulation of the power grid.
Comment: 13 pages, 6 figures, 4 tables
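The TOPSIS step used to pick one schedule off the Pareto front is a standard procedure and can be sketched directly: vector-normalize the decision matrix, apply criterion weights, then score each alternative by its closeness to the ideal solution. The two criteria, their weights, and the numbers below are invented for illustration; they are not from the paper.

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: normalize, weight, then score each alternative by
    relative closeness to the ideal vs. the anti-ideal solution.
    benefit[j] is True if criterion j is to be maximized."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical Pareto solutions: [electricity cost, penalty score], both minimized.
alternatives = [[100.0, 5.0], [120.0, 2.0], [90.0, 9.0]]
scores = topsis(alternatives, weights=[0.6, 0.4], benefit=[False, False])
best = max(range(len(scores)), key=lambda i: scores[i])
```

The alternative with the highest closeness score is recommended to the operator.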
A GPU-accelerated Branch-and-Bound Algorithm for the Flow-Shop Scheduling Problem
Branch-and-Bound (B&B) algorithms are time-intensive tree-based exploration methods for
solving combinatorial optimization problems to optimality. In this
paper, we investigate the use of GPU computing as a major complementary way to
speed up those methods. The focus is put on the bounding mechanism of B&B
algorithms, which is the most time consuming part of their exploration process.
We propose a parallel B&B algorithm based on a GPU-accelerated bounding model.
The proposed approach concentrates on optimizing data-access management to
further improve the performance of the bounding mechanism which uses large and
intermediate data sets that do not completely fit in GPU memory. Extensive experiments
have been carried out on well-known flow-shop scheduling problem (FSP) benchmarks using an
Nvidia Tesla C2050 GPU card. We compared the obtained performance to single-threaded and
multithreaded CPU-based executions. Speedups of up to 100× are achieved for large problem
instances.
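The bounding mechanism being accelerated can be illustrated with a small serial B&B for the permutation flow shop: a partial job order is pruned whenever a lower bound on its completion already meets the best makespan found so far. The machine-load bound and the toy instance below are simplified stand-ins; the paper's bounds, and of course the GPU offloading itself, are much more elaborate.

```python
def branch_and_bound(p):
    """Serial depth-first B&B for permutation flow-shop makespan.
    p[j][m] = processing time of job j on machine m. The pruning test
    (the lower bound) is the part a GPU version would offload."""
    n, n_mach = len(p), len(p[0])
    best = {"val": float("inf"), "order": None}

    def rec(order, c, remaining):
        if not remaining:
            if c[-1] < best["val"]:
                best["val"], best["order"] = c[-1], order
            return
        for j in sorted(remaining):
            # completion times on each machine after appending job j
            c2 = c[:]
            c2[0] += p[j][0]
            for m in range(1, n_mach):
                c2[m] = max(c2[m], c2[m - 1]) + p[j][m]
            rem = remaining - {j}
            # machine-load lower bound: every machine must still process
            # all remaining jobs after its current completion time
            lb = max(c2[m] + sum(p[k][m] for k in rem) for m in range(n_mach))
            if lb < best["val"]:
                rec(order + [j], c2, rem)

    rec([], [0] * n_mach, frozenset(range(n)))
    return best["val"], best["order"]

p = [[3, 2], [1, 4], [2, 3]]  # toy instance: 3 jobs, 2 machines
value, order = branch_and_bound(p)
print(value, order)  # 10 [1, 0, 2]
```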
The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling
Over the past few years, growing global competition has forced the manufacturing industries to upgrade their old production strategies with modern-day approaches. As a result, interest has recently developed in finding an appropriate policy that could enable them to compete with others and emerge as market winners. With these facts in mind, the authors propose in this paper an integrated process planning and scheduling model that inherits the salient features of outsourcing and leagile principles to compete in the existing market scenario. The paper also proposes a model based on leagile principles in which integrated planning management is practiced. In the present work a scheduling problem is considered, with the aim of overall minimization of makespan. The paper shows the relevance of both strategies in enhancing industrial performance in terms of reduced makespan. The authors also propose a new hybrid Enhanced Swift Converging Simulated Annealing (ESCSA) algorithm to solve complex real-time scheduling problems. The proposed algorithm inherits the prominent features of the Genetic Algorithm (GA), Simulated Annealing (SA), and a Fuzzy Logic Controller (FLC), and it reduces the makespan significantly in less computational time and fewer iterations. The efficacy of the proposed algorithm is shown by comparing its results with GA, SA, Tabu, and hybrid Tabu-SA optimization methods.
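Of the three components the ESCSA hybrid inherits, the simulated-annealing core is the easiest to sketch in isolation. Below is a plain SA loop on a deliberately simplified model (independent jobs assigned to identical machines, minimizing makespan); the neighborhood move, cooling schedule, and parameters are generic textbook choices, not the ESCSA design.

```python
import math
import random

def simulated_annealing(times, machines, iters=2000, t0=10.0, alpha=0.995, seed=0):
    """Plain SA for assigning independent jobs to identical machines to
    minimize makespan -- only the annealing core of a GA+SA+fuzzy hybrid,
    on a simplified scheduling model."""
    rng = random.Random(seed)
    assign = [rng.randrange(machines) for _ in times]

    def makespan(a):
        loads = [0.0] * machines
        for j, m in enumerate(a):
            loads[m] += times[j]
        return max(loads)

    cur = makespan(assign)
    best = cur
    t = t0
    for _ in range(iters):
        j = rng.randrange(len(times))       # move: reassign one random job
        old = assign[j]
        assign[j] = rng.randrange(machines)
        new = makespan(assign)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                       # accept (always if improving)
            best = min(best, cur)
        else:
            assign[j] = old                 # reject the move
        t *= alpha                          # geometric cooling schedule
    return best

times = [4.0, 3.0, 3.0, 2.0, 2.0, 2.0]      # total work 16 on 2 machines
best_makespan = simulated_annealing(times, machines=2)
```

In the full ESCSA, a GA population would supply diversification and a fuzzy logic controller would adapt parameters such as the cooling rate.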
An effective MILP-based decomposition algorithm for the scheduling and redesign of flexible job-shop plants
This paper presents a decomposition algorithm for the integrated scheduling and redesign problem of a multistage batch plant dealing with multipurpose units and heterogeneous recipes. First, the procedure solves the scheduling problem considering the existing plant configuration, with the main goal of minimizing the makespan. Then, a second objective of minimizing the number of units utilized, without worsening the makespan achieved in the first stage, is considered. The units released can be reallocated to other compatible processing stages in order to reduce the initial makespan value. To tackle large industrial examples, both the scheduling and redesign problems are solved through a decomposition algorithm which has a MILP model as its core. The procedure is tested on several realistic instances, demonstrating its robustness and applicability.
Fil: Basán, Natalia Paola. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Desarrollo Tecnológico para la Industria Química. Universidad Nacional del Litoral. Instituto de Desarrollo Tecnológico para la Industria Química; Argentina
Fil: Coccola, Mariana Evangelina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Desarrollo Tecnológico para la Industria Química. Universidad Nacional del Litoral. Instituto de Desarrollo Tecnológico para la Industria Química; Argentina
Fil: del Valle, Alejandro García. Universidad da Coruña; España
Fil: Mendez, Carlos Alberto. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Desarrollo Tecnológico para la Industria Química. Universidad Nacional del Litoral. Instituto de Desarrollo Tecnológico para la Industria Química; Argentina
- …