Serial-batch scheduling – the special case of laser-cutting machines
The dissertation deals with a problem in the field of short-term production planning, namely the scheduling of laser-cutting machines. The decisions concern the grouping of production orders into batches (batching) and the sequencing of these batches on one or more machines (scheduling). This problem is known in the literature as the "batch scheduling problem" and, owing to the interdependencies between the batching and scheduling decisions, belongs to the class of combinatorial optimization problems. The concepts and methods used are drawn mainly from production planning, operations research, and machine learning.
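To make the batching/scheduling interplay concrete, the following is a minimal sketch (not taken from the dissertation) of how a serial-batch schedule on a single machine can be evaluated: batches are processed one after another, each batch's processing time is the sum of its jobs' processing times, and a setup is incurred before each batch. The job data, the setup time, and the batch-availability convention are illustrative assumptions.

```python
# Minimal illustration of serial-batch scheduling on one machine:
# a schedule is a sequence of batches, each batch a list of job processing times.
# Batch processing time = sum of its jobs' times; a fixed setup precedes each batch.
# All numbers are made up; batch availability (jobs complete when the batch does) is assumed.

def batch_completion_times(schedule, setup=2.0):
    """Return the completion time of every job, batch by batch."""
    t = 0.0
    completions = []
    for batch in schedule:
        t += setup                            # machine setup before the batch starts
        t += sum(batch)                       # serial batching: jobs processed back to back
        completions.extend([t] * len(batch))  # all jobs in a batch become available together
    return completions

if __name__ == "__main__":
    # Two candidate groupings of the same five orders.
    print(batch_completion_times([[3, 4], [2, 2, 5]]))    # two batches, two setups
    print(batch_completion_times([[3], [4, 2], [2, 5]]))  # three batches, more setup time
```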
funcX: A Federated Function Serving Fabric for Science
Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., the arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
Comment: Accepted to ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
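As an illustration of the federated FaaS model described above, here is a minimal sketch of registering a function and running it on a remote endpoint with the funcX Python SDK. It assumes the FuncXClient interface with register_function, run, and get_result as documented around the HPDC 2020 timeframe; the endpoint UUID is a placeholder and method names may differ across SDK versions.

```python
# Hedged sketch of remote function execution with the funcX SDK (not code from the paper).
# Assumes the funcX Python SDK's FuncXClient with register_function/run/get_result;
# the endpoint UUID below is a placeholder.
from funcx.sdk.client import FuncXClient

def double(x):
    return 2 * x

fxc = FuncXClient()

# Register the function with the cloud-hosted funcX service.
func_id = fxc.register_function(double)

# Submit the function for execution on a specific (placeholder) endpoint.
endpoint_id = "00000000-0000-0000-0000-000000000000"
task_id = fxc.run(21, endpoint_id=endpoint_id, function_id=func_id)

# Retrieve the result once the endpoint has executed the task
# (get_result raises while the task is still pending).
print(fxc.get_result(task_id))  # -> 42
```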
Guest editors’ preface to the special issue devoted to the 2nd International Conference “Numerical Computations: Theory and Algorithms”, June 19–25, 2016, Pizzo Calabro, Italy
This special issue of the Journal of Global Optimization contains twelve high-quality research papers devoted to different aspects of global optimization, including theory, numerical methods, and real-life applications. The papers in this special issue are based on presentations carefully selected by the guest editors from the talks delivered at the 2nd International Conference "Numerical Computations: Theory and Algorithms" (NUMTA), held on June 19–25, 2016 in Pizzo Calabro, Italy (the first NUMTA conference took place in Falerna, Italy, in 2013). NUMTA 2016 was organized by the University of Calabria, Rende (CS), Italy, in cooperation with the Society for Industrial and Applied Mathematics, USA. The guest editors actively participated in the organization of the conference: the Program Committee of NUMTA 2016 was chaired by Yaroslav D. Sergeyev, while Renato De Leone and Anatoly Zhigljavsky served on the Program Committee.
The goal of NUMTA 2016 was to create a multidisciplinary round table for an open discussion on the numerical modeling of nature using traditional and emerging computational paradigms. Participants discussed many aspects of numerical computation and modeling, from the foundations of mathematics and computer science to advanced numerical techniques. A large part of the presentations was dedicated to optimization. Selected papers presented at the conference in the field of numerical analysis and its applications have been published in a special issue of the international journal Applied Mathematics and Computation, Volume 318 (2018). The present special issue, in turn, contains articles dealing with global optimization. Below we give a brief description of the papers included in this special issue.
A survey of scheduling problems with setup times or costs
Author names used in this publication: C. T. Ng; T. C. E. Cheng. 2007-2008 > Academic research: refereed > Publication in refereed journal. Accepted Manuscript. Published.
Modeling Industrial Lot Sizing Problems: A Review
In this paper we give an overview of recent developments in the field of modeling single-level dynamic lot sizing problems. The focus is on modeling various industrial extensions rather than on solution approaches. The timeliness of such a review stems from the growing industry need to solve more realistic and comprehensive production planning problems. First, several basic lot sizing problems are defined. Many extensions of these problems have been proposed, and the research expands in two opposite directions. The first line of research focuses on modeling the operational aspects in more detail; the discussion is organized around five aspects: setups, the characteristics of the production process, inventory, the demand side, and rolling horizons. The second direction is towards more tactical and strategic models in which the lot sizing problem is a core substructure, such as integrated production-distribution planning or supplier selection. Recent advances in both directions are discussed. Finally, we give some concluding remarks and point out interesting areas for future research.
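For readers less familiar with the basic models surveyed here, the following is a minimal sketch (not from the paper) of a single-item capacitated lot sizing problem formulated with the open-source PuLP modeler: binary setup variables, production and inventory variables, and demand-balance and capacity constraints. The demand, cost, and capacity figures are illustrative assumptions.

```python
# Minimal single-item capacitated lot sizing (CLSP) sketch in PuLP (illustrative data).
import pulp

T = 4                                  # planning periods
d = [30, 20, 50, 40]                   # demand per period (assumed)
setup_cost, hold_cost, cap = 100.0, 1.0, 60.0

m = pulp.LpProblem("clsp_sketch", pulp.LpMinimize)
x = [pulp.LpVariable(f"prod_{t}", lowBound=0) for t in range(T)]     # production quantity
s = [pulp.LpVariable(f"inv_{t}", lowBound=0) for t in range(T)]      # end-of-period inventory
y = [pulp.LpVariable(f"setup_{t}", cat="Binary") for t in range(T)]  # setup indicator

# Objective: setup costs plus inventory holding costs.
m += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))

for t in range(T):
    prev_inv = s[t - 1] if t > 0 else 0
    m += prev_inv + x[t] == d[t] + s[t]      # inventory balance
    m += x[t] <= cap * y[t]                  # produce only if set up, within capacity

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in x], pulp.value(m.objective))
```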
Planning and Scheduling Optimization
Although planning and scheduling optimization has been explored in the literature for many years, it remains a hot topic in current scientific research. Changing market trends, globalization, technical and technological progress, and sustainability considerations make it necessary to deal with new optimization challenges in modern manufacturing, engineering, and healthcare systems. This book provides an overview of recent advances in different areas connected with operations research models and other applications of intelligent computing techniques used for planning and scheduling optimization. The wide range of theoretical and practical research findings reported in this book confirms that planning and scheduling is a complex problem present in different industrial sectors and organizations, and it opens promising and dynamic perspectives for research and development.
A Maintenance Planning Framework using Online and Offline Deep Reinforcement Learning
Cost-effective asset management is an area of interest across several industries. Specifically, this paper develops a deep reinforcement learning (DRL) solution to automatically determine an optimal rehabilitation policy for continuously deteriorating water pipes. We approach the problem of rehabilitation planning in both online and offline DRL settings. In online DRL, the agent interacts with a simulated environment of multiple pipes with distinct lengths, materials, and failure rate characteristics. We train the agent using deep Q-learning (DQN) to learn an optimal policy with minimal average costs and reduced failure probability. In offline learning, the agent uses static data, e.g., DQN replay data, to learn an optimal policy via a conservative Q-learning algorithm without further interactions with the environment. We demonstrate that DRL-based policies improve over standard preventive, corrective, and greedy planning alternatives. Additionally, learning from the fixed DQN replay dataset in an offline setting further improves the performance. The results indicate that the existing deterioration profiles of water pipes, consisting of large and diverse state and action trajectories, provide a valuable avenue for learning rehabilitation policies in the offline setting, which can be further fine-tuned using the simulator.
Comment: Published in Neural Computing and Applications (2023), 12 pages, 8 figures
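To make the offline step concrete, below is a minimal sketch (not the authors' code) of the conservative Q-learning idea on top of a standard DQN temporal-difference loss: the TD target uses a frozen target network, and a CQL-style regularizer pushes down Q-values over all actions while pushing up Q-values of the actions actually logged in the replay data. Network sizes, the batch of random transitions, and the penalty weight are illustrative assumptions.

```python
# Hedged sketch: one conservative Q-learning (CQL) update on a toy replay batch.
# State/action dimensions and the random "replay data" are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA, ALPHA = 4, 3, 0.99, 1.0   # ALPHA weights the CQL penalty

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# A fake replay batch standing in for the static DQN replay dataset.
s = torch.randn(32, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (32, 1))
r = torch.randn(32, 1)
s2 = torch.randn(32, STATE_DIM)
done = torch.zeros(32, 1)

q_sa = q_net(s).gather(1, a)  # Q(s, a) for the logged actions
with torch.no_grad():
    target = r + GAMMA * (1 - done) * target_net(s2).max(dim=1, keepdim=True).values
td_loss = nn.functional.mse_loss(q_sa, target)

# CQL regularizer: logsumexp over all actions minus Q of the dataset action,
# which discourages overestimating actions never seen in the offline data.
cql_penalty = (torch.logsumexp(q_net(s), dim=1, keepdim=True) - q_sa).mean()

loss = td_loss + ALPHA * cql_penalty
opt.zero_grad()
loss.backward()
opt.step()
```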
Bi-Criteria Batching and Scheduling in Hybrid Flow Shops
In this research, a bi-criteria batching and scheduling problem is investigated in hybrid flow shop environments, where unrelated parallel machines with different capacities and processing eligibilities run simultaneously in some stages. The objective is to minimize a linear combination of the total weighted completion time and the total weighted tardiness. The first criterion favors the producer's interest by minimizing work-in-process inventory, inventory holding cost, and energy consumption and by maximizing machine utilization, while the second favors the customers' interest by maximizing service level and delivery speed. In particular, this research disregards the group technology assumptions (GTAs) by allowing pre-determined groups of jobs to be split into inconsistent batches in order to improve operational efficiency. A comparison between the group scheduling and batch scheduling approaches reveals the outstanding performance of the batch scheduling approach. As a result, contrary to the GTAs, jobs belonging to a group may be processed as batches on more than one machine, although not all machines may be capable of processing all jobs. A sequence- and machine-dependent setup time is required between any two consecutively scheduled batches belonging to different groups. Based on manufacturing company policy, desired lower bounds on batch sizes are imposed on the number of jobs assigned to batches. Although the direction in which all jobs move through the production line is the same, some jobs may skip some stages. Furthermore, to reflect real industry requirements, job release times and machine availability times are considered to be dynamic, meaning that not all machines and jobs are available at the beginning of the planning horizon.
The problem is formulated with the help of four mixed-integer linear programming (MILP) models. Two of the four MILP models are formulated as two integrated phases, i.e., batching and scheduling phases, with respect to the precedence constraints between each pair of job batches and/or the position concept within batches. The optimal combination of batch compositions of the groups is determined in the batching phase, while the optimal assignment and sequence of batches on machines and the sequence of jobs within batches are determined in the scheduling phase, subject to a set of operational constraints. A batch composition of a group corresponding to a particular stage, determined in the batching phase of the MILP model, represents the number of batches assigned to the group as well as the number and type of jobs belonging to each batch of that group. Since the first and second MILP models lead to an unmanageable solution space, a relaxed MILP model, which allocates one and only one job to each batch of each group in each stage, can be developed to focus on the non-dominated solution space. The optimal solutions of the MILP models and the relaxed MILP model are equal if and only if the optimal solution of the relaxed MILP model does not violate the desired lower bounds on batch sizes. Since the relaxed MILP model cannot guarantee the optimal solution of the MILP models, a third MILP model is developed by integrating the batching and scheduling phases. This MILP model eliminates an exhaustive enumeration of combinations of batch compositions of all groups in all stages.
Although the third MILP model converges to the optimal solution more slowly than the relaxed MILP model, it guarantees finding the optimal solution of the first and second MILP models. A comparison of the four MILP models shows the superior performance of the third MILP model. However, since the problem is strongly NP-hard, it is not possible to find its optimal solution within a reasonable time as the problem size grows from small to medium to large, even with the relaxed MILP model or the fourth MILP model. Therefore, several meta-heuristic algorithms based upon basic local search, basic population-based search, and hybridizations of local and population-based searches are developed, which move back and forth between the batching and scheduling phases. Tabu Search (TS) is implemented as a basic local search algorithm, while Tabu Search Path-Relinking (TS PR) is implemented as a local search algorithm enhanced with a population-based structure. TS is incorporated into the framework of path-relinking to exploit information on good solutions. The TS PR algorithm comprises several distinguishing features, including relinking procedures that effectively explore trajectories connecting elite solutions and methods for choosing the reference solution. Particle Swarm Optimization (PSO) is implemented as a basic population-based algorithm, while Particle Swarm Optimization enhanced with a local search algorithm (PSO LSA) is developed to realize the benefits of batching and, consequently, enhance the quality of solutions.
Since the positions of a job in different stages of a hybrid flow shop are interdependent in batch scheduling, a meta-heuristic algorithm that does not capture these interdependencies loses efficacy. In order to capture this interdependency, non-, partial-, complete-, and stage-based interdependency strategies are developed. In the stage-based interdependency strategy, a complete sequence covering all stages is determined gradually, stage by stage. An initial solution finding mechanism is developed to trigger the search into the solution space and generate an initial population. The performances of these algorithms are compared to each other in order to identify which algorithm(s) outperforms the others. In addition, the performance of the best algorithm(s) is evaluated with respect to a tight lower bound obtained from a branch-and-price (B&P) algorithm. The B&P algorithm uses Dantzig-Wolfe decomposition (DWD) to divide the original problem into a master problem and several sub-problems (SPs) corresponding to each stage. The original problem is decomposed into the SPs by three DWDs corresponding to the three MILP models. Although applying the DWD technique to the first and second MILP models excludes an exhaustive enumeration of combinations of batch compositions of all groups in all stages and, as a result, makes the SPs easier to solve than the original problem, the SPs remain strongly NP-hard because of the enormous number of combinations of batch compositions of all groups in each stage. However, the DWD technique corresponding to the relaxed MILP model not only drastically reduces the number of variables and constraints in the SPs, but also eliminates the batching phase of the first and second MILP models.
Decomposing the original problem based on the relaxed MILP model and implementing the B&P algorithm cannot guarantee optimal solutions or tight lower bounds unless the number of violations of the desired lower bounds on batch sizes is insignificant. Therefore, the third MILP model is decomposed by DWD so that the B&P algorithm is capable of finding tight lower bounds even for large-size instances of the problem. A comparison between the lower bounds obtained from the B&P algorithm and CPLEX reveals the impressive performance of the B&P algorithm, particularly for large-size problems. The evaluation of the best algorithms against the tight lower bounds produced by the B&P algorithm uncovers the outstanding performance of the hybrid algorithms compared to the results obtained from CPLEX.
Keywords: Dantzig-Wolfe Decomposition, Mixed-Integer Linear Programming Model, Branch-and-Price Optimization Algorithm, Sequence- and Machine-Dependent Setup Time, Column Generation, Group Scheduling, Particle Swarm Optimization, Batching and Scheduling, Hybrid Flow Shop, Tabu Search, Desired Lower Bounds on Batch Sizes, Bi-Criteria Objective, Path-Relinking
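As a small illustration of the bi-criteria objective used throughout this work, the following sketch (not from the dissertation, and far simpler than the hybrid flow shop models it studies) minimizes a weighted combination of total weighted completion time and total weighted tardiness for a few jobs on a single machine, using disjunctive big-M sequencing constraints in PuLP. Job data, weights, and the trade-off parameter are illustrative assumptions.

```python
# Single-machine sketch of the bi-criteria objective
#   lam * sum(w_j * C_j) + (1 - lam) * sum(w_j * T_j)
# with big-M disjunctive sequencing. All data are illustrative; the dissertation's
# models are far richer (batches, stages, machine eligibility, setups).
import pulp

p = [3, 5, 2, 6]          # processing times (assumed)
d = [6, 9, 5, 14]         # due dates (assumed)
w = [1, 2, 1, 3]          # job weights (assumed)
lam = 0.5                 # trade-off between completion time and tardiness
n, M = len(p), sum(p)     # sum of processing times is a valid big-M here

m = pulp.LpProblem("bicriteria_sketch", pulp.LpMinimize)
C = [pulp.LpVariable(f"C_{j}", lowBound=0) for j in range(n)]   # completion times
T = [pulp.LpVariable(f"T_{j}", lowBound=0) for j in range(n)]   # tardiness
x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
     for i in range(n) for j in range(n) if i < j}              # 1 if i precedes j

m += pulp.lpSum(w[j] * (lam * C[j] + (1 - lam) * T[j]) for j in range(n))

for j in range(n):
    m += C[j] >= p[j]             # a job cannot finish before its own processing time
    m += T[j] >= C[j] - d[j]      # tardiness definition (T_j >= 0 via lower bound)
for i in range(n):
    for j in range(i + 1, n):
        m += C[j] >= C[i] + p[j] - M * (1 - x[i, j])   # active if i precedes j
        m += C[i] >= C[j] + p[i] - M * x[i, j]         # active if j precedes i

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(sorted(range(n), key=lambda j: C[j].value()), pulp.value(m.objective))
```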
Lot-Sizing Problem for a Multi-Item Multi-level Capacitated Batch Production System with Setup Carryover, Emission Control and Backlogging using a Dynamic Program and Decomposition Heuristic
Wagner and Whitin (1958) developed an algorithm to solve the dynamic Economic Lot-Sizing Problem (ELSP), which is widely applied in inventory control, production planning, and capacity planning. The original algorithm runs in O(T^2) time, where T is the number of periods of the problem instance. Afterward, a few linear-time algorithms were developed to solve lot-sizing problems under the Wagner-Whitin (WW) cost structure; examples include the ELSP and the equivalent Single Machine Batch-Sizing Problem (SMBSP). This dissertation revisits the algorithms for ELSPs and SMBSPs under the WW cost structure, presents a new efficient linear-time algorithm, and compares the developed algorithm against comparable ones in the literature. The developed algorithm employs both list and stack data structures, a completely different approach from the rest of the algorithms for ELSPs and SMBSPs. Analysis of the developed algorithm shows that it executes a smaller number of basic operations and hence improves CPU time by up to 51.40% for ELSPs and 29.03% for SMBSPs. It can be concluded that the new algorithm is faster than existing algorithms for both ELSPs and SMBSPs. Lot-sizing decisions are crucial because they help the manufacturer determine the quantity and the time to produce an item at minimum cost. The efficiency and productivity of a system depend heavily on the right choice of lot sizes. Therefore, developing and improving solution procedures for lot-sizing problems is key. This dissertation addresses the classical Multi-Level Capacitated Lot-Sizing Problem (MLCLSP) and an extension of the MLCLSP with setup carryover, backlogging, and emission control. An item-based Dantzig-Wolfe (DW) decomposition technique with an embedded Column Generation (CG) procedure is used to solve the problem. The original problem is decomposed into a master problem and a number of subproblems, which are solved using a dynamic programming approach. Since the subproblems are solved independently, their solutions often become infeasible for the master problem. A multi-step iterative Capacity Allocation (CA) heuristic is used to tackle this infeasibility, and a Linear Programming (LP) based improvement procedure is used to refine the solutions obtained from the heuristic. A comparative study of the proposed heuristic for the first problem (MLCLSP) is conducted, and the results demonstrate that the proposed heuristic provides a smaller optimality gap than those reported in the literature. The Setup Carryover Assignment Problem (SCAP), which consists of determining the setup carryover plan of multiple items for a given lot size over a finite planning horizon, is modelled as a problem of finding a Maximum Weighted Independent Set (MWIS) in a chain of cliques. The SCAP is formulated using clique constraints, and it is proved that the incidence matrix of the SCAP has a totally unimodular structure and that the LP relaxation of the proposed SCAP formulation always provides an integer optimal solution. Moreover, an alternative proof that the relaxed ILP guarantees an integer solution is presented in this dissertation. Thus, the SCAP and this special case of the MWIS in a chain of cliques are solvable in polynomial time.
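For context, here is a minimal sketch (not the dissertation's linear-time algorithm) of the classical O(T^2) Wagner-Whitin recursion: F[t] is the minimum cost of covering demand in periods 1..t, and producing in period j for periods j..t incurs one setup plus holding costs for the demand carried forward. The demand, setup, and holding figures are illustrative assumptions.

```python
# Classical O(T^2) Wagner-Whitin dynamic program (sketch; illustrative data).
# F[t] = min over j <= t of F[j-1] + setup[j] + holding cost of serving d[j..t] from period j.

def wagner_whitin(d, setup, hold):
    """d[i], setup[i], hold[i]: demand, setup cost, per-unit holding cost of period i (0-based)."""
    T = len(d)
    F = [0.0] + [float("inf")] * T           # F[0] = 0: nothing to cover yet
    for j in range(1, T + 1):                # candidate production period j (1-based)
        cost = setup[j - 1]                  # cost of serving periods j..t from period j
        carry = 0.0                          # cumulative per-unit holding cost since period j
        for t in range(j, T + 1):
            cost += carry * d[t - 1]         # hold period t's demand from j until t
            F[t] = min(F[t], F[j - 1] + cost)
            carry += hold[t - 1]             # one more period of holding for later demands
    return F[T]

if __name__ == "__main__":
    # Four periods with equal setup and holding costs (assumed data).
    print(wagner_whitin(d=[30, 20, 50, 40], setup=[100] * 4, hold=[1] * 4))  # -> 260.0
```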