
    Sheet-Metal Production Scheduling Using AlphaGo Zero

    This work investigates the applicability of a reinforcement learning (RL) approach, specifically AlphaGo Zero (AZ), to optimizing sheet-metal (SM) production schedules with respect to tardiness and material waste. SM production scheduling is a complex job shop scheduling problem (JSSP) with dynamic operation times, routing flexibility and supplementary constraints. SM production systems are capable of processing a large number of highly heterogeneous jobs simultaneously. While very large relative to the JSSP literature, the SM-JSSP instances investigated in this work are small relative to the SM production reality. Given the high dimensionality of the SM-JSSP, computation of an optimal schedule is not tractable, and simple heuristic solutions often deliver bad results. We use AZ to selectively search the solution space. To this end, a single-player AZ version is pretrained using supervised learning on schedules generated by a heuristic, fine-tuned using RL, and evaluated through comparison with a heuristic baseline and Monte Carlo Tree Search. It is shown that AZ outperforms the other approaches. The work's scientific contribution is twofold: on the one hand, a novel scheduling problem is formalized such that it can be tackled using RL approaches; on the other hand, it is demonstrated that AZ can be successfully modified to provide a solution for the problem at hand, whereby a new line of research into real-world applications of AZ is opened.
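    The single-player AZ setup above treats scheduling as a sequential game: each move appends one job to the schedule. A minimal sketch of the tree-search side of such a scheme, a flat UCB1/rollout search on a hypothetical single-machine tardiness instance (not the paper's network-guided MCTS or its SM-JSSP model):

```python
import math
import random

# Hypothetical toy instance: (processing_time, due_date) per job.
JOBS = [(4, 5), (2, 3), (6, 14), (3, 6)]

def tardiness(seq):
    """Total tardiness of a complete job sequence."""
    t = total = 0
    for j in seq:
        p, d = JOBS[j]
        t += p
        total += max(0, t - d)
    return total

def rollout(seq, remaining, rng):
    """Finish the partial sequence with a random completion."""
    rest = list(remaining)
    rng.shuffle(rest)
    return tardiness(seq + rest)

def mcts_choose(seq, remaining, rng, n_sim=200, c=1.4):
    """One-step UCB1 bandit over the next job, scored by random rollouts
    (reward = negative tardiness, so higher is better)."""
    visits = {a: 0 for a in remaining}
    value = {a: 0.0 for a in remaining}
    for i in range(1, n_sim + 1):
        a = max(remaining, key=lambda x: float("inf") if visits[x] == 0
                else value[x] / visits[x] + c * math.sqrt(math.log(i) / visits[x]))
        r = -rollout(seq + [a], [x for x in remaining if x != a], rng)
        visits[a] += 1
        value[a] += r
    return max(remaining, key=lambda a: visits[a])   # most-visited action

def mcts_schedule(seed=0):
    seq, remaining = [], list(range(len(JOBS)))
    rng = random.Random(seed)
    while remaining:
        a = mcts_choose(seq, remaining, rng)
        seq.append(a)
        remaining.remove(a)
    return seq, tardiness(seq)
```

AZ replaces the random rollouts with a learned value/policy network; the pretraining-on-heuristic-schedules step described above amounts to initializing that network before RL fine-tuning.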

    Scheduling Algorithms: Challenges Towards Smart Manufacturing

    Collecting, processing, analyzing, and deriving knowledge from large-scale real-time data is now realized with the emergence of Artificial Intelligence (AI) and Deep Learning (DL). The breakthrough of Industry 4.0 lays a foundation for intelligent manufacturing. However, the implementation challenges of scheduling algorithms in the context of smart manufacturing have not yet been comprehensively studied. The purpose of this study is to show the scheduling issues that need to be considered in the smart manufacturing paradigm. To attain this objective, a literature review is conducted in five stages using the Publish or Perish tool across sources such as Scopus, PubMed, Crossref, and Google Scholar. As a result, the first contribution of this study is a critical analysis of existing production scheduling algorithms' characteristics and limitations from the viewpoint of smart manufacturing. The other contribution is to suggest the best strategies for selecting scheduling algorithms in a real-world scenario.

    Reinforcement Learning on Job Shop Scheduling Problems Using Graph Networks

    This paper presents a novel approach for job shop scheduling problems using deep reinforcement learning. To account for the complexity of the production environment, we employ graph neural networks to model the various relations within production environments. Furthermore, we cast the JSSP as a distributed optimization problem in which learning agents are individually assigned to resources, which allows for higher flexibility with respect to changing production environments. The proposed distributed RL agents, used to optimize production schedules for single resources, run together with a co-simulation framework of the production environment to obtain the required amount of data. The approach is applied to a multi-robot environment and a complex production scheduling benchmark environment. The initial results underline the applicability and performance of the proposed method.
    Comment: 8 pages, pre-print
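    The per-resource decomposition described above can be pictured as one decision maker per machine, each seeing only its local queue. A minimal sketch with rule-based stand-ins for the learned agents (all names and the time-stepped driver are illustrative assumptions, not the paper's framework):

```python
from collections import namedtuple

Job = namedtuple("Job", "jid duration release")

class MachineAgent:
    """Per-resource decision maker that only observes its own queue."""
    def __init__(self, rule):
        self.queue = []        # jobs routed to this machine
        self.free_at = 0
        self.rule = rule       # priority function over queued jobs

    def act(self, now):
        """If idle, pick and start one released job; return its id or None."""
        ready = [j for j in self.queue if j.release <= now]
        if self.free_at > now or not ready:
            return None
        job = min(ready, key=self.rule)
        self.queue.remove(job)
        self.free_at = now + job.duration
        return job.jid

def run(agents, horizon):
    """Step a shared clock; each agent decides independently each tick,
    standing in for the co-simulation loop that feeds the learners."""
    log = []
    for m, _ in enumerate(agents):
        pass
    for t in range(horizon):
        for m, agent in enumerate(agents):
            jid = agent.act(t)
            if jid is not None:
                log.append((t, m, jid))
    return log
```

In the paper's setting, each agent's rule would be a learned policy conditioned on graph-network features rather than a fixed priority function.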

    Artificial Intelligence to Solve Production Scheduling Problems in Real Industrial Settings: Systematic Literature Review

    This literature review examines the increasing use of artificial intelligence (AI) in manufacturing systems, in line with the principles of Industry 4.0 and the growth of smart factories. AI is essential for managing the complexities in modern manufacturing, including machine failures, variable orders, and unpredictable work arrivals. This study, conducted using the Scopus and Web of Science databases and bibliometric tools, has two main objectives. First, it identifies trends in AI-based scheduling solutions and the most common AI techniques. Second, it assesses the real impact of AI on production scheduling in real industrial settings. This study shows that particle swarm optimization, neural networks, and reinforcement learning are the most widely used techniques to solve scheduling problems. AI solutions have reduced production costs, increased energy efficiency, and improved scheduling in practical applications. AI is increasingly critical in addressing the evolving challenges in contemporary manufacturing environments.

    Reinforcement Learning Approach for Multi-Agent Flexible Scheduling Problems

    Scheduling plays an important role in automated production. Its impact can be found in various fields such as the manufacturing industry, the service industry and the technology industry. A scheduling problem (NP-hard) is the task of finding a sequence of job assignments on a given set of machines with the goal of optimizing a defined objective. Methods such as operations research, dispatching rules, and combinatorial optimization have been applied to scheduling problems, but none guarantees finding the optimal solution. The recent development of Reinforcement Learning has shown success in sequential decision-making problems. This research presents a Reinforcement Learning approach for scheduling problems. In particular, this study delivers an OpenAI Gym environment with search-space reduction for Job Shop Scheduling Problems and provides a heuristic-guided Q-Learning solution with state-of-the-art performance for Multi-agent Flexible Job Shop Problems.
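    To make the Q-learning framing concrete: the state can be how many operations of each job have been scheduled, the action which job advances next, and the terminal reward the negative makespan. A minimal tabular sketch on a hypothetical 2-job, 2-machine instance (plain Q-learning only; the paper's heuristic guidance, search-space reduction, and Gym environment are not reproduced):

```python
import random
from collections import defaultdict

# Hypothetical instance: each job is an ordered list of (machine, duration).
JOBS = [[(0, 3), (1, 2)],
        [(1, 2), (0, 4)]]
TOTAL_OPS = sum(len(j) for j in JOBS)

def makespan(actions):
    """Semi-active schedule implied by a sequence of job choices."""
    next_op = [0] * len(JOBS)
    job_ready = [0] * len(JOBS)
    mach_free = defaultdict(int)
    for j in actions:
        m, d = JOBS[j][next_op[j]]
        start = max(job_ready[j], mach_free[m])
        job_ready[j] = mach_free[m] = start + d
        next_op[j] += 1
    return max(job_ready)

def legal(state):
    return [j for j in range(len(JOBS)) if state[j] < len(JOBS[j])]

def train(episodes=3000, alpha=0.5, eps=0.3, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = defaultdict(float)   # (state, action) -> value
    for _ in range(episodes):
        state, actions = (0,) * len(JOBS), []
        while sum(state) < TOTAL_OPS:
            acts = legal(state)
            a = rng.choice(acts) if rng.random() < eps else max(acts, key=lambda j: Q[state, j])
            actions.append(a)
            nxt = tuple(s + (j == a) for j, s in enumerate(state))
            target = -makespan(actions) if sum(nxt) == TOTAL_OPS \
                     else max(Q[nxt, j] for j in legal(nxt))
            Q[state, a] += alpha * (target - Q[state, a])
            state = nxt
    return Q

def greedy(Q):
    """Roll out the learned policy greedily and return its makespan."""
    state, actions = (0,) * len(JOBS), []
    while sum(state) < TOTAL_OPS:
        a = max(legal(state), key=lambda j: Q[state, j])
        actions.append(a)
        state = tuple(s + (j == a) for j, s in enumerate(state))
    return makespan(actions)
```

On this instance the greedy sequences that interleave the two jobs reach makespan 7, while scheduling either job to completion first yields 11; the learned policy picks an interleaving.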

    Smart digital twin for ZDM-based job-shop scheduling

    The growing digitization of manufacturing processes is revolutionizing the production job-shop by leading it toward the Smart Manufacturing (SM) paradigm. For a process to be smart, it must combine a given blend of data technologies, information and knowledge that enable it to perceive its environment and to autonomously perform actions that maximize its chances of success in its assigned tasks. Of all the different ways leading to this transformation, both the generation of virtual replicas of processes and the application of artificial intelligence (AI) techniques provide a wide range of possibilities whose exploration is today a far from negligible source of opportunities to increase industrial companies' competitiveness. As a complex manufacturing process, production order scheduling in the job-shop is a natural scenario in which to implement these technologies. This research work presents an initial conceptual smart digital twin (SDT) framework for scheduling job-shop orders in a zero-defect manufacturing (ZDM) environment. The SDT virtually replicates the job-shop scheduling problem to simulate it and, based on the deep reinforcement learning (DRL) methodology, trains a prescriber agent and a process monitor. This simulation and training setting will facilitate analysis, optimization, defect and failure avoidance and, in short, decision making, to improve job-shop scheduling.
    The research that led to these results received funding from the European Union H2020 Programme under grant agreement No. 825631, Zero-Defect Manufacturing Platform (ZDMP), and grant agreement No. 958205, Industrial Data Services for Quality Control in Smart Manufacturing (i4Q), and from the Spanish Ministry of Science, Innovation and Universities under grant agreement RTI2018-101344-B-I00, "Optimisation of zero-defects production technologies enabling supply chains 4.0 (CADS4.0)".
    Serrano Ruiz, JC.; Mula, J.; Poler, R. (2021). Smart digital twin for ZDM-based job-shop scheduling. IEEE, pp. 510-515. https://doi.org/10.1109/MetroInd4.0IoT51437.2021.9488473

    An End-to-End Reinforcement Learning Approach for Job-Shop Scheduling Problems Based on Constraint Programming

    Constraint Programming (CP) is a declarative programming paradigm that allows for modeling and solving combinatorial optimization problems, such as the Job-Shop Scheduling Problem (JSSP). While CP solvers manage to find optimal or near-optimal solutions for small instances, they do not scale well to large ones, i.e., they require long computation times or yield low-quality solutions. Therefore, real-world scheduling applications often resort to fast, handcrafted, priority-based dispatching heuristics to find a good initial solution and then refine it using optimization methods. This paper proposes a novel end-to-end approach to solving scheduling problems by means of CP and Reinforcement Learning (RL). In contrast to previous RL methods, tailored for a given problem by including procedural simulation algorithms, complex feature engineering, or handcrafted reward functions, our neural-network architecture and training algorithm merely require a generic CP encoding of some scheduling problem along with a set of small instances. Our approach leverages existing CP solvers to train an agent learning a Priority Dispatching Rule (PDR) that generalizes well to large instances, even from separate datasets. We evaluate our method on seven JSSP datasets from the literature, showing its ability to find higher-quality solutions for very large instances than obtained by static PDRs and by a CP solver within the same time limit.
    Comment: To be published at ICAPS 202
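    The static PDRs used here as baselines are simple list schedulers: repeatedly commit the highest-priority job's next operation at its earliest feasible start. A minimal sketch with two classic rules on a hypothetical instance (the paper's learned PDR and CP encoding are not reproduced):

```python
from collections import defaultdict

# Hypothetical 3-job, 2-machine JSSP: each job is an ordered list of (machine, duration).
JOBS = [[(0, 3), (1, 2)],
        [(1, 4), (0, 1)],
        [(0, 2), (1, 3)]]

def dispatch(priority):
    """List scheduling driven by a priority dispatching rule; lower score first."""
    next_op = [0] * len(JOBS)
    job_ready = [0] * len(JOBS)
    mach_free = defaultdict(int)
    for _ in range(sum(len(j) for j in JOBS)):
        ready = [j for j in range(len(JOBS)) if next_op[j] < len(JOBS[j])]
        j = min(ready, key=lambda x: priority(x, next_op[x]))
        m, d = JOBS[j][next_op[j]]
        start = max(job_ready[j], mach_free[m])
        job_ready[j] = mach_free[m] = start + d
        next_op[j] += 1
    return max(job_ready)

spt = lambda j, o: JOBS[j][o][1]                     # shortest processing time first
mwkr = lambda j, o: -sum(d for _, d in JOBS[j][o:])  # most work remaining first
```

Different rules yield different makespans even on this tiny instance (here MWKR beats SPT), which is exactly the gap a learned PDR tries to close.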

    Generating Dispatching Rules for the Interrupting Swap-Allowed Blocking Job Shop Problem Using Graph Neural Network and Reinforcement Learning

    The interrupting swap-allowed blocking job shop problem (ISBJSSP) is a complex scheduling problem that can realistically model many manufacturing planning and logistics applications by addressing both the lack of storage capacity and unforeseen production interruptions. When subjected to random disruptions due to machine malfunction or maintenance, industrial production settings often adopt dispatching rules to enable adaptive, real-time re-scheduling, rather than traditional methods that require costly re-computation on the new configuration every time the problem condition changes dynamically. To generate dispatching rules for the ISBJSSP, we introduce a dynamic disjunctive graph formulation characterized by nodes and edges subjected to continuous deletions and additions. This formulation enables the training of an adaptive scheduler utilizing graph neural networks and reinforcement learning. Furthermore, a simulator is developed to simulate interruption, swapping, and blocking in the ISBJSSP setting. Employing a set of reported benchmark instances, we conduct a detailed experimental study on ISBJSSP instances with a range of machine shutdown probabilities to show that the scheduling policies generated can outperform, or are at least as competitive as, existing dispatching rules with predetermined priority. This study shows that the ISBJSSP, which requires real-time adaptive solutions, can be scheduled efficiently with the proposed method when production interruptions occur with random machine shutdowns.
    Comment: 14 pages, 10 figures. Supplementary Material not included
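    For readers unfamiliar with the (static) disjunctive graph underlying this formulation: conjunctive arcs follow each job's fixed operation order, while a disjunctive edge links every pair of operations competing for the same machine; a schedule corresponds to orienting all disjunctive edges acyclically. A minimal constructor on a hypothetical instance (the dynamic deletions/additions described above are not modeled):

```python
from itertools import combinations

# Hypothetical JSSP: an operation is identified by (job, index) and defined
# as (machine, duration).
JOBS = [[(0, 3), (1, 2)],
        [(1, 4), (0, 1)]]

def disjunctive_graph(jobs):
    """Return (conjunctive arcs, undirected disjunctive edges)."""
    conjunctive, disjunctive = [], []
    by_machine = {}
    for j, ops in enumerate(jobs):
        for o, (m, _) in enumerate(ops):
            by_machine.setdefault(m, []).append((j, o))
            if o + 1 < len(ops):
                # precedence within the job's fixed routing
                conjunctive.append(((j, o), (j, o + 1)))
    for ops in by_machine.values():
        # every pair of operations on a shared machine competes for it
        disjunctive.extend(combinations(ops, 2))
    return conjunctive, disjunctive
```

In the dynamic formulation above, machine shutdowns and swaps would delete and re-add nodes and edges of this graph while the GNN-based scheduler keeps re-reading it.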

    Solving large flexible job shop scheduling instances by generating a diverse set of scheduling policies with deep reinforcement learning

    The Flexible Job Shop Scheduling Problem (FJSSP) has been extensively studied in the literature, and multiple approaches have been proposed within the heuristic, exact, and metaheuristic methods. However, industry's demand to respond to disruptive events in real time has created the need to generate new schedules within a few seconds. Under this constraint, only dispatching rules (DRs) are capable of generating schedules, even though their quality can be improved. To improve the results, recent methods model the FJSSP as a Markov Decision Process (MDP) and employ reinforcement learning to learn a policy that assigns operations to machines. Nonetheless, there is still room for improvement, particularly on the larger FJSSP instances that are common in real-world scenarios. Therefore, the objective of this paper is to propose a method capable of robustly solving large instances of the FJSSP. To achieve this, we propose a novel way of modeling the FJSSP as an MDP using graph neural networks. We also present two methods to make inference more robust: generating a diverse set of scheduling policies that can be parallelized, and limiting them using DRs. We have tested our approach on synthetically generated instances and various public benchmarks and found that it outperforms dispatching rules and achieves better results than three other recent deep reinforcement learning methods on larger FJSSP instances.
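    The "diverse set of policies" idea can be illustrated in the DR setting: in an FJSSP each decision picks both a job and one of its alternative machines, so a portfolio of (job rule, machine rule) pairs yields independent schedulers whose best result is kept. A minimal sketch on a hypothetical instance (fixed rules standing in for the paper's learned, DR-limited policies):

```python
from collections import defaultdict
from itertools import product

# Hypothetical FJSSP: each operation lists alternative (machine, duration) pairs.
JOBS = [[[(0, 3), (1, 4)], [(1, 2)]],
        [[(1, 3)], [(0, 2), (1, 4)]]]

def simulate(job_rule, mach_rule):
    """Greedy construction: a job rule picks which job advances next,
    a machine rule picks among that operation's alternative machines."""
    next_op = [0] * len(JOBS)
    job_ready = [0] * len(JOBS)
    mach_free = defaultdict(int)
    for _ in range(sum(len(j) for j in JOBS)):
        ready = [j for j in range(len(JOBS)) if next_op[j] < len(JOBS[j])]
        j = job_rule(ready, next_op, job_ready)
        m, d = mach_rule(JOBS[j][next_op[j]], mach_free, job_ready[j])
        start = max(job_ready[j], mach_free[m])
        job_ready[j] = mach_free[m] = start + d
        next_op[j] += 1
    return max(job_ready)

# A small portfolio; runs are independent, so a real implementation could
# evaluate them in parallel and keep the best schedule.
job_rules = [
    lambda ready, nxt, jr: min(ready, key=lambda j: jr[j]),                  # earliest-ready job
    lambda ready, nxt, jr: min(ready, key=lambda j: nxt[j] - len(JOBS[j])),  # most ops remaining
]
mach_rules = [
    lambda alts, mf, r: min(alts, key=lambda a: max(mf[a[0]], r) + a[1]),    # earliest finish
    lambda alts, mf, r: min(alts, key=lambda a: a[1]),                       # shortest duration
]

def best_makespan():
    return min(simulate(jr, mr) for jr, mr in product(job_rules, mach_rules))
```

The paper's approach replaces these handcrafted rules with sampled policies from a trained graph-network agent, using DRs only to prune implausible choices.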