29 research outputs found
Optimising discrete event simulation models using a reinforcement learning agent
A reinforcement learning agent has been developed to determine optimal operating policies in a multi-part serial line. The agent interacts with a discrete event simulation model of a stochastic production facility. This study identifies issues important to the simulation developer who wishes to optimise a complex simulation or develop a robust operating policy. Critical parameters pertinent to 'tuning' an agent quickly and enabling it to rapidly learn the system were investigated.
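A minimal sketch of the agent-simulation loop this abstract describes: a tabular Q-learning agent repeatedly queries a toy discrete-event model and updates its value table. The buffer dynamics, reward, and the tuning parameters (alpha, gamma, epsilon) are illustrative assumptions, not the study's actual model.

```python
import random

def simulate_step(state, action, rng):
    """Toy stochastic serial-line event: state is the buffer level (0..5).
    Action 1 releases a part if one is buffered; action 0 holds.
    Dynamics and reward are hypothetical, not the paper's facility."""
    served = action if state > 0 else 0
    arrival = 1 if rng.random() < 0.5 else 0
    buffer = min(5, state - served + arrival)
    reward = served - 0.2 * buffer        # throughput minus holding cost
    return buffer, reward

def q_learn(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: the agent repeatedly queries the simulation,
    observes (next state, reward), and updates its Q-table."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(6) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(50):               # 50 simulated events per episode
            a = rng.choice((0, 1)) if rng.random() < eps \
                else max((0, 1), key=lambda x: q[(s, x)])
            s2, r = simulate_step(s, a, rng)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

q = q_learn()
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(6)}
```

The "tuning" question the abstract raises is visible here: alpha, gamma, epsilon, and the episode budget all trade learning speed against policy quality.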
Evolving Neural Networks to Solve a Two-Stage Hybrid Flow Shop Scheduling Problem with Family Setup Times
We present a novel strategy to solve a two-stage hybrid flow shop scheduling problem with family setup times. The problem is derived from an industrial case. Our strategy applies NeuroEvolution of Augmenting Topologies, a genetic algorithm that evolves arbitrary neural networks capable of estimating job sequences. The algorithm is coupled with a discrete-event simulation model, which evaluates different network configurations and provides training signals. We compare the performance and computational efficiency of the proposed concept with other solution approaches. Our investigations indicate that NeuroEvolution of Augmenting Topologies can compete with state-of-the-art approaches in terms of solution quality and outperform them in terms of computational efficiency.
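The simulate-and-score training loop this abstract describes can be sketched in miniature. NEAT itself evolves both weights and network topology; the stand-in below merely hill-climbs a single weight of a fixed linear priority rule and scores each candidate with a parallel-machine makespan evaluation. Job times, the machine count, and the acceptance rule are all illustrative assumptions.

```python
import random

def makespan(seq, times, machines=2):
    """Evaluate a job sequence on identical parallel machines: a crude
    stand-in for the paper's discrete-event flow shop evaluation."""
    free = [0.0] * machines
    for j in seq:
        i = free.index(min(free))         # next job goes to earliest-free machine
        free[i] += times[j]
    return max(free)

def evolve_priority(times, gens=200, seed=0):
    """Hill-climb one weight w of the priority rule 'sort jobs by
    w * processing time'; each candidate is scored by simulation.
    NEAT additionally mutates topology; this only shows the loop."""
    rng = random.Random(seed)

    def score(weight):
        seq = sorted(range(len(times)), key=lambda j: weight * times[j])
        return makespan(seq, times)

    w = 1.0
    best = score(w)
    for _ in range(gens):
        cand = w + rng.gauss(0, 0.5)      # mutate the candidate weight
        s = score(cand)
        if s <= best:                     # keep non-worse candidates
            w, best = cand, s
    return w, best
```

For the toy instance `[4, 3, 2, 2, 1]` on two machines, a negative weight (longest-processing-time ordering) reaches the optimal makespan of 6, which the search finds quickly.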
Multi-Agent Systems and an Application to Cargo Transportation Problems
Multi-Agent Systems (MAS) are a subfield of Artificial Intelligence in which two or more communicating agents interact within an environment. This paper introduces MAS, which have been applied successfully to industrial problems for the past ten years, together with their subfields and search algorithms, and reviews the literature on MAS applications. A model for solving the cargo transportation problem with MAS is then developed: randomly generated cargo transportation problems of different scales are modelled with MAS. The results show that MAS reaches optimal solutions in short computation times.
A deep reinforcement learning based scheduling policy for reconfigurable manufacturing systems
Reconfigurable manufacturing systems (RMS) are one of the trending paradigms toward the digitalised factory. Their rapid reconfiguring capability makes finding a far-sighted scheduling policy challenging. Reinforcement learning is well equipped for finding highly efficient production plans that yield near-optimal future rewards. To minimise reconfiguration actions, this paper uses a deep reinforcement learning agent that makes autonomous decisions with a built-in discrete event simulation model of a generic RMS. Aiming at completing the assigned order lists while minimising reconfiguration actions, the agent outperforms the conventional first-in-first-out dispatching rule after self-learning.
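For context, the first-in-first-out baseline mentioned above can be sketched as a minimal dispatching loop: the next free machine always takes the oldest waiting order. The order times and identical-machine setup are assumptions for illustration, not the paper's RMS model.

```python
import heapq
from collections import deque

def fifo_makespan(orders, machines):
    """First-in-first-out dispatching: each order is a processing time;
    whenever a machine frees up, it takes the oldest waiting order.
    Returns the makespan of the whole order list."""
    queue = deque(orders)                 # FIFO queue of orders
    free = [0.0] * machines               # machine free times (min-heap)
    heapq.heapify(free)
    while queue:
        t = heapq.heappop(free)           # earliest-free machine
        heapq.heappush(free, t + queue.popleft())
    return max(free)
```

A learned policy can beat this rule because FIFO ignores order lengths and any reconfiguration cost between dissimilar orders.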
A Reinforcement Learning Motivated Algorithm for Process Optimization
In process scheduling problems there are several processes and resources. Any process consists of several tasks, and there may be precedence constraints among them. In our paper we consider a special case where the precedence constraints form short disjoint (directed) paths. This model occurs frequently in practice but, as far as we know, is considered very rarely in the literature. The goal is to find a good resource allocation (schedule) to minimize the makespan. The problem is known to be strongly NP-hard, and such hard problems are often solved by heuristic methods. We found only one paper closely related to our topic; it proposes the heuristic method HH. We propose a new heuristic called QLM, which is inspired by reinforcement learning methods from the area of machine learning. As we did not find appropriate benchmark problems for the investigated model, we created such inputs and made exhaustive comparisons between HH, QLM, and an exact solver using CPLEX. We note that a heuristic method can give a "near optimal" solution very fast, while an exact solver provides the optimal solution but may need a huge amount of time to find it. In our computational evaluation we found that our heuristic is more effective than HH and finds the optimal solution in many cases, and very fast.
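For illustration, a plain greedy list scheduler for such chain-structured instances might look like the following. This is a generic longest-remaining-work baseline, not the paper's HH or QLM heuristics; the instance encoding (one duration list per precedence path) is an assumption.

```python
import heapq

def schedule_chains(chains, machines):
    """Greedy list scheduler: chains is a list of duration lists, each
    inner list being one precedence path (its tasks must run in order).
    Returns the makespan on the given number of identical machines."""
    free = [0.0] * machines               # machine free times (min-heap)
    heapq.heapify(free)
    ready = {i: 0.0 for i in range(len(chains))}  # earliest start per chain
    pos = {i: 0 for i in range(len(chains))}      # next task index per chain
    done, total, makespan = 0, sum(len(c) for c in chains), 0.0
    while done < total:
        # pick the chain with the most remaining work (LPT-like rule)
        cand = [i for i in ready if pos[i] < len(chains[i])]
        i = max(cand, key=lambda c: sum(chains[c][pos[c]:]))
        start = max(heapq.heappop(free), ready[i])
        finish = start + chains[i][pos[i]]
        heapq.heappush(free, finish)
        ready[i] = finish                 # successor task can't start earlier
        pos[i] += 1
        done += 1
        makespan = max(makespan, finish)
    return makespan
```

Even this simple rule shows why the problem is hard: on the instance `[[3, 2], [2, 2], [1]]` with two machines it yields a makespan of 6, while the optimum is 5.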
Job Shop Scheduling Problem: an Overview
Job-shop scheduling is one of the most important industrial activities, especially in manufacturing planning. The problem complexity has increased along with the complexity of operations and product mix. To solve this problem, numerous approaches incorporating discrete event simulation methodology have been developed. The scope and purpose of this paper is to present a survey covering most of the solution techniques for the Job Shop Scheduling (JSS) problem. A classification of these techniques is proposed: traditional techniques and advanced techniques. The traditional techniques for solving JSS cannot fully cope with global competition and rapidly changing customer requirements. Simulation and Artificial Intelligence (AI) have proven to be excellent strategic tools for scheduling problems in general and JSS in particular. The paper describes some AI techniques used in manufacturing systems. Finally, future trends are briefly proposed.
Using a reinforcement learning approach in a discrete event manufacturing system
Supplement to the printed edition; this supplement is available online only.
Generating Dispatching Rules for the Interrupting Swap-Allowed Blocking Job Shop Problem Using Graph Neural Network and Reinforcement Learning
The interrupting swap-allowed blocking job shop problem (ISBJSSP) is a complex scheduling problem that can realistically model many manufacturing planning and logistics applications by addressing both the lack of storage capacity and unforeseen production interruptions. Subjected to random disruptions due to machine malfunction or maintenance, industry production settings often choose to adopt dispatching rules to enable adaptive, real-time re-scheduling, rather than traditional methods that require costly re-computation on the new configuration every time the problem condition changes dynamically. To generate dispatching rules for the ISBJSSP, we introduce a dynamic disjunctive graph formulation characterized by nodes and edges subjected to continuous deletions and additions. This formulation enables the training of an adaptive scheduler utilizing graph neural networks and reinforcement learning. Furthermore, a simulator is developed to simulate interruption, swapping, and blocking in the ISBJSSP setting. Employing a set of reported benchmark instances, we conduct a detailed experimental study on ISBJSSP instances with a range of machine shutdown probabilities to show that the generated scheduling policies can outperform, or are at least as competitive as, existing dispatching rules with predetermined priority. This study shows that the ISBJSSP, which requires real-time adaptive solutions, can be scheduled efficiently with the proposed method when production interruptions occur with random machine shutdowns.
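The dynamic disjunctive-graph formulation can be illustrated with a minimal sketch: conjunctive arcs follow each job's operation order, disjunctive edges pair operations sharing a machine, and an interruption deletes a node together with its incident arcs. The plain-set encoding below is a stand-in for illustration, not the paper's GNN-ready representation.

```python
def build_disjunctive_graph(jobs):
    """jobs: one operation list per job, each operation = (machine, time).
    Nodes are (job, step) pairs. Conjunctive arcs follow job order;
    disjunctive edges pair operations that share a machine."""
    conj, disj = set(), set()
    by_machine = {}
    for j, ops in enumerate(jobs):
        for s, (m, _) in enumerate(ops):
            if s > 0:
                conj.add(((j, s - 1), (j, s)))
            by_machine.setdefault(m, []).append((j, s))
    for nodes in by_machine.values():
        for a in range(len(nodes)):
            for b in range(a + 1, len(nodes)):
                disj.add(frozenset((nodes[a], nodes[b])))
    return conj, disj

def remove_node(conj, disj, node):
    """Dynamic update: a machine malfunction interrupting an operation
    deletes its node and every arc or edge touching it."""
    return ({e for e in conj if node not in e},
            {e for e in disj if node not in e})

# Two jobs, two machines: job 0 visits machine 0 then 1, job 1 the reverse.
conj, disj = build_disjunctive_graph([[(0, 3), (1, 2)], [(1, 4), (0, 1)]])
```

Continuous deletions and additions of this kind are what the abstract's "dynamic" formulation captures, letting the learned scheduler react to shutdowns without re-solving from scratch.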
Reconfigurable manufacturing system scheduling: a deep reinforcement learning approach
Reconfigurable Manufacturing Systems (RMS) bring new possibilities for meeting demand fluctuations while, at the same time, challenging scheduling efficiency. This paper presents a novel approach that finds a dynamic control policy for the multi-product RMS scheduling problem via a group of deep reinforcement learning agents. These teamed agents, embedded with a shared value decomposition network, aim to minimise the makespan of a constantly updating order group by guiding a group of automated guided vehicles that move machine modules, raw materials, and finished products inside the system.