Reinforcement Learning on Job Shop Scheduling Problems Using Graph Networks
This paper presents a novel approach for job shop scheduling problems using
deep reinforcement learning. To account for the complexity of production
environment, we employ graph neural networks to model the various relations
within production environments. Furthermore, we cast the JSSP as a distributed
optimization problem in which learning agents are individually assigned to
resources which allows for higher flexibility with respect to changing
production environments. The proposed distributed RL agents used to optimize
production schedules for single resources are running together with a
co-simulation framework of the production environment to obtain the required
amount of data. The approach is applied to a multi-robot environment and a
complex production scheduling benchmark environment. The initial results
underline the applicability and performance of the proposed method. (Comment: 8 pages, pre-print)
Cooperative Multi-Agent Systems from the Reinforcement Learning Perspective -- Challenges, Algorithms, and an Application
Reinforcement Learning has established itself as a framework that
allows an autonomous agent to automatically acquire -- in a
trial-and-error manner -- a behavior policy based on a
specification of the desired behavior of the system.
In a multi-agent system, however, the decentralization of the
control and observation of the system among independent agents
has a significant impact on learning and its complexity.
In this survey talk, we briefly review the foundations of
single-agent reinforcement learning, point to the merits and
challenges of applying it in a multi-agent setting, and illustrate
its potential in the context of an application from the field
of manufacturing control and scheduling.
Application of Reinforcement Learning to Multi-Agent Production Scheduling
Reinforcement learning (RL) has received attention in recent years from agent-based researchers because it can be applied to problems where autonomous agents learn to select proper actions for achieving their goals based on interactions with their environment. Each time an agent performs an action, the environment's response, as indicated by its new state, is used by the agent to reward or penalize its action. The agent's goal is to maximize the total amount of reward it receives over the long run. Although there have been several successful examples demonstrating the usefulness of RL, its application to manufacturing systems has not been fully explored. The objective of this research is to develop a set of guidelines for applying the Q-learning algorithm to enable an individual agent to develop a decision-making policy for use in agent-based production scheduling applications such as dispatching rule selection and job routing. For the dispatching rule selection problem, a single machine agent employs the Q-learning algorithm to develop a decision-making policy on selecting the appropriate dispatching rule from among three given dispatching rules. In the job routing problem, a simulated job shop system is used for examining the implementation of the Q-learning algorithm for use by job agents when making routing decisions in such an environment. Two factorial experiment designs for studying the settings used to apply Q-learning to the single machine dispatching rule selection problem and the job routing problem are carried out. This study not only investigates the main effects of this Q-learning application but also provides recommendations for factor settings and useful guidelines for future applications of Q-learning to agent-based production scheduling.
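The interaction loop this abstract describes -- an agent observes a state, picks one of three dispatching rules, and updates its value estimates from the resulting reward -- can be sketched with tabular Q-learning. This is a minimal, hypothetical illustration only: the toy single-machine environment, the state coding (capped queue length), the tardiness reward, and the rule names (SPT, FIFO, EDD) are assumptions for demonstration, not the paper's actual setup.

```python
import random
from collections import defaultdict

# Hypothetical dispatching rules; the paper's three rules are not specified here.
RULES = ["SPT", "FIFO", "EDD"]  # shortest processing time, first-in-first-out, earliest due date

def dispatch(queue, rule):
    """Return the index of the next job in the queue under the chosen rule."""
    if rule == "SPT":
        return min(range(len(queue)), key=lambda i: queue[i]["proc"])
    if rule == "EDD":
        return min(range(len(queue)), key=lambda i: queue[i]["due"])
    return 0  # FIFO: the job that arrived first

def run_episode(q, eps=0.1, alpha=0.1, gamma=0.9, n_jobs=8, seed=None):
    """One episode of tabular Q-learning on a toy single-machine problem."""
    rng = random.Random(seed)
    queue = [{"proc": rng.randint(1, 9), "due": rng.randint(5, 40)} for _ in range(n_jobs)]
    clock, total = 0, 0.0
    while queue:
        state = min(len(queue), 5)  # coarse state: queue length, capped
        if rng.random() < eps:
            a = rng.randrange(len(RULES))                             # explore
        else:
            a = max(range(len(RULES)), key=lambda i: q[(state, i)])   # exploit
        job = queue.pop(dispatch(queue, RULES[a]))
        clock += job["proc"]
        reward = -max(0, clock - job["due"])  # penalise tardiness
        total += reward
        nxt = min(len(queue), 5)
        best_next = max(q[(nxt, i)] for i in range(len(RULES))) if queue else 0.0
        q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
    return total

q_table = defaultdict(float)
for ep in range(200):
    run_episode(q_table, seed=ep)
```

After training, the greedy action per state gives the learned rule-selection policy; the factorial experiments the abstract mentions would vary parameters such as `alpha`, `gamma`, and `eps`.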
Artificial Intelligence to Solve Production Scheduling Problems in Real Industrial Settings: Systematic Literature Review
This literature review examines the increasing use of artificial intelligence (AI) in manufacturing systems, in line with the principles of Industry 4.0 and the growth of smart factories. AI is essential for managing the complexities of modern manufacturing, including machine failures, variable orders, and unpredictable work arrivals. This study, conducted using the Scopus and Web of Science databases and bibliometric tools, has two main objectives. First, it identifies trends in AI-based scheduling solutions and the most common AI techniques. Second, it assesses the real impact of AI on production scheduling in real industrial settings. This study shows that particle swarm optimization, neural networks, and reinforcement learning are the most widely used techniques to solve scheduling problems. AI solutions have reduced production costs, increased energy efficiency, and improved scheduling in practical applications. AI is increasingly critical in addressing the evolving challenges in contemporary manufacturing environments.
Scheduling Algorithms: Challenges Towards Smart Manufacturing
Collecting, processing, analyzing, and deriving knowledge from large-scale real-time data is now realized with the emergence of Artificial Intelligence (AI) and Deep Learning (DL). The breakthrough of Industry 4.0 lays a foundation for intelligent manufacturing. However, the implementation challenges of scheduling algorithms in the context of smart manufacturing have not yet been comprehensively studied. The purpose of this study is to show the scheduling issues that need to be considered in the smart manufacturing paradigm. To attain this objective, a literature review is conducted in five stages using the Publish or Perish tool across sources such as Scopus, PubMed, Crossref, and Google Scholar. As a result, the first contribution of this study is a critical analysis of existing production scheduling algorithms' characteristics and limitations from the viewpoint of smart manufacturing. The other contribution is to suggest the best strategies for selecting scheduling algorithms in a real-world scenario.
Coordination of Supply Webs Based on Dispositive Protocols
Many curricula in information systems, including at the master level, exist today. However, a strong need for new approaches and new curricula still exists, especially in the European area. The paper discusses a modern curriculum in information systems at the master level that is currently under development in the Socrates/Erasmus project MOCURIS. The curriculum is oriented towards students of engineering schools at technical universities. The proposed approach takes into account integration trends in the European area as well as the transformation of industrial economics into knowledge-based digital economics. The paper presents the main characteristics of the proposed curriculum, discusses the curriculum development techniques used in the project MOCURIS, and describes the architecture of the proposed curriculum and the body of knowledge it provides.
Towards Standardising Reinforcement Learning Approaches for Production Scheduling Problems
Recent years have seen a rise in interest in terms of using machine learning,
particularly reinforcement learning (RL), for production scheduling problems of
varying degrees of complexity. The general approach is to break down the
scheduling problem into a Markov Decision Process (MDP), whereupon a simulation
implementing the MDP is used to train an RL agent. Since existing studies rely
on (sometimes) complex simulations for which the code is unavailable, the
experiments presented are hard, or, in the case of stochastic environments,
impossible to reproduce accurately. Furthermore, there is a vast array of RL
designs to choose from. To make RL methods widely applicable in production
scheduling and demonstrate their strengths for industry, the standardisation of
model descriptions - both production setup and RL design - and of validation
schemes is a prerequisite. Our contribution is threefold: First, we standardise
the description of production setups used in RL studies based on established
nomenclature. Secondly, we classify RL design choices from existing
publications. Lastly, we propose recommendations for a validation scheme
focusing on reproducibility and sufficient benchmarking.
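The decomposition this abstract describes -- casting the scheduling problem as an MDP and training an RL agent against a simulation of it -- can be sketched as a gym-style environment interface. This is an illustrative assumption, not the paper's benchmark: the class name, the toy single-machine dynamics, and the completion-time reward are all invented for demonstration, and seeding is included because the abstract stresses reproducibility.

```python
import random

class SingleMachineSchedulingMDP:
    """Toy scheduling MDP: at each step, choose which waiting job to run next."""
    def __init__(self, n_jobs=5, seed=0):
        self.n_jobs, self.seed = n_jobs, seed

    def reset(self):
        # Fixed seed makes episodes reproducible, as the abstract advocates.
        rng = random.Random(self.seed)
        self.jobs = [rng.randint(1, 9) for _ in range(self.n_jobs)]  # processing times
        self.clock = 0
        return tuple(self.jobs)  # observation: remaining jobs' processing times

    def step(self, action):
        self.clock += self.jobs.pop(action)   # run the chosen job to completion
        reward = -self.clock                  # objective: minimise total completion time
        done = not self.jobs
        return tuple(self.jobs), reward, done

# Usage: a random-policy rollout; any RL agent could be plugged in instead.
env = SingleMachineSchedulingMDP(n_jobs=4, seed=42)
obs, done, total = env.reset(), False, 0
while not done:
    action = random.randrange(len(obs))
    obs, reward, done = env.step(action)
    total += reward
```

Standardising such an interface (observation, action, reward, seeding) is what would let different RL designs be benchmarked against the same production setup.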