
    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, the flocking behavior of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. Different optimization problems arise in different areas of car engine management, including calibration, control systems, fault diagnosis, and modeling. In this paper, we review the state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms to engine calibration, the optimization of engine control systems, engine fault diagnosis, and the optimization and modeling of different engine components. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategies, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune systems.
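    As a concrete illustration of the annealing metaphor mentioned in this abstract, the following is a minimal simulated-annealing sketch; the quadratic "calibration" objective and all parameter values are hypothetical stand-ins for illustration, not taken from the survey.

```python
import math
import random

def simulated_annealing(objective, x0, temp=10.0, cooling=0.95, steps=500):
    """Minimize `objective` starting from x0 by simulated annealing."""
    random.seed(0)  # reproducible demo
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    for _ in range(steps):
        # Propose a random neighbour; accept if better, or with a
        # temperature-dependent probability if worse.
        candidate = x + random.uniform(-1.0, 1.0)
        fc = objective(candidate)
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling  # cool down: less and less uphill exploration
    return best_x, best_f

# Toy stand-in for an engine-calibration objective (hypothetical):
# cost grows as a tuning parameter moves away from its sweet spot 12.5.
cost = lambda x: (x - 12.5) ** 2 + 3.0
x, f = simulated_annealing(cost, x0=0.0)
```

    The high initial temperature lets the search escape poor regions early; the geometric cooling schedule then turns it into a local hill-climb.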

    Multiobjective scheduling for semiconductor manufacturing plants

    Scheduling of semiconductor wafer manufacturing systems is a complex problem involving multiple, conflicting objectives (for instance, minimization of facility average utilization and minimization of waiting time and storage) that must be satisfied simultaneously. In this study, we propose an efficient approach based on an artificial neural network technique embedded into a multiobjective genetic algorithm for multi-decision scheduling problems in a semiconductor wafer fabrication environment.
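    The paper's ANN-embedded multiobjective GA is not reproduced here, but its core multiobjective ingredient, Pareto dominance over conflicting objectives, can be sketched as follows. The `surrogate` function is an assumed closed-form stand-in for the trained neural network, and the real-vector encoding and both objectives are illustrative only.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scored):
    """Keep the nondominated (candidate, objectives) pairs."""
    return [(s, f) for s, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

# Hypothetical surrogate for the paper's ANN: closed-form estimates of
# (mean waiting time, storage level) for a dispatch-priority vector.
def surrogate(weights):
    wait = sum((w - 0.5) ** 2 for w in weights)      # assumed model
    storage = sum((w - 0.2) ** 2 for w in weights)   # assumed model
    return (wait, storage)

random.seed(1)
pop = [[random.random() for _ in range(3)] for _ in range(40)]
for _ in range(30):                       # simple evolutionary loop
    scored = [(ind, surrogate(ind)) for ind in pop]
    elite = pareto_front(scored)[:10]     # nondominated parents
    pop = [[min(1.0, max(0.0, w + random.gauss(0, 0.05))) for w in p]
           for p, _ in elite for _ in range(4)]      # mutated offspring
front = pareto_front([(ind, surrogate(ind)) for ind in pop])
```

    Because the two objectives pull the weights toward different optima (0.5 versus 0.2), the final front is a set of trade-off solutions rather than a single winner, which is exactly what a multi-decision scheduler needs to choose among.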

    Intelligent systems in manufacturing: current developments and future prospects

    Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper reviews artificial intelligence techniques used in manufacturing systems. It first defines the components of a simplified intelligent manufacturing system (IMS) and the different artificial intelligence (AI) techniques to be considered, and then shows how these AI techniques are used for the components of an IMS.

    Artificial intelligence for superconducting transformers

    Artificial intelligence (AI) techniques are currently widely used across the electrical engineering sector because of their advantages for smarter manufacturing and for the accurate, efficient operation of electric devices. Power transformers are vital and expensive assets in the power network, where their consistent, fault-free operation greatly impacts the reliability of the whole system. The superconducting transformer has the potential to fully modernize the power network in the near future, with compelling advantages over conventional oil-immersed counterparts: much lighter weight, more compact size, much lower loss, and higher efficiency. In this article, we examine the prospects of using AI to revolutionize superconducting transformer technology in many aspects of design, operation, condition monitoring, maintenance, and asset management. We believe this article offers a roadmap for what could be and needs to be done in the decade 2020-2030 to integrate AI into superconducting transformer technology.

    Autonomous order dispatching in the semiconductor industry using reinforcement learning

    Cyber-Physical Production Systems (CPPS) provide a huge amount of data. At the same time, operational decisions are becoming ever more complex due to smaller batch sizes, a larger product variety, and complex processes in production systems. Production engineers struggle to use the recorded data to optimize production processes effectively because of this rising complexity. This paper presents the successful implementation of an autonomous order dispatching system based on a reinforcement learning (RL) algorithm. The real-world use case in the semiconductor industry is a highly suitable example of a cyber-physical, digitized production system.
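    The dispatching idea can be illustrated with tabular Q-learning on a deliberately tiny, hypothetical model: the state is the queue length at a bottleneck tool, the two actions dispatch an order either to the bottleneck or to a slower backup tool, and the rewards are hand-picked. The paper's actual CPPS environment and algorithm are far richer than this sketch.

```python
import random

# Tabular Q-learning for a toy dispatching problem (all numbers assumed):
# state = queue length at a bottleneck tool (0..4); action 0 = dispatch to
# the bottleneck, action 1 = dispatch to a slower backup tool.
random.seed(0)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}

def step(state, action):
    if action == 0:                    # bottleneck: fast, but the queue grows
        nxt = min(4, state + 1)
        reward = 1.0 - 0.5 * nxt       # congestion penalty
    else:                              # backup: slower, lets the queue drain
        nxt = max(0, state - 1)
        reward = 0.4
    return nxt, reward

state = 0
for _ in range(5000):
    # epsilon-greedy action selection: mostly exploit, occasionally explore
    if random.random() < EPS:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    # Q-learning update: bootstrap from the best action in the next state
    Q[(state, action)] += ALPHA * (
        reward + GAMMA * max(Q[(nxt, a)] for a in (0, 1)) - Q[(state, action)])
    state = nxt

policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(5)}
```

    The learned policy sends orders to the backup tool whenever the bottleneck queue is long, without anyone having programmed that rule explicitly; this is the appeal of RL-based dispatching in settings too complex to model by hand.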

    ANN Modelling to Optimize Manufacturing Process

    The neural network (NN) model is an efficient and accurate tool for simulating manufacturing processes. Various authors have adopted artificial neural networks (ANNs) to optimize multiresponse parameters in manufacturing processes. In most cases, the adoption of an ANN allows the mechanical properties of processed products to be predicted from given technological parameters. The implementation of ANNs is therefore hugely beneficial in industrial applications, saving cost and material resources. In this chapter, following an introduction to the application of ANNs to manufacturing processes, an important study published in international journals is described that investigated the use of ANNs for monitoring, controlling, and optimizing the process. Experimental observations were collected to train the network and establish numerical relationships between process-related factors and the mechanical features of the welded joints. Finally, an evaluation of the time-cost parameters of the process under the control of the ANN model is conducted to identify the costs and benefits of the adopted prediction model.
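    As a sketch of the modelling approach described here, the following trains a minimal one-hidden-layer network in pure Python by stochastic gradient descent, mapping a single process parameter to a measured response. The synthetic training curve, network size, and learning rate are illustrative assumptions, not values from the study.

```python
import math
import random

# Synthetic "observations": response = sin(parameter) on a short range.
# This stands in for real (parameter, property) measurements.
random.seed(42)
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(16)]

H = 6                                            # hidden units (assumed)
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0
lr = 0.1

def forward(x):
    """Return hidden activations and the network's prediction for x."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

for _ in range(2000):                 # epochs of per-sample gradient descent
    for x, y in data:
        h, yhat = forward(x)
        err = yhat - y                # gradient of 0.5 * err**2 w.r.t. yhat
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)   # chain rule through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err
```

    After training, `forward` interpolates the process curve from the observations alone, which is the property that makes ANN surrogates useful for monitoring and optimization.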

    Deep Reinforcement Learning for Control of Microgrids: A Review

    A microgrid is widely accepted as a prominent solution for enhancing resilience and performance in distributed power systems. Microgrids offer flexibility for adding distributed energy resources (DERs) to the ecosystem of electrical networks. Control techniques are used to synchronize DERs owing to their turbulent nature. DERs including alternating-current, direct-current, and hybrid loads with storage systems are used in microgrids quite frequently, which makes controlling the flow of energy in microgrids a complex task for traditional control approaches. Distributed as well as centralized approaches to applying control algorithms are well-known methods for regulating frequency and voltage in microgrids. Recently, techniques based on artificial intelligence, broadly categorized into machine learning and deep learning, have been applied to problems arising in the operation and control of the latest generation of microgrids and smart grids. The objective of this research is to survey the latest microgrid control strategies that use deep reinforcement learning (DRL). Other artificial intelligence techniques have already been reviewed extensively, but the use of DRL has increased in the past couple of years. To bridge this gap for researchers, this survey focuses on DRL techniques for microgrid control: DRL methods for voltage control and frequency regulation with distributed, cooperative, and multi-agent approaches are presented.

    Reinforcement Learning: A Survey

    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
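    The exploration-exploitation trade-off this survey highlights can be shown with an epsilon-greedy agent on a toy multi-armed bandit; the arm payoffs and constants below are arbitrary choices for illustration.

```python
import random

# Epsilon-greedy on a 3-armed Bernoulli bandit: explore with probability
# EPS, otherwise exploit the arm with the best estimated payoff.
random.seed(7)
true_means = [0.2, 0.5, 0.8]       # hidden from the agent
counts = [0, 0, 0]                 # pulls per arm
estimates = [0.0, 0.0, 0.0]        # running mean reward per arm
EPS = 0.1

for t in range(3000):
    if random.random() < EPS:                        # explore
        arm = random.randrange(3)
    else:                                            # exploit current best
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    # incremental update of the running mean
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = max(range(3), key=lambda a: estimates[a])
```

    With EPS = 0 the agent can lock onto a mediocre arm forever; with EPS = 1 it never benefits from what it has learned. A small positive EPS is the simplest compromise the survey's exploration-exploitation discussion covers.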