
    A survey of AI in operations management from 2005 to 2009

    Purpose: The use of AI for operations management, with its ability to evolve solutions, handle uncertainty and perform optimisation, continues to be a major field of research. The growing body of publications over the last two decades means that it can be difficult to keep track of what has been done previously, what has worked, and what really needs to be addressed. Hence this paper presents a survey of the use of AI in operations management, aimed at presenting the key research themes, trends and directions of research.
    Design/methodology/approach: The paper builds upon our previous survey of this field, which covered the ten-year period 1995-2004 (Kobbacy et al. 2007). Like the previous survey, it uses Elsevier's ScienceDirect database as a source, and the framework and methodology are kept as similar as possible to enable continuity and comparison of trends. Thus, the application categories adopted are: (a) design; (b) scheduling; (c) process planning and control; and (d) quality, maintenance and fault diagnosis. Research utilising neural networks, case-based reasoning (CBR), fuzzy logic (FL), knowledge-based systems (KBS), data mining, and hybrid AI in the four application areas is identified.
    Findings: The survey categorises over 1,400 papers, identifying the uses of AI in the four categories of operations management, and concludes with an analysis of the trends, gaps and directions for future research. The findings include: (a) the trends for design and scheduling show a dramatic increase in the use of genetic algorithms (GAs) since 2003-04, reflecting recognition of their success in these areas; (b) there is a significant decline in research on the use of KBS, reflecting their transition into practice; (c) there is an increasing trend in the use of FL in quality, maintenance and fault diagnosis; and (d) there are surprising gaps in the use of CBR and hybrid methods in operations management that offer opportunities for future research.
    Originality/value: This is the largest and most comprehensive study to classify research on the use of AI in operations management to date. The survey and trends identified provide a useful reference point and directions for future research.

    An Intelligent Customization Framework for Tourist Trip Design Problems

    In the era of the experience economy, “customized tours” and “self-guided tours” have become mainstream. This paper proposes an end-to-end framework for solving tourist trip design problems (TTDP) using deep reinforcement learning (DRL) and data analysis. The proposed approach considers heterogeneous tourist preferences, customized requirements, and stochastic traffic times in real applications. Combined with various heuristic methods, the approach is scalable without retraining for every new problem instance, and it can automatically adapt the solution when the problem constraints change slightly. We aim to provide websites and users with software tools that make it easier to solve TTDP, promoting the development of smart tourism and customized tourism.
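
    As a rough illustration of the kind of heuristic such a framework can combine with a learned policy, the sketch below treats a TTDP instance as an orienteering-style problem and builds a tour greedily by score-per-travel-time ratio. This is not the paper's method; the POI data, names, and time budget are all hypothetical, and Euclidean distance stands in for (possibly stochastic) travel time.

```python
import math

# Hypothetical POI data: (x, y) location and a tourist-preference score.
pois = {
    "museum":   ((0.0, 1.0), 8.0),
    "old_town": ((1.0, 1.5), 6.0),
    "harbour":  ((2.0, 0.5), 7.0),
    "park":     ((0.5, 2.5), 4.0),
}

def travel_time(a, b):
    """Euclidean distance as a stand-in for (possibly stochastic) travel time."""
    return math.dist(a, b)

def greedy_tour(start, budget):
    """Greedily add the POI with the best score / travel-time ratio
    until the remaining time budget is exhausted (orienteering-style TTDP)."""
    pos, tour, remaining = start, [], budget
    unvisited = dict(pois)
    while unvisited:
        best, best_ratio = None, 0.0
        for name, (loc, score) in unvisited.items():
            t = travel_time(pos, loc)
            if t <= remaining and score / (t + 1e-9) > best_ratio:
                best, best_ratio = name, score / (t + 1e-9)
        if best is None:
            break  # no remaining POI fits in the budget
        loc, _ = unvisited.pop(best)
        remaining -= travel_time(pos, loc)
        pos = loc
        tour.append(best)
    return tour

print(greedy_tour(start=(0.0, 0.0), budget=5.0))
```

    Because the heuristic re-evaluates feasibility at each step, small changes to the constraints (a tighter budget, a removed POI) adapt the tour without any retraining, which mirrors the adaptability the paper claims for its DRL-plus-heuristics design.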

    Hybrid Ant Colony Optimization For Two Satisfiability Programming In Hopfield Neural Network

    The 2 Satisfiability (2SAT) problem is increasingly viewed as a significant logical rule for synthesizing many real-life applications. Although many researchers have proposed solutions to 2SAT, little attention has been paid to the significance of the 2SAT logical rule itself. It can be hypothesized that the 2SAT property can be used as a logical rule in an intelligent system. To verify this claim, 2 Satisfiability logic programming was embedded in a Hopfield neural network (HNN) as a single unit. Learning in the HNN is inspired by Wan Abdullah's method, since conventional Hebbian learning is inefficient when dealing with a large number of constraints. As the number of 2SAT clauses increases, the efficiency and effectiveness of the learning phase in the HNN deteriorate. A swarm intelligence metaheuristic was therefore introduced to reduce the learning complexity of the network: the newly proposed metaheuristic is an enhanced ant colony optimization (ACO) algorithm.
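
    To make the embedding concrete, here is a minimal sketch of the Wan Abdullah-style derivation the abstract refers to: each 2SAT clause contributes an inconsistency cost (1/2)(1 - s1*S1)(1/2)(1 - s2*S2) over bipolar states S in {-1, +1}, and matching the expanded cost against the Hopfield energy E = -1/2 * sum_ij w_ij S_i S_j - sum_i theta_i S_i yields the synaptic weights and biases. The instance, variable count, and sign conventions below are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical 2SAT instance: each clause is two literals (variable, sign),
# sign = +1 for a positive literal, -1 for a negated one.
# Example: (x0 or x1) and (not x0 or x2)
clauses = [((0, +1), (1, +1)), ((0, -1), (2, +1))]
n = 3

def wan_abdullah_weights(clauses, n):
    """Derive Hopfield weights/biases by expanding each clause's
    inconsistency cost (1/2)(1 - s1*S1)(1/2)(1 - s2*S2) and matching
    it against E = -1/2 sum w_ij S_i S_j - sum theta_i S_i."""
    W = np.zeros((n, n))
    theta = np.zeros(n)
    for (i, si), (j, sj) in clauses:
        W[i, j] += -si * sj / 4.0   # quadratic (pairwise) term
        W[j, i] += -si * sj / 4.0   # keep W symmetric
        theta[i] += si / 4.0        # linear (bias) terms
        theta[j] += sj / 4.0
    return W, theta

W, theta = wan_abdullah_weights(clauses, n)

def energy(S):
    return -0.5 * S @ W @ S - theta @ S

# A satisfying assignment (x0 = x1 = x2 = True) reaches the global energy
# minimum: total clause cost is zero, so E equals -(1/4 per clause) = -0.5.
print(energy(np.array([1.0, 1.0, 1.0])))
```

    In the paper's setting, the learning bottleneck is finding weight sets consistent with large clause counts; that is the search problem the enhanced ACO metaheuristic is brought in to accelerate.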

    Traveling Salesman Problem

    This book is a collection of current research on the application of evolutionary algorithms and other optimisation algorithms to solving the travelling salesman problem (TSP). It brings together researchers with applications in artificial immune systems, genetic algorithms, neural networks and the differential evolution algorithm. Hybrid systems, such as fuzzy maps, chaotic maps and parallelised TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, and it will be a vital tool for researchers and entry-level graduate students in applied mathematics, computing science and engineering.
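
    As a taste of the evolutionary approaches the book surveys, the following is a minimal genetic algorithm for the TSP with order crossover and swap mutation; the random city coordinates, population size, and selection scheme are illustrative placeholders, and the implementations collected in the book are far more sophisticated.

```python
import random, math

# Hypothetical city coordinates.
cities = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice from p1, fill the remaining
    positions with p2's cities in their original relative order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def evolve(pop_size=100, generations=300, mutation_rate=0.2):
    pop = [random.sample(range(len(cities)), len(cities)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        next_pop = pop[:10]                      # elitism: keep the best tours
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:50], 2)  # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:  # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=tour_length)

best = evolve()
print(round(tour_length(best), 3))
```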

    Neural Networks for Fast Optimisation in Model Predictive Control: A Review

    Model Predictive Control (MPC) is an optimal control algorithm with strong stability and robustness guarantees. Despite its popularity in robotics and industrial applications, the main challenge in deploying MPC is its high computational cost, stemming from the need to solve an optimisation problem at each control interval. There are several methods to reduce this cost. This survey focuses on approaches where a neural network is used to approximate an existing controller. Herein, relevant and unique neural approximation methods for linear, nonlinear, and robust MPC are presented and compared. Comparisons are based on the theoretical guarantees that are preserved, the factor by which the original controller is sped up, and the size of problem to which a framework is applicable. Research contributions include: a taxonomy that organises existing knowledge, a summary of literary gaps, discussion of promising research directions, and simple guidelines for choosing an approximation framework. The main conclusions are that (1) new benchmarking tools are needed to help prove the generalisability and scalability of approximation frameworks, (2) future breakthroughs most likely lie in the development of ties between control and learning, and (3) the potential and applicability of recently developed neural architectures and tools remain unexplored in this field.
    Comment: 34 pages, 6 figures, 3 tables. Submitted to ACM Computing Surveys.
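
    A minimal sketch of the core idea the review surveys, approximating an MPC controller with a neural network trained by imitation, might look as follows; the double-integrator model, horizon, sampling ranges, and network size are illustrative assumptions, not taken from any surveyed framework.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

# Double-integrator dynamics x+ = A x + B u, with input bound |u| <= 1.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 10  # prediction horizon

def mpc_control(x0):
    """Solve a horizon-N MPC problem numerically; return the first input."""
    def cost(u):
        x, J = x0.copy(), 0.0
        for uk in u:
            J += x @ x + 0.1 * uk**2          # stage cost
            x = A @ x + B.flatten() * uk      # roll dynamics forward
        return J + 10.0 * x @ x               # terminal cost
    res = minimize(cost, np.zeros(N), bounds=[(-1, 1)] * N)
    return res.x[0]

# Imitation dataset: sample states, label each with the online MPC action.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.array([mpc_control(x) for x in X])

# A small MLP approximates the MPC control law; at deployment time,
# evaluation is a single forward pass instead of an online optimisation.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
x_test = np.array([1.0, -0.5])
print(mpc_control(x_test), net.predict([x_test])[0])
```

    This naive imitation setup preserves none of MPC's constraint-satisfaction or stability guarantees by itself; recovering such guarantees for the approximate controller is precisely the concern of the frameworks the survey compares.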

    Overflow control in sewer networks using different modeling techniques and the Internet of Things

    Increased urbanization and extreme rainfall events are causing more frequent instances of sewer overflow, leading to the pollution of water resources and negative environmental, health, and fiscal impacts. At the same time, the treatment capacity of wastewater treatment plants is seriously affected. The main aim of this Ph.D. thesis is to use the Internet of Things and various modeling techniques to investigate the use of real-time control on existing sewer systems to mitigate overflow. The role of the Internet of Things is to provide continuous monitoring and real-time control of sewer systems; the data it collects are also useful for model development and calibration. Models serve various purposes in real-time control, and they can be distinguished as those suitable for simulation and those suitable for prediction. Models suitable for simulation, which describe the important phenomena of a system in a deterministic way, are useful for developing and analyzing different control strategies. Models suitable for prediction are usually employed to predict future system states; they use measurement information about the system and must have a high computational speed.

    To demonstrate how real-time control can be used to manage sewer systems, a case study was conducted for this thesis in Drammen, Norway. In this study, a hydraulic model was used as the simulation model to test the feasibility of different control strategies. Considering the recent advances in artificial intelligence and the large amount of data collected through the Internet of Things, the study also explored the possibility of using artificial intelligence as the prediction model. A summary of the results of this work is presented through five papers.

    Paper I demonstrates that one mainstream artificial intelligence technique, long short-term memory, can precisely predict time series data from the Internet of Things. Indeed, the Internet of Things and long short-term memory can be powerful tools for sewer system managers and engineers, who can take advantage of real-time data and predictions to improve decision-making. In Paper II, a hydraulic model and artificial intelligence are used to investigate an optimal in-line storage control strategy that uses the temporary storage volume in pipes to reduce overflow. Simulation results indicate that during heavy rainfall events, the response behavior of the sewer system differs with respect to location. Overflows at a wastewater treatment plant under different control scenarios were simulated and compared; the results from the hydraulic model show that overflows were reduced dramatically through the intentional control of pipes with in-line storage capacity. To determine the available in-line storage capacity, recurrent neural networks were employed to predict the upcoming flow into the pipes to be controlled.

    Paper III and Paper IV describe a novel inter-catchment wastewater transfer solution, which aims to redistribute spatially mismatched sewer flows by transferring wastewater from a wastewater treatment plant to a neighboring catchment. In Paper III, the hydraulic behavior of the sewer system under different control scenarios is assessed using the hydraulic model. Based on the simulations, inter-catchment wastewater transfer could efficiently reduce total overflow from the sewer system and the wastewater treatment plant. Artificial intelligence was used to predict inflow to the wastewater treatment plant to improve the functioning of inter-catchment wastewater transfer. The results from Paper IV indicate that inter-catchment wastewater transfer might place an extra burden on a pump station; to enhance the pump station's operation, long short-term memory was employed to provide multi-step-ahead water level predictions.

    Paper V proposes a DeepCSO model based on large, high-resolution sensor data and multi-task learning techniques. Experiments demonstrated that the multi-task approach is generally better than single-task approaches. Furthermore, the gated recurrent unit and long short-term memory-based multi-task learning models are especially suitable for capturing the temporal and spatial evolution of combined sewer overflow events and are superior to other methods. The DeepCSO model could help guide the real-time operation of sewer systems at a citywide level.
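
    For a flavour of the multi-step-ahead prediction used in Papers IV and V, here is a minimal PyTorch sketch of an LSTM that maps a lookback window of water-level readings to an H-step forecast; the synthetic sine-wave series and all hyperparameters are stand-ins for the thesis's IoT sensor data, not its actual models.

```python
import torch
import torch.nn as nn

# Hypothetical setup: predict the next H water-level readings at a pump
# station from the last T readings (multi-step-ahead prediction).
T, H = 48, 6  # lookback window and prediction horizon

class LevelForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, H)  # map last hidden state to H steps

    def forward(self, x):                 # x: (batch, T, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # (batch, H)

# Synthetic stand-in for IoT sensor data: a noisy sine wave.
t = torch.arange(0, 2000, dtype=torch.float32)
series = torch.sin(t / 24) + 0.1 * torch.randn_like(t)
X = torch.stack([series[i:i + T] for i in range(len(series) - T - H)]).unsqueeze(-1)
Y = torch.stack([series[i + T:i + T + H] for i in range(len(series) - T - H)])

model = LevelForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):                   # full-batch training, for brevity
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()
print(loss.item())
```

    The multi-task DeepCSO model of Paper V extends this single-sensor idea by predicting several sites jointly, so shared temporal structure across the network is learned once rather than per sensor.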

    Vision-based Navigation Using an Associative Memory


    Random Neural Networks and Optimisation

    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), which was inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations in the RNN as a nonnegative least squares (NNLS) problem; the NNLS problem is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding emergency management optimisation, we examine combinatorial assignment problems that require fast, distributed and close-to-optimal solutions under information uncertainty. We consider three problems with these characteristics: the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two approaches: the first maps parameters of the optimisation problem to RNN parameters, and the second solves a sequence of minimum cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
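
    As background for the models above, the steady-state signal-flow equations of the Gelenbe-style Random Neural Network can be solved by simple fixed-point iteration, as in the sketch below; the rates and weights are randomly generated placeholders, and the RNNSI and NNLS learning algorithms developed in the thesis are not reproduced here.

```python
import numpy as np

# Minimal sketch of the RNN steady-state (signal-flow) equations:
#   q_i = lambda+_i / (r_i + lambda-_i),
#   lambda+_i = Lambda_i + sum_j q_j w+_{ji},
#   lambda-_i = lambda_i + sum_j q_j w-_{ji}.
n = 3
rng = np.random.default_rng(1)
r = np.full(n, 1.0)                      # neuron firing rates
Wp = rng.uniform(0, 0.3, (n, n))         # excitatory weights w+_{ji}
Wm = rng.uniform(0, 0.3, (n, n))         # inhibitory weights w-_{ji}
np.fill_diagonal(Wp, 0)
np.fill_diagonal(Wm, 0)
Lam = rng.uniform(0, 0.5, n)             # external excitatory arrival rates
lam = rng.uniform(0, 0.2, n)             # external inhibitory arrival rates

q = np.zeros(n)
for _ in range(200):
    lam_plus = Lam + q @ Wp              # total excitatory input per neuron
    lam_minus = lam + q @ Wm             # total inhibitory input per neuron
    q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)  # cap for stability
    if np.max(np.abs(q_new - q)) < 1e-10:
        break
    q = q_new
print(q)  # steady-state firing probabilities of the n neurons
```

    Casting these same equations as a nonnegative least squares problem, as the thesis's second learning algorithm does, turns weight learning into a constrained fitting problem over the w+ and w- matrices.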