
    A survey of AI in operations management from 2005 to 2009

    Purpose: the use of AI for operations management, with its ability to evolve solutions, handle uncertainty and perform optimisation, continues to be a major field of research. The growing body of publications over the last two decades means that it can be difficult to keep track of what has been done previously, what has worked, and what really needs to be addressed. Hence this paper presents a survey of the use of AI in operations management aimed at presenting the key research themes, trends and directions of research.
    Design/methodology/approach: the paper builds upon our previous survey of this field, which was carried out for the ten-year period 1995-2004 (Kobbacy et al., 2007). Like the previous survey, it uses Elsevier's ScienceDirect database as a source. The framework and methodology adopted for the survey are kept as similar as possible to enable continuity and comparison of trends. Thus the application categories adopted are: (a) design; (b) scheduling; (c) process planning and control; and (d) quality, maintenance and fault diagnosis. Research on utilising neural networks, case-based reasoning (CBR), fuzzy logic (FL), knowledge-based systems (KBS), data mining, and hybrid AI in the four application areas is identified.
    Findings: the survey categorises over 1,400 papers, identifying the uses of AI in the four categories of operations management, and concludes with an analysis of the trends, gaps and directions for future research. The findings include: (a) the trends for design and scheduling show a dramatic increase in the use of genetic algorithms (GAs) since 2003-04, reflecting recognition of their success in these areas; (b) a significant decline in research on the use of KBS, reflecting their transition into practice; (c) an increasing trend in the use of FL in quality, maintenance and fault diagnosis; and (d) surprising gaps in the use of CBR and hybrid methods in operations management that offer opportunities for future research.
    Originality/value: this is the largest and most comprehensive study to classify research on the use of AI in operations management to date. The survey and trends identified provide a useful reference point and directions for future research.

    Big Data Analytics for Complex Systems

    The evolution of technology in all fields has led to the generation of vast amounts of data by modern systems. Using data to extract information, make predictions, and make decisions is the current trend in artificial intelligence. The advancement of big data analytics tools has made accessing and storing data easier and faster than ever, and machine learning algorithms help to identify patterns in and extract information from data. The current tools and machines in health care, computer technologies, and manufacturing can generate massive amounts of raw data about their products or samples. The author of this work proposes a modern integrative system that can utilize big data analytics, machine learning, supercomputer resources, and measurements from industrial and health machines to build a smart system that can mimic the human intelligence skills of observation, detection, prediction, and decision-making. The applications of the proposed smart systems are included as case studies to highlight the contributions of each system. The first contribution is the ability to utilize big data and deep learning technologies on production lines to diagnose incidents and take proper action. In the current digital transformational industrial era, Industry 4.0 has been receiving research attention because it can be used to automate production-line decisions. Reconfigurable manufacturing systems (RMS) have been widely used to reduce the setup cost of restructuring production lines. However, the current RMS modules are not linked to the cloud for online decision-making; to make the proper decisions, these modules must connect to an online server (supercomputer) with big data analytics and machine learning capabilities. Here, "online" means that data is centralized in the cloud (supercomputer) and accessible in real time. In this study, deep neural networks are utilized to detect the decisive features of a product and build a prediction model with which the iFactory makes the necessary decision for defective products. The Spark ecosystem is used to manage the access, processing, and storage of the streaming big data. This contribution is implemented as a closed cycle; to the best of our knowledge, no prior work in the literature has introduced big data analysis using deep learning for real-time applications in manufacturing systems. The implementation achieves a high accuracy of 97% in classifying normal versus defective items. The second contribution, in bioinformatics, is the ability to build supervised machine learning approaches based on the gene expression of patients to predict the proper treatment for breast cancer. In the trial, to personalize treatment, the machine learns the genes that are active in the patient cohort with a five-year survival period. The initial condition here is that each group must undergo only one specific treatment. After learning about each group (or class), the machine can personalize the treatment of a new patient by diagnosing the patient's gene expression. The proposed model will help in the diagnosis and treatment of patients. Future work in this area involves building a protein-protein interaction network with the selected genes for each treatment to first analyze the motifs of the genes and target them with the proper drug molecules. In the learning phase, several feature-selection techniques and standard supervised classifiers are used to build the prediction model.
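    The pipeline described for the second contribution (gene selection followed by supervised classification) can be illustrated with a minimal sketch. The synthetic cohort, the choice of SelectKBest with an F-test, and the SVM classifier below are illustrative assumptions, not the study's actual data or settings; only the count of 47 selected genes is taken from the abstract.

```python
# Minimal sketch: select discriminative genes, then classify treatment class
# from a patient's expression profile. All data here are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))      # 200 patients x 1000 genes (toy)
y = rng.integers(0, 3, size=200)      # treatment class per patient (toy)
X[y == 1, :20] += 1.5                 # make a few genes informative

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
selector = SelectKBest(f_classif, k=47).fit(Xtr, ytr)   # gene selection
clf = SVC().fit(selector.transform(Xtr), ytr)           # supervised classifier
print("accuracy:", clf.score(selector.transform(Xte), yte))
```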
Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure close to 100%. The third contribution is the ability to build a semi-supervised learning approach for breast cancer survival treatment that advances the second contribution. By understanding the relations between the classes, we can design the machine learning phase based on the similarities between classes. In the proposed research, the Euclidean distance matrix among the survival treatment classes is used to build the hierarchical learning model. The distance information, learned through an unsupervised approach, helps the prediction model select classes that are far from each other, maximizing the distance between classes and yielding wider class groups. The performance measurements of this approach show a slight improvement over the second model. In addition, this model reduced the number of discriminative genes from 47 to 37. The model in the second contribution studies each class individually, while this model focuses on the relationships between the classes and uses this information in the learning phase. Hierarchical clustering is performed to draw the borders between groups of classes before building the classification models. Several distance measurements are tested to identify the best linkages between classes. Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure ranging from 90% to 100%. All the case study models showed high performance in the prediction phase. These modern models can be replicated for different problems within different domains. The comprehensive models built on the newer technologies are reconfigurable and modular; a new learning phase can be plugged in at either end of the existing learning phase. Therefore, the output of the system can be an input for another learning system, and new features can be added to the input to be considered in the learning phase
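    The third contribution's core step, clustering treatment classes hierarchically by the Euclidean distances between them before designing the learning phase, can be sketched as follows. The synthetic class cohorts, centroid representation, and average linkage are illustrative assumptions, not the study's actual choices.

```python
# Minimal sketch: Euclidean distances between class centroids drive a
# hierarchical grouping of treatment classes. Data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
classes = {c: rng.normal(loc=c, size=(30, 50)) for c in range(4)}  # toy cohorts
centroids = np.stack([samples.mean(axis=0) for samples in classes.values()])

# Pairwise Euclidean distances between class centroids, then linkage.
Z = linkage(pdist(centroids, metric="euclidean"), method="average")
print(Z)  # each row merges the two closest groups; split top-down for learning
```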

    Semantic HELM: A Human-Readable Memory for Reinforcement Learning

    Reinforcement learning agents deployed in the real world often have to cope with partially observable environments. Therefore, most agents employ memory mechanisms to approximate the state of the environment. Recently, there have been impressive success stories in mastering partially observable environments, mostly in the realm of computer games like Dota 2, StarCraft II, or Minecraft. However, existing methods lack interpretability in the sense that it is not comprehensible for humans what the agent stores in its memory. In this regard, we propose a novel memory mechanism that represents past events in human language. Our method uses CLIP to associate visual inputs with language tokens. Then we feed these tokens to a pretrained language model that serves the agent as memory and provides it with a coherent and human-readable representation of the past. We train our memory mechanism on a set of partially observable environments and find that it excels on tasks that require a memory component, while mostly attaining performance on par with strong baselines on tasks that do not. On a challenging continuous recognition task, where memorizing the past is crucial, our memory mechanism converges two orders of magnitude faster than prior methods. Since our memory mechanism is human-readable, we can peek at an agent's memory and check whether crucial pieces of information have been stored. This significantly enhances troubleshooting and paves the way toward more interpretable agents. (Comment: to appear at NeurIPS 2023, 10 pages plus references and appendix. Code: https://github.com/ml-jku/hel)
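    The CLIP-to-tokens step can be sketched with public models. This is not the authors' implementation: the model checkpoint, the toy vocabulary, and the top-k retrieval below are illustrative assumptions; only the overall idea (score an observation against language tokens, hand the best matches to a language model as memory) comes from the abstract.

```python
# Minimal sketch: map a visual observation to its nearest language tokens
# with CLIP; the retrieved tokens become human-readable memory entries.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Assumed toy vocabulary; a real agent would use a much larger token set.
vocab = ["a key", "a door", "a chest", "an empty corridor", "a monster"]

def observation_to_tokens(image: Image.Image, k: int = 2) -> list[str]:
    """Return the k vocabulary entries that best describe the observation."""
    inputs = processor(text=vocab, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image[0]  # similarity to each text
    top = logits.topk(k).indices.tolist()
    return [vocab[i] for i in top]

# Usage (hypothetical frame): tokens = observation_to_tokens(Image.open("frame.png"))
# Concatenating the tokens of successive frames yields a running text history
# that a frozen pretrained language model can consume as the agent's memory.
```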

    Scheduling in assembly type job-shops

    Assembly type job-shop scheduling is a generalization of the job-shop scheduling problem to include assembly operations. In the assembly type job-shop scheduling problem, there are n jobs to be processed on m workstations, and each job has a due date. Each job visits one or more workstations in a predetermined route. The primary difference between this new problem and the classical job-shop problem is that two or more jobs can merge to form a new job at a specified workstation; that is, job convergence is permitted. This feature cannot be modeled by existing job-shop techniques. In this dissertation, we develop scheduling procedures for the assembly type job-shop with the objective of minimizing total weighted tardiness. Three types of workstations are modeled: single machine, parallel machine, and batch machine. We label this new scheduling procedure SB. The SB procedure is heuristic in nature and is derived from the shifting bottleneck concept. SB decomposes the assembly type job-shop scheduling problem into several workstation scheduling sub-problems. Various techniques are used in developing the scheduling heuristics for these sub-problems, including the greedy method, beam search, critical path analysis, local search, and dynamic programming. The performance of SB is validated on a set of test problems and compared with priority rules that are normally used in practice. The results show that SB outperforms the priority rules by an average of 19%-36% on the test problems. SB is extended to solve scheduling problems with other objectives, including minimizing the maximum completion time, minimizing weighted flow time, and minimizing maximum weighted lateness. Comparisons on the test problems indicate that SB outperforms the priority rules for these objectives as well. The SB procedure and its accompanying logic are programmed into an object-oriented scheduling system labeled LEKIN. The LEKIN program includes a standard library of scheduling rules and hence can be used as a platform for the development of new scheduling heuristics. In industrial applications, LEKIN allows schedulers to obtain effective machine schedules rapidly. The results from this research allow us to increase shop utilization, improve customer satisfaction, and lower work-in-process inventory without a major capital investment
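    For readers unfamiliar with the objective, the following sketch shows the total-weighted-tardiness measure and one common baseline priority rule of the kind SB is benchmarked against. The job data and the weighted-EDD rule are illustrative assumptions; this is not the SB or shifting-bottleneck implementation from the dissertation.

```python
# Minimal sketch: total weighted tardiness of a sequence on one machine,
# plus a weighted earliest-due-date dispatch rule as a baseline.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    processing_time: float
    due_date: float
    weight: float

def total_weighted_tardiness(sequence: list[Job]) -> float:
    """Sum of w_j * max(0, C_j - d_j) for jobs processed back to back."""
    t, twt = 0.0, 0.0
    for job in sequence:
        t += job.processing_time           # completion time C_j
        twt += job.weight * max(0.0, t - job.due_date)
    return twt

jobs = [Job("A", 4, 5, 2), Job("B", 2, 4, 1), Job("C", 3, 10, 3)]

# Weighted EDD: order by due date scaled down by weight (one common rule).
wedd = sorted(jobs, key=lambda j: j.due_date / j.weight)
print([j.name for j in wedd], total_weighted_tardiness(wedd))
```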

    Hierarchical workflow management system for life science applications

    In modern laboratories, an increasing number of automated stations and instruments are applied as standalone automated systems, such as biological high-throughput screening systems and chemical parallel reactors. At the same time, mobile robot transportation solutions are becoming popular with the development of robotic technologies. In this dissertation, a new superordinate control system, called the hierarchical workflow management system (HWMS), is presented to manage and handle both automated laboratory systems and logistics systems.
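    The superordinate-control idea can be sketched in a few lines: a top-level manager decomposes a workflow into steps and routes each step either to an automated lab station or to a mobile-robot transport layer. The class and method names below are illustrative assumptions, not the dissertation's actual interfaces.

```python
# Minimal sketch of an HWMS-style superordinate controller (toy interfaces).
from dataclasses import dataclass

@dataclass
class Step:
    kind: str      # "process" (lab station) or "transport" (mobile robot)
    target: str    # station task or destination

class LabStation:
    def run(self, step: Step) -> None:
        print(f"station executes {step.target}")

class TransportRobot:
    def move(self, step: Step) -> None:
        print(f"robot delivers samples to {step.target}")

class HWMS:
    """Superordinate controller: routes each step to the right subsystem."""
    def __init__(self) -> None:
        self.station, self.robot = LabStation(), TransportRobot()

    def execute(self, workflow: list[Step]) -> None:
        for step in workflow:
            (self.station.run if step.kind == "process" else self.robot.move)(step)

HWMS().execute([Step("process", "screening assay"), Step("transport", "reactor 2")])
```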

    Heuristic Procedures to Solve Sequencing and Scheduling Problems in Automobile Industry

    With the growing trend toward greater product variety, mixed-model assembly is nowadays commonly employed in many industries, enabling just-in-time production for a production system with high variety. Efficient production scheduling and sequencing is important to achieve overall material supply, production, and distribution efficiency around the mixed-model assembly line. This research addresses production scheduling and sequencing on a mixed-model assembly line for products with multiple product options, considering multiple objectives with regard to material supply, manufacturing, and product distribution. It also addresses plant assignment for a product with multiple product options as a step prior to scheduling and sequencing. The dissertation is organized into an introduction and literature review, three parts based on three papers, and a summary and conclusion. Part 1: in an automobile assembly plant, many product options and multiple objectives often need to be considered in sequencing an assembly line. A general heuristic procedure is developed for sequencing automobile assembly lines considering multiple options; it uses construction, swapping, and re-sequencing steps together with a limited search (a generic construction-and-swap sketch follows this abstract). Part 2: in a supply chain, production scheduling and finished-goods distribution are increasingly considered in an integrated manner to achieve the best overall efficiency. This research presents a heuristic procedure for the integrated consideration of production scheduling and product distribution with production smoothing on an automobile just-in-time assembly line; a meta-heuristic procedure is also developed for improving the heuristic solution. Part 3: for a product that can be manufactured in multiple facilities, assigning orders to facilities is a common problem faced by industry, subject to production, material, and other supply-chain-related constraints. This part addresses plant assignment for products with multiple product options, with regard to multiple constraints at individual plants, in order to minimize transportation costs and the costs of assignment infeasibility. A series of binary- and mixed-integer programming models is presented, along with an optimization-based decision support tool and a case study.
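    The construction-then-swap idea from Part 1 can be illustrated on the classic option-spacing form of car sequencing ("at most p cars with an option in any q consecutive positions"). The demand figures and spacing rules below are toy assumptions, and the naive construction plus first-improvement swaps is a generic sketch, not the dissertation's heuristic.

```python
# Minimal sketch: build a sequence, then improve it by pairwise swaps that
# reduce option-spacing violations. All parameters are toy assumptions.
import itertools

demand = {"base": 4, "sunroof": 3, "hybrid": 2}   # copies per model
rules = {"sunroof": (1, 2), "hybrid": (1, 3)}     # option -> (p, q)

def violations(seq):
    """Count windows of q positions holding more than p cars of an option."""
    v = 0
    for opt, (p, q) in rules.items():
        flags = [m == opt for m in seq]
        v += sum(max(0, sum(flags[i:i + q]) - p) for i in range(len(seq) - q + 1))
    return v

# Construction: naive sequence with all copies of each model adjacent.
seq = [m for m, n in demand.items() for _ in range(n)]

# Swapping: first-improvement pairwise swaps until no swap reduces violations.
improved = True
while improved:
    improved = False
    for i, j in itertools.combinations(range(len(seq)), 2):
        if seq[i] == seq[j]:
            continue
        before = violations(seq)
        seq[i], seq[j] = seq[j], seq[i]
        if violations(seq) < before:
            improved = True
            break                         # keep the swap, rescan from scratch
        seq[i], seq[j] = seq[j], seq[i]   # undo a non-improving swap
print(seq, "violations:", violations(seq))
```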

    Joint University Program for Air Transportation Research, 1991-1992

    This report summarizes the research conducted during the academic year 1991-1992 under the FAA/NASA sponsored Joint University Program for Air Transportation Research. The year end review was held at Ohio University, Athens, Ohio, June 18-19, 1992. The Joint University Program is a coordinated set of three grants sponsored by the Federal Aviation Administration and NASA Langley Research Center, one each with the Massachusetts Institute of Technology (NGL-22-009-640), Ohio University (NGR-36-009-017), and Princeton University (NGL-31-001-252). Completed works, status reports, and annotated bibliographies are presented for research topics, which include navigation, guidance and control theory and practice, intelligent flight control, flight dynamics, human factors, and air traffic control processes. An overview of the year's activities for each university is also presented

    Investigation of a Neural Network Methodology to Predict Transient Performance in FMS

    Most rapid analytical evaluative models for flexible manufacturing systems (FMSs) are based on steady-state performance. There is a practical need to develop robust, easy-to-construct, and transportable transient-state evaluative models for FMSs. This study proposes an ANN-based metamodeling framework that can capture various post-disruption system behaviors of an FMS. The proposed metamodeling scheme consists of a hierarchical taxonomy of multiple ANNs, where each set of ANNs collectively represents a different part of the underlying system modeling domain. The taxonomical arrangement of multiple ANNs overcomes shortcomings often found in single-ANN metamodeling schemes, which are generally related to their limited knowledge acquisition capability. The study uses an Extend-based discrete simulation model built after an experimental FMS with limited disruption-triggering and disruption-handling capabilities. The simulation model is used to study various post-disruption behaviors of a given FMS and to assess the feasibility of the proposed modeling scheme as a viable means to provide "look-ahead" capability for a low-level controller. Findings and conclusions: the proposed ANN-based metamodeling approach, using multiple ANNs in a taxonomically organized modeling structure, is an efficient way to capture multiple target performance index observation processes with a similar overall post-disruption behavior pattern. Despite its accuracy issues, the methodology proved especially effective in dealing with noisy time series, such as the TIS under observation, in a data-rich environment. The study shows that the proposed methodology can be a viable means to model transient system behaviors. As long as the individual observation processes of the selected performance index keep their variances small among themselves, the accuracy of the overall model remains acceptable. This non-parametric performance modeling technique, using hierarchically organized multiple ANNs, is worth further investigation.
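    The taxonomic multi-ANN arrangement can be sketched as a two-level model: a classifier first assigns a post-disruption scenario to a branch of the taxonomy, and a small regressor trained only on that branch predicts the transient performance index. The synthetic data, the two-branch taxonomy, and the scikit-learn networks below are illustrative assumptions, not the study's models.

```python
# Minimal sketch: route each input to a branch-specific ANN (toy data).
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 3))                  # disruption features (toy)
branch = (X[:, 0] > 0.5).astype(int)            # taxonomy label (toy rule)
y = np.where(branch == 0, X[:, 1] * 10, 50 - X[:, 2] * 10)  # per-branch behavior

router = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, branch)
experts = [
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X[branch == b], y[branch == b])
    for b in (0, 1)
]

def predict_transient(x: np.ndarray) -> float:
    b = int(router.predict(x.reshape(1, -1))[0])    # pick the taxonomy branch
    return float(experts[b].predict(x.reshape(1, -1))[0])

print(predict_transient(np.array([0.2, 0.7, 0.1])))
```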