124 research outputs found

    Intelligent shop scheduling for semiconductor manufacturing

    Get PDF
    Semiconductor market sales have expanded massively to more than 200 billion dollars annually, accompanied by increased pressure on manufacturers to provide higher-quality products at lower cost to remain competitive. Scheduling of semiconductor manufacturing is one of the keys to increasing productivity; however, the complexity of manufacturing high-capacity semiconductor devices and cost considerations mean that it is impossible to experiment within the facility. There is an immense need for effective decision support models that characterize and analyze the manufacturing process and allow the effect of changes in the production environment to be predicted, in order to increase utilization and enhance system performance. Although many simulation models have been developed within semiconductor manufacturing, very little research on the simulation of the photolithography process has been reported, even though semiconductor manufacturers have recognized that the scheduling of photolithography is one of the most important and challenging tasks due to the complex nature of the process. Traditional scheduling techniques and existing approaches show some benefits for solving small and medium-sized, straightforward scheduling problems. However, they have had limited success in solving complex scheduling problems with stochastic elements in an economic timeframe. This thesis presents a new methodology combining advanced solution approaches such as simulation, artificial intelligence, system modeling and Taguchi methods to schedule a photolithography toolset. A new structured approach was developed to effectively support building the simulation models. A single-tool and a complete toolset model were developed using this approach and shown to have less than 4% deviation from actual production values. The use of an intelligent scheduling agent for the toolset model shows an average 15% improvement in simulated throughput time and is currently in use for scheduling the photolithography toolset in a manufacturing plant
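    A minimal sketch (not the thesis's simulation model) of how a discrete-event toolset simulation can be used to compare dispatching decisions: a single-tool queue is simulated under two illustrative rules and the mean throughput (flow) time is reported. All arrival and processing parameters are made-up assumptions.

    import random

    def simulate(dispatch_rule, n_jobs=500, mean_arrival=1.0, mean_process=0.9):
        """Single-tool queueing simulation; returns mean throughput (flow) time."""
        random.seed(7)                      # same job stream for every rule
        t, arrivals = 0.0, []
        for _ in range(n_jobs):
            t += random.expovariate(1.0 / mean_arrival)
            arrivals.append((t, random.expovariate(1.0 / mean_process)))
        queue, clock, flow_times, i = [], 0.0, [], 0
        while len(flow_times) < n_jobs:
            while i < n_jobs and arrivals[i][0] <= clock:   # admit jobs that have arrived
                queue.append(arrivals[i]); i += 1
            if not queue:                                   # tool idles until next arrival
                clock = arrivals[i][0]
                continue
            job = dispatch_rule(queue)                      # pick the next lot to process
            queue.remove(job)
            arrival_time, proc_time = job
            clock += proc_time
            flow_times.append(clock - arrival_time)
        return sum(flow_times) / n_jobs

    fifo = lambda q: min(q, key=lambda j: j[0])   # earliest arrival first
    spt = lambda q: min(q, key=lambda j: j[1])    # shortest processing time first
    print("FIFO mean flow time:", round(simulate(fifo), 2))
    print("SPT mean flow time: ", round(simulate(spt), 2))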

    Adaptive Order Dispatching based on Reinforcement Learning: Application in a Complex Job Shop in the Semiconductor Industry

    Get PDF
    Driven by market requirements, today's production systems tend toward ever smaller lot sizes, higher product variety, and greater complexity of material flow systems. These developments call existing production control methods into question. In the course of digitalization, data-based machine learning algorithms offer an alternative approach to optimizing production processes. Current research results show a high performance of reinforcement learning (RL) methods across a broad range of applications. In the field of production control, however, only a few authors have addressed them so far. A comprehensive investigation of different RL approaches and an application in practice have not yet been carried out. Among the tasks of production planning and control, order dispatching ensures high performance and flexibility of production processes in order to achieve high capacity utilization and short throughput times. Motivated by complex job-shop systems such as those found in the semiconductor industry, this work closes the research gap and addresses the application of RL for adaptive order dispatching. The inclusion of real system data allows system behavior to be captured more accurately than with static heuristics or mathematical optimization methods. In addition, manual effort is reduced by drawing on the inference capabilities of RL. The presented methodology focuses on the modeling and implementation of RL agents as the dispatching decision-making unit. Known challenges of RL modeling with regard to state, action, and reward function are investigated. The modeling alternatives are analyzed on the basis of two real production scenarios of a semiconductor manufacturer. The results show that RL agents can learn adaptive control strategies and outperform existing rule-based benchmark heuristics. Extending the state representation clearly improves performance when it is related to the reward objectives. The reward can be designed so that it enables the optimization of multiple objectives. Finally, specific RL agent configurations not only achieve high performance in one scenario but also exhibit robustness under changing system properties. The research thus makes a substantial contribution toward self-optimizing and autonomous production systems. Production engineers must assess the potential of data-based learning methods in order to remain competitive in terms of flexibility while keeping the effort for the design, operation, and monitoring of production control systems in a reasonable balance
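    A minimal sketch of the kind of tabular reinforcement-learning dispatcher discussed above, assuming a gym-like job-shop simulation interface (env.reset/env.step), a coarse discrete state, and a reward such as negative mean waiting time; these interface details are illustrative assumptions, not the thesis design.

    import random
    from collections import defaultdict

    class QDispatcher:
        """Tabular Q-learning agent that selects a dispatching rule at each decision point."""
        def __init__(self, actions, alpha=0.1, gamma=0.95, eps=0.1):
            self.q = defaultdict(float)                    # Q[(state, action)]
            self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

        def act(self, state):
            if random.random() < self.eps:                 # explore
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

        def update(self, s, a, r, s_next):
            best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
            self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

    # Hypothetical training loop against a job-shop simulation (env is assumed):
    # agent = QDispatcher(actions=["FIFO", "SPT", "EDD"])
    # s = env.reset()
    # for _ in range(100_000):
    #     a = agent.act(s)
    #     s_next, reward, done = env.step(a)   # reward e.g. negative mean waiting time
    #     agent.update(s, a, reward, s_next)
    #     s = env.reset() if done else s_next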

    A new perspective on Workload Control by measuring operating performances through an economic valorization

    Get PDF
    Workload Control (WLC) is a production planning and control system conceived to reduce queuing times in job-shop systems and to offer a solution to the lead time syndrome, a critical issue that often bewilders make-to-order manufacturers. Nowadays, the advantages of WLC are unanimously acknowledged, but real success stories are still scarce. This paper starts from the lack of a consistent way to assess the performance of WLC, an important barrier to its acceptance in industry. As researchers often put more focus on the performance measures that better confirm their hypotheses, many measures, related to different WLC features, have emerged over the years. However, this excess of measures may even mislead practitioners in the evaluation of alternative production planning and control systems. To close this gap, we propose quantifying the main benefit of WLC in economic terms, as this is the easiest, and probably the only, way to compare different and even conflicting performance measures. Costs and incomes are identified and used to develop an overall economic measure that can be used to evaluate, or even to fine-tune, the operating features of WLC. The quality of our approach is finally demonstrated via simulation, considering the six-machine job-shop scenario typically adopted as a benchmark in the technical literature
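    A minimal sketch of the idea of collapsing conflicting shop-floor measures into one economic figure, assuming illustrative revenue and cost rates rather than the paper's calibrated values:

    def economic_performance(jobs, revenue_per_job=100.0,
                             wip_cost_per_hour=0.5, tardiness_cost_per_hour=2.0):
        """jobs: simulation output, one dict per job with 'flow_time' and 'tardiness' in hours."""
        revenue = revenue_per_job * len(jobs)
        wip_cost = wip_cost_per_hour * sum(j["flow_time"] for j in jobs)
        late_cost = tardiness_cost_per_hour * sum(j["tardiness"] for j in jobs)
        return revenue - wip_cost - late_cost

    # Two hypothetical WLC settings: a tight workload norm (low WIP, more tardiness)
    # versus a loose one (high WIP, little tardiness), compared on a single scale.
    tight_norm = [{"flow_time": 20.0, "tardiness": 4.0} for _ in range(50)]
    loose_norm = [{"flow_time": 35.0, "tardiness": 0.5} for _ in range(50)]
    print("tight norm:", economic_performance(tight_norm))
    print("loose norm:", economic_performance(loose_norm))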

    Production Scheduling

    Get PDF
    Generally speaking, scheduling is the procedure of efficiently mapping a set of tasks or jobs (studied objects) to a set of target resources. More specifically, as part of a larger planning and scheduling process, production scheduling is essential for the proper functioning of a manufacturing enterprise. This book presents ten chapters divided into five sections. Section 1 discusses rescheduling strategies, policies, and methods for production scheduling. Section 2 presents two chapters about flow shop scheduling. Section 3 describes heuristic and metaheuristic methods for treating the scheduling problem in an efficient manner. In addition, two test cases are presented in Section 4: the first uses simulation, while the second shows a real implementation of a production scheduling system. Finally, Section 5 presents some modeling strategies for building production scheduling systems. This book will be of interest to those working in the decision-making branches of production, in various operational research areas, as well as in computational methods design. People from diverse backgrounds, ranging from academia and research to industry, can take advantage of this volume

    Modeling, design and scheduling of computer integrated manufacturing and demanufacturing systems

    Get PDF
    This doctoral dissertation aims to provide a discrete-event system-based methodology for the design, implementation, and operation of flexible and agile manufacturing and demanufacturing systems. After a review of current academic and industrial activities in these fields, a Virtual Production Lines (VPLs) design methodology is proposed to facilitate a Manufacturing Execution System integrated with a shop floor system. A case study on a back-end semiconductor line is performed to demonstrate that the proposed methodology is effective in increasing system throughput and decreasing tardiness. An adaptive algorithm is proposed to deal with machine failures and maintenance. To minimize the environmental impacts caused by end-of-life or faulty products, this research addresses the fundamental design and implementation issues of an integrated flexible demanufacturing system (IFDS). Building on the success of the VPL design and accounting for the differences between disassembly and assembly, a systematic approach is developed for disassembly line design. This thesis presents a novel disassembly planning and demanufacturing scheduling method for such a system. Case studies on the disassembly of personal computers are performed, illustrating how the proposed approaches work

    Active Learning of Piecewise Gaussian Process Surrogates

    Full text link
    Active learning of Gaussian process (GP) surrogates has been useful for optimizing experimental designs for physical/computer simulation experiments, and for steering data acquisition schemes in machine learning. In this paper, we develop a method for active learning of piecewise, Jump GP surrogates. Jump GPs are continuous within, but discontinuous across, regions of a design space, as required for applications spanning autonomous materials design, configuration of smart factory systems, and many others. Although our active learning heuristics are appropriated from strategies originally designed for ordinary GPs, we demonstrate that additionally accounting for model bias, as opposed to the usual model uncertainty, is essential in the Jump GP context. Toward that end, we develop an estimator for bias and variance of Jump GP models. Illustrations, and evidence of the advantage of our proposed methods, are provided on a suite of synthetic benchmarks, and real-simulation experiments of varying complexity.
    Comment: The main algorithm of this work is protected by a provisional patent pending with application number 63/386,82
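    A minimal sketch of the bias-aware selection idea: candidate designs are ranked by estimated squared bias plus predictive variance rather than by variance alone. The surrogate interface returning (mean, variance, bias) is a hypothetical stand-in, not the authors' Jump GP implementation.

    import numpy as np

    def bias_aware_acquisition(candidates, surrogate):
        """Return the candidate design with the largest estimated bias^2 + variance."""
        scores = []
        for x in candidates:
            mean, var, bias = surrogate.predict(x)   # assumed surrogate interface
            scores.append(bias ** 2 + var)
        return candidates[int(np.argmax(scores))]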

    Statistical Methods for Semiconductor Manufacturing

    Get PDF
    In this thesis, techniques for non-parametric modeling, machine learning, filtering and prediction, and run-to-run control for semiconductor manufacturing are described. In particular, algorithms have been developed for two major application areas:
    - Virtual Metrology (VM) systems;
    - Predictive Maintenance (PdM) systems.
    Both technologies have proliferated in recent years in semiconductor fabrication plants (fabs) in order to increase productivity and decrease costs. VM systems aim at predicting quantities on the wafer, the basic product of the semiconductor industry, that may or may not be physically measurable. These quantities are usually 'costly' to measure in economic or temporal terms; the prediction is instead based on process variables and/or logistic information on the production, which are always available and can be used for modeling without further cost. PdM systems, on the other hand, aim at predicting when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical methods and on the availability of process/logistic data, contrasts with other classical approaches:
    - Run-to-Failure (R2F), where no interventions are performed on the machine/process until a breakdown or specification violation occurs in production;
    - Preventive Maintenance (PvM), where maintenance is scheduled in advance based on time intervals or production counts.
    Neither approach is optimal, because they do not ensure that breakdowns and wafer scrap will not occur and, in the case of PvM, they may lead to unnecessary maintenance without fully exploiting the lifetime of the machine or process. The main goal of this thesis is to prove, through several applications and feasibility studies, that the use of statistical modeling algorithms and control systems can improve the efficiency, yield, and profits of a manufacturing environment like the semiconductor one, where large amounts of data are recorded and can be employed to build mathematical models. We present several original contributions, both in the form of applications and methods. The introduction of this thesis gives an overview of the semiconductor fabrication process: the most common practices in Advanced Process Control (APC) systems and the major issues for engineers and statisticians working in this area are presented. Furthermore, we illustrate the methods and mathematical models used in the applications. We then discuss in detail the following applications:
    - A VM system for estimating the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, exploiting Fault Detection and Classification (FDC) data, is presented. In this tool a new clustering algorithm based on Information Theory (IT) elements is proposed. In addition, the Least Angle Regression (LARS) algorithm is applied for the first time to VM problems.
    - A new VM module for a multi-step (CVD, Etching and Lithography) line is proposed, where Multi-Task Learning techniques are employed.
    - A new Machine Learning algorithm based on Kernel Methods for the estimation of scalar outputs from time-series inputs is illustrated.
    - Run-to-Run control algorithms that exploit both physical measurements and statistical ones (coming from a VM system) are presented; this tool is based on IT elements.
    - A PdM module based on filtering and prediction techniques (Kalman filter, Monte Carlo methods) is developed for the prediction of maintenance interventions in the Epitaxy process.
    - A PdM system based on Elastic Nets for maintenance predictions in an Ion Implantation tool is described.
    Several of the aforementioned works have been developed in collaboration with major European semiconductor companies within the EU FP7 project IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); these collaborations are specified throughout the thesis, underlining the practical aspects of implementing the proposed technologies in a real industrial environment
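    A minimal sketch of a Virtual Metrology regression along the lines described above: a wafer quantity (e.g. deposited thickness) is predicted from FDC process features with Least Angle Regression. The synthetic data and the use of scikit-learn's LarsCV are illustrative assumptions, not the thesis implementation.

    import numpy as np
    from sklearn.linear_model import LarsCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 50))                            # synthetic FDC features
    w = np.zeros(50); w[:5] = [1.5, -2.0, 0.8, 1.1, -0.6]     # only a few relevant sensors
    y = X @ w + 0.1 * rng.normal(size=400)                    # synthetic "thickness" target

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    vm_model = LarsCV(cv=5).fit(X_tr, y_tr)                   # sparse linear VM model
    print("held-out R^2:", round(vm_model.score(X_te, y_te), 3))
    print("sensors selected:", int(np.count_nonzero(vm_model.coef_)))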

    Index to 1984 NASA Tech Briefs, volume 9, numbers 1-4

    Get PDF
    Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for the 1984 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences

    Predictive Modeling for Intelligent Maintenance in Complex Semiconductor Manufacturing Processes.

    Full text link
    Semiconductor fabrication is one of the most complicated manufacturing processes, in which the current prevailing maintenance practices are preventive maintenance, using either time-based or wafer-based scheduling strategies, which may lead to the tools being either “over-maintained” or “under-maintained”. In the literature, condition-based maintenance, which utilizes machine conditions to schedule maintenance, is rare, and there is almost no truly predictive maintenance that assesses the remaining useful lives of machines and plans maintenance actions proactively. The research presented in this thesis is aimed at developing predictive modeling methods for intelligent maintenance in semiconductor manufacturing processes, using in-process tool performance as well as product quality information. In order to achieve improved maintenance decision-making, a method for integrating data from different domains to predict process yield is proposed. Self-organizing maps are utilized to discretize continuous data into discrete values, which greatly reduces the computational cost of the Bayesian network learning process that discovers the stochastic dependences among process parameters and product quality. This method enables more proactive product quality prediction, in contrast to traditional methods based solely on inspection results. Furthermore, a method of using observable process information to estimate stratified tool degradation levels is proposed. A single hidden Markov model (HMM) is employed to represent the tool degradation process under a single recipe, and the concatenation of multiple HMMs can be used to model the tool degradation under multiple recipes. To validate the proposed method, a simulation study has been conducted, which shows that HMMs are able to model the stratified unobservable degradation process under variable operating conditions. This method enables one to estimate the condition of in-chamber particle contamination so that maintenance actions can be initiated accordingly. With these two novel methods, a methodological framework to perform better maintenance in complex manufacturing processes is established. The simulation study shows that the maintenance cost can be reduced by performing predictive maintenance properly while the highest possible yield is retained. This framework provides a possibility of using abundant equipment monitoring data and product quality information to coordinate maintenance actions in a complex manufacturing environment.
    Ph.D., Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/58530/1/yangliu_1.pd
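    A minimal sketch of modeling unobservable degradation levels with a Gaussian hidden Markov model, in the spirit of the approach above: hidden states stand for degradation stages and the observations are monitored process signals. The synthetic sensor trace and the use of the hmmlearn package are illustrative assumptions, not the thesis implementation.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    # Synthetic single-recipe sensor trace drifting through three degradation stages.
    true_stage = np.repeat([0, 1, 2], 100)
    obs = (true_stage * 1.5 + rng.normal(scale=0.4, size=true_stage.size)).reshape(-1, 1)

    hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
    hmm.fit(obs)                                 # learn stage-dependent emissions
    inferred_stage = hmm.predict(obs)            # decode the hidden degradation level
    print("inferred stage at end of trace:", int(inferred_stage[-1]))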