1,829 research outputs found

    A review of data mining applications in semiconductor manufacturing

    The authors acknowledge Fundação para a Ciência e a Tecnologia (FCT-MCTES) for its financial support via the project UIDB/00667/2020 (UNIDEMI). For decades, industrial companies have been collecting and storing large amounts of data with the aim of better controlling and managing their processes. However, the hidden knowledge implicit in all of this data could be utilized more efficiently. With the help of data mining techniques, unknown relationships can be systematically discovered. The production of semiconductors is a highly complex process comprising several subprocesses that employ a diverse array of equipment, and the small size of semiconductor devices means that a high number of units can be produced, which requires huge amounts of data to control and improve the manufacturing process. Therefore, this paper presents a structured review of a sample of 137 articles published in the scientific community on data mining applications in semiconductor manufacturing, together with a detailed bibliometric analysis. All data mining applications are classified according to their application area; the results are then analyzed and conclusions are drawn.

    Statistical Methods for Semiconductor Manufacturing

    In this thesis, techniques for non-parametric modeling, machine learning, filtering and prediction, and run-to-run control for semiconductor manufacturing are described. In particular, algorithms have been developed for two major application areas: Virtual Metrology (VM) systems and Predictive Maintenance (PdM) systems. Both technologies have proliferated in recent years in semiconductor fabrication plants (fabs) in order to increase productivity and decrease costs. VM systems aim at predicting quantities on the wafer, the main and basic product of the semiconductor industry, that may or may not be physically measurable. These quantities are usually 'costly' to measure in economic or temporal terms; the prediction is instead based on process variables and/or logistic information on the production, which are always available and can be used for modeling without further costs. PdM systems, on the other hand, aim at predicting when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical methods and on the availability of process/logistic data, contrasts with two classical approaches: Run-to-Failure (R2F), where no intervention is performed on the machine/process until a breakdown or specification violation occurs in production; and Preventive Maintenance (PvM), where maintenance is scheduled in advance based on time intervals or production iterations. Neither approach is optimal, because they do not ensure that breakdowns and wafer scrap will not happen and, in the case of PvM, they may lead to unnecessary maintenance interventions without fully exploiting the lifetime of the machine or the process.
The main goal of this thesis is to show, through several applications and feasibility studies, that statistical modeling algorithms and control systems can improve the efficiency, yield, and profits of a manufacturing environment like the semiconductor one, where large amounts of data are recorded and can be employed to build mathematical models. We present several original contributions, both applications and methods. The introduction of the thesis gives an overview of the semiconductor fabrication process, presenting the most common practices in Advanced Process Control (APC) systems and the major issues facing engineers and statisticians working in this area; it also illustrates the methods and mathematical models used in the applications. We then discuss the following applications in detail:
- A VM system for estimating the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, exploiting Fault Detection and Classification (FDC) data. In this tool, a new clustering algorithm based on Information Theory (IT) elements is proposed, and the Least Angle Regression (LARS) algorithm is applied for the first time to VM problems.
- A new VM module for a multi-step (CVD, Etching and Lithography) line, where Multi-Task Learning techniques are employed.
- A new machine learning algorithm based on kernel methods for estimating scalar outputs from time-series inputs.
- Run-to-Run control algorithms, based on IT elements, that employ both physical measurements and statistical ones (coming from a VM system).
- A PdM module based on filtering and prediction techniques (Kalman filter, Monte Carlo methods), developed to predict maintenance interventions in the Epitaxy process.
- A PdM system based on Elastic Nets for maintenance prediction in an Ion Implantation tool.
Several of the aforementioned works were developed in collaboration with major European semiconductor companies in the framework of the European project EU FP7 IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); these collaborations are specified throughout the thesis, underlining the practical aspects of implementing the proposed technologies in a real industrial environment.
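As a rough illustration of the sparse-regression idea behind the LARS-based VM contribution above, the following sketch fits a LARS model to synthetic FDC-style data. All dimensions, coefficients, and data here are invented placeholders, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Lars
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_wafers, n_fdc_vars = 200, 50

# Synthetic FDC-style summary statistics per wafer; the "thickness" target
# depends on only a sparse subset of the 50 variables.
X = rng.normal(size=(n_wafers, n_fdc_vars))
true_coef = np.zeros(n_fdc_vars)
true_coef[:5] = [3.0, -2.0, 1.5, 0.8, -1.2]
y = X @ true_coef + 0.1 * rng.normal(size=n_wafers)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LARS adds regressors one at a time, which suits VM problems where only a
# few of the many process variables actually drive the metrology target.
model = Lars(n_nonzero_coefs=5).fit(X_tr, y_tr)
mae = np.mean(np.abs(model.predict(X_te) - y_te))
print(f"{np.sum(model.coef_ != 0)} variables selected, hold-out MAE: {mae:.3f}")
```

The appeal of LARS in this setting is that the number of selected regressors is an explicit knob, so the engineer controls model sparsity directly.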

    Virtual metrology for plasma etch processes.

    Plasma processes can present difficult control challenges due to time-varying dynamics and a lack of relevant and/or regular measurements. Virtual metrology (VM) is the use of mathematical models with accessible measurements from an operating process to estimate variables of interest. This thesis addresses the challenge of virtual metrology for plasma processes, with a particular focus on semiconductor plasma etch. Introductory material covering the essentials of plasma physics, plasma etching, plasma measurement techniques, and black-box modelling techniques is first presented for readers not familiar with these subjects. A comprehensive literature review is then completed to detail the state of the art in modelling and VM research for plasma etch processes. To demonstrate the versatility of VM, a temperature monitoring system utilising a state-space model and Luenberger observer is designed for the variable specific impulse magnetoplasma rocket (VASIMR) engine, a plasma-based space propulsion system. The temperature monitoring system uses optical emission spectroscopy (OES) measurements from the VASIMR engine plasma to correct temperature estimates in the presence of modelling error and inaccurate initial conditions. Temperature estimates within 2% of the real values are achieved using this scheme. An extensive examination of the implementation of a wafer-to-wafer VM scheme to estimate plasma etch rate for an industrial plasma etch process is presented. The VM models estimate etch rate using measurements from the processing tool and a plasma impedance monitor (PIM). A selection of modelling techniques is considered for VM modelling, and Gaussian process regression (GPR) is applied for the first time for VM of plasma etch rate. Models with global and local scope are compared, and modelling schemes that attempt to cater for the etch process dynamics are proposed.
GPR-based windowed models produce the most accurate estimates, achieving mean absolute percentage errors (MAPEs) of approximately 1.15%. The consistency of the results presented suggests that this level of accuracy represents the best achievable for the plasma etch system at the current frequency of metrology. Finally, a real-time VM and model predictive control (MPC) scheme for control of plasma electron density in an industrial etch chamber is designed and tested. The VM scheme uses PIM measurements to estimate electron density in real time. A predictive functional control (PFC) scheme is implemented to cater for a time delay in the VM system. The controller achieves time constants of less than one second, no overshoot, and excellent disturbance rejection properties. The PFC scheme is further expanded by adapting the internal model in the controller in real time in response to changes in the process operating point.
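The "windowed" GPR modelling idea described in this abstract can be sketched as follows: the model is retrained on only the most recent metrology samples so it can track slow process drift. Everything here is synthetic and illustrative, not the thesis's actual data or model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
n, d, W = 200, 4, 50  # wafers, input variables, window size

# Synthetic tool/PIM summary variables and a slowly drifting "etch rate".
X = rng.normal(size=(n, d))
drift = 0.01 * np.arange(n)
y = X[:, 0] + 0.5 * X[:, 1] + drift + 0.05 * rng.normal(size=n)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
errors = []
for t in range(W, n):
    # Retrain on only the W most recent wafers so the model tracks drift.
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(X[t - W:t], y[t - W:t])
    errors.append(abs(gpr.predict(X[t:t + 1])[0] - y[t]))

print(f"mean absolute one-step error over {len(errors)} wafers: {np.mean(errors):.3f}")
```

The window size trades off responsiveness to drift against the amount of training data available to each local model.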

    Development and introduction of production control improvements for customer-oriented semiconductor manufacturing

    Production control in a semiconductor production facility is a very complex and time-consuming task. Different demands regarding facility performance parameters are defined by the customer and by facility management. These requirements usually conflict, and an efficient strategy is not simple to define. In semiconductor manufacturing, the available production control systems often use priorities to define the importance of each production lot. The production lots are ranked according to the defined priorities; this process is called dispatching. Priority allocation is carried out by special algorithms, and a huge variety of strategies and rules is available in the literature. For the semiconductor foundry business, a very flexible and adaptable policy is needed that takes the facility state and the defined requirements into account. In our case, the production processes are characterized by a low-volume, high-mix product portfolio. This portfolio causes additional stability problems and performance lags, and the resulting instability increases the influence of sound production control logic. This thesis offers a very flexible and adaptable production control policy, based on a detailed facility model built from real-life production data extracted from a real high-mix, low-volume semiconductor facility. The dispatching strategy combines several dispatching rules, so that different requirements such as line balance, throughput optimization, and on-time delivery targets can be taken into account. An automated detailed facility model calculates a semi-optimal combination of the different dispatching rules under a defined objective function that includes different demands from management and the customer. The optimization is realized by a genetic heuristic for fast and efficient search for a close-to-optimal solution. The strategy is evaluated with real-life production data.
The analysis with the detailed facility model of this fab shows an average improvement of 5% to 8% for several facility performance parameters, such as cycle time per mask layer. Finally, the approach is realized and applied at a typical high-mix, low-volume semiconductor facility. The system is realized as a Java implementation that includes common state-of-the-art technologies such as web services, and it replaces the older production control solution. Besides the dispatching algorithm, the production policy includes the possibility of skipping several metrology operations under defined boundary conditions; in a real-life production process, not all metrology operations are necessary for each lot. The thesis evaluates the influence of this sampling mechanism on the production process. The solution is included in the system implementation as a framework for assigning different sampling rules to different metrology operations. Evaluations show the greatest improvements in bottleneck situations such as equipment failures and capacity shortages, while the influence under normal production conditions remains small due to the high product variability. After the productive introduction and usage of both systems, the practical results are evaluated: a staff survey shows good acceptance of and response to the system, and positive effects on the performance measures are visible. The implemented system became part of the daily toolset of a real semiconductor facility.
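A toy sketch of the weighted-dispatching optimization idea from this entry: a small genetic algorithm searches for dispatch-rule weights that minimize an objective. The objective below is a stand-in surrogate (a quadratic with a known optimum), not a fab simulation; rule names, population size, and the target weights are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rules = 3  # e.g. FIFO, critical ratio, shortest processing time

def objective(w):
    # Toy surrogate: pretend the best trade-off uses weights (0.5, 0.3, 0.2).
    target = np.array([0.5, 0.3, 0.2])
    return np.sum((w - target) ** 2)

def normalize(pop):
    # Keep weights non-negative and summing to one per individual.
    pop = np.abs(pop)
    return pop / pop.sum(axis=1, keepdims=True)

pop = normalize(rng.random((30, n_rules)))
for _ in range(100):
    scores = np.array([objective(w) for w in pop])
    parents = pop[np.argsort(scores)[:10]]                  # keep the best 10
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(n_rules) < 0.5, a, b)   # uniform crossover
        child += 0.05 * rng.normal(size=n_rules)            # mutation
        kids.append(child)
    pop = normalize(np.vstack([parents, kids]))

best = pop[np.argmin([objective(w) for w in pop])]
print("best rule weights:", np.round(best, 2))
```

In the thesis the objective is evaluated by a detailed facility simulation rather than a closed-form function, which is exactly why a derivative-free heuristic like a genetic algorithm is attractive there.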

    Virtual metrology for semiconductor manufacturing applications

    To remain competitive in the market, semiconductor companies must achieve high production standards at a reasonable cost. For reasons of both cost and execution time, a quality control strategy based on full measurement of every product is not feasible; tests are performed on a small sample of the original data. The goal of this thesis work is the study and implementation, using non-linear modeling methodologies, of a virtual metrology algorithm to support process control in semiconductor manufacturing. Indeed, an estimate of measurements that were not actually performed (virtual measurements) can represent a first step towards building ever more refined and efficient process control and quality control systems. From an operational point of view, the objective is to provide the most accurate possible estimate of the critical dimensions upstream of the etching phase, starting from the available data (including measurements from the lithography and deposition phases and process data, where available). The state-of-the-art statistical techniques analyzed in this work include multilayer feedforward networks. Comparison and validation of the examined algorithms were made possible by data sets provided by a semiconductor manufacturer. In conclusion, this thesis represents a first step towards the creation of an advanced and flexible process control and quality control system, with the ultimate aim of improving production quality.
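A minimal sketch of the kind of non-linear VM model this entry mentions: a multilayer feedforward network estimating a metrology target from upstream measurements. The data, dimensions, and target function below are synthetic placeholders, not the thesis's data set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n, d = 500, 8
X = rng.normal(size=(n, d))          # stand-in litho/deposition measurements
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=n)

# Standardize inputs, then fit a small feedforward network.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])
mae = np.mean(np.abs(model.predict(X[400:]) - y[400:]))
print(f"hold-out MAE: {mae:.3f}")
```

The non-linear target here is deliberately chosen so that a linear model would fail, which is the usual motivation for feedforward networks in VM.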

    Analyzing sampling methodologies in semiconductor manufacturing

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. Includes bibliographical references (p. 81-83). This thesis describes work completed during an internship assignment at Intel Corporation's process development and wafer fabrication manufacturing facility in Santa Clara, California. At the highest level, this work relates to the importance of adequately creating and maintaining data within IT solutions in order to receive the full business benefit expected through the use of these systems. More specifically, the project uses, as a case example, the sampling methodology used in the fab for metrology data collection to show that significant issues exist relating to the software application database and business processes concerning data accuracy and completeness. The organizational challenges contributing to this problem are also discussed. Various recommendations were undertaken to improve the application's effectiveness. As part of this effort, plans for an online reporting tool were developed, allowing much greater visibility into the system's ongoing performance. Initial data updates and other improvements resulted in a reduction in both product cycle times and required labor hours for metrology operations. Without a rigorous focus on the accuracy and completeness of data within manufacturing execution systems, the results of continuous improvement activities will be less than expected. Furthermore, sharing information relating to these projects across geographical boundaries and business units is vital to the success of manufacturing organizations. by Richard M. Anthony.

    Intelligent Data Acquisition for Predictive Modeling in Manufacturing Systems

    Thesis (Ph.D.) -- Seoul National University, College of Engineering, Department of Industrial Engineering, February 2021. Advisor: Sungzoon Cho. Predictive modeling is a type of supervised learning that finds the functional relationship between input variables and an output variable. Predictive modeling is used in various ways in manufacturing systems, such as automation of visual inspection, prediction of faulty products, and estimation of the results of expensive inspections. To build a high-performance predictive model, it is essential to secure high-quality data. However, in manufacturing systems it is practically impossible to acquire enough data of all the kinds needed for predictive modeling. There are three main difficulties in data acquisition in manufacturing systems. First, labeled data always comes at a cost: in many problems, labeling must be done by experienced engineers, which is expensive. Second, due to inspection cost, not all inspections can be performed on all products; because of time and monetary constraints in the manufacturing system, it is impossible to obtain all the desired inspection results. Third, changes in the manufacturing environment make data acquisition difficult: a change in the manufacturing environment shifts the distribution of the generated data, making it impossible to obtain enough consistent data, so the model has to be trained with a small amount of data. In this dissertation, we overcome these difficulties in data acquisition through active learning, active feature-value acquisition, and domain adaptation. First, we propose an active learning framework to address the high labeling cost of wafer map pattern classification. This makes it possible to achieve higher performance at a lower labeling cost, and cost efficiency is further improved by incorporating cluster-level annotation into active learning. For the inspection cost in the fault prediction problem, we propose an active inspection framework.
By selecting products to undergo high-cost inspection with a novel uncertainty estimation method, high performance can be obtained at low inspection cost. To solve the recipe transition problem that frequently occurs in faulty-wafer prediction in semiconductor manufacturing, domain adaptation methods are used: through sequential application of unsupervised and semi-supervised domain adaptation, the performance degradation due to recipe transition is minimized. Through experiments on real-world data, it is demonstrated that the proposed methodologies can overcome the data acquisition problems in manufacturing systems and improve the performance of the predictive models.
Contents:
1. Introduction
2. Literature Review
  2.1 Review of Related Methodologies (Active Learning; Active Feature-value Acquisition; Domain Adaptation)
  2.2 Review of Predictive Modeling in Manufacturing (Wafer Map Pattern Classification; Fault Detection and Classification)
3. Active Learning for Wafer Map Pattern Classification
  3.1 Problem Description
  3.2 Proposed Method (System overview; Prediction model; Uncertainty estimation; Query wafer selection; Query wafer labeling; Model update)
  3.3 Experiments (Data description; Experimental design; Results and discussion)
4. Active Cluster Annotation for Wafer Map Pattern Classification
  4.1 Problem Description
  4.2 Proposed Method (Clustering of unlabeled data; CNN training with labeled data; Cluster-level uncertainty estimation; Query cluster selection; Cluster-level annotation)
  4.3 Experiments (Data description; Experimental setting; Clustering results; Classification performance; Analysis for label noise)
5. Active Inspection for Fault Prediction
  5.1 Problem Description
  5.2 Proposed Method (Active inspection framework; Acquisition based on Expected Prediction Change)
  5.3 Experiments (Data description; Fault prediction models; Experimental design; Results and discussion)
6. Adaptive Fault Detection for Recipe Transition
  6.1 Problem Description
  6.2 Proposed Method (Overview; Unsupervised adaptation phase; Semi-supervised adaptation phase)
  6.3 Experiments (Data description; Experimental setting; Performance degradation caused by recipe transition; Effect of unsupervised adaptation; Effect of semi-supervised adaptation)
7. Conclusion (Contributions; Future work)
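The uncertainty-driven query selection described in this dissertation abstract can be sketched in a few lines: label the samples the current model is least sure about first. The data below is synthetic (real wafer maps would be images and the model a CNN), and the entropy-based acquisition is one common choice, not necessarily the dissertation's exact criterion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in defect-pattern label

# Seed the labeled set with a few wafers from each class.
labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
pool = [i for i in range(500) if i not in labeled]

for _ in range(5):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    query = [pool[i] for i in np.argsort(entropy)[-10:]]   # 10 most uncertain
    labeled += query                          # "annotate" the queried wafers
    pool = [i for i in pool if i not in query]

acc = clf.score(X[pool], y[pool])
print(f"accuracy with {len(labeled)} labels: {acc:.3f}")
```

Each round spends the labeling budget where the model is most confused, which is why active learning typically reaches a given accuracy with far fewer labels than random sampling.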