Integrated performance prediction and quality control in manufacturing systems
Predicting the condition of a degrading dynamic system is critical for implementing successful control and for designing optimal operation and maintenance strategies throughout the lifetime of the system. In many situations, especially in manufacturing, systems experience multiple degradation cycles, failures, and maintenance events throughout their lifetimes. In such cases, historical records of sensor readings observed during the lifecycle of a machine can yield vital information about degradation patterns of the monitored machine, which can be used to formulate dynamic models for predicting its future performance. Besides the ability to predict equipment failures, another major component of cost-effective, high-throughput manufacturing is tight control of product quality. Quality control is assured by taking periodic measurements of the products at various stages of production. However, quality measurements of the product take time and are often executed on costly measurement equipment, which increases the cost of manufacturing and slows down production. One possible way to remedy this situation is to utilize the inherent link between the condition of the manufacturing equipment, mirrored in the readings of sensors mounted on that machine, and the quality of the products coming out of it. The concept of Virtual Metrology (VM) addresses the quality control problem by using data-driven models that relate product quality to the equipment sensors, enabling continuous estimation of the quality characteristics of the product even when physical measurements of product quality are not available. VM can thus bring significant production benefits, including improved process control, reduced quality losses, and higher productivity.
In this dissertation, new methods are formulated that combine long-term performance prediction of sensory signatures from a degrading manufacturing machine with VM quality estimation, enabling the integration of predictive condition monitoring (prediction of sensory signatures) with predictive manufacturing process control (a predictive VM model). The recently developed algorithm for prediction of sensory signatures is capable of predicting the system condition by comparing the similarity of the most recent performance signatures with the known degradation patterns available in the historical records. The method accomplishes the prediction of non-Gaussian and non-stationary time-series of relevant performance signatures with analytical tractability, which enables calculation of predicted signature distributions at significantly greater speeds than those reported in the literature. VM quality estimation is implemented using the recently introduced growing structure multiple model system (GSMMS) paradigm, based on the use of local linear dynamic models. The concept of local models enables representation of complex, non-linear dependencies with non-Gaussian and non-stationary noise characteristics, using a locally tractable model representation. Localized modeling enables a VM system that can detect situations when the VM model is not adequate and needs to be improved, which is one of the main challenges in VM. Finally, uncertainty propagation with Monte Carlo simulation is pursued in order to propagate the predicted distributions of equipment signatures through the VM model, enabling prediction of the distributions of the quality variables from the readily available sensor readings streaming from the monitored manufacturing machine. The newly developed methods are applied to long-term production data from an industrial plasma-enhanced chemical vapor deposition (PECVD) tool operating in a major semiconductor manufacturing fab.
Mechanical Engineering
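The final step described above, pushing a predicted sensor-signature distribution through a VM model via Monte Carlo simulation, can be sketched as follows. The Gaussian signature forecast and the quadratic VM map below are illustrative assumptions, not the dissertation's actual models.

```python
import numpy as np

def propagate_uncertainty(vm_model, mean, cov, n_samples=10000, rng=None):
    """Monte Carlo propagation: sample predicted sensor signatures from their
    forecast distribution and push each sample through the VM model to obtain
    a distribution over the quality variable."""
    rng = np.random.default_rng(rng)
    sensors = rng.multivariate_normal(mean, cov, size=n_samples)
    quality = np.array([vm_model(s) for s in sensors])
    return quality.mean(), quality.std()

# Hypothetical quadratic VM map from two sensor channels to one quality variable.
vm = lambda x: 0.8 * x[0] + 0.1 * x[1] ** 2

m, s = propagate_uncertainty(vm, mean=[1.0, 0.0],
                             cov=[[0.04, 0.0], [0.0, 0.01]], rng=0)
```

Because the VM map is nonlinear, the output distribution is not Gaussian even though the input is, which is exactly why sampling is used instead of a closed-form push-through.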
Virtual metrology for plasma etch processes.
Plasma processes can present difficult control challenges due to time-varying dynamics
and a lack of relevant and/or regular measurements. Virtual metrology (VM) is the
use of mathematical models with accessible measurements from an operating process to
estimate variables of interest. This thesis addresses the challenge of virtual metrology
for plasma processes, with a particular focus on semiconductor plasma etch.
Introductory material covering the essentials of plasma physics, plasma etching, plasma
measurement techniques, and black-box modelling techniques is first presented for readers
not familiar with these subjects. A comprehensive literature review is then completed
to detail the state of the art in modelling and VM research for plasma etch processes.
To demonstrate the versatility of VM, a temperature monitoring system utilising a
state-space model and Luenberger observer is designed for the variable specific impulse
magnetoplasma rocket (VASIMR) engine, a plasma-based space propulsion system. The
temperature monitoring system uses optical emission spectroscopy (OES) measurements
from the VASIMR engine plasma to correct temperature estimates in the presence of
modelling error and inaccurate initial conditions. Temperature estimates within 2% of
the real values are achieved using this scheme.
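The observer idea can be illustrated with a minimal sketch: a first-order discrete-time thermal model whose state estimate is corrected by the measurement residual. The model structure, gains, and initial conditions here are invented for illustration and are unrelated to the actual VASIMR thermal model.

```python
# Toy first-order thermal model x[k+1] = a*x[k] + b*u[k], measured as y = c*x.
# A Luenberger observer corrects its estimate with the measurement residual:
#   xhat[k+1] = a*xhat[k] + b*u[k] + L*(y[k] - c*xhat[k])
a, b, c, L = 0.9, 0.5, 1.0, 0.6   # illustrative constants; |a - L*c| < 1 so the error decays

def simulate(n_steps, u, x0, xhat0):
    x, xhat = x0, xhat0
    for _ in range(n_steps):
        y = c * x                                      # sensor reading (e.g. OES-derived)
        xhat = a * xhat + b * u + L * (y - c * xhat)   # observer update
        x = a * x + b * u                              # true plant update
    return x, xhat

# Observer starts from a wrong initial condition yet converges to the true state.
x_true, x_est = simulate(50, u=1.0, x0=5.0, xhat0=0.0)
```

The estimation error evolves as e[k+1] = (a - L*c) e[k], so any gain with |a - L*c| < 1 drives the estimate to the true state despite the wrong initial condition.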
An extensive examination of the implementation of a wafer-to-wafer VM scheme to estimate
plasma etch rate for an industrial plasma etch process is presented. The VM
models estimate etch rate using measurements from the processing tool and a plasma
impedance monitor (PIM). A selection of modelling techniques is considered for VM
modelling, and Gaussian process regression (GPR) is applied for the first time for VM
of plasma etch rate. Models with global and local scope are compared, and modelling
schemes that attempt to cater for the etch process dynamics are proposed. GPR-based
windowed models produce the most accurate estimates, achieving mean absolute percentage
errors (MAPEs) of approximately 1.15%. The consistency of the results presented
suggests that this level of accuracy represents the best accuracy achievable for
the plasma etch system at the current frequency of metrology.
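A windowed GP-regression scheme of this kind can be sketched as follows, with a hand-rolled RBF-kernel GP on synthetic 1-D data standing in for the tool and PIM measurements; the kernel, window size, and data are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """RBF kernel matrix between two 1-D input arrays."""
    d = A[:, None] - B[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gpr_predict(x_train, y_train, x_test, noise=1e-4):
    """Gaussian process regression: posterior mean and standard deviation."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def windowed_estimate(x_hist, y_hist, x_new, window=20):
    """Refit on the most recent window of metrology points so the model
    tracks slow drift in the etch process."""
    x_w, y_w = np.asarray(x_hist[-window:]), np.asarray(y_hist[-window:])
    return gpr_predict(x_w, y_w, np.asarray([x_new]))

xs = np.linspace(0.0, 3.0, 40)
ys = np.sin(xs)                    # stand-in for slowly varying etch-rate data
mean, std = windowed_estimate(xs, ys, 2.0)
```

The posterior standard deviation is what makes GPR attractive for VM: each etch-rate estimate comes with its own confidence measure.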
Finally, a real-time VM and model predictive control (MPC) scheme for control of
plasma electron density in an industrial etch chamber is designed and tested. The VM
scheme uses PIM measurements to estimate electron density in real time. A predictive
functional control (PFC) scheme is implemented to cater for a time delay in the VM
system. The controller achieves time constants of less than one second, no overshoot,
and excellent disturbance rejection properties. The PFC scheme is further expanded by
adapting the internal model in the controller in real time in response to changes in the
process operating point.
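The predictive functional control idea can be sketched on a first-order internal model: at each step, the control move is chosen so that the model output lands on a reference trajectory that decays toward the setpoint. All constants below are illustrative, not the thesis's chamber model.

```python
import math

# One-step PFC for a first-order plant y[k+1] = a*y[k] + b*u[k]
# (illustrative internal model). Each step, u is chosen so the model output
# reaches a reference trajectory decaying toward the setpoint.
def pfc_step(y, sp, a, b, lam):
    y_target = y + (1.0 - lam) * (sp - y)   # point on the reference trajectory
    return (y_target - a * y) / b           # control move that reaches it

a, b = 0.8, 0.4
lam = math.exp(-1.0 / 5.0)                  # ~5-sample closed-loop time constant
y, history = 0.0, []
for _ in range(40):
    u = pfc_step(y, 1.0, a, b, lam)         # setpoint = 1.0
    y = a * y + b * u                       # plant equals the model in this sketch
    history.append(y)
```

With a perfect model the closed-loop error decays by the factor lam each step, giving the fast, overshoot-free response described above; model mismatch and the VM time delay are what the full scheme in the thesis must additionally handle.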
Doctor of Philosophy
In order to ensure high production yield of semiconductor devices, it is desirable to characterize intermediate progress towards the final product by using metrology tools to acquire relevant measurements after each sequential processing step. The metrology data are commonly used in feedback and feed-forward loops of Run-to-Run (R2R) controllers to improve process capability and optimize recipes from lot to lot or batch to batch. In this dissertation, we focus on two related issues. First, we propose a novel non-threaded R2R controller that utilizes all available metrology measurements, even when the data were acquired during prior runs whose contexts differed from the current fabrication thread. The developed controller is the first known implementation of a non-threaded R2R control strategy successfully deployed in a high-volume production semiconductor fab. Its introduction improved process capability by 8% compared with traditional threaded R2R control and significantly reduced out-of-control (OOC) events at one of the most critical steps in NAND memory manufacturing. The second contribution demonstrates the value of developing virtual metrology (VM) estimators using the insight gained from multiphysics models. Unlike traditional statistical regression techniques, which lead to models that depend on a linear combination of the available measurements, we develop VM models whose structure, and the functional interdependence between their input and output variables, are determined from the insight provided by the multiphysics describing the operation of the processing step for which the VM system is being developed. We demonstrate this approach for three different processes and describe the superior performance of the developed VM systems after their first-of-a-kind deployment in a high-volume semiconductor manufacturing environment.
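The classical threaded baseline that the proposed controller improves upon is an EWMA run-to-run loop: after each run, a disturbance estimate is updated from the metrology reading and the next recipe input is chosen to cancel it. A minimal sketch, assuming a linear process with additive disturbance (all constants illustrative):

```python
# EWMA run-to-run controller for an assumed linear process y = beta*u + d.
def ewma_r2r(target, beta, lam, disturbances):
    est_d, outputs = 0.0, []
    for d in disturbances:
        u = (target - est_d) / beta                        # recipe for this run
        y = beta * u + d                                   # measured output of the run
        est_d = lam * (y - beta * u) + (1 - lam) * est_d   # EWMA disturbance update
        outputs.append(y)
    return outputs

# Step disturbance of +2.0: the controller drives y back toward the target 10.0.
ys = ewma_r2r(target=10.0, beta=1.0, lam=0.4, disturbances=[2.0] * 30)
```

In a threaded deployment one such loop is maintained per context (product, tool, layer); the dissertation's non-threaded controller instead pools metrology across contexts, which is what improves capability when individual threads see data only rarely.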
Statistical Methods for Semiconductor Manufacturing
In this thesis, techniques for non-parametric modeling, machine learning, filtering, prediction, and run-to-run control for semiconductor manufacturing are described.
In particular, algorithms have been developed for two major application areas:
- Virtual Metrology (VM) systems;
- Predictive Maintenance (PdM) systems.
Both technologies have proliferated in recent years in semiconductor fabrication plants (fabs) in order to increase productivity and decrease costs.
VM systems aim at predicting quantities on the wafer, the main and basic product of the semiconductor industry, that may or may not be physically measurable. These quantities are usually "costly" to measure in economic or temporal terms; the prediction is instead based on process variables and/or logistic information on the production, which are always available and can be used for modeling without further costs.
PdM systems, on the other hand, aim at predicting when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical methods and on the availability of process/logistic data, is in contrast with other classical approaches:
- Run-to-Failure (R2F), where no intervention is performed on the machine/process until a breakdown or specification violation happens in production;
- Preventive Maintenance (PvM), where maintenance actions are scheduled in advance based on time intervals or on production iterations.
Neither of these approaches is optimal: they do not ensure that breakdowns and wafer scrap will not happen and, in the case of PvM, they may lead to unnecessary maintenance actions that do not fully exploit the lifetime of the machine or process.
The main goal of this thesis is to prove, through several applications and feasibility studies, that the use of statistical modeling algorithms and control systems can improve the efficiency, yield, and profits of a manufacturing environment like the semiconductor one, where large amounts of data are recorded and can be employed to build mathematical models.
We present several original contributions, both in the form of applications and methods.
The introduction of this thesis gives an overview of the semiconductor fabrication process: the most common practices in Advanced Process Control (APC) systems and the major issues for engineers and statisticians working in this area are presented.
Furthermore, we illustrate the methods and mathematical models used in the applications.
We then discuss in detail the following applications:
- A VM system for the estimation of the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, which exploits Fault Detection and Classification (FDC) data, is presented. In this tool, a new clustering algorithm based on Information Theory (IT) elements has been proposed. In addition, the Least Angle Regression (LARS) algorithm has been applied for the first time to VM problems.
- A new VM module for a multi-step (CVD, Etching and Lithography) line is proposed, where Multi-Task Learning techniques have been employed.
- A new Machine Learning algorithm based on Kernel Methods for the estimation of scalar outputs from time series inputs is illustrated.
- Run-to-Run control algorithms that exploit both physical measurements and statistical ones (coming from a VM system) are shown; this tool is based on IT elements.
- A PdM module based on filtering and prediction techniques (Kalman filter, Monte Carlo methods) is developed for the prediction of maintenance interventions in the Epitaxy process.
- A PdM system based on Elastic Nets for maintenance prediction in an Ion Implantation tool is described.
Several of the aforementioned works have been developed in collaboration with major European semiconductor companies in the framework of the European project UE FP7 IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); such collaborations will be specified throughout the thesis, highlighting the practical aspects of the implementation of the proposed technologies in a real industrial environment.
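The filtering-and-prediction idea behind such a PdM module can be sketched with a scalar Kalman filter tracking a drifting health parameter and extrapolating it to a maintenance threshold. The random-walk-with-drift model, noise levels, and threshold are illustrative assumptions, not the Epitaxy module itself.

```python
# Scalar Kalman filter on an assumed model x[k+1] = x[k] + drift + w, y = x + v.
# Maintenance is flagged when the predicted mean will cross the threshold.
def kalman_rul(measurements, drift, q, r, threshold):
    x, p = measurements[0], 1.0
    for y in measurements[1:]:
        x, p = x + drift, p + q                # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p    # update with the new reading
    # Remaining useful life: runs until the predicted mean crosses the threshold.
    rul = max(0.0, (threshold - x) / drift) if drift > 0 else float("inf")
    return x, rul

meas = [0.1 * k + 0.01 * (-1) ** k for k in range(50)]   # noisy drifting signal
x, rul = kalman_rul(meas, drift=0.1, q=1e-4, r=0.01, threshold=10.0)
```

In a real module the drift itself would be estimated (e.g. as a second state) and the crossing time would carry an uncertainty band, which is where the Monte Carlo methods mentioned above come in.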
Probabilistic machine learning approaches for process systems engineering via parametric distribution approximation
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, School of Chemical and Biological Engineering, August 2021.
With the rapid development of measurement technology, higher quality and vast amounts of process data have become available. Nevertheless, process data are "scarce" in many cases, as they are sampled only at certain operating conditions while the dimensionality of the system is large. Furthermore, the process data are inherently stochastic due to the internal characteristics of the system or measurement noise. For this reason, uncertainty is inevitable in process systems, and estimating it becomes a crucial part of engineering tasks, as prediction errors can lead to misguided decisions and cause severe casualties or economic losses. A popular approach to this is applying probabilistic inference techniques that can model the uncertainty in terms of probability. However, most existing probabilistic inference techniques are based on recursive sampling, which makes it difficult to use them for industrial applications that require processing a high-dimensional and massive amount of data. To address this issue, this thesis proposes probabilistic machine learning approaches based on parametric distribution approximation, which can model the uncertainty of the system while circumventing the computational complexity. The proposed approach is applied to three major process engineering tasks: process monitoring, system modeling, and process design.
First, a process monitoring framework is proposed that utilizes a probabilistic classifier for fault classification. To enhance the accuracy of the classifier and reduce the computational cost for its training, a feature extraction method called probabilistic manifold learning is developed and applied to the process data ahead of the fault classification. We demonstrate that this manifold approximation process not only reduces the dimensionality of the data but also casts the data into a clustered structure, making the classifier have a low dependency on the type and dimension of the data. By exploiting this property, non-metric information (e.g., fault labels) of the data is effectively incorporated and the diagnosis performance is drastically improved.
Second, a probabilistic modeling approach based on Bayesian neural networks is proposed. The parameters of deep neural networks are transformed into Gaussian distributions and trained using variational inference. The redundancy of the parameters is autonomously inferred during model training, and insignificant parameters are eliminated a posteriori. Through a verification study, we demonstrate that the proposed approach can not only produce high-fidelity models that describe the stochastic behaviors of the system but also yield the optimal model structure.
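The core mechanics, Gaussian weight posteriors trained by variational inference and pruned when they collapse onto the prior, can be sketched for a single weight; the standard-normal prior is an assumption made here for illustration.

```python
import math, random

# Each network weight w is replaced by a Gaussian q(w) = N(mu, sigma^2).
# Training samples w via the reparameterisation trick and penalises the
# closed-form KL divergence from a standard-normal prior; a weight whose
# posterior collapses onto the prior (mu ~ 0, sigma ~ 1) carries no
# information and can be pruned after training.

def sample_weight(mu, sigma, rng):
    return mu + sigma * rng.gauss(0.0, 1.0)   # reparameterisation trick

def kl_to_std_normal(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ) in closed form
    return 0.5 * (mu ** 2 + sigma ** 2 - 1.0) - math.log(sigma)

rng = random.Random(0)
ws = [sample_weight(2.0, 0.1, rng) for _ in range(5000)]
informative = kl_to_std_normal(2.0, 0.1)   # far from the prior -> large KL
redundant = kl_to_std_normal(0.0, 1.0)     # matches the prior -> zero KL
```

The reparameterisation keeps the sampling differentiable in (mu, sigma), which is what allows the variational objective to be trained by ordinary gradient descent.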
Finally, a novel process design framework is proposed based on reinforcement learning. Unlike conventional optimization methods that recursively evaluate the objective function to find an optimal value, the proposed method approximates the objective function surface with parametric probabilistic distributions. This allows learning a continuous action policy without introducing any cumbersome discretization process. Moreover, the probabilistic policy provides means for effective control of the exploration and exploitation rates according to the certainty information. We demonstrate that the proposed framework can learn process design heuristics during the solution process and use them to solve similar design problems.
Chapter 1 Introduction
1.1. Motivation
1.2. Outline of the thesis
Chapter 2 Backgrounds and preliminaries
2.1. Bayesian inference
2.2. Monte Carlo
2.3. Kullback-Leibler divergence
2.4. Variational inference
2.5. Riemannian manifold
2.6. Finite extended-pseudo-metric space
2.7. Reinforcement learning
2.8. Directed graph
Chapter 3 Process monitoring and fault classification with probabilistic manifold learning
3.1. Introduction
3.2. Methods
3.2.1. Uniform manifold approximation
3.2.2. Clusterization
3.2.3. Projection
3.2.4. Mapping of unknown data query
3.2.5. Inference
3.3. Verification study
3.3.1. Dataset description
3.3.2. Experimental setup
3.3.3. Process monitoring
3.3.4. Projection characteristics
3.3.5. Fault diagnosis
3.3.6. Computational Aspects
Chapter 4 Process system modeling with Bayesian neural networks
4.1. Introduction
4.2. Methods
4.2.1. Long Short-Term Memory (LSTM)
4.2.2. Bayesian LSTM (BLSTM)
4.3. Verification study
4.3.1. System description
4.3.2. Estimation of the plasma variables
4.3.3. Dataset description
4.3.4. Experimental setup
4.3.5. Weight regularization during training
4.3.6. Modeling complex behaviors of the system
4.3.7. Uncertainty quantification and model compression
Chapter 5 Process design based on reinforcement learning with distributional actor-critic networks
5.1. Introduction
5.2. Methods
5.2.1. Flowsheet hashing
5.2.2. Behavioral cloning
5.2.3. Neural Monte Carlo tree search (N-MCTS)
5.2.4. Distributional actor-critic networks (DACN)
5.2.5. Action masking
5.3. Verification study
5.3.1. System description
5.3.2. Experimental setup
5.3.3. Result and discussions
Chapter 6 Concluding remarks
6.1. Summary of the contributions
6.2. Future works
Appendix
A.1. Proof of Lemma 1
A.2. Performance indices for dimension reduction
A.3. Model equations for process units
Bibliography
Improving process monitoring and modeling of batch-type plasma etching tools
Manufacturing equipment in semiconductor factories (fabs) provides abundant data and opportunities for data-driven process monitoring and modeling. In particular, virtual metrology (VM) is an active area of research. Traditional monitoring techniques using univariate statistical process control charts do not provide immediate feedback on quality excursions, hindering the implementation of fab-wide advanced process control initiatives. VM models, or inferential sensors, aim to bridge this gap by predicting quality measurements instantaneously using tool fault detection and classification (FDC) sensor measurements. Existing research in the field of inferential sensors and VM has focused on comparing regression algorithms to demonstrate their feasibility in various applications. However, two important areas, data pretreatment and post-deployment model maintenance, are usually neglected in these discussions. Since it is well known that industrial data are of poor quality, and that semiconductor processes undergo drifts and periodic disturbances, these two issues are roadblocks to furthering the adoption of inferential sensors and VM models. In data pretreatment, batch data collected from FDC systems usually contain inconsistent trajectories of various durations. Most analysis techniques require the data from all batches to be of the same duration with similar trajectory patterns. These inconsistencies, if unresolved, will propagate into the developed model, cause challenges in interpreting the modeling results, and degrade model performance. To address this issue, a Constrained selective Derivative Dynamic Time Warping (CsDTW) method was developed to perform automatic alignment of trajectories. CsDTW is designed to preserve the key features that characterize each batch and can be solved efficiently in polynomial time. Variable selection after trajectory alignment is another topic that requires improvement.
To this end, the proposed Moving Window Variable Importance in Projection (MW-VIP) method yields a more robust set of variables with demonstrably more long-term correlation with the predicted output. In model maintenance, model adaptation has been the standard solution for dealing with drifting processes. However, most case studies have preprocessed the model update data offline, an implicit assumption that the adaptation data are free of faults and outliers, which is often not true in practical implementations. To this end, a moving window scheme using Total Projection to Latent Structure (T-PLS) decomposition screens incoming updates to separate harmless process noise from the outliers that negatively affect the model. The integrated approach was demonstrated to be more robust. In addition, model adaptation is very inefficient when there are multiplicities in the process; multiplicities can occur due to process nonlinearity, switches in product grade, or different operating conditions. A growing structure multiple model system using local PLS and PCA models has been proposed to improve model performance around process conditions with multiplicity. The use of local PLS and PCA models allows the method to handle a much larger set of inputs and overcome several challenges in mixture model systems. In addition, fault detection sensitivity is also improved by using the multivariate monitoring statistics of these local PLS/PCA models. The proposed methods are tested on two plasma etch data sets provided by Texas Instruments. In addition, a proof of concept using virtual metrology in a controller performance assessment application was also tested.
Chemical Engineering
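The dynamic-programming core that CsDTW constrains can be sketched as plain dynamic time warping; the warping constraints and derivative features of CsDTW itself are omitted here.

```python
# Classic dynamic-time-warping distance between two batch trajectories.
# CsDTW builds on this polynomial-time recurrence, adding derivative
# features and warping constraints; this is only the unconstrained core.
def dtw(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Identical patterns of different durations align with zero cost.
short = [0.0, 1.0, 2.0, 1.0, 0.0]
long_ = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0]
```

This is why DTW-style alignment resolves the inconsistent batch durations described above: the warping path absorbs duration differences that a sample-by-sample comparison would count as error.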
2022 Review of Data-Driven Plasma Science
Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest development and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales, and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend now necessitates a highly sophisticated use of high-performance computers for data analyses, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections, in addition to the introduction and summary. Following an overview of fundamental data-driven science, five other sections cover widely studied topics of plasma science and technologies, i.e., basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents the summary and outlook. Despite recent impressive progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of this field and identify where further innovations are required.
Unsupervised Feature Extraction Techniques for Plasma Semiconductor Etch Processes
As feature sizes on semiconductor chips continue to shrink, plasma etching is becoming a more and more critical process in achieving low-cost, high-volume manufacturing. Due to the highly complex physics of plasma and chemical reactions between plasma species, control of plasma etch processes is one of the most difficult challenges facing the integrated circuit industry. This is largely due to the difficulty of monitoring plasmas. Optical Emission Spectroscopy (OES) technology can be used to produce rich plasma chemical information in real time and is increasingly being considered in semiconductor manufacturing for process monitoring and control of plasma etch processes. However, OES data is complex and inherently highly redundant, necessitating the development of advanced algorithms for effective feature extraction.
In this thesis, three new unsupervised feature extraction algorithms are proposed for OES data analysis, and their properties are explored with the aid of both artificial and industrial benchmark data sets. The first algorithm, AWSPCA (Adaptive Weighting Sparse Principal Component Analysis), is developed for dimension reduction with respect to variations in the analysed variables. The algorithm generates sparse principal components while retaining orthogonality and grouping correlated variables together. The second algorithm, MSC (Max Separation Clustering), is developed for clustering variables with distinctive patterns and providing effective pattern representation by a small number of representative variables. The third algorithm, SLHC (Single Linkage Hierarchical Clustering), is developed to achieve a complete and detailed visualisation of the correlation between variables and across clusters in an OES data set.
The developed algorithms open up opportunities for using OES data for accurate process control applications. For example, MSC enables the selection of relevant OES variables for better modeling and control of plasma etching processes. SLHC makes it possible to understand and interpret patterns in OES spectra and how they relate to the plasma chemistry. This in turn can help engineers to achieve an in-depth understanding of the underlying plasma processes.
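Standard single-linkage clustering on a correlation-based distance gives a simplified stand-in for the SLHC idea; the distance definition, threshold, and toy channels below are illustrative assumptions, not the thesis's algorithm.

```python
# Single-linkage hierarchical clustering of signal channels using the
# correlation-based distance d = 1 - |corr|, merging until a threshold.
def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def single_linkage(series, threshold):
    clusters = [{i} for i in range(len(series))]
    dist = lambda i, j: 1.0 - abs(corr(series[i], series[j]))
    while len(clusters) > 1:
        # single linkage: distance between clusters = closest member pair
        pairs = [(min(dist(i, j) for i in c1 for j in c2), a, b)
                 for a, c1 in enumerate(clusters)
                 for b, c2 in enumerate(clusters) if a < b]
        d, a, b = min(pairs)
        if d > threshold:
            break
        clusters[a] |= clusters.pop(b)
    return clusters

t = range(20)
lines = [[float(k) for k in t],           # rising channel
         [2.0 * k + 1.0 for k in t],      # perfectly correlated with it
         [float((-1) ** k) for k in t]]   # alternating, uncorrelated channel
groups = single_linkage(lines, threshold=0.5)
```

Recording the sequence of merge distances (rather than stopping at a threshold) is what yields the dendrogram-style visualisation of inter-variable correlation described above.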
- …