Statistical Methods for Semiconductor Manufacturing
This thesis describes techniques for non-parametric modeling, machine learning, filtering and prediction, and run-to-run control in semiconductor manufacturing.
In particular, algorithms have been developed for two major application areas:
- Virtual Metrology (VM) systems;
- Predictive Maintenance (PdM) systems.
Both technologies have spread rapidly in recent years across semiconductor fabrication plants (fabs) as a way to increase productivity and decrease costs.
VM systems aim at predicting quantities on the wafer, the basic product of the semiconductor industry, whether physically measurable or not. These quantities are usually 'costly' to measure in economic or temporal terms; the prediction is instead based on process variables and/or logistic information on the production, which
are always available and can be used for modeling at no additional cost.
PdM systems, on the other hand, aim at predicting when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical
methods and on the availability of process/logistic data, contrasts with two classical approaches:
- Run-to-Failure (R2F), where no intervention is performed on the machine/process until a breakdown or specification violation occurs in production;
- Preventive Maintenance (PvM), where maintenance actions are scheduled in advance based on time intervals or production iterations.
Neither approach is optimal: they cannot guarantee that breakdowns and wafer scrap will not occur, and PvM in particular may lead to unnecessary maintenance actions that do not fully exploit the lifetime of the machine or process.
The main goal of this thesis is to show, through several applications and feasibility studies, that statistical modeling algorithms and control systems can improve the efficiency, yield, and profits of a manufacturing environment like the semiconductor
one, where large amounts of data are recorded and can be employed to build mathematical models.
We present several original contributions, both in the form of applications and methods.
The introduction of this thesis will be an overview on the semiconductor fabrication process: the most common practices on Advanced Process Control (APC) systems
and the major issues for engineers and statisticians working in this area will be presented.
Furthermore we will illustrate the methods and mathematical models used in the applications.
We then discuss the following applications in detail:
- A VM system for estimating the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, which exploits Fault Detection and Classification (FDC) data. Within this tool, a new clustering algorithm based on Information Theory (IT) elements is proposed, and the Least Angle Regression (LARS) algorithm is applied for the first time to VM problems.
- A new VM module for a multi-step (CVD, Etching and Lithography) production line, where Multi-Task Learning techniques are employed.
- A new Machine Learning algorithm based on Kernel Methods for the estimation of scalar outputs from time series inputs is illustrated.
- Run-to-Run control algorithms that exploit both physical measures and statistical ones (coming from a VM system); this tool is based on IT elements.
- A PdM module based on filtering and prediction techniques (Kalman Filter, Monte Carlo methods) is developed for the prediction of maintenance interventions in the Epitaxy process.
- A PdM system based on Elastic Nets for maintenance prediction in an Ion Implantation tool.
Several of the aforementioned works have been developed in collaboration with major European semiconductor companies within the European project EU FP7 IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); these collaborations are detailed throughout the thesis, underlining the practical aspects of implementing the proposed technologies in a real industrial environment.
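The LARS-based VM idea above (predicting a wafer quantity from a few informative FDC variables) can be illustrated with a minimal sketch. The data, variable indices, and coefficients below are synthetic stand-ins, not the thesis' dataset; only the use of scikit-learn's `Lars` estimator is real.

```python
import numpy as np
from sklearn.linear_model import Lars

# Synthetic stand-in for FDC data: 200 wafers, 30 process variables,
# where only a few variables actually drive the target quantity.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
true_coef = np.zeros(30)
true_coef[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ true_coef + 0.05 * rng.normal(size=200)

# LARS builds the model one predictor at a time, which suits VM settings
# where only a handful of process variables are informative.
model = Lars(n_nonzero_coefs=3).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)  # indices of the variables LARS picked
```

The sparsity constraint (`n_nonzero_coefs`) plays the role of variable selection: the fitted model uses at most three predictors, mirroring how a VM module would isolate the few sensor channels that matter.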
Proximal Deterministic Policy Gradient
This paper introduces two simple techniques to improve off-policy
Reinforcement Learning (RL) algorithms. First, we formulate off-policy RL as a
stochastic proximal point iteration. The target network plays the role of the
variable of optimization and the value network computes the proximal operator.
Second, we exploit the two value functions commonly employed in
state-of-the-art off-policy algorithms to provide an improved action-value
estimate through bootstrapping, with a limited increase in computational
resources. Further, we demonstrate significant performance improvements over
state-of-the-art algorithms on standard continuous-control RL benchmarks.
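The two ingredients named in the abstract can be sketched in a toy discrete setting (this is not the paper's implementation, which operates on neural networks in continuous control): combining two critics via an element-wise minimum to curb overestimation in the bootstrapped target, and a slow target update that plays the role of the proximal variable.

```python
import numpy as np

# Two independent Q-tables for a small discrete problem; all sizes,
# rewards, and the tau step size below are illustrative.
rng = np.random.default_rng(1)
n_states, n_actions = 5, 3
q1 = rng.normal(size=(n_states, n_actions))
q2 = rng.normal(size=(n_states, n_actions))

def bootstrap_target(reward, next_state, gamma=0.99):
    """Conservative target: min over the two critics, then max over actions."""
    q_min = np.minimum(q1[next_state], q2[next_state])
    return reward + gamma * q_min.max()

def soft_update(target, online, tau=0.005):
    """Target network drifts slowly toward the online network,
    acting as the slowly-moving variable of a proximal-point scheme."""
    return (1.0 - tau) * target + tau * online

target_q1 = soft_update(q1.copy(), q1 + 0.1)
print(bootstrap_target(reward=1.0, next_state=2))
```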
Robot kinematic structure classification from time series of visual data
In this paper we present a novel algorithm to solve the robot kinematic
structure identification problem. Given a time series of data, typically
obtained by processing a set of visual observations, the proposed approach
identifies the ordered sequence of links in the kinematic chain, the
joint type interconnecting each pair of consecutive links, and the input
signal influencing the relative motion. Compared to the state of the art, the
proposed algorithm has reduced computational costs and is also able to
identify the sequence of joint types.
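A hypothetical sketch of one part of this task, classifying the joint type between two consecutive links from tracked 3-D positions: under a revolute joint the inter-link distance stays constant, while under a prismatic joint it varies. The function name, threshold, and trajectories are illustrative, not the paper's method.

```python
import numpy as np

def classify_joint(link_a, link_b, tol=1e-3):
    """Label the joint between two tracked links from position time series.

    link_a, link_b: (T, 3) arrays of 3-D positions over T frames.
    Constant inter-link distance -> revolute; varying -> prismatic.
    """
    dists = np.linalg.norm(link_a - link_b, axis=1)
    return "revolute" if np.std(dists) < tol else "prismatic"

t = np.linspace(0, 2 * np.pi, 100)
a = np.zeros((100, 3))

# Link B rotates about link A at a fixed radius -> revolute.
b = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
print(classify_joint(a, b))        # revolute

# Link B slides away from link A along an axis -> prismatic.
b_slide = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
print(classify_joint(a, b_slide))  # prismatic
```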
Anomaly Detection Approaches for Semiconductor Manufacturing
Abstract Smart production monitoring is a crucial activity in advanced manufacturing for quality control and maintenance purposes. Advanced Monitoring Systems aim to detect anomalies and trends; anomalies are data patterns that have different data characteristics from normal instances, while trends are tendencies of production to move in a particular direction over time. In this work, we compare state-of-the-art ML approaches (ABOD, LOF, onlinePCA and osPCA) to detect outliers and events in high-dimensional monitoring problems. The compared anomaly detection strategies have been tested on a real industrial dataset related to a Semiconductor Manufacturing Etching process.
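One of the compared detectors, Local Outlier Factor (LOF), can be demonstrated with a minimal sketch. The data below are synthetic stand-ins for high-dimensional etching-process traces; only the scikit-learn `LocalOutlierFactor` API is real.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Synthetic monitoring data: 200 normal 10-dimensional samples plus
# 5 far-away anomalies appended at the end.
rng = np.random.default_rng(2)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 10))
anomalies = rng.normal(loc=8.0, scale=1.0, size=(5, 10))
X = np.vstack([normal, anomalies])

# LOF scores each point by how isolated it is relative to its
# k nearest neighbours; fit_predict returns -1 for outliers, 1 for inliers.
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)
print((labels == -1).sum())  # number of points flagged as anomalous
```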
Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms
Search Engines (SE) have been shown to perpetuate well-known gender
stereotypes identified in psychology literature and to influence users
accordingly. Similar biases were found encoded in Word Embeddings (WEs) learned
from large online corpora. In this context, we propose the Gender Stereotype
Reinforcement (GSR) measure, which quantifies the tendency of a SE to support
gender stereotypes, leveraging gender-related information encoded in WEs.
Through the critical lens of construct validity, we validate the proposed
measure on synthetic and real collections. Subsequently, we use GSR to compare
widely-used Information Retrieval ranking algorithms, including lexical,
semantic, and neural models. We check if and how ranking algorithms based on
WEs inherit the biases of the underlying embeddings. We also consider the most
common debiasing approaches for WEs proposed in the literature and test their
impact in terms of GSR and common performance measures. To the best of our
knowledge, GSR is the first measure specifically tailored to IR that is capable of
quantifying representational harms. (To appear in Information Processing & Management.)
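The core mechanism behind WE-based bias measures can be sketched as follows. This is not the GSR formula itself: it only shows the underlying idea of projecting word vectors onto a gender direction built from definitional pairs. The 3-dimensional vectors are toy stand-ins, not trained embeddings.

```python
import numpy as np

def gender_direction(he, she):
    """Unit vector from the 'she' pole to the 'he' pole."""
    d = he - she
    return d / np.linalg.norm(d)

def bias_score(word_vec, direction):
    """Signed projection: positive leans 'he', negative leans 'she'."""
    return float(word_vec @ direction)

# Toy embedding vectors (illustrative values only).
he = np.array([1.0, 0.0, 0.0])
she = np.array([-1.0, 0.0, 0.0])
engineer = np.array([0.6, 0.3, 0.1])   # tilted toward the "he" pole
nurse = np.array([-0.5, 0.4, 0.2])     # tilted toward the "she" pole

d = gender_direction(he, she)
print(bias_score(engineer, d), bias_score(nurse, d))  # → 0.6 -0.5
```

A measure like GSR aggregates this kind of directional information over the terms a ranking algorithm retrieves, rather than scoring single words in isolation.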
A Convolutional Autoencoder Approach for Feature Extraction in Virtual Metrology
Abstract Exploiting the huge amount of data collected by industries is definitely one of the main challenges of the so-called Big Data era. In this sense, Machine Learning has gained growing attention in the scientific community, as it allows valuable information to be extracted by means of statistical predictive models trained on historical process data. In Semiconductor Manufacturing, one of the most extensively employed data-driven applications is Virtual Metrology, where a costly or unmeasurable variable is estimated by means of cheap and easy-to-obtain measures that are already available in the system. Often, these measures are multi-dimensional, so traditional Machine Learning algorithms cannot handle them directly. Instead, they require feature extraction, a preliminary step where relevant information is extracted from raw data and converted into a design matrix. Features are often hand-engineered and based on specific domain knowledge. Moreover, they may be difficult to scale and prone to information loss, affecting the effectiveness and maintainability of machine learning procedures. In this paper, we present a Deep Learning method for semi-supervised feature extraction based on Convolutional Autoencoders that is able to overcome the aforementioned problems. The proposed method is tested on a real dataset for Etch rate estimation. Optical Emission Spectrometry data, which exhibit a complex bi-dimensional time and wavelength evolution, are used as input.
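The encode-then-regress pipeline described above can be sketched without a deep learning framework: a single fixed convolution filter plus average pooling stands in for the trained convolutional encoder, producing a flat feature vector that any downstream regressor could consume. All array shapes and filter values are illustrative, not the paper's architecture.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D convolution (correlation) of image x with kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def encode(spectrum, kernel, pool=4):
    """Convolve, average-pool, and flatten into a feature vector."""
    fmap = conv2d_valid(spectrum, kernel)
    h, w = fmap.shape
    fmap = fmap[: h - h % pool, : w - w % pool]   # trim to a pooling multiple
    pooled = fmap.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return pooled.ravel()

# Synthetic stand-in for an OES spectrum: time x wavelength grid.
rng = np.random.default_rng(3)
spectrum = rng.normal(size=(64, 64))
kernel = rng.normal(size=(3, 3))
features = encode(spectrum, kernel)
print(features.shape)
```

In the actual approach the kernel weights are learned by the autoencoder's reconstruction objective rather than fixed, which is what makes the extracted features adapt to the data instead of being hand-engineered.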
A Deep Learning Approach for Anomaly Detection with Industrial Time Series Data: A Refrigerators Manufacturing Case Study