Advanced Process Monitoring for Industry 4.0
This book reports recent advances in Process Monitoring (PM) to cope with the many challenges raised by the new production systems, sensors and “extreme data” conditions that emerged with Industry 4.0. Concepts such as digital twins and deep learning are brought to the PM arena, pushing forward the capabilities of existing methodologies to handle more complex scenarios. The evolution of classical paradigms such as Latent Variable modeling, Six Sigma and FMEA is also covered. Applications span a wide range of domains such as microelectronics, semiconductors, chemicals, materials and agriculture, as well as the monitoring of rotating equipment, combustion systems and membrane separation processes.
Virtual metrology for semiconductor manufacturing applications
To be competitive in the market, semiconductor manufacturers must be able to reach high production standards at a reasonable cost. For reasons of both cost and execution time, a quality-control strategy involving complete measurement of the product is not feasible; tests are performed on a small sample of the original data. The goal of this thesis is the study and implementation, through non-linear modeling methodologies, of a Virtual Metrology algorithm to support process control in semiconductor manufacturing. Indeed, estimates of the measurements that are not actually performed (virtual measurements) can represent a first step toward building ever more refined and efficient process-control and quality-control systems. From an operational point of view, the objective is to provide the most accurate possible estimate of the critical dimensions upstream of the etching phase, starting from the available data (including measurements from the lithography and deposition phases and process data, where available). The state-of-the-art statistical techniques analyzed in this work include:
- multilayer feedforward networks (see the sketch after this abstract);
Comparison and validation of the examined algorithms were made possible by data-sets provided by a semiconductor manufacturer.
In conclusion, this thesis represents a first step toward the creation of an advanced and flexible process-control and quality-control system whose ultimate aim is to improve production quality.
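As a concrete illustration of the multilayer feedforward approach listed above, the following is a minimal sketch of a virtual-metrology regressor. It assumes a generic tabular setup: synthetic stand-in data, an sklearn MLPRegressor, and an arbitrary train/test split, none of which are taken from the thesis itself.

```python
# A minimal VM sketch: predict a critical dimension from process/metrology data.
# The synthetic data stands in for lithography/deposition measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                          # stand-in process + metrology data
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500)   # stand-in critical dimension

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
vm_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
vm_model.fit(X_tr, y_tr)
print("R^2 on held-out wafers:", vm_model.score(X_te, y_te))
```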
Integrated performance prediction and quality control in manufacturing systems
Predicting the condition of a degrading dynamic system is critical for implementing successful control and designing the optimal operation and maintenance strategies throughout the lifetime of the system. In many situations, especially in manufacturing, systems experience multiple degradation cycles, failures, and maintenance events throughout their lifetimes. In such cases, historical records of sensor readings observed during the lifecycle of a machine can yield vital information about degradation patterns of the monitored machine, which can be used to formulate dynamic models for predicting its future performance.
Besides the ability to predict equipment failures, another major component of cost-effective and high-throughput manufacturing is tight control of product quality. Quality control is assured by taking periodic measurements of the products at various stages of production. Nevertheless, quality measurements of the product require time and are often executed on costly measurement equipment, which increases the cost of manufacturing and slows down production. One possible way to remedy this situation is to utilize the inherent link between the manufacturing equipment condition, mirrored in the readings of sensors mounted on that machine, and the quality of products coming out of it. The concept of Virtual Metrology (VM) addresses the quality control problem by using data-driven models that relate the product quality to the equipment sensors, enabling continuous estimation of the quality characteristics of the product, even when physical measurements of product quality are not available. VM can thus bring significant production benefits, including improved process control, reduced quality losses and higher productivity.
In this dissertation, new methods are formulated that combine long-term performance prediction of sensory signatures from a degrading manufacturing machine with VM quality estimation, which enables integration of predictive condition monitoring (prediction of sensory signatures) with predictive manufacturing process control (predictive VM model). The recently developed algorithm for prediction of sensory signatures is capable of predicting the system condition by comparing the similarity of the most recent performance signatures with the known degradation patterns available in the historical records. The method accomplishes the prediction of non-Gaussian and non-stationary time-series of relevant performance signatures with analytical tractability, which enables calculation of predicted signature distributions at significantly greater speeds than what can be found in the literature. VM quality estimation is implemented using the recently introduced growing structure multiple model system (GSMMS) paradigm, based on the use of local linear dynamic models. The concept of local models enables representation of complex, non-linear dependencies with non-Gaussian and non-stationary noise characteristics, using a locally tractable model representation. Localized modeling enables a VM that can detect situations when the VM model is not adequate and needs to be improved, which is one of the main challenges in VM. Finally, uncertainty propagation with Monte Carlo simulation is pursued in order to propagate the predicted distributions of equipment signatures through the VM model, enabling prediction of the distributions of the quality variables using the readily available sensor readings streaming from the monitored manufacturing machine.
The newly developed methods are applied to long-term production data coming from an industrial plasma-enhanced chemical vapor deposition (PECVD) tool operating in a major semiconductor manufacturing fab.
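To illustrate the final uncertainty-propagation step, below is a minimal Monte Carlo sketch: samples drawn from a predicted distribution of an equipment signature are pushed through a fitted VM model to obtain a distribution over a quality variable. The Gaussian signature distribution and the linear stand-in VM model are assumptions for illustration only; the dissertation uses non-Gaussian predicted distributions and a GSMMS-based VM model.

```python
# Monte Carlo propagation of a predicted signature distribution through a VM model.
import numpy as np

rng = np.random.default_rng(0)

def vm_model(signature):
    """Stand-in VM model mapping a two-dimensional sensor signature to a quality estimate."""
    return 2.0 * signature[..., 0] - 0.5 * signature[..., 1] + 1.0

# Predicted signature distribution (assumed Gaussian here for simplicity).
mean = np.array([1.0, 0.3])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

samples = rng.multivariate_normal(mean, cov, size=10_000)  # predicted signatures
quality = vm_model(samples)                                # induced quality distribution
print(f"predicted quality: {quality.mean():.3f} +/- {quality.std():.3f}")
```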
A Review and Analysis of Automatic Optical Inspection and Quality Monitoring Methods in Electronics Industry
The electronics industry is one of the fastest evolving, most innovative, and most competitive industries. In order to meet the high consumption demands on electronic components, quality standards of the products must be well maintained. Automatic optical inspection (AOI) is one of the non-destructive techniques used in quality inspection of various products. This technique is considered robust and can replace human inspectors, who are subject to dullness and fatigue when performing inspection tasks. A fully automated optical inspection system consists of hardware and software setups. The hardware setup includes the image sensor and illumination settings and is responsible for acquiring the digital image, while the software part implements an inspection algorithm to extract the features of the acquired images and classify them as defective or non-defective based on the user requirements. A sorting mechanism can then be used to separate the defective products from the good ones. This article provides a comprehensive review of the various AOI systems used in the electronics, micro-electronics, and opto-electronics industries. In this review, the defects of the commonly inspected electronic components, such as semiconductor wafers, flat panel displays, printed circuit boards and light emitting diodes, are first explained. Hardware setups used in acquiring images are then discussed in terms of camera and lighting source selection and configuration. The inspection algorithms used for detecting the defects in the electronic components are discussed in terms of the preprocessing, feature extraction and classification tools used for this purpose. Recent articles that used deep learning algorithms are also reviewed. The article concludes by highlighting current trends and possible future research directions.
Framework of the IQONIC Project; European Union’s Horizon 2020 Research and Innovation Program.
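As an illustration of the acquire/preprocess/extract/classify pipeline described above, here is a minimal reference-comparison AOI sketch. The synthetic images, the Gaussian preprocessing, and the fixed residual and area thresholds are all illustrative assumptions, not a specific method from the review.

```python
# Minimal AOI sketch: compare a test image against a defect-free reference.
import numpy as np
import cv2

reference = np.full((128, 128), 200, np.uint8)   # stand-in "golden" image
test = reference.copy()
cv2.circle(test, (64, 64), 5, 30, -1)            # injected synthetic defect

# Preprocessing: smooth both images to suppress sensor noise.
ref_f = cv2.GaussianBlur(reference, (5, 5), 0)
test_f = cv2.GaussianBlur(test, (5, 5), 0)

# Feature extraction: absolute residual against the reference, then threshold.
residual = cv2.absdiff(ref_f, test_f)
_, mask = cv2.threshold(residual, 40, 255, cv2.THRESH_BINARY)

# Classification: defective if any connected defect region is large enough.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defective = any(cv2.contourArea(c) > 10 for c in contours)
print("defective" if defective else "non-defective")
```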
30th International Conference on Information Modelling and Knowledge Bases
Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems of information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics and management science are relevant areas, too. The conference will feature three categories of presentations: full papers, short papers and position papers.
Data Analytics in Test: Recognizing and Reducing Subjectivity
Applying data analytics in production test has become a widely adopted industrial practice in recent years. As the complexity of semiconductor devices scales and the amounts of available test data continue to grow, the research direction in this field is forced to shift away from solving specific problems with ad hoc approaches and demands a deeper understanding of the fundamental issues. Two data-driven test applications where this shift is apparent are production yield optimization and defect screening, where the respective underlying data analytics approaches are correlation analysis and outlier analysis. A core issue present in these two approaches stems from the subjectivity that is inherent to data analytics. This dissertation delves into how subjectivity manifests itself and what can be done to reduce it with respect to the two test applications.
Outlier analysis is an approach used for identifying anomalies. The main goal of outlier analysis in test is to capture statistically outlying parts, with the hope that their abnormal behavior is attributable to some defectivity. During creation of an outlier model, the decisions about outlying behavior in the existing data are made by utilizing known failures and the test engineer's best judgment. In practice, outlier screening methods are simply used for transforming data into an outlier score space. Even if outlier analysis techniques are able to successfully classify a dataset into inliers and outliers, outlier models require thresholds to be decided. A concept called Consistency is introduced to provide an objective, data-driven way to evaluate outlier models by utilizing all available data. The key observation underlying this concept is that outlier analysis should be immune to noise introduced by sources of systematic variation.
Correlation analysis is a process comprising a search for related variables. The application of production yield optimization involves searching for correlation between the yield and various controllable parameters. The goal of this process is to uncover parameters that, when adjusted, can result in yield improvement. This analytics process is subjective to the perspective of the analyst, and the quality of the result is highly dependent on the analyst's previous experiences. In order to reduce the subjectivity in this application, a process mining methodology is introduced to learn from the experiences of analysts. The key advantage of this methodology is that in addition to having the capability to record and reproduce these analyses, it can also generalize to analytics processes not contained in the learned experiences.
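To make the role of the subjective threshold concrete, the following is a minimal sketch of outlier screening on parametric test data. The robust z-score transform and the threshold value of 6 are illustrative assumptions; evaluating such choices objectively is precisely what the dissertation's Consistency concept targets.

```python
# Outlier screening sketch: map measurements to an outlier-score space, then threshold.
import numpy as np

rng = np.random.default_rng(0)
measurements = rng.normal(1.0, 0.05, size=1000)   # stand-in parametric test data
measurements[::200] += 0.4                        # a few outlying parts

# Robust z-scores via median and median absolute deviation (MAD).
median = np.median(measurements)
mad = np.median(np.abs(measurements - median))
scores = 0.6745 * (measurements - median) / mad

threshold = 6.0                                   # subjective: the test engineer's call
outliers = np.flatnonzero(np.abs(scores) > threshold)
print(f"{outliers.size} parts flagged for defect screening")
```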
Characterising Peritumoural Progression of Glioblastoma using Multimodal MRI
Glioblastoma is a highly malignant tumour which mostly recurs locally around the resected contrast enhancement. However, it is difficult to identify tumour invasiveness pre-surgically, especially in non-enhancing areas. Thus, the aim of this thesis was to utilise multimodal MR techniques to identify and characterise the peritumoural progression zone that eventually leads to tumour progression.
Patients with newly diagnosed cerebral glioblastoma were included consecutively from our cohort between 2010 and 2014. The presurgical MRI sequences included volumetric T1-weighted imaging with contrast, FLAIR, T2-weighted imaging, diffusion-weighted imaging, diffusion tensor imaging and perfusion MR imaging. Postsurgical and follow-up MRI included structural and ADC images.
Image deformation, caused by the nature of the disease and the surgical procedure, renders routine coregistration methods inadequate for comparing MRIs between different time points. Therefore, a two-stage non-linear semi-automatic coregistration method was developed by modifying the linear FLIRT and non-linear FNIRT functions in FMRIB’s Software Library (FSL).
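For readers unfamiliar with the FSL tools mentioned, below is a minimal sketch of a basic two-stage FLIRT-then-FNIRT chain driven from Python. The file names are placeholders, and the thesis's semi-automatic, disease-specific modifications are not reproduced here.

```python
# Basic FLIRT (affine) + FNIRT (non-linear) registration chain via the FSL CLI.
import subprocess

# Stage 1: linear (affine) registration with FLIRT.
subprocess.run(["flirt", "-in", "followup_T1.nii.gz", "-ref", "presurgical_T1.nii.gz",
                "-out", "followup_affine.nii.gz", "-omat", "affine.mat"], check=True)

# Stage 2: non-linear refinement with FNIRT, initialised by the affine result.
subprocess.run(["fnirt", "--in=followup_T1.nii.gz", "--ref=presurgical_T1.nii.gz",
                "--aff=affine.mat", "--iout=followup_nonlin.nii.gz"], check=True)
```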
Utilising the above-mentioned coregistration method, a volumetric study was conducted to analyse the extent of resection based on different MR techniques, including T1-weighted imaging with contrast, FLAIR, and the DTI measures of isotropy (DTI-p) and anisotropy (DTI-q). The results showed that patients can have a better clinical outcome with a larger resection of the abnormal DTI-q areas.
Further study of the imaging characteristics of abnormal peritumoural DTI-q areas, using MRS and DSC-MRI, showed a higher Choline/NAA ratio (p = 0.035), and especially higher Choline (p = 0.022), in these areas when compared to normal DTI-q areas. This was indicative of tumour activity in the peritumoural abnormal DTI-q areas.
The peritumoural progression areas were found to have distinct imaging characteristics. In these progression areas, compared to non-progression areas within a 10 mm border around the contrast-enhancing lesion, there was higher signal intensity in FLAIR (p = 0.02) and T1C (p < 0.001), and lower intensity in ADC (p = 0.029) and DTI-p (p < 0.001). Applying radiomics features further showed that 35 first-order features and 77 second-order features were significantly different between progression and non-progression areas. Using a supervised convolutional neural network, an overall accuracy of 92.4% was achieved in the training set (n = 37) and 78.5% in the validation set (n = 14).
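As a rough illustration of the supervised convolutional neural network mentioned above, here is a minimal PyTorch sketch that classifies multimodal MRI patches as progression versus non-progression. The architecture, the 32x32 patch size, and the assumption of four input modalities are illustrative choices; the thesis's actual network is not described in this abstract.

```python
# Minimal patch-classification CNN sketch (progression vs. non-progression).
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_modalities=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_modalities, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A 32x32 patch becomes 8x8 after two 2x2 poolings.
        self.classifier = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
dummy = torch.randn(8, 4, 32, 32)   # batch of 8 four-modality 32x32 patches
logits = model(dummy)               # (8, 2): progression vs. non-progression scores
```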
In summary, multimodal MR imaging, particularly diffusion tensor imaging, can demonstrate distinct characteristics in areas of potential progression on preoperative MRI, which can be considered potential targets for treatment. Further application of radiomics and machine learning can be potentially useful for identifying the tumour's invasive margin before surgery.
Chang Gung Medical Foundation.
Machine Learning for Cyber Physical Systems
These open access proceedings present new approaches to Machine Learning for Cyber Physical Systems, along with experiences and visions. They contain selected papers from the fifth international conference ML4CPS – Machine Learning for Cyber Physical Systems, which was held in Berlin, March 12-13, 2020. Cyber Physical Systems are characterized by their ability to adapt and to learn: they analyze their environment and, based on observations, learn patterns, correlations and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing and diagnosis. Machine Learning is the key technology for these developments.
Development of Machine Learning based approach to predict fuel consumption and maintenance cost of Heavy-Duty Vehicles using diesel and alternative fuels
The transportation sector is one of the major contributors of human-made greenhouse gases (GHG), namely carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O), with heavy-duty vehicles (HDV) contributing about 27% of the sector's share. In addition to the rapid increase in global temperature, airborne pollutants from diesel vehicles also present a risk to human health. Even a small improvement that could drive energy savings in the century-old, mature diesel technology could yield a significant impact on minimizing greenhouse gas emissions. With the increasing focus on reducing emissions and operating costs, there is a need for efficient and effective methods to predict fuel consumption, maintenance costs, and total cost of ownership for heavy-duty vehicles. Every improvement achieved in this direction directly contributes to reducing the total cost of ownership for a fleet owner, thereby bringing economic prosperity and reducing oil imports for the economy. Motivated by these crucial goals, the present research considers integrating data-driven techniques using machine learning algorithms on historical data collected from medium- and heavy-duty vehicles.
The primary motivation for this research is to address the challenges faced by the medium- and heavy-duty transportation industry in reducing emissions and operating costs. The development of a machine learning-based approach can provide a more accurate and reliable prediction of fuel consumption and maintenance costs for medium- and heavy-duty vehicles. This, in turn, can help fleet owners and operators make informed decisions related to fuel type, route planning, and vehicle maintenance, leading to reduced emissions and lower operating costs. Artificial Intelligence (AI) in the automotive industry has witnessed massive growth in the last few years. Heavy-duty transportation research and commercial fleets are adopting machine learning (ML) techniques for applications such as autonomous driving, fuel economy/emissions, predictive maintenance, etc. However, to perform well, modern AI methods require a large amount of high-quality, diverse, and well-balanced data, something which is still not widely available in the automotive industry, especially in the segment of medium- and heavy-duty trucks.
The research methodology involves the collection of data at the West Virginia University (WVU) Center for Alternative Fuels, Engines, and Emissions (CAFEE) lab in collaboration with fleet management companies operating medium- and heavy-duty vehicles on diesel and alternative fuels, including compressed natural gas, liquefied propane gas, hydrogen fuel cells, and electric vehicles. The data collected is used to develop machine learning models that can accurately predict fuel consumption and maintenance costs based on various parameters such as vehicle weight, speed, route, fuel type, and engine type. The expected outcomes of this research include 1) the development of a neural network model that can accurately predict the fuel consumed by a vehicle per trip given parameters such as vehicle speed, engine speed, and engine load, and 2) the development of machine learning models for estimating the average cost-per-mile based on the historical maintenance data of goods movement trucks, delivery trucks, school buses, transit buses, refuse trucks, and vocational trucks using fuels such as diesel, natural gas, and propane.
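To make outcome 1) concrete, below is a minimal sketch of a neural network regressor using the three named inputs (vehicle speed, engine speed, engine load). The synthetic data and the sklearn MLPRegressor configuration are illustrative assumptions standing in for the WVU CAFEE trip records.

```python
# Minimal per-trip fuel consumption regressor sketch.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
vehicle_speed = rng.uniform(10, 70, n)     # mph
engine_speed = rng.uniform(800, 2200, n)   # rpm
engine_load = rng.uniform(0.1, 1.0, n)     # fraction of rated torque
X = np.column_stack([vehicle_speed, engine_speed, engine_load])
# Synthetic target: higher load and rpm burn more fuel per mile of trip.
fuel_per_trip = 0.02 * engine_speed * engine_load / vehicle_speed \
                + rng.normal(0, 0.05, n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0),
)
model.fit(X, fuel_per_trip)
print("fuel estimate for one trip:", model.predict([[55.0, 1500.0, 0.6]])[0])
```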
Due to large variations in maintenance data for vehicles performing various activities and using different fuel types, regular machine learning or ensemble models do not generalize well. Hence, a mixed-effect random forest (MERF) is developed to capture the fixed and random effects that occur due to the varying duty-cycles of vocational heavy-duty trucks that perform different tasks. The developed model helps in predicting the average maintenance cost given the vocation, fuel type, and region of operation, making it easy for fleet companies to make procurement decisions based on their requirements and total cost of ownership. Both models can provide insights into the impact of various parameters and route planning on the total cost of ownership, which is affected by the fuel cost and the maintenance and repair costs. In conclusion, the development of a machine learning-based approach can provide a reliable and efficient solution to predict fuel consumption and maintenance costs impacting the total cost of ownership for heavy-duty vehicles. This, in turn, can help the transportation industry reduce emissions and operating costs, contributing to a more sustainable and efficient transportation system. These models can be optimized with more training data and deployed in a real-time environment such as a cloud service or an onboard vehicle system, per the requirements of companies.
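The MERF idea can be sketched as an alternating scheme: fit a forest to the target with the current group effects removed, then re-estimate a random effect per group from the forest's residuals. The simplified loop below illustrates that structure under the assumption of plain per-group intercepts; a full MERF also estimates variance components, which is omitted here, and the synthetic groups stand in for vocation/fuel-type categories.

```python
# Simplified MERF-style loop: random forest fixed effects + per-group intercepts.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, n_groups = 600, 6
groups = rng.integers(0, n_groups, size=n)        # stand-in vocation/fuel-type group
X = rng.normal(size=(n, 5))                       # stand-in duty-cycle features
b_true = rng.normal(0, 0.5, size=n_groups)        # true random intercepts
y = np.sin(X[:, 0]) + X[:, 1] + b_true[groups] + 0.1 * rng.normal(size=n)

b_hat = np.zeros(n_groups)
forest = RandomForestRegressor(n_estimators=200, random_state=0)
for _ in range(5):                                # alternate fixed/random effects
    forest.fit(X, y - b_hat[groups])              # fixed effects on de-grouped target
    resid = y - forest.predict(X)
    for g in range(n_groups):                     # re-estimate intercept per group
        b_hat[g] = resid[groups == g].mean()

pred = forest.predict(X) + b_hat[groups]          # group-aware cost prediction
print("estimated per-group intercepts:", np.round(b_hat, 2))
```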
Semantic Exploration of Text Documents with Multi-Faceted Metadata Employing Word Embeddings: The Patent Landscaping Use Case
The number of publications documenting scientific progress is growing continuously. This requires the development of technological tools for efficient analysis of these works. Such documents are characterized not only by their textual content but also by a set of metadata attributes of various kinds, including relationships between the documents. This complexity makes the development of a visualization approach that supports the exploration of written works a necessary and challenging task. Patents exemplify the described problem, because they are examined in large quantities by companies that want to gain competitive advantages or steer their own research and development.
An approach for exploratory visualization is proposed that is based on metadata and semantic embeddings of patent content. Word embeddings from a pre-trained Word2vec model are used to determine similarities between documents. In addition, hierarchical clustering methods help to offer several levels of semantic detail through extracted relevant keywords. At present, the visualization approach presented here appears to be the first that combines semantic embeddings with hierarchical clustering while supporting diverse interaction types based on metadata attributes.
The presented approach makes use of user-interaction techniques such as brushing and linking, focus plus context, details-on-demand, and semantic zoom. This makes it possible to discover relationships that arise from the interplay of 1) distributions of metadata values and 2) positions in the semantic space.
The visualization concept was shaped by user interviews and evaluated through a think-aloud study with patent experts. During the evaluation, the presented approach was compared with a baseline approach based on TF-IDF vectors. The usability study found that the visualization metaphors and interaction techniques were appropriately chosen. It also showed that the user interface played a considerably larger role in the participants' impressions than the way the patents were placed and clustered. In fact, both approaches produced very similar extracted cluster keywords. Nevertheless, with the semantic approach the clusters were placed more intuitively and separated more clearly.
The proposed visualization layout, as well as the interaction techniques and semantic methods, can also be extended to other types of written works, e.g. scientific publications. Other embedding methods such as Paragraph2vec [61] or BERT [32] could additionally be used to exploit contextual dependencies in the text beyond the word level.
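As a minimal illustration of the embedding-plus-hierarchical-clustering pipeline described above, the sketch below averages Word2vec word vectors into document vectors and builds a cluster hierarchy over them. The toy corpus and the self-trained gensim model are stand-ins for the pre-trained model and patent corpus used in the thesis; keyword extraction is omitted.

```python
# Document vectors from word embeddings + hierarchical clustering.
import numpy as np
from gensim.models import Word2Vec
from scipy.cluster.hierarchy import linkage, fcluster

docs = [
    "laser etching of semiconductor wafers".split(),
    "plasma etching process for wafers".split(),
    "neural network for image classification".split(),
    "deep learning model for images".split(),
]
w2v = Word2Vec(docs, vector_size=32, min_count=1, seed=0)

# Document vector = mean of its word vectors (a common simple aggregation).
doc_vecs = np.array([np.mean([w2v.wv[w] for w in d], axis=0) for d in docs])

# The hierarchy provides the multiple levels of semantic detail.
Z = linkage(doc_vecs, method="average", metric="cosine")
print(fcluster(Z, t=2, criterion="maxclust"))   # coarse two-cluster view
```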
- …