2,269 research outputs found

    Development of a fully-convolutional network architecture for the detection of defective LED chips in photoluminescence images

    Nowadays, light-emitting diodes (LEDs) can be found in a large variety of applications, from standard LEDs in domestic lighting solutions to advanced chip designs in automobiles, smart watches and video walls. The advances in chip design also affect the test processes, where the execution of certain contact measurements is complicated by ever-decreasing chip dimensions or even rendered impossible by the chip design. For instance, wafer probing determines the electrical and optical properties of all LED chips on a wafer by contacting each and every chip with a prober needle. Chip designs without a contact pad on the surface, however, elude wafer probing, and while their electrical and optical properties can be determined by sample measurements, defective LED chips are distributed randomly over the wafer. Here, advanced data analysis methods provide a new approach to gathering defect information from already available non-contact measurements. Photoluminescence measurements, for example, record a brightness image of an LED wafer in which conspicuous brightness values indicate defective chips. To extract this defect information from photoluminescence images, a computer-vision algorithm is required that transforms photoluminescence images into defect maps. In other words, each and every pixel of a photoluminescence image must be classified into a class category via semantic segmentation, for which so-called fully-convolutional-network algorithms represent the state-of-the-art method. The task, however, poses several challenges: on the one hand, each pixel in a photoluminescence image represents an LED chip, so pixel-fine output resolution is required. On the other hand, photoluminescence images show strongly varying brightness values from wafer to wafer in addition to local areas of differing brightness. Additionally, clusters of defective chips assume various shapes, sizes and brightness gradients, so the algorithm must reliably recognise objects at multiple scales. Finally, not all salient brightness values correspond to defective LED chips, requiring the algorithm to distinguish salient brightness values caused by measurement artefacts, non-defect structures and defects, respectively. In this dissertation, a novel fully-convolutional-network architecture was developed that allows the accurate segmentation of defective LED chips in highly variable photoluminescence wafer images. For this purpose, the basic fully-convolutional-network architecture was modified with regard to the given application and advanced architectural concepts were incorporated so as to enable a pixel-fine output resolution and a reliable segmentation of defect structures at multiple scales. Altogether, the developed dense ASPP Vaughan architecture achieved a pixel accuracy of 97.5 %, a mean pixel accuracy of 96.2 % and a defect-class accuracy of 92.0 %, trained on a dataset of 136 input-label pairs, thereby showing that fully-convolutional-network algorithms can be a valuable contribution to data analysis in industrial manufacturing.
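
    The multi-scale requirement described above is typically met with dilated (atrous) convolutions, the building block behind ASPP-style architectures. The following is a minimal, hypothetical sketch of that idea in PyTorch, not the dissertation's dense ASPP Vaughan architecture; the layer widths, dilation rates and three-class setup (defect, non-defect structure/artefact, background) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyASPPSegmenter(nn.Module):
    """Illustrative FCN with parallel dilated convolutions (ASPP-style)."""

    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        # Stride-1 stem keeps the pixel-fine resolution the task requires
        # (one pixel per LED chip).
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Parallel branches with growing dilation rates enlarge the receptive
        # field so defect clusters of different sizes are all captured.
        self.branches = nn.ModuleList(
            nn.Conv2d(32, 32, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4, 8)
        )
        # 1x1 convolution fuses the multi-scale features into per-pixel scores.
        self.classifier = nn.Conv2d(32 * 4, num_classes, kernel_size=1)

    def forward(self, x):
        f = self.stem(x)
        multi_scale = torch.cat([torch.relu(b(f)) for b in self.branches], dim=1)
        return self.classifier(multi_scale)  # (N, num_classes, H, W) logits

# One grayscale photoluminescence image in, per-pixel class logits out.
logits = TinyASPPSegmenter()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 3, 64, 64])
```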

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent and efficient implementations.

    Methods for the acquisition and analysis of volume electron microscopy data


    Integration of Artificial Neural Networks and Simulation Modeling in a Decision Support System

    A simulation-based decision support system is developed for AT&T Microelectronics in Orlando. This system uses simulation modeling to capture the complex nature of semiconductor test operations. Simulation, however, is not an optimization tool by itself: numerous executions of the simulation model must generally be performed to home in on a set of proper decision parameters. To alleviate this shortcoming, artificial neural networks are used in conjunction with simulation modeling to aid management in the decision-making process. The integration of simulation and neural networks in a comprehensive decision support system, in effect, learns the reverse of the simulation process. That is, given a set of goals defined for performance measures, the decision support system suggests proper values for the decision parameters to achieve those goals.
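
    A hedged sketch of this "learn the reverse of the simulation" idea follows: a toy stand-in simulator maps decision parameters to performance measures, and a neural network is trained on the reversed (measure, parameter) pairs so that, given performance goals, it suggests decision parameters. The simulator, parameter names and shapes are illustrative assumptions, not the AT&T system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(params):
    """Toy stand-in for the test-floor simulation model:
    (num_testers, batch_size) -> (throughput, mean cycle time)."""
    testers, batch = params[:, 0], params[:, 1]
    throughput = testers * 90.0 / (1.0 + 0.02 * batch)
    cycle_time = batch / testers + 2.0
    return np.column_stack([throughput, cycle_time])

# Run the "simulation" over many candidate decision-parameter settings.
params = np.column_stack([rng.uniform(1, 10, 2000),   # number of testers
                          rng.uniform(5, 50, 2000)])  # lot batch size
measures = simulate(params)

# Learn the reverse mapping: performance measures -> decision parameters.
inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                           random_state=0).fit(measures, params)

# Given goals for the performance measures, suggest decision parameters ...
goal = np.array([[400.0, 6.0]])  # desired (throughput, cycle time)
suggestion = inverse_net.predict(goal)
# ... and verify the suggestion with a single confirming simulation run.
print(suggestion, simulate(suggestion))
```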

    A Convolutional Autoencoder Approach for Feature Extraction in Virtual Metrology

    Exploiting the huge amount of data collected by industries is definitely one of the main challenges of the so-called Big Data era. In this sense, Machine Learning has gained growing attention in the scientific community, as it allows valuable information to be extracted by means of statistical predictive models trained on historical process data. In Semiconductor Manufacturing, one of the most extensively employed data-driven applications is Virtual Metrology, where a costly or unmeasurable variable is estimated by means of cheap and easy-to-obtain measures that are already available in the system. Often, these measures are multi-dimensional, so traditional Machine Learning algorithms cannot handle them directly. Instead, they require feature extraction, a preliminary step in which relevant information is extracted from raw data and converted into a design matrix. Features are often hand-engineered and based on specific domain knowledge; moreover, they may be difficult to scale and prone to information loss, affecting the effectiveness and maintainability of machine-learning procedures. In this paper, we present a Deep Learning method for semi-supervised feature extraction based on Convolutional Autoencoders that is able to overcome the aforementioned problems. The proposed method is tested on a real dataset for Etch rate estimation. Optical Emission Spectrometry data, which exhibit a complex bi-dimensional time and wavelength evolution, are used as input.
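
    A minimal sketch of the semi-supervised recipe the abstract describes, under assumed input shapes: train a convolutional autoencoder to reconstruct OES-like time-wavelength maps, then reuse the encoder's bottleneck activations as the design matrix for a downstream virtual-metrology regressor. The architecture below is illustrative, not the paper's exact network.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Illustrative CAE; input assumed as a 1 x 64 x 64 time-wavelength map."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # -> 8 x 32 x 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # -> 16 x 16 x 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),    # -> 8 x 32 x 32
            nn.ConvTranspose2d(8, 1, 2, stride=2),                # -> 1 x 64 x 64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
spectra = torch.randn(32, 1, 64, 64)  # placeholder batch of OES maps

# Unsupervised phase: learn to reconstruct the raw spectra.
for _ in range(5):
    recon, _ = model(spectra)
    loss = nn.functional.mse_loss(recon, spectra)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The bottleneck then serves as an automatically extracted design matrix,
# on which a supervised etch-rate regressor would be fit.
with torch.no_grad():
    _, z = model(spectra)
features = z.flatten(start_dim=1)
print(features.shape)  # torch.Size([32, 4096])
```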

    A Machine Learning-based Test Program Quality Tool for Automotive Microcontrollers

    At Infineon, production testing is an important step during which large amounts of data are stored. The purpose of this thesis is to use these data to build a quality-gate tool based on machine-learning techniques in order to improve testing quality. Testing in the production flow involves two important sequential phases: front-end (FE) testing and back-end (BE) testing. In this thesis, we study the possibility of predicting the final BE label of a chip from its FE test results.
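
    The core prediction task can be illustrated with a short, hypothetical sketch on synthetic data: treat each chip's FE parametric test results as features and learn to predict its final BE pass/fail label. The feature count, failure mechanism and classifier choice are assumptions, not Infineon's actual data or tool.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_chips, n_fe_tests = 5000, 20
fe_tests = rng.normal(size=(n_chips, n_fe_tests))  # FE measurements per chip
# Synthetic ground truth: BE failure driven by a few FE parameters plus noise.
be_fail = (fe_tests[:, 0] + 0.5 * fe_tests[:, 3]
           + rng.normal(scale=0.5, size=n_chips)) > 1.5

X_train, X_test, y_train, y_test = train_test_split(
    fe_tests, be_fail, test_size=0.2, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# A quality gate would flag chips whose predicted BE-fail risk is high.
print(classification_report(y_test, clf.predict(X_test)))
```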

    Training Set Design for Test Removal Classification in IC Test

    This thesis reports the performance of a simple classifier as a function of its training data set. The classifier is used to remove analog tests and is named the Test Removal Classifier (TRC). The thesis proposes seven different training data set designs that vary by the number of wafers in the data set, the source of the wafers and the replacement scheme of the wafers. The training data set size ranges from a single wafer to a maximum of five wafers. Three of the training data sets include wafers from the Lot Under Test (LUT). The training wafers in the data set are either fixed across all lots, partially replaced by wafers from the new LUT or fully replaced by wafers from the new LUT. The TRC's training is based on rank correlation and selects a subset of tests that may be bypassed. After training, the TRC identifies the dies that bypass the selected tests. The TRC's performance is measured by the reduction in over-testing and the number of test escapes after testing is completed. The comparison of the different training data sets on the TRC's performance is evaluated using production data for a mixed-signal integrated circuit. The results show that the TRC's performance is controlled by a single parameter: the rank correlation threshold.
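
    A simplified, hypothetical sketch of a rank-correlation-based removal rule follows: using per-die measurements from training wafers, a candidate analog test is marked for bypass if its results are strongly rank-correlated with a retained test. The data and the single threshold value are illustrative; this is not the thesis's exact TRC.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_dies, n_tests = 400, 6
tests = rng.normal(size=(n_dies, n_tests))  # per-die results on training wafers
# Make test 4 nearly redundant with test 1, as a stand-in for correlated tests.
tests[:, 4] = 0.9 * tests[:, 1] + 0.1 * rng.normal(size=n_dies)

RANK_CORR_THRESHOLD = 0.8  # the single controlling parameter

bypassed = set()
for j in range(n_tests):
    for k in range(j):
        if k in bypassed:
            continue  # only correlate against tests that remain in the flow
        rho, _ = spearmanr(tests[:, j], tests[:, k])
        if abs(rho) >= RANK_CORR_THRESHOLD:
            bypassed.add(j)  # test j is predictable from retained test k
            break

print(f"tests selected for bypass: {sorted(bypassed)}")  # expect [4]
```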

    Handling Discontinuous Effects in Modeling Spatial Correlation of Wafer-level Analog/RF Tests

    In an effort to reduce the cost of specification testing in analog/RF circuits, spatial correlation modeling of wafer-level measurements has recently attracted increased attention. Existing approaches for capturing and leveraging such correlation, however, rely on the assumption that spatial variation is smooth and continuous. This, in turn, limits the effectiveness of these methods on actual production data, which often exhibit localized, discontinuous spatial effects. In this work, we propose a novel approach which enables spatial correlation modeling of wafer-level analog/RF tests to handle such effects and, thereby, to drastically reduce prediction error for measurements exhibiting discontinuous spatial patterns. The core of the proposed approach is a k-means algorithm which partitions a wafer into the k clusters caused by discontinuous effects. Individual correlation models are then constructed within each cluster, revoking the assumption that spatial patterns should be smooth and continuous across the entire wafer. The effectiveness of the proposed approach is evaluated on industrial probe test data from more than 3,400 wafers, revealing significant error reduction over existing approaches.
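
    A condensed sketch of the two-step approach, under illustrative assumptions: k-means partitions die locations (augmented with the measured value) into k clusters, and a separate spatial model is then fit within each cluster instead of one smooth model across the whole wafer. The per-cluster polynomial fit below stands in for the paper's correlation model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, size=(2000, 2))  # normalized die coordinates
# Synthetic wafer map: smooth trend plus a discontinuous shift in one corner.
value = 0.3 * xy[:, 0] + 0.2 * xy[:, 1] ** 2 + rng.normal(scale=0.02, size=2000)
value[(xy[:, 0] > 0.5) & (xy[:, 1] > 0.5)] += 1.0

# Step 1: k-means partitions the wafer into k clusters induced by the
# discontinuity (clustering on coordinates plus measured value).
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    np.column_stack([xy, value]))

# Step 2: fit an individual spatial model inside each cluster, so smoothness
# is only assumed locally rather than across the entire wafer.
models = {}
for c in range(k):
    mask = labels == c
    models[c] = make_pipeline(PolynomialFeatures(2), LinearRegression())
    models[c].fit(xy[mask], value[mask])

pred = np.empty_like(value)
for c, m in models.items():
    pred[labels == c] = m.predict(xy[labels == c])
print("RMS prediction error:", np.sqrt(np.mean((pred - value) ** 2)))
```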