
    Development of a Fully Convolutional Network Architecture for the Detection of Defective LED Chips in Photoluminescence Images

    Nowadays, light-emitting diodes (LEDs) can be found in a large variety of applications, from standard LEDs in domestic lighting solutions to advanced chip designs in automobiles, smart watches and video walls. The advances in chip design also affect the test processes, where the execution of certain contact measurements is exacerbated by ever-decreasing chip dimensions or even rendered impossible by the chip design. For instance, wafer probing determines the electrical and optical properties of all LED chips on a wafer by contacting each and every chip with a prober needle. Chip designs without a contact pad on the surface, however, elude wafer probing, and while their electrical and optical properties can be determined by sample measurements, defective LED chips are distributed randomly over the wafer. Here, advanced data analysis methods provide a new approach to gathering defect information from already available non-contact measurements. Photoluminescence measurements, for example, record a brightness image of an LED wafer, where conspicuous brightness values indicate defective chips. To extract this defect information from photoluminescence images, a computer-vision algorithm is required that transforms photoluminescence images into defect maps. In other words, each and every pixel of a photoluminescence image must be classified into a class category via semantic segmentation, where so-called fully-convolutional-network algorithms represent the state-of-the-art method. However, the aforementioned task poses several challenges: on the one hand, each pixel in a photoluminescence image represents an LED chip, and thus pixel-fine output resolution is required. On the other hand, photoluminescence images show a variety of brightness values from wafer to wafer in addition to local areas of differing brightness. Additionally, clusters of defective chips assume various shapes, sizes and brightness gradients, and thus the algorithm must reliably recognise objects at multiple scales. Finally, not all salient brightness values correspond to defective LED chips, requiring the algorithm to distinguish between salient brightness values corresponding to measurement artefacts, non-defect structures and defects, respectively. In this dissertation, a novel fully-convolutional-network architecture was developed that allows the accurate segmentation of defective LED chips in highly variable photoluminescence wafer images. For this purpose, the basic fully-convolutional-network architecture was modified with regard to the given application, and advanced architectural concepts were incorporated so as to enable a pixel-fine output resolution and a reliable segmentation of defect structures at multiple scales. Altogether, the developed dense ASPP Vaughan architecture achieved a pixel accuracy of 97.5%, a mean pixel accuracy of 96.2% and a defect-class accuracy of 92.0%, trained on a dataset of 136 input-label pairs, and hereby showed that fully-convolutional-network algorithms can be a valuable contribution to data analysis in industrial manufacturing.
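    To make the segmentation approach concrete, the following minimal PyTorch sketch shows a fully convolutional network with parallel dilated convolutions (ASPP-style) that maps a single-channel photoluminescence image to per-pixel class logits. The layer widths, dilation rates and three-class output are illustrative assumptions only, not the dissertation's dense ASPP Vaughan design.

        import torch
        import torch.nn as nn

        class MiniASPPSegmenter(nn.Module):
            """Minimal FCN with parallel dilated convolutions (ASPP-style).

            Maps a 1-channel photoluminescence image to per-pixel logits for
            n_classes categories (e.g. defect-free, artefact, defect); all
            layer widths and dilation rates are illustrative only.
            """
            def __init__(self, n_classes=3):
                super().__init__()
                self.stem = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
                )
                # Parallel dilated branches see defect structures at multiple
                # scales without reducing spatial resolution (pixel-fine output).
                self.branches = nn.ModuleList(
                    [nn.Conv2d(32, 32, 3, padding=d, dilation=d) for d in (1, 2, 4, 8)]
                )
                self.head = nn.Conv2d(4 * 32, n_classes, 1)  # 1x1 conv fuses the branches

            def forward(self, x):
                x = self.stem(x)
                x = torch.cat([b(x) for b in self.branches], dim=1)
                return self.head(x)  # (N, n_classes, H, W): one logit vector per chip

        logits = MiniASPPSegmenter()(torch.randn(1, 1, 256, 256))
        defect_map = logits.argmax(dim=1)  # per-pixel class prediction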

    AI and ML Accelerator Survey and Trends

    This paper updates the survey of AI accelerators and processors from the past three years. It collects and summarizes the current commercial accelerators that have been publicly announced with peak performance and power consumption numbers. The performance and power values are plotted on a scatter graph, and a number of dimensions and observations from the trends on this plot are again discussed and analyzed. Two new trend plots based on accelerator release dates are included in this year's paper, along with the additional trends of some neuromorphic, photonic, and memristor-based inference accelerators.
    Comment: 10 pages, 4 figures, 2022 IEEE High Performance Extreme Computing (HPEC) Conference. arXiv admin note: substantial text overlap with arXiv:2009.00993, arXiv:2109.0895
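    As a sketch of how such a survey plot can be reproduced, the matplotlib snippet below draws peak performance against peak power on logarithmic axes; the data points are placeholders, not values from the paper.

        import matplotlib.pyplot as plt

        # Placeholder values only -- not figures from the survey.
        power_w = [10, 75, 300, 400]        # peak power consumption (W)
        perf_tops = [4, 100, 312, 1000]     # peak performance (TOPS)
        labels = ["edge A", "edge B", "datacenter C", "datacenter D"]

        fig, ax = plt.subplots()
        ax.scatter(power_w, perf_tops)
        for x, y, name in zip(power_w, perf_tops, labels):
            ax.annotate(name, (x, y))
        ax.set_xscale("log")                # both axes span orders of magnitude,
        ax.set_yscale("log")                # so log-log scaling keeps trends visible
        ax.set_xlabel("Peak power (W)")
        ax.set_ylabel("Peak performance (TOPS)")
        plt.show()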

    Bayesian Learning from Sequential Data using Gaussian Processes with Signature Covariances

    We develop a Bayesian approach to learning from sequential data by using Gaussian processes (GPs) with so-called signature kernels as covariance functions. This allows us to make sequences of different lengths comparable and to rely on strong theoretical results from stochastic analysis. Signatures capture sequential structure with tensors that can scale unfavourably in sequence length and state-space dimension. To deal with this, we introduce a sparse variational approach with inducing tensors. We then combine the resulting GP with LSTMs and GRUs to build larger models that leverage the strengths of each of these approaches, and benchmark the resulting GPs on multivariate time series (TS) classification datasets. Code is available at https://github.com/tgcsaba/GPSig.
    Comment: Near camera-ready version for ICML 2020. Previous title: "Variational Gaussian Processes with Signature Covariances".
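    The signature features behind these kernels are easy to sketch: for a piecewise-linear path, the level-1 and level-2 signature terms reduce to sums and iterated sums of increments. The NumPy sketch below computes a naive truncated signature and the induced inner-product kernel; the paper's sparse variational approach with inducing tensors is not attempted here.

        import numpy as np

        def signature_level2(path):
            """Level-2 truncated signature of a piecewise-linear path.

            path: (T, d) array of observations.
            Returns a flat feature vector [1, S1 (d entries), S2 (d*d entries)].
            """
            inc = np.diff(path, axis=0)              # increments, shape (T-1, d)
            s1 = inc.sum(axis=0)                     # level 1: x_T - x_0
            before = np.cumsum(inc, axis=0) - inc    # increments strictly before step j
            s2 = before.T @ inc + 0.5 * inc.T @ inc  # level 2: iterated sums
            return np.concatenate(([1.0], s1, s2.ravel()))

        def signature_kernel(x, y):
            """Naive signature kernel: inner product of truncated signatures.

            Sequences of different lengths become comparable because both are
            mapped into the same fixed-size signature feature space.
            """
            return signature_level2(x) @ signature_level2(y)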

    An Acoustic Emission and Artificial Intelligence Approach to Structural Health Monitoring for Aerospace Application

    In the area of aerospace and other applications, structural health monitoring (SHM) has been a significant and growing area of research in recent years. Throughout the operational life of aerospace structures, various damage scenarios may manifest, and it is of great concern to the aerospace community to develop methodologies for detecting and assessing these damage scenarios. In this work, fundamental research on the use of the acoustic emission (AE) approach to SHM for fatigue crack growth is presented. In general, the AE approach to SHM and non-destructive evaluation (NDE) involves the sensing of ultrasonic Lamb waves propagating through a structure. Piezoelectric wafer active sensors (PWAS) have proven to be an effective tool for sensing these ultrasonic Lamb waves. The goal of this research was to conduct fundamental investigations into the use of PWAS for AE sensing of fatigued aerospace-grade aluminum 2024-T3 and the use of artificial-intelligence approaches for AE signal classification. The signal classification efforts presented here involve: (i) locating the source of the acoustic emission (source localization); (ii) determining whether a sensed AE signal is crack-related or noise; (iii) determining the crack length from which an AE originates. Ultimately, it is hypothesized and desired that the techniques developed in this work and similar literature may be applied to production aerospace structures to identify and locate damage, optimize aircraft maintenance efforts, and prevent disastrous failure.
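    Task (i) rests on classical time-difference-of-arrival reasoning, which the following sketch illustrates for the simplest one-dimensional case; the sensor spacing and wave speed are illustrative values, and this is not the thesis' AI-based method.

        def locate_source_1d(t1, t2, sensor_gap, wave_speed):
            """Classic 1-D time-difference-of-arrival (TDOA) source localization.

            Two AE sensors sit at positions 0 and sensor_gap on the structure;
            t1 and t2 are arrival times (s) of the same Lamb-wave burst at each.
            Returns the estimated source position between the sensors (m).
            """
            dt = t1 - t2                         # positive when the source is nearer sensor 2
            x = (sensor_gap + wave_speed * dt) / 2.0
            return min(max(x, 0.0), sensor_gap)  # clamp to the sensor span

        # Example: 0.5 m sensor spacing, 5000 m/s plate wave speed (illustrative values)
        print(locate_source_1d(t1=1.00e-4, t2=1.20e-4, sensor_gap=0.5, wave_speed=5000.0))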

    Terahertz Components and Systems : Metamaterials, Measurement Techniques and Applications

    THz technology has long been a promising yet problematic field in science. Up until two decades ago, the lack of fundamental components and materials operating at THz frequencies constrained its use mostly to astronomy, with very little commercial focus. Today, the field has grown remarkably, with both scientific and industrial applications pushing the development of new devices and systems to control THz radiation. Further work is necessary to overcome the region’s fundamental challenges and advance the technology on par with the rest of the electromagnetic spectrum. This thesis aims to address new applications for THz spectroscopy, in both the frequency and time domains, as well as the enhancement of THz device performance. A new design approach for THz resonant metamaterials is proposed that aims to improve their resonant response, irrespective of individual resonator geometries. The new approach can be applied to a wide range of existing structures without altering the individual resonator design, and relies on metamaterial cell symmetry and substrate dimensions. The design approach is used to create split-ring optical modulators, demonstrating that their response is strong enough to be actuated with nothing more than an LED lamp as the light source. The development of a multiple-angle-of-incidence, multi-wavelength THz ellipsometry system is also presented. The utility of the system for material characterisation is demonstrated by extracting complex optical parameters of composite materials, as well as non-homogeneous, anisotropic and highly absorptive materials in the THz range, which can otherwise be problematic to characterise. The use of the ellipsometry system as an imaging tool for visualising and measuring internal material stresses is introduced. Finally, the application of THz-TDS in conjunction with machine learning for waste-oil quality control is investigated, introducing a new potential field of application for THz spectroscopy.
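    As a sketch of the kind of parameter extraction THz-TDS enables, the snippet below implements the standard thick-slab transmission approximation (sample in air, Fabry-Perot echoes ignored); it is a textbook simplification, not the ellipsometry method developed in the thesis.

        import numpy as np

        def tds_optical_constants(omega, T_complex, d):
            """Refractive index and extinction from a THz-TDS transmission scan.

            Thick-slab approximation: T(w) = E_sample(w) / E_reference(w), with
            sample thickness d (m) and angular frequencies omega (rad/s, > 0).
            Depending on the FFT sign convention, the phase may need negating.
            """
            c = 2.998e8                                # speed of light (m/s)
            dphi = np.unwrap(np.angle(T_complex))      # phase delay added by the sample
            n = 1.0 + c * dphi / (omega * d)           # real refractive index
            # Amplitude loss after removing the two Fresnel interface losses:
            kappa = -(c / (omega * d)) * np.log(
                np.abs(T_complex) * (n + 1.0) ** 2 / (4.0 * n)
            )
            return n, kappa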

    Implementation of variational quantum algorithms on superconducting qudits

    Quantum computing is considered an emerging technology with promising applications in chemistry, materials, medicine, and cryptography. Superconducting circuits are a leading candidate hardware platform for the realisation of quantum computing, and superconducting devices have now been demonstrated at a scale of hundreds of qubits. Further scale-up faces challenges in wiring, frequency crowding, and the high cost of control electronics. Complementary to increasing the number of qubits, using qutrits (3-level systems) or qudits (d-level systems, d > 3) as the basic building block for quantum processors can also increase their computational capability. A commonly used superconducting qubit design, the transmon, has more than two levels, making it a good candidate for a qutrit or qudit processor. Variational quantum algorithms are a type of quantum algorithm that can be implemented on near-term devices. They have been proposed to have a higher tolerance to noise, making them promising for near-term applications of quantum computing. The difference between qubits and qudits makes it non-trivial to translate a variational algorithm designed for qubits onto a qudit quantum processor: the algorithm must either be rewritten in a qudit version, or an emulator must be developed that emulates a qubit processor with a qudit processor. This thesis describes research on the implementation of variational quantum algorithms, with a particular focus on utilising more than two computational levels of transmons. The work comprises building a two-qubit transmon device and a multi-level transmon device that is used as a qutrit or a qudit (d = 4). We fully benchmarked the two-qubit and single-qudit devices with randomised benchmarking and gate-set tomography, and found good agreement between the two approaches. The qutrit Hadamard gate is reported to have an infidelity of (3.22 ± 0.11) × 10⁻³, which is comparable to state-of-the-art results. We used the qudit to implement a two-qubit emulator and report that the two-qubit Clifford-gate randomised benchmarking result on the emulator (infidelity (9.5 ± 0.7) × 10⁻²) is worse than the physical two-qubit result (infidelity (4.0 ± 0.3) × 10⁻²). We also implemented active reset for the qudit transmon to demonstrate the preparation of high-fidelity initial states with active feedback; gate-set tomography showed the initial-state fidelity improved from 0.900 ± 0.011 to 0.9932 ± 0.0013. We finally utilised the single-qudit device to implement quantum algorithms. First, a single-qutrit classifier for the iris dataset was implemented; we report a successful demonstration, with a training accuracy of 0.96 ± 0.03 and a testing accuracy of 0.94 ± 0.04 across multiple trials. Second, we implemented a two-qubit emulator with a 4-level qudit and used it to demonstrate a variational quantum eigensolver for the hydrogen molecule. The solved energy as a function of hydrogen bond distance is accurate to within 1.5 × 10⁻² Hartree, below the chemical-accuracy threshold. From the characterisation and benchmarking results and the successful demonstration of two quantum algorithms, we conclude that the higher levels of a transmon can be used to increase the size of the Hilbert space available for quantum computation at minimal extra cost.
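    The two-qubit emulator rests on a simple encoding: the four basis states |q1 q0> are identified with the four levels of the qudit, so any two-qubit gate becomes a 4 x 4 unitary on qudit levels. The NumPy sketch below illustrates this mapping with a Bell-state preparation; it shows the encoding idea only, not the pulse-level implementation.

        import numpy as np

        # Encode two qubits in one 4-level qudit: |q1 q0> -> level 2*q1 + q0.
        # Any two-qubit gate then acts as a 4x4 unitary on the qudit levels.
        CNOT = np.array([[1, 0, 0, 0],   # |00> -> |00>
                         [0, 1, 0, 0],   # |01> -> |01>
                         [0, 0, 0, 1],   # |10> -> |11>
                         [0, 0, 1, 0]])  # |11> -> |10>

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        I2 = np.eye(2)

        # Bell-state preparation, expressed directly in the qudit level basis:
        # apply H to qubit 1, then CNOT, starting from |00>.
        state = np.zeros(4)
        state[0] = 1.0                          # qudit level |0> = |00>
        state = CNOT @ np.kron(H, I2) @ state
        print(np.round(state, 3))               # (|00> + |11>)/sqrt(2): levels 0 and 3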

    Towards reliable parameter extraction in MEMS final module testing using Bayesian inference

    In micro-electro-mechanical systems (MEMS) testing, high overall precision and reliability are essential. Because runtime efficiency is an additional requirement, machine learning methods have been investigated in recent years. However, these methods are often associated with inherent challenges concerning uncertainty quantification and guarantees of reliability. The goal of this paper is therefore to present a new machine learning approach to MEMS testing, based on Bayesian inference, that determines whether an estimate is trustworthy. The overall predictive performance as well as the uncertainty quantification are evaluated with four methods: a Bayesian neural network, a mixture density network, a probabilistic Bayesian neural network and BayesFlow. They are investigated under variation in training-set size, different additive noise levels, and an out-of-distribution condition, namely variation in the damping factor of the MEMS device. Furthermore, epistemic and aleatoric uncertainties are evaluated and discussed to encourage thorough inspection of models before deployment, striving for reliable and efficient parameter estimation during final module testing of MEMS devices. BayesFlow consistently outperformed the other methods in terms of predictive performance. As the probabilistic Bayesian neural network enables the distinction between epistemic and aleatoric uncertainty, their shares of the total uncertainty have been studied intensively.
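    The epistemic/aleatoric split mentioned above follows the law of total variance: averaging the predicted noise variances gives the aleatoric part, while the spread of the predicted means across models gives the epistemic part. The NumPy sketch below illustrates this decomposition for a generic ensemble of Gaussian predictors; the ensemble outputs are placeholders, not results from the paper.

        import numpy as np

        def decompose_uncertainty(means, variances):
            """Split predictive uncertainty from an ensemble of probabilistic models.

            means, variances: (M, N) arrays -- M ensemble members, each predicting
            a Gaussian (mean, variance) for N test points.
            Total variance = E[var] (aleatoric) + Var[mean] (epistemic).
            """
            aleatoric = variances.mean(axis=0)   # average predicted noise
            epistemic = means.var(axis=0)        # disagreement between members
            return aleatoric, epistemic, aleatoric + epistemic

        # Illustrative check with placeholder ensemble outputs for 3 test points:
        means = np.array([[0.9, 2.1, 3.0], [1.1, 1.9, 3.2], [1.0, 2.0, 2.8]])
        variances = np.full_like(means, 0.05)
        print(decompose_uncertainty(means, variances))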