
    Application of deep learning methods in materials microscopy for the quality assessment of lithium-ion batteries and sintered NdFeB magnets

    Quality control focuses on detecting product defects and monitoring activities to verify that products meet the desired quality standard. Many quality-control approaches use specialized image-processing software based on hand-engineered features developed by domain experts to detect objects and analyze images. These models, however, are tedious and costly to develop and hard to maintain, while the resulting solution is often brittle and requires substantial adaptation for even slightly different use cases. For these reasons, quality control in industry is still frequently performed manually, which is time-consuming and error-prone. We therefore propose a more general, data-driven approach based on recent advances in computer vision, using convolutional neural networks to learn representative features directly from the data. Whereas conventional methods use handcrafted features to detect individual objects, deep learning approaches learn generalizable features directly from training samples in order to detect various objects. This dissertation develops models and techniques for the automated detection of defects in light-microscopy images of materialographically prepared cross-sections. The defect-detection models we develop can be broadly divided into supervised and unsupervised deep learning techniques. In particular, several supervised deep learning models for detecting defects in the microstructure of lithium-ion batteries are developed, ranging from binary classification models based on a sliding-window approach with limited training data to complex defect detection and localization models based on one- and two-stage detectors. Our final model can detect and localize multiple classes of defects in large microscopy images with high accuracy and in near real time. Successfully training supervised deep learning models, however, typically requires a sufficiently large set of labeled training samples, which are often not readily available and can be very costly to obtain. We therefore propose two approaches based on unsupervised deep learning for detecting anomalies in the microstructure of sintered NdFeB magnets without the need for labeled training data. The models are able to detect defects by learning indicative features of only "normal" microstructure patterns from the training data. We show experimental results of the proposed defect-detection systems by performing a quality assessment on commercial samples of lithium-ion batteries and sintered NdFeB magnets.
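The unsupervised idea described above — learning a model of "normal" microstructure only and flagging deviations from it — can be illustrated with a minimal reconstruction-error sketch. Here PCA stands in for the deep models of the thesis, and all data, dimensions, and thresholds are illustrative assumptions, not the dissertation's actual method:

```python
import numpy as np

def fit_normal_model(patches, n_components=8):
    """Learn a low-dimensional basis from defect-free ('normal') patches only."""
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    # Principal components of the normal data stand in for a learned autoencoder.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def anomaly_score(patch, mean, basis):
    """Reconstruction error: high for patterns unlike the normal training data."""
    x = patch.reshape(-1).astype(float) - mean
    recon = basis.T @ (basis @ x)
    return float(np.linalg.norm(x - recon))

rng = np.random.default_rng(0)
# Structured 'normal' texture: a gradient plus noise, as a toy microstructure.
normal = rng.normal(0, 0.1, size=(200, 16, 16)) + np.linspace(0, 1, 16)
mean, basis = fit_normal_model(normal)
ok = anomaly_score(normal[0], mean, basis)
defect = normal[0].copy()
defect[4:12, 4:12] += 3.0           # simulated pore/crack-like deviation
bad = anomaly_score(defect, mean, basis)
print(ok < bad)                     # True: the defective patch scores higher
```

No defective samples are needed for training; the score separates defects because they leave the subspace spanned by normal patterns.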

    Object Detection Frameworks for Fully Automated Particle Picking in Cryo-EM

    Particle picking in cryo-EM is a form of object detection for noisy, low-contrast, and out-of-focus microscopy images taken of different (unknown) structures. This thesis presents a fully automated approach which, for the first time, explicitly considers training on multiple structures, simultaneously learning both specialized models for each structure used in training and a generic model that can be applied to unseen structures. The presented architecture is fully convolutional and divided into two parts: (i) a portion that shares its weights across all structures and (ii) N+1 parallel sub-architectures, N of which are specialized to the structures used for training, plus one generic sub-architecture whose weights are tied to the layers of the specialized models. Experiments reveal improvements over the state of the art in multiple use cases and present additional possibilities to practitioners.
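The shared-trunk-plus-N+1-heads layout can be sketched as follows. NumPy linear layers stand in for the fully convolutional sub-architectures, and the tying scheme shown (the generic head averaging the specialized weights) is one plausible illustration, not the thesis's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d_in, d_feat = 3, 64, 32                    # N training structures, toy sizes

W_shared = rng.normal(size=(d_in, d_feat))     # (i) weights shared by all structures
W_heads = [rng.normal(size=(d_feat, 1)) for _ in range(N)]  # (ii) N specialized heads

def score(x, head):
    feats = np.maximum(x @ W_shared, 0)        # shared trunk (ReLU), used by every head
    return feats @ head                        # per-structure particle scores

def generic_head():
    # Generic model: weights tied to the specialized heads
    # (illustrated here as their average).
    return np.mean(W_heads, axis=0)

x = rng.normal(size=(5, d_in))                 # 5 candidate windows
specialized = score(x, W_heads[0])             # picks for a structure seen in training
generic = score(x, generic_head())             # picks for an unseen structure
print(specialized.shape, generic.shape)        # (5, 1) (5, 1)
```

The point of the layout is that the trunk is trained on all N structures at once, while each head only ever sees its own structure.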

    Quantitative Image Simulation and Analysis of Nanoparticles


    Novel computational methods for in vitro and in situ cryo-electron microscopy

    Over the past decade, advances in microscope hardware and image data processing algorithms have made cryo-electron microscopy (cryo-EM) a dominant technique for protein structure determination. Near-atomic resolution can now be obtained for many challenging in vitro samples using single-particle analysis (SPA), while sub-tomogram averaging (STA) can obtain sub-nanometer resolution for large protein complexes in a crowded cellular environment. Reaching high resolution requires large amounts of image data. Modern transmission electron microscopes (TEMs) automate the acquisition process and can acquire thousands of micrographs or hundreds of tomographic tilt series over several days without intervention. In a first step, the data must be pre-processed: micrographs acquired as movies are corrected for stage and beam-induced motion. For tilt series, additional alignment of all micrographs in 3D is performed using gold- or patch-based fiducials. Parameters of the contrast-transfer function (CTF) are estimated to enable its reversal during SPA refinement. Finally, individual protein particles must be located and extracted from the aligned micrographs. Current pre-processing algorithms, especially those for particle picking, are not robust enough to enable fully unsupervised operation. Thus, pre-processing is started after data collection and takes several days due to the amount of supervision required. Pre-processing the data in parallel to acquisition with more robust algorithms would save time and allow bad samples and microscope settings to be discovered early on. Warp is a new software package for cryo-EM data pre-processing. It implements new algorithms for motion correction, CTF estimation, and tomogram reconstruction, as well as deep learning-based approaches to particle picking and image denoising. The algorithms are more accurate and robust, enabling unsupervised operation. Warp integrates all pre-processing steps into a pipeline that is executed on the fly during data collection. Integrated with SPA tools, the pipeline can produce 2D and 3D classes less than an hour into data collection for favorable samples. Here I describe the implementation of the new algorithms and evaluate them on various movie and tilt series data sets. I show that unsupervised pre-processing of a tilted influenza hemagglutinin trimer sample with Warp and refinement in cryoSPARC can improve the previously published resolution from 3.9 Å to 3.2 Å. Warp’s algorithms operate in a reference-free manner to improve image resolution at the pre-processing stage, when no high-resolution maps are available for the particles yet. Once 3D maps have been refined, they can be used to go back to the raw data and perform reference-based refinement of sample motion and CTF in movies and tilt series. M is a new tool I developed to solve this task in a multi-particle framework. Instead of following the SPA assumption that every particle is single and independent, M models all particles in a field of view as parts of a large, physically connected multi-particle system. This allows M to optimize hyper-parameters of the system, such as sample motion and deformation, or higher-order aberrations in the CTF. Because M models these effects accurately and optimizes all hyper-parameters simultaneously with particle alignments, it can surpass previous reference-based frame and tilt series alignment tools. Here I describe the implementation of M, evaluate it on several data sets, and demonstrate that the new algorithms achieve equally high resolution with movie and tilt series data of the same sample. Most strikingly, the combination of Warp, RELION and M can resolve 70S ribosomes bound to an antibiotic at 3.5 Å inside vitrified Mycoplasma pneumoniae cells, marking a major advance in resolution for in situ imaging.
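The on-the-fly idea — pre-processing each movie as soon as it appears rather than after the whole session — amounts to a watch loop over the acquisition directory. A minimal sketch, in which every name (the `preprocess` stub, the `*.tif` pattern, the polling scheme) is an illustrative placeholder and not Warp's actual API:

```python
import tempfile
import time
from pathlib import Path

def preprocess(movie: Path) -> dict:
    """Placeholder for the per-movie pipeline: motion correction,
    CTF estimation, denoising, and particle picking."""
    return {"movie": movie.name, "picked": True}

def watch_session(session_dir: Path, poll_s: float = 5.0, max_polls: int = 1):
    """Process micrograph movies in parallel to acquisition."""
    done, results = set(), []
    for _ in range(max_polls):
        for movie in sorted(session_dir.glob("*.tif")):
            if movie.name not in done:
                done.add(movie.name)
                results.append(preprocess(movie))  # runs while the scope acquires
        time.sleep(poll_s)
    return results

# Toy session: one movie already written by the microscope.
demo = Path(tempfile.mkdtemp())
(demo / "movie_0001.tif").touch()
print(watch_session(demo, poll_s=0.0, max_polls=1))
```

Because each movie is handled once and results accumulate incrementally, downstream tools can start 2D/3D classification long before acquisition ends.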

    Particle size distribution based on deep learning instance segmentation

    Abstract. Deep learning has become one of the most important topics in computer science, and it has recently proved to deliver outstanding performance in the field of computer vision, ranging from image classification and object detection to instance segmentation and panoptic segmentation. However, most of these results were obtained on large, publicly available datasets that exhibit a low level of scene complexity. Less is known about applying deep neural networks to images acquired in industrial settings, where data is available in limited amounts. Moreover, comparing an image-based measurement boosted by deep learning to an established reference method can pave the way towards a shift in industrial measurements. This thesis hypothesizes that the particle size distribution can be estimated by employing a deep neural network to segment the particles of interest. The analysis was performed on two deep neural networks, comparing the results of the instance segmentation and the resulting size distributions. First, the data was manually labelled by selecting apatite and phlogopite particles, formulating the problem as a two-class instance segmentation task. Next, models were trained based on the two architectures and then used for predicting instances of particles on previously unseen images. Ultimately, accumulating the sizes of the predicted particles yields a particle size distribution for a given dataset. The final results validated the hypothesis to some extent and showed that tackling difficult and complex challenges in industry by leveraging state-of-the-art deep neural networks leads to promising results. The system was able to correctly identify most of the particles, even in challenging situations. The resulting particle size distribution was also compared to a reference measurement obtained by the laser diffraction method, but further research and experiments are still required to properly compare the two methods. The two evaluated architectures yielded strong results with relatively small amounts of annotated data.
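The accumulation step — turning predicted instance masks into a size distribution — can be sketched as follows. Using the equivalent circular diameter of each mask's pixel area is one common convention; the mask format, pixel scale, and bin edges here are illustrative assumptions:

```python
import numpy as np

def equivalent_diameter(mask: np.ndarray, um_per_px: float = 1.0) -> float:
    """Diameter of the circle with the same area as the predicted instance mask."""
    area_px = int(mask.sum())
    return 2.0 * np.sqrt(area_px / np.pi) * um_per_px

def size_distribution(masks, bins):
    """Accumulate per-particle sizes into a histogram (the size distribution)."""
    diameters = [equivalent_diameter(m) for m in masks]
    counts, edges = np.histogram(diameters, bins=bins)
    return diameters, counts, edges

# Two toy instance masks: a 4x4 pixel particle and a 2x2 pixel particle.
m1 = np.zeros((16, 16), bool); m1[2:6, 2:6] = True
m2 = np.zeros((16, 16), bool); m2[10:12, 10:12] = True
diams, counts, edges = size_distribution([m1, m2], bins=[0, 3, 6])
print(np.round(diams, 2), counts)   # the two particles fall into separate bins
```

A laser-diffraction reference reports a volume-weighted distribution, whereas a per-mask histogram like this is number-weighted — one reason the thesis notes that a direct comparison needs further work.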

    Computer Vision Approaches to Liquid-Phase Transmission Electron Microscopy

    Electron microscopy (EM) is a technique that exploits the interaction between electrons and matter to produce high-resolution images down to the atomic level. To avoid undesired scattering in the electron path, EM samples are conventionally imaged in the solid state under vacuum conditions. Recently, this limit has been overcome by the realization of liquid-phase electron microscopy (LP EM), a technique that enables the analysis of samples in their native liquid state. LP EM paired with a high-frame-rate direct detection camera allows tracking the motion of particles in liquids, as well as their temporal dynamic processes. In this research work, LP EM is adopted to image the dynamics of particles undergoing Brownian motion, exploiting their natural rotation to access all particle views in order to reconstruct their 3D structure via tomographic techniques. Computer vision-based tools were designed around the limitations of LP EM to process the results of the imaging experiments: different deblurring and denoising approaches were adopted to improve the quality of the images, and the processed LP EM images were then used to reconstruct 3D models of the imaged samples. This task was performed by developing two different methods: Brownian tomography (BT) and Brownian particle analysis (BPA). The former tracks a single particle in time, capturing the evolution of its dynamics. The latter is an extension in time of the single-particle analysis (SPA) technique, which is conventionally paired with cryo-EM to reconstruct 3D density maps from thousands of EM images capturing hundreds of particles of the same species frozen on a grid. In contrast, BPA can process image sequences that may not contain thousands of particles, monitoring individual particle views across consecutive frames rather than within a single frame.
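Tracking a single particle through a frame sequence, as BT requires, can be illustrated with a simple nearest-neighbour linking step. This is a generic association sketch under toy data, not the thesis's actual tracker:

```python
import numpy as np

def track_particle(detections_per_frame, start):
    """Link one particle across frames by nearest-neighbour association.
    `detections_per_frame` is a list of (n_i, 2) arrays of detected centers."""
    track = [np.asarray(start, float)]
    for dets in detections_per_frame:
        d = np.linalg.norm(dets - track[-1], axis=1)
        track.append(dets[np.argmin(d)])    # follow the closest detection
    return np.stack(track)

# Toy Brownian walk observed among clutter in three frames.
frames = [np.array([[1.1, 1.0], [8.0, 8.0]]),
          np.array([[7.9, 8.2], [1.3, 1.2]]),
          np.array([[1.2, 1.5], [8.1, 7.9]])]
path = track_particle(frames, start=[1.0, 1.0])
print(path.shape)   # (4, 2): the start point plus one position per frame
```

Nearest-neighbour linking is valid only when frame-to-frame displacements are smaller than inter-particle distances, which is why high-frame-rate acquisition matters for LP EM tracking.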

    Deep Learning-Guided Prediction of Material’s Microstructures and Applications to Advanced Manufacturing

    Material microstructure prediction based on processing conditions is very useful in advanced manufacturing. Trial-and-error experiments that exhaust numerous combinations of processing parameters and characterize the resulting microstructures are very time-consuming. To accelerate process development and optimization, researchers have explored microstructure prediction methods, including physics-based modeling and feature-based machine learning. Nevertheless, both have limitations: physics-based modeling consumes too much computational power, and in feature-based machine learning, low-dimensional microstructural features are manually extracted to represent high-dimensional microstructures, which leads to information loss. In this dissertation, a deep learning-guided microstructure prediction framework is established. It uses a conditional generative adversarial network (CGAN) to regress microstructures against numerical processing parameters. After training, the algorithm learns the mapping between microstructures and processing parameters and can infer the microstructure for an unseen processing parameter value. This CGAN-enabled approach requires little computational power for prediction and does not require manual feature extraction. A regression-based conditional Wasserstein generative adversarial network (RCWGAN) is developed, and its microstructure prediction capability is demonstrated on a synthetic micrograph dataset. Several important hyperparameters, including the loss function, model depth, number of training epochs, and size of the training set, are systematically studied and optimized. After optimization, prediction accuracy for various microstructural features is over 92%. The RCWGAN is then validated on a scanning electron microscopy (SEM) micrograph dataset obtained from laser-sintered alumina. Data augmentation is applied to ensure an adequate number of training samples, and different regularization technologies are studied. It is found that the gradient penalty preserves the most detail in the generated microstructures. After training, the RCWGAN is able to predict the microstructure as a function of laser power. In-situ microstructure monitoring using the RCWGAN is also proposed and demonstrated: obtaining microstructure information during fabrication could enable accurate microstructure control and opens the possibility of fabricating new kinds of materials with novel functionalities. The RCWGAN is integrated into a laser sintering system equipped with a camera to demonstrate this application. Surface-emission brightness is captured by the camera during the laser sintering process and fed to the RCWGAN for online microstructure prediction. After training, the RCWGAN learns the mapping between surface-emission brightness and microstructures and can make predictions in seconds. The prediction accuracy is over 95% in terms of average grain size.
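The gradient penalty that preserved the most detail can be illustrated on a toy critic: the WGAN-GP term pushes the critic's gradient norm toward 1 at points interpolated between real and generated samples. This NumPy sketch uses finite differences in place of automatic differentiation, and the critic is a stand-in, not the thesis's network:

```python
import numpy as np

def grad_norm(f, x, h=1e-5):
    """Finite-difference estimate of the gradient norm ||grad f(x)||."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return float(np.linalg.norm(g))

def gradient_penalty(critic, real, fake, lam=10.0, eps=0.3):
    """WGAN-GP term: lam * (||grad D(x_hat)|| - 1)^2 at an interpolate x_hat."""
    x_hat = eps * real + (1 - eps) * fake
    return lam * (grad_norm(critic, x_hat) - 1.0) ** 2

w = np.array([0.6, 0.8])                     # unit-norm weights -> gradient norm 1
linear_critic = lambda x: float(x @ w)
real, fake = np.array([1.0, 2.0]), np.array([0.0, 0.5])
gp = gradient_penalty(linear_critic, real, fake)
print(round(gp, 6))                          # 0.0: a 1-Lipschitz critic incurs no penalty
```

Doubling the critic's weights doubles its gradient norm and incurs a penalty of lam * (2 - 1)^2, which is how the term softly enforces the Lipschitz constraint during training.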