597 research outputs found

    Deep CNN-Based Automated Optical Inspection for Aerospace Components

    ABSTRACT Defect detection is of utmost importance in high-tech industries such as aerospace manufacturing, where automated industrial quality control systems are widely employed. In the aerospace manufacturing industry, composite materials are extensively applied as structural components in civilian and military aircraft. To ensure product quality and high reliability, manual inspection and traditional automatic optical inspection have been employed to identify defects throughout production and maintenance. These inspection techniques have several limitations: they are tedious, time-consuming, inconsistent, subjective, labor-intensive, and expensive. To make the operation effective and efficient, modern automated optical inspection is preferable. In this dissertation, automatic defect detection techniques are tested on three levels using a novel aerospace composite materials image dataset (ACMID). First, classical machine learning models, namely Support Vector Machine and Random Forest, are employed. Second, deep CNN-based models, such as improved ResNet50 and MobileNetV2 architectures, are trained on the ACMID datasets. Third, an efficient defect detection technique that combines features of a deep learning and a classical machine learning model is proposed for the ACMID dataset. To assess aerospace composite components, all models are trained and tested on ACMID datasets of distinct sizes. In addition, this work investigates the scenario in which defective and non-defective samples are scarce and imbalanced. To overcome the problems of imbalanced and scarce datasets, oversampling techniques and data augmentation using an improved deep convolutional generative adversarial network (DCGAN) are considered. Furthermore, the proposed models are also validated on a benchmark steel surface defect (SSD) dataset.
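The third level of the dissertation combines deep-learned features with a classical classifier. A minimal sketch of that hybrid idea, with heavy hedging: the "backbone" below is a toy mean/variance feature map standing in for a CNN such as ResNet50, and a nearest-centroid rule stands in for the SVM / Random Forest; all names and the tiny synthetic patches are illustrative, not the dissertation's pipeline.

```python
def extract_features(image):
    """Toy stand-in for a CNN backbone: global mean and variance of a patch."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return (mean, var)

def train_centroids(samples):
    """Per-class feature centroids (stand-in for fitting an SVM / Random Forest)."""
    sums, counts = {}, {}
    for image, label in samples:
        f = extract_features(image)
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += f[0]; s[1] += f[1]
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (s[0] / counts[lbl], s[1] / counts[lbl]) for lbl, s in sums.items()}

def classify(image, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    f = extract_features(image)
    return min(centroids, key=lambda lbl: (f[0] - centroids[lbl][0]) ** 2
                                        + (f[1] - centroids[lbl][1]) ** 2)

# Tiny synthetic "dataset": flat patches are non-defective, high-variance ones defective.
ok = [[[10, 10], [10, 10]], [[11, 11], [11, 11]]]
bad = [[[0, 30], [30, 0]], [[2, 28], [27, 1]]]
model = train_centroids([(im, "ok") for im in ok] + [(im, "defect") for im in bad])
print(classify([[12, 12], [12, 12]], model))  # expected: ok
```

The point of the design is that the feature extractor and the classifier can be swapped independently, which is what lets a deep backbone feed a classical model.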

    Defect detection in infrared thermography by deep learning algorithms

    Non-destructive evaluation (NDE) is a field concerned with identifying all types of structural damage in an object of interest without causing any permanent damage or modification, and it has been intensively investigated for many years. Infrared thermography (IR) is an NDE technology that inspects, characterizes, and analyzes defects based on infrared image sequences recorded from infrared light emission and reflection, in order to evaluate non-self-heating objects for quality control and safety assurance. In recent years, the deep learning field of artificial intelligence has made remarkable progress in image processing applications and has shown its ability to overcome most of the disadvantages of previously existing approaches in a great number of applications. However, due to insufficient training data, deep learning algorithms remain under-explored here, and only a few publications report their application to thermography nondestructive evaluation (TNDE). Intelligent and highly automated deep learning algorithms could be coupled with infrared thermography to identify defects (damage) in composites, steel, etc. with high confidence and accuracy. Among the topics in the TNDE research field, supervised and unsupervised machine learning techniques are the most innovative and challenging tasks for defect detection analysis. In this project, we construct integrated frameworks for processing raw infrared thermography data using deep learning algorithms; the highlights of the proposed methodologies are as follows: 1. Automatic defect identification and segmentation by deep learning algorithms in infrared thermography. Pre-trained convolutional neural networks (CNNs) are introduced to capture defect features in infrared thermal images and to implement CNN-based models for the detection of structural defects in samples made of composite materials (fault diagnosis). Several alternative deep CNNs are compared for automatic defect detection and segmentation in infrared thermography, using different deep learning detection methods: (i) instance segmentation (CenterMask; Mask R-CNN); (ii) object detection (YOLOv3; Faster R-CNN); (iii) semantic segmentation (U-Net; Res-UNet). 2. A data augmentation technique based on synthetic data generation, to reduce the high expense of collecting original infrared data on composites (aircraft components) and to enrich the training data for feature learning in TNDE. 3. Generative adversarial networks (deep convolutional GAN and Wasserstein GAN) are introduced to infrared thermography in association with partial least squares thermography (PLST) (the PLS-GANs network) for visible defect feature extraction and enhancement of defect visibility, removing noise in pulsed thermography. 4. Automatic defect depth estimation (a characterization issue) from simulated infrared data using a simplified recurrent neural network, the Gated Recurrent Unit (GRU), through supervised regression learning.
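Point 4 above frames depth estimation as supervised regression from simulated infrared data to a defect depth. A minimal sketch of that setup, with everything illustrative: the exponential "cooling curve" is an assumed toy decay model (not the thesis's simulation), and a 1-nearest-neighbour lookup stands in for the GRU regressor.

```python
import math

def cooling_curve(depth_mm, n=20):
    """Toy simulated surface-cooling signal: deeper defects perturb it more slowly.
    Purely illustrative; real data would come from a thermographic simulation."""
    return [math.exp(-t / (1.0 + depth_mm)) for t in range(n)]

def train(depths):
    """Build a labeled training set of (curve, depth) pairs."""
    return [(cooling_curve(d), d) for d in depths]

def predict_depth(curve, model):
    """1-NN regression: return the depth of the closest training curve
    (a stand-in for the GRU mapping sequences to depth)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda pair: dist(curve, pair[0]))[1]

model = train([0.5, 1.0, 2.0, 3.0])
print(predict_depth(cooling_curve(2.0), model))  # expected: 2.0
```

A recurrent network replaces the nearest-neighbour lookup precisely because it can interpolate between training depths and handle noisy sequences.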

    Generative Adversarial Networks to Improve the Robustness of Visual Defect Segmentation by Semantic Networks in Manufacturing Components

    This paper describes the application of Semantic Networks for the detection of defects in images of metallic manufactured components when the number of available defect samples is small, a common situation in real industrial environments. The usual way to overcome this shortage of data is conventional data augmentation. We instead resort to Generative Adversarial Networks (GANs), which have shown the capability to generate highly convincing samples of a specific class as the result of a game between a discriminator and a generator module. Here, we apply GANs to generate images of metallic manufactured components with specific defects, in order to improve the training of the Semantic Networks (specifically DeepLabV3+ and Pyramid Attention Network (PAN)) that carry out defect detection and segmentation. Our process generates defect images using StyleGAN2 with the DiffAugment method, followed by conventional data augmentation over the entire enriched dataset, yielding a large balanced dataset that allows robust training of the Semantic Network. We demonstrate the approach on a private dataset generated for an industrial client, in which images are captured by an ad-hoc photometric-stereo image acquisition system, and on a public dataset, the Northeastern University surface defect database (NEU). The proposed approach improves an intersection over union (IoU) measure of detection performance by 7% and 6% on the two datasets, respectively, over conventional data augmentation.
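The 7% and 6% gains above are reported in intersection over union (IoU), the standard overlap score between a predicted segmentation mask and the ground truth. A minimal IoU on binary masks (an illustrative sketch, not the paper's evaluation code):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks given as nested lists."""
    inter = sum(p and t for row_p, row_t in zip(pred, truth)
                        for p, t in zip(row_p, row_t))
    union = sum(p or t for row_p, row_t in zip(pred, truth)
                       for p, t in zip(row_p, row_t))
    # Two empty masks agree perfectly, so define IoU as 1.0 there.
    return inter / union if union else 1.0

pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 1, 1],
         [0, 0, 0]]
print(iou(pred, truth))  # 2 overlapping pixels / 4 pixels in the union = 0.5
```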

    Artificial intelligence for advanced manufacturing quality

    This Thesis addresses the challenge of AI-based image quality control systems applied to the manufacturing industry, aiming to improve this field through the use of advanced techniques for data acquisition and processing in order to obtain robust, reliable, and optimal systems. It presents contributions on the use of complex data acquisition techniques, the application and design of specialised neural networks for defect detection, and the integration and validation of these systems in production processes. The work was developed in the context of several applied research projects that provided practical feedback on the usefulness of the proposed computational advances, as well as real-life data for experimental validation.

    Domain Adapted Deep-Learning for Improved Ultrasonic Crack Characterization Using Limited Experimental Data

    Deep learning is an effective method for ultrasonic crack characterization due to its high level of automation and accuracy. Simulating the training set has been shown to be an effective way of circumventing the lack of experimental data common to nondestructive evaluation (NDE) applications. However, a simulation can neither be completely accurate nor capture all of the variability present in a real inspection. This means that the experimental and simulated data come from different (but related) distributions, leading to inaccuracy when a deep learning algorithm trained on simulated data is applied to experimental measurements. This article aims to tackle this problem through domain adaptation (DA). A convolutional neural network (CNN) is used to predict the depth of surface-breaking defects, with in-line pipe inspection as the targeted application. Three DA methods, across varying sizes of experimental training data, are compared with two non-DA methods as a baseline. Performance is evaluated by sizing 15 experimental notches of 1–5 mm length, inclined at angles of up to 20° from the vertical. Experimental training sets are formed with between 1 and 15 notches. Of the DA methods investigated, an adversarial approach is found to be the most effective way to use the limited experimental training data. With this method and only three notches, the resulting network gives a root-mean-square error (RMSE) in sizing of 0.5 ± 0.037 mm, whereas with only experimental data the RMSE is 1.5 ± 0.13 mm and with only simulated data it is 0.64 ± 0.044 mm.
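The sizing comparison above is scored by root-mean-square error over the 15 notches. A minimal RMSE over (predicted, true) depth pairs; the numbers below are made up for illustration and are not the article's measurements:

```python
import math

def rmse(pairs):
    """Root-mean-square error over (predicted, true) value pairs."""
    return math.sqrt(sum((p - t) ** 2 for p, t in pairs) / len(pairs))

# Hypothetical notch sizings in mm: (network prediction, true depth).
sizing = [(1.2, 1.0), (2.4, 2.5), (3.9, 4.0), (5.3, 5.0)]
print(round(rmse(sizing), 3))
```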

    Automated Semiconductor Defect Inspection in Scanning Electron Microscope Images: a Systematic Review

    A growing need exists for efficient and accurate methods of detecting defects in semiconductor materials and devices, because such defects cause critical failures and wafer-yield limitations that harm the efficiency of the manufacturing process. As nodes and patterns get smaller, even high-resolution imaging techniques such as Scanning Electron Microscopy (SEM) produce noisy images, since they operate close to their sensitivity limits and the physical properties of different underlayers and resist materials vary. This inherent noise is one of the main challenges for defect inspection. One promising approach is the use of machine learning algorithms, which can be trained to accurately classify and locate defects in semiconductor samples; convolutional neural networks in particular have recently proved useful in this regard. This systematic review provides a comprehensive overview of the state of automated semiconductor defect inspection on SEM images, including the most recent innovations and developments. Thirty-eight publications on this topic, indexed in the IEEE Xplore and SPIE databases, were selected; for each, the application, methodology, dataset, results, limitations, and future work are summarized, and a comprehensive analysis of their methods is provided. Finally, promising avenues for future work in the field of SEM-based defect inspection are suggested.

    Efficient and Accurate Segmentation of Defects in Industrial CT Scans

    Industrial computed tomography (CT) is an elementary tool for the non-destructive inspection of cast light-metal or plastic parts. Comprehensive testing not only helps to ensure the stability and durability of a part; it also allows the rejection rate to be reduced by supporting the optimization of the casting process, and material (and weight) to be saved by producing equivalent but more filigree structures. With a CT scan it is theoretically possible to locate any defect in the part under examination and to exactly determine its shape, which in turn helps to draw conclusions about its harmfulness. However, most of the time the data quality is not good enough to allow segmenting the defects with simple filter-based methods which operate directly on the gray values, especially when the inspection is expanded to the entire production. In such in-line inspection scenarios the tight cycle times further limit the available time for the acquisition of the CT scan, which renders the scans noisy and prone to various artifacts. In recent years, dramatic advances in deep learning (and convolutional neural networks in particular) have made even the reliable detection of small objects in cluttered scenes possible. These methods are a promising approach to quickly yield a reliable and accurate defect segmentation even in unfavorable CT scans. The huge drawback: a lot of precisely labeled training data is required, which is utterly challenging to obtain, particularly for the detection of tiny defects in huge, highly artifact-afflicted, three-dimensional voxel data sets. Hence, a significant part of this work deals with the acquisition of precisely labeled training data. 
Firstly, we consider facilitating the manual labeling process: our experts annotate on high-quality CT scans with a high spatial resolution and a high contrast resolution, and we then transfer these labels to an aligned "normal" CT scan of the same part, which holds all the challenging aspects we expect in production use. Nonetheless, due to the indecisiveness of the labeling experts about what to annotate as defective, the labels remain fuzzy. Thus, we additionally explore different approaches to generate artificial training data, for which a precise ground truth can be computed. We find accurate labeling to be crucial for proper training. We evaluate (i) domain randomization, which simulates a super-set of reality with simple transformations, (ii) generative models, which are trained to produce samples of the real-world data distribution, and (iii) realistic simulations, which capture the essential aspects of real CT scans. Here, we develop a fully automated simulation pipeline which provides us with an arbitrary amount of precisely labeled training data. First, we procedurally generate virtual cast parts in which we place reasonable artificial casting defects. Then, we realistically simulate CT scans which include typical CT artifacts like scatter, noise, cupping, and ring artifacts. Finally, we compute a precise ground truth by determining for each voxel the overlap with the defect mesh. To determine whether our realistically simulated CT data is eligible to serve as training data for machine learning methods, we compare the prediction performance of learning-based and non-learning-based defect recognition algorithms on the simulated data and on real CT scans. In an extensive evaluation, we compare our novel deep learning method to a baseline of image processing and traditional machine learning algorithms. This evaluation shows how much defect detection benefits from learning-based approaches. 
In particular, we compare (i) a filter-based anomaly detection method which finds defect indications by subtracting the original CT data from a generated "defect-free" version, (ii) a pixel-classification method which, based on densely extracted hand-designed features, lets a random forest decide whether an image element is part of a defect, and (iii) a novel deep learning method which combines a U-Net-like encoder-decoder pair of three-dimensional convolutions with an additional refinement step. The encoder-decoder pair yields a high recall, which allows us to detect even very small defect instances. The refinement step yields a high precision by sorting out the false positive responses. We extensively evaluate these models on our realistically simulated CT scans as well as on real CT scans in terms of their probability of detection, which tells us at which probability a defect of a given size can be found in a CT scan of a given quality, and their intersection over union, which tells us how precise our segmentation mask is in general. While the learning-based methods clearly outperform the image processing method, the deep learning method in particular stands out for its inference speed and its prediction performance on challenging CT scans, such as those occurring in in-line scenarios. Finally, we further explore the possibilities and the limitations of the combination of our fully automated simulation pipeline and our deep learning model. With the deep learning method yielding reliable results for CT scans of low data quality, we examine by how much we can reduce the scan time while still maintaining proper segmentation results. Then, we look at the transferability of the promising results to CT scans of parts of different materials and different manufacturing techniques, including plastic injection molding, iron casting, additive manufacturing, and composed multi-material parts. 
Each of these tasks comes with its own challenges, such as an increased artifact level or different types of defects which occasionally are hard to detect even for the human eye. We tackle these challenges by employing our simulation pipeline to produce virtual counterparts that capture the tricky aspects and fine-tuning the deep learning method on this additional training data. In this way we can tailor our approach towards specific tasks, achieving reliable and robust segmentation results even for challenging data. Lastly, we examine whether the deep learning method, based on our realistically simulated training data, can be trained to distinguish between different types of defects (the reason we require a precise segmentation in the first place), and whether it can detect out-of-distribution data where its predictions become less trustworthy, i.e. provide an uncertainty estimate.
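The simulation pipeline above labels each voxel by its overlap with the defect mesh. A heavily simplified sketch of that ground-truth step: the "defect" here is an analytic sphere rather than a mesh, and the per-voxel overlap fraction is estimated by subsampling each voxel at 2x2x2 points; grid size, radius, and sampling density are all illustrative assumptions.

```python
def voxel_overlap(cx, cy, cz, r, nx, ny, nz, sub=2):
    """Fraction of each unit voxel covered by a sphere of radius r centred
    at (cx, cy, cz), estimated by sub x sub x sub point sampling.
    Stand-in for computing per-voxel overlap with a real defect mesh."""
    labels = {}
    offs = [(i + 0.5) / sub for i in range(sub)]  # sample-point offsets in [0, 1)
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                inside = sum((x + ox - cx) ** 2 + (y + oy - cy) ** 2
                             + (z + oz - cz) ** 2 <= r * r
                             for ox in offs for oy in offs for oz in offs)
                labels[(x, y, z)] = inside / sub ** 3
    return labels

# Sphere of radius 1 centred in a 3x3x3 voxel grid.
gt = voxel_overlap(cx=1.5, cy=1.5, cz=1.5, r=1.0, nx=3, ny=3, nz=3)
print(gt[(1, 1, 1)])  # centre voxel lies fully inside the sphere -> 1.0
```

Raising `sub` tightens the estimate, which is the usual accuracy/cost trade-off when rasterizing a mesh into a voxel ground truth.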

    Deep learning with filtering for defect characterization in pulsed thermography based non-destructive testing

    Pulsed thermography is widely used for non-destructive testing of various materials. The temperature profile obtained after pulse heating is used to characterize the underlying defects in an object. This paper reports the automation of defect visualization and depth quantification in pulsed thermography through various deep learning algorithms. A stainless steel plate with artificial defects is considered for analysis. The raw temperature data are smoothed using moving-average, Savitzky-Golay, and quadratic regression filters to reduce noise; thermal signal reconstruction, the conventional method of eliminating noise, is also used to generate filtered datasets. Defect visualization refers to identifying and locating defects in an image sample, and a Mask region-based convolutional neural network (Mask R-CNN) is used not just to detect the defects but also to locate them in the image. The located defects are then used for depth estimation with the following networks: multi-layer perceptron (MLP), long short-term memory (LSTM), and gated recurrent unit (GRU). The input to the networks is the temperature contrast characteristic, which represents the difference in temperature between defective and non-defective areas measured over 250 time points, and the output is the estimated depth. The study shows that the LSTM-based approach yields the lowest percentage error, 5.5%, and is a very suitable approach for automating defect characterization in pulsed thermography.
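The network input described above is the temperature contrast, i.e. the defective-area cooling curve minus the sound-area cooling curve, smoothed before training. A minimal sketch using the moving-average filter the paper lists; the two cooling curves below are made-up numbers, not measured data.

```python
def moving_average(signal, window=3):
    """Centered moving-average smoothing; the window is truncated at the ends
    so the output has the same length as the input."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Illustrative cooling curves (degrees C over time after the heat pulse).
t_defect = [30.0, 28.5, 27.9, 27.6, 27.4]   # cools more slowly over the defect
t_sound  = [30.0, 27.0, 26.0, 25.6, 25.4]
contrast = [d - s for d, s in zip(t_defect, t_sound)]
print(moving_average(contrast))
```

In the paper the analogous contrast sequence spans 250 time points and, after filtering, becomes the regression input for the MLP/LSTM/GRU depth estimators.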

    Augmented classification for electrical coil winding defects

    A green revolution has accelerated over recent decades, looking to replace existing transportation power solutions through the adoption of greener electrical alternatives. In parallel, the digitisation of manufacturing has enabled progress in the tracking and traceability of processes and improvements in fault detection and classification. This paper explores electrical machine manufacture and the challenges faced in identifying failure modes during this life cycle, demonstrating state-of-the-art machine vision methods for the classification of electrical coil winding defects. We show how recent generative adversarial networks can be used to augment the training of these models to further improve their accuracy on this challenging task. Our approach utilises pre-processing and dimensionality reduction to boost performance over a standard convolutional neural network (CNN), leading to a significant increase in accuracy.
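The paper credits pre-processing and dimensionality reduction for much of its accuracy gain. As one minimal illustration of such a reduction step (an assumed example, not the paper's actual method), 2x2 average pooling shrinks an image fourfold before it reaches the classifier:

```python
def avg_pool2x2(image):
    """Downsample a 2D image (nested lists, even dimensions) by averaging
    each non-overlapping 2x2 block: a simple dimensionality reduction."""
    return [[(image[r][c] + image[r][c + 1]
              + image[r + 1][c] + image[r + 1][c + 1]) / 4
             for c in range(0, len(image[0]), 2)]
            for r in range(0, len(image), 2)]

img = [[4, 8, 0, 0],
       [0, 4, 0, 4],
       [2, 2, 6, 6],
       [2, 2, 6, 6]]
print(avg_pool2x2(img))  # [[4.0, 1.0], [2.0, 6.0]]
```

Reducing input dimensionality this way trades fine detail for fewer parameters and less overfitting, which is the usual motivation for pre-processing ahead of a small CNN.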

    State of AI-based monitoring in smart manufacturing and introduction to focused section

    Over the past few decades, intelligentization, supported by artificial intelligence (AI) technologies, has become an important trend for industrial manufacturing, accelerating the development of smart manufacturing. In modern industries, standard AI has been endowed with additional attributes, yielding the so-called industrial artificial intelligence (IAI) that has become the technical core of smart manufacturing. AI-powered manufacturing brings remarkable improvements in many aspects of closed-loop production chains, from manufacturing processes to end-product logistics. In particular, IAI incorporating domain knowledge has benefited the area of production monitoring considerably. Advanced AI methods such as deep neural networks, adversarial training, and transfer learning have been widely used to support both diagnostics and predictive maintenance of the entire production process. It is generally believed that IAI comprises the critical technologies needed to drive the future evolution of industrial manufacturing. This article offers a comprehensive overview of AI-powered manufacturing and its applications in monitoring. More specifically, it summarizes the key technologies of IAI and discusses their typical application scenarios with respect to three major aspects of production monitoring: fault diagnosis, remaining useful life prediction, and quality inspection. In addition, the existing problems and future research directions of IAI are also discussed. This article further introduces the papers in this focused section on AI-based monitoring in smart manufacturing by weaving them into the overview, highlighting how they contribute to and extend the body of literature in this area.