57 research outputs found

    Deep Industrial Image Anomaly Detection: A Survey

    Full text link
    The recent rapid development of deep learning has been a milestone for industrial Image Anomaly Detection (IAD). In this paper, we provide a comprehensive review of deep learning-based image anomaly detection techniques from the perspectives of neural network architectures, levels of supervision, loss functions, metrics, and datasets. In addition, we extract a new setting from industrial manufacturing and review current IAD approaches under this proposed setting. Moreover, we highlight several open challenges for image anomaly detection. The merits and downsides of representative network architectures under varying supervision are discussed. Finally, we summarize the research findings and point out future research directions. More resources are available at https://github.com/M-3LAB/awesome-industrial-anomaly-detection

    A Survey on Unsupervised Anomaly Detection Algorithms for Industrial Images

    Full text link
    In line with the development of Industry 4.0, surface defect detection/anomaly detection has become a topical subject in industry. Improving efficiency and saving labor costs have steadily become matters of great concern in practice, and in recent years deep learning-based algorithms have performed better than traditional vision inspection methods. However, existing deep learning-based algorithms are biased towards supervised learning, which not only requires a huge amount of labeled data and human labor but also introduces inefficiencies and limitations. In contrast, recent research shows that unsupervised learning has great potential in tackling these disadvantages for visual industrial anomaly detection. In this survey, we summarize current challenges and provide a thorough overview of recently proposed unsupervised algorithms for visual industrial anomaly detection covering five categories, whose innovation points and frameworks are described in detail. Meanwhile, publicly available datasets for industrial anomaly detection are introduced. By comparing different classes of methods, the advantages and disadvantages of anomaly detection algorithms are summarized. Based on the current research framework, we point out the core issue that remains to be resolved and provide directions for further improvement. Finally, based on the latest technological trends, we offer insights into future research directions. This survey is expected to assist both the research community and industry in developing a broader and cross-domain perspective.

    Automatic detection of leather defects (Deteção automática de defeitos em couro)

    Get PDF
    Master's dissertation in Informatics Engineering. This dissertation addresses the problem of leather defect detection, a task traditionally performed manually by experienced assessors inspecting the leather. Because this task is slow and prone to human error, the last 20 years have seen a sustained search for solutions that automate it, and several approaches capable of solving the problem effectively have emerged, using Machine Learning and Computer Vision techniques. All of them, however, require a large dataset that is labeled and balanced across categories. This dissertation therefore aims to automate the traditional process using Machine Learning techniques without resorting to large annotated datasets. To this end, Novelty Detection techniques are explored, which address the defect inspection task using a small, unbalanced, unsupervised dataset. The following Novelty Detection techniques were analyzed and tested: MSE Autoencoder, SSIM Autoencoder, CFLOW, STFPM, Reverse, and DRAEM. These techniques were trained and tested on two different datasets: MVTEC and Neadvance. The analyzed techniques detect and localize most defects in the MVTEC images but have difficulty detecting defects in the Neadvance samples. Based on the results obtained, the best methodology is proposed for three different scenarios. Where available computational power is low, the SSIM Autoencoder should be the technique used. Where there is sufficient computational power and the samples to inspect are of a single color, DRAEM should be chosen. In any other case, STFPM should be the chosen option.
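
    Most of the compared techniques score anomalies via reconstruction error: a model trained only on defect-free samples reconstructs normal texture well and fails on defects. As a rough illustration of the MSE Autoencoder variant, here is a minimal PyTorch sketch; the architecture, input size, and scoring rule are our assumptions, since the abstract does not specify the dissertation's exact models (the SSIM Autoencoder would swap the squared-error map for an SSIM map).

        import torch
        import torch.nn as nn

        class ConvAutoencoder(nn.Module):
            """Illustrative convolutional autoencoder for novelty detection.

            Trained only on defect-free patches; at test time, regions the
            model cannot reconstruct well are flagged as potential defects.
            """
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def mse_anomaly_map(model, image):
            """Per-pixel squared reconstruction error; high values suggest defects."""
            with torch.no_grad():
                recon = model(image)
            return ((image - recon) ** 2).mean(dim=1)  # average over channels

        # Usage: score a test patch with a (stand-in, untrained) model.
        model = ConvAutoencoder().eval()
        patch = torch.rand(1, 3, 256, 256)      # stand-in for a leather patch
        amap = mse_anomaly_map(model, patch)    # (1, 256, 256) anomaly heat map
        score = amap.amax().item()              # image-level score: worst pixel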

    Exploiting CNN’s visual explanations to drive anomaly detection

    Get PDF
    Nowadays, deep learning is a key technology for many industrial applications such as anomaly detection. The role of Machine Learning (ML) in this field relies on the ability to train a network to inspect images and determine the presence or absence of anomalies. Frequently, in Industry 4.0 anomaly detection tasks, the images to be analyzed are not optimal, since they contain edges or areas that are not of interest and could lead the network astray. This study therefore aims at identifying a systematic way to train a neural network so that it focuses only on the area of interest. The study is based on the definition of a loss, applied in the training phase of the network, that uses masks to give higher weight to anomalies identified within the area of interest. The idea is to add an Overlap Coefficient to the standard cross-entropy: the further the identified anomaly lies outside the Area of Interest (AOI), the greater the loss. We call the resulting loss Cross-Entropy Overlap Distance (CEOD). The advantage of adding the masks in the training phase is that the network is forced to learn and recognize defects only in the area circumscribed by the mask; the added benefit is that, during inference, these masks are no longer needed. Therefore, there is no difference, in terms of execution time, between a standard Convolutional Neural Network (CNN) and a network trained with this loss. In some applications, the masks themselves are determined at run time by a trained segmentation network, as we have done, for instance, in the "Machine learning for visual inspection and quality control" project, funded by the MISE Competence Center Bi-REX.
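
    The abstract specifies CEOD only as standard cross-entropy plus a term that grows as the detected anomaly falls outside the AOI mask. A minimal PyTorch sketch of that pattern follows; the exact form of the overlap coefficient and the weighting factor lam are our assumptions, not the authors' published formulation.

        import torch
        import torch.nn.functional as F

        def ceod_loss(logits, target, aoi_mask, lam=1.0, eps=1e-6):
            """Sketch of a Cross-Entropy Overlap Distance (CEOD)-style loss.

            logits:   (N, C, H, W) raw per-pixel class scores (class 1 = anomaly)
            target:   (N, H, W)    ground-truth labels
            aoi_mask: (N, H, W)    1 inside the Area of Interest, 0 outside
            The penalty rises as predicted anomaly mass leaves the AOI.
            """
            ce = F.cross_entropy(logits, target)
            anomaly_prob = logits.softmax(dim=1)[:, 1]     # P(anomaly) per pixel
            inside = (anomaly_prob * aoi_mask).sum()
            overlap = inside / (anomaly_prob.sum() + eps)  # 1 = fully inside AOI
            return ce + lam * (1.0 - overlap)

        # Usage with random tensors, just to show the shapes involved.
        logits = torch.randn(2, 2, 64, 64, requires_grad=True)
        target = torch.randint(0, 2, (2, 64, 64))
        aoi = torch.ones(2, 64, 64)                # masks come from annotation
        ceod_loss(logits, target, aoi).backward()  # or a segmentation network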

    Impurities Detection in Intensity Inhomogeneous Edible Bird’s Nest (EBN) Using a U-Net Deep Learning Model

    Get PDF
    As edible bird's nest (EBN) is an important export, cleanliness control is paramount. Automatic impurity detection is urgently needed to replace manual practices, but an effective detection algorithm has yet to be developed owing to the unresolved inhomogeneous optical properties of EBN. The objective of this work is to develop a novel U-net based algorithm for accurate impurity detection. The algorithm leverages the convolution mechanisms of U-net for precise and localized feature extraction, and output probability tensors are then generated from the deconvolution layers for impurity detection and positioning. The U-net based algorithm outperformed previous image processing-based methods, with a higher impurities detection rate of 96.69% and a lower misclassification rate of 10.08%. The applicability of the algorithm was further confirmed by a reasonably high dice coefficient of more than 0.8. In conclusion, the developed U-net based algorithm successfully mitigated intensity inhomogeneity in EBN and improved the impurities detection rate.
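
    As a rough illustration of the convolution/deconvolution pipeline described above, here is a minimal U-Net-style segmenter in PyTorch; the depth, channel counts, and input size are illustrative assumptions, not the authors' configuration.

        import torch
        import torch.nn as nn

        class TinyUNet(nn.Module):
            """Minimal U-Net-style model: convolutions extract localized
            features, a deconvolution restores resolution, and a skip
            connection preserves positional detail for the per-pixel
            impurity probability map."""
            def __init__(self):
                super().__init__()
                self.down = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
                self.pool = nn.MaxPool2d(2)
                self.mid = nn.Sequential(
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
                self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
                self.head = nn.Conv2d(32, 1, 1)  # 1-channel probability map

            def forward(self, x):
                d = self.down(x)
                m = self.mid(self.pool(d))
                u = self.up(m)
                skip = torch.cat([u, d], dim=1)  # skip connection
                return torch.sigmoid(self.head(skip))

        # A grayscale EBN image in, per-pixel impurity probabilities out.
        probs = TinyUNet()(torch.rand(1, 1, 128, 128))  # (1, 1, 128, 128)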

    Effective Transfer of Pretrained Large Visual Model for Fabric Defect Segmentation via Specific Knowledge Injection

    Full text link
    Fabric defect segmentation is integral to textile quality control. Despite this, the scarcity of high-quality annotated data and the diversity of fabric defects present significant challenges to the application of deep learning in this field. These factors limit the generalization and segmentation performance of existing models, impeding their ability to handle the complexity of diverse fabric types and defects. To overcome these obstacles, this study introduces an innovative method to infuse specialized knowledge of fabric defects into the Segment Anything Model (SAM), a large-scale visual model. By introducing and training a unique set of fabric defect-related parameters, this approach seamlessly integrates domain-specific knowledge into SAM without extensive modifications to the pre-existing model parameters. The revamped SAM model leverages the generalized image understanding learned from large-scale natural image datasets while incorporating fabric defect-specific knowledge, ensuring its proficiency in fabric defect segmentation tasks. The experimental results reveal a significant improvement in the model's segmentation performance, attributable to this novel amalgamation of generic and fabric-specific knowledge. When benchmarked against popular existing segmentation models across three datasets, our proposed model demonstrates a substantial leap in performance. Its impressive results in cross-dataset comparisons and few-shot learning experiments further demonstrate its potential for practical applications in textile quality control.
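
    The abstract describes training a small set of fabric-specific parameters while leaving SAM's pretrained weights untouched. The sketch below shows the generic frozen-backbone-plus-adapter pattern this implies; the paper's actual injection points and adapter design are not given in the abstract, so everything here is an illustrative assumption.

        import torch
        import torch.nn as nn

        class Adapter(nn.Module):
            """Small bottleneck module holding the domain-specific parameters."""
            def __init__(self, dim, bottleneck=32):
                super().__init__()
                self.proj = nn.Sequential(
                    nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))

            def forward(self, x):
                return x + self.proj(x)  # residual: starts near identity

        class AdaptedBlock(nn.Module):
            """Wraps a frozen pretrained block with a trainable adapter."""
            def __init__(self, frozen_block, dim):
                super().__init__()
                self.block = frozen_block
                for p in self.block.parameters():
                    p.requires_grad = False      # keep pretrained knowledge intact
                self.adapter = Adapter(dim)      # only these weights are trained

            def forward(self, x):
                return self.adapter(self.block(x))

        # Stand-in for one frozen block of a large pretrained encoder.
        pretrained = nn.Linear(256, 256)
        block = AdaptedBlock(pretrained, dim=256)
        trainable = [p for p in block.parameters() if p.requires_grad]
        out = block(torch.rand(4, 256))  # only adapter weights receive gradients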

    Application of deep learning methods in materials microscopy for the quality assessment of lithium-ion batteries and sintered NdFeB magnets

    Get PDF
    Quality control focuses on detecting product defects and monitoring activities to verify that products meet the desired quality standard. Many quality-control approaches use specialized image-processing software based on hand-crafted features, designed by domain experts to detect objects and analyze images. These models, however, are tedious and costly to develop and hard to maintain, while the resulting solution is often brittle and requires substantial adaptation for even slightly different use cases. For these reasons, quality control in industry is still frequently performed manually, which is time-consuming and error-prone. We therefore propose a more general, data-driven approach based on recent advances in computer vision, using convolutional neural networks to learn representative features directly from the data. While conventional methods use hand-crafted features to detect individual objects, deep learning approaches learn generalizable features directly from the training samples in order to detect various objects. This dissertation develops models and techniques for the automated detection of defects in light-microscopy images of materialographically prepared sections. We develop defect-detection models that can be broadly divided into supervised and unsupervised deep learning techniques. In particular, several supervised deep learning models are developed for detecting defects in the microstructure of lithium-ion batteries, ranging from binary classification models based on a sliding-window approach with limited training data to complex defect detection and localization models based on one- and two-stage detectors. Our final model can detect and localize multiple classes of defects in large microscopy images with high accuracy and in near real time. Successfully training supervised deep learning models, however, usually requires a sufficiently large amount of labeled training examples, which are often not readily available and can be very costly to obtain. We therefore propose two approaches based on unsupervised deep learning for detecting anomalies in the microstructure of sintered NdFeB magnets without the need for labeled training data. These models detect defects by learning, from the training data, indicative features of only "normal" microstructure patterns. We show experimental results of the proposed defect detection systems by performing a quality assessment on commercial samples of lithium-ion batteries and sintered NdFeB magnets.
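
    As a rough sketch of the first supervised approach mentioned above, the snippet below scans a large microscopy image with a binary defect classifier using a sliding window; the window size, stride, and classifier are illustrative assumptions, not the dissertation's models.

        import torch
        import torch.nn as nn

        def sliding_window_defect_map(classifier, image, win=64, stride=32):
            """Score every window of a large grayscale microscopy image with a
            binary defect/no-defect classifier, producing a coarse defect-
            probability grid over the image."""
            _, h, w = image.shape
            rows = (h - win) // stride + 1
            cols = (w - win) // stride + 1
            heat = torch.zeros(rows, cols)
            with torch.no_grad():
                for i in range(rows):
                    for j in range(cols):
                        patch = image[:, i*stride:i*stride+win,
                                         j*stride:j*stride+win]
                        logit = classifier(patch.unsqueeze(0))    # (1, 1)
                        heat[i, j] = torch.sigmoid(logit).item()  # P(defect)
            return heat

        # Stand-in classifier: any CNN ending in a single defect logit would do.
        clf = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
        heat = sliding_window_defect_map(clf.eval(), torch.rand(1, 256, 256))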

    On Deep Machine Learning Methods for Anomaly Detection within Computer Vision

    Get PDF
    This thesis concerns deep learning approaches for anomaly detection in images. Anomaly detection addresses how to find any kind of pattern that differs from the regularities found in normal data, and it is receiving increasing attention in deep learning research, due in part to its wide set of potential applications, ranging from automated CCTV surveillance to quality control across a range of industries. We introduce three original methods for anomaly detection, applicable to two specific deployment scenarios. In the first, we detect anomalous activity in potentially crowded scenes through imagery captured via CCTV or other video recording devices. In the second, we segment defects in textures and demonstrate use cases representative of automated quality inspection on industrial production lines. In the context of detecting anomalous activity in scenes, we take an existing state-of-the-art method and introduce several enhancements, including the use of a region proposal network for region extraction and a more information-preserving feature preprocessing strategy. This results in a simpler method that is significantly faster and suitable for real-time application. In addition, the increased efficiency facilitates building higher-dimensional models capable of improved anomaly detection performance, which we demonstrate on the pedestrian-based UCSD Ped2 dataset. In the context of texture defect detection, we introduce a method based on the idea of texture restoration that surpasses all state-of-the-art methods on the texture classes of the challenging MVTecAD dataset. In the same context, we additionally introduce a method that utilises transformer networks for future pixel and feature prediction. This novel method performs competitive anomaly detection on most of the challenging MVTecAD texture classes and illustrates both the promise and the limitations of state-of-the-art deep learning transformers for texture anomaly detection.
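
    The texture-restoration idea can be sketched as follows: hide a region, let a model trained only on normal textures restore it from context, and score the residual. The patch-wise zero-masking and the stand-in restorer below are our assumptions; the thesis's actual restoration models are not detailed in the abstract.

        import torch
        import torch.nn as nn

        def restoration_anomaly_map(restorer, image, patch=32):
            """Mask each patch in turn, restore it from context, and use the
            restoration residual as the anomaly score. A model trained only
            on normal textures restores normal regions well but fails on
            defects."""
            _, _, h, w = image.shape
            amap = torch.zeros(1, h, w)
            with torch.no_grad():
                for top in range(0, h, patch):
                    for left in range(0, w, patch):
                        masked = image.clone()
                        masked[:, :, top:top+patch, left:left+patch] = 0.0
                        restored = restorer(masked)
                        diff = (image - restored).abs().mean(dim=1)  # (1, h, w)
                        amap[:, top:top+patch, left:left+patch] = \
                            diff[:, top:top+patch, left:left+patch]
            return amap

        # Stand-in restorer; in practice, a network trained on normal textures.
        restorer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
        amap = restoration_anomaly_map(restorer.eval(), torch.rand(1, 3, 128, 128))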

    Explainable Deep Learning

    Get PDF
    The great success that Machine Learning and Deep Learning have achieved in areas that are strategic for our society, such as industry, defence, and medicine, has led more and more organizations to invest in and explore the use of this technology. Machine Learning and Deep Learning algorithms and learned models can now be found in almost every area of our lives, from phones to smart home appliances to the cars we drive, so it can be said that this pervasive technology is now in touch with our lives, and we must therefore engage with it. This is why eXplainable Artificial Intelligence (XAI) was born, one of the research trends currently in vogue in Deep Learning and Artificial Intelligence. The idea behind this line of research is to design new Deep Learning algorithms so that they are trustworthy, interpretable, and comprehensible to humans. This need arises precisely because neural networks, the mathematical model underlying Deep Learning, act like a black box, making the internal reasoning they carry out to reach a decision incomprehensible and untrustworthy to humans. As we delegate more and more important decisions to these mathematical models, integrating them into the most delicate processes of our society, such as medical diagnosis, autonomous driving, or legal processes, it is very important to be able to understand the motivations that lead them to produce certain results. The work presented in this thesis consists of studying and testing Deep Learning algorithms integrated with symbolic Artificial Intelligence techniques. This integration has a twofold purpose: to make the models more powerful, enabling them to carry out reasoning or constraining their behaviour in complex situations, and to make them interpretable. The thesis focuses on two macro topics: the explanations obtained through neuro-symbolic integration, and the exploitation of explanations to make Deep Learning algorithms more capable or intelligent. Neuro-symbolic integration was addressed in two ways. A first approach was to create a system to guide the training of the networks themselves, finding the best combination of hyper-parameters to automate the design of these networks. This is done by integrating neural networks with Probabilistic Logic Programming (PLP), which makes it possible to exploit probabilistic rules tuned by the behaviour of the networks during the training phase or inherited from the experience of experts in the field. These rules are triggered when a problem occurs during network training, yielding an explanation of what was done to improve the training once a particular issue was identified. A second approach was to make probabilistic logic systems cooperate with neural networks for medical diagnosis on heterogeneous data sources. The second topic addressed in this thesis concerns the exploitation of explanations: in particular, the explanations one can obtain from neural networks are used to create attention modules that help constrain and improve the performance of neural networks. All works developed during the PhD and described in this thesis have led to the publications listed in Chapter 14.2.
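
    The abstract does not describe the form of these attention modules; as a generic sketch of the underlying idea, the snippet below derives a gradient-based saliency map from a network's own prediction and feeds it back as a multiplicative attention mask. All design choices here (gradient saliency, per-image normalization, input re-weighting) are assumptions, not the thesis's exact method.

        import torch
        import torch.nn as nn

        class ExplanationAttention(nn.Module):
            """Explanation-driven attention: a saliency map derived from the
            network's own gradients is normalized and used to re-weight the
            input, steering the model toward the evidence behind its
            decisions."""
            def __init__(self, model):
                super().__init__()
                self.model = model

            def forward(self, x):
                x = x.clone().requires_grad_(True)
                out = self.model(x)
                score = out.max(dim=1).values.sum()        # top-class score
                sal, = torch.autograd.grad(score, x, create_graph=True)
                sal = sal.abs().amax(dim=1, keepdim=True)  # per-pixel saliency
                sal = sal / (sal.amax(dim=(2, 3), keepdim=True) + 1e-6)
                return self.model(x * sal)     # re-run, attending to saliency

        # Stand-in classifier; any CNN producing class logits would do.
        model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5))
        logits = ExplanationAttention(model)(torch.rand(2, 3, 32, 32))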