8 research outputs found

    Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders

    Convolutional autoencoders have emerged as popular methods for unsupervised defect segmentation on image data. Most commonly, this task is performed by thresholding a pixel-wise reconstruction error based on an ℓ^p distance. This procedure, however, leads to large residuals whenever the reconstruction exhibits slight localization inaccuracies around edges. It also fails to reveal defective regions that have been visually altered when intensity values stay roughly consistent. We show that these problems prevent these approaches from being applied to complex real-world scenarios and that they cannot be easily avoided by employing more elaborate architectures such as variational or feature-matching autoencoders. We propose to use a perceptual loss function based on structural similarity, which examines inter-dependencies between local image regions, taking into account luminance, contrast, and structural information, instead of simply comparing single pixel values. It achieves significant performance gains on a challenging real-world dataset of nanofibrous materials and a novel dataset of two woven fabrics over state-of-the-art approaches for unsupervised defect segmentation that use pixel-wise reconstruction error metrics.
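    A minimal sketch of the core idea, assuming a NumPy/SciPy environment: the residual map is 1 − SSIM computed over local windows rather than a per-pixel ℓ^p difference. The window size, stabilization constants, and threshold below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, win=11, c1=0.01**2, c2=0.03**2):
    """Per-pixel structural similarity between float images x and y in [0, 1]."""
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x**2
    var_y = uniform_filter(y * y, win) - mu_y**2
    cov = uniform_filter(x * y, win) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

def segment_defects(image, reconstruction, threshold=0.5):
    # Residual is structural dissimilarity, not |image - reconstruction|^p,
    # so slight edge misalignments yield small residuals while structural
    # changes with similar intensity values still stand out.
    residual = 1.0 - ssim_map(image, reconstruction)
    return residual > threshold  # boolean defect mask
```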

    Convolutional Sparse Support Estimator Based Covid-19 Recognition from X-ray Images

    Coronavirus disease (Covid-19) has been the main agenda of the whole world since it came into sight in December 2019. It has already caused thousands of casualties and infected several million people worldwide. Any technological tool that can be provided to healthcare practitioners to save time, effort, and possibly lives is of crucial importance. The main tools practitioners currently use to diagnose Covid-19 are Reverse Transcription-Polymerase Chain Reaction (RT-PCR) and Computed Tomography (CT), which require significant time, resources, and acknowledged experts. X-ray imaging is a common and easily accessible tool that has great potential for Covid-19 diagnosis. In this study, we propose a novel approach for Covid-19 recognition from chest X-ray images. Despite the importance of the problem, recent studies in this domain have produced unsatisfactory results due to the limited datasets available for training. While Deep Learning techniques can generally provide state-of-the-art performance in many classification tasks when trained properly over large datasets, such data scarcity can be a crucial obstacle when using them for Covid-19 detection. Alternative approaches such as representation-based classification (collaborative or sparse representation) might provide satisfactory performance with limited-size datasets, but they generally fall short in performance or speed compared to Machine Learning methods. To address this deficiency, the Convolutional Support Estimation Network (CSEN) has recently been proposed as a bridge between model-based and Deep Learning approaches, providing a non-iterative, real-time mapping from a query sample to the ideally sparse representation coefficients' support, which is critical information for the class decision in representation-based techniques.
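    A hedged sketch of the CSEN idea as summarized above: a compact convolutional network learns a direct, non-iterative mapping from a coarse proxy of the representation coefficients to a support probability map, and the class is read off from where the estimated support concentrates. The layer sizes, the proxy, and the class-block layout are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class CSENSketch(nn.Module):
    """Maps a proxy coefficient map (e.g. a reshaped least-squares estimate
    of the sparse code) to per-coefficient support probabilities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 48, 3, padding=1), nn.ReLU(),
            nn.Conv2d(48, 24, 3, padding=1), nn.ReLU(),
            nn.Conv2d(24, 1, 3, padding=1), nn.Sigmoid(),  # support probabilities
        )

    def forward(self, proxy):  # proxy: (batch, 1, H, W)
        return self.net(proxy)

def classify(support_map, class_masks):
    # Each class owns a block of dictionary atoms; score a class by the
    # total estimated support falling inside its block.
    scores = [float((support_map * m).sum()) for m in class_masks]
    return max(range(len(scores)), key=scores.__getitem__)
```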

    A comparative study of anomaly detection methods for gross error detection problems.

    The chemical industry requires highly accurate and reliable measurements to ensure smooth operation and effective monitoring of processing facilities. However, measured data inevitably contain errors from various sources. Traditionally in flow systems, data reconciliation through mass balancing is applied to reduce error by estimating balanced flows. However, this approach can only handle random errors. For non-random errors (called gross errors, GEs), which are caused by measurement bias, instrument failures, or process leaks, among others, this approach returns incorrect results. In recent years, many gross error detection (GED) methods have been proposed by the research community. It is recognised that the basic principle of GED is a special case of the detection of outliers (or anomalies) in data analytics. With developments in Machine Learning (ML) research, patterns in the data can be discovered to provide effective detection of anomalous instances. In this paper, we present a comprehensive study of the application of ML-based Anomaly Detection Methods (ADMs) in the GED context on a number of synthetic datasets and compare the results with several established GED approaches. We also perform data transformation on the measurement data and compare its associated results to the original results, as well as investigate the effects of training size on the detection performance. The One-class Support Vector Machine outperformed the other ADMs and five selected statistical tests for GED on Accuracy, F1 Score, and Overall Power, while the Interquartile Range (IQR) method obtained the best selectivity outcome among the top six ADMs and the five statistical tests. The results indicate that ADMs can potentially be applied to GED problems.
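    A minimal sketch of the two detectors the study found strongest, under stated assumptions: scikit-learn's OneClassSVM trained on gross-error-free measurement vectors, and a simple interquartile-range rule. The nu value and IQR multiplier below are conventional defaults, not the paper's tuned settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def iqr_outliers(values, k=1.5):
    """Flag measurements outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

def ocsvm_outliers(train, test, nu=0.05):
    """Train on clean (gross-error-free) samples only; -1 predictions on
    the test set mark suspected gross errors."""
    model = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(train)
    return model.predict(test) == -1
```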

    Multiple Surface Pipeline Leak Detection Using Real-Time Sensor Data Analysis

    Pipelines enable the largest volume of both domestic and international transportation of oil and gas and play a critical role in the energy sufficiency of countries. The biggest drawback of using pipelines for oil and gas transportation is the problem of oil spills whenever the pipelines lose containment. The severity of an oil spill on the environment is a function of the volume of the spill, which in turn is a function of the time taken to detect the leak and contain the spill. A single leak on the Enbridge pipeline spilled 3.3 million liters into the Kalamazoo River, while a pipeline rupture in North Dakota that went undetected for 143 days spilled 29 million gallons into the environment.
    Several leak detection systems (LDS) have been developed with the capacity for rapid detection and localization of pipeline leaks, but their characteristics limit their leak detection capability. Machine learning provides an opportunity to develop faster LDS, but it requires access to pipeline leak datasets that are proprietary in nature and not readily available. Current LDS have difficulty detecting low-volume/low-pressure spills located far away from the inlet and outlet pressure sensors. Some reasons for this include the following: the leak-induced pressure variation generated by such leaks is dissipated before it reaches the inlet and outlet pressure sensors; and LDS are designed for a specific minimum detection level, defined as a percentage of the pipeline's flow volume, so a leak that falls below this value will not be detected. Perturbations generated by small-volume leaks are often within the threshold values of the pipeline's normal operational envelope, so the LDS disregards them. These challenges have allowed pipeline leaks to continue for weeks, only to be detected by third-party persons in the vicinity of the leaks.
    This research developed a framework for generating pipeline datasets using the PIPESIM software and the RAND function in Python. The topological data of the pipeline right of way, the pipeline network design specification, and the fluid flow properties are the required inputs for this framework. With this information, leaks can be simulated at any point on the pipeline and the datasets generated. This framework will facilitate the generation of a one-class dataset for the pipeline, which can be used to develop LDS using machine learning. The research also developed a leak detection topology for detecting low-volume leaks. This topology comprises the installation of a pressure sensor with remote data transmission capacity at the midpoint of the line. The sensor uses an exception-based transmission scheme, transmitting only when the new data value differs from the last transmitted value, which extends the battery life of the sensor. Installing the sensor at the midpoint of the line was found to increase the sensitivity of the LDS to leak-induced pressure variations that were traditionally dissipated before reaching the inlet/outlet sensors. The research also proposed a Leak Detection as a Service (LDaaS) platform, where the pressure data from the inlet and midpoint sensors are collated and subjected to a specially developed leak detection algorithm for the detection of pipeline leaks.
    This leak detection topology will enable operators to detect low-volume/low-pressure leaks that would have been missed by existing leak detection systems and to deploy oil spill response plans more quickly, thus reducing the volume of oil spilled into the environment. It will also provide a platform for regulators to monitor leak alerts as they are generated and enable them to evaluate the oil spill response plans of the operators.
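    A minimal sketch of the exception-based (report-by-exception) transmission scheme described for the midpoint sensor, assuming a simple deadband rule; the deadband width and the transmit callback are illustrative, not the dissertation's implementation.

```python
class ExceptionReportingSensor:
    """Transmits a pressure reading only when it differs materially from
    the last transmitted value, conserving the sensor's battery."""
    def __init__(self, deadband=0.5):
        self.deadband = deadband   # minimum change worth transmitting
        self.last_sent = None

    def on_reading(self, pressure, transmit):
        # Report by exception: stay silent while the reading sits inside
        # the deadband around the last transmitted value.
        if self.last_sent is None or abs(pressure - self.last_sent) > self.deadband:
            transmit(pressure)
            self.last_sent = pressure

# Usage: sensor = ExceptionReportingSensor(); sensor.on_reading(312.4, radio_send)
```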

    Detecting Anomalous Structures by Convolutional Sparse Models

    Accepted version; peer reviewed.

    Über lernende optische Inspektion am Beispiel der Schüttgutsortierung (On Learning-Based Optical Inspection, Using Bulk Material Sorting as an Example)

    Automatic optical inspection plays an important role as a non-destructive analysis method in modern industrial manufacturing processes. Typical commercially deployed automatic inspection systems are specially adapted to their particular task and are very costly to develop and commission. Moreover, a lack of system knowledge on the part of users can degrade inspection performance in industrial use. Machine learning methods offer an alternative: users merely provide a sample set and the system configures itself. These methods can also uncover hidden relationships in the data and thereby support the design of inspection systems. This thesis is concerned with suitable learning methods for optical inspection. Bulk material sorting, which serves as the example application, sets the framework conditions: the image acquisition conditions are controlled and the object appearance is simple. At the same time, the objects sometimes show only a few discriminative features. The training samples are small, unbalanced, and often incomplete with respect to the possible defect classes. In addition, the available computing time is severely limited.
    Taking these particular conditions into account, this thesis develops learning methods for the pattern recognition steps of image acquisition, feature extraction, and classification. The design of the image acquisition is supported by the automatic selection of optical filters that emphasize discriminative features. Unlike comparable approaches, the method described here allows the selection of optical filters with arbitrarily complicated transmission curves. Since relevant features are the basic prerequisite for successful classification, feature extraction takes up a large part of this work. Such features can, for example, be identified within a set of standard features. In bulk material sorting, however, the computational cost of feature extraction matters in addition to relevance. This thesis therefore describes a feature selection method that takes this cost into account (sketched after this abstract). Methods are also investigated for adapting features to a given sorting problem with the help of a training sample. To this end, two methods are described for learning shape features and for learning color and texture features, respectively. Both methods combine simple, quickly computable, but weakly discriminative features into highly discriminative descriptors. The method for learning color and texture descriptors also allows the detection and rejection of unknown objects. This rejection option is parameterized in the manner of statistical tests, making it easy for users to understand. Detecting unknown objects is also the goal of one-class classification. For this purpose, a method is described that determines the classifier from a training sample containing only examples of the positive class. The structure of this classifier is further exploited to reliably reject unknown objects orders of magnitude faster than is possible with alternative methods. All presented methods are quantitatively evaluated on synthetic datasets and on datasets from food inspection, mineral sorting, and the inspection of technical objects.
    A comparison with related methods from the literature highlights the strengths and limitations of the presented methods. All of them proved well suited to bulk material sorting, and they complement one another: they can be used to design a complete sorting system or be deployed individually as components of an existing system. The methods are not tailored to one specific use case but are applicable to a wide range of products. This thesis thus contributes to the application of machine learning methods in optical inspection systems.
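    A minimal sketch of the cost-aware feature selection idea mentioned above, under stated assumptions: evaluate is a placeholder for a cross-validated accuracy estimate and costs holds measured per-feature computation times; the greedy gain-per-cost criterion is an illustrative choice, not the thesis's exact algorithm.

```python
def cost_aware_selection(n_features, evaluate, costs, budget):
    """Greedily add the feature with the best accuracy gain per unit of
    computation cost until the per-object compute budget is spent."""
    selected, remaining = [], list(range(n_features))
    best_acc, spent = 0.0, 0.0
    while remaining:
        candidates = []
        for f in remaining:
            acc = evaluate(selected + [f])  # e.g. cross-validated accuracy
            gain = (acc - best_acc) / max(costs[f], 1e-9)
            candidates.append((gain, f, acc))
        gain, f, acc = max(candidates)
        if gain <= 0 or spent + costs[f] > budget:
            break  # no useful feature fits the remaining budget
        selected.append(f)
        remaining.remove(f)
        best_acc, spent = acc, spent + costs[f]
    return selected
```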