
    An architecture to predict anomalies in industrial processes

    Dissertation presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. The Internet of Things (IoT) and machine learning (ML) algorithms are enabling a revolutionary shift towards digitization in numerous areas, benefiting Industry 4.0 in particular. Predictive maintenance based on machine learning models is increasingly used to protect industrial assets. This work proposes an architecture for predicting anomalies in industrial processes that can guide SMEs in implementing an IIoT architecture for predictive maintenance (PdM). The research was conducted to understand which machine learning architectures and models are generally used by industry for PdM. An overview of the concepts of the Industrial Internet of Things (IIoT), machine learning (ML), and predictive maintenance (PdM) is provided, and a systematic literature review clarifies their applications and the technologies that enable their use. The review revealed that PdM applications are increasingly common and that many studies address the development of new ML techniques. A survey confirmed the usefulness of the proposed artifact and showed the need for an architecture to guide the implementation of PdM. This research can help SMEs become more efficient and reduce both production and maintenance costs, allowing them to keep up with multinational companies.
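    To make the PdM idea concrete, here is a minimal, hedged sketch of unsupervised anomaly detection on machine sensor data, a common building block of predictive-maintenance pipelines; the sensor features, simulated values, and contamination rate are illustrative assumptions, not part of the dissertation's architecture.

```python
# Minimal sketch: unsupervised anomaly detection on machine sensor data,
# a common building block of predictive-maintenance (PdM) pipelines.
# The sensor names, simulated values, and contamination rate are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated healthy operation: temperature, vibration, current draw.
healthy = rng.normal(loc=[60.0, 0.5, 10.0], scale=[2.0, 0.05, 0.5], size=(1000, 3))

# Fit on data assumed to be mostly normal; contamination is the expected anomaly rate.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(healthy)

# New readings: one typical, one with excessive vibration and heat.
new_readings = np.array([[61.0, 0.52, 10.1],
                         [78.0, 1.40, 12.5]])
print(model.predict(new_readings))  # 1 = normal, -1 = anomaly
```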

    A manifold learning approach to target detection in high-resolution hyperspectral imagery

    Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying targets such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m ≪ d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well documented. With manifold-learning-based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, a graph-theory-based approach to anomaly detection. This led towards a focus on target detection and the development of a specific graph-based model of the data with subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Target detection is then performed in the manifold space.
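    As a rough illustration of the dimensionality-reduction step described above, here is a minimal sketch using scikit-learn's standard locally linear embedding; the adaptive graph construction and artificial target manifold of the actual method are not reproduced, and the data are synthetic.

```python
# Minimal sketch: reducing high-dimensional "spectral" vectors to a
# low-dimensional manifold with locally linear embedding (LLE).
# This uses scikit-learn's standard LLE, not the adaptive variant
# developed in the research; the data here are synthetic.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Fake "pixels": 500 spectra with d = 200 bands lying near a 2-D surface.
t = rng.uniform(0, 1, size=(500, 2))
basis = rng.normal(size=(2, 200))
spectra = np.tanh(t @ basis) + 0.01 * rng.normal(size=(500, 200))

# Embed into m = 2 dimensions using a k-nearest-neighbour graph.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
embedded = lle.fit_transform(spectra)
print(embedded.shape)  # (500, 2): manifold coordinates for detection
```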

    A Survey on Unsupervised Anomaly Detection Algorithms for Industrial Images

    In line with the development of Industry 4.0, surface defect detection/anomaly detection has become a topical subject in industry. Improving efficiency and saving labor costs have steadily become matters of great practical concern, and deep learning-based algorithms have outperformed traditional vision inspection methods in recent years. However, existing deep learning-based algorithms are biased towards supervised learning, which not only necessitates a huge amount of labeled data and human labor but also introduces inefficiencies and limitations. In contrast, recent research shows that unsupervised learning has great potential in tackling these disadvantages for visual industrial anomaly detection. In this survey, we summarize current challenges and provide a thorough overview of recently proposed unsupervised algorithms for visual industrial anomaly detection covering five categories, describing their innovations and frameworks in detail. We also introduce publicly available datasets for industrial anomaly detection. By comparing different classes of methods, we summarize the advantages and disadvantages of anomaly detection algorithms. Based on the current research framework, we point out the core issue that remains to be resolved and provide directions for further improvement. Based on the latest technological trends, we also offer insights into future research directions. This survey is expected to assist both the research community and industry in developing a broader and cross-domain perspective.
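    Many of the unsupervised methods the survey covers follow a reconstruction paradigm: train a model on defect-free images only, then flag inputs that reconstruct poorly. Below is a hedged, minimal sketch of that paradigm with a tiny autoencoder; the architecture, toy data, and scoring are illustrative assumptions, not any specific surveyed method.

```python
# Minimal sketch of the reconstruction paradigm used by many unsupervised
# industrial anomaly detectors: train an autoencoder on defect-free images,
# then flag inputs whose reconstruction error is large. Architecture and
# data are illustrative assumptions, not from any surveyed method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "images": 32x32 grayscale, flattened.
normal_train = torch.rand(256, 32 * 32) * 0.1          # low-texture, defect-free
test_ok = torch.rand(1, 32 * 32) * 0.1
test_defect = test_ok.clone()
test_defect[0, 300:340] = 1.0                          # bright scratch

model = nn.Sequential(
    nn.Linear(32 * 32, 64), nn.ReLU(),
    nn.Linear(64, 32 * 32), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):                                    # short training loop
    recon = model(normal_train)
    loss = ((recon - normal_train) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    for name, x in [("ok", test_ok), ("defect", test_defect)]:
        err = ((model(x) - x) ** 2).mean().item()
        print(name, round(err, 5))                      # defect should score higher
```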

    Data-efficient reinforcement learning with self-predictive representations

    Data efficiency remains a key challenge in deep reinforcement learning. Although modern techniques have been shown to be capable of attaining high performance in extremely complex tasks, including strategy games such as StarCraft, Chess, Shogi, and Go, as well as in challenging visual domains such as Atari games, doing so generally requires enormous amounts of interaction data, limiting how broadly reinforcement learning can be applied. In this thesis, we propose SPR, a method drawing from recent advances in self-supervised representation learning, designed to enhance the data efficiency of deep reinforcement learning agents. We evaluate this method on the Atari Learning Environment and show that it dramatically improves performance with limited computational overhead. When given roughly the same amount of learning time as human testers, a reinforcement learning agent augmented with SPR achieves super-human performance on 7 out of 26 games, an increase of 350% over the previous state of the art, while also strongly improving mean and median performance. We also evaluate this method on a set of continuous control tasks, showing substantial improvements over previous methods. Chapter 1 introduces concepts necessary to understand the work presented, including overviews of deep reinforcement learning and self-supervised representation learning. Chapter 2 contains a detailed description of our contributions towards leveraging self-supervised representation learning to improve data efficiency in reinforcement learning. Chapter 3 provides some conclusions drawn from this work, including a number of proposals for future work.
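    As a rough, hedged sketch of the self-predictive idea (an online encoder plus a transition model predicts future latent representations and is matched against a slowly updated target encoder, as in SPR-like methods), consider the following; the network sizes, toy data, and EMA coefficient are illustrative assumptions, not the thesis's exact architecture.

```python
# Minimal sketch of a self-predictive representation loss: an online
# encoder plus a transition model predicts the latent of the next
# observation, and is trained to match a slowly updated (EMA) target
# encoder. Sizes and coefficients are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
obs_dim, act_dim, latent_dim = 16, 4, 32

online_encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU(),
                               nn.Linear(latent_dim, latent_dim))
transition = nn.Linear(latent_dim + act_dim, latent_dim)   # predicts next latent
target_encoder = copy.deepcopy(online_encoder)             # EMA copy, no gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(online_encoder.parameters()) +
                       list(transition.parameters()), lr=1e-3)

obs = torch.randn(64, obs_dim)        # toy batch of transitions
action = torch.randn(64, act_dim)
next_obs = obs + 0.1 * torch.randn(64, obs_dim)

pred = transition(torch.cat([online_encoder(obs), action], dim=-1))
with torch.no_grad():
    target = target_encoder(next_obs)

# Negative cosine similarity between predicted and target latents.
loss = -F.cosine_similarity(pred, target, dim=-1).mean()
opt.zero_grad(); loss.backward(); opt.step()

# EMA update of the target encoder (tau chosen arbitrarily here).
tau = 0.99
with torch.no_grad():
    for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
        p_t.mul_(tau).add_((1 - tau) * p_o)
```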

    Data-driven solutions to enhance planning, operation and design tools in Industry 4.0 context

    This thesis proposes three different data-driven solutions to be combined with state-of-the-art solvers and tools, primarily to enhance their computational performance. The problem of efficiently designing the open-sea floating platforms on which wind turbines can be mounted is tackled, as well as the tuning of a data-driven engine-monitoring tool for maritime transportation. Finally, the activities of SAT and ASP solvers are thoroughly studied, and a deep learning architecture is proposed to enhance the heuristics-based solving approach adopted by such software. The covered domains differ, as do their respective targets. Nonetheless, the proposed Artificial Intelligence and Machine Learning algorithms are shared, as is the overall goal: promoting Industrial AI and meeting the constraints imposed by the Industry 4.0 vision. A reduced reliance on humans in the loop, a data-driven approach to discovering causalities otherwise ignored, special attention to the environmental impact of industrial emissions, and real, efficient exploitation of the Big Data available today are just a subset of these constraints. Hence, from a broader perspective, the experiments carried out within this thesis are driven towards the aforementioned targets, and the resulting outcomes are satisfactory enough to potentially convince the research community and industrialists that these are not just "visions" but can actually be put into practice. However, this work is still an introduction to the topic, and the developed models are at what can be called a "pilot" stage. Nonetheless, the results are promising, and they pave the way towards further improvements and the consolidation of the dictates of Industry 4.0.

    A statistical approach to automated detection of multi-component radio sources

    Advances in radio astronomy are allowing deeper and wider areas of the sky to be observed than ever before. Source counts of future radio surveys are expected to number in the tens of millions. Source finding techniques are used to identify sources in a radio image; however, these techniques identify single distinct sources and struggle with multi-component sources, that is, cases where two or more distinct sources belong to the same underlying physical phenomenon, such as a radio galaxy. Identification of such phenomena is an important step in generating catalogues from surveys, on which much of radio astronomy science is based. Historically, multi-component sources were identified by visual inspection; however, the size of future surveys makes manual identification prohibitive. An algorithm to automate this process using statistical techniques is proposed. The algorithm is demonstrated on two radio images. Its output is a catalogue in which nearest-neighbour source pairs are assigned a probability score of being components of the same physical object. By applying several selection criteria, pairs of sources that are likely to be multi-component sources can be determined. Radio image cutouts are then generated from this selection and may be used as input to radio source classification techniques. Successful identification of multi-component sources using this method is demonstrated.
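    Below is a hedged sketch of the nearest-neighbour pairing step described above; the statistical probability model itself is not reproduced, and a simple exponential distance score plus a hypothetical threshold stand in for it.

```python
# Minimal sketch: find nearest-neighbour pairs among detected sources and
# assign each pair a score. The actual algorithm assigns a statistical
# probability of common origin; the exponential distance score and the
# threshold used here are only illustrative stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
positions = rng.uniform(0, 1000, size=(50, 2))  # source positions (arbitrary units)

tree = cKDTree(positions)
dist, idx = tree.query(positions, k=2)          # k=2: self plus nearest neighbour

scale = np.median(dist[:, 1])                   # typical separation in the image
scores = np.exp(-dist[:, 1] / scale)            # closer pairs score higher

# Keep pairs above a (hypothetical) threshold as candidate multi-component sources.
for i, (j, s) in enumerate(zip(idx[:, 1], scores)):
    if s > 0.8:
        print(f"sources {i} and {j}: score {s:.2f}")
```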

    An adaptive ensemble filter for heavy-tailed distributions: tuning-free inflation and localization

    Heavy tails are a common feature of filtering distributions, resulting from the nonlinear dynamical and observation processes as well as the uncertainty from physical sensors. In these settings, the Kalman filter and its ensemble version, the ensemble Kalman filter (EnKF), which were designed under Gaussian assumptions, suffer degraded performance. t-distributions are a parametric family of distributions whose tail-heaviness is modulated by a degree of freedom $\nu$. Interestingly, the Cauchy and Gaussian distributions correspond to the extreme cases of a t-distribution, with $\nu = 1$ and $\nu = \infty$, respectively. Leveraging tools from measure transport (Spantini et al., SIAM Review, 2022), we present a generalization of the EnKF whose prior-to-posterior update leads to exact inference for t-distributions. We demonstrate that this filter is less sensitive to outlying synthetic observations generated by the observation model for small $\nu$. Moreover, it recovers the Kalman filter for $\nu = \infty$. For nonlinear state-space models with heavy-tailed noise, we propose an algorithm to estimate the prior-to-posterior update from samples of the joint forecast distribution of the states and observations. We rely on a regularized expectation-maximization (EM) algorithm to estimate the mean, scale matrix, and degree of freedom of heavy-tailed t-distributions from limited samples (Finegold and Drton, arXiv preprint, 2014). Leveraging the conditional independence of the joint forecast distribution, we regularize the scale matrix with an $l_1$ sparsity-promoting penalization of the log-likelihood at each iteration of the EM algorithm. By sequentially estimating the degree of freedom at each analysis step, our filter can adapt its prior-to-posterior update to the tail-heaviness of the data. We demonstrate the benefits of this new ensemble filter on challenging filtering problems.
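    For context, here is a minimal sketch of the standard (Gaussian) stochastic EnKF analysis step that this work generalizes to t-distributions; the model dimensions, observation operator, and noise levels are illustrative assumptions.

```python
# Minimal sketch of a standard stochastic EnKF analysis step, i.e. the
# Gaussian prior-to-posterior update that the paper generalizes to
# t-distributions. Dimensions and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 3, 2, 100                       # state dim, obs dim, ensemble size

H = rng.normal(size=(d, n))               # linear observation operator
R = 0.1 * np.eye(d)                       # observation noise covariance
y = rng.normal(size=d)                    # observation

Xf = rng.normal(size=(N, n))              # forecast ensemble (rows = members)

# Sample covariance from the ensemble, then the Kalman gain.
Pf = np.cov(Xf.T)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Perturbed-observation update: each member assimilates a noisy copy of y.
eps = rng.multivariate_normal(np.zeros(d), R, size=N)
Xa = Xf + (y + eps - Xf @ H.T) @ K.T

print(Xa.mean(axis=0))                    # analysis ensemble mean
```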

    Maximum Margin Learning Under Uncertainty

    PhD thesis. In this thesis we study the problem of learning under uncertainty using the statistical learning paradigm. We first propose a linear maximum margin classifier that deals with uncertainty in the data input. More specifically, we reformulate the standard Support Vector Machine (SVM) framework such that each training example can be modeled by a multi-dimensional Gaussian distribution described by its mean vector and its covariance matrix, the latter modeling the uncertainty. We address the classification problem and define a cost function that is the expected value of the classical SVM cost when data samples are drawn from the multi-dimensional Gaussian distributions that form the set of training examples. Our formulation approximates the classical SVM formulation when the training examples are isotropic Gaussians with variance tending to zero. We arrive at a convex optimization problem, which we solve efficiently in the primal form using a stochastic gradient descent approach. The resulting classifier, which we name SVM with Gaussian Sample Uncertainty (SVM-GSU), is tested on synthetic data and five publicly available and popular datasets; namely, the MNIST, WDBC, DEAP, TV News Channel Commercial Detection, and TRECVID MED datasets. Experimental results verify the effectiveness of the proposed method. Next, we extend the aforementioned linear classifier so as to obtain non-linear decision boundaries, using the RBF kernel. This extension, in which we use isotropic input uncertainty and which we name Kernel SVM with Isotropic Gaussian Sample Uncertainty (KSVM-iGSU), is applied to the problems of video event detection and video aesthetic quality assessment. The experimental results show that exploiting input uncertainty, especially in problems where only a limited number of positive training examples are provided, can lead to better classification, detection, or retrieval performance. Finally, we present a preliminary study on how the above ideas can be used under the deep convolutional neural network learning paradigm to exploit inherent sources of uncertainty, such as spatial pooling operations, that are commonly used in deep networks.
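    To illustrate the expected-cost idea, here is a hedged sketch of a closed-form expected hinge loss when each input is a Gaussian with mean mu and covariance Sigma; this is a minimal reading of the idea, not the authors' exact SVM-GSU objective, and the data are synthetic.

```python
# Hedged sketch: expected hinge loss when each training input is a
# Gaussian N(mu, Sigma) rather than a point. For z = y * (w @ x + b) with
# x ~ N(mu, Sigma), z is Gaussian with mean m = y * (w @ mu + b) and
# variance s^2 = w @ Sigma @ w, and E[max(0, 1 - z)] has a closed form.
# This is a minimal reading of the idea, not the exact SVM-GSU objective.
import numpy as np
from scipy.stats import norm

def expected_hinge(w, b, mu, Sigma, y):
    m = y * (w @ mu + b)                    # mean of the margin variable
    s = np.sqrt(w @ Sigma @ w) + 1e-12      # its standard deviation
    u = 1.0 - m
    # E[max(0, U)] for U ~ N(u, s^2):
    return u * norm.cdf(u / s) + s * norm.pdf(u / s)

# Two uncertain examples: same mean, different covariances.
w, b = np.array([1.0, -0.5]), 0.0
mu = np.array([2.0, 1.0])
tight = 0.01 * np.eye(2)                    # low input uncertainty
loose = 2.0 * np.eye(2)                     # high input uncertainty

print(expected_hinge(w, b, mu, tight, y=+1))  # ~ classical hinge at the mean (near 0)
print(expected_hinge(w, b, mu, loose, y=+1))  # larger: uncertainty adds expected cost
```

As the two calls suggest, the expected cost reduces to the classical hinge loss as the covariance shrinks to zero, matching the limiting behaviour described in the abstract.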