
    A comparative study of anomaly detection methods for gross error detection problems.

    The chemical industry requires highly accurate and reliable measurements to ensure smooth operation and effective monitoring of processing facilities. However, measured data inevitably contain errors from various sources. Traditionally in flow systems, data reconciliation through mass balancing is applied to reduce error by estimating balanced flows. However, this approach can only handle random errors. For non-random errors (called gross errors, GEs), which are caused by measurement bias, instrument failures, or process leaks, among others, it returns incorrect results. In recent years, many gross error detection (GED) methods have been proposed by the research community. It is recognised that the basic principle of GED is a special case of detecting outliers (or anomalies) in data analytics. With the development of Machine Learning (ML) research, patterns in the data can be discovered to provide effective detection of anomalous instances. In this paper, we present a comprehensive study of the application of ML-based Anomaly Detection Methods (ADMs) to the GED problem on a number of synthetic datasets and compare the results with several established GED approaches. We also perform data transformation on the measurement data and compare the associated results with the original ones, and we investigate the effect of training size on detection performance. The One-class Support Vector Machine outperformed the other ADMs and five selected statistical tests for GED on Accuracy, F1 Score, and Overall Power, while the Interquartile Range (IQR) method obtained the best selectivity among the top six ADMs and the five statistical tests. The results indicate that ADMs can potentially be applied to GED problems.
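
    A minimal sketch of two of the detectors compared above, a One-class SVM and an IQR rule, applied to flow-measurement vectors. This is not the paper's pipeline; the synthetic data, injected bias magnitude, and hyperparameters below are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    clean = rng.normal(loc=100.0, scale=1.0, size=(500, 3))   # stand-in for balanced flow measurements
    faulty = clean.copy()
    faulty[::50] += rng.choice([-1, 1], size=(10, 3)) * 8.0    # injected gross errors (measurement bias)

    # One-class SVM: fit on (mostly) normal data, flag points outside the learned boundary.
    scaler = StandardScaler().fit(clean)
    ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(clean))
    svm_flags = ocsvm.predict(scaler.transform(faulty)) == -1   # True = suspected gross error

    # IQR rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], per measured variable.
    q1, q3 = np.percentile(faulty, [25, 75], axis=0)
    iqr = q3 - q1
    iqr_flags = ((faulty < q1 - 1.5 * iqr) | (faulty > q3 + 1.5 * iqr)).any(axis=1)

    print("OC-SVM flagged:", svm_flags.sum(), "IQR flagged:", iqr_flags.sum())
    ```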

    Anomaly Detection using Autoencoders in High Performance Computing Systems

    Anomaly detection in supercomputers is a very difficult problem due to the large scale of the systems and the high number of components. The current state of the art for automated anomaly detection employs Machine Learning methods or statistical regression models in a supervised fashion, meaning that the detection tool is trained to distinguish among a fixed set of behaviour classes (healthy and unhealthy states). We propose a novel approach for anomaly detection in High Performance Computing systems based on a (deep) Machine Learning technique, namely a type of neural network called an autoencoder. The key idea is to train a set of autoencoders to learn the normal (healthy) behaviour of the supercomputer nodes and, after training, use them to identify abnormal conditions. This differs from previous approaches, which were based on learning the abnormal conditions, for which much smaller datasets exist (since such conditions are very hard to identify in the first place). We test our approach on a real supercomputer equipped with a fine-grained, scalable monitoring infrastructure that can provide large amounts of data to characterize the system behaviour. The results are extremely promising: after the training phase to learn the normal system behaviour, our method is capable of detecting anomalies that have never been seen before with very good accuracy (values ranging between 88% and 96%).
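
    A hedged sketch of the idea described above (not the authors' code): train an autoencoder only on healthy node telemetry and flag samples whose reconstruction error exceeds a threshold chosen on healthy data. The feature count, network sizes, and threshold percentile are assumptions.

    ```python
    import torch
    import torch.nn as nn

    n_features = 64                      # assumed number of per-node sensor readings
    model = nn.Sequential(
        nn.Linear(n_features, 16), nn.ReLU(),
        nn.Linear(16, 4), nn.ReLU(),     # bottleneck
        nn.Linear(4, 16), nn.ReLU(),
        nn.Linear(16, n_features),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    healthy = torch.randn(2048, n_features)          # stand-in for healthy telemetry
    for epoch in range(50):                          # learn to reconstruct healthy behaviour
        opt.zero_grad()
        loss = loss_fn(model(healthy), healthy)
        loss.backward()
        opt.step()

    with torch.no_grad():                            # score new samples by reconstruction error
        new_samples = torch.randn(128, n_features)
        err = ((model(new_samples) - new_samples) ** 2).mean(dim=1)
        threshold = torch.quantile(((model(healthy) - healthy) ** 2).mean(dim=1), 0.99)
        anomalies = err > threshold                  # True = abnormal node state
    ```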

    Variational autoencoders for anomaly detection in the behaviour of the elderly using electricity consumption data

    According to the World Health Organization, between 2000 and 2050 the proportion of the world's population over 60 will double, from 11% to 22%. In absolute numbers, this age group will increase from 605 million to 2 billion in the course of half a century. It is a reality that most of them prefer to live alone, so it is necessary to look for mechanisms and tools that help them improve their autonomy. Although in recent years we have been living through a veritable explosion of domotic systems that facilitate people's daily lives, it is also true that there are not many tools specifically aimed at this sector of the population. The aim of this paper is to present a potential solution for monitoring activities of daily living in the least intrusive way possible. In this case, anomalous patterns of daily activities will be detected by analysing the daily consumption of household appliances. People who live alone usually have a pattern of daily behaviour in the use of household appliances (coffee machine, microwave, television, etc.). A neural model based on an autoencoder architecture is proposed for the detection of abnormal behaviour. This solution will be compared with a variational autoencoder to analyse the improvements that can be obtained. The well-known dataset called UK-DALE will be used to validate the proposal. Funding: V PRICIT (Regional Programme of Research and Technological Innovation); Madrid Government (Comunidad de Madrid, Spain); Universidad Carlos III de Madrid; Spanish Ministry of Economy and Competitiveness (MINECO), Grant Numbers RTC-2016-5059-8, RTC-2016-5191-8, RTC-2016-5595-2, TEC2017-88048-C2-2-R; Company MasMovi.
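
    A hedged sketch of the variational autoencoder side of the comparison described above: a tiny VAE whose per-sample loss (reconstruction error plus KL term) can serve as an anomaly score for daily consumption vectors. The 24-value input (hourly consumption) and layer sizes are assumptions; UK-DALE itself is not loaded here.

    ```python
    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        def __init__(self, n_in=24, n_latent=4):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, 16), nn.ReLU())
            self.mu = nn.Linear(16, n_latent)
            self.logvar = nn.Linear(16, n_latent)
            self.dec = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(), nn.Linear(16, n_in))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
            return self.dec(z), mu, logvar

    def vae_score(x, recon, mu, logvar):
        rec = ((recon - x) ** 2).sum(dim=1)                             # reconstruction term
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)  # KL divergence term
        return rec + kl        # per-sample score; high values suggest an unusual day

    vae = TinyVAE()
    days = torch.rand(32, 24)                  # stand-in for daily appliance consumption profiles
    recon, mu, logvar = vae(days)
    scores = vae_score(days, recon, mu, logvar)
    ```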

    Learning Representations for Novelty and Anomaly Detection

    The problem of novelty or anomaly detection refers to the ability to automatically identify data samples that differ from a notion of normality. Techniques that address this problem are necessary in many applications, such as medical diagnosis, autonomous driving, fraud detection, or cyber-attack detection, to mention a few. The problem is inherently challenging because of the openness of the space of distributions that characterize novelty or outlier data points. This is often compounded by the inability to adequately represent such distributions due to the lack of representative data. In this dissertation we address the challenge above by making several contributions. (a) We introduce an unsupervised framework for novelty detection, which is based on deep learning techniques and does not require labeled data representing the distribution of outliers. (b) The framework is general and based on first principles: it detects anomalies by computing their probabilities according to the distribution representing normality. (c) The framework can handle high-dimensional data such as images by performing a non-linear dimensionality reduction of the input space into an isometric lower-dimensional space, leading to a computationally efficient method. (d) The framework is guarded from the potential inclusion of outlier distributions into the distribution of normality by favoring that only inlier data can be well represented by the model. (e) The methods are evaluated extensively on multiple computer vision benchmark datasets, where it is shown that they compare favorably with the state of the art.
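
    A generic illustration of the two-step principle in (b) and (c) above, not the dissertation's actual model: reduce the data non-linearly to a lower-dimensional space, then score novelty by log-probability under a density fitted to normal data. The reduction method (kernel PCA), density estimator (KDE), and thresholds are illustrative substitutions.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(1)
    normal_data = rng.normal(size=(1000, 50))                    # stand-in for inlier samples
    test_data = np.vstack([rng.normal(size=(95, 50)),
                           rng.normal(loc=4.0, size=(5, 50))])   # a few synthetic outliers

    kpca = KernelPCA(n_components=5, kernel="rbf").fit(normal_data)   # non-linear reduction
    kde = KernelDensity(bandwidth=0.5).fit(kpca.transform(normal_data))  # density of normality

    log_prob = kde.score_samples(kpca.transform(test_data))
    threshold = np.percentile(kde.score_samples(kpca.transform(normal_data)), 1)
    novel = log_prob < threshold         # True = low probability under the normality model
    ```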

    A Deep Learning based Detection Method for Combined Integrity-Availability Cyber Attacks in Power System

    As one of the largest and most complex systems on earth, the power grid (PG) must be operated and controlled through a combined analysis of both its physical and cyber layers, which makes it vulnerable to attacks motivated by economic and security considerations. A new type of attack, the combined data Integrity-Availability attack, has recently been proposed, in which attackers simultaneously manipulate and blind some measurements in the SCADA system to mislead control operations while remaining stealthy. Compared with traditional false data injection attacks (FDIAs), this combined attack further complicates and weakens model-based detection mechanisms. To detect such attacks, this paper proposes a novel random denoising LSTM-AE (LSTMRDAE) framework, in which the spatial-temporal correlations of the measurements are explicitly captured and unavailable data is countered by a random dropout layer. The proposed algorithm is evaluated and its performance verified on a standard IEEE 118-bus system under various unseen attack attempts.
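
    A rough sketch of the ingredients named above, an LSTM autoencoder with random dropout on the inputs to emulate unavailable (blinded) measurements. It is not the authors' LSTMRDAE implementation; the measurement count, hidden size, and dropout rate are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class LSTMAutoencoder(nn.Module):
        def __init__(self, n_meas=118, n_hidden=64):
            super().__init__()
            self.input_dropout = nn.Dropout(p=0.2)         # random "blinding" of measurements
            self.encoder = nn.LSTM(n_meas, n_hidden, batch_first=True)
            self.decoder = nn.LSTM(n_hidden, n_hidden, batch_first=True)
            self.out = nn.Linear(n_hidden, n_meas)

        def forward(self, x):              # x: (batch, time, n_meas) SCADA measurement windows
            x = self.input_dropout(x)
            _, (h, _) = self.encoder(x)
            z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat encoded state across time
            y, _ = self.decoder(z)
            return self.out(y)

    model = LSTMAutoencoder()
    window = torch.randn(8, 12, 118)                        # 8 windows of 12 time steps each
    recon_error = ((model(window) - window) ** 2).mean(dim=(1, 2))  # per-window anomaly score
    ```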

    Representation Learning with Adversarial Latent Autoencoders

    A large number of deep learning methods applied to computer vision problems require encoder-decoder maps. These methods include, but are not limited to, self-representation learning, generalization, few-shot learning, and novelty detection. Encoder-decoder maps are also useful for photo manipulation, photo editing, super-resolution, etc. Encoder-decoder maps are typically learned using autoencoder networks. Traditionally, autoencoder reciprocity is achieved in the image space using a pixel-wise similarity loss, which has a widely known flaw of producing non-realistic reconstructions. This flaw is typical for the Variational Autoencoder (VAE) family and is not limited to pixel-wise similarity losses; it is common to all methods relying upon the explicit maximum likelihood training paradigm, as opposed to an implicit one. Likelihood maximization, coupled with a poor decoder distribution, leads to poor or blurry reconstructions at best. Generative Adversarial Networks (GANs), on the other hand, perform an implicit maximization of the likelihood by solving a minimax game, thus bypassing the issues arising from the explicit maximization. This provides GAN architectures with remarkable generative power, enabling the generation of high-resolution images of humans that are indistinguishable from real photos to the naked eye. However, GAN architectures lack inference capabilities, which makes them unsuitable for training encoder-decoder maps, effectively limiting their application space. We introduce an autoencoder architecture that (a) is free from the consequences of maximizing the likelihood directly, (b) produces reconstructions competitive in quality with state-of-the-art GAN architectures, and (c) allows learning disentangled representations, which makes it useful in a variety of problems. We show that the proposed architecture and training paradigm significantly improve the state of the art in novelty and anomaly detection, enable novel kinds of image manipulations, and have significant potential for other applications.
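
    A highly simplified sketch of the latent-space reciprocity idea described above, where reconstruction is matched in latent space rather than pixel space. It omits the adversarial discriminator and the StyleGAN-style generator of the actual architecture, and the network shapes are placeholders.

    ```python
    import torch
    import torch.nn as nn

    latent_dim, img_dim = 32, 784
    mapping = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
    generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim))
    encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    params = list(mapping.parameters()) + list(generator.parameters()) + list(encoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-4)

    for step in range(100):
        z = torch.randn(64, latent_dim)
        w = mapping(z)                       # intermediate latent code
        x_fake = generator(w)                # generated image
        w_rec = encoder(x_fake)              # encode the image back into latent space
        loss = ((w_rec - w) ** 2).mean()     # reciprocity enforced in latent space, not pixels
        opt.zero_grad(); loss.backward(); opt.step()
    ```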

    Spatiotemporal anomaly detection: streaming architecture and algorithms

    Anomaly detection is the science of identifying one or more rare or unexplainable samples or events in a dataset or data stream. The field of anomaly detection has been extensively studied by mathematicians, statisticians, economists, engineers, and computer scientists. One open research question remains the design of distributed cloud-based architectures and algorithms that can accurately identify anomalies in previously unseen, unlabeled, streaming, multivariate spatiotemporal data. With streaming data, time is of the essence, and insights are perishable. Real-world streaming spatiotemporal data originate from many sources, including mobile phones, supervisory control and data acquisition (SCADA) devices, the internet-of-things (IoT), distributed sensor networks, and social media. Baseline experiments are performed on four non-streaming, static multivariate anomaly detection datasets using unsupervised offline traditional machine learning (TML) and unsupervised neural network techniques. Multiple architectures, including autoencoders, generative adversarial networks, convolutional networks, and recurrent networks, are adapted for experimentation. Extensive experimentation demonstrates that neural networks produce superior detection accuracy over TML techniques. These same neural network architectures can be extended to process unlabeled, streaming spatiotemporal data using online learning. Space and time relationships are further exploited to provide additional insights and increased anomaly detection accuracy. A novel domain-independent architecture and set of algorithms called the Spatiotemporal Anomaly Detection Environment (STADE) is formulated. STADE is based on a federated learning architecture. STADE streaming algorithms are based on geographically unique, persistently executing neural networks trained with online stochastic gradient descent (SGD). STADE is designed to be pluggable, meaning that alternative algorithms may be substituted or combined to form an ensemble. STADE incorporates a Stream Anomaly Detector (SAD) and a Federated Anomaly Detector (FAD). The SAD executes at multiple locations on streaming data, while the FAD executes at a single server and identifies global patterns and relationships among the site anomalies. Each STADE site streams anomaly scores to the centralized FAD server for further spatiotemporal dependency analysis and logging. The FAD is based on recent advances in DNN-based federated learning. A STADE testbed is implemented to facilitate globally distributed experimentation using low-cost, commercial cloud infrastructure provided by Microsoft™. STADE testbed sites are situated in the cloud within each continent: Africa, Asia, Australia, Europe, North America, and South America. Communication occurs over the commercial internet. Three STADE case studies are investigated. The first case study processes commercial air traffic flows, the second processes global earthquake measurements, and the third processes social media (i.e., Twitter™) feeds. These case studies confirm that STADE is a viable architecture for the near real-time identification of anomalies in streaming data originating from (possibly) computationally disadvantaged, geographically dispersed sites. Moreover, the addition of the FAD provides enhanced anomaly detection capability. Since STADE is domain-independent, these findings can be easily extended to additional application domains and use cases.
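
    A conceptual sketch only, not the STADE codebase: a site-level Stream Anomaly Detector that updates a small autoencoder with one online SGD step per arriving sample and forwards anomaly scores to a central Federated Anomaly Detector endpoint. The send_to_fad() stub, feature size, and site identifier are assumptions for illustration.

    ```python
    import torch
    import torch.nn as nn

    n_features = 16
    sad = nn.Sequential(nn.Linear(n_features, 4), nn.ReLU(), nn.Linear(4, n_features))
    opt = torch.optim.SGD(sad.parameters(), lr=1e-2)     # online stochastic gradient descent

    def send_to_fad(site_id: str, score: float) -> None:
        # Placeholder: a real deployment would transmit the score to the FAD server.
        print(f"site={site_id} anomaly_score={score:.4f}")

    def process_stream(site_id, stream):
        for sample in stream:                            # one multivariate observation at a time
            x = torch.as_tensor(sample, dtype=torch.float32)
            recon = sad(x)
            score = ((recon - x) ** 2).mean()            # reconstruction error as anomaly score
            send_to_fad(site_id, float(score))
            opt.zero_grad(); score.backward(); opt.step()   # one online SGD update per sample

    process_stream("north-america-1", torch.randn(100, n_features))
    ```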

    DDMT: Denoising Diffusion Mask Transformer Models for Multivariate Time Series Anomaly Detection

    Anomaly detection in multivariate time series has emerged as a crucial challenge in time series research, with significant implications in fields such as fraud detection, fault diagnosis, and system state estimation. Reconstruction-based models have shown promising potential in recent years for detecting anomalies in time series data. However, due to the rapid increase in data scale and dimensionality, the issues of noise and Weak Identity Mapping (WIM) during time series reconstruction have become increasingly pronounced. To address this, we introduce a novel Adaptive Dynamic Neighbor Mask (ADNM) mechanism and integrate it with the Transformer and the Denoising Diffusion Model, creating a new framework for multivariate time series anomaly detection named the Denoising Diffusion Mask Transformer (DDMT). The ADNM module mitigates information leakage between input and output features during data reconstruction, thereby alleviating the problem of WIM. The Denoising Diffusion Transformer (DDT) employs the Transformer as the internal neural network of the Denoising Diffusion Model. It learns the stepwise generation process of time series data to model the probability distribution of the data, capturing normal data patterns and progressively restoring the time series by removing noise, resulting in clear recovery of anomalies. To the best of our knowledge, this is the first model that combines the Denoising Diffusion Model and the Transformer for multivariate time series anomaly detection. Experimental evaluations were conducted on five publicly available multivariate time series anomaly detection datasets. The results demonstrate that the model effectively identifies anomalies in time series data, achieving state-of-the-art performance in anomaly detection.
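
    A toy illustration of the masking idea only: a static neighbor mask applied to self-attention scores so a time step cannot trivially attend to itself or its immediate neighbours during reconstruction, the kind of leakage that fuels Weak Identity Mapping. The actual ADNM in the paper is adaptive and dynamic, and the diffusion component is omitted here; the radius and tensor shapes are assumptions.

    ```python
    import torch

    def neighbor_masked_attention(q, k, v, radius=1):
        # q, k, v: (batch, time, dim)
        t = q.size(1)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)     # (batch, time, time)
        idx = torch.arange(t)
        mask = (idx[None, :] - idx[:, None]).abs() <= radius       # self and nearby time steps
        scores = scores.masked_fill(mask, float("-inf"))           # block trivial self-copying
        return torch.softmax(scores, dim=-1) @ v

    q = k = v = torch.randn(2, 50, 32)      # a batch of 50-step multivariate windows
    out = neighbor_masked_attention(q, k, v)
    ```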