On the Nature and Types of Anomalies: A Review
Anomalies are occurrences in a dataset that are in some way unusual and do
not fit the general patterns. The concept of the anomaly is generally
ill-defined and perceived as vague and domain-dependent. Moreover, despite some
250 years of publications on the topic, no comprehensive and concrete overviews
of the different types of anomalies have hitherto been published. By means of
an extensive literature review this study therefore offers the first
theoretically principled and domain-independent typology of data anomalies, and
presents a full overview of anomaly types and subtypes. To concretely define
the concept of the anomaly and its different manifestations, the typology
employs five dimensions: data type, cardinality of relationship, anomaly level,
data structure and data distribution. These fundamental and data-centric
dimensions naturally yield 3 broad groups, 9 basic types and 61 subtypes of
anomalies. The typology facilitates the evaluation of the functional
capabilities of anomaly detection algorithms, contributes to explainable data
science, and provides insights into relevant topics such as local versus global
anomalies.
Comment: 38 pages (30 pages content), 10 figures, 3 tables. Preprint; review comments will be appreciated. Improvements in version 2: explicit mention of fifth anomaly dimension; added section on explainable anomaly detection; added section on variations on the anomaly concept; various minor additions and improvements.
PETMiner - A visual analysis tool for petrophysical properties of core sample data
The aim of the PETMiner software is to reduce the time and monetary cost of analysing petrophysical data obtained from reservoir sample cores. Analysis of these data requires tacit knowledge to fill ‘gaps’ so that predictions can be made for incomplete data. Through discussions with 30 industry and academic specialists, we identified three analysis use cases that exemplified the limitations of current petrophysics analysis tools. We used those use cases to develop nine core requirements for PETMiner, which is innovative because of its ability to display detailed images of the samples as data points, directly plot multiple sample properties and derived measures for comparison, and substantially reduce interaction cost. An 11-month evaluation demonstrated benefits across all three use cases by allowing a consultant to: (1) generate more accurate reservoir flow models, (2) discover a previously unknown relationship between one easy-to-measure property and another that is costly to measure, and (3) make a 100-fold reduction in the time required to produce plots for a report.
Graph based Anomaly Detection and Description: A Survey
Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, graph data has become ubiquitous and techniques for structured graph data have recently come into focus. As objects in graphs have long-range correlations, a suite of novel techniques has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised vs. (semi-)supervised approaches, for static vs. dynamic graphs, for attributed vs. plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. Moreover, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the ‘why’, of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field.
Explainable Contextual Anomaly Detection using Quantile Regression Forests
Traditional anomaly detection methods aim to identify objects that deviate
from most other objects by treating all features equally. In contrast,
contextual anomaly detection methods aim to detect objects that deviate from
other objects within a context of similar objects by dividing the features into
contextual features and behavioral features. In this paper, we develop
connections between dependency-based traditional anomaly detection methods and
contextual anomaly detection methods. Based on resulting insights, we propose a
novel approach to inherently interpretable contextual anomaly detection that
uses Quantile Regression Forests to model dependencies between features.
Extensive experiments on various synthetic and real-world datasets demonstrate
that our method outperforms state-of-the-art anomaly detection methods in
identifying contextual anomalies in terms of accuracy and interpretability.
Comment: Manuscript submitted to Data Mining and Knowledge Discovery in October 2022 for possible publication. This is the revised version submitted in April 202
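The core idea above can be illustrated with a minimal sketch: predict a conditional quantile interval for a behavioral feature given the contextual features, and flag objects whose behavioral value falls outside it. This is not the paper's Quantile Regression Forest method; as a stand-in, the sketch below uses scikit-learn's `GradientBoostingRegressor` with quantile loss, and all data and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Training data: behavioral feature y depends on contextual feature x.
x_train = rng.uniform(0, 10, 500).reshape(-1, 1)
y_train = 2 * x_train.ravel() + rng.normal(0, 0.5, 500)

# Fit lower and upper conditional quantiles of y given the context x.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.01).fit(x_train, y_train)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.99).fit(x_train, y_train)

def is_contextual_anomaly(x, y):
    """Flag y as anomalous when it falls outside the interval predicted from x."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float)
    return (y < lo.predict(x)) | (y > hi.predict(x))

print(is_contextual_anomaly([5.0], [30.0]))  # [ True]: y=30 deviates from context x=5
print(is_contextual_anomaly([5.0], [10.0]))  # [False]: consistent with context
```

Note that y=30 would not be a global outlier if y ranged up to 30 elsewhere in the data; it is anomalous only relative to its context, which is exactly what the interval test captures.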
Inferring Anomalies from Data using Bayesian Networks
Existing studies on data mining have largely focused on the design of measures and algorithms to identify outliers in large, high-dimensional categorical and numeric databases. However, little attention has been paid to the interestingness of the reported outliers. One way to ascertain the interestingness and usefulness of a reported outlier is to make use of domain knowledge. In this thesis, we present measures to discover outliers based on background knowledge represented by a Bayesian network. Using the causal relationships between attributes encoded in the Bayesian framework, we demonstrate that meaningful outliers, i.e., outliers which encode important or new information, are those which violate the causal relationships encoded in the model. Depending upon the nature of the data, several approaches are proposed to identify and explain anomalies using Bayesian knowledge. Outliers are often identified as data points which are ‘rare’, ‘isolated’, or ‘far away from their nearest neighbors’. We show that these characteristics may not be an accurate way of describing interesting outliers. Through a critical analysis of several existing outlier detection techniques, we show why there is a mismatch between outliers as entities described by these characteristics and ‘real’ outliers as identified using the Bayesian approach. We show that the Bayesian approaches presented in this thesis have better accuracy in mining genuine outliers while keeping a low false positive rate, as compared to traditional outlier detection techniques.
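The underlying intuition can be sketched in a few lines: a record is interesting when its joint probability under a causal model is low, i.e. it violates an encoded dependency. The toy network, variable names, and all probabilities below are illustrative assumptions, not the thesis's actual measures.

```python
# Toy Bayesian network over two binary attributes: Smoker -> Cough.
# A record is flagged as an outlier when its joint probability under
# the causal model falls below a threshold.

P_SMOKER = {True: 0.3, False: 0.7}
P_COUGH_GIVEN_SMOKER = {True: {True: 0.8, False: 0.2},
                        False: {True: 0.05, False: 0.95}}

def joint_prob(smoker, cough):
    """P(smoker, cough) = P(smoker) * P(cough | smoker)."""
    return P_SMOKER[smoker] * P_COUGH_GIVEN_SMOKER[smoker][cough]

def is_outlier(smoker, cough, threshold=0.05):
    return joint_prob(smoker, cough) < threshold

# A non-smoking cougher violates the causal expectation:
print(is_outlier(False, True))   # True:  P = 0.7 * 0.05 = 0.035
# A non-smoker without a cough is typical:
print(is_outlier(False, False))  # False: P = 0.7 * 0.95 = 0.665
```

Note that the flagged record need not be rare, isolated, or far from its neighbors in feature space; it is anomalous because it contradicts the dependency structure, which is the distinction the abstract draws.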
Machine Learning for Identifying Group Trajectory Outliers
Prior works on the trajectory outlier detection problem solely consider individual outliers. However, in real-world scenarios, trajectory outliers can often appear in groups, e.g., a group of bikes that deviates from the usual trajectory due to street maintenance in the context of intelligent transportation. The current paper considers the Group Trajectory Outlier (GTO) problem and proposes three algorithms. The first and second algorithms are extensions of the well-known DBSCAN and kNN algorithms, while the third models the GTO problem as a feature selection problem. Furthermore, two different enhancements for the proposed algorithms are proposed. The first is based on ensemble learning and computational intelligence, which allows the algorithms' outputs to be merged to possibly improve the final result. The second is a general high-performance computing framework that deals with big trajectory databases, which we used for a GPU-based implementation. Experimental results on different real trajectory databases show the scalability of the proposed approaches.
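The DBSCAN-based direction mentioned above can be illustrated with a minimal sketch: summarize each trajectory as a feature vector and choose `min_samples` larger than the expected group size, so that a small group deviating together is labelled noise rather than forming its own cluster. This is a sketch of the general idea only, not the paper's GTO extension; the feature summaries and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)

# Each row summarizes one trajectory (e.g. mean displacement in two dims).
normal = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(50, 2))
# A small group that deviates together, e.g. bikes re-routed by roadwork.
deviating_group = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(4, 2))
X = np.vstack([normal, deviating_group])

# With min_samples larger than the group size (8 > 4), the deviating
# group cannot become a cluster of its own and is labelled noise (-1).
labels = DBSCAN(eps=1.0, min_samples=8).fit_predict(X)
group_outliers = np.where(labels == -1)[0]
print(group_outliers)  # indices 50..53: the deviating group
```

The design choice to highlight is the interplay of `eps` and `min_samples`: with a smaller `min_samples`, plain DBSCAN would happily treat the deviating group as just another cluster, which is why individual-outlier methods miss group outliers.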