Prototypes as Explanation for Time Series Anomaly Detection
Detecting abnormal patterns that deviate from a certain regular repeating
pattern in time series is essential in many big data applications. However, the
lack of labels, the dynamic nature of time series data, and unforeseeable
abnormal behaviors make the detection process challenging. Despite the success
of recent deep anomaly detection approaches, the opaque mechanisms of such
black-box models have become a new challenge in safety-critical applications.
The lack of model transparency and prediction reliability hinders further
breakthroughs in such domains. This paper proposes ProtoAD, using prototypes as
the example-based explanation for the state of regular patterns during anomaly
detection. Without significant impact on the detection performance, prototypes
shed light on the deep black-box models and provide intuitive understanding for
domain experts and stakeholders. We extend the widely used prototype learning
in classification problems into anomaly detection. By visualizing both the
latent space and input space prototypes, we intuitively demonstrate how regular
data are modeled and why specific patterns are considered abnormal.
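The core idea can be illustrated with a minimal sketch: prototypes are representative points of regular data in a latent space, and a window is flagged as abnormal when it lies far from every prototype. This is a simplified stand-in (naive k-means over toy latent vectors), not ProtoAD's actual architecture; all function names here are illustrative.

```python
import numpy as np

def fit_prototypes(latent, k=3, iters=20, seed=0):
    """Naive k-means: pick k prototypes from latent vectors of regular data."""
    rng = np.random.default_rng(seed)
    protos = latent[rng.choice(len(latent), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest prototype
        d = np.linalg.norm(latent[:, None] - protos[None], axis=2)
        labels = d.argmin(axis=1)
        # move each prototype to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                protos[j] = latent[labels == j].mean(axis=0)
    return protos

def anomaly_score(x, protos):
    """Distance to the nearest prototype: high = unlike any regular pattern."""
    return np.linalg.norm(protos - x, axis=1).min()

# toy "latent" vectors of regular windows clustered around two regimes
normal = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
                    np.random.default_rng(2).normal(3, 0.1, (50, 2))])
protos = fit_prototypes(normal, k=2)

# a regular window scores lower than an abnormal one
print(anomaly_score(np.array([0.0, 0.05]), protos)
      < anomaly_score(np.array([10.0, 10.0]), protos))
```

Because the prototypes live in the same latent (or, after decoding, input) space as the data, they can be visualized directly, which is what makes them usable as example-based explanations for domain experts.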
Autoencoders for strategic decision support
In the majority of executive domains, a notion of normality is involved in
most strategic decisions. However, few data-driven tools that support strategic
decision-making are available. We introduce and extend the use of autoencoders
to provide strategically relevant granular feedback. A first experiment
indicates that experts are inconsistent in their decision making, highlighting
the need for strategic decision support. Furthermore, using two large
industry-provided human resources datasets, the proposed solution is evaluated
in terms of ranking accuracy, synergy with human experts, and dimension-level
feedback. This three-point scheme is validated using (a) synthetic data, (b)
the perspective of data quality, (c) blind expert validation, and (d)
transparent expert evaluation. Our study confirms several principal weaknesses
of human decision-making and stresses the importance of synergy between a model
and humans. Moreover, unsupervised learning and in particular the autoencoder
are shown to be valuable tools for strategic decision-making.
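A minimal sketch of the underlying mechanism, using a linear autoencoder (via SVD) as a stand-in for a trained neural autoencoder: the model learns a notion of normality from data, the total reconstruction error ranks observations by abnormality, and per-dimension errors provide the granular, dimension-level feedback. All names and the linear simplification are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fit_linear_autoencoder(X, n_components=2):
    """Tied-weight linear autoencoder: encode/decode with top principal axes."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    W = vt[:n_components]              # shared encoder/decoder weights
    return mu, W

def reconstruct(x, mu, W):
    return mu + (x - mu) @ W.T @ W

def dimension_feedback(x, mu, W):
    """Per-feature squared error: which dimensions make x look non-normal."""
    return (x - reconstruct(x, mu, W)) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 0] + X[:, 1]            # feature 3 is determined by features 0 and 1
mu, W = fit_linear_autoencoder(X, n_components=3)

x = X[0].copy()
x[3] += 5.0                            # violate the learned relation
fb = dimension_feedback(x, mu, W)
# error concentrates on the dimensions involved in the broken relation,
# while an untouched, unrelated dimension (here, 2) stays near zero
print(fb[3] > fb[2])
```

The same scores can rank, say, job candidates or business units by how far they deviate from learned normality, while the per-dimension breakdown tells the expert *where* the deviation lies.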
A Survey on Explainable Anomaly Detection
In the past two decades, most research on anomaly detection has focused on
improving the accuracy of the detection, while largely ignoring the
explainability of the corresponding methods and thus leaving the explanation of
outcomes to practitioners. As anomaly detection algorithms are increasingly
used in safety-critical domains, providing explanations for the high-stakes
decisions made in those domains has become an ethical and regulatory
requirement. Therefore, this work provides a comprehensive and structured
survey on state-of-the-art explainable anomaly detection techniques. We propose
a taxonomy based on the main aspects that characterize each explainable anomaly
detection technique, aiming to help practitioners and researchers find the
explainable anomaly detection method that best suits their needs.
Comment: Paper accepted by the ACM Transactions on Knowledge Discovery from
Data (TKDD) for publication (preprint version).