Towards Explainable Visual Anomaly Detection
Anomaly detection and localization of visual data, including images and
videos, are of great significance in both machine learning academia and applied
real-world scenarios. Despite the rapid development of visual anomaly detection
techniques in recent years, interpretations of these black-box models and
reasonable explanations of why anomalies can be distinguished remain scarce.
This paper provides the first survey concentrated on explainable visual anomaly
detection methods. We first introduce the basic background of image-level
anomaly detection and video-level anomaly detection, followed by the current
explainable approaches for visual anomaly detection. Then, as the main content
of this survey, a comprehensive and exhaustive literature review of explainable
anomaly detection methods for both images and videos is presented. Finally, we
discuss several promising future directions and open problems to explore on the
explainability of visual anomaly detection.
A Survey on Explainable Anomaly Detection
In the past two decades, most research on anomaly detection has focused on
improving the accuracy of the detection, while largely ignoring the
explainability of the corresponding methods and thus leaving the explanation of
outcomes to practitioners. As anomaly detection algorithms are increasingly
used in safety-critical domains, providing explanations for the high-stakes
decisions made in those domains has become an ethical and regulatory
requirement. Therefore, this work provides a comprehensive and structured
survey on state-of-the-art explainable anomaly detection techniques. We propose
a taxonomy based on the main aspects that characterize each explainable anomaly
detection technique, aiming to help practitioners and researchers find the
explainable anomaly detection method that best suits their needs.
Comment: Paper accepted by the ACM Transactions on Knowledge Discovery from Data (TKDD) for publication (preprint version).
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
Discovering and exploiting causality in deep neural networks (DNNs) is a
crucial challenge for understanding and reasoning about causal effects (CE) in
an explainable visual model. "Intervention" has been widely used for recognizing a
causal relation ontologically. In this paper, we propose a causal inference
framework for visual reasoning via do-calculus. To study the intervention
effects on pixel-level features for causal reasoning, we introduce pixel-wise
masking and adversarial perturbation. In our framework, CE is calculated using
features in a latent space and perturbed prediction from a DNN-based model. We
further provide the first look into the characteristics of discovered CE of
adversarially perturbed images generated by gradient-based methods
(code: https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg).
Experimental results show that CE is a competitive and robust index for
understanding DNNs when compared with conventional methods such as
class-activation mappings (CAMs) on the ChestX-ray14 dataset for reasoning
about human-interpretable features (e.g., symptoms). Moreover, CE holds
promises for detecting adversarial examples as it possesses distinct
characteristics in the presence of adversarial perturbations.
Comment: Note that our camera-ready version has changed the title: "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks" is the v3 official paper title in the IEEE proceedings; please use it in your formal reference. Accepted at IEEE ICIP 2019. PyTorch code has been released at https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvIm
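The abstract above describes computing a causal-effect index by comparing a model's output before and after a pixel-level intervention. As a rough, hypothetical illustration of that idea (not the paper's implementation): the toy linear-softmax "model", the masked region, and the CE index below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN: a fixed linear map into a latent space,
# followed by a softmax prediction head.
W = rng.normal(size=(10, 64))

def model(x):
    z = W @ x.ravel()                       # latent features
    p = np.exp(z - z.max())                 # numerically stable softmax
    p /= p.sum()
    return z, p

x = rng.normal(size=(8, 8))                 # toy "image"

# Intervention do(mask): zero out a pixel region and observe the effect.
x_masked = x.copy()
x_masked[2:4, 2:4] = 0.0

z, p = model(x)
z_m, p_m = model(x_masked)

# A simple causal-effect index: total shift in the prediction
# distribution under the intervention.
ce = float(np.abs(p - p_m).sum())
print(round(ce, 4))
```

An adversarially perturbed input would be substituted for the mask in the same way: recompute the prediction after the perturbation and compare the resulting CE values.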
Looking in the Right Place for Anomalies: Explainable AI through Automatic Location Learning
Deep learning has become the de facto approach to the recognition of
anomalies in medical imaging. However, the 'black box' way these models
classify medical images into anomaly labels poses problems for their
acceptance, particularly with clinicians. Current explainable AI methods offer
justifications through
visualizations such as heat maps but cannot guarantee that the network is
focusing on the relevant image region fully containing the anomaly. In this
paper, we develop an approach to explainable AI in which the anomaly, when
present, is assured to overlap the expected location. This is made possible by
automatically extracting location-specific labels from textual reports and
learning the association of expected locations to labels using a hybrid
combination of Bi-Directional Long Short-Term Memory Recurrent Neural Networks
(Bi-LSTM) and DenseNet-121. Use of this expected location to bias the
subsequent attention-guided inference network based on ResNet101 results in the
isolation of the anomaly at the expected location when present. The method is
evaluated on a large chest X-ray dataset.
Comment: 5 pages. Paper presented as a poster at the International Symposium on Biomedical Imaging, 2020, Paper Number 65.
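The abstract above describes using an expected anomaly location, learned from report text, to bias a subsequent attention-guided network. A minimal sketch of one plausible biasing mechanism, under the assumption that the location prior acts as a multiplicative mask on an attention grid (the grid size and region are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

attention = rng.random((7, 7))   # raw attention grid from some backbone

# Expected-location prior, e.g. derived from a phrase like "left upper
# lobe" in the report text (region chosen arbitrarily here).
prior = np.zeros((7, 7))
prior[1:4, 1:4] = 1.0

biased = attention * prior       # suppress attention outside the region
biased /= biased.sum()           # renormalise to a distribution

# The attention peak is now guaranteed to fall inside the expected region.
peak = np.unravel_index(biased.argmax(), biased.shape)
print(peak)
```

This captures the guarantee the abstract claims: after masking, the network cannot attend outside the location extracted from the report, so the isolated anomaly overlaps the expected region by construction.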
A deep reinforcement learning based homeostatic system for unmanned position control
Deep Reinforcement Learning (DRL) has been proven capable of deriving optimal control policies by minimising error in dynamic systems. However, in many real-world operations, the exact behaviour of the environment is unknown. In such environments, random changes cause the system to reach different states for the same action. Hence, applying DRL in unpredictable environments is difficult, as the states of the world cannot be known under non-stationary transition and reward functions. In this paper, a mechanism to encapsulate the randomness of the environment is suggested using a novel bio-inspired homeostatic approach based on a hybrid of the Receptor Density Algorithm (an artificial-immune-system-based anomaly detection technique) and a Plastic Spiking Neuronal model. DRL is then introduced to run in conjunction with the above hybrid model. The system is tested on a vehicle that autonomously re-positions itself in an unpredictable environment. Our results show that the DRL-based process control raised the accuracy of the hybrid model by 32%.
Suspicious Behavior Detection with Temporal Feature Extraction and Time-Series Classification for Shoplifting Crime Prevention
The rise in crime rates in many parts of the world, coupled with advancements in computer vision, has increased the need for automated crime detection services. To address this issue, we propose a new approach for detecting suspicious behavior as a means of preventing shoplifting. Existing methods are based on convolutional neural networks that extract spatial features from pixel values. In contrast, our proposed method employs object detection based on YOLOv5 with Deep SORT to track people through a video, using the resulting bounding box coordinates as temporal features. The extracted temporal features are then modeled as a time-series classification problem. The proposed method was tested on the popular UCF Crime dataset and benchmarked against the current state-of-the-art robust temporal feature magnitude (RTFM) method, which relies on Inflated 3D ConvNet (I3D) preprocessing. Our results demonstrate an impressive 8.45-fold increase in detection inference speed compared to the state-of-the-art RTFM, along with an F1 score of 92%, outperforming RTFM by 3%. Furthermore, our method achieved these results without requiring expensive data augmentation or image feature extraction.
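The core idea above, turning tracked bounding boxes into temporal features, can be sketched as follows. The row layout `(frame, track_id, cx, cy, w, h)` and the frame-to-frame-delta feature are assumptions for illustration; the paper does not specify its exact feature encoding.

```python
import numpy as np

# Hypothetical tracker output: one row per detection, as a
# YOLOv5 + Deep SORT pipeline might emit: (frame, track_id, cx, cy, w, h).
detections = np.array([
    [0, 1, 10.0, 20.0, 5.0, 9.0],
    [1, 1, 11.0, 21.0, 5.0, 9.0],
    [2, 1, 13.0, 23.0, 5.0, 9.0],
    [0, 2, 50.0, 50.0, 6.0, 8.0],
    [1, 2, 50.5, 50.2, 6.0, 8.0],
])

def track_series(dets, track_id):
    """Collect one person's boxes, ordered by frame: a time series."""
    rows = dets[dets[:, 1] == track_id]
    rows = rows[np.argsort(rows[:, 0])]
    return rows[:, 2:]                 # (cx, cy, w, h) per frame

def motion_features(series):
    """Frame-to-frame deltas: a simple temporal feature vector sequence."""
    return np.diff(series, axis=0)

s = track_series(detections, 1)
f = motion_features(s)
print(f.shape)  # → (2, 4): one delta vector per consecutive frame pair
```

Each track's feature sequence would then be fed to a time-series classifier to label the behavior as suspicious or not, which is what makes the pipeline cheap relative to I3D-style video feature extraction.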
Transparent Anomaly Detection via Concept-based Explanations
Advancements in deep learning techniques have given a boost to the
performance of anomaly detection. However, real-world and safety-critical
applications demand a level of transparency and reasoning beyond accuracy. The
task of anomaly detection (AD) focuses on finding whether a given sample
follows the learned distribution. Existing methods lack the ability to reason
about their outcomes with clear explanations. To overcome this challenge,
we propose Transparent Anomaly detection Concept Explanations (ACE). ACE
is able to provide human interpretable explanations in the form of concepts
along with anomaly prediction. To the best of our knowledge, this is the first
paper that proposes interpretable by-design anomaly detection. In addition to
promoting transparency in AD, it allows for effective human-model interaction.
Our proposed model shows either higher or comparable results to black-box
uninterpretable models. We validate the performance of ACE across three
realistic datasets - bird classification on CUB-200-2011, challenging
histopathology slide image classification on TIL-WSI-TCGA, and gender
classification on CelebA. We further demonstrate that our concept learning
paradigm can be seamlessly integrated with other classification-based AD
methods.
Comment: Accepted at the NeurIPS XAI in Action workshop.
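The interpretable-by-design idea above can be sketched as a concept bottleneck: the model scores named concepts first and derives the anomaly prediction only from those scores, so every prediction comes with a concept-level explanation. Everything below (the concept names, the sigmoid concept detectors, the weighted aggregation) is an illustrative assumption, not ACE's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

concepts = ["beak shape", "wing colour", "leg length"]  # illustrative names
C = rng.normal(size=(3, 16))    # stand-in concept detectors, one per row
w = np.array([0.2, 0.5, 0.3])   # assumed learned weight per concept

def predict_with_explanation(x):
    """Score concepts first, then derive the anomaly prediction from them,
    so the outcome is explainable in terms of named concepts."""
    scores = 1.0 / (1.0 + np.exp(-(C @ x)))   # sigmoid concept activations
    anomaly = float(w @ scores)               # anomaly prediction in (0, 1)
    ranked = sorted(zip(concepts, scores), key=lambda t: -t[1])
    return anomaly, ranked

x = rng.normal(size=16)
anomaly, ranked = predict_with_explanation(x)
print(round(anomaly, 3), ranked[0][0])        # prediction + top concept
```

Because the prediction is a function of the concept scores alone, a human can inspect (and in principle correct) the concept activations, which is the kind of human-model interaction the abstract highlights.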