Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective
Given the complexity and lack of transparency in deep neural networks (DNNs),
extensive efforts have been made to make these systems more interpretable or
explain their behaviors in accessible terms. Unlike most reviews, which focus
on algorithmic and model-centric perspectives, this work takes a "data-centric"
view, examining how data collection, processing, and analysis contribute to
explainable AI (XAI). We group existing work into three categories according to
purpose: interpretations of deep models, referring to feature
attributions and reasoning processes that correlate data points with model
outputs; influences of training data, examining the impact of training data
nuances, such as data valuation and sample anomalies, on decision-making
processes; and insights into domain knowledge, discovering latent patterns and
fostering new knowledge from data and models to advance social values and
scientific discovery. Specifically, we distill XAI methodologies into data
mining operations on training and testing data across modalities, such as
images, text, and tabular data, as well as on training logs, checkpoints,
models and other DNN behavior descriptors. In this way, our study offers a
comprehensive, data-centric examination of XAI through the lens of data mining
methods and applications.
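As a concrete instance of the feature attributions mentioned above, the sketch
below computes an input-gradient saliency map; the PyTorch model, input, and
function name are illustrative assumptions, not a method prescribed by the survey.

    # Minimal illustration (not from the paper): an input-gradient saliency map,
    # one kind of feature attribution computed as a data-mining operation on
    # test data. `model` is any PyTorch classifier that returns logits.
    import torch

    def saliency_map(model, x, target_class):
        model.eval()
        x = x.clone().requires_grad_(True)   # track gradients w.r.t. the input
        logits = model(x)                    # forward pass on a single example
        logits[0, target_class].backward()   # back-propagate the target logit
        return x.grad.abs().squeeze(0)       # per-feature attribution scores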
Explaining Deep Face Algorithms through Visualization: A Survey
Although current deep models for face tasks surpass human performance on some
benchmarks, we do not understand how they work. Thus, we cannot predict how they
will react to novel inputs, resulting in catastrophic failures and unwanted
biases in the algorithms. Explainable AI helps bridge the gap, but currently,
there are very few visualization algorithms designed for faces. This work
undertakes a first-of-its-kind meta-analysis of explainability algorithms in
the face domain. We explore the nuances and caveats of adapting general-purpose
visualization algorithms to the face domain, illustrated by computing
visualizations on popular face models. We review existing face explainability
works and reveal valuable insights into the structure and hierarchy of face
networks. We also determine the design considerations for practical face
visualizations accessible to AI practitioners by conducting a user study on the
utility of various explainability algorithms.
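For context, a general-purpose visualization algorithm of the kind the survey
adapts to face models might look like the Grad-CAM-style sketch below; the model,
the chosen convolutional layer, and the hook-based implementation are assumptions,
not details taken from the paper.

    # Hypothetical Grad-CAM-style visualization; `conv_layer` is any convolutional
    # module of the face model whose activations we want to explain.
    import torch
    import torch.nn.functional as F

    def grad_cam(model, x, target_class, conv_layer):
        feats, grads = {}, {}
        h1 = conv_layer.register_forward_hook(
            lambda m, inp, out: feats.update(a=out))
        h2 = conv_layer.register_full_backward_hook(
            lambda m, gin, gout: grads.update(a=gout[0]))
        model.eval()
        model(x)[0, target_class].backward()            # gradient of the target logit
        h1.remove(); h2.remove()
        w = grads["a"].mean(dim=(2, 3), keepdim=True)   # channel weights from pooled grads
        cam = F.relu((w * feats["a"]).sum(dim=1))       # weighted sum of feature maps
        return cam / (cam.max() + 1e-8)                 # normalized heatmap over the face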
GeXSe (Generative Explanatory Sensor System): An Interpretable Deep Generative Model for Human Activity Recognition in Smart Spaces
We introduce GeXSe (Generative Explanatory Sensor System), a novel framework
designed to extract interpretable sensor-based and vision domain features from
non-invasive smart space sensors. We combine these to provide a comprehensive
explanation of sensor-activation patterns in activity recognition tasks. This
system leverages advanced machine learning architectures, including transformer
blocks, Fast Fourier Convolution (FFC), and diffusion models, to provide a more
detailed understanding of sensor-based human activity data. A standout feature
of GeXSe is our unique Multi-Layer Perceptron (MLP) with linear, ReLU, and
normalization layers, specially devised for optimal performance on small
datasets. It also yields meaningful activation maps to explain sensor-based
activation patterns. This MLP outperforms the standard CNN-based approach.
GeXSe offers two types of explanations: sensor-based
activation maps and visual domain explanations using short videos. These
methods together interpret outputs derived from otherwise opaque sensor data,
augmenting the interpretability of the model. Evaluated with the Fréchet
Inception Distance (FID), GeXSe outperforms established methods, improving on
baseline performance by about 6%.
GeXSe also achieves an F1 score of up to 0.85, demonstrating strong precision,
recall, and robustness to noise, and marking significant progress toward reliable
and explainable smart space sensing systems.
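A rough sketch of the MLP block described above (linear, ReLU, and normalization
layers) is given below; the layer widths, depth, and choice of LayerNorm are
assumptions, since the abstract does not specify them.

    # Sketch of an MLP block with linear, ReLU, and normalization layers, in the
    # spirit of the GeXSe classifier; all dimensions here are illustrative.
    import torch.nn as nn

    def mlp_block(in_dim, hidden_dim, out_dim):
        return nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.LayerNorm(hidden_dim),       # normalization layer (assumed LayerNorm)
            nn.Linear(hidden_dim, out_dim),
        )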