7,009 research outputs found
Towards Interpretable Deep Learning Models for Knowledge Tracing
As an important technique for modeling the knowledge states of learners,
traditional knowledge tracing (KT) models have been widely used to support
intelligent tutoring systems and MOOC platforms. Driven by rapid advances in
deep learning, deep neural networks have recently been adopted to design new
KT models that achieve better prediction performance. However, the lack of
interpretability of these models has severely impeded their practical
application, as their decision processes are opaque and their inner structures
complex. We therefore propose adopting a post-hoc method to tackle the
interpretability issue for deep learning based knowledge tracing (DLKT)
models. Specifically, we focus on applying the layer-wise relevance
propagation (LRP) method to interpret an RNN-based DLKT model by
backpropagating relevance from the model's output layer to its input layer.
The experimental results show the feasibility of using the LRP method to
interpret the DLKT model's predictions, and partially validate the computed
relevance scores at both the question level and the concept level. We believe
this is a solid step towards fully interpreting DLKT models and promoting
their practical application in the education domain.
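
The abstract describes LRP only at a high level; the following minimal sketch
illustrates the core idea of relevance redistribution with the epsilon rule on
a toy feed-forward readout. It is a generic illustration rather than the
paper's RNN-specific procedure, and all names, shapes, and values below are
hypothetical.

# Minimal LRP (epsilon-rule) sketch on a toy two-layer readout.
import numpy as np

def lrp_linear(a, W, b, R_out, eps=1e-6):
    # Redistribute the relevance R_out assigned to z = W @ a + b back onto
    # the inputs a, in proportion to each input's contribution to z.
    z = W @ a + b
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilise near-zero denominators
    s = R_out / z
    return a * (W.T @ s)

rng = np.random.default_rng(0)
h = rng.normal(size=8)                     # e.g. a final RNN hidden state (hypothetical)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

a1 = np.maximum(W1 @ h + b1, 0.0)          # forward pass: ReLU hidden layer
y = W2 @ a1 + b2                           # scalar output, e.g. predicted correctness

R1 = lrp_linear(a1, W2, b2, y)             # relevance of hidden units
R0 = lrp_linear(h, W1, b1, R1)             # relevance of input features
print("input relevance:", np.round(R0, 3))

Relevance is approximately conserved from layer to layer, so summing R0
roughly recovers the model output; in the knowledge-tracing setting, the
analogous scores over past interactions would indicate which exercises drove
the prediction.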
Teaching Categories to Human Learners with Visual Explanations
We study the problem of computer-assisted teaching with explanations.
Conventional approaches to machine teaching typically provide feedback only at
the instance level, e.g., the category or label of the instance. However, it is
intuitive that clear explanations from a knowledgeable teacher can
significantly improve a student's ability to learn a new concept. To address
this limitation, we propose a teaching framework that provides
interpretable explanations as feedback and models how the learner incorporates
this additional information. In the case of images, we show that we can
automatically generate explanations that highlight the parts of the image that
are responsible for the class label. Experiments on human learners illustrate
that, on average, participants achieve better test set performance on
challenging categorization tasks when taught with our interpretable approach
compared to existing methods.
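
The abstract does not spell out how the image explanations are produced; one
simple, generic way to highlight the parts of an image responsible for a class
label is occlusion sensitivity, sketched below. This is an illustrative
stand-in rather than the paper's method; predict_fn, the patch size, and the
grey-fill choice are all assumptions.

# Hedged sketch: occlusion-based saliency. Mask each patch and record how much
# the target-class probability drops; large drops mark responsible regions.
import numpy as np

def occlusion_map(image, predict_fn, target_class, patch=8, stride=8):
    # image: (H, W) or (H, W, C) array; predict_fn: image -> class probabilities
    H, W = image.shape[:2]
    base = predict_fn(image)[target_class]
    heat = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()   # grey out one patch
            drop = base - predict_fn(occluded)[target_class]
            heat[y:y + patch, x:x + patch] = drop
    return heat   # overlay on the image to obtain a visual explanation

# Hypothetical usage with any classifier exposing class probabilities:
# heat = occlusion_map(img, lambda x: model_probabilities(x), target_class=3)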
Semantics of the Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable and Explainable?
The recent series of innovations in deep learning (DL) has shown enormous
potential to impact individuals and society, both positively and negatively.
DL models, utilizing massive computing power and enormous datasets, have
significantly outperformed prior historical benchmarks on increasingly
difficult, well-defined research tasks across technology domains such as
computer vision, natural language processing, signal processing, and
human-computer interaction. However, the black-box nature of DL models and
their over-reliance on massive amounts of data condensed into labels and dense
representations pose challenges for the interpretability and explainability of
these systems. Furthermore, DL models have not yet proven their ability to
effectively utilize relevant domain knowledge and experience, which are
critical to human understanding. This aspect is missing in early data-focused
approaches and has necessitated knowledge-infused learning and other
strategies for incorporating computational knowledge. This article
demonstrates how knowledge, provided as a knowledge graph, is incorporated
into DL methods using knowledge-infused learning, one such strategy. We then
discuss how this makes a fundamental difference to the interpretability and
explainability of current approaches, and illustrate it with examples from
natural language processing for healthcare and education applications.
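
As a concrete illustration of the knowledge-infusion idea, one simple strategy
is to fuse pre-computed knowledge-graph entity embeddings with learned text
features before the prediction head; the sketch below shows that pattern in
PyTorch. It is not the article's specific architecture, and every module name
and dimension is an assumption.

# Minimal sketch of fusing knowledge-graph embeddings with text features.
import torch
import torch.nn as nn

class KGInfusedClassifier(nn.Module):
    def __init__(self, vocab_size, n_entities, text_dim=128, kg_dim=64, n_classes=2):
        super().__init__()
        self.text_emb = nn.EmbeddingBag(vocab_size, text_dim)   # learned text features
        self.kg_emb = nn.Embedding(n_entities, kg_dim)          # e.g. pre-trained KG vectors
        self.kg_emb.weight.requires_grad = False                # keep graph knowledge fixed
        self.head = nn.Linear(text_dim + kg_dim, n_classes)

    def forward(self, token_ids, entity_ids):
        # token_ids: (batch, seq_len) word indices; entity_ids: (batch,) linked KG entities
        text_vec = self.text_emb(token_ids)
        kg_vec = self.kg_emb(entity_ids)
        return self.head(torch.cat([text_vec, kg_vec], dim=-1))

Because the knowledge-graph side of the representation is fixed and tied to
named entities, attributions over kg_vec can be read back as statements about
concepts in the graph, which is one route to the improved explainability the
article argues for.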
Interpretable deep learning in single-cell omics
Recent developments in single-cell omics technologies have enabled the
quantification of molecular profiles in individual cells at an unparalleled
resolution. Deep learning, a rapidly evolving sub-field of machine learning,
has generated significant interest in single-cell omics research due to its
remarkable success in analysing heterogeneous, high-dimensional single-cell
omics data. Nevertheless, the inherent multi-layer nonlinear architecture of
deep learning models often makes them 'black boxes', as the reasoning behind
their predictions is not transparent to the user. This has
stimulated a growing body of research aimed at addressing the lack of
interpretability in deep learning models, especially in single-cell omics data
analyses, where the identification and understanding of molecular regulators
are crucial for interpreting model predictions and directing downstream
experimental validations. In this work, we introduce the basics of single-cell
omics technologies and the concept of interpretable deep learning. This is
followed by a review of the recent interpretable deep learning models applied
to various single-cell omics research. Lastly, we highlight the current
limitations and discuss potential future directions. We anticipate this review
to bring together the single-cell and machine learning research communities to
foster future development and application of interpretable deep learning in
single-cell omics research.
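
As a small illustration of the kind of interpretability this review surveys,
the sketch below ranks the input genes of a hypothetical cell-type classifier
by gradient x input attribution; the model, the data, and the choice of
attribution method are all assumptions rather than details taken from the
review.

# Hedged sketch: gradient x input attribution for a toy expression classifier.
import torch
import torch.nn as nn

n_genes, n_types = 2000, 10
model = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, n_types))

expr = torch.rand(1, n_genes, requires_grad=True)    # one cell's expression profile
logits = model(expr)
target = int(logits.argmax(dim=1))
logits[0, target].backward()                         # d(class score)/d(gene expression)

attribution = (expr.grad * expr.detach()).abs().squeeze()   # gradient x input per gene
top_genes = torch.topk(attribution, k=10).indices            # candidate regulators to inspect
print(top_genes.tolist())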