Considerations for applying logical reasoning to explain neural network outputs
We discuss the impact of presenting explanations for Artificial Intelligence (AI) decisions powered by neural networks according to three types of logical reasoning (inductive, deductive, and abductive). Starting from examples in the existing literature on explaining artificial neural networks, we observe that abductive reasoning is (unintentionally) the most common default in user testing for comparing the quality of explanation techniques. We discuss whether this may be because this reasoning type balances the technical challenges of generating explanations against the effectiveness of the explanations. Finally, by illustrating how the original (abductive) explanation can be converted into the remaining two reasoning types, we identify the considerations needed to support such transformations.
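As a rough illustration of the three reasoning styles the abstract contrasts (this sketch is not code from the paper, and the feature attributions and class name are hypothetical), the same feature-attribution explanation can be phrased in each style:

```python
# Illustrative sketch: phrasing one hypothetical feature-attribution
# explanation in the three reasoning styles (abductive, deductive, inductive).

def explain(features, predicted_class, style):
    """Render a feature-attribution explanation in a given reasoning style."""
    top = max(features, key=features.get)  # most influential feature
    if style == "abductive":
        # Best available hypothesis for the observed output.
        return (f"The model predicted '{predicted_class}'; the most plausible "
                f"explanation is the high value of '{top}'.")
    if style == "deductive":
        # A general rule applied to this instance.
        return (f"Rule: if '{top}' is high, the class is '{predicted_class}'. "
                f"'{top}' is high here, therefore the class is '{predicted_class}'.")
    if style == "inductive":
        # Generalization from previously observed instances.
        return (f"In past samples, high '{top}' co-occurred with "
                f"'{predicted_class}', so this sample is likely '{predicted_class}'.")
    raise ValueError(f"unknown style: {style}")

attributions = {"wing_shape": 0.82, "beak_length": 0.11, "color": 0.07}
for s in ("abductive", "deductive", "inductive"):
    print(explain(attributions, "bird", s))
```

The three renderings carry the same attribution but differ in the inferential claim they make, which is the distinction the paper's conversions hinge on.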
How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice
Explainability is becoming an important requirement for organizations that make use of automated decision-making, due to regulatory initiatives and a shift in public awareness. Various and significantly different algorithmic methods to provide this explainability have been introduced in the field, but the existing literature in the machine learning community has paid little attention to the stakeholder, whose needs are studied instead in the human-computer interaction community. Organizations that want or need to provide this explainability are therefore confronted with selecting an appropriate method for their use case. In this paper, we argue there is a need for a methodology to bridge the gap between stakeholder needs and explanation methods. We present our ongoing work on creating this methodology to help data scientists in the process of providing explainability to stakeholders. In particular, our contributions include documents used to characterize XAI methods and user requirements (shown in the Appendix), which our methodology builds upon.
Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features
Deep learning (DL) models achieve remarkable performance in classification tasks. However, models with high complexity cannot be used in many risk-sensitive applications unless a comprehensible explanation is presented. Explainable artificial intelligence (xAI) focuses on research to explain the decision-making of AI systems such as DL. We extend a recent method, Class Activation Maps (CAMs), which visualizes the importance of each feature of a data sample contributing to the classification. In this paper, we aggregate CAMs from multiple samples to show a global explanation of the classification for semantically structured data. The aggregation allows the analyst to make sophisticated assumptions and analyze them with further drill-down visualizations. Our visual representation for the global CAM illustrates the impact of each feature with a square glyph containing two indicators: the color of the square indicates the classification impact of the feature, and the size of the filled square describes the variability of that impact between single samples. For interesting features that require further analysis, a detailed view providing the distribution of these values is necessary. We propose an interactive histogram to filter samples and refine the CAM to show relevant samples only. Our approach allows an analyst to detect important features of high-dimensional data and derive adjustments to the AI model based on our global explanation visualization.

Comment: submitted to xaiworldconference202
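The aggregation behind the glyph encoding can be sketched in a few lines: given CAM values for many samples over the same features, the glyph color corresponds to the per-feature mean impact and the filled-square size to the per-feature variability. This is a hedged reading of the abstract, with made-up data and a made-up size normalization:

```python
import numpy as np

# Illustrative sketch of the aggregation idea: rows are samples, columns are
# features, values are per-sample CAM impacts. Data and scaling are invented.
rng = np.random.default_rng(0)
cams = rng.normal(loc=[0.8, 0.1, -0.5], scale=[0.05, 0.4, 0.1], size=(100, 3))

mean_impact = cams.mean(axis=0)   # -> glyph color (classification impact)
variability = cams.std(axis=0)    # -> filled-square size (impact spread)

# Normalize variability to a glyph size in [0, 1] for rendering.
size = variability / variability.max()

for name, m, s in zip(["feat_a", "feat_b", "feat_c"], mean_impact, size):
    print(f"{name}: impact={m:+.2f}, glyph_size={s:.2f}")
```

Here `feat_b` would draw the largest filled square (highest variability across samples), flagging it as a candidate for the drill-down histogram view.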
Deep Learning with Multimodal Data for Healthcare
Healthcare plays a significant role in communities in promoting and maintaining health, preventing and managing disease, reducing disability and premature death, and educating people about healthy lifestyles. However, healthcare information is well known as big data that is too vast and complex to manage manually. Healthcare data is heterogeneous, containing different modalities or types of information such as text, audio, images, and multivariate data. Over the last few years, the Deep Learning (DL) approach has successfully solved many issues. The primary structure of DL lies in the Artificial Neural Network (ANN). DL approaches are also known as representation learning techniques because they can effectively identify hidden patterns in the data without requiring any explicit feature extraction mechanism; in other words, DL architectures support automatic feature extraction. This differs from traditional machine learning techniques, where features must be extracted separately.
In this dissertation, we propose three DL architectures to handle multimodal data in healthcare. We systematically develop prediction models for identifying health conditions in several groups, including Post-Traumatic Stress Disorder (PTSD), Parkinson's Disease (PD), and PD with Dementia (PD-Dementia). First, we design a DL framework for identifying PTSD among cancer survivors via social media. Next, we apply a DL time-series approach to forecast PD patients' future health status. Last, we build a DL architecture to identify dementia in diagnosed PD patients. All of this work is motivated by medical theories and health informatics perspectives. Throughout these studies we have handled multimodal healthcare information, including text, audio features, and multivariate data. We also carefully studied each disease's background, including its symptoms and the test assessments run by healthcare providers. We explored the potential of online social media and the capability of medical applications for disease diagnosis and health monitoring, in order to employ the developed models in real-world scenarios.
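A common pattern for handling the modalities named above (text, audio features, multivariate data) is late fusion: encode each modality separately, concatenate the encodings, and apply a shared prediction head. The sketch below is an illustrative stand-in, not one of the dissertation's three architectures, and its weights are random placeholders for trained parameters:

```python
import numpy as np

# Minimal late-fusion sketch (illustrative only). Each modality gets its own
# encoder; the fused representation feeds a binary prediction head.
rng = np.random.default_rng(42)

def encode(x, w):
    """One dense layer with ReLU as a stand-in modality encoder."""
    return np.maximum(x @ w, 0.0)

text_emb = rng.normal(size=(1, 16))   # e.g. social-media text features
tabular  = rng.normal(size=(1, 8))    # e.g. multivariate clinical features

h_text = encode(text_emb, rng.normal(size=(16, 4)))
h_tab  = encode(tabular,  rng.normal(size=(8, 4)))

fused = np.concatenate([h_text, h_tab], axis=1)   # (1, 8) joint representation
logit = fused @ rng.normal(size=(8, 1))
prob = 1.0 / (1.0 + np.exp(-logit))               # sigmoid for a binary outcome

print(f"predicted probability: {prob.item():.3f}")
```

In practice the encoders would be trained end to end and could differ per modality (e.g. a recurrent encoder for time series), but the fusion step stays the same.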
DL for healthcare can be very helpful in supporting clinicians' decisions and improving patient care. Leading institutions and medical bodies have recognized the benefits it brings, and the popularity of these solutions is well known. With support from a reliable computational system, healthcare providers could better assess particular needs and environments and reduce the stresses that medical professionals experience daily. Healthcare has high hopes for the role of DL in clinical decision support and predictive analytics for a wide variety of conditions.
A Multidisciplinary Design and Evaluation Framework for Explainable AI Systems
Nowadays, algorithms analyze user data and affect the decision-making process for millions of people on matters like employment, insurance and loan rates, and even criminal justice. However, these algorithms that serve critical roles in many industries have their own biases that can result in discrimination and unfair decision-making.
Explainable Artificial Intelligence (XAI) systems can be a path toward predictable and accountable AI: by explaining AI decision-making processes to end users, they increase user awareness and help prevent bias and discrimination. The broad spectrum of research on XAI, including designing interpretable models, explainable user interfaces, and human-subject studies of XAI systems, is pursued in different disciplines such as machine learning, human-computer interaction (HCI), and visual analytics. The mismatch in how scholars across these disciplines define, design, and evaluate XAI may slow down the overall advance of end-to-end XAI systems.
My research aims to converge knowledge about the design and evaluation of XAI systems across multiple disciplines to further support the key benefits of algorithmic transparency and interpretability. To this end, this dissertation presents a comprehensive design and evaluation framework for XAI systems, with step-by-step guidelines that pair different design goals with their evaluation methods for iterative system design cycles in multidisciplinary teams.
After a thorough review of XAI research in the fields of machine learning, visualization, and HCI, I present a categorization of XAI design goals and evaluation methods and show a mapping between design goals for different XAI user groups and their evaluation methods.
From my findings, I present a design and evaluation framework for XAI systems (Objective 1) that addresses the relations among different system design needs. The framework provides recommendations for different design goals and ready-to-use tables of evaluation methods for XAI systems. The importance of this framework lies in providing guidance for researchers on different aspects of XAI system design in multidisciplinary team efforts.
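The kind of goal-to-method lookup such tables enable can be sketched as a simple mapping. The pairings below are common ones from the XAI evaluation literature, used here purely as illustrative entries, not the dissertation's actual tables:

```python
# Illustrative goal -> evaluation-method lookup (entries are examples from the
# broader XAI literature, not the dissertation's framework tables).
EVALUATION_METHODS = {
    "mental model": ["user interviews", "self-explanation tasks"],
    "usefulness and satisfaction": ["Likert-scale questionnaires", "case studies"],
    "user trust": ["trust scales", "agreement/reliance measures"],
    "task performance": ["human-AI task accuracy", "completion time"],
}

def methods_for(goal):
    """Return candidate evaluation methods for a design goal (empty if unknown)."""
    return EVALUATION_METHODS.get(goal.lower(), [])

print(methods_for("User Trust"))
```

A multidisciplinary team could consult such a table at each design iteration to pick an evaluation method matched to the goal currently under study.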
Then, I demonstrate and validate the proposed framework (Objective 2) through one end-to-end XAI system case study and two examples in which previous XAI systems are analyzed in terms of the framework. I also present two contributions to the XAI design and evaluation framework that improve evaluation methods for XAI systems.
A user-based taxonomy for deep learning visualization
Deep learning has achieved impressive success in a variety of tasks and has developed rapidly in recent years. Understanding deep learning models has become an issue for the development of the field, for example in domains like medicine and finance that require interpretable models. While it is challenging to analyze and interpret complicated deep neural networks, visualization is good at bridging abstract data and intuitive representations. Visual analytics for deep learning is a rapidly growing research field. To help users better understand this field, we present a mini-survey built around a user-based taxonomy that covers state-of-the-art work in the field. Regarding the requirements of different types of users (beginners, practitioners, developers, and experts), we categorize the methods and tools by four visualization goals: teaching deep learning concepts, architecture assessment, debugging and improving models, and visual explanation. Notably, we present a table listing the name of each method or tool, the year, the visualization goal, and the types of networks to which it can be applied, to help users find available tools and methods quickly. To emphasize the importance of visual explanation for deep learning, we introduce the studies in this research field in detail.

Keywords: Deep learning, Visualization, Interpretation
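The survey's lookup-table idea can be sketched as a small filterable record set. The tools below are real, well-known examples, but their assignment to the four goals here is my own illustrative reading, not the survey's actual table:

```python
# Illustrative tool table indexed by visualization goal. Tool names are real;
# the goal assignments are an example reading, not the survey's table.
TOOLS = [
    {"name": "CNN Explainer", "year": 2020, "goal": "teaching concepts",       "networks": "CNN"},
    {"name": "Netron",        "year": 2017, "goal": "architecture assessment", "networks": "many"},
    {"name": "TensorBoard",   "year": 2015, "goal": "debugging/improving",     "networks": "many"},
    {"name": "Grad-CAM",      "year": 2017, "goal": "visual explanation",      "networks": "CNN"},
]

def find_tools(goal):
    """Filter the table by visualization goal, returning matching tool names."""
    return [t["name"] for t in TOOLS if t["goal"] == goal]

print(find_tools("visual explanation"))
```

A beginner would filter for "teaching concepts" while a developer would filter for "debugging/improving", which mirrors how the taxonomy routes each user type to a goal.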