Efficient XAI Techniques: A Taxonomic Survey
Recently, there has been a growing demand for the deployment of Explainable
Artificial Intelligence (XAI) algorithms in real-world applications. However,
traditional XAI methods typically suffer from high computational complexity,
which hinders their deployment in real-time systems that must meet the timing
requirements of real-world scenarios. Although many approaches
have been proposed to improve the efficiency of XAI methods, a comprehensive
understanding of the achievements and challenges is still needed. To this end,
in this paper we provide a review of efficient XAI. Specifically, we categorize
existing XAI acceleration techniques into efficient non-amortized and
efficient amortized methods. Efficient non-amortized methods focus on
data-centric or model-centric acceleration for each individual instance. In
contrast, amortized methods focus on learning a unified distribution of model
explanations, following the predictive, generative, or reinforcement
frameworks, to rapidly derive multiple model explanations. We also analyze the
limitations of an efficient XAI pipeline from the perspectives of the training
phase, the deployment phase, and the usage scenarios. Finally, we summarize
the open challenges: deploying XAI acceleration methods in real-world
scenarios, navigating the trade-off between faithfulness and efficiency, and
selecting among different acceleration methods.
Comment: 15 pages, 3 figures
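The non-amortized vs. amortized distinction above can be illustrated with a minimal sketch. All names here (the linear stand-in model, the occlusion-style explainer, the least-squares surrogate) are illustrative assumptions, not methods from the survey: a non-amortized explainer makes one model call per feature for every instance, while an amortized surrogate is trained once on precomputed explanations and then produces attributions in a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a fixed linear scorer (stand-in for any model).
w = np.array([2.0, -1.0, 0.5])

def model(x):
    return x @ w

def occlusion_attribution(x):
    """Non-amortized explainer: score drop when each feature is zeroed.
    One model call per feature, per instance -- this is the slow path."""
    base = model(x)
    attrs = np.empty_like(x)
    for j in range(x.size):
        x_occ = x.copy()
        x_occ[j] = 0.0
        attrs[j] = base - model(x_occ)
    return attrs

# Amortized explainer: fit one cheap surrogate that maps inputs directly to
# attributions, trained on explanations precomputed for a reference set.
X_train = rng.normal(size=(200, 3))
A_train = np.vstack([occlusion_attribution(x) for x in X_train])
W_surrogate, *_ = np.linalg.lstsq(X_train, A_train, rcond=None)

def amortized_attribution(x):
    """One matrix-vector product instead of d+1 model calls."""
    return x @ W_surrogate

x_new = rng.normal(size=3)
print(np.allclose(amortized_attribution(x_new), occlusion_attribution(x_new)))
```

Because the stand-in model is linear, the surrogate matches the slow explainer exactly; for real models the surrogate is an approximation, which is precisely the faithfulness-vs-efficiency trade-off the abstract mentions.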
Machine Learning Explanations to Prevent Overtrust in Fake News Detection
Combating fake news and misinformation propagation is a challenging task in
the post-truth era. News feed and search algorithms could potentially lead to
unintentional large-scale propagation of false and fabricated information with
users exposed to algorithmically selected false content. Our research
investigates the effects of an Explainable AI assistant embedded in
news-review platforms on combating the propagation of fake news. We design a news
reviewing and sharing interface, create a dataset of news stories, and train
four interpretable fake news detection algorithms to study the effects of
algorithmic transparency on end-users. We present evaluation results and
analysis from multiple controlled crowdsourced studies. For a deeper
understanding of Explainable AI systems, we discuss interactions between user
engagement, mental model, trust, and performance measures in the process of
explaining. The study results indicate that explanations helped participants to
build appropriate mental models of the intelligent assistants in different
conditions and adjust their trust accordingly for model limitations
Explainability for Large Language Models: A Survey
Large language models (LLMs) have demonstrated impressive capabilities in
natural language processing. However, their internal mechanisms are still
unclear, and this lack of transparency poses unwanted risks for downstream
applications. Therefore, understanding and explaining these models is crucial
for elucidating their behaviors, limitations, and social impacts. In this
paper, we introduce a taxonomy of explainability techniques and provide a
structured overview of methods for explaining Transformer-based language
models. We categorize techniques based on the training paradigms of LLMs:
the traditional fine-tuning-based paradigm and the prompting-based paradigm. For each
paradigm, we summarize the goals and dominant approaches for generating local
explanations of individual predictions and global explanations of overall model
knowledge. We also discuss metrics for evaluating generated explanations and
how explanations can be leveraged to debug models and improve
performance. Lastly, we examine key challenges and emerging opportunities for
explanation techniques in the era of LLMs in comparison to conventional
machine learning models.