Machine Learning Explanations to Prevent Overtrust in Fake News Detection
Combating fake news and misinformation propagation is a challenging task in
the post-truth era. News feed and search algorithms could potentially lead to
unintentional large-scale propagation of false and fabricated information with
users being exposed to algorithmically selected false content. Our research
investigates the effects of an Explainable AI assistant embedded in news review
platforms for combating the propagation of fake news. We design a news
reviewing and sharing interface, create a dataset of news stories, and train
four interpretable fake news detection algorithms to study the effects of
algorithmic transparency on end-users. We present evaluation results and
analysis from multiple controlled crowdsourced studies. For a deeper
understanding of Explainable AI systems, we discuss interactions between user
engagement, mental model, trust, and performance measures in the process of
explaining. The study results indicate that explanations helped participants
build appropriate mental models of the intelligent assistants under different
conditions and adjust their trust in line with the models' limitations.
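The four interpretable detectors the study trains are not specified in this abstract, but the general pattern they rely on — a transparent model whose learned weights double as the explanation shown to end users — can be sketched as follows. Everything here (the headlines, labels, and log-odds scoring) is illustrative, not the paper's actual data or models:

```python
# A minimal sketch (not the paper's actual models) of an interpretable
# fake-news detector: per-word log-odds scores learned from labelled
# headlines, where the highest-scoring words in a new headline serve as
# the explanation shown to the user. All data below is illustrative.
import math
from collections import Counter

real = ["city council approves budget after public hearing",
        "study finds moderate exercise improves sleep"]
fake = ["shocking secret cure they do not want you to know",
        "miracle pill melts fat overnight doctors hate it"]

def counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

cr, cf = counts(real), counts(fake)
vocab = set(cr) | set(cf)

# Laplace-smoothed log-odds: positive means the word leans "fake".
score = {w: math.log((cf[w] + 1) / (cr[w] + 1)) for w in vocab}

def classify(text, k=3):
    """Label a headline and return the k words most responsible."""
    words = [w for w in text.split() if w in score]
    total = sum(score[w] for w in words)
    evidence = sorted(words, key=lambda w: -score[w])[:k]
    label = "fake" if total > 0 else "real"
    return label, evidence

label, evidence = classify("shocking miracle cure doctors hate it")
print(label, evidence)
```

Exposing the evidence words alongside the verdict is the kind of transparency the study's crowdsourced conditions vary: users can check whether the model's reasons match their own.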
Detecting Multimedia Generated by Large AI Models: A Survey
The rapid advancement of Large AI Models (LAIMs), particularly diffusion
models and large language models, has marked a new era where AI-generated
multimedia is increasingly integrated into various aspects of daily life.
Although beneficial in numerous fields, this content presents significant
risks, including potential misuse, societal disruptions, and ethical concerns.
Consequently, detecting multimedia generated by LAIMs has become crucial, with
a marked rise in related research. Despite this, there remains a notable gap in
systematic surveys that focus specifically on detecting LAIM-generated
multimedia. Addressing this, we provide the first survey to comprehensively
cover existing research on detecting multimedia (such as text, images, videos,
audio, and multimodal content) created by LAIMs. Specifically, we introduce a
novel taxonomy for detection methods, categorized by media modality, and
aligned with two perspectives: pure detection (aiming to enhance detection
performance) and beyond detection (adding attributes like generalizability,
robustness, and interpretability to detectors). Additionally, we present a
brief overview of generation mechanisms, public datasets, and online detection
tools, providing a valuable resource for researchers and practitioners in this
field. Furthermore, we identify current challenges in
detection and propose directions for future research that address unexplored,
ongoing, and emerging issues in detecting multimedia generated by LAIMs. Our
aim for this survey is to fill an academic gap and contribute to global AI
security efforts, helping to ensure the integrity of information in the digital
realm. The project link is
https://github.com/Purdue-M2/Detect-LAIM-generated-Multimedia-Survey
XAI-CF -- Examining the Role of Explainable Artificial Intelligence in Cyber Forensics
With the rise of complex cyber devices, Cyber Forensics (CF) is facing many
new challenges. For example, dozens of systems run on smartphones, each with
access to millions of downloadable applications. Sifting through this large
amount of data and making sense of it requires new techniques, such as those
from the field of Artificial Intelligence (AI). To apply these
techniques successfully in CF, we need to justify and explain the results to
the stakeholders of CF, such as forensic analysts and members of the court, for
them to make an informed decision. If we want to apply AI successfully in CF,
we need to develop trust in AI systems. Other factors in the acceptance of AI
in CF include making it authentic, interpretable, understandable, and
interactive. This way, AI systems will be more acceptable to the public and
ensure alignment with legal standards. An explainable AI (XAI) system can play
this role in CF, and we call such a system XAI-CF. XAI-CF is indispensable and
is still in its infancy. In this paper, we explore and make a case for the
significance and advantages of XAI-CF. We strongly emphasize the need to build
a successful and practical XAI-CF system and discuss some of the main
requirements and prerequisites of such a system. We present a formal definition
of the terms CF and XAI-CF and a comprehensive literature review of previous
works that apply and utilize XAI to build and increase trust in CF. We discuss
some challenges facing XAI-CF. We also provide some concrete solutions to these
challenges. We identify key insights and future research directions for
building XAI applications for CF. This paper is an effort to explore and
familiarize the readers with the role of XAI applications in CF, and we believe
that our work provides a promising basis for future researchers interested in
XAI-CF.
Explainable and Interpretable Face Presentation Attack Detection Methods
Decision support systems based on machine learning (ML) techniques are excelling in most artificial intelligence (AI) fields, outperforming other AI methods as well as humans. However, challenges remain that do not favour the dominance of AI in some applications. This proposal focuses on a critical one: the lack of transparency and explainability, which reduces the trust in, and accountability of, an AI system. The fact that most AI methods still operate as complex black boxes makes the inner processes that sustain their predictions unattainable. Awareness of these observations fosters the need to regulate many sensitive domains where AI has been applied, in order to interpret, explain, and audit the reliability of ML-based systems.
Although modern-day biometric recognition (BR) systems are already benefiting from the performance gains achieved with AI (which can account for and learn subtle changes in the person to be authenticated, or statistical mismatches between samples), the field is still in the dark ages of black-box models, without reaping the benefits of XAI. This work will study AI explainability in biometrics, focusing on particular use cases in BR, such as verification/identification of individuals and liveness detection (LD) (a.k.a. anti-spoofing).
The main goals of this work are: i) to become acquainted with the state of the art in explainability, biometric recognition, and PAD methods; ii) to develop an experimental work xxxxx
Tasks 1st semester
(1) Study of the state of the art: bibliography review on presentation attack detection
(2) Get acquainted with the group's previous work on the topic
(3) Data preparation and data pre-processing
(4) Define the experimental protocol, including performance metrics
(5) Perform baseline experiments
(6) Write monograph
Tasks 2nd semester
(1) Update on the state of the art
(2) Data preparation and data pre-processing
(3) Propose and implement a methodology for interpretability in biometrics
(4) Evaluation of the performance and comparison with baseline and state-of-the-art approaches
(5) Dissertation writing
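Task (3) of the second semester — a methodology for interpretability in biometrics — could start from something as simple as occlusion sensitivity: slide a masking window over the input and record how much the detector's score drops. The sketch below is a toy illustration under invented assumptions; `pad_score` is a stand-in for a trained presentation-attack detector, and the 4x4 "image" is made up:

```python
# Toy occlusion-sensitivity sketch for PAD interpretability.
# pad_score is a hypothetical detector: here, just the mean intensity of
# the centre 2x2 patch, standing in for a real model's "live" score.
def pad_score(image):
    return sum(image[r][c] for r in (1, 2) for c in (1, 2)) / 4

def occlusion_map(image, window=2):
    """Heatmap of how much the score drops when each region is masked."""
    n = len(image)
    base = pad_score(image)
    heat = [[0.0] * n for _ in range(n)]
    for r in range(n - window + 1):
        for c in range(n - window + 1):
            occluded = [row[:] for row in image]
            for dr in range(window):
                for dc in range(window):
                    occluded[r + dr][c + dc] = 0.0
            drop = base - pad_score(occluded)
            # Each pixel keeps the largest drop of any window covering it.
            for dr in range(window):
                for dc in range(window):
                    heat[r + dr][c + dc] = max(heat[r + dr][c + dc], drop)
    return heat

image = [[0.1, 0.2, 0.1, 0.0],
         [0.2, 0.9, 0.8, 0.1],
         [0.1, 0.8, 0.9, 0.2],
         [0.0, 0.1, 0.2, 0.1]]
heat = occlusion_map(image)
print(heat[1][1])  # centre pixels matter most to this toy score
```

The appeal for PAD is that the same recipe works with any black-box detector, which makes it a natural baseline before moving to gradient-based explanations.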
Main bibliographic references: (*)
[Doshi17] B. Kim and F. Doshi-Velez, "Interpretable machine learning: The fuss, the concrete and the questions," 2017
[Mol19] Christoph Molnar. Interpretable Machine Learning. 2019
[Sei18] C. Seibold, W. Samek, A. Hilsmann, and P. Eisert, "Accurate and robust neural networks for security related applications exampled by face morphing attacks," arXiv preprint arXiv:1806.04265, 2018
[Seq20] Sequeira, Ana F., João T. Pinto, Wilson Silva, Tiago Gonçalves and Cardoso, Jaime S., "Interpretable Biometrics: Should We Rethink How Presentation Attack Detection is Evaluated?", 8th IWBF2020
[Wilson18] W. Silva, K. Fernandes, M. J. Cardoso, and J. S. Cardoso, "Towards complementary explanations using deep neural networks," in Understanding and Interpreting Machine Learning in MICA. Springer, 2018
[Wilson19] W. Silva, K. Fernandes, and J. S. Cardoso, "How to produce complementary explanations using an Ensemble Model," in IJCNN. 2019
[Wilson19A] W. Silva, M. J. Cardoso, and J. S. Cardoso, "Image captioning as a proxy for Explainable Decisions," in Understanding and Interpreting Machine Learning in MICA, 2019 (Submitted)
Towards an Understanding and Explanation for Mixed-Initiative Artificial Scientific Text Detection
Large language models (LLMs) have gained popularity in various fields for
their exceptional capability of generating human-like text. Their potential
misuse has raised social concerns about plagiarism in academic contexts.
However, effective artificial scientific text detection is a non-trivial task
due to several challenges, including 1) the lack of a clear understanding of
the differences between machine-generated and human-written scientific text, 2)
the poor generalization performance of existing methods caused by
out-of-distribution issues, and 3) the limited support for human-machine
collaboration with sufficient interpretability during the detection process. In
this paper, we first identify the critical distinctions between
machine-generated and human-written scientific text through a quantitative
experiment. Then, we propose a mixed-initiative workflow that combines human
experts' prior knowledge with machine intelligence, along with a visual
analytics prototype to facilitate efficient and trustworthy scientific text
detection. Finally, we demonstrate the effectiveness of our approach through
two case studies and a controlled user study with proficient researchers. We
also provide design implications for interactive artificial text detection
tools in high-stakes decision-making scenarios.
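The "critical distinctions" the paper quantifies are not reproduced in this abstract, but distinctions of this kind are typically simple stylometric statistics. The sketch below computes two such features; the feature set and sample text are invented for illustration and are not the paper's actual discriminators:

```python
# Toy stylometric features of the kind used to probe differences between
# machine-generated and human-written scientific text. Illustrative only.
import re
import statistics

def stylometric_features(text):
    """Lexical diversity and sentence-length variability of a passage."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = text.lower().split()
    lengths = [len(s.split()) for s in sentences]
    return {
        "type_token_ratio": len(set(words)) / len(words),
        "mean_sentence_len": statistics.mean(lengths),
        "sentence_len_stdev": statistics.pstdev(lengths),
    }

human = ("We ran the assay twice. Oddly, the second run failed. "
         "After recalibrating the spectrometer, results stabilised.")
feats = stylometric_features(human)
print(feats)
```

Human prose tends to show higher sentence-length variance than LLM output, which is one reason such cheap features remain useful alongside learned detectors.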
Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
The proliferation of DeepFake technology is a rising challenge in today's
society, owing to more powerful and accessible generation methods. To counter
this, the research community has developed detectors of ever-increasing
accuracy. However, the ability to explain the decisions of such models to users
lags behind and is treated as an accessory in large-scale benchmarks,
despite being a crucial requirement for the correct deployment of automated
tools for content moderation. We attribute the issue to the reliance on
qualitative comparisons and the lack of established metrics. We describe a
simple set of metrics to evaluate the visual quality and informativeness of
explanations of video DeepFake classifiers from a human-centric perspective.
With these metrics, we compare common approaches to improve explanation quality
and discuss their effect on both classification and explanation performance on
the recent DFDC and DFD datasets.
Comment: Accepted at BMVC 2022, code repository at
https://github.com/baldassarreFe/deepfake-detectio
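As a concrete example of the quantitative flavour such metrics can take (not one of the paper's actual metrics), a localisation score for a saliency explanation can be computed as the IoU between the thresholded saliency map and the annotated manipulated region of a frame. The 4x4 maps below are toy data:

```python
# Toy localisation metric for a DeepFake explanation: IoU between the
# pixels a saliency map highlights and the ground-truth manipulated
# region. Maps below are illustrative 4x4 grids, not real frames.
def iou(saliency, mask, thresh=0.5):
    """IoU between pixels where saliency > thresh and the binary mask."""
    inter = union = 0
    for srow, mrow in zip(saliency, mask):
        for s, m in zip(srow, mrow):
            hit = s > thresh
            inter += 1 if (hit and m) else 0
            union += 1 if (hit or m) else 0
    return inter / union if union else 1.0

saliency = [[0.9, 0.8, 0.1, 0.0],
            [0.7, 0.6, 0.2, 0.1],
            [0.1, 0.1, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0]]
mask = [[1, 1, 0, 0],   # annotated manipulated face region
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(iou(saliency, mask))
```

A metric like this rewards explanations that point at the manipulated region rather than merely looking plausible, which is exactly the human-centric property qualitative comparisons fail to pin down.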
NOTION OF EXPLAINABLE ARTIFICIAL INTELLIGENCE - AN EMPIRICAL INVESTIGATION FROM A USER'S PERSPECTIVE
The growing attention on artificial intelligence-based decision-making has led to research interest in the explainability and interpretability of machine learning models, algorithmic transparency, and comprehensibility. This renewed attention on XAI advocates the need to investigate end-user-centric explainable AI, due to the universal adoption of AI-based systems at the root level. Therefore, this paper investigates user-centric explainable AI in the context of recommendation systems. We conducted focus group interviews to collect qualitative data on the recommendation system. We asked participants about end users' comprehension of a recommended item, its probable explanation, and their opinion on making a recommendation explainable. Our findings reveal that end users want a non-technical, tailor-made explanation with on-demand supplementary information. Moreover, we observed that users would like explanations about personal data usage and detailed user feedback, and that explanations should be authentic and reliable. Finally, we propose a synthesized framework that includes end users in the XAI development process.
XAI for All: Can Large Language Models Simplify Explainable AI?
The field of Explainable Artificial Intelligence (XAI) often focuses on users
with a strong technical background, making it challenging for non-experts to
understand XAI methods. This paper presents "x-[plAIn]", a new approach to make
XAI more accessible to a wider audience through a custom Large Language Model
(LLM), developed using ChatGPT Builder. Our goal was to design a model that can
generate clear, concise summaries of various XAI methods, tailored for
different audiences, including business professionals and academics. The key
feature of our model is its ability to adapt explanations to match each
audience group's knowledge level and interests. Our approach also delivers
timely insights, facilitating the decision-making process for end users.
Results from our use-case studies show that our model is effective in providing
easy-to-understand, audience-specific explanations, regardless of the XAI
method used. This adaptability improves the accessibility of XAI, bridging the
gap between complex AI technologies and their practical applications. Our
findings indicate a promising direction for LLMs in making advanced AI concepts
more accessible to a diverse range of users.