XAI-Driven CNN for Diabetic Retinopathy Detection
Diabetes, a chronic metabolic disorder, poses a significant health threat with potentially severe consequences, including diabetic retinopathy, a leading cause of blindness. In this project, we tackle this threat by developing a Convolutional Neural Network (CNN) to support the diagnosis based on eye images. The aim is early detection and intervention to mitigate the effects of diabetes on eye health. To enhance transparency and interpretability, we incorporate explainable AI techniques. This research not only contributes to the early diagnosis of diabetic eye disease but also advances our understanding of how deep learning models arrive at their decisions, fostering trust and clinical applicability in healthcare diagnostics.
Our results show that our CNN model performs exceptionally well in classifying ocular images, attaining a 91% accuracy rate. Furthermore, we implemented explainable AI techniques such as LIME (Local Interpretable Model-agnostic Explanations), which improve the transparency of the model's decision-making. LIME highlighted the areas of interest in the eye images, deepening our understanding of the model's predictions. The high accuracy and interpretability of our approach demonstrate its potential for clinical applications and the broader field of healthcare diagnostics.
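LIME's core mechanism (perturb the input, query the black box, and fit a proximity-weighted linear surrogate whose coefficients act as feature importances) can be sketched in a few lines. The `black_box` function, its coefficients, and the "region" semantics below are hypothetical stand-ins for illustration only, not the project's CNN or data:

```python
import numpy as np

# Hypothetical black-box classifier over a 4-"superpixel" image representation:
# given a binary mask of which regions are visible, it returns a probability.
# This stand-in is deliberately linear so the surrogate can recover it exactly.
def black_box(mask):
    # region 2 drives the prediction; region 0 contributes slightly
    return 0.1 + 0.15 * mask[0] + 0.6 * mask[2]

def lime_explain(predict, n_features=4, n_samples=500, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb: randomly hide regions (1 = visible, 0 = hidden)
    Z = rng.integers(0, 2, size=(n_samples, n_features)).astype(float)
    Z[0] = 1.0  # include the unperturbed instance
    y = np.array([predict(z) for z in Z])
    # 2. Weight each sample by proximity to the original (all-ones) instance
    d = np.sqrt(((Z - 1.0) ** 2).sum(axis=1)) / np.sqrt(n_features)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Fit a weighted linear surrogate; coefficients = region importances
    sw = np.sqrt(w)
    A = np.hstack([Z, np.ones((n_samples, 1))]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return coef[:n_features]

importances = lime_explain(black_box)
print(importances.round(2))  # region 2 receives the largest weight
```

In the real system the mask would toggle image superpixels and `predict` would be the trained CNN; the surrogate's coefficients then indicate which retinal regions most influenced the diagnosis.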
Explaining Machine Learning DGA Detectors from DNS Traffic Data
One of the most common causes of loss of continuity in online systems is a widely known cyber attack, Distributed Denial of Service (DDoS), in which a network of infected devices (a botnet) is exploited to flood the computational capacity of services at an attacker's command. The attack leverages the Domain Name System (DNS) through Domain Generation Algorithms (DGAs), a stealthy connection strategy that nonetheless leaves suspicious data patterns. Advances have been made in detecting such threats, most of which rely on Machine Learning (ML), which can be highly effective in analyzing and classifying massive amounts of data. Although they perform strongly, ML models retain a degree of obscurity in their decision-making process. To cope with this problem, a branch of ML known as Explainable ML tries to break down the black-box nature of classifiers and make them interpretable and human-readable. This work addresses Explainable ML in the context of botnet and DGA detection and, to the best of our knowledge, is the first to concretely break down the decisions of ML classifiers devised for botnet/DGA detection, thereby providing both global and local explanations.
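As a minimal illustration of the "suspicious data patterns" that DGAs leave behind, two classic lexical features used by ML-based DGA detectors are character-level Shannon entropy and vowel ratio: algorithmically generated names tend to be high-entropy and vowel-poor. The sketch below is a toy example with invented domain strings, not the classifiers or data studied in this work:

```python
import math
from collections import Counter

def shannon_entropy(domain: str) -> float:
    """Character-level Shannon entropy of a domain label (bits per character)."""
    counts = Counter(domain)
    n = len(domain)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def vowel_ratio(domain: str) -> float:
    """Fraction of vowels; human-chosen names are usually pronounceable."""
    return sum(ch in "aeiou" for ch in domain) / len(domain)

# A human-registered name vs. a DGA-looking string (both invented examples)
for name in ("google", "xjw3kq9zpl2m"):
    print(name, round(shannon_entropy(name), 2), round(vowel_ratio(name), 2))
```

In a full pipeline, features like these (alongside n-gram statistics and length) would form the input vector to an ML classifier, and an Explainable ML method would then attribute each classification back to individual features.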
Analysis of Trustworthiness in Machine Learning and Deep Learning
Trustworthy Machine Learning (TML) denotes a set of mechanisms and explainable layers that enrich a learning model so that it is clear, understood, and thus trusted by users. This paper conducts a literature review to provide a comprehensive analysis of how TML is perceived. A quantitative study, accompanied by qualitative observations, is presented by categorizing machine learning algorithms with an emphasis on deep learning, whose models have achieved very high performance as real-world function approximators (e.g., natural language and signal processing, robotics, etc.). However, for these models to be fully adopted by humans, a level of transparency must be guaranteed, which is harder to achieve with recent techniques (e.g., fully connected layers in neural networks, dynamic bias, parallelism, etc.). The paper covers the work of both academics and practitioners and surveys some promising results; the goal is a strong transparency/accuracy trade-off toward a reliable learning approach.
User Characteristics in Explainable AI: The Rabbit Hole of Personalization?
As Artificial Intelligence (AI) becomes ubiquitous, the need for Explainable AI (XAI) has become critical for transparency and trust among users. A significant challenge in XAI is catering to diverse users, such as data scientists, domain experts, and end-users. Recent research has started to investigate how users' characteristics impact interactions with and user experience of explanations, with a view to personalizing XAI. However, are we heading down a rabbit hole by focusing on unimportant details? Our research aimed to investigate how user characteristics are related to using, understanding, and trusting an AI system that provides explanations. Our empirical study with 149 participants who interacted with an XAI system that flagged inappropriate comments showed that very few user characteristics mattered; only age and the personality trait openness influenced actual understanding. Our work provides evidence to reorient user-focused XAI research and question the pursuit of personalized XAI based on fine-grained user characteristics.
Comment: 20 pages, 4 tables, 2 figures
An Exploratory Discussion on Electric Cars and Sustainable Innovation.
This study provides an exploratory discussion to reveal the authors' perspectives on previous academic discussions. In combination with the development of innovative environmentally friendly products, electric vehicles will continue to be an important research topic in the field of innovation. Currently, given the unprecedented challenge of COVID-19, humanity has been charged with the task of developing sustainable business strategies and promoting environmentally friendly business practices. The research surrounding electric vehicles, an important example of innovation, has been enriched by many academic discussions, but it remains important to critically evaluate the development concept of electric vehicles from the perspective of innovation novelty, to examine the factors that support the innovation, and to identify issues for future discussions and research. Accordingly, this exploratory study unravels the debate on innovation surrounding electric vehicles and proposes several key issues for future research. Electric vehicles are a new product characterised by two major features – innovation and sustainability – and their development is coupled with a growing interest in environmental issues. Based on the authors' observations, this study identifies the key factors that support the growth of the industry and presents arguments for reconciling the themes of research-and-development acceptability and sustainability. It is hoped that the key issues presented in this paper will serve as an effective guide for future research.
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
Artificial intelligence (AI) models are increasingly finding applications in the field of medicine. Concerns have been raised about the explainability of the decisions that are made by these AI models. In this article, we give a systematic analysis of explainable artificial intelligence (XAI), with a primary focus on models that are currently being used in the field of healthcare. The literature search is conducted following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) standards for relevant work published from 1 January 2012 to 2 February 2022. The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed. We investigate the why, how, and when of the uses of these XAI models and their implications. We present a comprehensive examination of XAI methodologies as well as an explanation of how a trustworthy AI can be derived from describing AI models for healthcare fields. The discussion of this work will contribute to the formalization of the XAI field.
Comment: 15 pages, 3 figures, accepted for publication in the IEEE Transactions on Artificial Intelligence
Securing mobile edge computing using hybrid deep learning method
In recent years, Mobile Edge Computing (MEC) has revolutionized the landscape of the telecommunication industry by offering low latency, high bandwidth, and real-time processing. With this advancement comes a broad range of security challenges, the most prominent of which is Distributed Denial of Service (DDoS) attacks, which threaten the availability and performance of MEC's services. In most cases, Intrusion Detection Systems (IDSs), security tools that monitor networks and systems for suspicious activity and notify administrators in real time of potential cyber threats, have relied on shallow Machine Learning (ML) models that are limited in their ability to identify and mitigate DDoS attacks. This article highlights the drawbacks of current IDS solutions, primarily their reliance on shallow ML techniques, and proposes a novel hybrid Autoencoder–Multi-Layer Perceptron (AE–MLP) model for intrusion detection as a solution against DDoS attacks in the MEC environment. The proposed hybrid AE–MLP model leverages the autoencoder's feature-extraction capabilities to capture intricate patterns and anomalies within network traffic data. This extracted representation is then fed into a Multi-Layer Perceptron (MLP) network, enabling deep learning techniques to further analyze and classify potential threats. By integrating both AE and MLP, the hybrid model achieves higher accuracy and robustness in identifying DDoS attacks while minimizing false positives. In extensive experiments on the recently released NF-UQ-NIDS-V2 dataset, which contains a wide range of DDoS attacks, the proposed hybrid AE–MLP model achieves a high accuracy of 99.98%. Based on these results, the hybrid approach performs better than several similar techniques.
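The architecture described above, an autoencoder whose encoder compresses flow features into a latent code that an MLP then classifies, can be sketched structurally as follows. All layer sizes and the random (untrained) weights are illustrative assumptions, not the article's trained model or the actual NF-UQ-NIDS-V2 feature set:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Illustrative layer sizes: 32 raw flow features, compressed by the AE
# encoder to an 8-dimensional code, then classified into {benign, DDoS}.
D_IN, D_HID, D_CODE, D_MLP, D_OUT = 32, 16, 8, 12, 2

# Untrained random weights: a structural sketch, not a trained model.
W_enc1 = rng.normal(0.0, 0.1, (D_IN, D_HID))
W_enc2 = rng.normal(0.0, 0.1, (D_HID, D_CODE))
W_mlp1 = rng.normal(0.0, 0.1, (D_CODE, D_MLP))
W_mlp2 = rng.normal(0.0, 0.1, (D_MLP, D_OUT))

def ae_mlp_forward(x):
    code = relu(relu(x @ W_enc1) @ W_enc2)   # AE encoder: feature extraction
    hidden = relu(code @ W_mlp1)             # MLP: analyze the latent code
    return softmax(hidden @ W_mlp2)          # per-class probabilities

batch = rng.normal(size=(4, D_IN))           # 4 synthetic flow records
probs = ae_mlp_forward(batch)
print(probs.shape)                           # one probability pair per record
```

In the actual pipeline, the autoencoder would first be trained to reconstruct traffic features (so the code captures the data's structure) before the MLP head is trained on labeled flows; the sketch only shows how the two components compose.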