64 research outputs found
Explainable Artificial Intelligence (XAI) towards Model Personality in NLP task
In recent years, deep learning for Natural Language Processing, especially sentiment analysis, has achieved significant progress and success. This is due to the availability of large amounts of text data and the ability of deep learning techniques to produce sophisticated predictions from diverse data features. However, sophisticated predictions that are not accompanied by sufficient information about what is happening inside the model are a major setback. Therefore, the development of deep learning models must be accompanied by the development of XAI methods, which help explain what drives a model to its predictions. The present research proposes a simple Bidirectional LSTM and a complex Bi-GRU-LSTM-CNN model for sentiment analysis. Both models were further analyzed with three XAI methods (LIME, SHAP, and Anchor), showing that XAI is not limited to explaining what happens inside a model but can also help us understand and distinguish models' personality and behaviour.
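The core mechanism behind LIME, one of the three XAI methods the abstract compares, can be sketched in a few lines: perturb the input by masking tokens, query the model on each perturbation, and fit a linear surrogate whose coefficients serve as per-token importances. The sentiment "model" below is a hypothetical keyword scorer standing in for the paper's Bi-LSTM and Bi-GRU-LSTM-CNN models; everything here is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for a trained sentiment model: scores a sentence
# by keyword weights. A real study would call its Bi-LSTM / Bi-GRU-LSTM-CNN
# predictor here instead.
WEIGHTS = {"great": 1.0, "love": 0.8, "terrible": -1.0, "boring": -0.6}

def predict(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def lime_style_importance(tokens, n_samples=500, seed=0):
    """Estimate per-token importance by randomly masking tokens and
    fitting a least-squares linear surrogate (the core LIME idea)."""
    rng = np.random.default_rng(seed)
    # Each row is a binary mask: 1 keeps the token, 0 drops it.
    masks = rng.integers(0, 2, size=(n_samples, len(tokens)))
    scores = np.array([predict([t for t, m in zip(tokens, row) if m])
                       for row in masks])
    # Linear surrogate: scores ~ masks @ coef + intercept
    X = np.hstack([masks, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return dict(zip(tokens, coef[:-1]))

tokens = "the movie was great but slightly boring".split()
imp = lime_style_importance(tokens)
```

Comparing such importance maps across two models on the same inputs is one concrete way to surface the differing "personality" the abstract describes: two models with similar accuracy can attribute their decisions to very different tokens.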
CycleGAN for Interpretable Online EMT Compensation
Purpose: Electromagnetic Tracking (EMT) can partially replace X-ray guidance
in minimally invasive procedures, reducing radiation in the OR. However, in
this hybrid setting, EMT is disturbed by metallic distortion caused by the
X-ray device. We aim to make hybrid navigation a clinical reality and reduce
radiation exposure for patients and surgeons by compensating for EMT error.
Methods: Our online compensation strategy exploits cycle-consistent
generative adversarial neural networks (CycleGAN). 3D positions are translated
from various bedside environments to their bench equivalents. Domain-translated
points are fine-tuned to reduce error in the bench domain. We evaluate our
compensation approach in a phantom experiment.
Results: Since the domain-translation approach maps distorted points to their
lab equivalents, predictions are consistent among different C-arm environments.
Error is successfully reduced in all evaluation environments. Our qualitative
phantom experiment demonstrates that our approach generalizes well to an unseen
C-arm environment.
Conclusion: Adversarial, cycle-consistent training is an explicable,
consistent and thus interpretable approach for online error compensation.
Qualitative assessment of EMT error compensation offers a glimpse of the
potential of our method for rotational error compensation.
Comment: Conditionally accepted for publication in IJCARS & presentation at
IPCA
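The cycle-consistency principle behind the approach can be illustrated without any neural network: a forward map G translates distorted bedside positions to the bench domain, an inverse map F translates back, and composing the two must reproduce the input. The affine distortion, offsets, and point cloud below are all synthetic assumptions for illustration; the paper learns G and F adversarially rather than inverting a known map.

```python
import numpy as np

# Synthetic affine distortion standing in for metallic C-arm interference
# (assumption: the real field distortion is nonlinear and learned, not known).
A = np.array([[1.05, 0.02, 0.00],
              [0.01, 0.98, 0.03],
              [0.00, 0.02, 1.02]])
b = np.array([2.0, -1.0, 0.5])          # offset in mm

def distort(p):
    """Bench -> bedside: what the disturbed environment does to positions."""
    return p @ A.T + b

def G(p):
    """Bedside -> bench: the compensation map (here the exact inverse;
    in the paper, a learned generator)."""
    return (p - b) @ np.linalg.inv(A).T

def F(p):
    """Bench -> bedside: the inverse generator closing the cycle."""
    return distort(p)

rng = np.random.default_rng(1)
bench_pts = rng.uniform(-100, 100, size=(50, 3))   # 3D positions in mm
bedside_pts = distort(bench_pts)

comp_error = np.abs(G(bedside_pts) - bench_pts).max()     # compensation quality
cycle_error = np.abs(F(G(bedside_pts)) - bedside_pts).max()  # cycle consistency
```

The cycle error is what CycleGAN training penalizes: it can be computed without paired bench/bedside correspondences, which is exactly why the approach transfers across different C-arm environments.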
Explainable Artificial Intelligence and Machine Learning: A reality rooted perspective
We are used to the availability of big data generated in nearly all fields of
science as a consequence of technological progress. However, the analysis of
such data poses vast challenges. One of these relates to the explainability
of artificial intelligence (AI) or machine learning methods. Currently, many
such methods are non-transparent with respect to their working mechanism and
for this reason are called black box models, most notably deep learning
methods. However, it has been realized that this constitutes severe problems
for a number of fields including the health sciences and criminal justice and
arguments have been brought forward in favor of an explainable AI. In this
paper, we do not adopt the usual perspective of presenting explainable AI as it
should be; rather, we discuss what explainable AI can be. The difference is
that we do not present wishful thinking but reality-grounded properties in
relation to a scientific theory beyond physics.
Towards Explainable and Trustworthy AI for Decision Support in Medicine: An Overview of Methods and Good Practices
Artificial Intelligence (AI) is defined as intelligence exhibited by machines, such as electronic computers. It can involve reasoning, problem solving, learning, and knowledge representation, which are the main areas of focus in the medical domain. Other forms of intelligence, including autonomous behavior, are also part of AI. Data-driven methods for decision support have been employed in the medical domain for some time. Machine learning (ML) is used for a wide range of complex tasks across many sectors of industry. However, a broader spectrum of AI, including deep learning (DL) and autonomous agents, has recently gained more attention and raised expectations for solving numerous problems in the medical domain. A barrier to AI adoption, or rather a concern, is trust in AI, which is often undermined by a lack of understanding of a black-box model's function, or by a lack of credibility in the reporting of results. Explainability and interpretability are prerequisites for the development of AI-based systems that are lawful, ethical and robust. In this respect, this paper presents an overview of concepts, best practices, and success stories, and opens the discussion for multidisciplinary work towards establishing trustworthy AI.
A Robust and Explainable Data-Driven Anomaly Detection Approach For Power Electronics
Timely and accurate detection of anomalies in power electronics is becoming
increasingly critical for maintaining complex production systems. Robust and
explainable strategies help decrease system downtime and preempt or mitigate
infrastructure cyberattacks. This work begins by explaining the types of
uncertainty present in current datasets and machine learning algorithm outputs.
Three techniques for combating these uncertainties are then introduced and
analyzed. We further present two anomaly detection and classification
approaches, namely the Matrix Profile algorithm and anomaly transformer, which
are applied in the context of a power electronic converter dataset.
Specifically, the Matrix Profile algorithm is shown to be well suited as a
generalizable approach for detecting real-time anomalies in streaming
time-series data. The STUMPY Python library implementation of the iterative
Matrix Profile is used for the creation of the detector. A series of custom
filters is created and added to the detector to tune its sensitivity, recall,
and detection accuracy. Our numerical results show that, with simple parameter
tuning, the detector provides high accuracy and performance in a variety of
fault scenarios.
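The Matrix Profile idea the abstract relies on is simple to state: for every length-m subsequence of a time series, record the z-normalized distance to its nearest non-trivial match; subsequences with no good match anywhere (high profile values, called discords) are anomalies. The brute-force sketch below illustrates this on a synthetic converter-like signal with one injected fault; in practice one would use STUMPY's optimized implementations (stumpy.stump, or stumpy.stumpi for streaming data) rather than this quadratic loop, and the signal and fault here are invented for illustration.

```python
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / (s if s > 0 else 1.0)

def matrix_profile(ts, m):
    """Brute-force self-join matrix profile: for each length-m subsequence,
    the z-normalized Euclidean distance to its nearest non-trivial match.
    (STUMPY computes the same quantity far more efficiently.)"""
    n = len(ts) - m + 1
    subs = np.array([znorm(ts[i:i + m]) for i in range(n)])
    mp = np.full(n, np.inf)
    excl = m // 2                      # exclusion zone: skip trivial matches
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf
        mp[i] = d.min()
    return mp

# Synthetic converter-like signal with one injected fault transient.
t = np.linspace(0, 20 * np.pi, 1000)
ts = np.sin(t)
ts[600:620] += 2.0                     # anomalous level shift

mp = matrix_profile(ts, m=50)
anomaly_idx = int(mp.argmax())         # discord: subsequence with no match
```

A streaming detector of the kind the abstract describes would threshold the incrementally updated profile instead of taking a single argmax; the custom filters mentioned there would then act on these profile values to tune sensitivity and recall.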
- …