A Taxonomy of Explainable Bayesian Networks
Artificial Intelligence (AI), and in particular, the explainability thereof,
has gained phenomenal attention over the last few years. Whilst we usually do
not question the decision-making process of these systems in situations where
only the outcome is of interest, we do however pay close attention when these
systems are applied in areas where the decisions directly influence the lives
of humans. Noisy and uncertain observations close to the decision boundary, in
particular, result in predictions that cannot readily be explained and may
foster mistrust among end-users. This has drawn attention to AI
methods for which the outcomes can be explained. Bayesian networks are
probabilistic graphical models that can be used as a tool to manage
uncertainty. The probabilistic framework of a Bayesian network allows for
explainability in the model, reasoning and evidence. The use of these methods
is mostly ad hoc and not as well organised as explainability methods in the
wider AI research field. As such, we introduce a taxonomy of explainability in
Bayesian networks. We extend the existing categorisation of explainability in
the model, reasoning or evidence to include explanation of decisions. The
explanations obtained from the explainability methods are illustrated by means
of a simple medical diagnostic scenario. The taxonomy introduced in this paper
has the potential not only to help end-users communicate the outcomes obtained
efficiently, but also to support their understanding of how and, more
importantly, why certain predictions were made.
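To make the kind of explanation the taxonomy targets concrete, the following is a minimal sketch (not taken from the paper) of a two-node medical-diagnosis network with illustrative, made-up probabilities, queried by direct enumeration.

# Minimal sketch (not from the paper): a two-node network Disease -> Test
# with illustrative probabilities, showing how evidence shifts the posterior.
P_disease = {True: 0.01, False: 0.99}              # prior P(Disease)
P_test_given = {                                   # CPT P(Test | Disease)
    True:  {True: 0.95, False: 0.05},              # test result given disease present
    False: {True: 0.10, False: 0.90},              # test result given disease absent
}

def posterior_disease(test_result):
    """P(Disease = true | Test = test_result) via Bayes' rule."""
    joint = {d: P_disease[d] * P_test_given[d][test_result]
             for d in (True, False)}
    return joint[True] / sum(joint.values())

print(f"P(disease | positive test) = {posterior_disease(True):.3f}")   # ~0.088
print(f"P(disease | negative test) = {posterior_disease(False):.4f}")  # ~0.0006

The joint terms computed inside posterior_disease are the kind of quantities an explanation of evidence and reasoning can surface to an end-user.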
Defining and Detecting Toxicity on Social Media: Context and Knowledge are Key
As online platforms have become increasingly prominent for communication, toxic behaviors such as cyberbullying and harassment have been rampant over the last decade. At the same time, online toxicity is multi-dimensional and sensitive in nature, which makes its detection challenging. Because exposure to online toxicity can have serious implications for individuals and communities, reliable models and algorithms are required for detecting and understanding such communications. In this paper, we define toxicity, drawing on social theories, to provide a foundation. We then present an approach that identifies multiple dimensions of toxicity and incorporates explicit knowledge into a statistical learning algorithm to resolve ambiguity across these dimensions.
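As a rough, hypothetical illustration of what incorporating explicit knowledge into a statistical learner can look like (not the authors' system), the sketch below appends a single lexicon-derived feature, from a made-up harassment lexicon, to bag-of-words features before fitting a standard scikit-learn classifier.

# Illustrative sketch only: explicit knowledge (a tiny hypothetical lexicon)
# folded into a statistical classifier as an extra feature column.
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

HARASSMENT_LEXICON = {"idiot", "loser", "shut"}    # hypothetical knowledge source

def lexicon_features(texts):
    # One knowledge-derived feature per text: count of lexicon hits.
    return csr_matrix([[float(sum(w in HARASSMENT_LEXICON for w in t.lower().split()))]
                       for t in texts])

texts = ["you are such a loser", "great point, thanks for sharing",
         "shut up you idiot", "see you at the meeting tomorrow"]
labels = [1, 0, 1, 0]                              # toy toxic / non-toxic labels

vec = TfidfVectorizer()
X = hstack([vec.fit_transform(texts), lexicon_features(texts)])
clf = LogisticRegression().fit(X, labels)

new = ["what an idiot"]
X_new = hstack([vec.transform(new), lexicon_features(new)])
print(clf.predict_proba(X_new)[0, 1])              # probability of the toxic class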
Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions
The development of theory, frameworks and tools for Explainable AI (XAI) is a very active area of research these days, and articulating any kind of coherence on a vision and challenges is itself a challenge. At least two sometimes complementary and sometimes colliding threads have emerged. The first focuses on the development of pragmatic tools for increasing the transparency of automatically learned prediction models, as produced, for instance, by deep or reinforcement learning. The second is aimed at anticipating the negative impact of opaque models, with the desire to regulate or control the impactful consequences of incorrect predictions, especially in sensitive areas like medicine and law. The formulation of methods to augment the construction of predictive models with domain knowledge can provide support for producing human-understandable explanations for predictions. This runs in parallel with AI regulatory concerns, like the European Union General Data Protection Regulation, which sets standards for the production of explanations from automated or semi-automated decision making. While all this research activity reflects a growing acknowledgement that explainability is essential, it is important to recall that it is also among the oldest fields of computer science. In fact, early AI was retraceable, interpretable, and thus understandable by and explainable to humans. The goal of this research is to articulate the big-picture ideas and their role in advancing the development of XAI systems, to acknowledge their historical roots, and to emphasise the biggest challenges to moving forward.
The Privacy Pillar -- A Conceptual Framework for Foundation Model-based Systems
AI and its related technologies, including machine learning, deep learning,
chatbots, and virtual assistants, are profoundly transforming development and
organizational processes within companies.
Foundation models present both significant challenges and incredible
opportunities. In this context, ensuring the quality attributes of foundation
model-based systems is of paramount importance, with a particular focus on
the challenging issue of privacy, given the sensitive nature of the data and
information involved. However, there is currently a lack of consensus regarding
the comprehensive scope of both technical and non-technical issues that the
privacy evaluation process should encompass. Additionally, there is uncertainty
about which existing methods are best suited to effectively address these
privacy concerns. In response to this challenge, this paper introduces a novel
conceptual framework that integrates various responsible AI patterns from
multiple perspectives, with the specific aim of safeguarding privacy.
Wider Vision: Enriching Convolutional Neural Networks via Alignment to External Knowledge Bases
Deep learning models suffer from opaqueness. For Convolutional Neural
Networks (CNNs), current research strategies for explaining models focus on the
target classes within the associated training dataset. As a result, the
understanding of hidden feature map activations is limited by the
discriminative knowledge gleaned during training. The aim of our work is to
explain and expand CNN models via the mirroring or alignment of a CNN to an
external knowledge base. This allows us to give a semantic context or label
for each visual feature. We can match CNN feature activations to nodes in our
external knowledge base. This supports knowledge-based interpretation of the
features associated with model decisions. To demonstrate our approach, we build
two separate graphs. We use an entity alignment method to align the feature
nodes in a CNN with the nodes in a ConceptNet based knowledge graph. We then
measure the proximity of CNN graph nodes to semantically meaningful knowledge
base nodes. Our results show that in the aligned embedding space, nodes from
the knowledge graph are close to the CNN feature nodes that have similar
meanings, indicating that nodes from an external knowledge base can act as
explanatory semantic references for features in the model. We analyse a variety
of graph building methods in order to improve the results from our embedding
space. We further demonstrate that by using hierarchical relationships from our
external knowledge base, we can locate new unseen classes outside the CNN
training set in our embeddings space, based on visual feature activations. This
suggests that we can adapt our approach to identify unseen classes based on CNN
feature activations. Our demonstrated approach of aligning a CNN with an
external knowledge base paves the way to reason about and beyond the trained
model, with future adaptations to explainable models and zero-shot learning.
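As a simplified, hypothetical sketch of the proximity step (not the paper's actual pipeline), once CNN feature nodes and knowledge-graph nodes share an aligned embedding space, each feature can be labelled with its nearest knowledge-base concept by cosine similarity; the embeddings and concept labels below are stand-ins.

# Simplified sketch: label CNN features with their nearest knowledge-graph
# concept in a shared embedding space (stand-in random embeddings).
import numpy as np

rng = np.random.default_rng(0)
cnn_feature_emb = rng.normal(size=(4, 16))         # stand-in aligned CNN feature embeddings
kg_node_emb = rng.normal(size=(6, 16))             # stand-in knowledge-graph node embeddings
kg_labels = ["dog", "wheel", "fur", "window", "leaf", "stripe"]   # hypothetical concepts

def nearest_concepts(features, concepts, labels):
    # Normalise both sets and take the most cosine-similar concept per feature.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = concepts / np.linalg.norm(concepts, axis=1, keepdims=True)
    sims = f @ c.T
    return [(labels[j], sims[i, j]) for i, j in enumerate(sims.argmax(axis=1))]

for i, (label, score) in enumerate(nearest_concepts(cnn_feature_emb, kg_node_emb, kg_labels)):
    print(f"CNN feature {i} -> '{label}' (cosine {score:.2f})")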
Bias in knowledge graphs - An empirical study with movie recommendation and different language editions of DBpedia
Public knowledge graphs such as DBpedia and Wikidata have been recognized as interesting sources of background knowledge to build content-based recommender systems. They can be used to add information about the items to be recommended and links between them. While quite a few approaches for exploiting knowledge graphs have been proposed, most of them aim at optimizing the recommendation strategy while using a fixed knowledge graph. In this paper, we take a different approach, i.e., we fix the recommendation strategy and observe changes when using different underlying knowledge graphs. In particular, we use different language editions of DBpedia. We show that the usage of different knowledge graphs not only leads to differently biased recommender systems, but also to recommender systems that differ in performance for particular fields of recommendation.
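A toy, hand-made illustration of the experimental idea (not the paper's data or method details): hold a simple content-based strategy fixed, here nearest neighbour by Jaccard similarity over item feature sets, and swap the knowledge graph the features come from; the feature sets below are invented stand-ins for two DBpedia language editions.

# Toy sketch: fixed recommendation strategy, swappable knowledge graph.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def most_similar(target, catalogue):
    # Fixed strategy: nearest neighbour by Jaccard similarity of feature sets.
    return max((m for m in catalogue if m != target),
               key=lambda m: jaccard(catalogue[target], catalogue[m]))

# The same three items, with feature sets drawn from two invented graph editions.
features_edition_1 = {"Movie A": {"sci-fi", "space", "us-director"},
                      "Movie B": {"sci-fi", "space", "sequel"},
                      "Movie C": {"drama", "us-director"}}
features_edition_2 = {"Movie A": {"sci-fi", "us-director"},
                      "Movie B": {"sci-fi", "space", "sequel"},
                      "Movie C": {"drama", "us-director"}}

print(most_similar("Movie A", features_edition_1))   # Movie B under edition 1
print(most_similar("Movie A", features_edition_2))   # Movie C under edition 2

Swapping only the feature source changes the nearest neighbour, which is the kind of knowledge-graph-induced difference the paper studies at scale.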