Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives
Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are re-gaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is now more important than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify the research fields that deserve more attention, considering both the areas that already exploit logic-based approaches and those that are most likely to adopt them in the future.
Abductive Design of BDI Agent-based Digital Twins of Organizations
For a Digital Twin - a precise, virtual representation of a physical counterpart - of a human-like system to be faithful and complete, it must appeal to a notion of anthropomorphism (i.e., attributing human behaviour to non-human entities) to imitate (1) the externally visible behaviour and (2) the internal workings of that system. Although the Belief-Desire-Intention (BDI) paradigm was not developed for this purpose, it has been used successfully in human modelling applications. In this sense, we introduce in this thesis the notion of abductive design of BDI agent-based Digital Twins of organizations, which builds on two powerful reasoning disciplines: reverse engineering (to recreate the visible behaviour of the target system) and goal-driven eXplainable Artificial Intelligence (XAI) (to view the behaviour of the target system through the lens of BDI agents). More precisely, the overall problem we address in this thesis is to “find a BDI agent program that best explains (in the sense of formal abduction) the behaviour of a target system based on its past experiences”. To do so, we propose three goal-driven XAI techniques: (1) abductive design of BDI agents, (2) leveraging imperfect explanations, and (3) mining belief-based explanations. The resulting approach suggests that using goal-driven XAI to generate Digital Twins of organizations in the form of BDI agents can be effective, even in a setting with limited information about the target system’s behaviour.
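The abduction step described above can be sketched in miniature. The plan library, action names, and scoring function below are hypothetical illustrations, not the thesis's actual system: the idea is simply to pick the BDI plan whose action sequence best explains an observed behaviour trace.

```python
# Toy abduction sketch (hypothetical plan library, not the thesis's system):
# find the BDI plan whose action sequence best explains an observed trace.

def explanation_score(plan_actions, observed):
    """Fraction of observed actions explained, in order, by the plan."""
    i = 0
    matched = 0
    for act in observed:
        if i < len(plan_actions) and plan_actions[i] == act:
            matched += 1
            i += 1
    return matched / len(observed)

def abduce_plan(plan_library, observed):
    """Return the goal whose plan best covers the observed behaviour."""
    return max(plan_library,
               key=lambda g: explanation_score(plan_library[g], observed))

plan_library = {
    "serve_customer": ["greet", "take_order", "deliver"],
    "restock_shelf":  ["fetch_stock", "place_items"],
}
observed = ["greet", "take_order", "deliver"]
print(abduce_plan(plan_library, observed))  # serve_customer
```

In the thesis's terms, the returned goal plays the role of the abduced explanation; a realistic system would of course score partial, noisy, and interleaved traces rather than exact matches.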
Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations
Reinforcement Learning (RL) has shown promise in optimizing complex control and decision-making processes, but Deep Reinforcement Learning (DRL) lacks interpretability, limiting its adoption in regulated sectors like manufacturing, finance, and healthcare. DRL’s opaque decision-making hinders efficiency and resource use, and this issue is amplified with every advancement. While many seek to move from Experience Replay to A3C, the latter demands more resources. Despite efforts to improve Experience Replay selection strategies, there is a tendency to keep capacity high. This dissertation investigates training a Deep Convolutional Q-learning agent across 20 Atari games, a control task, a physics task, and a simulated addition task, while intentionally reducing Experience Replay capacity from 1×10^6 to 5×10^2. It was found that Experience Replay size can be reduced by over 40% for 18 of the 23 simulations tested, offering a practical path to resource-efficient DRL. To illuminate agent decisions and align them with game mechanics, a novel method is employed: visualizing Experience Replay via the Deep SHAP Explainer. This approach fosters comprehension and transparent, interpretable explanations, though any capacity reduction must be cautious to avoid overfitting. This study demonstrates the feasibility of reducing Experience Replay and advocates for transparent, interpretable decision explanations using the Deep SHAP Explainer to enhance resource efficiency in Experience Replay.
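The capacity reduction at the heart of the experiment can be illustrated with a minimal replay buffer. This is a generic sketch (the class and its methods are our own illustration, not the dissertation's code): a fixed-capacity FIFO buffer whose size can be cut from the usual 1×10^6 down to 5×10^2 transitions.

```python
# Minimal experience-replay sketch: fixed-capacity FIFO buffer of transitions.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Uniform minibatch for the Q-network update."""
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=500)   # reduced from the usual 1_000_000
for t in range(10_000):            # simulate many environment steps
    buf.push(t, 0, 0.0, t + 1, False)
print(len(buf))                    # 500: only the newest transitions are kept
```

The deque's `maxlen` gives the eviction behaviour for free; the dissertation's caution about overfitting corresponds to the buffer holding only very recent, correlated transitions at small capacities.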
Active Learning for Reducing Labeling Effort in Text Classification Tasks
Labeling data can be an expensive task as it is usually performed manually by
domain experts. This is cumbersome for deep learning, as it is dependent on
large labeled datasets. Active learning (AL) is a paradigm that aims to reduce
labeling effort by only using the data which the used model deems most
informative. Little research has been done on AL in a text classification
setting and next to none has involved the more recent, state-of-the-art Natural
Language Processing (NLP) models. Here, we present an empirical study that
compares different uncertainty-based algorithms with BERT as the used
classifier. We evaluate the algorithms on two NLP classification datasets:
Stanford Sentiment Treebank and KvK-Frontpages. Additionally, we explore
heuristics that aim to solve presupposed problems of uncertainty-based AL;
namely, that it is unscalable and that it is prone to selecting outliers.
Furthermore, we explore the influence of the query-pool size on the performance
of AL. Although the proposed heuristics for AL did not improve its performance, our results show that using uncertainty-based AL with BERT outperforms random sampling of data. This difference in performance can decrease as the query-pool size gets larger.

Comment: Accepted as a conference paper at the joint 33rd Benelux Conference on Artificial Intelligence and the 30th Belgian Dutch Conference on Machine Learning (BNAIC/BENELEARN 2021). This camera-ready version submitted to BNAIC/BENELEARN adds several improvements, including a more thorough discussion of related work plus an extended discussion section. 28 pages including references and appendices.
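Uncertainty-based query selection of the kind compared in the paper can be sketched as follows. The least-confidence criterion shown is one standard choice, and the tiny logit pool is a made-up stand-in for the outputs a classifier such as BERT would produce.

```python
# Sketch of uncertainty-based active-learning query selection (least-confidence).
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def least_confidence(probs):
    """Uncertainty = 1 - probability of the most likely class."""
    return 1.0 - max(probs)

def select_queries(pool_logits, k):
    """Pick the k pool indices the model is least confident about."""
    scores = [least_confidence(softmax(l)) for l in pool_logits]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

pool = [[4.0, 0.1], [0.2, 0.3], [2.0, 0.5]]  # per-example class logits
print(select_queries(pool, 2))               # [1, 2]: the two most uncertain
```

Only the selected examples would then be sent to human annotators, which is exactly the labeling-effort saving AL targets; the scalability and outlier concerns the paper discusses arise because scoring the whole pool requires a forward pass per example and the most uncertain points can be atypical ones.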
Context-Aware Composition of Agent Policies by Markov Decision Process Entity Embeddings and Agent Ensembles
Computational agents support humans in many areas of life and are therefore
found in heterogeneous contexts. This means they operate in rapidly changing
environments and can be confronted with huge state and action spaces. In order
to perform services and carry out activities in a goal-oriented manner, agents
require prior knowledge and therefore have to develop and pursue
context-dependent policies. However, prescribing policies in advance is limited
and inflexible, especially in dynamically changing environments. Moreover, the
context of an agent determines its choice of actions. Since the environments
can be stochastic and complex in terms of the number of states and feasible
actions, activities are usually modelled in a simplified way by Markov decision
processes so that, e.g., agents with reinforcement learning are able to learn
policies that help to capture the context and act accordingly to optimally
perform activities. However, training policies for all possible contexts using
reinforcement learning is time-consuming. A requirement and challenge for
agents is to learn strategies quickly and respond immediately in cross-context
environments and applications, e.g., the Internet, service robotics,
cyber-physical systems. In this work, we propose a novel simulation-based
approach that enables a) the representation of heterogeneous contexts through
knowledge graphs and entity embeddings and b) the context-aware composition of
policies on demand by ensembles of agents running in parallel. The evaluation we conducted with the "Virtual Home" dataset indicates that agents that need to switch seamlessly between different contexts can request on-demand composed policies that lead to the successful completion of context-appropriate activities, without having to learn these policies in lengthy training steps and episodes, in contrast to agents that use reinforcement learning.

Comment: 30 pages, 11 figures, 9 tables, 3 listings. Re-submitted to the Semantic Web Journal; currently under review.
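A minimal sketch of the context-aware composition idea, under assumed names and toy two-dimensional context embeddings (the paper's actual pipeline uses knowledge graphs and trained MDP entity embeddings): each parallel agent contributes its action preferences, weighted by the similarity of its context embedding to the current context.

```python
# Toy on-demand policy composition: blend per-context agent policies,
# weighted by cosine similarity of context embeddings (hand-set, not trained).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def compose_policy(context_embeddings, policies, query):
    """Blend action preferences of parallel agents by context similarity."""
    weights = {c: max(cosine(e, query), 0.0) for c, e in context_embeddings.items()}
    total = sum(weights.values())
    blended = {}
    for c, policy in policies.items():
        for action, pref in policy.items():
            blended[action] = blended.get(action, 0.0) + weights[c] / total * pref
    return max(blended, key=blended.get)

context_embeddings = {"kitchen": [1.0, 0.0], "office": [0.0, 1.0]}
policies = {
    "kitchen": {"open_fridge": 0.9, "type": 0.1},
    "office":  {"open_fridge": 0.1, "type": 0.9},
}
print(compose_policy(context_embeddings, policies, query=[0.9, 0.1]))  # open_fridge
```

A query embedding near the "kitchen" context yields the kitchen agent's preferred action without any further training episodes, which is the on-demand behaviour the abstract describes.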
Using Multi-Relational Embeddings as Knowledge Graph Representations for Robotics Applications
User demonstrations of robot tasks in everyday environments, such as households, can be brittle due in part to the dynamic, diverse, and complex properties of those environments. Humans can find solutions in ambiguous or unfamiliar situations by using a wealth of common-sense knowledge about their domains to make informed generalizations; for example, they can infer likely locations for food in a novel household. Prior work has shown that robots can benefit from reasoning about this type of semantic knowledge, which can be modeled as a knowledge graph of interrelated facts that define whether a relationship exists between two entities. Semantic reasoning about domain knowledge using knowledge graph representations has improved the robustness and usability of end-user robots by enabling more fault-tolerant task execution. Knowledge graph representations define the underlying representation of facts and how facts are organized, and implement semantic reasoning by defining the possible computations over facts (e.g., association, fact-prediction).
This thesis examines the use of multi-relational embeddings as knowledge graph representations within the context of robust task execution and develops methods to explain the inferences of and sequentially train multi-relational embeddings. This thesis contributes: (i) a survey of knowledge graph representations that model semantic domain knowledge in robotics, (ii) the development and evaluation of our knowledge graph representation based on multi-relational embeddings, (iii) the integration of our knowledge graph representation into a robot architecture to improve robust task execution, (iv) the development and evaluation of methods to sequentially update multi-relational embeddings, and (v) the development and evaluation of an inference reconciliation framework for multi-relational embeddings. (Ph.D. thesis)
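The fact-prediction computation mentioned above can be sketched with a TransE-style scoring function, a common choice for multi-relational embeddings (the toy vectors below are hand-set, not trained): a triple (h, r, t) is plausible when vec(h) + vec(r) ≈ vec(t).

```python
# Toy multi-relational embedding sketch, TransE-style scoring.
import math

entities = {
    "apple":  [1.0, 0.0],
    "fridge": [1.0, 1.0],
    "desk":   [0.0, 0.5],
}
relations = {"located_in": [0.0, 1.0]}

def score(h, r, t):
    """Negative L2 distance of h + r from t: higher = more plausible fact."""
    hv, rv, tv = entities[h], relations[r], entities[t]
    return -math.sqrt(sum((a + b - c) ** 2 for a, b, c in zip(hv, rv, tv)))

def predict_tail(h, r):
    """Fact prediction: rank candidate tails for an incomplete triple (h, r, ?)."""
    return max(entities, key=lambda t: score(h, r, t))

print(predict_tail("apple", "located_in"))  # fridge
```

This is the sense in which such a representation lets a robot generalize, e.g., inferring a likely location for food it has never seen in this particular household; the thesis's sequential-update and inference-reconciliation contributions then concern maintaining and explaining embeddings like these.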