Discovering logical knowledge in non-symbolic domains
Deep learning and symbolic artificial intelligence remain the two main paradigms in Artificial Intelligence (AI), each with its own strengths and weaknesses. To exhibit general intelligence and solve complex problems in real-world scenarios, artificial agents should integrate both of these aspects of AI, much as humans use both the analytical left side and the intuitive right side of their brain. However, one of the main obstacles hindering this integration is the Symbol Grounding Problem [144]: the capacity to map physical-world observations to a set of symbols. In this thesis, we combine symbolic reasoning and deep learning in order to better represent and reason with abstract knowledge. In particular, we focus on solving non-symbolic-state Reinforcement Learning (RL) environments using a symbolic logical domain. We consider different configurations: (i) no knowledge of either the symbol grounding function or the symbolic logical domain, (ii) no knowledge of the symbol grounding function and prior knowledge of the domain, (iii) imperfect knowledge of the symbol grounding function and no knowledge of the domain. We develop algorithms and neural network architectures general enough to be applied to different kinds of environments, which we test on both continuous-state control problems and image-based environments. Specifically, we develop two kinds of architectures: one for Markovian RL tasks and one for non-Markovian RL domains. The first is based on model-based RL and representation learning, and is inspired by the substantial prior work on state abstraction for RL [115]; it extracts a symbolic STRIPS-like abstraction for control problems.
For the second approach, we explore connections between recurrent neural networks and finite state machines, and we define Visual Reward Machines, an extension to non-symbolic domains of Reward Machines [27], a popular approach to non-Markovian RL tasks.
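Reward Machines, as described in the literature, are finite-state machines whose transitions are labeled by propositional events and emit rewards, which lets them capture non-Markovian tasks such as "first the key, then the door". A minimal sketch, with hypothetical machine states (`u0`, `u1`, `u2`) and events (`key`, `door`):

```python
# Minimal sketch of a Reward Machine: a finite-state machine whose
# transitions fire on propositional events and emit rewards.
# States and events here are illustrative, not from any specific paper.

class RewardMachine:
    def __init__(self, transitions, initial_state):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.initial_state = initial_state
        self.u = initial_state

    def step(self, event):
        """Advance on an observed event; unknown events self-loop with 0 reward."""
        u_next, reward = self.transitions.get((self.u, event), (self.u, 0.0))
        self.u = u_next
        return reward

    def reset(self):
        self.u = self.initial_state

# Non-Markovian task: reward only when the door is reached AFTER the key.
rm = RewardMachine(
    transitions={
        ("u0", "key"): ("u1", 0.0),   # picking up the key advances the machine
        ("u1", "door"): ("u2", 1.0),  # reaching the door after the key pays off
    },
    initial_state="u0",
)

rewards = [rm.step(e) for e in ["door", "key", "door"]]
print(rewards)  # reaching the door before the key yields nothing
```

The same reward function cannot be written over raw environment states alone, which is what makes the task non-Markovian; the machine state `u` carries the needed history.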
CBR and MBR techniques: review for an application in the emergencies domain
The purpose of this document is to provide an in-depth analysis of current reasoning-engine practice and of the integration strategies of Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system.
RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to:
a. Provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions;
b. Enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location.
In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations.
This document is part of a deliverable for the RIMSAT project and, although it has been written in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience
Physical symbol systems are needed for open-ended cognition. A good way to
understand physical symbol systems is by comparison of thought to chemistry.
Both have systematicity, productivity and compositionality. The state of the
art in cognitive architectures for open-ended cognition is critically assessed.
I conclude that a cognitive architecture that evolves symbol structures in the
brain is a promising candidate to explain open-ended cognition. Part 2 of the
paper presents such a cognitive architecture. Comment: Darwinian Neurodynamics. Submitted as a
two-part paper to Living Machines 2013, Natural History Museum, London.
Before and Beyond Representation: Towards an enactive conception of the Palaeolithic image
For most archaeologists, the meaning of prehistoric art appears to be grounded upon, if not synonymous with, the notion of representation and symbolism. This paper explores the possibility that the depictions we see already 30,000 years before present, for instance at the caves of Chauvet and Lascaux, before and beyond representing the world, first bring forth a new process of acting within this world and, at the same time, of thinking about it. It is argued that the unique ability of those early depictions to disrupt or question the ways the world is experienced under normal conditions makes it possible for the visual apparatus to interrogate itself and thus acquire a sense of perceptual awareness not previously available.
Discovering Predictive Relational Object Symbols with Symbolic Attentive Layers
In this paper, we propose and realize a new deep learning architecture for
discovering symbolic representations for objects and their relations based on
the self-supervised continuous interaction of a manipulator robot with multiple
objects in a tabletop environment. The key feature of the model is that it can
handle a changing number of objects naturally and map the object-object
relations into the symbolic domain explicitly. In the model, we employ a
self-attention layer that computes discrete attention weights from object
features, which are treated as relational symbols between objects. These
relational symbols are then used to aggregate the learned object symbols and
predict the effects of executed actions on each object. The result is a
pipeline that allows the formation of object symbols and relational symbols
from a dataset of object features, actions, and effects in an end-to-end
manner. We compare the performance of our proposed architecture with
state-of-the-art symbol discovery methods in a simulated tabletop environment
where the robot needs to discover symbols related to the relative positions of
objects to predict the observed effect successfully. Our experiments show that
the proposed architecture performs better than other baselines in effect
prediction while forming not only object symbols but also relational symbols.
Furthermore, we analyze the learned symbols and relational patterns between
objects to learn about how the model interprets the environment. Our analysis
shows that the learned symbols relate to the relative positions of objects,
object types, and their horizontal alignment on the table, which reflect the
regularities in the environment. Comment: arXiv admin note: text overlap with arXiv:2208.0102
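The mechanism described above, discretizing self-attention weights and reading them as relational symbols between objects, can be illustrated with a hedged, library-free sketch. The feature vectors, the dot-product scoring, and the 0.4 threshold are illustrative choices, not the paper's exact model:

```python
import math

# Hedged sketch: self-attention whose weights are thresholded into binary
# relational symbols, which then gate the aggregation of object features.
# All vectors and the threshold below are hypothetical illustrations.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def symbolic_attention(features, threshold=0.4):
    """Return hard relations r[i][j] in {0, 1} and relation-gated aggregates."""
    n = len(features)
    relations = [[0] * n for _ in range(n)]
    aggregated = []
    for i in range(n):
        # Softmax over pairwise similarities (query = key = raw feature here).
        scores = [dot(features[i], features[j]) for j in range(n)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Discretize: a weight above the threshold becomes a relational symbol.
        for j in range(n):
            if weights[j] >= threshold:
                relations[i][j] = 1
        # Aggregate object features through the *discrete* relations.
        dim = len(features[i])
        agg = [sum(relations[i][j] * features[j][k] for j in range(n))
               for k in range(dim)]
        aggregated.append(agg)
    return relations, aggregated

objs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # two similar objects, one distinct
relations, agg = symbolic_attention(objs)
print(relations)
```

In a trained model the discretization would be learned end-to-end (e.g. with a straight-through estimator) rather than fixed by a threshold; the sketch only shows how hard attention weights can double as explicit object-object symbols.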
Neurosymbolic Reinforcement Learning and Planning: A Survey
The area of Neurosymbolic Artificial Intelligence (Neurosymbolic AI) is
rapidly developing and has become a popular research topic, encompassing
sub-fields such as Neurosymbolic Deep Learning (Neurosymbolic DL) and
Neurosymbolic Reinforcement Learning (Neurosymbolic RL). Compared to
traditional learning methods, Neurosymbolic AI offers significant advantages by
simplifying complexity and providing transparency and explainability.
Reinforcement Learning (RL), a long-standing Artificial Intelligence (AI) concept
that mimics human behavior using rewards and punishment, is a fundamental
component of Neurosymbolic RL, a recent integration of the two fields that has
yielded promising results. The aim of this paper is to contribute to the
emerging field of Neurosymbolic RL by conducting a literature survey. Our
evaluation focuses on the three components that constitute Neurosymbolic RL:
neural, symbolic, and RL. We categorize works based on the role played by the
neural and symbolic parts in RL into three taxonomies: Learning for Reasoning,
Reasoning for Learning, and Learning-Reasoning. These categories are further
divided into sub-categories based on their applications. Furthermore, we
analyze the RL components of each research work, including the state space,
action space, policy module, and RL algorithm. Additionally, we identify
research opportunities and challenges in various applications within this
dynamic field. Comment: 16 pages, 9 figures, IEEE Transactions on Artificial Intelligence
Integrating Symbolic and Neural Processing in a Self-Organizing Architecture for Pattern Recognition and Prediction
British Petroleum (89A-1204); Defense Advanced Research Projects Agency (N00014-92-J-4015); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (F49620-92-J-0225)
Creativity and the Brain
A neurocognitive approach to higher cognitive functions, bridging the gap between the psychological and neural levels of description, is introduced. Relevant facts about the brain, working memory, and the representation of symbols in the brain are summarized. Putative brain processes responsible for problem solving, intuition, skill learning, and automatization are described. The role of the non-dominant brain hemisphere in solving problems requiring insight is conjectured. Two factors seem to be essential for creativity: imagination constrained by experience, and filtering that selects the most interesting solutions. Experiments with paired-word association are analyzed in detail and evidence for stochastic resonance effects is found. Brain activity in the process of inventing novel words is proposed as the simplest way to understand creativity using experimental and computational means. Perspectives on computational models of creativity are discussed.
Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis
Open-ended learning benefits immensely from the use of symbolic methods for
goal representation as they offer ways to structure knowledge for efficient and
transferable learning. However, the existing Hierarchical Reinforcement
Learning (HRL) approaches relying on symbolic reasoning are often limited as
they require a manual goal representation. The challenge in autonomously
discovering a symbolic goal representation is that it must preserve critical
information, such as the environment dynamics. In this paper, we propose a
developmental mechanism for goal discovery via an emergent representation that
abstracts (i.e., groups together) sets of environment states that have similar
roles in the task. We introduce a Feudal HRL algorithm that concurrently learns
both the goal representation and a hierarchical policy. The algorithm uses
symbolic reachability analysis for neural networks to approximate the
transition relation among sets of states and to refine the goal representation.
We evaluate our approach on complex navigation tasks, showing that the learned
representation is interpretable, transferable, and results in data-efficient
learning.
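Grouping states that "have similar roles in the task" via the transition relation among sets of states can be sketched as classic partition refinement: a block of states is split whenever its members can reach different sets of blocks. The explicit successor graph below is a hypothetical stand-in for the paper's neural-network reachability analysis:

```python
# Hedged sketch of goal-space abstraction as partition refinement.
# The tiny graph and initial goal partition are illustrative only; the
# surveyed approach approximates reachability over neural-network dynamics
# rather than enumerating an explicit transition graph.

def refine(successors, partition):
    """Split blocks whose states reach different sets of blocks."""
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, block in enumerate(partition) for s in block}
        new_partition = []
        for block in partition:
            # Signature of a state: the set of blocks reachable in one step.
            groups = {}
            for s in block:
                sig = frozenset(block_of[t] for t in successors[s])
                groups.setdefault(sig, []).append(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# States 0 and 1 both lead to 2, which leads to the goal state 3.
succ = {0: [2], 1: [2], 2: [3], 3: [3]}
part = refine(succ, [[0, 1, 2], [3]])
print(part)  # 0 and 1 end up grouped: they play the same role in the task
```

States 0 and 1 stay in one abstract goal because their one-step reachability signatures agree, which is exactly the kind of emergent, role-based grouping the abstract describes.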