Towards Safe Autonomous Driving Policies using a Neuro-Symbolic Deep Reinforcement Learning Approach
The dynamic nature of driving environments and the presence of diverse road
users pose significant challenges for decision-making in autonomous driving.
Deep reinforcement learning (DRL) has emerged as a popular approach to tackle
this problem. However, the application of existing DRL solutions is mainly
confined to simulated environments due to safety concerns, impeding their
deployment in the real world. To overcome this limitation, this paper introduces a
novel neuro-symbolic model-free DRL approach, called DRL with Symbolic Logics
(DRLSL), which combines the strengths of DRL (learning from experience) and
symbolic first-order logic (knowledge-driven reasoning) to enable safe learning
from real-time autonomous driving interactions in real environments. This
innovative approach provides a means to learn autonomous driving policies by
actively engaging with the physical environment while ensuring safety. We have
implemented the DRLSL framework in autonomous driving using the highD dataset
and demonstrated that our method successfully avoids unsafe actions during both
the training and testing phases. Furthermore, our results indicate that DRLSL
achieves faster convergence during training and exhibits better
generalizability to new driving scenarios compared to traditional DRL methods.
Comment: 15 pages, 9 figures, 1 table, 1 algorithm. Under review as a journal
paper at IEEE Transactions on Intelligent Transportation Systems.
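The core mechanism the abstract describes, filtering a DRL agent's actions through symbolic safety rules, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function and predicate names (`safe_action`, `is_safe`) and the toy Q-values are assumptions introduced here.

```python
# Hedged sketch: a symbolic safety filter over a DRL agent's action choice.
# The agent ranks actions by learned value, but a first-order-logic safety
# predicate gets the final say, so unsafe actions are never executed.

def safe_action(q_values, state, is_safe):
    """Pick the highest-value action whose symbolic safety check passes."""
    # Rank candidate actions by Q-value, best first.
    ranked = sorted(range(len(q_values)), key=lambda a: q_values[a], reverse=True)
    for action in ranked:
        if is_safe(state, action):  # symbolic safety predicate
            return action
    raise RuntimeError("no safe action available in this state")

# Toy example: action 2 has the highest value but is flagged unsafe
# (e.g. "accelerate while the gap to the lead vehicle is below threshold"),
# so the filter falls back to the best safe action.
q = [0.1, 0.5, 0.9]
unsafe = {2}
print(safe_action(q, state=None, is_safe=lambda s, a: a not in unsafe))  # → 1
```

The key design point is that safety is enforced at action-selection time, during both training and testing, which is what allows learning through live interaction rather than only in simulation.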
SDRL: Interpretable and Data-efficient Deep Reinforcement Learning Leveraging Symbolic Planning
Deep reinforcement learning (DRL) has gained great success by learning
directly from high-dimensional sensory inputs, yet is notorious for the lack of
interpretability. Interpretability of the subtasks is critical in hierarchical
decision-making as it increases the transparency of black-box-style DRL
approaches and helps RL practitioners understand the high-level behavior
of the system better. In this paper, we introduce symbolic planning into DRL
and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can
handle both high-dimensional sensory inputs and symbolic planning. The
task-level interpretability is enabled by relating symbolic actions to
options. This framework features a planner/controller/meta-controller
architecture, which takes charge of subtask scheduling, data-driven subtask
learning, and subtask evaluation, respectively. The three components
cross-fertilize each other and eventually converge to an optimal symbolic plan
along with the learned subtasks, bringing together the advantages of long-term
planning capability with symbolic knowledge and end-to-end reinforcement
learning directly from a high-dimensional sensory input. Experimental results
validate the interpretability of subtasks, along with improved data efficiency
compared with state-of-the-art approaches.
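The planner/controller/meta-controller division of labor described above can be sketched as a simple loop. This is an illustrative toy, not SDRL's actual code: the subtask names and the stubbed-out controller are assumptions, standing in for a symbolic planner over options and a learned option policy.

```python
# Hedged sketch of the SDRL-style loop: the planner schedules symbolic
# subtasks, a controller (here a stub for a policy learned from raw
# sensory input) executes each subtask as an option, and the
# meta-controller evaluates the outcome before the next planning step.

def planner(goal, completed):
    """Return the next symbolic subtask toward the goal, or None if done."""
    plan = ["reach_key", "open_door", "reach_goal"]  # illustrative plan
    for subtask in plan:
        if subtask not in completed:
            return subtask
    return None

def controller(subtask):
    """Stand-in for a learned option policy; reports whether it succeeded."""
    return True

def meta_controller(subtask, success):
    """Evaluate the subtask outcome; feeds back into replanning."""
    return success

completed = set()
while (subtask := planner("reach_goal", completed)) is not None:
    if meta_controller(subtask, controller(subtask)):
        completed.add(subtask)
print(sorted(completed))
```

The loop makes the interpretability claim concrete: each step of the learned behavior corresponds to a named symbolic subtask, so a practitioner can read off the high-level plan directly.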
Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework
In this paper, we argue that the future of Artificial Intelligence research
resides in two keywords: integration and embodiment. We support this claim by
analyzing the recent advances of the field. Regarding integration, we note that
the most impactful recent contributions have been made possible through the
integration of recent Machine Learning methods (based in particular on Deep
Learning and Recurrent Neural Networks) with more traditional ones (e.g.
Monte-Carlo tree search, goal babbling exploration or addressable memory
systems). Regarding embodiment, we note that the traditional benchmark tasks
(e.g. visual classification or board games) are becoming obsolete as
state-of-the-art learning algorithms approach or even surpass human performance
in most of them, which has recently encouraged the development of first-person 3D
game platforms embedding realistic physics. Building upon this analysis, we
first propose an embodied cognitive architecture integrating heterogeneous
sub-fields of Artificial Intelligence into a unified framework. We demonstrate
the utility of our approach by showing how major contributions of the field can
be expressed within the proposed framework. We then claim that benchmarking
environments need to reproduce ecologically-valid conditions for bootstrapping
the acquisition of increasingly complex cognitive skills through the concept of
a cognitive arms race between embodied agents.
Comment: Updated version of the paper accepted to the ICDL-Epirob 2017
conference (Lisbon, Portugal).
ToyArchitecture: Unsupervised Learning of Interpretable Models of the World
Research in Artificial Intelligence (AI) has focused mostly on two extremes:
either on small improvements in narrow AI domains, or on universal theoretical
frameworks which are usually uncomputable, incompatible with theories of
biological intelligence, or lack practical implementations. The goal of this
work is to combine the main advantages of the two: to follow a big picture
view, while providing a particular theory and its implementation. In contrast
with purely theoretical approaches, the resulting architecture should be usable
in realistic settings, but also form the core of a framework containing all the
basic mechanisms, into which it should be easier to integrate additional
required functionality.
In this paper, we present a novel, purposely simple, and interpretable
hierarchical architecture which combines multiple different mechanisms into one
system: unsupervised learning of a model of the world, learning the influence
of one's own actions on the world, model-based reinforcement learning,
hierarchical planning and plan execution, and symbolic/sub-symbolic integration
in general. The learned model is stored in the form of hierarchical
representations with the following properties: 1) they are increasingly more
abstract, but can retain details when needed, and 2) they are easy to
manipulate in their local and symbolic-like form, thus also allowing one to
observe the learning process at each level of abstraction. On all levels of the
system, the representation of the data can be interpreted in both a symbolic
and a sub-symbolic manner. This enables the architecture to learn efficiently
using sub-symbolic methods and to employ symbolic inference.
Comment: Revision: changed the pdftitle