Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework
In this paper, we argue that the future of Artificial Intelligence research
resides in two keywords: integration and embodiment. We support this claim by
analyzing the recent advances of the field. Regarding integration, we note that
the most impactful recent contributions have been made possible through the
integration of recent Machine Learning methods (based in particular on Deep
Learning and Recurrent Neural Networks) with more traditional ones (e.g.
Monte-Carlo tree search, goal babbling exploration or addressable memory
systems). Regarding embodiment, we note that the traditional benchmark tasks
(e.g. visual classification or board games) are becoming obsolete as
state-of-the-art learning algorithms approach or even surpass human performance
on most of them, which has recently encouraged the development of first-person 3D
game platforms embedding realistic physics. Building upon this analysis, we
first propose an embodied cognitive architecture integrating heterogeneous
sub-fields of Artificial Intelligence into a unified framework. We demonstrate
the utility of our approach by showing how major contributions of the field can
be expressed within the proposed framework. We then claim that benchmarking
environments need to reproduce ecologically-valid conditions for bootstrapping
the acquisition of increasingly complex cognitive skills through the concept of
a cognitive arms race between embodied agents.
Comment: Updated version of the paper accepted at the ICDL-Epirob 2017 conference (Lisbon, Portugal).
Active Learning based on Data Uncertainty and Model Sensitivity
Robots can rapidly acquire new skills from demonstrations. However, when
generalising skills or transitioning across fundamentally different
skills, it is unclear whether the robot has the necessary knowledge to perform
the task. Failing to detect missing information often leads to abrupt movements
or to collisions with the environment. Active learning can quantify the
uncertainty of performing the task and, in general, locate regions of missing
information. We introduce a novel algorithm for active learning and demonstrate
its utility for generating smooth trajectories. Our approach is based on deep
generative models and metric learning in latent spaces. It relies on the
Jacobian of the likelihood to detect non-smooth transitions in the latent
space, i.e., transitions that lead to abrupt changes in the movement of the
robot. When non-smooth transitions are detected, our algorithm asks for an
additional demonstration from that specific region. The newly acquired
knowledge modifies the data manifold and allows for learning a latent
representation for generating smooth movements. We demonstrate the efficacy of
our approach on generalising elementary skills, transitioning across different
skills, and implicitly avoiding collisions with the environment. For our
experiments, we use a simulated pendulum where we observe its motion from
images and a 7-DoF anthropomorphic arm.
Comment: Published at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems.
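The Jacobian-based detection step can be sketched numerically. This is a toy illustration, not the paper's implementation: the decoder below is a hand-written stand-in for the learned deep generative model, and the spike threshold is an arbitrary assumption.

```python
import numpy as np

def decode(z):
    # Toy "decoder": smooth everywhere except for a jump at z = 1.0,
    # standing in for a learned latent-to-trajectory mapping.
    return np.sin(z) + (2.0 if z >= 1.0 else 0.0)

def jacobian_norm(f, z, eps=1e-4):
    # Central finite-difference estimate of |df/dz| at z.
    return abs(f(z + eps) - f(z - eps)) / (2 * eps)

def find_nonsmooth(f, path, threshold=10.0):
    # Latent points whose local Jacobian magnitude spikes are candidate
    # regions in which to request an additional demonstration.
    return [z for z in path if jacobian_norm(f, z) > threshold]

path = np.linspace(0.0, 2.0, 201)
queries = find_nonsmooth(decode, path)
print(queries)  # flags the latent point at the discontinuity near z = 1.0
```

In the paper's setting the decoder is a neural network, so the Jacobian would come from automatic differentiation rather than finite differences; the detection logic is the same.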
Automatic Curriculum Learning For Deep RL: A Short Survey
Automatic Curriculum Learning (ACL) has become a cornerstone of recent
successes in Deep Reinforcement Learning (DRL). These methods shape the learning
trajectories of agents by challenging them with tasks adapted to their
capacities. In recent years, they have been used to improve sample efficiency
and asymptotic performance, to organize exploration, to encourage
generalization or to solve sparse reward problems, among others. The ambition
of this work is twofold: 1) to present a compact and accessible introduction to
the Automatic Curriculum Learning literature and 2) to draw a bigger picture of
the current state of the art in ACL to encourage the cross-breeding of existing
concepts and the emergence of new ideas.
Comment: Accepted at IJCAI 2020.
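One recurring ACL ingredient such surveys discuss is sampling tasks in proportion to absolute learning progress (ALP). A minimal sketch, with made-up task names and progress estimates (none of these values come from the paper):

```python
import random

def alp_probabilities(progress, eps=1e-6):
    # Turn per-task learning-progress estimates (recent change in
    # performance) into a sampling distribution; eps keeps every task
    # reachable even when its measured progress is zero.
    weights = [abs(p) + eps for p in progress]
    total = sum(weights)
    return [w / total for w in weights]

def sample_task(tasks, progress, rng=random):
    # Draw the next training task with probability proportional to ALP.
    probs = alp_probabilities(progress)
    return rng.choices(tasks, weights=probs, k=1)[0]

tasks = ["reach", "push", "stack"]          # illustrative task set
progress = [0.05, 0.30, 0.0]                # fastest improvement on "push"
probs = alp_probabilities(progress)
print(probs)  # "push" receives the highest sampling probability
```

Using the absolute value of progress also revisits tasks being forgotten (negative progress), one of the mechanisms surveyed for organizing exploration.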
Explauto: an open-source Python library to study autonomous exploration in developmental robotics
We present an open-source Python library, called Explauto, providing a unified API to design and compare exploration strategies driving various sensorimotor learning algorithms in various simulated or robotic systems. Explauto aims to be collaborative and pedagogical, providing a platform where developmental roboticists can publish and compare their algorithmic contributions related to autonomous exploration and learning, as well as a platform for teaching and scientific dissemination. The library is available at this address: https://github.com/flowersteam/explauto
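The kind of exploration loop that such a unified API factors out can be sketched as follows. The class and method names here are simplified stand-ins chosen for illustration, not Explauto's actual interface:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

class ToyEnvironment:
    def compute(self, m):
        # Toy sensorimotor mapping: a motor command m yields outcome 0.5 * m.
        return 0.5 * m

class NearestNeighbourModel:
    def __init__(self):
        self.data = []  # observed (motor, sensory) pairs
    def inverse(self, s_goal):
        # Reuse the motor command whose past outcome was closest to the goal
        # (a random command before any data has been collected).
        if not self.data:
            return random.uniform(-1.0, 1.0)
        return min(self.data, key=lambda ms: abs(ms[1] - s_goal))[0]
    def update(self, m, s):
        self.data.append((m, s))

class RandomGoalSampler:
    def sample_goal(self):
        # Random goal babbling; curiosity-driven interest models would
        # instead bias goals toward regions of high learning progress.
        return random.uniform(-0.5, 0.5)

env, model, interest = ToyEnvironment(), NearestNeighbourModel(), RandomGoalSampler()
for _ in range(200):
    s_goal = interest.sample_goal()                    # choose a sensory goal
    m = model.inverse(s_goal) + random.gauss(0, 0.1)   # infer command + explore
    s = env.compute(m)                                 # execute, observe outcome
    model.update(m, s)                                 # learn from the experience

err = abs(env.compute(model.inverse(0.3)) - 0.3)
print(err)  # reaching error is small once the outcome space is covered
```

Swapping any one of the three components (environment, sensorimotor model, interest model) while keeping the loop fixed is exactly the kind of comparison the library is meant to support.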
Towards hierarchical curiosity-driven exploration of sensorimotor models
Curiosity-driven exploration mechanisms have been proposed to allow robots to actively explore high-dimensional sensorimotor spaces in an open-ended manner [1], [2]. In such setups, competence-based intrinsic motivations show better results than knowledge-based exploration mechanisms, which only monitor the learner's prediction performance [2], [3]. With competence-based intrinsic motivations, the learner explores its sensor space with a bias toward regions which are predicted to yield high competence progress. Moreover, throughout its life, a developmental robot has to incrementally explore skills that add up to the hierarchy of previously learned skills, under the constraint of the cost of experimentation. Thus, a hierarchical exploration architecture could make it possible to reuse previously explored sensorimotor models and to combine them to explore new, more complex sensorimotor models more efficiently. Here, we rely more specifically on the R-IAC and SAGG-RIAC series of architectures [3]. These architectures allow the learning of a single mapping between a motor and a sensor space with a competence-based intrinsic motivation. We describe ways to extend these architectures with different task spaces that can be explored in a hierarchical manner, and mechanisms to handle this hierarchy of sensorimotor models, each of which needs to be explored with an adequate number of trials. We also describe an interactive task to evaluate the hierarchical learning mechanisms, where a robot has to explore its motor space in order to push an object to different locations. The robot can first explore how to make movements with its hand and then reuse this skill to explore the task of pushing an object.
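The competence-progress heuristic at the core of these architectures can be sketched as follows; the region layout, window size, and competence histories below are illustrative values, not taken from the cited papers:

```python
import random

class Region:
    """A region of the goal (task) space with a history of competences,
    i.e. how well goals sampled in this region were reached."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.competences = []

    def progress(self, window=5):
        # Competence progress: mean recent competence minus mean older
        # competence; unexplored regions are treated as maximally interesting.
        c = self.competences
        if len(c) < 2 * window:
            return float("inf")
        return abs(sum(c[-window:]) - sum(c[-2 * window:-window])) / window

    def sample_goal(self):
        return random.uniform(self.low, self.high)

def choose_region(regions):
    # Bias exploration toward the region where competence improves fastest;
    # stagnating (already mastered or unlearnable) regions score near zero.
    return max(regions, key=lambda r: r.progress())

improving, stagnating = Region(0.0, 0.5), Region(0.5, 1.0)
improving.competences = [0.1, 0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9]
stagnating.competences = [0.5] * 10
chosen = choose_region([improving, stagnating])
print(chosen is improving)  # True: exploration focuses where learning progresses
```

In the hierarchical extension described above, each sensorimotor model in the hierarchy would maintain its own such regions, so trials can be allocated across models according to where progress is currently being made.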