Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning
We present a developmental framework based on a long-term memory and
reasoning mechanisms (Vision Similarity and Bayesian Optimisation). This
architecture allows a robot to optimize autonomously hyper-parameters that need
to be tuned from any action and/or vision module, treated as a black-box. The
learning can take advantage of past experiences (stored in the episodic and
procedural memories) in order to warm-start the exploration using a set of
hyper-parameters previously optimized from objects similar to the new unknown
one (stored in a semantic memory). As an example, the system has been used to
optimize 9 continuous hyper-parameters of a professional software package (Kamido),
both in simulation and with a real robot (a Fanuc industrial robotic arm), with a
total of 13 different objects. The robot is able to find a good object-specific
optimization in 68 (simulation) or 40 (real) trials. In simulation, we
demonstrate the benefit of the transfer learning based on visual similarity, as
opposed to an amnesic learning (i.e. learning from scratch all the time).
Moreover, with the real robot, we show that the method consistently outperforms
manual optimization by an expert, requiring less than 2 hours of training time
to achieve a success rate above 88%.
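The warm-start idea above — seeding the optimizer with hyper-parameters previously tuned on visually similar objects rather than starting from scratch — can be sketched with a minimal Gaussian-process surrogate and an upper-confidence-bound acquisition. Everything here (function names, the RBF length-scale, UCB in place of whatever acquisition the actual system uses) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two sets of points (n, d) and (m, d).
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and variance at candidate points Xs,
    # conditioned on observations (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 1e-12)

def suggest(X_obs, y_obs, candidates, kappa=2.0):
    # Upper-confidence-bound acquisition over candidate hyper-parameters.
    # Warm-starting amounts to initializing (X_obs, y_obs) with the memory
    # entries of the most visually similar known object, instead of randomly.
    mu, var = gp_posterior(X_obs, y_obs, candidates)
    return candidates[np.argmax(mu + kappa * np.sqrt(var))]
```

The transfer step is then just a lookup: retrieve the stored (hyper-parameters, score) pairs of the nearest object by visual similarity and pass them in as `X_obs`, `y_obs` before the first real trial.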
Intrinsic Motivation Systems for Autonomous Mental Development
Exploratory activities seem to be intrinsically rewarding
for children and crucial for their cognitive development.
Can a machine be endowed with such an intrinsic motivation
system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations
which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
Key words: Active learning, autonomy, behavior, complexity,
curiosity, development, developmental trajectory, epigenetic
robotics, intrinsic motivation, learning, reinforcement learning,
values
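A rough sketch of the learning-progress drive described above: track prediction errors per region of sensorimotor space, define intrinsic reward as the recent *decrease* in error, and mostly act in the region where that decrease is largest. The class and window sizes below are my own illustrative choices, not the paper's implementation.

```python
import random
from collections import deque

class LearningProgressRegion:
    """Tracks recent prediction errors for one region of sensorimotor
    space; progress = mean of older errors minus mean of newer errors."""
    def __init__(self, window=10):
        self.window = window
        self.errors = deque(maxlen=2 * window)

    def update(self, error):
        self.errors.append(error)

    def progress(self):
        if len(self.errors) < 2 * self.window:
            return 0.0  # not enough data to estimate a trend yet
        old = list(self.errors)[: self.window]
        new = list(self.errors)[self.window :]
        return sum(old) / self.window - sum(new) / self.window

def pick_region(regions, eps=0.1):
    # Mostly exploit the region with the highest learning progress;
    # occasionally explore a random region.
    if random.random() < eps:
        return random.randrange(len(regions))
    return max(range(len(regions)), key=lambda i: regions[i].progress())
```

A region whose error is already near zero (nothing left to learn) and a region whose error stays high (nothing learnable) both show flat error curves, hence near-zero progress, so the agent naturally avoids both — which is exactly the "neither too predictable nor too unpredictable" behavior the abstract describes.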
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
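One deliberately tiny instance of the EPANN idea: evolve the coefficients of a generalized Hebbian rule so that a network wires itself up during its "lifetime". The task (reproduce the input), the rule parameterization, and the hill-climbing loop below are illustrative assumptions, not a method from the reviewed literature.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(w, pre, post, coeffs, lr=0.1):
    # Generalized Hebbian plasticity: the evolved genome supplies the
    # coefficients A, B, C, D of  dw = lr * (A*post*pre + B*pre + C*post + D).
    A, B, C, D = coeffs
    return w + lr * (A * np.outer(post, pre) + B * pre + C * post[:, None] + D)

def lifetime_fitness(coeffs, steps=50):
    # Start each lifetime with blank weights; plasticity does all the learning.
    w = np.zeros((2, 2))
    for _ in range(steps):
        pre = rng.random(2)
        target = pre  # toy task: reproduce the input
        w = hebbian_update(w, pre, target, coeffs)
    test = rng.random((20, 2))
    return -np.mean((test @ w.T - test) ** 2)  # negative error = fitness

def evolve(gens=100):
    # Simple (1+1) hill climbing over the 4-coefficient genome.
    best = rng.normal(size=4)
    best_fit = lifetime_fitness(best)
    for _ in range(gens):
        child = best + 0.2 * rng.normal(size=4)
        fit = lifetime_fitness(child)
        if fit > best_fit:
            best, best_fit = child, fit
    return best, best_fit
```

The point of the sketch is the separation of time scales: the inner loop is lifetime learning driven by the plasticity rule, while the outer loop is evolution selecting *rules*, not weights.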
Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots
Building a humanlike integrative artificial cognitive system, that is, an
artificial general intelligence, is one of the goals in artificial intelligence
and developmental robotics. Furthermore, a computational model that enables an
artificial cognitive system to achieve cognitive development will be an
excellent reference for brain and cognitive science. This paper describes the
development of a cognitive architecture using probabilistic generative models
(PGMs) to fully mirror the human cognitive system. The integrative model is
called a whole-brain PGM (WB-PGM). It is both brain-inspired and PGM-based. In
this paper, the process of building the WB-PGM and learning from the human
brain to build cognitive architectures is described.
Comment: 55 pages, 8 figures, submitted to Neural Network
DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics
Robots are still limited to controlled conditions that the robot designer
knows in enough detail to endow the robot with the appropriate models or
behaviors. Learning algorithms add some flexibility with the ability to
discover the appropriate behavior given either some demonstrations or a reward
to guide its exploration with a reinforcement learning algorithm. Reinforcement
learning algorithms rely on the definition of state and action spaces that
define reachable behaviors. Their adaptation capability critically depends on
the representations of these spaces: small and discrete spaces result in fast
learning while large and continuous spaces are challenging and either require a
long training period or prevent the robot from converging to an appropriate
behavior. Besides the operational cycle of policy execution and the learning
cycle, which works at a slower time scale to acquire new policies, we introduce
the redescription cycle, a third cycle working at an even slower time scale to
generate or adapt the required representations to the robot, its environment
and the task. We introduce the challenges raised by this cycle and we present
DREAM (Deferred Restructuring of Experience in Autonomous Machines), a
developmental cognitive architecture to bootstrap this redescription process
stage by stage, build new state representations with appropriate motivations,
and transfer the acquired knowledge across domains or tasks or even across
robots. We describe the results obtained so far with this approach and conclude
with a discussion of the questions it raises in Neuroscience.
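The three cycles can be pictured as nested loops running at different frequencies. The skeleton below is only a reading aid: the class, method names, and periods are invented here, and a real redescription step would rebuild state and action representations rather than increment a counter.

```python
class DreamStyleAgent:
    """Toy skeleton of three cycles at decreasing frequency:
    operational (every step), learning (every LEARN_EVERY steps),
    redescription (every REDESCRIBE_EVERY steps). Periods are illustrative."""
    LEARN_EVERY = 10
    REDESCRIBE_EVERY = 100

    def __init__(self):
        self.policy_runs = self.policy_updates = self.redescriptions = 0

    def act(self, obs):                  # operational cycle: execute policy
        self.policy_runs += 1

    def update_policy(self, buffer):     # learning cycle: acquire new policies
        self.policy_updates += 1

    def redescribe(self):                # redescription cycle (slowest):
        # revise state/action representations from accumulated experience
        self.redescriptions += 1

    def run(self, steps):
        buffer = []                      # placeholder experience buffer
        for t in range(1, steps + 1):
            self.act(obs=None)
            if t % self.LEARN_EVERY == 0:
                self.update_policy(buffer)
            if t % self.REDESCRIBE_EVERY == 0:
                self.redescribe()
```

Running 1000 steps gives 1000 policy executions, 100 policy updates, and 10 redescriptions, making the "even slower time scale" of the third cycle concrete.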
A perspective on lifelong open-ended learning autonomy for robotics through cognitive architectures
[Abstract]: This paper addresses the problem of achieving lifelong open-ended learning autonomy in robotics, and how different cognitive architectures provide functionalities that support it. To this end, we analyze a set of well-known cognitive architectures in the literature, considering the different components they address and how they implement them. Among the main functionalities taken as relevant for lifelong open-ended learning autonomy are the fact that architectures must contemplate learning, and the availability of contextual memory systems, motivations, or attention. Additionally, we try to establish which of them were actually applied to real robot scenarios. It transpires that in their current form, none of them are completely ready to address this challenge, but some of them do provide indications on the paths to follow in some of the aspects they contemplate. It can be gleaned that for lifelong open-ended learning autonomy, motivational systems that allow finding domain-dependent goals from general internal drives, contextual long-term memory systems that allow for associative learning and retrieval of knowledge, and robust learning systems would be the main components required. Nevertheless, other components, such as attention mechanisms or representation management systems, would greatly facilitate operation in complex domains.
Ministerio de Ciencia e Innovación; PID2021-126220OB-I00. Xunta de Galicia; EDC431C-2021/39. Consellería de Cultura, Educación, Formación Profesional e Universidades; ED431G 2019/0
Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework
In this paper, we argue that the future of Artificial Intelligence research
resides in two keywords: integration and embodiment. We support this claim by
analyzing the recent advances of the field. Regarding integration, we note that
the most impactful recent contributions have been made possible through the
integration of recent Machine Learning methods (based in particular on Deep
Learning and Recurrent Neural Networks) with more traditional ones (e.g.
Monte-Carlo tree search, goal babbling exploration or addressable memory
systems). Regarding embodiment, we note that the traditional benchmark tasks
(e.g. visual classification or board games) are becoming obsolete as
state-of-the-art learning algorithms approach or even surpass human performance
in most of them, having recently encouraged the development of first-person 3D
game platforms embedding realistic physics. Building upon this analysis, we
first propose an embodied cognitive architecture integrating heterogenous
sub-fields of Artificial Intelligence into a unified framework. We demonstrate
the utility of our approach by showing how major contributions of the field can
be expressed within the proposed framework. We then claim that benchmarking
environments need to reproduce ecologically-valid conditions for bootstrapping
the acquisition of increasingly complex cognitive skills through the concept of
a cognitive arms race between embodied agents.
Comment: Updated version of the paper accepted to the ICDL-Epirob 2017 conference (Lisbon, Portugal)
Motivation as a Tool for Designing Lifelong Learning Robots
[Abstract] Designing robots has usually implied knowing beforehand the tasks to be carried out and in what domains. However, in the case of fully autonomous robots this is not possible. Autonomous robots need to operate in an open-ended manner, that is, deciding on the most interesting goals to achieve in domains that are not known at design time. This obviously poses a challenge from the point of view of designing the robot control structure. In particular, the main question that arises is how to endow the robot with a designer-defined purpose and with means to translate that purpose into operational decisions without any knowledge of what situations the robot will find itself in. In this paper, we provide a formalization of motivation from an engineering perspective that allows for the structured design of purposeful robots. This formalization is based on a definition of the concepts of robot needs and drives, which are related through experience to the appropriate goals in specific domains. To illustrate the process, a motivational system to guide the operation of a real robot is constructed using this approach. A series of experiments carried out with it is discussed, providing some insights into the design of purposeful motivated operation.
This work was partially funded by the EU's H2020 research programme (grant No 640891 DREAM), Ministerio de Ciencia, Innovación y Universidades of Spain/FEDER (grant RTI2018-101114-B-I00), Xunta de Galicia and FEDER (grant ED431C 2017/12), and by the Spanish Ministry of Education, Culture and Sports through the FPU grant of Alejandro Romero.
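A minimal sketch of how needs, drives, and goal selection could fit together in code. The dataclass, the deficit-based drive, and the linear utility are assumptions made here for illustration, not the paper's formalization:

```python
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    level: float      # current satisfaction in [0, 1]
    setpoint: float   # desired satisfaction

def drive(need):
    # A drive grows with the deficit between setpoint and current level.
    return max(0.0, need.setpoint - need.level)

def select_goal(needs, goal_effects):
    # goal_effects: {goal: {need_name: expected satisfaction gain}},
    # learned from experience in the current domain.
    # Pick the goal whose expected gains best reduce the strongest drives.
    def utility(goal):
        return sum(drive(n) * goal_effects[goal].get(n.name, 0.0)
                   for n in needs)
    return max(goal_effects, key=utility)
```

The designer fixes the needs and setpoints (the purpose); which concrete goals satisfy them in a given domain is left to the `goal_effects` mapping, which the robot would fill in through experience.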