Embodied Robot Models for Interdisciplinary Emotion Research
Due to their complex nature, emotions cannot be properly understood from the perspective of a single discipline. In this paper, I discuss how the use of robots as models is beneficial for interdisciplinary emotion research. Addressing this issue through the lens of my own research, I focus on a critical analysis of embodied robot models of different aspects of emotion, relate them to theories in psychology and neuroscience, and provide representative examples. I discuss concrete ways in which embodied robot models can be used to carry out interdisciplinary emotion research, assessing their contributions: as hypothetical models, and as operational models of specific emotional phenomena, of general emotion principles, and of specific emotion "dimensions". I conclude by discussing the advantages of using embodied robot models over other models.
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
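The interplay the abstract describes — evolution shaping a plasticity rule that then learns within a lifetime — can be sketched in miniature. The toy below evolves the coefficients of a generalised Hebbian ("ABCD") rule so that a single plastic weight learns an identity mapping during its lifetime; the task, function names, and parameter ranges are illustrative assumptions, not the methods of any specific EPANN paper.

```python
import random

def hebbian_update(w, pre, post, coeffs):
    # Generalised Hebbian "ABCD" rule: dw = eta * (A*pre*post + B*pre + C*post + D)
    A, B, C, D, eta = coeffs
    return w + eta * (A * pre * post + B * pre + C * post + D)

def lifetime_fitness(coeffs, trials=20):
    # One "lifetime": a single plastic weight must come to reproduce the
    # identity mapping target = x. Inputs alternate deterministically.
    w, error = 0.0, 0.0
    for t in range(trials):
        x = 1.0 if t % 2 == 0 else -1.0
        error += abs(x - w * x)
        # Plastic change driven by experience; the postsynaptic term is
        # clamped to the target here, acting as a toy teaching signal.
        w = hebbian_update(w, x, x, coeffs)
    return -error                      # higher fitness = lower lifetime error

def evolve(pop_size=30, generations=40, seed=0):
    # Evolution shapes the plasticity rule, not the weight itself.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lifetime_fitness, reverse=True)
        elite = ranked[: pop_size // 5]
        pop = elite + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=lifetime_fitness)
```

The key design point mirrors the abstract: the genome encodes only the learning rule's dynamics, so any adaptive behaviour the weight shows is discovered within the agent's lifetime.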
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that yields progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
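The predictive coding mechanism mentioned above can be caricatured with a single level of the hierarchy: an internal estimate is nudged by the gap between a bottom-up sensory prediction error and a top-down prior prediction error until the two balance. The scalar Gaussian setting and all names here are illustrative assumptions.

```python
def predictive_coding_step(mu, observation, prior, lr=0.1,
                           scale_obs=1.0, scale_prior=1.0):
    # Bottom-up error: sensory input vs. the current top-down prediction
    # (in this one-level toy, the prediction of the input is mu itself).
    eps_obs = (observation - mu) / scale_obs
    # Top-down error: the estimate vs. its prior expectation.
    eps_prior = (mu - prior) / scale_prior
    # Nudge the estimate so the two prediction errors trade off.
    return mu + lr * (eps_obs - eps_prior)

def infer(observation, prior, steps=200):
    # Iterate until the errors balance; with equal scales the fixed point
    # is the midpoint between input and prior, and with unequal scales a
    # precision-weighted compromise.
    mu = prior
    for _ in range(steps):
        mu = predictive_coding_step(mu, observation, prior)
    return mu
```

With an observation of 2.0 and a prior of 0.0 at equal scales, the estimate settles at 1.0, the point where bottom-up and top-down errors cancel.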
Development of Cognitive Capabilities in Humanoid Robots
Building intelligent systems with a human level of competence is the ultimate
grand challenge for science and technology in general, and especially for the
computational intelligence community. Recent theories in autonomous cognitive
systems have focused on the close integration (grounding) of communication with
perception, categorisation and action. Cognitive systems are essential for
integrated multi-platform systems that are capable of sensing and communicating.
This thesis presents a cognitive system for a humanoid robot that integrates
abilities such as object detection and recognition, which are merged with natural
language understanding and refined motor controls. The work includes three
studies: (1) the use of generic manipulation of objects using the NMFT algorithm,
by successfully testing the extension of the NMFT to control robot behaviour; (2) a
study of the development of a robotic simulator; (3) robotic simulation experiments
showing that a humanoid robot is able to acquire complex behavioural, cognitive,
and linguistic skills through individual and social learning. The robot is able to
learn to handle and manipulate objects autonomously, to cooperate with human
users, and to adapt its abilities to changes in internal and environmental conditions.
The model and the experimental results reported in this thesis emphasise the
importance of embodied cognition, i.e. the physical interaction between the
humanoid robot's body and its environment.
The influence of dopamine on prediction, action and learning
In this thesis I explore functions of the neuromodulator dopamine in the context
of autonomous learning and behaviour. I first investigate dopaminergic influence
within a simulated agent-based model, demonstrating how modulation of
synaptic plasticity can enable reward-mediated learning that is both adaptive and
self-limiting. I describe how this mechanism is driven by the dynamics of agent-environment
interaction and consequently suggest roles for both complex spontaneous
neuronal activity and specific neuroanatomy in the expression of early, exploratory
behaviour. I then show how the observed response of dopamine neurons
in the mammalian basal ganglia may also be modelled by similar processes involving
dopaminergic neuromodulation and cortical spike-pattern representation within
an architecture of counteracting excitatory and inhibitory neural pathways, reflecting
gross mammalian neuroanatomy. Significantly, I demonstrate how combined
modulation of synaptic plasticity and neuronal excitability enables specific (timely)
spike-patterns to be recognised and selectively responded to by efferent neural populations,
therefore providing a novel spike-timing based implementation of the hypothetical
‘serial-compound’ representation suggested by temporal difference learning.
I subsequently discuss more recent work, focused upon modelling those complex
spike-patterns observed in cortex. Here, I describe neural features likely to contribute
to the expression of such activity and subsequently present novel simulation
software allowing for interactive exploration of these factors, in a more comprehensive
neural model that implements both dynamical synapses and dopaminergic
neuromodulation. I conclude by describing how the work presented ultimately suggests
an integrated theory of autonomous learning, in which direct coupling of agent
and environment supports a predictive coding mechanism, bootstrapped in early
development by a more fundamental process of trial-and-error learning.
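The 'serial-compound' account referred to above can be caricatured in a few lines of tabular TD(0): each time step within a trial is treated as its own state, and the TD error plays the role of the phasic dopamine signal. This is a generic textbook reduction, not the spiking model developed in the thesis; the trial layout and parameters are illustrative assumptions.

```python
def train_td(n_steps=5, trials=200, alpha=0.1, gamma=1.0):
    # Serial-compound representation: each within-trial time step is a state.
    V = [0.0] * (n_steps + 1)          # value of each time step (terminal = 0)
    deltas = []                        # dopamine-like prediction errors per trial
    for _ in range(trials):
        trial_deltas = []
        for t in range(n_steps):
            r = 1.0 if t == n_steps - 1 else 0.0   # reward at trial end
            delta = r + gamma * V[t + 1] - V[t]    # TD / reward-prediction error
            V[t] += alpha * delta
            trial_deltas.append(delta)
        deltas.append(trial_deltas)
    return V, deltas
```

Early in training the error fires at reward delivery; once the serial-compound states have acquired value, the error at reward time vanishes, reproducing the hallmark shift seen in recordings of dopamine neurons.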
A new class of neural architectures to model episodic memory : computational studies of distal reward learning
A computational cognitive neuroscience model is proposed that models episodic memory based on the mammalian brain. A computational neural architecture instantiates the proposed model and is tested on a particular task of distal reward learning. Categorical Neural Semantic Theory informs the architecture design. To experiment on the computational brain model, embodiment and an environment in which the embodiment exists are simulated. This simulated environment realizes the Morris Water Maze task, a well-established biological experimental test of distal reward learning. The embodied neural architecture is treated as a virtual rat and the environment it acts in as a virtual water tank. Performance levels of the neural architectures are evaluated through analysis of embodied behavior in the distal reward learning task. Comparison is made to biological rat experimental data, as well as to other published models. In addition, differences in performance are compared between the normal and categorically informed versions of the architecture.
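The distal reward problem the architecture addresses can be illustrated, in a deliberately simplified form, by tabular Q-learning with eligibility traces on a one-dimensional chain: reward arrives only at the goal, and the decaying traces carry credit back to the earlier choices that produced it. This is a generic textbook mechanism ("naive" Q(lambda), without trace cutting), not the neural model proposed in the work above.

```python
import random

def q_lambda_chain(n_states=6, episodes=500, alpha=0.2, gamma=0.9,
                   lam=0.8, eps=0.1, seed=0):
    # Chain of states 0..n_states-1; reward only at the right end, so early
    # actions are reinforced only distally, via the eligibility traces.
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]        # actions: 0 = left, 1 = right
    for _ in range(episodes):
        E = [[0.0, 0.0] for _ in range(n_states)]    # eligibility traces
        s = 0
        while s != n_states - 1:
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)                 # explore / break ties
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0   # distal reward at the goal
            delta = r + gamma * max(Q[s2]) - Q[s][a]
            E[s][a] += 1.0                           # mark this choice as eligible
            for st in range(n_states):
                for ac in range(2):
                    Q[st][ac] += alpha * delta * E[st][ac]
                    E[st][ac] *= gamma * lam         # traces fade with elapsed time
            s = s2
    return Q
```

The trace decay `gamma * lam` is what bridges the temporal gap: when the goal reward finally produces a positive error, every still-eligible earlier state-action pair shares in the update.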
Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey
Building autonomous machines that can explore open-ended environments,
discover possible interactions and build repertoires of skills is a general
objective of artificial intelligence. Developmental approaches argue that this
can only be achieved by autotelic agents: intrinsically motivated learning
agents that can learn to represent, generate, select and solve their own
problems. In recent years, the convergence of developmental approaches with
deep reinforcement learning (RL) methods has been leading to the emergence of a
new field: developmental RL. Developmental RL is
concerned with the use of deep RL algorithms to tackle a developmental problem
-- the intrinsically motivated skills acquisition problem.
The self-generation of goals requires the learning
of compact goal encodings as well as their associated goal-achievement
functions. This raises new challenges compared to standard RL algorithms
originally designed to tackle pre-defined sets of goals using external reward
signals. The present paper introduces developmental RL and proposes a
computational framework based on goal-conditioned RL to tackle the
intrinsically motivated skills acquisition problem. It proceeds to present a
typology of the various goal representations used in the literature, before
reviewing existing methods to learn to represent and prioritize goals in
autonomous systems. Finally, we close the paper by discussing some open
challenges in the quest for intrinsically motivated skills acquisition.
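The core loop of a goal-conditioned agent that generates its own goals and its own goal-achievement reward can be sketched in tabular form. The environment (a short corridor), the uniform goal sampler, and all names below are illustrative assumptions; real autotelic agents use learned goal encodings and deep RL rather than a table.

```python
import random

def train_autotelic(n_states=7, episodes=2000, alpha=0.3, gamma=0.9,
                    eps=0.2, seed=0):
    # Goal-conditioned Q-table: one value per (state, goal, action) triple.
    # No external reward exists; the agent samples its own goals and applies
    # a self-generated goal-achievement function (1 on reaching the goal).
    rng = random.Random(seed)
    Q = [[[0.0, 0.0] for _ in range(n_states)] for _ in range(n_states)]
    for _ in range(episodes):
        g = rng.randrange(n_states)              # intrinsically sampled goal
        s = rng.randrange(n_states)              # random start state
        for _ in range(2 * n_states):            # bounded episode length
            if s == g:
                break
            if rng.random() < eps or Q[s][g][0] == Q[s][g][1]:
                a = rng.randrange(2)             # explore / break ties
            else:
                a = 0 if Q[s][g][0] > Q[s][g][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == g else 0.0          # internal goal-achievement signal
            bootstrap = 0.0 if s2 == g else gamma * max(Q[s2][g])
            Q[s][g][a] += alpha * (r + bootstrap - Q[s][g][a])
            s = s2
    return Q

def act_greedy(Q, s, g):
    # Greedy policy conditioned on the commanded goal g.
    return 0 if Q[s][g][0] > Q[s][g][1] else 1
```

Because the policy is conditioned on the goal, one table yields a whole repertoire of skills: the same agent can afterwards be commanded to reach any state it practised on, which is the sense in which goal self-generation builds a skill repertoire.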