Affordances in Psychology, Neuroscience, and Robotics: A Survey
The concept of affordances appeared in psychology in the late 1960s as an alternative perspective on the visual perception of the environment. Its revolutionary intuition was that the way living beings perceive the world is deeply shaped by the actions they are able to perform. Over the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances: we first discuss the main definitions and formalizations of affordance theory, then report the most significant supporting evidence from psychology and neuroscience, and finally review the most relevant applications of this concept in robotics.
Intrinsic Motivation Systems for Autonomous Mental Development
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research from developmental psychology, neuroscience, developmental robotics, and active learning, the paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities increases autonomously, and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments illustrate the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these results are discussed in relation to more complex forms of behavioral organization and to data from developmental psychology.
Key words: Active learning, autonomy, behavior, complexity, curiosity, development, developmental trajectory, epigenetic robotics, intrinsic motivation, learning, reinforcement learning, values
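The learning-progress drive described in the abstract can be sketched in a few lines: the agent tracks its prediction error per "situation" and repeatedly chooses the one whose error has recently dropped the most, which steers it away from both trivial and unlearnable activities. Everything below (the region names, decay rates, and the epsilon-greedy choice) is an illustrative assumption, not the paper's actual implementation.

```python
import random

class Region:
    """A sensorimotor 'situation' with a fixed learnability profile (illustrative)."""
    def __init__(self, name, floor, rate):
        self.name = name
        self.error = 1.0       # current prediction error
        self.floor = floor     # irreducible error (sensor/world noise)
        self.rate = rate       # how fast error decreases with practice
        self.history = [self.error]

    def practice(self):
        # Error decays toward its floor each time the region is visited.
        self.error = self.floor + (self.error - self.floor) * (1 - self.rate)
        self.history.append(self.error)

    def learning_progress(self):
        # Intrinsic reward: the recent decrease in prediction error.
        return self.history[-2] - self.history[-1] if len(self.history) > 1 else 1.0

random.seed(0)
regions = [
    Region("trivial",     floor=0.05, rate=0.9),  # mastered almost instantly
    Region("learnable",   floor=0.05, rate=0.2),  # steady, sustained progress
    Region("unlearnable", floor=0.95, rate=0.2),  # mostly noise, no progress
]

visits = {r.name: 0 for r in regions}
for step in range(200):
    # epsilon-greedy choice of the region with maximal learning progress
    r = (random.choice(regions) if random.random() < 0.1
         else max(regions, key=Region.learning_progress))
    r.practice()
    visits[r.name] += 1

print(visits)  # the "learnable" region dominates once "trivial" is exhausted
```

The qualitative behaviour matches the abstract: the easy region is visited first and quickly abandoned, the noisy region is avoided after a few tries, and most time is spent where error is still falling.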
Discovering Communication
What kind of motivation drives child language development? This article presents a computational model and a robotic experiment to articulate the hypothesis that children discover communication as a result of exploring and playing with their environment. The robotic agent considered is intrinsically motivated towards situations in which it makes optimal learning progress. To experience optimal learning progress, it must avoid situations that are already familiar, but also situations where nothing can be learnt. The robot is placed in an environment containing both communicating and non-communicating objects. As a consequence of its intrinsic motivation, the robot explores this environment in an organized manner, focusing first on non-communicative activities and then discovering the learning potential of certain types of interactive behaviour. In this experiment, the agent ends up interested in communication through vocal interactions without having a specific drive for communication.
Learning Dimensions: Lessons from Field Studies
In this paper, we describe work investigating the creation of engaging programming learning experiences. Background research informed the design of four fieldwork studies, involving a range of age groups, that explored how programming tasks could best be framed to motivate learners. Our empirical findings from these four studies, described here, contributed to the design of a set of programming "Learning Dimensions" (LDs). The LDs provide educators with insights to support key design decisions for the creation of engaging programming learning experiences. This paper describes the background to the identification of these LDs and how they can address the design and delivery of highly engaging programming learning tasks. A web application has been authored to support educators in applying the LDs to their lesson design.
Empirical experiments on intrinsic motivations and action acquisition: results, evaluation, and redefinition
This document presents Deliverable D3.2 of the EU-funded Integrated Project "IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots", contract n. FP7-ICT-IP-231722. The aims of the deliverable, as given in the original IM-CLeVeR proposal, were to identify new key empirical phenomena and processes, allowing the design of a second set of experiments. This report covers: (1) novelty detection and the discovery of the when/what/how of agency in experiments with humans (the "joystick experiment") and Parkinson's patients; (2) the object properties that stimulate intrinsically motivated interaction and facilitate the acquisition of adaptive knowledge and skills in monkeys and children (the "board experiment").
Self-Supervised Learning of Action Affordances as Interaction Modes
When humans perform a task with an articulated object, they interact with the object in only a handful of ways, while the space of all possible interactions is nearly endless. This is because humans have prior knowledge about which interactions are likely to be successful; e.g., to open a new door we first try the handle. While learning such priors without supervision is easy for humans, it is notoriously hard for machines. In this work, we tackle unsupervised learning of priors over useful interactions with articulated objects, which we call interaction modes. In contrast to prior art, we use no supervision or privileged information; we only assume access to a depth sensor in the simulator to learn the interaction modes. More precisely, we define a successful interaction as one that changes the visual environment substantially, and we learn a generative model of such interactions that can be conditioned on the desired goal state of the object. In our experiments, we show that our model covers most of the human interaction modes, outperforms existing state-of-the-art methods for affordance learning, and can generalize to objects never seen during training. Additionally, we show promising results in the goal-conditional setup, where our model can be quickly fine-tuned to perform a given task. Supplementary material: https://actaim.github.io
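The self-supervision signal in this abstract, "an interaction that changes the visual environment substantially", can be sketched as a simple depth-difference test. The function below is an illustrative proxy only: the `delta` and `frac` thresholds, the flattened 64x64 depth image, and the toy "door" scene are made-up assumptions, not values or code from the paper.

```python
import random

def is_successful_interaction(depth_before, depth_after, delta=0.01, frac=0.05):
    """Label an interaction successful if enough of the scene visibly moved.

    depth_before / depth_after: flattened depth images (meters).
    delta: per-pixel change (m) below which we treat motion as sensor noise.
    frac: fraction of changed pixels required to call the change substantial.
    Both thresholds are illustrative assumptions.
    """
    changed = sum(abs(a - b) > delta for a, b in zip(depth_before, depth_after))
    return changed / len(depth_before) > frac

random.seed(0)
before = [2.0] * (64 * 64)  # flattened 64x64 depth image: flat scene 2 m away

# Failed interaction: nothing moved, only millimeter-scale sensor noise.
noise = [d + random.gauss(0, 0.001) for d in before]

# Successful interaction: a door-sized region swings 0.5 m toward the camera.
opened = [d - 0.5 if i % 64 < 20 else d for i, d in enumerate(before)]

print(is_successful_interaction(before, noise))   # False
print(is_successful_interaction(before, opened))  # True
```

A generative model trained only on interactions passing this test would, as the abstract describes, concentrate on the few motions that actually articulate the object rather than the near-endless space of inconsequential ones.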