Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping
The young infant explores its body, its sensorimotor system, and the
immediately accessible parts of its environment, over the course of a few
months creating a model of peripersonal space useful for reaching and grasping
objects around it. Drawing on constraints from the empirical literature on
infant behavior, we present a preliminary computational model of this learning
process, implemented and evaluated on a physical robot. The learning agent
explores the relationship between the configuration space of the arm, sensing
joint angles through proprioception, and its visual perceptions of the hand and
grippers. The resulting knowledge is represented as the peripersonal space
(PPS) graph, where nodes represent states of the arm, edges represent safe
movements, and paths represent safe trajectories from one pose to another. In
our model, the learning process is driven by intrinsic motivation. When
repeatedly performing an action, the agent learns the typical result, but also
detects unusual outcomes, and is motivated to learn how to make those unusual
results reliable. Arm motions typically leave the static background unchanged,
but occasionally bump an object, changing its static position. The reach action
is learned as a reliable way to bump and move an object in the environment.
Similarly, once a reliable reach action is learned, it typically makes a
quasi-static change in the environment, moving an object from one static
position to another. The unusual outcome is that the object is accidentally
grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically
with the hand. Learning to make grasps reliable is more complex than for
reaches, but we demonstrate significant progress. Our current results are steps
toward autonomous sensorimotor learning of motion, reaching, and grasping in
peripersonal space, based on unguided exploration and intrinsic motivation.
Comment: 35 pages, 13 figures
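The peripersonal space (PPS) graph described in the abstract above is, at its core, a simple graph data structure: nodes pair an arm configuration with the visual percept of the hand in that configuration, edges record movements that were executed safely during exploration, and safe trajectories are found by graph search. The sketch below is a minimal Python illustration of that idea, not the authors' implementation; all class and helper names (PPSGraph, add_safe_move, safe_trajectory) and the example joint angles are hypothetical.

```python
from collections import deque

class PPSGraph:
    """Minimal sketch of a peripersonal-space (PPS) graph.

    Nodes hold an arm configuration (joint angles sensed through
    proprioception) together with the visual percept of the hand seen
    in that configuration; edges record moves between two
    configurations that were executed safely during exploration.
    """

    def __init__(self):
        self.nodes = {}   # node_id -> (joint_angles, hand_percept)
        self.edges = {}   # node_id -> set of neighbouring node_ids

    def add_node(self, node_id, joint_angles, hand_percept):
        self.nodes[node_id] = (tuple(joint_angles), hand_percept)
        self.edges.setdefault(node_id, set())

    def add_safe_move(self, a, b):
        """Record that moving directly between configurations a and b was safe."""
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def safe_trajectory(self, start, goal):
        """Breadth-first search for a safe path (a sequence of node ids)."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no known safe trajectory between these poses


# Example: three explored arm poses linked by two safe moves.
g = PPSGraph()
g.add_node("rest", [0.0, 0.4, -0.2], hand_percept=(120, 80))
g.add_node("mid", [0.3, 0.5, -0.1], hand_percept=(160, 95))
g.add_node("near_object", [0.6, 0.7, 0.0], hand_percept=(200, 110))
g.add_safe_move("rest", "mid")
g.add_safe_move("mid", "near_object")
print(g.safe_trajectory("rest", "near_object"))  # ['rest', 'mid', 'near_object']
```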
Sensorimotor representation learning for an "active self" in robots: A model survey
Safe human-robot interactions require robots to be able to learn how to
behave appropriately in spaces populated by people
and thus to cope with the challenges posed by our dynamic and unstructured
environment, rather than being provided a rigid set of rules for operations. In
humans, these capabilities are thought to be related to our ability to perceive
our body in space, sensing the location of our limbs during movement, being
aware of other objects and agents, and controlling our body parts to interact
with them intentionally. Toward the next generation of robots with bio-inspired
capacities, in this paper, we first review the developmental processes of
underlying mechanisms of these abilities: The sensory representations of body
schema, peripersonal space, and the active self in humans. Second, we provide a
survey of robotics models of these sensory representations and robotics models
of the self; and we compare these models with the human counterparts. Finally,
we analyse what is missing from these robotics models and propose a theoretical
computational framework, which aims to allow the emergence of the sense of self
in artificial agents by developing sensory representations through
self-exploration.
The wheelchair as a full-body tool extending the peripersonal space
Dedicated multisensory mechanisms in the brain represent peripersonal space (PPS), a limited portion of space immediately surrounding the body. Previous studies have illustrated the malleability of PPS representation through hand-object interaction, showing that tool use extends the limits of the hand-centered PPS. In the present study we investigated the effects of a special tool, the wheelchair, in extending the action possibilities of the whole body. We used a behavioral measure to quantify the extension of the PPS around the body before and after Active (Experiment 1) and Passive (Experiment 2) training with a wheelchair and when participants were blindfolded (Experiment 3). Results suggest that a wheelchair-mediated passive exploration of far space extended PPS representation. This effect was specifically related to the possibility of receiving information from the environment through vision, since no extension effect was found when participants were blindfolded. Surprisingly, the active motor training did not induce any modification in PPS representation, probably because the wheelchair maneuver was demanding for non-expert users and thus they may have prioritized processing of information from close to the wheelchair rather than at far spatial locations. Our results suggest that plasticity in PPS representation after tool use seems not to strictly depend on active use of the tool itself, but is triggered by simultaneous processing of information from the body and the space where the body acts in the environment, which is more extended in the case of wheelchair use. These results contribute to our understanding of the mechanisms underlying body environment interaction for developing and improving applications of assistive technological devices in different clinical populations
Creation of a mobile game about environmental sustainability
This document aims to provide an insightful description of the process of
building a mobile video game about environmental sustainability using Unity,
and a detailed explanation of two other approaches that the thesis authors
found to be inadequate.
The report, however, does not aim to be a step-by-step guide on how to
build such a game, as that would prove very tedious and unproductive. We have
chosen instead to provide insight only into key gameplay elements and technical
decisions that will enable future endeavours in this field to build a similar
project or extend the current one with ease.