Exploring virtual reality object perception following sensory-motor interactions with different visuo-haptic collider properties.
Interacting with the environment often requires the integration of visual and haptic information. Notably, perceiving external objects depends on how our brain binds sensory inputs into a unitary experience. The feedback provided by objects when we interact with them (through our movements) might then influence our perception. In VR, the interaction with an object can be dissociated from the size of the object itself by means of 'colliders' (interactive spaces surrounding the objects). The present study investigates possible after-effects in size discrimination for virtual objects after prolonged interaction characterized by visual and haptic incongruencies. A total of 96 participants took part in this virtual reality study. Participants were distributed into four groups, in which they were required to perform a size discrimination task between two cubes before and after 15 min of a visuomotor task involving interaction with the same virtual cubes. Each group interacted with a different cube in which the visual (normal vs. small collider) and haptic (vibration vs. no vibration) features of the virtual cube were manipulated. The quality of interaction (number of touches and trials performed) was used as a dependent variable to assess performance in the visuomotor task. To measure bias in size perception, we compared changes in the point of subjective equality (PSE) before and after the task in the four groups. The results showed that a small visual collider decreased manipulation performance, regardless of the presence or absence of the haptic signal. However, a change in PSE was found only in the group exposed to the small visual collider with haptic feedback, leading to an increased perceived cube size. This after-effect was absent in the visual-only incongruency condition, suggesting that haptic information and multisensory integration played a crucial role in inducing perceptual changes.
The results are discussed in light of recent findings on visual-haptic integration during multisensory information processing in real and virtual environments.
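The PSE measure used in the study above can be illustrated with a minimal sketch: a point of subjective equality is typically estimated by fitting a psychometric function to discrimination responses and reading off the stimulus level judged larger 50% of the time. The sizes, trial counts, and logistic form below are illustrative assumptions, not the study's actual data or analysis pipeline.

```python
# Illustrative sketch of PSE estimation (hypothetical data, not the study's).
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Logistic probability of judging the comparison cube as larger."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

rng = np.random.default_rng(0)
sizes = np.linspace(4.0, 6.0, 9)        # comparison cube sizes (arbitrary units)
true_pse, true_slope = 5.2, 4.0         # a biased observer: PSE shifted above the 5.0 standard
p = psychometric(sizes, true_pse, true_slope)
responses = rng.binomial(40, p) / 40.0  # proportion "larger" over 40 simulated trials per size

(pse_hat, slope_hat), _ = curve_fit(psychometric, sizes, responses, p0=[5.0, 1.0])
print(f"estimated PSE = {pse_hat:.2f}")
# A PSE above the 5.0 standard indicates the comparison must be physically
# larger to feel equal, i.e. a size-perception bias; comparing pre- and
# post-task PSEs quantifies the after-effect.
```
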
An embodied and grounded perspective on concepts
According to the mainstream view in psychology and neuroscience, concepts are rather stable informational units represented in propositional format.
In the view I will outline, instead, concepts correspond to patterns of activation of the perception, action, and emotional systems that are typically activated when we interact with the entities they refer to. Starting from this embodied and grounded approach to concepts, I will focus on different research lines and present some experimental evidence concerning concepts of objects, concepts of actions, and abstract concepts. I will argue that, in order to account for abstract concepts, embodied and grounded theories should be extended.
The implications of embodiment for behavior and cognition: animal and robotic case studies
In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations.
Comment: Book chapter in W. Tschacher & C. Bergomi, ed., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5
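The forward-model idea mentioned above can be sketched in a few lines: a forward model predicts the sensory consequence of a motor command, and the sensory prediction error drives adaptation of the internal mapping (a toy stand-in for updating a body schema). The linear model, gains, and learning rate below are my own illustrative assumptions, not the chapter's formulation.

```python
# Toy forward model (illustrative assumption, not the chapter's model):
# predict the sensory outcome of a motor command and adapt the internal
# gain from the sensory prediction error (delta rule).
class ForwardModel:
    def __init__(self, gain=1.0, lr=0.1):
        self.gain = gain  # internal estimate of command-to-displacement mapping
        self.lr = lr      # learning rate for error-driven adaptation

    def predict(self, pos, command):
        """Predicted sensory position after issuing the command."""
        return pos + self.gain * command

    def update(self, pos, command, observed):
        """Adapt the gain from the sensory prediction error."""
        error = observed - self.predict(pos, command)
        self.gain += self.lr * error * command
        return error

# The body actually moves with gain 0.5 (e.g. a tool altered the mapping);
# the model starts at 1.0 and converges toward 0.5 from prediction errors.
model = ForwardModel(gain=1.0, lr=0.1)
pos, true_gain = 0.0, 0.5
for _ in range(100):
    command = 1.0
    observed = pos + true_gain * command
    model.update(pos, command, observed)
    pos = observed
print(f"adapted gain = {model.gain:.3f}")
```

The point of the sketch is the adaptation loop, not the linear form: the same predict-compare-update cycle underlies richer forward models in motor control and robotics.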
Objects, words, and actions. Some reasons why embodied models are badly needed in cognitive psychology
In the present chapter we report experiments on the relationships between visual objects and actions, and between words and actions. Results show that seeing an object activates motor information, and that language, too, is grounded in perceptual and motor systems. They are discussed within the framework of embodied cognitive science. We argue that models able to reproduce the experiments should be embodied organisms, whose brain is simulated with neural networks and whose body is as similar as possible to the human body. We also claim that embodied models are badly needed in cognitive psychology, as they could help to solve some open issues. Finally, we discuss potential implications of the use of embodied models for embodied theories of cognition.
Divisions Within the Posterior Parietal Cortex Help Touch Meet Vision
The parietal cortex is divided into two major functional regions: the anterior parietal cortex, which includes primary somatosensory cortex, and the posterior parietal cortex (PPC), which includes the rest of the parietal lobe. The PPC contains multiple representations of space. In Dijkerman and de Haan’s (see record 2007-13802-022) model, higher spatial representations are separate from PPC functions. This model should be developed further so that the functions of the somatosensory system are integrated with specific functions within the PPC and higher spatial representations. Through this further specification of the model, one can make better predictions regarding functional interactions between the somatosensory and visual systems.
Central role of somatosensory processes in sexual arousal as identified by neuroimaging techniques
Research on the neural correlates of sexual arousal is a growing field in affective neuroscience. A new approach studying the correlation between the hemodynamic cerebral response and the autonomic genital response has enabled distinct brain areas to be identified according to their role in inducing penile erection, on the one hand, and in representing penile sensation, on the other.
The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics
The topic of vision-based grasping is being widely studied using various techniques and with different goals in humans and in other primates. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view of the subject. A detailed description of the principal sensorimotor processes and the brain areas involved in them is provided from a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics
This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.