Motor processes in mental rotation
Much indirect evidence supports the hypothesis that transformations of mental images are at least in part guided by motor processes, even in the case of images of abstract objects rather than of body parts. For example, rotation may be guided by processes that also prime one to see the results of a specific motor action. We directly test this hypothesis by means of a dual-task paradigm in which subjects perform the Cooper-Shepard mental rotation task while executing an unseen motor rotation in a given direction and at a previously learned speed. Four results support the inference that mental rotation relies on motor processes. First, motor rotation that is compatible with mental rotation results in faster response times and fewer errors in the imagery task than when the two rotations are incompatible. Second, the angle through which subjects rotate their mental images and the angle through which they rotate a joystick handle are correlated, but only if the directions of the two rotations are compatible. Third, motor rotation modifies the classical inverted V-shaped mental rotation response time function, favoring the direction of the motor rotation; indeed, in some cases motor rotation even shifts the location of the minimum of this curve in the direction of the motor rotation. Fourth, the preceding effect is sensitive not only to the direction of the motor rotation but also to the motor speed: a change in the speed of motor rotation can correspondingly slow down or speed up the mental rotation.
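The inverted V-shaped response-time function mentioned above, and the way a concurrent motor rotation can shift its minimum, can be captured in a small illustrative model. This is a sketch only, not the paper's data or fitted parameters: `base_ms`, `slope_ms_per_deg`, and `shift_deg` are hypothetical values chosen for illustration.

```python
# Illustrative only (not the authors' data): classic mental-rotation response
# times grow linearly with the shortest angular distance from upright,
# producing an inverted-V function of stimulus orientation. A compatible
# motor rotation is modeled here as displacing the orientation at which
# that minimum occurs.

def rotation_rt(angle_deg, base_ms=500.0, slope_ms_per_deg=3.0, shift_deg=0.0):
    """Predicted RT for a stimulus at angle_deg; shift_deg displaces the
    minimum of the V in the direction of the motor rotation."""
    # Shortest signed angular distance to the (possibly shifted) minimum.
    dist = abs((angle_deg - shift_deg + 180) % 360 - 180)
    return base_ms + slope_ms_per_deg * dist

print(rotation_rt(0))                  # minimum at upright: 500.0 ms
print(rotation_rt(180))                # peak of the V: 1040.0 ms
print(rotation_rt(20, shift_deg=20))   # shifted minimum: 500.0 ms
```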
Combined Induction of Rubber-Hand Illusion and Out-of-Body Experiences
The emergence of self-consciousness depends on several processes: those of body ownership, attributing self-identity to the body, and those of self-location, localizing our sense of self. Studies of phenomena like the rubber-hand illusion (RHi) and the out-of-body experience (OBE) investigate these processes, respectively for representations of a body part and of the full body. It is usually supposed that the RHi targets only processes related to body-part representations, while the OBE relates only to full-body representations. The fundamental question of whether the body-part and full-body illusions relate to each other is nevertheless insufficiently investigated. In search of a link between body-part and full-body illusions in the brain, we developed a behavioral task combining adapted versions of the RHi and the OBE, and investigated the role of sensory and motor cues in this putative link. We established a spatial dissociation between visual and proprioceptive feedback of a hand perceived through virtual reality, at rest or in action. Two experimental measures were introduced: one for the body-part illusion, the proprioceptive drift of the perceived localization of the hand, and one for the full-body illusion, the shift in subjective straight-ahead (SSA). In both the rest and action conditions, we observed that the proprioceptive drift of the left hand and the shift in SSA toward the manipulation side are equivalent. The combined effect depended on the manipulation of the visual representation of body parts, ruling out any main or even modulatory role for the relevant motor programs. Our study demonstrates for the first time that there is a systematic relationship between the body-part illusion and the full-body illusion, as shown by our measures. This suggests a link between the brain's representations of a body part and of the full body, and consequently a common mechanism underpinning both forms of ownership and self-location.
Integration of navigation and action selection functionalities in a computational model of cortico-basal ganglia-thalamo-cortical loops
This article describes a biomimetic control architecture affording an animat both action selection and navigation functionalities. It satisfies the survival constraint of an artificial metabolism and supports several complementary navigation strategies. It builds upon an action selection model based on the basal ganglia of the vertebrate brain, using two interconnected cortico-basal ganglia-thalamo-cortical loops: a ventral one concerned with appetitive actions and a dorsal one dedicated to consummatory actions. The performance of the resulting model is evaluated in simulation. The experiments assess the prolonged survival permitted by the use of high-level navigation strategies and the complementarity of navigation strategies in dynamic environments. The correctness of the behavioral choices in situations of antagonistic or synergetic internal states is also tested. Finally, the modelling choices are discussed with regard to their biomimetic plausibility, while the experimental results are assessed in terms of animat adaptivity.
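The core idea of basal-ganglia-inspired action selection, picking the action whose salience (drive computed from internal physiological variables) is highest, can be sketched in a few lines. This is a hypothetical simplification, not the authors' model: the state variables, weights, and action names below are invented for illustration.

```python
# Minimal sketch (hypothetical, not the paper's architecture): winner-take-all
# action selection driven by the deficits of an artificial metabolism,
# loosely inspired by basal-ganglia disinhibition of the most salient action.

def salience(internal_state, weights):
    """Salience of each action = weighted sum of physiological deficits."""
    return {action: sum(w * internal_state[var] for var, w in ws.items())
            for action, ws in weights.items()}

def select_action(saliences):
    """Winner-take-all: disinhibit (select) the most salient action."""
    return max(saliences, key=saliences.get)

# Deficits of an artificial metabolism (1.0 = maximal need) - assumed values.
state = {"energy_deficit": 0.8, "water_deficit": 0.3}

# 'seek_food' stands for an appetitive (navigation) action handled by the
# ventral loop, 'drink' for a consummatory action handled by the dorsal loop.
weights = {
    "seek_food": {"energy_deficit": 1.0},
    "drink":     {"water_deficit": 1.0},
}

s = salience(state, weights)
print(select_action(s))  # energy deficit dominates -> 'seek_food'
```

A real model of this kind would add mutual inhibition between channels and feedback through the thalamo-cortical loop; the winner-take-all shown here is only the end result of that competition.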
On the mechanical contribution of head stabilization to passive dynamics of anthropometric walkers
During steady gait, humans stabilize their head around the vertical orientation. While there are sensori-cognitive explanations for this phenomenon, its mechanical effect on the body dynamics remains unexplored. In this study, we take advantage of the similarities that human steady gait shares with the locomotion of passive dynamics robots. We introduce a simplified anthropometric D model to reproduce a broad range of walking dynamics. In a previous study, we showed heuristically that the presence of a stabilized head-neck system significantly influences the dynamics of walking. This paper gives new insights that lead to understanding this mechanical effect. In particular, we introduce an original cart upper-body model that allows a better understanding of the mechanical benefit of head stabilization during walking, and we study how this effect is sensitive to the choice of control parameters.
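The mechanical intuition behind passive-dynamics walking can be illustrated with the simplest possible model: the stance leg as an inverted pendulum. The code below is a crude sketch under stated assumptions, not the paper's cart upper-body model; `l`, the initial conditions, and the 30-degree fall threshold are invented for illustration, with the effective pendulum length standing in for where the body's mass (including a stabilized head) is concentrated.

```python
import math

# Hypothetical illustration (not the authors' model): during single support,
# a passive walker's stance leg falls forward like an inverted pendulum,
#     theta'' = (g / l) * sin(theta).
# Mass carried higher (e.g. a stabilized head) lengthens the effective
# pendulum l, slowing the forward fall.

def simulate_step(l, theta0=0.05, omega0=0.4, g=9.81, dt=1e-3):
    """Integrate the inverted pendulum (explicit Euler) until the leg has
    fallen 30 degrees forward; return the elapsed time, a crude proxy for
    step duration."""
    theta, omega, t = theta0, omega0, 0.0
    while theta < math.radians(30):
        omega += (g / l) * math.sin(theta) * dt
        theta += omega * dt
        t += dt
    return t

# A longer effective pendulum (mass carried higher) falls more slowly,
# i.e. the step takes longer.
print(simulate_step(l=0.9) < simulate_step(l=1.1))  # True
```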
Steering a humanoid robot by its head
We present a novel method of guiding a humanoid robot, including stepping, by allowing a user to move its head. The motivation behind this approach comes from research in the field of human neuroscience: in human locomotion, it has been found that the head plays a very important role in guiding and planning motion. We use this idea to generate humanoid whole-body motion derived purely from moving the head joint. The input to move the head joint is provided by a user via a 6D mouse. The algorithm presented in this study judges when further head movement would lead to instability, and then generates stepping motions to stabilize the robot. By giving the software the autonomy to decide when and where to step, the user can simply steer the robot's head (via visual feedback) without worrying about stability. We illustrate our results with experiments conducted in simulation, as well as on our robot, HRP2.
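The decision logic described above, follow the head command while balance permits, otherwise trigger a step, can be sketched schematically in one dimension. This is a hypothetical toy, not HRP2's controller: the support-polygon width, the head-to-CoM gain, and the function names are all assumptions made for illustration.

```python
# Schematic sketch (hypothetical, not the HRP2 controller): track the head
# command while the projected center of mass (CoM) stays inside the support
# polygon; otherwise trigger a step to re-establish stability.

SUPPORT_HALF_WIDTH = 0.1   # half-extent of the support polygon (m), assumed
HEAD_TO_COM_GAIN = 0.5     # how head displacement shifts the CoM, assumed

def follow_head(head_cmd, foot_pos):
    """head_cmd: commanded head displacement (m) relative to the foot.
    Returns (new_foot_position, action)."""
    com = foot_pos + HEAD_TO_COM_GAIN * head_cmd
    if abs(com - foot_pos) <= SUPPORT_HALF_WIDTH:
        return foot_pos, "track"   # whole-body motion follows the head
    return com, "step"             # step so the foot lands under the new CoM

print(follow_head(0.1, 0.0))  # small head motion: (0.0, 'track')
print(follow_head(0.5, 0.0))  # large head motion: (0.25, 'step')
```

In the real system this test would be a dynamic stability criterion over the full support polygon rather than a static 1D threshold, but the control flow, track until unstable, then step, is the same.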
The autoscopic flying avatar: a new paradigm to study bilocated presence in mixed reality
This position paper presents the project "Becoming Avatar", which deals with avatarial immersion [1] addressed through an interdisciplinary experimental approach. Its goal, at the crossroads of image creation and interactive technology, virtual reality, neurophysiology, and information and communication sciences, is to develop a device and a media scenario that support the hypothesis of a split state and objectify the situation of bilocation [2]: being present both here, in front of the screen, and over there, beyond the screen, as shown by empirical studies of video games and by artists and metaverse explorers in Second Life. This type of state resonates in neurophysiology with the artificial "out-of-body experience" sensations produced in healthy subjects with the aid of virtual reality equipment.
The production includes the development of a scientific experimental facility for physiological measurements and a public installation allowing someone to live a non-ordinary experience of the split self. The feature common to both aspects of the project is the original idea of integrating video and 3D technology in order to experience a situation of flight in mixed reality. The subject is literally invited to "become an avatar": he sees his own image, filmed from behind, embedded in a synthetic world where he can move freely and experience different events. This autoscopic system of immersion was conceived in 2012 by E. Pereny and reworked in 2013-2014 with Pr A. Berthoz and E.A. Amato, to be developed and finalized with N. Galinotti and G. Gorisse, with jam sessions involving students.
How the Learning Path and the Very Structure of a Multifloored Environment Influence Human Spatial Memory
Few studies have explored how humans memorize landmarks in complex multifloored buildings. They have observed that participants memorize an environment either by floors or by vertical columns, influenced by the learning path. However, the influence of the building's actual structure is not yet known. In order to investigate this influence, we conducted an experiment using an object-in-place protocol in a cylindrical building, to contrast with previous experiments which used rectilinear environments. Two groups of 15 participants were taken on a tour with a first-person perspective through a virtual cylindrical three-floored building. They followed either a route discovering floors one at a time, or a route discovering columns (by simulated lifts across floors). They then underwent a series of trials in which they viewed a camera movement reproducing either a segment of the learning path (familiar trials) or a shortcut relative to the learning trajectory (novel trials). We observed that regardless of the learning path, participants better memorized the building by floors, and only participants who had discovered the building by columns also memorized it by columns. This expands on previous results obtained in a rectilinear building, where the learning path favoured the memory of its horizontal and vertical layout. Taken together, these results suggest that both the learning mode and an environment's structure influence the spatial memory of complex multifloored buildings.
Driver trust and reliance on a navigation system: Effect of graphical display
The present study investigates the influence of an in-car navigation system's graphical appearance on driver trust in, and reliance on, the system. Two navigation systems were used: one with a realistic interface and one with a symbolic interface. During driving sessions on a simulator, the systems committed some guidance incoherencies with respect to road signs present in the virtual environment. Subjects' trust in and reliance on the navigation systems were measured and compared between the two systems. Results showed a higher level of trust for the realistic-appearance system than for the symbolic one throughout the experiment. The presence of incoherencies decreased the trust level for both systems, but without any significant difference between them. No difference in reliance on the systems was found, but two groups of subjects were identified: one group relied heavily on both navigation systems' indications when incoherencies occurred, whereas the other group did not. This study highlights the influence of subjective factors, such as a system's graphical appearance, on user trust. Further experiments using a modified experimental setup may be needed to analyze precisely the influence on user reliance.