16 research outputs found

    Does Proprioception Influence Human Spatial Cognition? A Study on Individuals With Massive Deafferentation.

    When navigating a spatial environment, or when hearing it described, we can develop a mental model that may be represented in the central nervous system in different coordinate systems, such as an egocentric or allocentric reference frame. How sensory experience influences the preferred reference frame has been studied with particular interest in the role of vision. The present study investigated the influence of proprioception on human spatial cognition. To do so, we compared the ability to form spatial models of two rare participants chronically deprived of proprioception (GL and IW) with that of healthy control participants. Participants listened to verbal descriptions of a spatial environment, and their ability to form and use a mental model was assessed with a distance-comparison task and a free-recall task. Given that the loss of proprioception has been suggested to specifically impair the egocentric reference frame, the deafferented individuals were expected to perform worse than controls when the spatial environment was described in an egocentric reference frame. Results revealed that in both tasks, one deafferented individual (GL) made more errors than controls while the other (IW) made fewer. On average, both GL and IW were slower to respond than controls, and reaction time was more variable for IW. Additionally, we found that GL, but not IW, was impaired relative to controls in visuo-spatial imagery, as assessed with the Minnesota Paper Form Board Test. Overall, the main finding of this study is that proprioception can influence the time needed to use spatial representations, while other factors, such as visuo-spatial abilities, can influence the capacity to form accurate spatial representations.
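    For readers unfamiliar with the distinction the abstract relies on, the two reference frames differ only in where the origin sits: allocentric coordinates are anchored to the environment, egocentric coordinates to the observer's body and facing direction. The Python sketch below is purely illustrative and not part of the study's methods; the function name, axis conventions, and example values are our own assumptions.

```python
import math

def allocentric_to_egocentric(landmark, observer, heading):
    """Re-express a landmark's map (allocentric) position in
    observer-centred (egocentric) terms: metres ahead of and to the
    right of the observer.

    landmark, observer -- (x, y) pairs in a shared world frame
    heading            -- observer's facing direction, radians CCW from +x
    """
    dx, dy = landmark[0] - observer[0], landmark[1] - observer[1]
    ahead = dx * math.cos(heading) + dy * math.sin(heading)  # along the facing axis
    right = dx * math.sin(heading) - dy * math.cos(heading)  # 90 degrees clockwise from it
    return ahead, right

# A fountain 3 m north of an observer facing north (heading = pi/2)
# lies 3 m ahead; the same fountain is purely "to the left" once the
# observer turns to face east (heading = 0).
print(allocentric_to_egocentric((0.0, 3.0), (0.0, 0.0), math.pi / 2))  # ~(3.0, 0.0)
print(allocentric_to_egocentric((0.0, 3.0), (0.0, 0.0), 0.0))          # ~(0.0, -3.0)
```

    The two printed results make the frame dependence concrete: the allocentric position never changes, but the egocentric description changes as soon as the observer turns.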

    Gestural auditory and visual interactive platform

    This paper introduces GAVIP, an interactive and immersive platform that allows audio-visual virtual objects to be controlled in real time by physical gestures, with a high degree of intermodal coherency. The focus is on two scenarios exploring the interaction between a user and the audio, visual, and spatial synthesis of a virtual world. The platform can be seen as an extended virtual musical instrument that supports interaction across three modalities: audio, visual, and spatial. Intermodal coherency is thus of particular importance in this context. The possibilities and limitations offered by the two developed scenarios are discussed, and future work is presented.
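    As a toy illustration of the intermodal coherency the abstract emphasises (this is a minimal sketch, not GAVIP's actual architecture or API), one simple way to keep modalities coherent is to derive the audio, visual, and spatial parameters from a single shared gesture feature, so they cannot drift apart. All function names and parameter ranges below are assumptions for illustration.

```python
def map_gesture(hand_height_norm):
    """Map one normalised gesture feature (e.g. tracked hand height in
    [0, 1]) to coherent audio, visual, and spatial parameters."""
    h = min(max(hand_height_norm, 0.0), 1.0)  # clamp to the valid range
    audio = {"pitch_hz": 110.0 * (2.0 ** (h * 3.0))}  # 110 Hz up to 880 Hz
    visual = {"scale": 0.5 + 1.5 * h}                 # object grows as pitch rises
    spatial = {"elevation_deg": -30.0 + 60.0 * h}     # sound source rises with the hand
    return audio, visual, spatial

# Mid-height gesture: all three modalities sit at their midpoints together.
print(map_gesture(0.5))
```

    Because every modality is a function of the same control value, a gesture that raises the pitch necessarily also enlarges the object and elevates the sound source, which is one plausible reading of "intermodal coherency" in this context.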