Space, time and motion in a multisensory world

Abstract

When interacting with environmental events, humans acquire information from different senses and combine these inputs into a coherent representation of the world. The present doctoral thesis aims to investigate how humans represent space, time, and motion through the auditory and visual sensory modalities. It has been widely demonstrated that different sensory systems are predisposed to process different domains of representation: hearing prevails in representing the time domain, whereas vision is the most reliable sense for processing the space domain. Given this strong link between sensory modality and domain of representation, one objective of this thesis is to deepen our knowledge of the neural organization of multisensory spatial and temporal skills in healthy adults. In addition, using blindness as a model to unravel the role of vision in the development of spatio-temporal abilities, this thesis explores the interaction of the spatial and temporal domains in the acoustic motion perception of early blind individuals. The interplay between space and time has also been explained as a consequence of humans acting in the surrounding environment, since carrying out goal-directed motor behaviors requires associating the spatial and temporal information about a target within a shared mental map. In this regard, the present project also asks how the brain processes the spatio-temporal cues of external events when it comes to manually intercepting moving objects with one hand. Finally, in light of the above results, this dissertation includes the development of a novel portable device, named MultiTab, for the behavioral evaluation of spatial, temporal, and motor-response processing through the visual and acoustic sensory modalities.

For the purposes of this thesis, four methodological approaches were employed: i) the electroencephalography (EEG) technique, to explore the cortical activation associated with multisensory spatial and temporal tasks; ii) psychophysical methods, to measure the relationship between moving stimuli and the acoustic speed perception of blind and sighted individuals; iii) motion capture techniques, to measure movement indices during an object-interception task; and iv) the design and technical-behavioral validation of a new portable device.

The studies in this dissertation indicate the following results. First, this thesis highlights an early cortical gain modulation of sensory areas that depends on the domain of representation to be processed, with auditory areas mainly involved in the multisensory processing of temporal inputs and visual areas in that of spatial inputs. Moreover, for the spatial domain specifically, the neural modulation of visual areas is also influenced by the kind of spatial layout in which the multisensory stimuli are presented. Second, this project shows that the lack of vision affects the ability to process the speed of moving sounds by altering how blind individuals make use of the sounds' temporal features. This result suggests that visual experience in the first years of life is a crucial factor when dealing with combined spatio-temporal information. Third, the data of this thesis demonstrate that typically developing individuals, when manually intercepting a moving object with one hand, take the object's spatio-temporal cues into account, adjusting their interceptive movements according to the object's speed. Finally, the design and validation of MultiTab show its usefulness for evaluating multisensory processing, such as the manual localization of spatialized audiovisual stimuli. Overall, the findings of this thesis contribute to a more in-depth picture of how the human brain represents space, time, and motion through different senses. Moreover, they offer promising directions for exploring novel technological methods for the assessment and training of these dimensions in typical and atypical populations.
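As an illustration of the kind of psychophysical analysis mentioned in point ii) above, the following minimal Python sketch fits a cumulative-Gaussian psychometric function to speed-discrimination responses. All data values, parameter estimates, and units here are hypothetical assumptions for illustration only; they do not reproduce the procedure or results of the thesis.

```python
# Minimal sketch: fitting a psychometric function to hypothetical
# speed-discrimination data (proportion of "comparison faster" responses
# as a function of comparison speed). Illustrative values only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(speed, pse, jnd):
    """Cumulative Gaussian: PSE at the 50% point, JND as the slope parameter."""
    return norm.cdf(speed, loc=pse, scale=jnd)

# Hypothetical comparison speeds (deg/s) and response proportions.
speeds = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
p_faster = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

# Estimate the point of subjective equality (PSE) and
# the just-noticeable difference (JND) from the responses.
(pse, jnd), _ = curve_fit(psychometric, speeds, p_faster, p0=[5.0, 1.0])
print(f"PSE = {pse:.2f} deg/s, JND = {jnd:.2f} deg/s")
```

In such an analysis, differences in the fitted PSE and JND between groups or conditions would be taken as indices of how perceived speed and discrimination precision vary; the specific comparisons reported in the thesis are described in the chapters that follow.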
