172 research outputs found

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception does not emerge from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The thesis's main aim is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in case of visual impairment. Chapter 1 summarizes the main research findings related to the role of vision and multisensory experience in spatial development. Overall, these findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space from a perspective different from that of our own body. It might therefore be argued that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective.
Chapter 2 presents original studies carried out during my Ph.D. to investigate the mechanisms underpinning spatial development and to compare the spatial performance of individuals with altered and typical visual experience, i.e., visually impaired and sighted individuals. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in congenitally blind or blindfolded individuals respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience in spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation process of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of these new devices provides the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation. Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment for the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of the novel experimental tools to assess spatial competencies in response to unisensory and multisensory events and to train the residual sensory modalities within a multisensory rehabilitation framework.

    Coaching Imagery to Athletes with Aphantasia

    We administered the Plymouth Sensory Imagery Questionnaire (Psi-Q), which tests multi-sensory imagery, to athletes (n=329) from 9 different sports to identify poor/aphantasic imagers (baseline scores <4.2/10), with the aim of subsequently enhancing their imagery ability. The low-imagery sample (n=27) was randomly split into two groups who received the intervention, Functional Imagery Training (FIT), either immediately or delayed by one month, at which point the delayed group was tested again on the Psi-Q. All participants were tested after FIT delivery and six months post intervention. The delayed group showed no significant change between baseline and the start of FIT delivery, but both groups' imagery scores improved significantly (p=0.001) after the intervention, and the improvement was maintained six months post intervention. This indicates that imagery can be trained, even in those who identify as having aphantasia (although one participant did not improve on visual scores), and that improvements are maintained in poor imagers. Follow-up interviews (n=22) on sporting application revealed that the majority now use imagery daily for process goals. Recommendations are given for ways to assess and train imagery in an applied sport setting.
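The screening step described above can be sketched in a few lines. This is a hypothetical illustration only: it assumes the Psi-Q yields several per-modality ratings on a 0-10 scale that are averaged into an overall score, with the 4.2/10 cutoff taken from the abstract; the data structure and function names are invented.

```python
# Hypothetical sketch of Psi-Q low-imagery screening (cutoff from the abstract;
# the per-modality rating format is an assumption for illustration).

def mean_psiq_score(ratings):
    """Average the per-modality imagery ratings (0-10 each)."""
    return sum(ratings) / len(ratings)

def is_low_imager(ratings, cutoff=4.2):
    """Flag poor/aphantasic imagers: overall Psi-Q score below the cutoff."""
    return mean_psiq_score(ratings) < cutoff

# Example: one athlete rated across seven sensory modalities
athlete = [3.0, 4.5, 2.0, 5.0, 3.5, 4.0, 3.0]
print(is_low_imager(athlete))  # mean ≈ 3.57, below 4.2, so True
```

Applying such a filter to all 329 athletes would yield the low-imagery subsample that then enters the randomized immediate/delayed FIT design.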

    From sensory perception to spatial cognition

    To interact with the environment, it is crucial to have a clear representation of space. Several findings have shown that the space around our body is split into several portions, which are differentially coded by the brain. Evidence of such a subdivision has been reported by studies on people affected by neglect, on space near (peripersonal) and far (extrapersonal) from the body, and on the space around specific portions of the body. Moreover, recent studies have shown that the sensory modalities are at the base of important cognitive skills. However, it is still unclear whether each sensory modality plays a different role in the development of cognitive skills in the several portions of space around the body. Recent work has shown that the visual modality is crucial for the development of spatial representation. This idea is supported by studies on blind individuals showing that visual information is fundamental for the development of auditory spatial representation. For example, blind individuals are unable to perform the spatial bisection task, a task that requires building an auditory spatial metric, a skill that sighted children acquire around 6 years of age. Based on this prior research, we hypothesize that if different sensory modalities play a role in the development of different cognitive skills, then we should find a clear correlation between the availability of a sensory modality and the associated cognitive skill. In particular, we hypothesize that visual information is crucial for the development of auditory space representation; if this is true, we should find different spatial skills between the front and back spaces. In this thesis, I provide evidence that the spaces around our body are differently influenced by the sensory modalities. Our results suggest that visual input has a pivotal role in the development of auditory spatial representation and that this applies only to the frontal space.
Indeed, sighted people are less accurate in spatial tasks only in the space where vision is not available (i.e., the back), while blind people show no differences between the front and back spaces. On the other hand, people tend to report sounds in the back space, suggesting that the role of hearing in alertness could be more important in the back than in the frontal space. Finally, we show that natural training, stressing the integration of audio-motor stimuli, can restore spatial cognition, opening new possibilities for rehabilitation programs. Spatial cognition is a well-studied topic. However, we think our findings fill the gap regarding how the different availability of sensory information across spaces drives the development of different cognitive skills in these spaces. This work is a starting point for understanding the strategies that the brain adopts to maximize its resources by processing as much information as possible in the most efficient way.

    The Role of Vision on Spatial Competence

    Several pieces of evidence indicate that visual experience during development is fundamental to acquiring long-term spatial capabilities. For instance, reaching abilities tend to emerge at 5 months of age in sighted infants, but only at around 10 months of age in blind infants. Moreover, other spatial skills, such as auditory localization and haptic orientation discrimination, tend to be delayed or impaired in visually impaired children, with a profound impact on the development of sighted-like perceptual and cognitive abilities. Here, we report an overview of studies showing that the lack of vision can interfere with the development of coherent multisensory spatial representations, and highlight the contribution of current research in designing new tools to support the acquisition of spatial capabilities during childhood.

    Seeing with ears: how we create an auditory representation of space with echoes and its relation with other senses

    Spatial perception is the capability that allows us to learn about the environment. All our senses are involved in creating a representation of the external world. When we create the representation of space we rely primarily on visual information, but it is the integration with the other senses that gives us a more global and truthful representation of it. While the influence of vision and the integration of the different senses in spatial perception have been widely investigated, many questions remain about the role of the acoustic system in space perception and how it can be influenced by the other senses. Answering these questions in healthy people can help us better understand whether the same "rules" apply to, for example, people who lost vision in the early stages of development. Understanding how spatial perception works in people blind from birth is essential for developing rehabilitative methodologies or technologies that help compensate for the lack of vision, since vision is the main source of spatial information. For this reason, one of the main scientific objectives of this thesis is to increase knowledge about auditory spatial perception in sighted and visually impaired people, thanks to the development of new tasks to assess spatial abilities. Moreover, I focus my attention on a recent investigative topic in humans, i.e. echolocation. Echolocation has great potential for improving spatial and navigation skills in people with visual disabilities. Several studies demonstrate how the use of this technique can be favorable in the absence of vision, both at the perceptual and at the social level. Based on the importance of echolocation, we developed tasks to test the abilities of novice users, and we put participants through an echolocation training to see how long it takes to master this technique (in simple tasks).
Instead of testing blind individuals, we decided to test the ability of novice sighted people, to see whether the technique is specific to blindness and whether it is possible to create a representation of space using echolocation.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149-164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
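The trial structure described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's actual code: the item eccentricity, the 0°/90° orientation set, and all names are invented; only the eight-item arrays, the 50% change probability, and the ±1° radial shift come from the abstract.

```python
# Minimal sketch of one one-shot CB trial (parameters other than N_ITEMS,
# the 50% change rate and the ±1 deg shift are assumptions for illustration).
import math
import random

N_ITEMS = 8       # rectangles per array (from the abstract)
ECC_DEG = 5.0     # assumed eccentricity of each item, in degrees
SHIFT_DEG = 1.0   # radial shift along the imaginary spoke (±1 deg)

def make_trial(shifted=False, rng=random):
    """Build the first/second displays for one trial."""
    # Place items on evenly spaced spokes radiating from central fixation.
    first = [{"angle": 2 * math.pi * i / N_ITEMS,
              "ecc": ECC_DEG,
              "orientation": rng.choice([0, 90])}
             for i in range(N_ITEMS)]
    second = [dict(item) for item in first]
    # On 50% of trials, one target rectangle flips orientation.
    change = rng.random() < 0.5
    if change:
        target = rng.randrange(N_ITEMS)
        second[target]["orientation"] = 90 - second[target]["orientation"]
    # Gestalt-control condition: every item moves ±1 deg along its spoke.
    if shifted:
        for item in second:
            item["ecc"] += rng.choice([-SHIFT_DEG, SHIFT_DEG])
    return first, second, change
```

Because the shift is applied to every item along its own spoke, the global configuration is disrupted while each item's spoke identity is preserved, which is what makes the condition a test of Gestalt grouping strategies.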

    The Perception/Action loop: A Study on the Bandwidth of Human Perception and on Natural Human Computer Interaction for Immersive Virtual Reality Applications

    Virtual Reality (VR) is an innovative technology which, in the last decade, has enjoyed widespread success, mainly thanks to the release of low-cost devices that have contributed to the diversification of its domains of application. In particular, the current work mainly focuses on the general mechanisms underlying the perception/action loop in VR, in order to improve the design and implementation of applications for training and simulation in immersive VR, especially in the context of Industry 4.0 and the medical field. On the one hand, we want to understand how humans gather and process all the information presented in a virtual environment, through the evaluation of the visual system's bandwidth. On the other hand, since the interface has to be a sort of transparent layer allowing trainees to accomplish a task without directing any cognitive effort to the interaction itself, we compare two state-of-the-art solutions for selection and manipulation tasks: a touch-based one, the HTC Vive controllers, and a touchless vision-based one, the Leap Motion. To this aim, we have developed ad hoc frameworks and methodologies. The software frameworks consist of VR scenarios in which the experimenter can choose the modality of interaction and the headset to be used and can set the experimental parameters, guaranteeing repeatable experiments and controlled conditions. The methodology includes the evaluation of performance, user experience and preferences, considering both quantitative and qualitative metrics derived from the collection and analysis of heterogeneous data, such as physiological and inertial sensor measurements, timings and self-assessment questionnaires. In general, VR has proved to be a powerful tool able to simulate specific situations in a realistic and involving way, eliciting the user's sense of presence without causing severe cybersickness, at least when interaction is limited to the peripersonal and near-action space.
Moreover, when designing a VR application, it is possible to manipulate its features in order to trigger, or avoid triggering, specific emotions, and to voluntarily create potentially stressful or relaxing situations. Considering the ability of trainees to perceive and process information presented in an immersive virtual environment, the results show that, when people are given enough time to build a gist of the scene, they are able to recognize a change with 0.75 accuracy when up to 8 elements are in the scene. For interaction, instead, when selection and manipulation tasks do not require fine movements, the controllers and the Leap Motion ensure comparable performance; whereas, when tasks are complex, the former solution turns out to be more stable and efficient, also because the visual and audio feedback provided as a substitute for the haptic one does not substantially improve performance in the touchless case.

    The role of sensorimotor incongruence in pathological pain


    Peripersonal space representation in the first year of life: a behavioural and electroencephalographic investigation of the perception of unimodal and multimodal events taking place in the space surrounding the body

    In my PhD research project, I wanted to investigate infants' representation of the peripersonal space, which is the portion of the environment between the self and the others. In the last three decades, research has provided evidence on newborns' and infants' perception of their own bodies and of other individuals, whereas few studies have investigated infants' perception of the portion of space where they can interact with both others and objects, namely the peripersonal space. Considering the importance of the peripersonal space, especially in light of its defensive and interactive functions, I decided to investigate the development of its representation, focusing on two aspects. On one side, I wanted to study how newborns and infants process the space around them: whether they differentiate between near and far space, possibly perceiving and integrating depth cues across sensory modalities, and when and how they start to respond to different movements occurring in the space surrounding their bodies. On the other side, I was interested in understanding whether, already at birth, the peripersonal space can be considered a delimited portion of space with special characteristics and, relatedly, whether its boundaries can be determined. To address the first question, I investigated newborns' and infants' looking behaviour in response to visual and audio-visual stimuli depicting different trajectories taking place in the space immediately surrounding their body. Taken together, the results of these studies demonstrated that humans show, from the earliest stages of their development, a rudimentary processing of the space surrounding them. Newborns seemed, in fact, to already differentiate the space around them, through an efficient discrimination of different moving trajectories and a visual preference for those directed towards their own body, possibly due to their higher adaptive relevance.
They also seemed to integrate multimodal, audio-visual information about stimuli moving in the near space, showing facilitated processing of congruent audio-visual approaching stimuli. Furthermore, the results of these studies could help understand how the integration of multimodal stimuli with an adaptive valence develops during infancy. When newborns and infants were presented with unimodal, visual stimuli, they all directed their visual preferences to the stimuli moving towards their bodies. Conversely, their pattern of looking times was more complex when they were presented with congruent and incongruent audio-visual stimuli. Right after birth, infants showed a spontaneous visual preference for congruent audio-visual stimuli, which was challenged by a similarly strong visual preference for adaptively important visual stimuli moving towards their bodies. The looking behaviour of 5-month-old infants, instead, seemed to be driven only by a spontaneous preference for multimodally congruent stimuli, i.e. stimuli depicting motion along the same trajectory, irrespective of the adaptive value of the information conveyed by either of the two sensory components of the stimulus. Nine-month-old infants, finally, seemed to flexibly combine multisensory integration principles with the necessity of directing their attention to ethologically salient stimuli, as shown by the fact that their visual preference for unexpected, incongruent audio-visual stimuli was challenged by the simultaneous presence of adaptively relevant stimuli. As with newborns, presenting 9-month-old infants with the two categories of preferred stimuli simultaneously led to the absence of a visual preference. Within my project I also investigated the electroencephalographic correlates of the processing of unimodal, visual and auditory, stimuli depicting different trajectories in a sample of 5-month-old infants.
The results seemed to provide evidence in support of the role of the primary sensory cortices in the processing of crossmodal stimuli. Furthermore, they seemed to support the possibility that infants' brains allocate, already during the earliest stages of processing, different amounts of attention to stimuli with different adaptive valence. Two further studies addressed my second question, namely whether, already at birth, the peripersonal space can be considered a delimited portion of space with special characteristics and whether its boundaries can be determined. In these studies I measured newborns' saccadic reaction times (RTs) to tactile stimuli presented simultaneously with a sound perceived at different distances from their body. The results showed that newborns' RTs were modulated by the perceived position of the sound, and that this modulation was very similar to that shown by adults, suggesting that the boundary of newborns' peripersonal space can be identified with the perceived sound position at which the drop in RTs occurred. This suggests that, at birth, the space immediately surrounding the body is already invested with a special salience and characterised by a more efficient integration of multimodal stimuli. As a consequence, it might be considered a rudimentary representation of the peripersonal space, possibly serving, as a working representation of space, the early interactions between newly born humans and their environment. Overall, these findings provide a first understanding of how humans start to process the space surrounding them, which, importantly, is the space linking them with others and the space where their first interactions will take place.
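The boundary-estimation logic described above (locating the sound distance at which RTs drop) can be sketched as follows. All distances and RT values here are invented for illustration; the function simply places the boundary at the midpoint of the adjacent pair of sound distances with the largest RT change, which is one plausible reading of the procedure, not the authors' actual analysis.

```python
# Illustrative sketch: estimate a peripersonal-space boundary as the point
# where tactile RTs change most sharply across perceived sound distances.
# Data values are invented; this is not the study's analysis code.

def pps_boundary(distances, rts):
    """Midpoint of the adjacent distance pair with the largest RT change.

    `distances` (near to far, e.g. cm) and `rts` (e.g. ms) must be the
    same length and ordered together.
    """
    i = max(range(len(rts) - 1), key=lambda k: abs(rts[k + 1] - rts[k]))
    return (distances[i] + distances[i + 1]) / 2

# Example: RTs are fast while the sound is near the body, then jump
# once the sound is perceived beyond the boundary.
distances = [20, 40, 60, 80, 100]   # perceived sound distance, cm
rts = [310, 315, 360, 365, 368]     # saccadic RT to the touch, ms
print(pps_boundary(distances, rts))  # largest jump between 40 and 50 cm → 50.0
```

With real data one would typically fit a sigmoid to the RT-by-distance curve and take its inflection point, but the steepest-change heuristic conveys the same idea in a few lines.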

    Tool-use: An open window into body representation and its plasticity

    Martel M, Cardinali L, Roy AC, Farnè A. Tool-use: An open window into body representation and its plasticity. Cognitive Neuropsychology. 2016;33(1-2):82-101. Over the last decades, scientists have questioned the origin of the exquisite human mastery of tools. Seminal studies in monkeys, healthy participants and brain-damaged patients have primarily focused on the plastic changes that tool-use induces on spatial representations. More recently, we focused on the modifications tool-use must exert on the sensorimotor system and highlighted plastic changes at the level of the body representation used by the brain to control our movements, i.e., the Body Schema. Evidence is emerging that tool-use also affects more visually and conceptually based representations of the body, such as the Body Image. Here we offer a critical review of the way different tool-use paradigms have been, and should be, used to try to disentangle the critical features that are responsible for tool incorporation into different body representations. We conclude that tool-use may offer a very valuable means to investigate high-order body representations and their plasticity.