
    Cognitive map formation supported by auditory, haptic, and multimodal information in persons with blindness

    For efficient navigation, the brain needs to adequately represent the environment in a cognitive map. In this review, we sought to give an overview of the literature on cognitive map formation based on non-visual modalities in persons with blindness (PWBs) and sighted persons. The review focuses on the auditory and haptic modalities, including research that combines multiple modalities and real-world navigation. Furthermore, we address the implications of route and survey representations. Taken together, PWBs as well as sighted persons can build up cognitive maps based on non-visual modalities, although accuracy sometimes differs somewhat between PWBs and sighted persons. We provide some speculations on how to deploy information from different modalities to support cognitive map formation. Furthermore, PWBs and sighted persons seem to be able to construct route as well as survey representations. PWBs can experience difficulties building up a survey representation, but this is not always the case, and research suggests that they can acquire this ability with sufficient spatial information or training. We discuss possible explanations of these inconsistencies.

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception does not emerge from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The main aim of this thesis is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in case of visual impairment. Chapter 1 summarizes the main research findings related to the role of vision and multisensory experience in spatial development. Overall, such findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space according to a perspective different from our body. Therefore, it might be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective.
Chapter 2 presents original studies carried out by me as a Ph.D. student to investigate the developmental mechanisms underpinning spatial development and to compare the spatial performance of individuals with affected and typical visual experience, respectively visually impaired and sighted. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in the case of congenitally blind or blindfolded individuals, respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience on spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation process of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of new devices will provide the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation. Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing attention on the consequences of visual impairment on the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential role of the novel experimental tools, validating their use to assess spatial competencies in response to unisensory and multisensory events and to train residual sensory modalities within a multisensory rehabilitation framework.

    The haptic perception of spatial orientations

    This review examines the isotropy of the perception of spatial orientations in the haptic system. It shows the existence of an oblique effect (i.e., a better perception of vertical and horizontal orientations than oblique orientations) in a spatial plane intrinsic to the haptic system, determined by gravitational cues and cognitive resources and defined in a subjective frame of reference. Similar results are observed from infancy to adulthood. In 3D space, the haptic processing of orientations is also anisotropic and seems to use both egocentric and allocentric cues. Taken together, these results reveal that the haptic oblique effect occurs when the sensorimotor traces associated with exploratory movements are represented more abstractly at a cognitive level.

    Cognitive map formation through tactile map navigation in visually impaired and sighted persons

    The human brain can form cognitive maps of a spatial environment, which can support wayfinding. In this study, we investigated cognitive map formation of an environment presented in the tactile modality, in visually impaired and sighted persons. In addition, we assessed the acquisition of route and survey knowledge. Ten persons with a visual impairment (PVIs) and ten sighted control participants learned a tactile map of a city-like environment. The map included five marked locations associated with different items. Participants subsequently estimated distances between item pairs, performed a direction pointing task, reproduced routes between items and recalled item locations. In addition, we conducted questionnaires to assess general navigational abilities and the use of route or survey strategies. Overall, participants in both groups performed well on the spatial tasks. Our results did not show differences in performance between PVIs and sighted persons, indicating that both groups formed an equally accurate cognitive map. Furthermore, we found that the groups generally used similar navigational strategies, which correlated with performance on some of the tasks, and acquired similar and accurate route and survey knowledge. We therefore suggest that PVIs are able to employ a route as well as a survey strategy if they have the opportunity to access route-like as well as map-like information, such as on a tactile map.

    Memory for sounds: novel technological solutions for the evaluation of mnestic skills

    Working memory (WM) plays a crucial role in helping individuals to perform everyday activities. The neural structures underlying this system continue to develop during infancy and reach maturity only late in development. Despite useful insights into visual memory mechanisms, audio-spatial memory has not been thoroughly investigated, especially in children and congenitally blind individuals. The main scientific objective of this thesis was to increase knowledge of spatial WM and imagery abilities in the auditory modality. We focused on how these skills change during typical development and on the consequences of early visual deprivation. Our first hypothesis was that the changes in WM functionality and spatial skills occurring in the early years of life influence the ability to remember and associate spatialized sounds or to explore and learn acoustic spatial layouts. Since vision plays a crucial role in spatial cognition (Thinus-Blanc and Gaunet, 1997), we expected blind individuals to encounter specific difficulties when asked to process and manipulate spatial information retained in memory, as already observed in the haptic modality (Cattaneo et al., 2008; Vecchi, 1998). Although some studies demonstrated the superior performance of the blind in various verbal-memory tasks (Amedi et al., 2003; Požár, 1982; Röder et al., 2001), very little is known about how they remember and manipulate acoustic spatial information. The investigation of auditory cognition often requires specially adapted hardware and software solutions rarely available on the market. For example, studying cognitive functions that involve auditory spatial information requires multiple acoustic spatial locations, such as numerous speakers or dedicated virtual acoustics. Thus, for the purposes of this thesis, we took advantage of novel technological solutions developed explicitly for delivering non-visual spatialized stimuli.
We worked on the software development of a vertical array of speakers (ARENA2D) and an audio-tactile tablet (Audiobrush), and we designed a system based on an acoustic virtual reality (VR) simulation. These novel solutions were used to adapt a validated clinical procedure (the Corsi block-tapping test) and a game (the card game Memory) to the auditory domain, so that they could also be performed by visually impaired individuals. Thanks to the technologies developed over these years, we were able to investigate these topics and observed that audio-spatial memory abilities are strongly affected by developmental stage and by the lack of visual experience.

    Making a stronger case for comparative research to investigate the behavioral and neurological bases of three-dimensional navigation

    The rich diversity of avian natural history provides exciting possibilities for comparative research aimed at understanding three-dimensional navigation. We propose some hypotheses relating differences in natural history to potential behavioral and neurological adaptations possessed by contrasting bird species. This comparative approach may offer unique insights into some of the important questions raised by Jeffery et al.

    How much spatial information is lost in the sensory substitution process? Comparing visual, tactile, and auditory approaches

    Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using the visual, auditory, and haptic SSDs respectively. The visual SSD significantly outperformed the auditory and tactile SSDs on vertical localisation, whereas for depth perception all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g. 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this ‘modality gap’ found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g. visuospatial into visual).
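    The visual-angle thresholds above can be related to linear size at the 1.2 m test distance with standard trigonometry. A minimal sketch of that conversion (an illustration only, not the study's analysis code; the modality labels pair each threshold with the device reported above):

    ```python
    import math

    def angle_to_size(angle_deg: float, distance_m: float) -> float:
        """Linear extent (metres) subtended by a visual angle at a given viewing distance."""
        return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

    # Vertical-localisation thresholds reported for the three SSDs, at 1.2 m
    for modality, angle in [("visual", 1.0), ("auditory", 14.0), ("haptic", 21.0)]:
        size_cm = 100 * angle_to_size(angle, 1.2)
        print(f"{modality}: {angle}\u00b0 subtends about {size_cm:.1f} cm at 1.2 m")
    ```

    For small angles this reduces to the familiar approximation size ≈ distance × angle (in radians), so the 1° visual threshold corresponds to roughly 2 cm of vertical displacement at arm's-length-plus distances.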

