
    The effect of musical expertise on the representation of space

    Consistent evidence suggests that pitch height is represented in a spatial format, along both a vertical and a horizontal axis. This spatial representation of pitch height results in response compatibility effects whereby high-pitch tones are preferentially associated with up/right responses and low-pitch tones with down/left responses (the Spatial-Musical Association of Response Codes, or SMARC, effect), with the strength of these associations depending on individuals’ musical skills. In this study we investigated whether listening to tones of different pitch affects the representation of external space, as assessed in a visual and haptic line bisection paradigm, in musicians and non-musicians. Low- and high-pitch tones affected bisection performance in musicians differently, both when pitch was relevant and when it was irrelevant to the task, and in both the visual and the haptic modality. No effect of pitch height was observed on the bisection performance of non-musicians. Moreover, our data show that musicians present a (supramodal) rightward bisection bias in both the visual and the haptic modality, extending previous findings limited to the visual modality and consistent with the idea that intense practice with musical notation and bimanual instrument training affects hemispheric lateralization.
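    To make the bisection measure concrete: a lateral bias of this kind is typically scored as the signed deviation of the marked point from the true midpoint, expressed as a percentage of line length. The following is a minimal illustrative sketch, not code from the study; the function name and the trial data are invented.

```python
# Hypothetical sketch of how a lateral bisection bias could be quantified.
# Positive scores indicate a rightward bias; all values are invented.
from statistics import mean

def bisection_bias(marked_mm, true_mid_mm, line_len_mm):
    """Signed deviation from the true midpoint, as % of line length."""
    return 100.0 * (marked_mm - true_mid_mm) / line_len_mm

# One simulated participant: (marked position, true midpoint, line length, pitch)
trials = [
    (126.0, 125.0, 250.0, "high"),
    (123.5, 125.0, 250.0, "low"),
    (127.2, 125.0, 250.0, "high"),
    (124.1, 125.0, 250.0, "low"),
]

for pitch in ("high", "low"):
    biases = [bisection_bias(m, t, l) for m, t, l, p in trials if p == pitch]
    print(f"{pitch}-pitch trials: mean bias = {mean(biases):+.2f}% of line length")
```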

    Virtual environments for the transfer of navigation skills in the blind: a comparison of directed instruction vs. video game based learning approaches

    For profoundly blind individuals, navigating in an unfamiliar building can represent a significant challenge. We investigated the use of an audio-based virtual environment called the Audio-based Environment Simulator (AbES), which can be explored to learn the layout of an unfamiliar, complex indoor environment. Furthermore, we compared two modes of interaction with AbES. In one group, blind participants implicitly learned the layout of a target environment while playing an exploratory, goal-directed video game. By comparison, a second group was explicitly taught the same layout by following a standard route with instructions provided by a sighted facilitator. As a control, a third group interacted with AbES while playing an exploratory, goal-directed video game; however, the environment they explored did not correspond to the target layout. Following interaction with AbES, a series of route navigation tasks was carried out in the virtual and physical building represented in the training environment to assess the transfer of acquired spatial information. We found that participants from both modes of interaction were able to transfer the spatial knowledge gained, as indexed by their successful route navigation performance. This transfer was not apparent in the control participants. Most notably, the game-based learning strategy was also associated with enhanced performance when participants were required to find alternate routes and shortcuts within the target building, suggesting that a ludic training approach may provide a more flexible mental representation of the environment. Furthermore, outcome comparisons between early and late blind individuals suggested that greater prior visual experience did not have a significant effect on overall navigation performance following training. Finally, performance did not appear to be associated with other factors of interest such as age, gender, and verbal memory recall. We conclude that highly interactive, immersive exploration of the virtual environment strongly engages a blind user to develop skills akin to positive near transfer of learning. Learning through a game-play strategy appears to confer certain behavioral advantages with respect to how spatial information is acquired and ultimately manipulated for navigation.
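    The alternate-route and shortcut tasks can be framed as path-finding over the building’s room-connectivity graph, which is one way to picture the flexible mental representation the authors describe. The sketch below is purely illustrative: the layout, room names, and search procedure are assumptions, not details of AbES.

```python
# Hypothetical sketch: the shortcut task viewed as shortest-path search
# over a room-connectivity graph. Layout and room names are invented.
from collections import deque

layout = {
    "lobby": ["hall_A", "stairwell"],
    "hall_A": ["lobby", "room_101", "hall_B"],
    "hall_B": ["hall_A", "room_102", "stairwell"],
    "stairwell": ["lobby", "hall_B"],
    "room_101": ["hall_A"],
    "room_102": ["hall_B"],
}

def shortest_route(start, goal):
    """Breadth-first search; returns the fewest-hops route as a list of rooms."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in layout[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route("room_101", "room_102"))
# ['room_101', 'hall_A', 'hall_B', 'room_102']
```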

    Action video game play and transfer of navigation and spatial cognition skills in adolescents who are blind

    For individuals who are blind, navigating independently in an unfamiliar environment represents a considerable challenge. Inspired by the rising popularity of video games, we have developed a novel approach to training navigation and spatial cognition skills in adolescents who are blind. The Audio-based Environment Simulator (AbES) is a software application that allows for the virtual exploration of an existing building, set in an action video game metaphor. Using this ludic approach to learning, we investigated the ability and efficacy of adolescents with early-onset blindness to acquire spatial information from the exploration of a target virtual indoor environment. Following game play, participants were assessed on their ability to transfer and mentally manipulate the acquired spatial information in a set of navigation tasks carried out in the real environment. Success in the transfer of navigation performance was markedly high, suggesting that interacting with AbES leads to the generation of an accurate spatial mental representation. Furthermore, there was a positive correlation between success in game play and navigation task performance. The role of virtual environments and gaming in the development of mental spatial representations is also discussed. We conclude that this game-based learning approach can facilitate the transfer of spatial knowledge and, further, can be used by individuals who are blind for the purposes of navigation in real-world environments.
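    The reported link between game play and navigation is a simple bivariate correlation; as an illustration only (the scores below are invented, not the study’s data), a Pearson r can be computed as:

```python
# Hypothetical sketch of the game-score vs. navigation-score correlation;
# the data points are invented for illustration.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

game_scores = [12, 18, 9, 22, 15, 20]          # e.g., goals reached in game
nav_scores  = [0.55, 0.80, 0.40, 0.95, 0.70, 0.85]  # real-world task success rate
print(f"r = {pearson_r(game_scores, nav_scores):.2f}")
```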

    Neuroplasticity Associated with Tactile Language Communication in a Deaf-Blind Subject

    A long-standing debate in cognitive neuroscience pertains to the innate nature of language development and the underlying factors that determine this faculty. We explored the neural correlates associated with language processing in a unique individual who is early blind, congenitally deaf, and possesses a high level of language function. Using functional magnetic resonance imaging (fMRI), we compared the neural networks associated with the tactile reading of words presented in Braille, Print on Palm (POP), and a haptic form of American Sign Language (haptic ASL, or hASL). With all three modes of tactile communication, identifying words was associated with robust activation within occipital cortical regions as well as posterior superior temporal and inferior frontal language areas (lateralized within the left hemisphere). In a normally sighted and hearing interpreter, identifying words through hASL was associated with left-lateralized activation of inferior frontal language areas; however, robust occipital cortex activation was not observed. Diffusion tensor imaging-based tractography revealed differences consistent with enhanced occipital-temporal connectivity in the deaf-blind subject. Our results demonstrate that, in the case of early onset of both visual and auditory deprivation, tactile-based communication is associated with an extensive cortical network implicating occipital as well as posterior superior temporal and frontal language areas. The cortical areas activated in this deaf-blind subject are consistent with cortical regions previously implicated in language. Finally, the resilience of language function in the context of early, combined visual and auditory deprivation may be related to enhanced connectivity between the relevant cortical areas.
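    Left-lateralization of activation is conventionally summarized with a laterality index, LI = (L − R) / (L + R), computed over left- and right-hemisphere activation measures. A minimal sketch with invented voxel counts follows; the study is not stated to report this exact computation.

```python
# Hypothetical sketch: a standard laterality index over activated-voxel
# counts, LI = (L - R) / (L + R); the counts below are invented.
def laterality_index(left_voxels, right_voxels):
    """+1 = fully left-lateralized, -1 = fully right-lateralized."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

print(f"LI = {laterality_index(420, 180):+.2f}")  # positive: left-lateralized
```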

    Perception and discrimination of real-life emotional vocalizations in early blind individuals

    Introduction: The capacity to understand others’ emotions and react accordingly is a key social ability. However, it may be compromised by a profound sensory loss that limits the contribution of the contextual cues (e.g., facial expression, gestures, body posture) normally available to interpret emotions expressed by others. In this study, we specifically investigated whether early blindness affects the capacity to interpret emotional vocalizations whose valence may be difficult to recognize without a meaningful context.
    Methods: We asked a group of early blind individuals (N = 22) and sighted controls (N = 22) to evaluate the valence and the intensity of spontaneous fearful and joyful non-verbal vocalizations.
    Results: Our data showed that emotional vocalizations presented alone (i.e., with no contextual information) are similarly ambiguous for blind and sighted individuals, but are perceived as more intense by the former, possibly reflecting their higher salience when visual experience is unavailable.
    Discussion: Our study contributes to a better understanding of how sensory experience shapes emotion recognition.
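    The group contrast on intensity ratings is a standard two-sample comparison; as a hedged illustration (the ratings and scale below are invented, and the paper’s actual statistical test is not specified here), a Welch’s t statistic can be computed as:

```python
# Hypothetical sketch of the blind vs. sighted comparison on intensity
# ratings using Welch's t statistic; all ratings are invented.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t: mean difference scaled by the unpooled standard error."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

blind_ratings   = [6.8, 7.2, 6.5, 7.0, 6.9, 7.4]  # hypothetical 1-9 scale
sighted_ratings = [5.9, 6.1, 6.4, 5.7, 6.0, 6.2]
print(f"Welch's t = {welch_t(blind_ratings, sighted_ratings):.2f}")
```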