
    Preservation and plasticity in the neural basis of numerical thinking in blindness

    Numerical reasoning pervades modern human culture and depends on a fronto-parietal network, a key node of which is the intraparietal sulcus (IPS). In this dissertation I investigate how visual experience shapes the cognitive and neural basis of numerical thinking by studying numerical cognition in congenitally blind individuals. In Chapter 2, I ask how the cognitive basis of numerical thinking is shaped by visual experience: I test whether the precision of approximate number representations develops normally in the absence of vision and whether the relationship between numerical approximation and math ability is preserved in congenital blindness. In Chapter 3, I ask how the neural basis of symbolic number reasoning is modified by visual experience by studying neural responses to symbolic math in congenitally blind individuals. This initial investigation revealed that the fronto-parietal number system is preserved in blindness but that some “visual” cortices are recruited for symbolic number processing. The following chapters unpack these two patterns: preservation and plasticity. In Chapter 4, I use resting-state data to ask whether functional connectivity with higher-cognitive networks is a potential mechanism by which “visual” cortices are reorganized in blindness. In Chapter 5, I work with individuals who became blind as adults to determine whether visual-cortex plasticity for numerical functions is possible in the adult cortex or whether it is restricted to sensitive periods in development. In Chapter 6, I investigate whether the IPS and the newly identified number-responsive “visual” area of congenitally blind individuals possess population codes that distinguish between different quantities. I find that the behavioral signatures of numerical reasoning are indistinguishable across congenitally blind and sighted groups and that the fronto-parietal number network, in particular the IPS, is preserved in the absence of vision. A dorsal occipital region showed the same functional profile as the IPS number system in congenitally blind individuals. Number-related plasticity was restricted to a sensitive period in development, as it was not observed in adult-onset blind individuals. Furthermore, in congenital blindness, sub-specialization of the “visual” cortex for math and language processing followed the functional connectivity patterns of “visual” cortex.

    Orthographic priming in Braille reading as evidence for task-specific reorganization in the ventral visual cortex of the congenitally blind

    The task-specific principle asserts that, following deafness or blindness, the deprived cortex is reorganized in a manner such that the task of a given area is preserved even though its input modality has been switched. Accordingly, tactile reading engages the ventral occipitotemporal cortex (vOT) in the blind in a similar way to regular reading in the sighted. Other studies, however, show that the vOT of the blind processes spoken sentence structure, which suggests that the task-specific principle might not apply to vOT. The strongest evidence for the vOT's engagement in sighted reading comes from orthographic repetition-suppression studies. Here, congenitally blind adults were tested in an fMRI repetition-suppression paradigm. Results reveal a double dissociation, with tactile orthographic priming in the vOT and auditory priming in general language areas. Reconciling our finding with other evidence, we propose that the vOT in the blind serves multiple functions, one of which, orthographic processing, overlaps with its function in the sighted.

    Neuroplasticity, neural reuse, and the language module

    What conception of mental architecture can survive the evidence of neuroplasticity and neural reuse in the human brain? In particular, what sorts of modules are compatible with this evidence? I aim to show how developmental and adult neuroplasticity, as well as evidence of pervasive neural reuse, force us to revise the standard conception of modularity and spell the end of a hardwired and dedicated language module. I argue from principles of both neural reuse and neural redundancy that language is facilitated by a composite of modules (or module-like entities), few if any of which are likely to be linguistically special, and that neuroplasticity provides evidence that (in key respects and to an appreciable extent) few if any of them ought to be considered developmentally robust, though their development does seem to be constrained by features intrinsic to particular regions of cortex (manifesting as domain-specific predispositions or acquisition biases). In the course of doing so, I articulate a schematically and neurobiologically precise framework for understanding modules and their supramodular interactions.

    Experience-dependent plasticity over short and long timescales

    The brain is constantly changing. Genetically specified developmental pathways interact with extrinsic factors including illness, injury and learning to shape the brain. This thesis presents two projects on experience-dependent plasticity over different timescales. Exerting its effect across years, deafness provides a model of long-term crossmodal plasticity. In the first part of this thesis I ask how deafness affects the thalamus. Diffusion-weighted imaging was used to segment the thalamus, and thalamo-cortical connections were traced with probabilistic tractography. Microstructural properties of the visual and frontal thalamic segmentations, and of thalamo-cortical tracts throughout the brain apart from the temporal thalamo-cortical tract, were altered. The neuroanatomical sequelae of deafness are thus evident throughout the brain. Deaf people have enhanced peripheral vision, facilitating a protective orienting mechanism when hearing cannot be relied upon. Widefield population receptive field (pRF) modeling with fMRI was used to examine the functional and structural properties of primary visual cortex. Deaf participants had enlarged pRF profiles and thinner cortex in peripheral visual regions, again emphasizing plasticity across many years. In the second part I examine plasticity over the course of days. Visuomotor transformations translate visual input into motor actions, and their neural instantiation might change with training. We used a pattern component model on fMRI data to reveal a gradient of visual to motor information from occipital to parietal to motor cortex. Strikingly, we observed motor coding in visual cortex and visual coding in motor cortex. More tentatively, our results suggest that during sensorimotor skill learning there is decreased dependence on visual cortex as motor cortex learns the novel visuomotor mapping.
In summary, I show crossmodal processing and plasticity, both long- and short-term, in regions previously considered not to exhibit these properties. This work emphasizes the contribution that computational neuroimaging can make to the field of experience-dependent plasticity.

    Understanding space by moving through it: neural networks of motion- and space processing in humans

    Humans explore the world by moving in it, whether moving their whole body as during walking or driving a car, or moving their arm to explore the immediate environment. During movement, self-motion cues arise from the sensorimotor system comprising vestibular, proprioceptive, visual and motor cues, which provide information about direction and speed of the movement. Such cues allow the body to keep track of its location while it moves through space. Sensorimotor signals providing self-motion information can therefore serve as a source for spatial processing in the brain. This thesis is an inquiry into human brain systems of movement and motion processing in a number of different sensory and motor modalities using functional magnetic resonance imaging (fMRI). By characterizing connections between these systems and the spatial representation system in the brain, this thesis investigated how humans understand space by moving through it. In the first study of this thesis, the recollection networks of whole-body movement were explored. Brain activation was measured during the retrieval of active and passive self-motion and retrieval of observing another person performing these tasks. Primary sensorimotor areas dominated the recollection network of active movement, while higher association areas in parietal and mid-occipital cortex were recruited during the recollection of passive transport. Common to both self-motion conditions were bilateral activations in the posterior medial temporal lobe (MTL). No MTL activations were observed during recollection of movement observation. Considering that on a behavioral level, both active and passive self-motion provide sufficient information for spatial estimations, the common activation in MTL might represent the common physiological substrate for such estimations. The second study investigated processing in the 'parahippocampal place area' (PPA), a region in the posterior MTL, during haptic exploration of spatial layout. 
The PPA is known to respond strongly to visuo-spatial layout. The study explored whether this region processes visuo-spatial layout specifically or spatial layout in general, independent of the encoding sensory modality. In cohorts of both sighted and blind participants, activation patterns in PPA were measured while participants haptically explored the spatial layout of model scenes or the shape of information-matched objects. In both sighted and blind individuals, PPA activity was greater during layout exploration than during object-shape exploration. While PPA activity in the sighted could also be caused by a transformation of haptic information into a mental visual image of the layout, two points speak against this: firstly, no increase in connectivity between the visual cortex and the PPA was observed, which would be expected if visual imagery took place; secondly, blind participants, who cannot resort to visual imagery, showed the same pattern of PPA activity. Together, these results suggest that the PPA processes spatial layout information independent of the encoding modality. The third and last study addressed error accumulation in motion processing at different levels of the visual system. Using novel analysis methods of fMRI data, possible links between physiological properties in hMT+ and V1 and inter-individual differences in perceptual performance were explored. A correlation between noise characteristics and performance score was found in hMT+ but not V1. Better performance correlated with greater signal variability in hMT+. Though neurophysiological variability is traditionally seen as detrimental to behavioral accuracy, the results of this thesis add to the increasing evidence suggesting the opposite: that under certain circumstances more efficient processing can be related to more noise in neurophysiological signals.
In summary, the results of this doctoral thesis contribute to our current understanding of motion and movement processing in the brain and its interface with spatial processing networks. The posterior MTL appears to be a key region for both self-motion and spatial processing. The results further indicate that physiological characteristics at the level of category-specific processing, but not of primary encoding, reflect behavioral judgments on motion. This thesis also makes methodological contributions to the field of neuroimaging: the analysis of signal variability was found to be a good gauge for analysing inter-individual physiological differences, while superior head-movement correction techniques have to be developed before pattern classification can be used to this end.

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception does not emerge from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Spatial perception therefore emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The main aim of this thesis is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems in quantitatively improving perceptual and cognitive spatial abilities in case of visual impairment. Chapter 1 summarizes the main research findings related to the role of vision and multisensory experience in spatial development. Overall, such findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space according to a perspective different from that of our own body. It might therefore be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective.
Chapter 2 presents original studies I carried out as a Ph.D. student to investigate the mechanisms underpinning spatial development and to compare the spatial performance of visually impaired and sighted individuals. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in congenitally blind or blindfolded individuals respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience in spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation process of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of new devices will provide the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation. Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment for the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of novel experimental tools to assess spatial competencies in response to unisensory and multisensory events and to train residual sensory modalities within a multisensory rehabilitation framework.

    The interaction of haptic imagery with haptic perception for sighted and visually impaired consumers

    Consumers evaluate products in the marketplace using their senses and often form mental representations of product properties. These mental representations have been studied extensively. Imagery has been shown to interact with perception within many perceptual modalities, including the visual, auditory, olfactory, and motor modalities. This dissertation draws on the vast visual imagery literature to examine imagery in the haptic, or touch, modality. Two studies were undertaken to examine the relationship between haptic imagery and haptic perception. The first study is based on studies from cognitive psychology that have used similar methods for examining visual imagery and visual perception. In Study 1, sighted and visually impaired participants were asked to evaluate objects haptically, to form a haptic image of each object during a short interval, and then to compare the haptic image to a second object. In Study 2, sighted and visually impaired participants listened to five radio advertisements containing imagery phrases from multiple modalities. After listening to the advertisements, participants were asked to recall the ad content and assess both the ad and the product while haptically evaluating the product in the ad. Though results were mixed and further exploration will be necessary, these studies offer broad implications for consumer use of haptic imagery in shopping environments. The implications for both sighted and blind consumers are discussed.