
    Structural and Functional Network-Level Reorganization in the Coding of Auditory Motion Directions and Sound Source Locations in the Absence of Vision

    Epub 2022 May 2. hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions, and whether the functional enhancement observed in the blind is motion specific or also involves sound source location, remains unresolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, such as the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in both sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed an axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of the occipitotemporal networks that support spatial hearing in the sighted. SIGNIFICANCE STATEMENT Spatial hearing helps living organisms navigate their environment. This is all the more true for people born blind.
How does blindness affect the brain network supporting auditory motion and sound source location? Our results show that motion direction and sound position information was higher in hMT+/V5 and lower in the human planum temporale in blind relative to sighted people, and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals. The project was funded in part by a European Research Council starting grant MADVIS (Project 337573) awarded to O.C., the Belgian Excellence of Science (EOS) program (Project 30991544) awarded to O.C., a Flagship ERA-NET grant SoundSight (FRS-FNRS PINT-MULTI R.8008.19) awarded to O.C., and by the European Union Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 701250 awarded to V.O. Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI), funded by the Fond de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region. A.G.-A. is supported by the Wallonie Bruxelles International Excellence Fellowship and the FSR Incoming PostDoc Fellowship by Université Catholique de Louvain. O.C. is a research associate, C.B. is a postdoctoral researcher, and M.R. is a research fellow at the Fond National de la Recherche Scientifique de Belgique (FRS-FNRS).

    Multivoxel Pattern Analysis Reveals Auditory Motion Information in MT+ of Both Congenitally Blind and Sighted Individuals

    Cross-modal plasticity refers to the recruitment of cortical regions involved in the processing of one modality (e.g. vision) for the processing of other modalities (e.g. audition). The principles determining how and where cross-modal plasticity occurs remain poorly understood. Here, we investigate these principles by testing responses to auditory motion in visual motion area MT+ of congenitally blind and sighted individuals. Replicating previous reports, we find that MT+ as a whole shows a strong and selective response to auditory motion in congenitally blind but not sighted individuals, suggesting that the emergence of this univariate response depends on experience. Importantly, however, multivoxel pattern analyses showed that MT+ contained information about different auditory motion conditions in both blind and sighted individuals. These results were specific to MT+ and not found in early visual cortex. Basic sensitivity to auditory motion in MT+ is thus experience-independent, which may be a basis for the region's strong cross-modal recruitment in congenital blindness.
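The logic of such a multivoxel decoding analysis can be sketched in a few lines. This is a minimal illustration with synthetic data and a simple nearest-class-mean classifier, not the study's actual pipeline: if two conditions evoke reliably different spatial patterns across voxels, a classifier trained on some trials will label held-out trials above chance, even when the mean activation level carries no condition information.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Two simulated conditions whose mean voxel patterns differ slightly,
# mimicking condition-specific information distributed across voxels.
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = pattern_a + rng.normal(0.0, 0.5, n_voxels)

X = np.vstack([pattern_a + rng.normal(0.0, 1.0, (n_trials, n_voxels)),
               pattern_b + rng.normal(0.0, 1.0, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_decoding_accuracy(X, y):
    """Leave-one-trial-out decoding with a nearest-class-mean classifier."""
    hits = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        means = [X[train & (y == c)].mean(axis=0) for c in (0, 1)]
        pred = int(np.argmin([np.linalg.norm(X[i] - m) for m in means]))
        hits += pred == y[i]
    return hits / len(y)

print(loo_decoding_accuracy(X, y))  # well above the 0.5 chance level
```

With information present in the pattern, accuracy exceeds chance; shuffling the labels would bring it back to about 0.5, which is the usual permutation-based control in such analyses.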

    Novel methods to evaluate blindsight and develop rehabilitation strategies for patients with cortical blindness

    20 to 57% of victims of a cerebrovascular accident (CVA) develop visual deficits that considerably reduce their quality of life. Among the most extreme of these deficits is cortical blindness (CB), which manifests when the primary visual region (V1) is damaged.
To date, no approach reliably induces restoration of visual function, and in most cases plasticity is insufficient to allow spontaneous recovery. Therefore, while sight loss is considered permanent, unconscious yet important functions, known as blindsight, could be of use for visual rehabilitation strategies, raising strong interest in the cognitive neurosciences. Blindsight is a rare phenomenon that reveals a dissociation between performance and consciousness, mainly investigated in case reports. In the first chapter of this thesis, we addressed multiple issues in our comprehension of blindsight and conscious perception. As we argue, such an understanding might have a significant influence on the clinical rehabilitation of patients suffering from CB. We therefore propose a unique strategy for visual rehabilitation that uses video game principles to target and potentiate neural mechanisms within the global neuronal workspace framework, which is theoretically explained in study 1 and methodologically described in study 5. In other words, we propose that case reports, in conjunction with improved methodological criteria, might identify the neural substrates that support blindsight and unconscious processing. This thesis thus provides three empirical experiments (studies 2, 3, and 4) that used new standards in electrophysiological analysis to describe the cases of patient SJ, presenting blindsight for affective natural complex scenes, and patient ML, presenting blindsight for motion stimuli. In studies 2 and 3, we probed the subcortical and cortical neural substrates supporting SJ's affective blindsight using MEG and compared these unconscious correlates to his conscious perception. Study 4 characterizes the substrates of the automatic detection of changes in the absence of visual awareness, as measured by the visual mismatch negativity (vMMN), in ML and a neurotypical group.
We conclude by proposing the vMMN as a neural biomarker of unconscious processing in normal and altered vision, independent of behavioral assessments. These procedures allowed us to address certain open debates in the blindsight literature and to probe the existence of secondary neural pathways supporting unconscious behavior. In conclusion, this thesis proposes to combine empirical and clinical perspectives, using methodological advances and novel methods to understand and target the neurophysiological substrates underlying blindsight. Importantly, the framework offered by this doctoral dissertation might help future studies build efficient, targeted therapeutic tools and multimodal rehabilitation training.

    Visual perceptual stability and the processing of self-motion information: neurophysiology, psychophysics and neuropsychology

    While we move through our environment, we constantly have to deal with new sensory input. The visual system in particular must cope with an ever-changing input signal, since we continuously move our eyes. About three times every second, we shift our gaze to a new area of the visual field with a fast, ballistic eye movement called a saccade. As a consequence, the entire projection of the surrounding world on our retina moves. Yet we do not perceive this shift consciously. Instead, we have the impression of a stable world around us, in which objects have a well-defined location. In my thesis I aimed to investigate the neural mechanisms underlying the visual perceptual stability of our environment. One hypothesis is that there is a coordinate transformation of the retinocentric input signal to a craniocentric (egocentric) and eventually even to a world-centered (allocentric) frame of reference. Such a transformation into a craniocentric reference frame requires information about both the location of a stimulus on the retina and the current eye position within the head. The physicist Hermann von Helmholtz was one of the first to suggest that such an eye-position signal is available in the brain as an internal copy of the motor plan that is sent to the eye muscles. This so-called efference copy allows the brain to classify actions as self-generated and differentiate them from externally triggered ones. If we are the originator of an action, we are able to predict its outcome and can take this prediction into account in further processing. For example, if the projection of the environment moves across the retina due to an eye movement, the shift is registered as self-induced and the brain maintains a stable percept of the world. However, if one gently pushes the eye from the side with a finger, we perceive a moving environment.
Along the same lines, it is necessary to correctly attribute movement of the visual field to our own self-motion, e.g. to perform eye movements that account for the additional influence of our own movements. The first study of my thesis shows that the perceived location of a stimulus might indeed be a combination of two independent neuronal signals: the position of the stimulus on the retina and information about the current eye position or eye movement, respectively. In this experiment, the mislocalization of briefly presented stimuli, which is characteristic for each type of eye movement, leads to the perceptual localization of stimuli within the area of the blind spot on the retina. This is the region where the optic nerve leaves the eye, meaning that no photoreceptors are available there to convert light into neuronal signals; physically, subjects should be blind to stimuli presented in this part of the visual field. In fact, a combination of the actual stimulus position with the specific, error-inducing eye-movement information can explain the experimentally measured behavior. The second study of my thesis investigates the neural mechanism underlying the mislocalization of briefly presented stimuli during eye movements. Many previous studies using animal models (the rhesus monkey) revealed internal representations of eye-position signals in various brain regions and thereby confirmed the hypothesis of an efference copy signal within the brain. Although these eye-position signals reflect the actual eye position with good accuracy, they also contain spatial and temporal inaccuracies. These erroneous representations have previously been suggested as the source of perceptual mislocalization during saccades. The second study of my thesis extends this hypothesis to the mislocalization during smooth pursuit eye movements. We usually perform such an eye movement when we want to continuously track a moving object with our eyes.
I showed that the activity of neurons in the ventral intraparietal area of the rhesus monkey adequately represents the actual eye position during smooth pursuit. However, the internal eye-position signal constantly led the real eye position in the direction of the ongoing eye movement. In combination with a distortion of the visual map due to an uneven allocation of attention toward the future stimulus position, this results in a mislocalization pattern during smooth pursuit that almost exactly resembles the one typically measured in psychophysical experiments. Hence, on the one hand, the efference copy of the eye-position signal provides the signal required to perform a coordinate transformation and thereby preserve a stable perception of our environment. On the other hand, small inaccuracies within this signal seem to cause perceptual errors when the visual system is experimentally pushed to its limits. The efference copy also plays a role in dysfunctions of the brain in neurological or psychiatric diseases. For example, many symptoms of schizophrenia patients could be explained by an impaired efference copy mechanism and a resulting misattribution of agency to self- and externally produced actions. Following this hypothesis, the auditory hallucinations typically observed in these patients might result from erroneously assigned agency over their own thoughts. To make a detailed analysis of this potentially impaired efference copy mechanism possible, the third study of my thesis investigated eye movements of schizophrenia patients, stepping outside the limited capabilities of laboratory setups into the real world. This study showed that results of previous laboratory studies only partly resemble those obtained in the real world. For example, schizophrenia patients, when compared to healthy controls, usually show less accurate smooth pursuit eye movements in the laboratory.
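The mechanism described here, a retinal position signal combined with an internal eye-position signal that slightly leads the true eye position, can be written as a toy calculation. The pursuit speed and lead time below are invented for illustration, not the fitted parameters of the study:

```python
# Toy model: perceived position = retinal position + internal eye-position
# signal. If the internal signal leads the true eye position during smooth
# pursuit, a briefly flashed stimulus is mislocalized in the pursuit direction.
PURSUIT_SPEED = 10.0  # deg/s, assumed value
LEAD_TIME = 0.05      # s; internal signal leads the eye (assumed value)

def eye_position(t):
    """True eye position while smoothly tracking at constant speed."""
    return PURSUIT_SPEED * t

def perceived_position(stim_pos, t):
    retinal = stim_pos - eye_position(t)        # flash location on the retina
    internal_eye = eye_position(t + LEAD_TIME)  # efference-copy estimate, leading
    return retinal + internal_eye               # craniocentric reconstruction

flash_pos, t_flash = 5.0, 0.3                   # true position (deg) and time (s)
error = perceived_position(flash_pos, t_flash) - flash_pos
print(error)  # ~0.5 deg shift in the direction of pursuit
```

In this sketch the localization error equals pursuit speed times lead time, so a constant lead of the internal signal produces a constant forward shift, consistent with the mislocalization pattern described above.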
Yet in the real world, when they track a stationary object with their eyes while moving towards it, there are no differences between patients and healthy controls, although both types of eye movement are closely related. This might be because patients were able to use additional sources of information in the real world, e.g. self-motion information, to compensate for some of their deficits under certain conditions. Similarly, the fourth study of my thesis showed that typical impairments of eye movements during healthy aging can be compensated for by other sources of information available under natural conditions. At the same time, this work underlined the need for eye-movement measurements in the real world as a complement to laboratory studies, in order to accurately describe the visual system, the mechanisms of perception and their interactions under natural circumstances. For example, experiments in the laboratory usually analyze specifically selected eye-movement parameters within a restricted range, such as saccades of a certain amplitude. This does not reflect everyday life, in which such parameters are typically continuous and not normally distributed. Furthermore, motion-selective areas in the brain might play a much bigger role in natural environments, since we generally move our head and/or our whole body. To correctly analyze the contributions to and influences on eye movements, one has to perform eye-movement studies under conditions as realistic as possible. The fifth study of my thesis investigated a possible application of eye-movement studies in the diagnosis of neuronal diseases. We showed that basic eye-movement parameters like saccadic peak velocity can be used to differentiate patients with Parkinson's disease from patients with an atypical form of Parkinsonism, progressive supranuclear palsy.
This differentiation is of particular importance since both diseases share a similar onset but have considerably different progressions and outcomes, requiring different types of therapy. An early differential diagnosis, preferably in a subclinical stage, is needed to ensure optimal treatment, to ease the symptoms and eventually even improve the prognosis. The study showed that mobile eye-trackers are particularly well suited to investigating eye movements in the daily clinical routine, owing to their promising results in differential diagnosis and their easy, fast and reliable handling. In conclusion, my thesis underlines the importance of combining different neuroscientific methods, such as psychophysics, eye-movement measurements in the real world, electrophysiology and the investigation of neuropsychiatric patients, to get a complete picture of how the brain works. The results of my thesis extend the current knowledge about the processing of information and the perception of our environment in the brain, point towards fields of application for eye-movement measurements, and can serve as groundwork for future research.

    Understanding space by moving through it: neural networks of motion- and space processing in humans

    Humans explore the world by moving in it, whether moving their whole body as during walking or driving a car, or moving their arm to explore the immediate environment. During movement, self-motion cues arise from the sensorimotor system comprising vestibular, proprioceptive, visual and motor cues, which provide information about direction and speed of the movement. Such cues allow the body to keep track of its location while it moves through space. Sensorimotor signals providing self-motion information can therefore serve as a source for spatial processing in the brain. This thesis is an inquiry into human brain systems of movement and motion processing in a number of different sensory and motor modalities using functional magnetic resonance imaging (fMRI). By characterizing connections between these systems and the spatial representation system in the brain, this thesis investigated how humans understand space by moving through it. In the first study of this thesis, the recollection networks of whole-body movement were explored. Brain activation was measured during the retrieval of active and passive self-motion and retrieval of observing another person performing these tasks. Primary sensorimotor areas dominated the recollection network of active movement, while higher association areas in parietal and mid-occipital cortex were recruited during the recollection of passive transport. Common to both self-motion conditions were bilateral activations in the posterior medial temporal lobe (MTL). No MTL activations were observed during recollection of movement observation. Considering that on a behavioral level, both active and passive self-motion provide sufficient information for spatial estimations, the common activation in MTL might represent the common physiological substrate for such estimations. The second study investigated processing in the 'parahippocampal place area' (PPA), a region in the posterior MTL, during haptic exploration of spatial layout. 
The PPA is known to respond strongly to visuo-spatial layout. The study explored whether this region processes visuo-spatial layout specifically or spatial layout in general, independent of the encoding sensory modality. In cohorts of both sighted and blind participants, activation patterns in the PPA were measured while participants haptically explored the spatial layout of model scenes or the shape of information-matched objects. In both sighted and blind individuals, PPA activity was greater during layout exploration than during object-shape exploration. While PPA activity in the sighted could also be caused by a transformation of haptic information into a mental visual image of the layout, two points speak against this: firstly, no increase in connectivity between the visual cortex and the PPA was observed, which would be expected if visual imagery took place; secondly, blind participants, who cannot resort to visual imagery, showed the same pattern of PPA activity. Together, these results suggest that the PPA processes spatial layout information independent of the encoding modality. The third and last study addressed error accumulation in motion processing at different levels of the visual system. Using novel methods for the analysis of fMRI data, possible links between physiological properties in hMT+ and V1 and inter-individual differences in perceptual performance were explored. A correlation between noise characteristics and performance score was found in hMT+ but not V1: better performance correlated with greater signal variability in hMT+. Though neurophysiological variability is traditionally seen as detrimental to behavioral accuracy, the results of this thesis add to the increasing evidence suggesting the opposite: that under certain circumstances, more efficient processing can be related to more noise in neurophysiological signals.
In summary, the results of this doctoral thesis contribute to our current understanding of motion and movement processing in the brain and its interface with spatial processing networks. The posterior MTL appears to be a key region for both self-motion and spatial processing. The results further indicate that physiological characteristics on the level of category-specific processing but not primary encoding reflect behavioral judgments on motion. This thesis also makes methodological contributions to the field of neuroimaging: it was found that the analysis of signal variability is a good gauge for analysing inter-individual physiological differences, while superior head-movement correction techniques have to be developed before pattern classification can be used to this end

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
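For context, with two conditions and a within-subject design, an F statistic with (1, 4) degrees of freedom, as reported above, reduces to the square of a paired t statistic over five participants. A sketch with invented accuracy values (not the study's data) shows the computation:

```python
# Hypothetical sketch of how an F(1, 4) arises from a two-condition,
# five-participant repeated-measures comparison. Accuracy values invented.
standard = [0.78, 0.81, 0.75, 0.80, 0.77]  # per-participant accuracy, invented
shifted = [0.74, 0.80, 0.76, 0.77, 0.75]   # per-participant accuracy, invented

diff = [a - b for a, b in zip(standard, shifted)]
n = len(diff)
mean = sum(diff) / n
var = sum((d - mean) ** 2 for d in diff) / (n - 1)  # unbiased sample variance

# For two conditions, repeated-measures ANOVA reduces to a paired t-test,
# with F(1, n - 1) = t ** 2.
t = mean / (var ** 0.5 / n ** 0.5)
F = t ** 2
print(round(F, 2))  # F with (1, 4) degrees of freedom, ≈ 4.38 here
```

Comparing such an F against the F(1, 4) critical value (about 7.7 at p = 0.05) is what licenses the "no significant difference" conclusion in the abstract.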

    Decoding natural sounds in early “visual” cortex of congenitally blind individuals

    Complex natural sounds, such as bird singing, people talking, or traffic noise, induce decodable fMRI activation patterns in early visual cortex of sighted blindfolded participants [1]. That is, early visual cortex receives non-visual and potentially predictive information from audition. However, it is unclear whether the transfer of auditory information to early visual areas is an epiphenomenon of visual imagery or, alternatively, whether it is driven by mechanisms independent from visual experience. Here, we show that we can decode natural sounds from activity patterns in early “visual” areas of congenitally blind individuals who lack visual imagery. Thus, visual imagery is not a prerequisite of auditory feedback to early visual cortex. Furthermore, the spatial pattern of sound decoding accuracy in early visual cortex was remarkably similar in blind and sighted individuals, with an increasing decoding accuracy gradient from foveal to peripheral regions. This suggests that the typical organization by eccentricity of early visual cortex develops for auditory feedback, even in the lifelong absence of vision. The same feedback to early visual cortex might support visual perception in the sighted [1] and drive the recruitment of this area for non-visual functions in blind individuals [2, 3]

    The multisensory function of the human primary visual cortex

    It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient hard evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed, taking advantage of the particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in the primary visual cortex of humans are now relatively solidly supported. First, haemodynamic methods (fMRI/PET) show that both convergence and integration occur within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERPs/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, it can now be considered established for the human primary visual cortex.