
    Functional MRI investigations of cortical mechanisms of auditory spatial attention

    In everyday settings, spatial attention helps listeners isolate and understand individual sound sources. However, the neural mechanisms of auditory spatial attention (ASpA) are only partially understood. This thesis uses within-subject analysis of functional magnetic resonance imaging (fMRI) data to address fundamental questions regarding the cortical mechanisms supporting ASpA by applying novel multi-voxel pattern analysis (MVPA) and resting-state functional connectivity (rsFC) approaches. A series of fMRI studies of ASpA was conducted in which subjects performed a one-back task, attending to one of two spatially separated streams. Attention modulated blood oxygenation level-dependent (BOLD) activity in multiple areas of the prefrontal, temporal, and parietal cortex, including non-visuotopic intraparietal sulcus (IPS), but not the visuotopic maps in IPS. No spatial bias was detected in any cortical area using standard univariate analysis; however, MVPA revealed that activation patterns in a number of areas, including the auditory cortex, predicted the attended direction. Furthermore, we explored how cognitive task demands and the sensory modality of the inputs influenced activity, using a visual one-back task and a visual multiple object tracking (MOT) task. Activity from the visual and auditory one-back tasks overlapped along the fundus of the IPS and in lateral prefrontal cortex (lPFC). However, there was minimal overlap of activity in the lPFC between the visual MOT task and the two one-back tasks. Finally, we endeavored to identify visual and auditory networks using rsFC. We identified a dorsal visual attention network reliably within individual subjects using visuotopic seeds. Using auditory seeds, we found a prefrontal area nested between segments of the dorsal visual attention network. These findings mark fundamental progress towards elucidating the cortical network controlling ASpA. Our results suggest that similar lPFC structures support both ASpA and its visual counterpart during a spatial one-back task, but that ASpA does not drive visuotopic IPS in the parietal cortex. Furthermore, rsFC reveals that visual and auditory seed regions are functionally connected with non-overlapping lPFC regions, possibly reflecting spatial and temporal cognitive processing biases, respectively. While we find no evidence for a spatiotopic map, the auditory cortex is sensitive to the direction of attention in its patterns of activation.
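
    To make the MVPA step concrete, the following minimal sketch shows how the attended direction could in principle be decoded from region-of-interest activation patterns with cross-validated classification; the voxel data, labels, and classifier choice are hypothetical placeholders, not the pipeline actually used in the thesis.

        # Minimal MVPA sketch: cross-validated decoding of attended direction
        # from region-of-interest voxel patterns. X, y, and the classifier are
        # hypothetical placeholders, not the thesis's actual pipeline.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 80, 200
        X = rng.normal(size=(n_trials, n_voxels))      # placeholder BOLD patterns per trial
        y = rng.integers(0, 2, size=n_trials)          # 0 = attend left, 1 = attend right

        # Above-chance cross-validated accuracy would indicate that the region's
        # activation pattern carries information about the attended direction,
        # even when univariate analysis shows no overall spatial bias.
        clf = make_pipeline(StandardScaler(), LinearSVC())
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"mean decoding accuracy: {scores.mean():.2f}")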

    Space, time and motion in a multisensory world

    When interacting with environmental events, humans acquire information from different senses and combine these inputs within a coherent representation of the world. The present doctoral thesis aims at investigating how humans represent space, time, and motion through the auditory and visual sensory modalities. A predisposition of different sensory systems towards the processing of different domains of representation has been widely demonstrated: hearing prevails in representing the time domain, whereas vision is the most reliable sense for processing the space domain. Given this strong link between sensory modality and domain of representation, one objective of this thesis is to deepen the knowledge of the neural organization of multisensory spatial and temporal skills in healthy adults. In addition, by using blindness as a model to unravel the role of vision in the development of spatio-temporal abilities, this thesis explores the interaction of the spatial and temporal domains in the acoustic motion perception of early blind individuals. The interplay between space and time has also been explained as a result of humans performing actions in the surrounding environment, since carrying out goal-directed motor behaviors requires associating the spatial and temporal information about one's target within a shared mental map. In this regard, the present project also asks how the brain processes the spatio-temporal cues of external events when it comes to manually intercepting moving objects with one hand. Finally, in light of the above results, this dissertation incorporates the development of a novel portable device, named MultiTab, for the behavioral evaluation of the processing of space, time, and motor responses through the visual and acoustic sensory modalities. For the purposes of this thesis, four methodological approaches have been employed: i) electroencephalography (EEG), to explore the cortical activation associated with multisensory spatial and temporal tasks; ii) psychophysical methods, to measure the relationship between stimuli in motion and the acoustic speed perception of blind and sighted individuals; iii) motion capture techniques, to measure indices of movement during an object-interception task; iv) design and technical-behavioral validation of a new portable device. The studies of the present dissertation indicate the following results. First, this thesis highlights an early cortical gain modulation of sensory areas that depends on the domain of representation to be processed, with auditory areas mainly involved in the multisensory processing of temporal inputs and visual areas in that of spatial inputs. Moreover, for the spatial domain specifically, the neural modulation of visual areas is also influenced by the kind of spatial layout representing the multisensory stimuli. Second, this project shows that lack of vision influences the ability to process the speed of moving sounds by altering how blind individuals make use of the sounds' temporal features. This result suggests that visual experience in the first years of life is a crucial factor when dealing with combined spatio-temporal information. Third, data from this thesis demonstrate that typically developing individuals who manually intercept a moving object with one hand take the item's spatio-temporal cues into consideration, adjusting their interceptive movements according to the object's speed. Finally, the design and validation of MultiTab show its utility in the evaluation of multisensory processing, such as the manual localization of audiovisual spatialized stimuli. Overall, the findings of this thesis contribute to a more in-depth picture of how the human brain represents space, time, and motion through different senses. Moreover, they provide promising implications for novel technological methods for the assessment and training of these dimensions in typical and atypical populations.
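
    As an illustration of the psychophysical approach listed under point ii) above, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical "comparison judged faster" responses to estimate a point of subjective equality (PSE) and a discrimination threshold; the speeds, response proportions, and conventions are placeholders, not the thesis's data or exact procedure.

        # Sketch of a psychometric-function fit for acoustic speed discrimination.
        # The comparison speeds and response proportions below are made-up
        # placeholders; only the fitting logic is illustrated.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(speed, pse, sigma):
            # Probability of judging the comparison sound faster than the standard.
            return norm.cdf(speed, loc=pse, scale=sigma)

        comparison_speeds = np.array([10, 15, 20, 25, 30, 35, 40])         # deg/s (placeholder)
        prop_faster = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

        (pse, sigma), _ = curve_fit(psychometric, comparison_speeds, prop_faster,
                                    p0=[25.0, 5.0])
        jnd = sigma * norm.ppf(0.75)   # one common 75%-point convention for the JND
        print(f"PSE = {pse:.1f} deg/s, JND = {jnd:.1f} deg/s")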

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session).
    The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined as the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (17.7% vs. 50.7%) and a higher mean nasalance score (46.7% vs. 31.3%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America.
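
    A minimal sketch of how the error measure described above could be computed, assuming the threshold schedule given in the abstract (10% rising to 50% for errorless learners, reversed for errorful learners); the number of threshold steps and the nasalance scores are hypothetical placeholders, not the study's data.

        # Sketch of the error measure: the proportion of productions whose
        # nasalance falls below the current threshold. The threshold endpoints
        # follow the abstract; the step count and scores are placeholders.
        import numpy as np

        errorless_thresholds = np.linspace(10, 50, 5)     # % nasalance, rising 10% -> 50%
        errorful_thresholds = errorless_thresholds[::-1]  # same targets, reversed order

        def error_proportion(nasalance_scores, threshold):
            # An "error" is any production with a nasalance score below the threshold.
            return float(np.mean(np.asarray(nasalance_scores) < threshold))

        rng = np.random.default_rng(1)
        block_scores = rng.normal(loc=40, scale=10, size=20)   # placeholder block of trials
        for t in errorless_thresholds:
            print(f"threshold {t:.0f}%: error proportion {error_proportion(block_scores, t):.2f}")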

    Distilling the neural correlates of conscious somatosensory perception

    The ability to consciously perceive the world profoundly defines our lives as human beings. Somehow, our brains process information in a way that allows us to become aware of the images, sounds, touches, smells, and tastes surrounding us. Yet our understanding of the neurobiological processes that generate perceptual awareness is very limited. One of the most contested questions in the neuroscientific study of conscious perception is whether awareness arises from the activity of early sensory brain regions, or instead requires later processing in widespread supramodal networks. It has been suggested that the conflicting evidence supporting these two perspectives may be the result of methodological confounds in classical experimental tasks. To infer participants' perceptual awareness in these tasks, participants need to report the contents of their perception. This means that the neural signals underlying the emergence of perceptual awareness often cannot be dissociated from pre- and postperceptual processes. Consequently, some of the previously observed effects may not be correlates of awareness after all but instead may have resulted from task requirements. In this thesis, I investigate this possibility in the somatosensory modality. To scrutinise the task dependence of the neural correlates of somatosensory awareness, I developed an experimental paradigm that controls for the most common experimental confounds. In a somatosensory-visual matching task, participants were required to detect electrical target stimuli at ten different intensity levels. Instead of reporting their perception directly, they compared their somatosensory percepts to simultaneously presented visual cues that signalled stimulus presence or absence, and then reported a match or mismatch accordingly. As a result, target detection was decorrelated from working memory and reports, the behavioural relevance of detected and undetected stimuli was equated, the influence of attentional processes was mitigated, and perceptual uncertainty was varied in a controlled manner. Results from a functional magnetic resonance imaging (fMRI) study and an electroencephalography (EEG) study showed that, when task demands were controlled for, the neural correlates of somatosensory awareness were restricted to relatively early activity (~150 ms) in secondary somatosensory regions. In contrast, late activity (>300 ms) indicative of processing in frontoparietal networks occurred irrespective of stimulus awareness, and activity in anterior insular, anterior cingulate, and supplementary motor cortex was associated with processing perceptual uncertainty and reports. These results add novel evidence to the early-local vs. late-global debate and favour the view that perceptual awareness emerges at the level of modality-specific sensory cortices.
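
    A minimal sketch of the report logic in the matching paradigm described above, under the assumption that a "match" is reported whenever the somatosensory percept agrees with the visual cue; the function name and trial enumeration are illustrative only, not the study's actual code.

        # Sketch of the match/mismatch report rule: the report depends only on
        # whether the somatosensory percept agrees with the visual cue, so the
        # motor report by itself does not reveal whether the stimulus was
        # detected. Names and the trial enumeration are illustrative only.
        from itertools import product

        def correct_report(stimulus_perceived: bool, cue_signals_present: bool) -> str:
            return "match" if stimulus_perceived == cue_signals_present else "mismatch"

        for perceived, cue in product([True, False], repeat=2):
            print(f"perceived={perceived!s:5}  cue_present={cue!s:5}  ->  "
                  f"{correct_report(perceived, cue)}")
        # Detected and undetected percepts each map onto both report types across
        # cue conditions, which is what decorrelates awareness from the report.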

    The posterior parietal cortex: a bridge between vision and action

    The present work examines three posterior parietal areas, V6, V6A, and PEc, all operating on different subsets of signals (visual, somatic, motor). The work focuses on the study of their functional properties, to better understand their respective contributions to the neuronal circuits that make possible the interactions between the subject and the external environment. In the caudalmost pole of the parietal lobe lies area V6. Functional data suggest that this area is involved in encoding both object motion and ego-motion. However, the sensitivity of V6 neurons to optic-flow stimulation had previously been tested only in human fMRI experiments. Here we addressed this issue by applying to the monkey the same experimental protocol used in human studies. The visual stimulation obtained with the Flow Fields stimulus was the most effective and powerful in activating area V6 in the monkey, further strengthening the homology between the two primates. The neighboring areas, V6A and PEc, show different cytoarchitecture and connectivity profiles, but both are involved in the control of reaches. We studied the sensory responses present in these areas and directly compared them. We also studied the motor-related discharges of PEc neurons during reaching movements in 3D space, comparing the direction and depth tuning of PEc cells with those of V6A. The results show that areas PEc and V6A share several functional properties. Area PEc, unlike V6A, receives a richer and more complex somatosensory input, and a poorer, although complex, visual one. Differences also emerged when comparing the motor-related properties for reaches in depth: the incidence of depth modulations in PEc and the temporal pattern of modulation for depth and direction allow us to delineate a trend across the two parietal visuomotor areas.

    Sensor Fusion in the Perception of Self-Motion

    This dissertation was written at the Max Planck Institute for Biological Cybernetics (Max-Planck-Institut für Biologische Kybernetik) in Tübingen, in the department of Prof. Dr. Heinrich H. Bülthoff. The work was academically supported by Prof. Dr. Günther Palm (University of Ulm, Abteilung Neuroinformatik). The main evaluators were Prof. Dr. Günther Palm, Prof. Dr. Wolfgang Becker (University of Ulm, Sektion Neurophysiologie), and Prof. Dr. Heinrich Bülthoff. The goal of this thesis was to investigate the integration of different sensory modalities in the perception of self-motion, using psychophysical methods. Experiments with healthy human participants were to be designed for and performed in the Motion Lab, which is equipped with a simulator platform and projection screen. Results from the psychophysical experiments were to be used to refine models of the multisensory integration process, with an emphasis on Bayesian (maximum likelihood) integration mechanisms. To put the psychophysical experiments into the larger framework of research on multisensory integration in the brain, results of neuroanatomical and neurophysiological experiments on multisensory integration are also reviewed.
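
    The Bayesian (maximum-likelihood) integration mechanism mentioned above is commonly formalised as a reliability-weighted average of the single-cue estimates; the sketch below implements that textbook formula with placeholder visual and vestibular self-motion estimates rather than the dissertation's actual model.

        # Sketch of maximum-likelihood cue integration: the fused estimate is a
        # reliability-weighted average of the single-cue estimates, and the fused
        # variance is smaller than either single-cue variance. The cue values are
        # placeholders, not results from the dissertation.
        import numpy as np

        def ml_integrate(estimates, variances):
            weights = 1.0 / np.asarray(variances, dtype=float)   # reliability = 1 / variance
            fused_estimate = np.sum(weights * np.asarray(estimates, dtype=float)) / np.sum(weights)
            fused_variance = 1.0 / np.sum(weights)
            return fused_estimate, fused_variance

        # e.g. visual and vestibular estimates of a self-rotation angle (deg)
        est, var = ml_integrate(estimates=[30.0, 24.0], variances=[4.0, 9.0])
        print(f"fused estimate = {est:.1f} deg, fused variance = {var:.2f}")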

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One
