
    To integrate or not to integrate: temporal dynamics of hierarchical Bayesian Causal Inference

    To form a percept of the environment, the brain needs to solve the binding problem: inferring whether signals come from a common cause and are integrated or come from independent causes and are segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian causal inference; but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task, we show that the brain accomplishes Bayesian causal inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals weighted by their bottom-up sensory reliabilities and top-down task relevance into spatial priority maps that guide behavioural responses. As predicted by Bayesian causal inference, these spatial priority maps take into account the brain’s uncertainty about the world’s causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian causal inference, a statistical computation which is crucial for effective interactions with the environment.
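    For readers unfamiliar with the computation, the following is a minimal sketch of the standard Bayesian causal inference (model-averaging) scheme that such studies build on; the notation is generic and not taken from the article. Here x_A and x_V are the noisy auditory and visual measurements, p_common is the prior probability of a common cause, and the observer's final auditory estimate weights the fused and segregated estimates by the posterior probability of a common cause:

\[
P(C{=}1 \mid x_A, x_V) = \frac{P(x_A, x_V \mid C{=}1)\, p_{\mathrm{common}}}{P(x_A, x_V \mid C{=}1)\, p_{\mathrm{common}} + P(x_A, x_V \mid C{=}2)\,\bigl(1 - p_{\mathrm{common}}\bigr)}
\]
\[
\hat{S}_A = P(C{=}1 \mid x_A, x_V)\,\hat{S}_{A,\,C=1} + \bigl(1 - P(C{=}1 \mid x_A, x_V)\bigr)\,\hat{S}_{A,\,C=2}
\]

    In this sketch, \hat{S}_{A,C=1} is the reliability-weighted fused estimate and \hat{S}_{A,C=2} the segregated, auditory-only estimate; the analogous expression gives the visual report.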

    Estrogen receptor alpha and beta differentially mediate C5aR agonist evoked Ca2+-influx in neurons through L-type voltage-gated Ca2+ channels

    Complement C5a is associated primarily with inflammation. The widespread expression of its receptors, C5aR and C5L2, in neuronal cells, however, suggests additional regulatory roles for C5a in the CNS. The C5aR agonist PL37-MAP evokes Ca2+-influx in the GT1-7 neuronal cell line, and this Ca2+-influx is regulated by estradiol. In the present study, we examined further the mechanism of the Ca2+-influx and the contribution of the two estrogen receptor (ER) isotypes, ERα and ERβ, to the estrogenic modulation of intracellular Ca2+-content. GT1-7 neurons were treated with isotype-selective ER agonists for 24 h, then C5aR agonist-evoked Ca2+-responses were measured by Ca2+-imaging. Transcriptional changes were followed by real-time PCR. We found that not only estradiol (100 pM) but also the ERα-selective agonist PPT (100 pM) enhanced the PL37-MAP-evoked Ca2+-influx (E2: 215%, PPT: 175%, compared to the PL37-MAP-evoked Ca2+-influx). In contrast, the ERβ-selective agonist DPN (100 pM) significantly reduced the Ca2+-influx (32%). An attenuated Ca2+-response (25%) was observed in a Ca2+-free environment, and depletion of the Ca2+-pool by CPA eliminated the remaining elevation in Ca2+-content, demonstrating that the majority of Ca2+ originated from the extracellular compartment. The L-type voltage-gated Ca2+-channel (L-VGCC) blocker nifedipine abolished the Ca2+-influx, while the R-type Ca2+-channel blocker SNX-482 had no effect, exemplifying the predominant role of L-VGCCs in this process. Acute pre-treatments (8 min) with ER agonists did not affect the evoked Ca2+-influx, revealing that the observed effects of estrogens were genomic. Therefore, we examined the estrogenic regulation of C5a receptors and L-VGCC subunits. ER agonists increased C5aR mRNA expression, whereas they differentially regulated C5L2. Estradiol decreased transcription of the Cav1.3 L-VGCC subunit. Based on these results, we propose that estradiol may differentially modulate C5a-induced Ca2+-influx via L-VGCCs in neurons, depending on the expression of the two ER isotypes.

    Differential auditory and visual phase-locking are observed during audio-visual benefit and silent lip-reading for speech perception

    Data, code, and stimulus materials for the research article "Differential auditory and visual phase-locking are observed during audio-visual benefit and silent lip-reading for speech perception" by Máté Aller, Heidi Solberg Økland, Lucy J. MacGregor, Helen Blank, and Matthew H. Davis, published in the Journal of Neuroscience (https://www.jneurosci.org/content/early/2022/06/24/JNEUROSCI.2476-21.2022). Cite: Aller, M., Solberg Økland, H., MacGregor, L. J., Blank, H., & Davis, M. H. (2022). Differential auditory and visual phase-locking are observed during audio-visual benefit and silent lip-reading for speech perception. Journal of Neuroscience. https://doi.org/10.1523/JNEUROSCI.2476-21.2022

    Multisensory integration and recalibration in the human brain

    To cope with the challenges posed by our dynamically changing environment, we rely on a number of senses as sources of information. The information provided by different senses must be seamlessly merged into an accurate and reliable percept at any moment throughout our lives, regardless of the noisiness of our environment and the constantly changing nature of our sensory systems. Our understanding of these processes has expanded exponentially in recent decades; however, there is an abundance of questions yet to be answered. The present thesis addresses some of the outstanding questions regarding multisensory integration and recalibration. In Chapter 1, we give an introduction to the background of multisensory integration. In Chapter 2, we review the neural mechanisms of auditory spatial perception. In Chapter 3, we lay the methodological foundations for the empirical chapters. In Chapter 4, we investigate whether multisensory integration emerges prior to perceptual awareness. In Chapter 5, we scrutinize the neural dynamics of the computations underlying Bayesian causal inference. In Chapter 6, we examine the spatio-temporal characteristics of the neural processes of multisensory adaptation. Finally, in Chapter 7, we summarise the results of the empirical chapters, discuss their contribution to the literature, and outline directions for future research.

    Audiovisual adaptation is expressed in spatial and decisional codes.

    Funder: EC | EC Seventh Framework Programme | FP7 Ideas: European Research Council (FP7-IDEAS-ERC - Specific Programme "Ideas" Implementing the Seventh Framework Programme of the European Community for Research, Technological Development and Demonstration Activities (2007 to 2013)); Grant(s): ERC-2012-StG_20111109 multsens
    The brain adapts dynamically to the changing sensory statistics of its environment. Recent research has started to delineate the neural circuitries and representations that support this cross-sensory plasticity. Combining psychophysics with model-based representational fMRI and EEG, we characterized how the adult human brain adapts to misaligned audiovisual signals. We show that audiovisual adaptation is associated with changes in regional BOLD responses and fine-scale activity patterns in a widespread network from Heschl's gyrus to dorsolateral prefrontal cortices. Audiovisual recalibration relies on distinct spatial and decisional codes that are expressed with opposite gradients and time courses across the auditory processing hierarchy. Early activity patterns in auditory cortices encode sounds in a continuous space that flexibly adapts to misaligned visual inputs. Later activity patterns in frontoparietal cortices code decisional uncertainty consistent with these spatial transformations. Our findings suggest that regions within the auditory processing hierarchy multiplex spatial and decisional codes to adapt flexibly to the changing sensory statistics in the environment.

    Perceptual abilities predict individual differences in audiovisual benefit for phonemes, words and sentences

    Individuals differ substantially in the benefit they can obtain from visual cues during speech perception. Here, 113 normally-hearing participants between the ages of 18 and 60 completed a three-part experiment investigating the reliability and predictors of individual audiovisual benefit for acoustically degraded speech. Audiovisual benefit was calculated as the relative intelligibility (at the individual level) of approximately matched (at the group level) auditory-only and audiovisual speech for materials at three levels of linguistic structure: meaningful sentences, monosyllabic words, and consonants in minimal syllables. This measure of audiovisual benefit was stable across sessions and materials, suggesting that a shared mechanism of audiovisual integration operates across levels of linguistic structure. Information transmission analyses suggested that this may be related to simple phonetic cue extraction: sentence-level audiovisual benefit was reliably predicted by the relative ability to discriminate place of articulation at the consonant level. Finally, while unimodal speech perception was related to cognitive measures (matrix reasoning, vocabulary) and demographics (age, gender), audiovisual benefit was predicted uniquely by unimodal perceptual abilities: better lipreading ability and subclinically poorer hearing (speech reception thresholds) independently predicted enhanced audiovisual benefit. This work has implications for best practices in quantifying audiovisual benefit and for research identifying strategies to enhance multimodal communication in hearing loss.
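    As a minimal sketch of how such an individual-level benefit score can be computed, assuming benefit is simply the difference in proportion of items reported correctly between approximately matched audiovisual and auditory-only conditions (the function and variable names below are hypothetical, not the study's analysis code):

# Hypothetical sketch: audiovisual benefit as the difference in proportion correct
# between matched audiovisual (AV) and auditory-only (AO) trials for one participant.
def av_benefit(ao_correct, av_correct):
    """Return proportion correct in AV minus proportion correct in AO.

    ao_correct, av_correct: sequences of booleans, one per trial, for the
    auditory-only and audiovisual conditions of a single participant.
    """
    prop_ao = sum(ao_correct) / len(ao_correct)
    prop_av = sum(av_correct) / len(av_correct)
    return prop_av - prop_ao

# Example: 40% correct auditory-only vs. 65% correct audiovisual -> benefit of 0.25.
print(av_benefit([True] * 8 + [False] * 12, [True] * 13 + [False] * 7))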

    Differential Auditory and Visual Phase-Locking Are Observed during Audio-Visual Benefit and Silent Lip-Reading for Speech Perception.

    Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explored phase-locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information was present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase-locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase-locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus did not show above-chance partial coherence with visual speech signals during AV conditions, but did show partial coherence in visual-only conditions. Hence, visual speech enabled stronger phase-locking to auditory signals in visual areas, whereas phase-locking of visual speech in auditory regions only occurred during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.
    SIGNIFICANCE STATEMENT Verbal communication in noisy environments is challenging, especially for hearing-impaired individuals. Seeing facial movements of communication partners improves speech perception when auditory signals are degraded or absent. The neural mechanisms supporting lip-reading or audio-visual benefit are not fully understood. Using MEG recordings and partial coherence analysis, we show that speech information is used differently in brain regions that respond to auditory and visual speech. While visual areas use visual speech to improve phase-locking to auditory speech signals, auditory areas do not show phase-locking to visual speech unless auditory speech is absent and visual speech is used to substitute for missing auditory signals. These findings highlight brain processes that combine visual and auditory signals to support speech understanding.
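    For reference, the standard single-conditioner form of partial coherence that such analyses rely on (a generic textbook definition, not the paper's specific implementation): with cross-spectral densities S_{xy}(f), the linear contribution of a third signal z (e.g. the correlated visual envelope) is removed from the coherence between a neural signal x and the auditory envelope y via

\[
S_{xy \mid z}(f) = S_{xy}(f) - \frac{S_{xz}(f)\, S_{zy}(f)}{S_{zz}(f)}, \qquad
C_{xy \mid z}(f) = \frac{\lvert S_{xy \mid z}(f) \rvert^{2}}{S_{xx \mid z}(f)\, S_{yy \mid z}(f)} .
\]

    Above-chance values of C_{xy|z}(f) therefore indicate phase-locking to y that cannot be explained by the shared dependence of x and y on z.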