103 research outputs found

    Dissociable neural correlates of multisensory coherence and selective attention

    Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using electroencephalography (EEG) while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelopes of the two competing auditory streams changed independently, while the radius of a visual disc was manipulated to control audiovisual coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related potential (ERP) evoked by the transient deviants, independently of AV coherence. Finally, in an exploratory analysis, we identified a spatiotemporal component of the ERP in which temporal coherence enhanced the deviant-evoked responses only in the unattended stream. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation.

    Significance Statement: Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate AV coherence and auditory selective attention. While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli. We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on AV object formation.

    Omission responses in local field potentials in rat auditory cortex

    Background: Non-invasive recordings of gross neural activity in humans often show responses to omitted stimuli in steady trains of identical stimuli. This has been taken as evidence for the neural coding of prediction or prediction error. However, evidence for such omission responses from invasive recordings of cellular-scale responses in animal models is scarce. Here, we sought to characterise omission responses using extracellular recordings in the auditory cortex of anaesthetised rats. We profiled omission responses across local field potentials (LFP), analogue multiunit activity (AMUA), and single/multi-unit spiking activity, using stimuli that were fixed-rate trains of acoustic noise bursts in which 5% of bursts were randomly omitted. Results: Significant omission responses were observed in LFP and AMUA signals, but not in spiking activity. These omission responses had a lower amplitude and longer latency than burst-evoked sensory responses, and omission response amplitude increased as a function of the number of preceding bursts. Conclusions: Together, our findings show that omission responses are most robustly observed in LFP and AMUA signals (relative to spiking activity). This has implications for models of cortical processing that require many neurons to encode prediction errors in their spike output.
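    The stimulus schedule described above (a fixed-rate burst train in which 5% of bursts are randomly omitted) can be sketched as follows. This is an illustrative reconstruction, not the study's actual code; the function name and parameters are assumptions.

```python
import random

def make_burst_train(n_bursts=1000, omission_prob=0.05, seed=0):
    """Generate a presence/absence schedule for a fixed-rate burst train
    in which a small fraction of bursts is randomly omitted.

    Returns a list of booleans: True = burst presented, False = omitted.
    """
    rng = random.Random(seed)
    return [rng.random() >= omission_prob for _ in range(n_bursts)]

train = make_burst_train()
omitted = train.count(False)
# Roughly 5% of bursts are omitted; the exact count varies with the seed.
```

    Because omissions are rare and unpredictable, each one can serve as a probe for prediction-related activity at a time point with no acoustic input.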

    “What” and “when” predictions modulate auditory processing in a mutually congruent manner

    Introduction: Extracting regularities from ongoing stimulus streams to form predictions is crucial for adaptive behavior. Such regularities exist in terms of the content of the stimuli and their timing, both of which are known to interactively modulate sensory processing. In real-world stimulus streams such as music, regularities can occur at multiple levels, both in terms of contents (e.g., predictions relating to individual notes vs. their more complex groups) and timing (e.g., pertaining to timing between intervals vs. the overall beat of a musical phrase). However, it is unknown whether the brain integrates predictions in a manner that is mutually congruent (e.g., if “beat” timing predictions selectively interact with “what” predictions falling on pulses which define the beat), and whether integrating predictions in different timing conditions relies on dissociable neural correlates. Methods: To address these questions, our study manipulated “what” and “when” predictions at different levels – (local) interval-defining and (global) beat-defining – within the same stimulus stream, while neural activity was recorded using electroencephalogram (EEG) in participants (N = 20) performing a repetition detection task. Results: Our results reveal that temporal predictions based on beat or interval timing modulated mismatch responses to violations of “what” predictions happening at the predicted time points, and that these modulations were shared between types of temporal predictions in terms of the spatiotemporal distribution of EEG signals. Effective connectivity analysis using dynamic causal modeling showed that the integration of “what” and “when” predictions selectively increased connectivity at relatively late cortical processing stages, between the superior temporal gyrus and the fronto-parietal network. 
Discussion: Taken together, these results suggest that the brain integrates different predictions with a high degree of mutual congruence, but in a shared and distributed cortical network. This finding contrasts with recent studies indicating separable mechanisms for beat-based and memory-based predictive processing.

    Emergence of Tuning to Natural Stimulus Statistics along the Central Auditory Pathway

    We have previously shown that neurons in primary auditory cortex (A1) of anaesthetized (ketamine/medetomidine) ferrets respond more strongly and reliably to dynamic stimuli whose statistics follow "natural" 1/f dynamics than to stimuli exhibiting pitch and amplitude modulations that are faster (1/f^0.5) or slower (1/f^2) than 1/f. To investigate where along the central auditory pathway this 1/f-modulation tuning arises, we have now characterized responses of neurons in the central nucleus of the inferior colliculus (ICC) and the ventral division of the medial geniculate nucleus of the thalamus (MGV) to 1/f^γ distributed stimuli with γ varying between 0.5 and 2.8. We found that, while the great majority of neurons recorded from the ICC showed a strong preference for the most rapidly varying (1/f^0.5 distributed) stimuli, responses from MGV neurons did not exhibit marked or systematic preferences for any particular γ exponent. Only in A1 did a majority of neurons respond with higher firing rates to stimuli in which γ takes values near 1. These results indicate that 1/f tuning emerges at forebrain levels of the ascending auditory pathway.
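    A 1/f^γ-distributed signal of the kind described above can be generated by shaping the amplitude spectrum of random-phase noise so that power falls off as 1/f^γ. This is a generic sketch under that standard spectral-shaping approach, not the authors' stimulus-generation code; all names and parameters are illustrative.

```python
import numpy as np

def one_over_f_samples(n, gamma, seed=0):
    """Generate a time series of length n whose power spectrum ~ 1/f^gamma.
    gamma = 1 gives 'natural' 1/f statistics; smaller gamma varies faster,
    larger gamma varies more slowly. Output is normalized to unit variance."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    # amplitude ~ f^(-gamma/2) so that power ~ 1/f^gamma; DC stays at zero
    amp[1:] = freqs[1:] ** (-gamma / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    return x / x.std()

fast = one_over_f_samples(4096, 0.5)      # rapidly varying (1/f^0.5)
natural = one_over_f_samples(4096, 1.0)   # "natural" 1/f statistics
```

    In the study, such trajectories controlled pitch and amplitude modulations; here they are plain sample sequences for illustration.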

    The Changing Landscape for Stroke Prevention in AF: Findings From the GLORIA-AF Registry Phase 2

    Background GLORIA-AF (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients with Atrial Fibrillation) is a prospective, global registry program describing antithrombotic treatment patterns in patients with newly diagnosed nonvalvular atrial fibrillation at risk of stroke. Phase 2 began when dabigatran, the first non-vitamin K antagonist oral anticoagulant (NOAC), became available. Objectives This study sought to describe phase 2 baseline data and compare these with the pre-NOAC era collected during phase 1. Methods During phase 2, 15,641 consenting patients were enrolled (November 2011 to December 2014); 15,092 were eligible. This pre-specified cross-sectional analysis describes eligible patients' baseline characteristics. Atrial fibrillation disease characteristics, medical outcomes, and concomitant diseases and medications were collected. Data were analyzed using descriptive statistics. Results Of the total patients, 45.5% were female; median age was 71 (interquartile range: 64, 78) years. Patients were from Europe (47.1%), North America (22.5%), Asia (20.3%), Latin America (6.0%), and the Middle East/Africa (4.0%). Most had high stroke risk (CHA2DS2-VASc [Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, previous Stroke, Vascular disease, Age 65 to 74 years, Sex category] score ≥2; 86.1%); 13.9% had moderate risk (CHA2DS2-VASc = 1). Overall, 79.9% received oral anticoagulants, of whom 47.6% received NOAC and 32.3% vitamin K antagonists (VKA); 12.1% received antiplatelet agents; 7.8% received no antithrombotic treatment. For comparison, the proportion of phase 1 patients (of N = 1,063 all eligible) prescribed VKA was 32.8%, acetylsalicylic acid 41.7%, and no therapy 20.2%. In Europe in phase 2, treatment with NOAC was more common than VKA (52.3% and 37.8%, respectively); 6.0% of patients received antiplatelet treatment; and 3.8% received no antithrombotic treatment.
In North America, 52.1%, 26.2%, and 14.0% of patients received NOAC, VKA, and antiplatelet drugs, respectively; 7.5% received no antithrombotic treatment. NOAC use was less common in Asia (27.7%), where 27.5% of patients received VKA, 25.0% antiplatelet drugs, and 19.8% no antithrombotic treatment. Conclusions The baseline data from GLORIA-AF phase 2 demonstrate that in newly diagnosed nonvalvular atrial fibrillation patients, NOACs have been rapidly adopted into practice, becoming more frequently prescribed than VKA in Europe and North America. Worldwide, however, a large proportion of patients remain undertreated, particularly in Asia and North America. (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients With Atrial Fibrillation [GLORIA-AF]; NCT01468701.)

    GenderIncongruentMcGurk


    NetworkReceptiveFields

    Data files relating to the paper "Network receptive field modeling reveals extensive integration and multi-feature selectivity in auditory cortical neurons". Authors: Nicol S. Harper1,2*, Oliver Schoppe1,3*, Ben D. B. Willmore1, Zhanfeng F. Cui2, Jan W. H. Schnupp4,1, Andrew J. King1. 1 Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK. 2 Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, Oxford OX3 7DQ, UK. 3 Bio-Inspired Information Processing, Technische Universität München, Boltzmannstr. 11, 85748 Garching, Germany. 4 Department of Biomedical Science, City University of Hong Kong, 31 To Yuen Street, 2/F 1B-202, Kowloon Tong, Hong Kong. * Equal contribution.

    On the processing of vowels in the mammalian auditory system

    The mammalian auditory system generates representations of the physical world in terms of auditory objects. To decide which object class a particular sound belongs to, the auditory system must recognise the patterns of acoustic components that form the acoustic “fingerprint” of the sound’s auditory class. Where in the central auditory system such patterns are detected, and what form the neural processing underlying their detection takes, are unanswered questions in sensory neurophysiology. In the research conducted for this thesis I used artificial vowel sounds to explore the neural and perceptual characteristics of auditory object recognition in rats. I recorded cortical responses from the primary auditory cortex (A1) in anaesthetised rats and determined how well the spiking responses, evoked by artificial vowels, resolve the spectral components that define vowel classes in human perception. The recognition of an auditory class rests on the ability to detect the combination of spectral components that all member sounds of the class share. I generated and evaluated models of how A1 responses integrate the acoustic components that define human vowel classes. The hippocampus is a candidate area for neural responses that are specific to particular object classes. In this thesis I also report the results of a collaboration during which we investigated how the hippocampus responds to vowels in awake behaving animals. Finally, I explored the processing of vowels behaviourally, testing the perceptual ability of rats to discriminate and classify vowels, and in particular whether rats use combinations of spectral components to recognise members of vowel classes. For the behavioural training I built a novel integrated housing and training cage that allows rats to train themselves in auditory recognition tasks.
Combining the results and methods presented in this thesis will help reveal how the mammalian auditory system recognises auditory objects.

    The unity hypothesis revisited: can the gender incongruent McGurk effect be disrupted by priming?

    The “unity assumption hypothesis” contends that higher-level factors, such as a perceiver’s beliefs and prior experience, modulate multisensory integration. The McGurk illusion exemplifies such integration: when a visual velar /ga/ is dubbed with an auditory bilabial /ba/, listeners unify the discrepant signals with the knowledge that open lips cannot produce /ba/, and a fusion percept /da/ is perceived. Previous research claimed to have falsified this theory by demonstrating that the McGurk effect occurs even when a face is dubbed with a gender-incongruent voice. But perhaps stronger evidence than a merely apparent incongruence between unfamiliar faces and voices is needed to prevent perceptual unity. Here we investigated whether the McGurk illusion with gender-incongruent stimuli can be disrupted by priming with the appropriate pairing of face and voice. In an online experiment, 89 participants aged 18-62 were randomly allocated to experimental trials containing either a male or female face with a gender-incongruent voice. The number of times participants experienced a McGurk illusion was measured before and after a training block that familiarized them with the true pairings of face and voice. After training and priming, susceptibility to the McGurk effect decreased significantly on average. The findings support the notion that unity assumptions modulate intersensory bias, and confirm and extend previous studies using gender-incongruent McGurk stimuli.

    Neural Resolution of Formant Frequencies in the Primary Auditory Cortex of Rats.

    Pulse-resonance sounds play an important role in animal communication and auditory object recognition, yet very little is known about the cortical representation of this class of sounds. In this study we shine light on one simple aspect: how well the firing rate of cortical neurons resolves resonant ("formant") frequencies of vowel-like pulse-resonance sounds. We recorded neural responses in the primary auditory cortex (A1) of anesthetized rats to two-formant pulse-resonance sounds, and estimated their formant resolving power using a statistical kernel smoothing method which takes into account the natural variability of cortical responses. While formant-tuning functions were diverse in structure across different penetrations, most were sensitive to changes in formant frequency, with a frequency resolution comparable to that reported for rat cochlear filters.
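    The paper's exact kernel smoothing procedure is not reproduced here. As an illustration of the general idea, the following sketch estimates a smooth formant-tuning function from noisy trial-wise firing rates using Nadaraya-Watson kernel regression on simulated data; the peak location, bandwidth, and noise model are all assumptions for the example.

```python
import numpy as np

def kernel_smoothed_tuning(freqs, rates, grid, bandwidth):
    """Nadaraya-Watson kernel regression: estimate mean firing rate as a
    smooth function of formant frequency from noisy trial-by-trial data."""
    g = np.asarray(grid, dtype=float)[:, None]          # (n_grid, 1)
    d = (g - np.asarray(freqs, dtype=float)[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)                           # Gaussian kernel weights
    return (w @ np.asarray(rates, dtype=float)) / w.sum(axis=1)

# Simulated example: firing rate peaks near a 1.5 kHz formant, Poisson spike counts.
rng = np.random.default_rng(1)
f = rng.uniform(0.5, 3.0, 200)                          # formant frequencies (kHz)
r = rng.poisson(10 * np.exp(-((f - 1.5) ** 2) / 0.1))   # noisy spike counts
grid = np.linspace(0.5, 3.0, 50)
tuning = kernel_smoothed_tuning(f, r, grid, bandwidth=0.2)
# The smoothed tuning curve should peak near 1.5 kHz despite the trial noise.
```

    Smoothing against trial-to-trial variability in this way is what allows a frequency resolution to be read off the tuning function rather than from raw, noisy responses.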