
    Speech rhythms and multiplexed oscillatory sensory coding in the human brain

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right auditory cortex, and amplitude entrainment is stronger in the left. Furthermore, edges in the speech envelope phase-reset auditory cortex oscillations, thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech rely on a nested hierarchy of entrained cortical oscillations.
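    The two couplings this abstract describes can be quantified with standard signal-processing measures: a phase-locking value (PLV) between the speech envelope and band-limited cortical phase, and a mean-vector-length estimate of theta-gamma phase-amplitude coupling (PAC). Below is a minimal, self-contained sketch on synthetic signals; the sampling rate, band edges, and toy data are illustrative assumptions, not the authors' MEG pipeline.

```python
# Toy demonstration of phase entrainment (speech-brain phase locking) and
# theta-gamma phase-amplitude coupling. All signals here are synthetic.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                 # 60 s of simulated recording
rng = np.random.default_rng(0)
envelope = 1 + np.sin(2 * np.pi * 5 * t)     # toy speech envelope, ~5 Hz (theta)
meg = np.sin(2 * np.pi * 5 * t + 0.3)        # cortical theta tracking the envelope
meg += 0.5 * envelope * np.sin(2 * np.pi * 60 * t)  # gamma amplitude tied to theta
meg += 0.5 * rng.standard_normal(t.size)     # measurement noise

def band(x, lo, hi):
    """Zero-phase band-pass between lo and hi Hz."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Phase entrainment: phase-locking value between envelope and MEG theta phase.
phi_env = np.angle(hilbert(band(envelope, 3, 7)))
phi_meg = np.angle(hilbert(band(meg, 3, 7)))
plv = np.abs(np.mean(np.exp(1j * (phi_env - phi_meg))))

# Theta-gamma coupling: normalized mean vector length of gamma amplitude
# sampled at theta phase (Canolty-style modulation index).
amp_gamma = np.abs(hilbert(band(meg, 50, 70)))
pac = np.abs(np.mean(amp_gamma * np.exp(1j * phi_meg))) / np.mean(amp_gamma)
print(f"PLV = {plv:.2f}, PAC = {pac:.2f}")
```

    In the abstract's terms, high values of both measures for forward speech, and lower values for reversed speech, would correspond to the reported attenuation of brain-speech and within-cortex couplings.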

    Adjustment of interaural-time-difference analysis to sound level

    To localize low-frequency sound sources in azimuth, the binaural system compares the timing of sound waves at the two ears with microsecond precision. A similarly high precision is also seen in the binaural processing of the envelopes of high-frequency complex sounds. Both for low- and high-frequency sounds, interaural time difference (ITD) acuity is to a large extent independent of sound level. The mechanisms underlying this level-invariant extraction of ITDs by the binaural system are, however, only poorly understood. We use high-frequency pip trains with asymmetric and dichotic pip envelopes in a combined psychophysical, electrophysiological, and modeling approach. Although the dichotic envelopes cannot be physically matched in terms of ITD, the match produced perceptually by humans is very reliable, and it depends systematically on the overall sound level. These data are reflected in neural responses from the gerbil lateral superior olive and lateral lemniscus. The results are predicted by an existing temporal-integration model extended with a level-dependent threshold criterion. These data provide a very sensitive quantification of how the peripheral temporal code is conditioned for binaural analysis.
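    The core of the model extension is easy to state: integrate each ear's envelope with a leaky integrator and read out the first time a threshold is crossed, with the threshold criterion depending on overall level. The sketch below is a toy illustration of that idea only; the envelope shapes, time constant, and level scaling are assumptions, not the published model's parameters.

```python
# Toy temporal-integration readout with a level-dependent threshold.
import numpy as np

fs = 100_000.0                     # sampling rate in Hz
t = np.arange(0.0, 0.005, 1 / fs)  # one 5-ms pip

def pip_envelope(rise_ms):
    """Asymmetric pip envelope: linear rise over rise_ms, then silence."""
    env = np.clip(t / (rise_ms * 1e-3), 0.0, 1.0)
    env[t > 4e-3] = 0.0
    return env

def first_crossing(env, level_db, tau=0.5e-3, frac=0.15):
    """Leaky integration of the envelope; return the first threshold
    crossing. The threshold scales with overall level, which stands in
    for the level-dependent criterion the abstract describes."""
    alpha = np.exp(-1.0 / (fs * tau))
    integ = np.zeros_like(env)
    for i in range(1, env.size):
        integ[i] = alpha * integ[i - 1] + (1 - alpha) * env[i]
    thresh = frac * integ.max() * (60.0 / level_db)  # lower threshold at high levels
    return t[np.argmax(integ >= thresh)]

# Dichotic envelopes: slow rise at one ear, fast rise at the other. The
# effective envelope ITD shifts with level because the crossing times do.
for level in (40.0, 60.0, 80.0):
    itd = first_crossing(pip_envelope(2.0), level) - first_crossing(pip_envelope(0.5), level)
    print(f"{level:.0f} dB: effective envelope ITD ~ {itd * 1e6:.0f} µs")
```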

    Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus

    Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or on phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, low-BF units (<4 kHz) demonstrate good phase locking to TFS for modulation depths within the receptive field. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units, and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues.
    SIGNIFICANCE STATEMENT: Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications, as robust sound source recognition depends crucially on the reception of low-rate FM cues. Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights into how this might be implemented in the early stages of the auditory pathway.
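    Phase locking to either cue is conventionally quantified with vector strength: the resultant length of unit vectors placed at each spike's phase relative to a reference oscillation (the instantaneous carrier phase for TFS, the modulator phase for ENV). The sketch below computes both on simulated spikes; the stimulus parameters and the perfectly cycle-locked spikes are illustrative assumptions.

```python
# Toy vector-strength computation for a sine FM tone: locking to the
# temporal fine structure (TFS) versus the envelope/modulator (ENV).
import numpy as np

fs = 100_000.0
t = np.arange(0.0, 1.0, 1 / fs)
fc, fm, depth = 1_000.0, 10.0, 200.0     # carrier (Hz), mod rate (Hz), depth (Hz)
# Instantaneous phase of a sine FM tone: the integral of 2*pi*f(t).
inst_phase = 2 * np.pi * fc * t - (depth / fm) * np.cos(2 * np.pi * fm * t)

# Simulated spikes: one spike at the start of each fine-structure cycle.
cycle = np.floor(inst_phase / (2 * np.pi))
spikes = t[np.flatnonzero(np.diff(cycle) > 0) + 1]

def vector_strength(spike_times, phase_of):
    """Resultant length of unit vectors at each spike's reference phase."""
    ph = phase_of(spike_times)
    return np.abs(np.mean(np.exp(1j * ph)))

vs_tfs = vector_strength(spikes, lambda s: np.interp(s, t, inst_phase))
vs_env = vector_strength(spikes, lambda s: 2 * np.pi * fm * s)
print(f"VS to TFS = {vs_tfs:.2f}, VS to envelope = {vs_env:.2f}")
```

    A TFS-locked unit, like the simulated one here, scores near 1 on the fine-structure reference and low on the modulator reference; an ENV-following unit would show the opposite pattern.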

    Temporal modulation transfer functions in the European Starling (Sturnus vulgaris): II. Responses of auditory-nerve fibres

    The temporal resolution of cochlear-nerve fibres in the European starling was determined with sinusoidally amplitude-modulated noise stimuli similar to those previously used in a psychoacoustic study in this species (Klump and Okanoya, 1991). Temporal modulation transfer functions (TMTFs) were constructed for cochlear afferents, allowing a direct comparison with the starling's behavioural performance. On average, the neurons' detection of modulation was less sensitive than that obtained in the behavioural experiments, although the most sensitive cells approached the values determined psychophysically. The shapes of the neural TMTFs generally resembled low-pass or band-pass filter functions, and the shapes of the averaged neural functions were very similar to those obtained in the behavioural study for two different types of stimuli (gated and continuous carrier). Minimum integration times calculated from the upper cut-off frequency of the neural TMTFs had a median of 0.97 ms with a range of 0.25 to 15.9 ms. The relations between the minimum integration times and the tuning characteristics of the cells (tuning-curve bandwidth, Q10dB value, high- and low-frequency slopes of the tuning curves) are discussed. Finally, we compare the TMTF data recorded in the starling auditory nerve with data from neurophysiological and behavioural observations on temporal resolution using other experimental paradigms in this and other vertebrate species.
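    A common convention for deriving a minimum integration time from a TMTF treats the function as a first-order low-pass filter and takes the time constant at the upper cut-off frequency. Assuming that convention is the one used here, the relation and the cut-off implied by the reported median are:

```latex
\tau_{\min} = \frac{1}{2\pi f_{c}}, \qquad
\tau_{\min} = 0.97\,\mathrm{ms} \;\Longleftrightarrow\; f_{c} \approx 164\,\mathrm{Hz}
```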

    Applicability of subcortical EEG metrics of synaptopathy to older listeners with impaired audiograms

    Emerging evidence suggests that cochlear synaptopathy is a common feature of sensorineural hearing loss, but it is not known to what extent electrophysiological metrics targeting synaptopathy in animals can be applied to people, such as those with impaired audiograms. This study investigates the applicability of subcortical electrophysiological measures associated with synaptopathy, i.e., auditory brainstem responses (ABRs) and envelope following responses (EFRs), to older participants with high-frequency sloping audiograms. The outcomes of this study are important for the development of reliable and sensitive synaptopathy diagnostics in people with normal or impaired outer-hair-cell function. Click-ABRs at different sound pressure levels and EFRs to amplitude-modulated stimuli were recorded, as well as relative EFR and ABR metrics, which reduce the influence of individual factors such as head size and noise-floor level on the measures. Most tested metrics showed significant differences between the groups but did not always follow the trends expected from synaptopathy. Age was not a reliable predictor for the electrophysiological metrics in either the older hearing-impaired group or the young normal-hearing control group. This study contributes to a better understanding of how electrophysiological synaptopathy metrics differ between ears with healthy and impaired audiograms, which is an important first step towards unravelling the perceptual consequences of synaptopathy.
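    An EFR metric of the kind described here is typically obtained by averaging EEG epochs and reading the spectral magnitude at the stimulus modulation frequency against the surrounding noise floor. The following sketch shows that readout on synthetic epochs; the modulation rate, epoch count, and amplitudes are assumptions, and none of the study's relative metrics are reproduced.

```python
# Toy EFR readout: average epochs, then take the FFT magnitude at the
# modulation frequency versus the neighboring-bin noise floor.
import numpy as np

fs, fm, dur = 16_384.0, 110.0, 1.0         # sampling rate, AM rate (Hz), epoch (s)
t = np.arange(0.0, dur, 1 / fs)
rng = np.random.default_rng(0)
n_epochs = 200
# Phase-locked response plus independent noise in every epoch.
epochs = 0.05 * np.sin(2 * np.pi * fm * t) + rng.standard_normal((n_epochs, t.size))

avg = epochs.mean(axis=0)                  # averaging suppresses non-locked noise
spec = 2 * np.abs(np.fft.rfft(avg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = int(np.argmin(np.abs(freqs - fm)))     # bin at the modulation frequency
side = np.r_[spec[k - 10:k], spec[k + 1:k + 11]]
efr, noise = spec[k], side.mean()
print(f"EFR = {efr:.3f}, noise floor = {noise:.3f}, "
      f"SNR = {20 * np.log10(efr / noise):.1f} dB")
```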

    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches have begun to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.

    On the mechanism of response latencies in auditory nerve fibers

    Despite the structural differences of the middle and inner ears, the latency pattern of auditory nerve fibers in response to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with markedly distinct cochleae or without a basilar membrane. This stimulus-, neuron-, and species-independent similarity of latency cannot be simply explained by the concept of cochlear traveling waves that is generally accepted as the main cause of the neural latency pattern. An original concept, the Fourier pattern, is defined, intended to characterize a feature of temporal processing (specifically phase encoding) that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum of each sinusoid component of the stimulus, thereby encoding phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern. A combination of experimental, correlational, and meta-analytic approaches is used to test the hypothesis. Manipulations of phase encoding and of the stimuli tested their effects on the predicted latency pattern. Animal studies in the literature using the same stimulus were then compared to determine the degree of relationship. The results show that each marking accounts for a large percentage of a corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fibers of representative species. The results suggest that the hearing organ analyzes not only the amplitude spectrum but also phase information in Fourier analysis, distributing the specific spikes among auditory nerve fibers and within a single unit. This phase-encoding mechanism is proposed to be the common mechanism that, in the face of species differences in peripheral auditory hardware, accounts for the considerable similarities across species in their latency-by-frequency functions, in turn assuring optimal phase encoding across species. The mechanism also has the potential to improve phase encoding in cochlear implants.
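    The construction of the pattern, as described, is mechanical: take the Fourier decomposition of the stimulus and, for each component, mark the time at which that sinusoid first reaches its amplitude maximum, a time fixed by the component's frequency and starting phase. The sketch below implements that marking on a two-tone stimulus; the stimulus, sampling rate, and the 10% amplitude criterion for "strong" components are illustrative assumptions.

```python
# Toy construction of the "Fourier pattern": mark the first amplitude
# maximum of each strong sinusoid component of a stimulus.
import numpy as np

fs = 20_000.0
t = np.arange(0.0, 0.02, 1 / fs)           # 20-ms stimulus
stim = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t + 1.0)

spec = np.fft.rfft(stim)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp, phase = np.abs(spec), np.angle(spec)

# A component a*cos(2*pi*f*t + phi) first peaks where 2*pi*f*t + phi = 0 (mod 2*pi).
strong = (freqs > 0) & (amp > 0.1 * amp.max())
first_max = (-phase[strong] % (2 * np.pi)) / (2 * np.pi * freqs[strong])
for f, tp in zip(freqs[strong], first_max):
    print(f"{f:6.0f} Hz component: first maximum at {tp * 1e3:.3f} ms")
```

    Comparing such marking times against peak latencies in peristimulus-time histograms is the correlation the abstract reports.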

    Neural Representations of Courtship Song in the Drosophila Brain

    Acoustic communication in drosophilid flies is based on the production and perception of courtship songs, which facilitate mating. Despite decades of research on courtship songs and behavior in Drosophila, central auditory responses have remained uncharacterized. In this study, we report on intracellular recordings from central neurons that innervate the Drosophila antennal mechanosensory and motor center (AMMC), the first relay for auditory information in the fly brain. These neurons produce graded-potential (nonspiking) responses to sound; we compare recordings from AMMC neurons to extracellular recordings of the receptor neuron population [Johnston's organ neurons (JONs)]. We discover that, while steady-state response profiles for tonal and broadband stimuli are significantly transformed between the JON population in the antenna and AMMC neurons in the brain, transient responses to pulses present in natural stimuli (courtship song) are not. For pulse stimuli in particular, AMMC neurons simply low-pass filter the receptor population response, thus preserving low-frequency temporal features (such as the spacing of song pulses) for analysis by postsynaptic neurons. We also compare responses in two closely related Drosophila species, Drosophila melanogaster and Drosophila simulans, and find that pulse song responses are largely similar, despite differences in the spectral content of their songs. Our recordings inform how downstream circuits may read out behaviorally relevant information from central neurons in the AMMC.
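    The low-pass-filtering account in this abstract has a simple signal-level reading: a filter whose cutoff sits well above the song's pulse rate smooths a pulse-locked population response while keeping the pulse spacing recoverable. The sketch below illustrates this on a toy pulse train; the inter-pulse interval, pulse width, and 100-Hz cutoff are assumptions, not measured values.

```python
# Toy illustration: low-pass filtering a pulse-locked "receptor population"
# response preserves the inter-pulse interval of the song.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 10_000.0
t = np.arange(0.0, 0.5, 1 / fs)
ipi = 0.035                                     # ~35-ms inter-pulse interval (assumed)
response = np.where(t % ipi < 0.003, 1.0, 0.0)  # brief response to each song pulse

b, a = butter(2, 100.0, fs=fs)                  # 2nd-order 100-Hz low-pass stand-in
smoothed = filtfilt(b, a, response)             # zero-phase filtering

# The pulse spacing survives the filtering: read it from the peak times.
peaks, _ = find_peaks(smoothed, height=0.5 * smoothed.max())
print(f"recovered inter-peak interval ~ {np.median(np.diff(t[peaks])) * 1e3:.1f} ms")
```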