296 research outputs found
Specificity of the human frequency following response for carrier and modulation frequency assessed using adaptation
The frequency following response (FFR) is a scalp-recorded measure of phase-locked brainstem activity to stimulus-related periodicities. Three experiments investigated the specificity of the FFR for carrier and modulation frequency using adaptation. FFR waveforms evoked by alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope information, or subtracted, to enhance temporal fine structure information. The first experiment investigated peristimulus adaptation of the FFR for pure and complex tones as a function of stimulus frequency and fundamental frequency (F0). It showed more adaptation of the FFR in response to sounds with higher frequencies or F0s than to sounds with lower frequencies or F0s. The second experiment investigated tuning to modulation rate in the FFR. The FFR to a complex tone with a modulation rate of 213 Hz was not reduced more by an adaptor that had the same modulation rate than by an adaptor with a different modulation rate (90 or 504 Hz), thus providing no evidence that the FFR originates mainly from neurons that respond selectively to the modulation rate of the stimulus. The third experiment investigated tuning to audio frequency in the FFR using pure tones. An adaptor that had the same frequency as the target (213 or 504 Hz) did not generally reduce the FFR to the target more than an adaptor that differed in frequency (by 1.24 octaves). Thus, there was no evidence that the FFR originated mainly from neurons tuned to the frequency of the target. Instead, the results are consistent with the suggestion that the FFR for low-frequency pure tones at medium to high levels mainly originates from neurons tuned to higher frequencies. Implications for the use and interpretation of the FFR are discussed.
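The add/subtract step for alternating-polarity averages can be sketched in Python with NumPy. This is an illustration of the standard technique with hypothetical array names, not the authors' analysis code:

```python
import numpy as np

def ffr_components(avg_pos, avg_neg):
    """Combine FFR averages to the two stimulus polarities.

    Adding the two polarity averages cancels polarity-inverting
    (fine-structure) activity and enhances the envelope-following
    response; subtracting cancels the envelope-following response
    and enhances the temporal-fine-structure-following response.
    """
    avg_pos = np.asarray(avg_pos, dtype=float)
    avg_neg = np.asarray(avg_neg, dtype=float)
    env = (avg_pos + avg_neg) / 2.0  # envelope (ENV) component
    tfs = (avg_pos - avg_neg) / 2.0  # fine-structure (TFS) component
    return env, tfs
```

Halving after adding or subtracting keeps the components on the same amplitude scale as the single-polarity averages.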
The binaural masking level difference: cortical correlates persist despite severe brain stem atrophy in progressive supranuclear palsy.
Under binaural listening conditions, the detection of target signals within background masking noise is substantially improved when the interaural phase of the target differs from that of the masker. Neural correlates of this binaural masking level difference (BMLD) have been observed in the inferior colliculus and temporal cortex, but it is not known whether degeneration of the inferior colliculus would result in a reduction of the BMLD in humans. We used magnetoencephalography to examine the BMLD in 13 healthy adults and 13 patients with progressive supranuclear palsy (PSP). PSP is associated with severe atrophy of the upper brain stem, including the inferior colliculus, confirmed by voxel-based morphometry of structural MRI. Stimuli comprised in-phase sinusoidal tones presented to both ears at three levels (high, medium, and low) masked by in-phase noise, which rendered the low-level tone inaudible. Critically, the BMLD was measured using a low-level tone presented in opposite phase across ears, making it audible against the noise. The cortical waveforms from bilateral auditory sources revealed significantly larger N1m peaks for the out-of-phase low-level tone compared with the in-phase low-level tone, for both groups, indicating preservation of early cortical correlates of the BMLD in PSP. In PSP a significant delay was observed in the onset of the N1m deflection and the amplitude of the P2m was reduced, but these differences were not restricted to the BMLD condition. The results demonstrate that although PSP causes subtle auditory deficits, binaural processing can survive the presence of significant damage to the upper brain stem.

This work has been supported by the Wellcome Trust (Grants 088324 and 088263); Medical Research Council (G0700503 to B. C. P. Ghosh); Guarantors of Brain (to B. C. P. Ghosh); Raymond and Beverley Sackler Trust (to B. C. P. Ghosh); and National Institute of Health Research Cambridge Comprehensive Biomedical Research Centre including the Cambridge Brain Bank. This is the final version of the article. It first appeared from American Physiological Society via http://dx.doi.org/10.1152/jn.00062.201
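The diotic-noise/antiphasic-tone contrast at the heart of the BMLD (often written NoSo vs. NoSπ) can be sketched as follows. Parameter names and values are illustrative, not those used in the study:

```python
import numpy as np

def make_bmld_trial(fs=44100, dur=0.5, tone_hz=500.0, tone_db=-10.0,
                    antiphasic=True, seed=0):
    """Sketch of an NoSo / NoSpi stimulus pair for a BMLD experiment.

    The same Gaussian noise masker ('No') goes to both ears; the target
    tone is either in phase at the two ears ('So') or inverted in one
    ear ('Spi'), which makes a low-level tone audible against the noise.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    noise = rng.standard_normal(t.size)          # diotic masker
    noise /= np.max(np.abs(noise))
    tone = 10 ** (tone_db / 20) * np.sin(2 * np.pi * tone_hz * t)
    left = noise + tone
    right = noise + (-tone if antiphasic else tone)
    return np.stack([left, right])
```

Only the interaural phase of the tone differs between the two conditions; the monaural signals at each ear are otherwise statistically identical.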
Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization.
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization.

SIGNIFICANCE STATEMENT: Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it.
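The decoding approach (training a classifier on neural activity patterns to label periods of integrated vs. segregated perception) can be illustrated with a minimal nearest-centroid decoder in NumPy. The feature representation and labels here are hypothetical, not the study's actual MEG pipeline:

```python
import numpy as np

def fit_centroids(X, y):
    """Mean activity pattern per perceptual label (X: trials x features)."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def decode(X, centroids):
    """Assign each trial to the label with the nearest mean pattern."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[lab], axis=1)
                      for lab in labels])
    return np.array(labels)[dists.argmin(axis=0)]
```

Nearest-centroid decoding is one of the simplest multivariate pattern analyses; in practice cross-validation would be used so that decoding accuracy is assessed on held-out data.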
Teologija na tržištu [Theology in the Market]
One task intended to measure sensitivity to temporal fine structure (TFS) involves the discrimination of a harmonic complex tone from a tone in which all harmonics are shifted upwards by the same amount in hertz. Both tones are passed through a fixed bandpass filter centered on the high harmonics to reduce the availability of excitation-pattern cues and a background noise is used to mask combination tones. The role of frequency selectivity in this "TFS1" task was investigated by varying level. Experiment 1 showed that listeners performed more poorly at a high level than at a low level. Experiment 2 included intermediate levels and showed that performance deteriorated for levels above about 57 dB sound pressure level. Experiment 3 estimated the magnitude of excitation-pattern cues from the variation in forward masking of a pure tone as a function of frequency shift in the complex tones. There was negligible variation, except for the lowest level used. The results indicate that the changes in excitation level at threshold for the TFS1 task would be too small to be usable. The results are consistent with the TFS1 task being performed using TFS cues, and with frequency selectivity having an indirect effect on performance via its influence on TFS cues. (C) 2015 Acoustical Society of America.
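The harmonic vs. frequency-shifted complexes of the TFS1 task can be sketched as below. The bandpass filtering centered on the high harmonics and the background masking noise of the real task are omitted, and parameter values are illustrative:

```python
import numpy as np

def tfs1_complex(f0=200.0, shift_hz=0.0, n_harm=20, fs=44100, dur=0.2):
    """Harmonic complex (shift_hz = 0) or inharmonic complex in which
    every component is shifted upward by the same amount in hertz.

    Shifting all components by a fixed number of hertz preserves the
    component spacing (and hence the envelope repetition rate) but
    changes the temporal fine structure within each envelope period.
    """
    t = np.arange(int(fs * dur)) / fs
    freqs = np.arange(1, n_harm + 1) * f0 + shift_hz
    return np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)
```

Because the envelope rate is unchanged by the shift, discrimination of the two tones is taken as evidence that listeners can use TFS cues rather than envelope cues.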
Stream segregation in the anesthetized auditory cortex
Auditory stream segregation describes the way that sounds are perceptually segregated into groups or streams on the basis of perceptual attributes such as pitch or spectral content. For sequences of pure tones, segregation depends on the tones' proximity in frequency and time. In the auditory cortex (and elsewhere) responses to sequences of tones are dependent on stimulus conditions in a similar way to the perception of these stimuli. However, although highly dependent on stimulus conditions, perception is also clearly influenced by factors unrelated to the stimulus, such as attention. Exactly how ‘bottom-up’ sensory processes and non-sensory ‘top-down’ influences interact is still not clear.
Here, we recorded responses to alternating tones (ABAB …) of varying frequency difference (FD) and rate of presentation (PR) in the auditory cortex of anesthetized guinea pigs. These data complement previous studies, in that top-down processing resulting from conscious perception should be absent or at least considerably attenuated.
Under anesthesia, the responses of cortical neurons to the tone sequences adapted rapidly, in a manner sensitive to both the FD and PR of the sequences. While the responses to tones at frequencies more distant from neuron best frequencies (BFs) decreased as the FD increased, the responses to tones near to BF increased, consistent with a release from adaptation, or forward suppression. Increases in PR resulted in reductions in responses to all tones, but the reduction was greater for tones further from BF. Although asymptotically adapted responses to tones showed behavior that was qualitatively consistent with perceptual stream segregation, responses reached asymptote within 2 s, and responses to all tones were very weak at high PRs (>12 tones per second).
A signal-detection model, driven by the cortical population response, made decisions that were dependent on both FD and PR in ways consistent with perceptual stream segregation. This included showing a range of conditions over which decisions could be made either in favor of perceptual integration or segregation, depending on the model ‘decision criterion’. However, the rate of ‘build-up’ was more rapid than seen perceptually, and at high PR responses to tones were sometimes so weak as to be undetectable by the model.
Under anesthesia, adaptation occurs rapidly, and at high PRs tones are generally poorly represented, which compromises the interpretation of the experiment. However, within these limitations, these results complement experiments in awake animals and humans. They generally support the hypothesis that ‘bottom-up’ sensory processing plays a major role in perceptual organization, and that processes underlying stream segregation are active in the absence of attention.
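A decision stage of the kind described, in which a statistic derived from the population response is compared against an adjustable criterion, might look like this toy sketch. The contrast measure and threshold here are assumptions for illustration, not the published model:

```python
def stream_decision(resp_a, resp_b, criterion=0.5):
    """Toy signal-detection readout for streaming.

    resp_a / resp_b: responses of a channel tuned to the A tone when
    the A and B tones are presented. A large normalized difference
    means the B tone barely drives that channel, favoring a
    'segregated' decision; raising the criterion biases the model
    toward 'integrated', mimicking a shift in decision criterion.
    """
    contrast = (resp_a - resp_b) / (resp_a + resp_b + 1e-12)
    return "segregated" if contrast > criterion else "integrated"
```

With a fixed pair of responses, moving the criterion alone can flip the decision, which is one way a single population response can support either percept over a range of conditions.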
The Effect of Visual Cues on Auditory Stream Segregation in Musicians and Non-Musicians
Background: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired. Methods: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing. Conclusions: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cue
Combination of Spectral and Binaurally Created Harmonics in a Common Central Pitch Processor
A fundamental attribute of human hearing is the ability to extract a residue pitch from harmonic complex sounds such as those produced by musical instruments and the human voice. However, the neural mechanisms that underlie this processing are unclear, as are the locations of these mechanisms in the auditory pathway. The ability to extract a residue pitch corresponding to the fundamental frequency from individual harmonics, even when the fundamental component is absent, has been demonstrated separately for conventional pitches and for Huggins pitch (HP), a stimulus without monaural pitch information. HP is created by presenting the same wideband noise to both ears, except for a narrowband frequency region where the noise is decorrelated across the two ears. The present study investigated whether residue pitch can be derived by combining a component derived solely from binaural interaction (HP) with a spectral component for which no binaural processing is required. Fifteen listeners indicated which of two sequentially presented sounds was higher in pitch. Each sound consisted of two “harmonics,” which independently could be either a spectral or a HP component. Component frequencies were chosen such that the relative pitch judgement revealed whether a residue pitch was heard or not. The results showed that listeners were equally likely to perceive a residue pitch when one component was dichotic and the other was spectral as when the components were both spectral or both dichotic. This suggests that there exists a single mechanism for the derivation of residue pitch from binaurally created components and from spectral components, and that this mechanism operates at or after the level of the dorsal nucleus of the lateral lemniscus (brainstem) or the inferior colliculus (midbrain), which receive inputs from the medial superior olive where temporal information from the two ears is first combined.
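A Huggins-pitch component of the kind described can be sketched by inverting the interaural phase of a narrow frequency band of an otherwise identical (diotic) noise. Parameter values are illustrative:

```python
import numpy as np

def huggins_pitch(fs=44100, dur=1.0, f0=600.0, bw=0.16, seed=0):
    """Sketch of a Huggins-pitch stimulus.

    Both ears receive the same wideband noise, except that a narrow
    band around f0 has its interaural phase inverted (decorrelated
    across ears). Neither ear's signal alone contains a pitch cue;
    binaural comparison yields a faint pitch near f0.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs > f0 * (1 - bw / 2)) & (freqs < f0 * (1 + bw / 2))
    spec_r = spec.copy()
    spec_r[band] *= -1                 # pi interaural phase shift in the band
    left = np.fft.irfft(spec, n)
    right = np.fft.irfft(spec_r, n)
    return np.stack([left, right])
```

Because only a narrow band is manipulated, the two ear signals remain highly correlated overall, and the long-term power spectrum at each ear is unchanged.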
Human Granulocytic Anaplasmosis and Anaplasma phagocytophilum
Understanding how Anaplasma phagocytophilum alters neutrophils will improve diagnosis, treatment, and prevention of this severe illness.
Pitch Comparisons between Electrical Stimulation of a Cochlear Implant and Acoustic Stimuli Presented to a Normal-hearing Contralateral Ear
Four cochlear implant users, having normal hearing in the unimplanted ear, compared the pitches of electrical and acoustic stimuli presented to the two ears. Comparisons were between 1,031-pps pulse trains and pure tones or between 12- and 25-pps electric pulse trains and bandpass-filtered acoustic pulse trains of the same rate. Three methods—pitch adjustment, constant stimuli, and interleaved adaptive procedures—were used. For all methods, we showed that the results can be strongly influenced by non-sensory biases arising from the range of acoustic stimuli presented, and proposed a series of checks that should be made to alert the experimenter to those biases. We then showed that the results of comparisons that survived these checks do not deviate consistently from the predictions of a widely-used cochlear frequency-to-place formula or of a computational cochlear model. We also demonstrate that substantial range effects occur with other widely used experimental methods, even for normal-hearing listeners.
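One widely used cochlear frequency-to-place formula is Greenwood's (1990) map for the human cochlea; the abstract does not name the formula it used, so this is offered as a plausible example rather than the study's actual function:

```python
def greenwood_cf(x, A=165.4, a=2.1, k=0.88):
    """Greenwood (1990) frequency-to-place map for the human cochlea.

    x: proportional distance along the basilar membrane measured from
    the apex (0 = apex, 1 = base).
    Returns the characteristic frequency in Hz at that place.
    """
    return A * (10 ** (a * x) - k)
```

With these standard constants the map spans roughly 20 Hz at the apex to about 20.7 kHz at the base, covering the human audible range; inverting it gives the place predicted for a given electrode's pitch match.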