
    Are Interaural Time and Level Differences Represented by Independent or Integrated Codes in the Human Auditory Cortex?

    Sound localization is important for orienting and focusing attention and for segregating sounds from different sources in the environment. In humans, horizontal sound localization relies mainly on interaural differences in sound arrival time and sound level. Despite their perceptual importance, the neural processing of interaural time and level differences (ITDs and ILDs) remains poorly understood. Animal studies suggest that, in the brainstem, ITDs and ILDs are processed independently by different specialized circuits. The aim of the current study was to investigate whether, at higher processing levels, they remain independent or are integrated into a common code of sound laterality. To this end, we measured late auditory cortical potentials in response to changes in sound lateralization elicited by perceptually matched changes in ITD and/or ILD. The responses to the ITD and ILD changes exhibited significant morphological differences. At the same time, however, they originated from overlapping areas of the cortex and showed clear evidence of functional coupling. These results suggest that the auditory cortex contains an integrated code of sound laterality, but also retains independent information about the ITD and ILD cues. This cue-related information might be used to assess how consistent the cues are, and thus how likely they are to have arisen from the same source.
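    As a concrete illustration of the two cues discussed above, the following Python sketch estimates an ITD and an ILD from a left/right signal pair: the ITD as the lag of the cross-correlation peak and the ILD as the broadband level difference in dB. The sampling rate, tone frequency, and simulated delay and attenuation are arbitrary illustration values, not stimulus parameters from the study.

```python
# Minimal sketch (assumed parameters, simulated signals): estimate the two
# binaural cues from a left/right signal pair. ITD = lag of the
# cross-correlation peak (positive means the left signal lags the right);
# ILD = broadband RMS level difference in dB (positive means left is louder).
import numpy as np

def itd_ild(left, right, fs):
    """Return (itd_seconds, ild_db) for two equal-length 1-D signals."""
    lags = np.arange(-(len(right) - 1), len(left))
    xcorr = np.correlate(left, right, mode="full")
    itd = lags[np.argmax(xcorr)] / fs
    rms_left = np.sqrt(np.mean(left ** 2))
    rms_right = np.sqrt(np.mean(right ** 2))
    ild = 20 * np.log10(rms_left / rms_right)
    return itd, ild

if __name__ == "__main__":
    fs = 48000
    t = np.arange(0, 0.1, 1 / fs)
    tone = np.sin(2 * np.pi * 500 * t)               # 500-Hz tone, 100 ms
    delay = int(0.0005 * fs)                         # 0.5-ms interaural delay
    left = tone                                      # simulated source on the left:
    right = np.roll(tone, delay) * 10 ** (-6 / 20)   # right ear later and 6 dB softer
    print(itd_ild(left, right, fs))                  # approx. (-0.0005 s, +6 dB)
```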

    Is off-frequency overshoot caused by adaptation of suppression?

    This study is concerned with the mechanism of off-frequency overshoot. Overshoot refers to the phenomenon whereby a brief signal presented at the onset of a masker is easier to detect when the masker is preceded by a “precursor” sound (which is often the same as the masker). Overshoot is most prominent when the masker and precursor have a different frequency than the signal (henceforth referred to as “off-frequency overshoot”). It has been suggested that off-frequency overshoot is based on a mechanism similar to that underlying “enhancement,” which refers to the perceptual pop-out of a signal after presentation of a precursor that contains a spectral notch at the signal frequency; both have been proposed to be caused by a reduction in the suppressive masking of the signal as a result of the adaptive effect of the precursor (“adaptation of suppression”). In this study, we measured overshoot, suppression, and adaptation of suppression for a 4-kHz sinusoidal signal and a 4.75-kHz sinusoidal masker and precursor, using the same set of participants. We show that, while the precursor yielded strong overshoot and the masker produced strong suppression, the precursor did not appear to cause any reduction (adaptation) of suppression. Predictions based on an established model of the cochlear input–output function indicate that our failure to obtain any adaptation of suppression is unlikely to represent a false-negative outcome. Our results indicate that off-frequency overshoot and enhancement are likely caused by different mechanisms. We argue that overshoot may be due to higher-order perceptual factors such as transient masking or attentional diversion, whereas enhancement may be based on mechanisms similar to those that generate the Zwicker tone.
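    The study's predictions rest on a model of the cochlear input–output function. As a rough illustration of what such a function looks like, the sketch below implements a generic "broken-stick" input–output curve (linear growth at low levels, compressive growth above a knee point); the knee level and compression slope are assumed placeholder values, not the parameters of the specific model used by the authors.

```python
# Illustrative sketch only: a generic "broken-stick" cochlear input-output
# function (linear growth at low levels, compressive growth above a knee).
# Knee level and compression slope are assumed values, not those of the study.
def cochlear_io(input_db, knee_db=40.0, compression_slope=0.2):
    """Return output level (dB) for a given input level (dB SPL)."""
    if input_db <= knee_db:
        return input_db                                          # linear region: 1 dB/dB
    return knee_db + compression_slope * (input_db - knee_db)    # compressive region

if __name__ == "__main__":
    for level in (20, 40, 60, 80):
        print(level, "->", round(cochlear_io(level), 1), "dB")
```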

    Auditory attention causes gain enhancement and frequency sharpening at successive stages of cortical processing: evidence from human EEG

    Previous findings have suggested that auditory attention causes not only enhancement in neural processing gain, but also sharpening of neural frequency tuning in human auditory cortex. The current study aimed to reexamine these findings and to investigate whether attentional gain enhancement and frequency sharpening emerge at the same or different processing levels, and whether they represent independent or cooperative effects. To this end, we examined the pattern of attentional modulation effects on early, sensory-driven cortical auditory-evoked potentials (CAEPs) occurring at different latencies. Attention was manipulated using a dichotic listening task and was thus not selectively directed to specific frequency values. Possible attention-related changes in frequency tuning selectivity were measured with an EEG adaptation paradigm. Our results show marked disparities in attention effects between the earlier N1 CAEP deflection and the subsequent P2 deflection, with the N1 showing a strong gain enhancement effect but no sharpening, and the P2 showing clear evidence of sharpening but no independent gain effect. They suggest that gain enhancement and frequency sharpening represent successive stages of a cooperative attentional modulation mechanism, which appears to increase the representational bandwidth of attended versus unattended sounds.

    Can Acting Out Online Improve Adolescents’ Well-Being During Contact Restrictions? A First Insight Into the Dysfunctional Role of Cyberbullying and the Need to Belong in Well-Being During COVID-19 Pandemic-Related Contact Restrictions

    Connecting with peers online to overcome social isolation has become particularly important during the pandemic-related school closures across many countries. In the context of contact restrictions, feelings of isolation and loneliness are more prevalent, and regulating these negative emotions to maintain positive well-being challenges adolescents. This is especially the case for individuals with a high need to belong and difficulties in emotional competences. The difficult social situation during contact restrictions, more time for online communication, and maladaptive emotion regulation might lead to aggressive communication patterns in the form of cyberbullying perpetration. In an online study with N = 205 adolescents aged 14–19 (M = 15.83, SD = 1.44; 57% girls), we assessed the frequency of online and offline contacts, need to belong, emotion regulation problems, feelings of loneliness, and cyberbullying perpetration as predictors of adolescents’ well-being. In particular, we explored whether cyberbullying perpetration might function as a maladaptive strategy to deal with feelings of loneliness and therefore predicts well-being. This effect was expected to be stronger for those with a higher need to belong and greater emotion regulation problems. Results of a hierarchical regression analysis revealed that well-being was significantly predicted by fewer emotion regulation difficulties, less feeling of isolation, and more cyberbullying perpetration. We also tested whether the need to belong or emotion regulation problems moderated the association between cyberbullying and well-being. While the results for emotion regulation problems were not significant, the moderation effect for the need to belong was significant: for students with a high need to belong, well-being was more strongly related to cyberbullying perpetration than for students with a medium need to belong, and for students with a low need to belong, cyberbullying was not significantly associated with well-being. That cyberbullying perpetration positively predicted well-being is rather surprising in light of previous research showing negative psychosocial outcomes also for cyberbullying perpetrators. The moderation analysis provides a hint at the underlying processes: in times of distance learning and contact restrictions, cyberbullying may be a way of coming into contact with others and of regulating loneliness maladaptively.
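    For readers unfamiliar with the analysis, the following sketch shows the general form of such a moderated (hierarchical) regression in Python, using simulated data and placeholder variable names; it is not the authors' analysis script, and the simulated effect sizes are arbitrary.

```python
# Minimal sketch (simulated data, placeholder variable names) of a moderated
# hierarchical regression: well-being regressed on cyberbullying perpetration,
# with need to belong as a moderator (cyberbullying x need-to-belong interaction),
# alongside loneliness and emotion regulation problems.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 205
df = pd.DataFrame({
    "loneliness": rng.normal(size=n),
    "emo_reg_problems": rng.normal(size=n),
    "need_to_belong": rng.normal(size=n),
    "cyberbullying": rng.normal(size=n),
})
# Simulated outcome with an arbitrary interaction effect built in
df["wellbeing"] = (-0.3 * df["loneliness"] - 0.2 * df["emo_reg_problems"]
                   + 0.2 * df["cyberbullying"] * df["need_to_belong"]
                   + rng.normal(size=n))

# Step 1: main effects only; Step 2: add the interaction (moderation) term
step1 = smf.ols("wellbeing ~ loneliness + emo_reg_problems + need_to_belong + cyberbullying", df).fit()
step2 = smf.ols("wellbeing ~ loneliness + emo_reg_problems + cyberbullying * need_to_belong", df).fit()
print(step1.rsquared, step2.rsquared)        # change in R^2 across the two steps
print(step2.summary().tables[1])             # interaction coefficient indexes the moderation
```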

    The neural substrate for binaural masking level differences in the auditory cortex

    The binaural masking level difference (BMLD) is a phenomenon whereby a signal that is identical at each ear (S0), masked by a noise that is identical at each ear (N0), can be made 12–15 dB more detectable by inverting the waveform of either the tone or the noise at one ear (Sπ, Nπ). Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of urethane-anesthetized guinea pigs. Firing rate was measured as a function of the signal level of a 500-Hz pure tone masked by low-pass-filtered white noise. Responses were similar to those reported in the inferior colliculus. At low signal levels, the response was dominated by the masker. At higher signal levels, the firing rate either increased or decreased. Detection thresholds for each neuron were determined using signal detection theory. Few neurons yielded measurable detection thresholds for all stimulus conditions, and thresholds varied widely. However, across the entire population, the lowest thresholds were consistent with human psychophysical BMLDs. As in the inferior colliculus, the shape of the firing-rate versus signal-level functions depended on the neurons' selectivity for interaural time difference. Our results suggest that, in cortex, BMLD signals are detected from increases or decreases in firing rate, consistent with predictions of cross-correlation models of binaural processing, and that the psychophysical detection threshold is based on the lowest neural thresholds across the population.
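    A minimal sketch of how a neural detection threshold can be derived with signal detection theory, as described above: spike-count distributions for masker-alone and masker-plus-signal trials are compared at each signal level, and the threshold is the lowest level at which their separation (d′) exceeds a criterion. The firing-rate function, trial counts, and the d′ = 1 criterion are assumed illustration values, not those of the study.

```python
# Minimal sketch (simulated spike counts, assumed d' = 1 criterion) of deriving
# a neural detection threshold with signal detection theory.
import numpy as np

def dprime(signal_counts, noise_counts):
    """Separation of two spike-count distributions in units of their pooled SD."""
    ms, mn = np.mean(signal_counts), np.mean(noise_counts)
    sd = np.sqrt(0.5 * (np.var(signal_counts) + np.var(noise_counts)))
    return (ms - mn) / sd if sd > 0 else 0.0

def neural_threshold(rates, noise_rate, levels, n_trials=50, criterion=1.0):
    """Lowest signal level at which |d'| exceeds the criterion (None if never)."""
    rng = np.random.default_rng(1)
    noise = rng.poisson(noise_rate, n_trials)          # masker-alone trials
    for level, rate in zip(levels, rates):
        signal = rng.poisson(rate, n_trials)           # masker-plus-signal trials
        if abs(dprime(signal, noise)) >= criterion:
            return level
    return None

if __name__ == "__main__":
    levels = np.arange(20, 80, 5)                      # signal level in dB SPL
    rates = 10 + 0.3 * np.maximum(levels - 40, 0)      # assumed rate-level function
    print(neural_threshold(rates, noise_rate=10, levels=levels))
```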

    Regulation of auditory plasticity during critical periods and following hearing loss

    Sensory input has profound effects on neuronal organization and sensory maps in the brain. The mechanisms regulating plasticity of the auditory pathway have been revealed by examining the consequences of altered auditory input both during developmental critical periods, when plasticity facilitates the optimization of neural circuits in concert with the external environment, and in adulthood, when hearing loss is linked to the generation of tinnitus. In this review, we summarize research identifying the molecular, cellular, and circuit-level mechanisms that regulate neuronal organization and tonotopic map plasticity during developmental critical periods and in adulthood. These mechanisms are shared in the juvenile and adult brain and along the length of the auditory pathway, and serve to regulate disinhibitory networks, synaptic structure and function, and structural barriers to plasticity. Regulation of plasticity also involves both neuromodulatory circuits, which link plasticity with learning and attention, and ascending and descending auditory circuits, which link the auditory cortex with lower structures. Further work identifying the interplay of the molecular and cellular mechanisms that associate hearing-loss-induced plasticity with the brain changes observed in tinnitus should advance strategies to treat tinnitus by molecularly modulating plasticity.

    Neurons in the inferior colliculus of the rat show stimulus-specific adaptation for frequency, but not for intensity

    Electrophysiological and psychophysical responses to a low-intensity probe sound tend to be suppressed by a preceding high-intensity adaptor sound. Nevertheless, rare low-intensity deviant sounds presented among frequent high-intensity standard sounds in an intensity oddball paradigm can elicit an electroencephalographic mismatch negativity (MMN) response. This has been taken to suggest that the MMN is a correlate of true change or “deviance” detection. A key question is where in the ascending auditory pathway true deviance sensitivity first emerges. Here, we addressed this question by measuring low-intensity deviant responses from single units in the inferior colliculus (IC) of anesthetized rats. If the IC exhibits true deviance sensitivity to intensity, IC neurons should show enhanced responses to low-intensity deviant sounds presented among high-intensity standards. Contrary to this prediction, deviant responses were only enhanced when the standards and deviants differed in frequency. The results could be explained with a model assuming that IC neurons integrate over multiple frequency-tuned channels and that adaptation occurs within each channel independently. We used an adaptation paradigm with multiple repeated adaptors to measure the tuning widths of these adaptation channels in relation to the neurons’ overall tuning widths.
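    The explanatory model described above lends itself to a compact simulation. The sketch below (with assumed channel counts, adaptation strength, and recovery rate) pools several frequency-tuned channels that adapt independently: a rare frequency deviant falls into an unadapted channel and evokes an enhanced response, whereas a rare low-intensity deviant drives the same, already adapted channel as the standards and does not.

```python
# Minimal sketch (assumed parameters) of the model described in the abstract:
# a neuron pools several frequency-tuned input channels, and adaptation acts
# within each channel independently.
import numpy as np

def respond(tone_sequence, n_channels=5, recovery=0.6, gain_per_db=0.02):
    """tone_sequence: list of (channel_index, level_db) tuples, one per tone.
    Returns the model response to each tone."""
    adaptation = np.zeros(n_channels)            # adaptation state of each frequency channel
    responses = []
    for channel, level_db in tone_sequence:
        drive = gain_per_db * level_db           # simple level-dependent input drive
        responses.append(drive * (1.0 - adaptation[channel]))
        adaptation *= recovery                   # all channels recover a little between tones
        adaptation[channel] = min(0.8, adaptation[channel] + 0.4)  # stimulated channel adapts
    return responses

if __name__ == "__main__":
    # Frequency oddball: standards in channel 2, a rare deviant in channel 4 (same level)
    freq_seq = [(2, 70)] * 9 + [(4, 70)]
    # Intensity oddball: standards at 70 dB, a rare 50-dB deviant in the same channel
    int_seq = [(2, 70)] * 9 + [(2, 50)]
    print("frequency deviant response:", round(respond(freq_seq)[-1], 2))  # large (fresh channel)
    print("intensity deviant response:", round(respond(int_seq)[-1], 2))   # small (adapted channel)
```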

    Neuroimaging paradigms for tonotopic mapping (II): the influence of acquisition protocol.

    Numerous studies on the tonotopic organisation of auditory cortex in humans have employed a wide range of neuroimaging protocols to assess cortical frequency tuning. In the present functional magnetic resonance imaging (fMRI) study, we made a systematic comparison between acquisition protocols with variable levels of interference from acoustic scanner noise. Using sweep stimuli to evoke travelling waves of activation, we measured sound-evoked response signals using sparse, clustered, and continuous imaging protocols that were characterised by inter-scan intervals of 8.8, 2.2, or 0.0 s, respectively. With regard to sensitivity to sound-evoked activation, the sparse and clustered protocols performed similarly, and both detected more activation than the continuous method. Qualitatively, tonotopic maps in activated areas proved highly similar, in the sense that the overall pattern of tonotopic gradients was reproducible across all three protocols. However, quantitatively, we observed substantial reductions in response amplitudes to moderately low stimulus frequencies, which coincided with regions of strong energy in the scanner noise spectrum, for the clustered and continuous protocols compared to the sparse protocol. At the same time, extreme frequencies became over-represented for these two protocols, and high best frequencies became relatively more abundant. Our results indicate that although all three scanning protocols are suitable for determining the layout of tonotopic fields, an exact quantitative assessment of the representation of the various sound frequencies is substantially confounded by the presence of scanner noise. In addition, we observed anomalous signal dynamics in response to our travelling-wave paradigm, which suggest that the assessment of frequency-dependent tuning is non-trivially influenced by time-dependent (hemo)dynamics when sweep stimuli are used.
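    The travelling-wave logic referred to above can be summarised in a few lines of Python: during a slow, repeating frequency sweep, each voxel responds when the sweep passes its preferred frequency, so the response phase at the sweep repetition rate indexes the voxel's best frequency. The sketch below uses a simulated voxel time course and assumed sweep parameters; it is not the analysis pipeline of the study.

```python
# Minimal sketch (simulated voxel time course, assumed sweep parameters) of
# best-frequency estimation from the phase of the response to a repeating sweep.
import numpy as np

def best_frequency(timecourse, n_cycles, sweep_freqs):
    """Map response phase at the sweep repetition rate onto the swept frequency range."""
    spectrum = np.fft.rfft(timecourse - np.mean(timecourse))
    phase = np.angle(spectrum[n_cycles])               # component at the sweep cycle rate
    position = (-phase % (2 * np.pi)) / (2 * np.pi)    # 0..1 position within one sweep
    return np.exp(np.interp(position, [0, 1], np.log(sweep_freqs)))  # log-spaced sweep

if __name__ == "__main__":
    n_vols, n_cycles = 240, 8                          # 8 sweep cycles in the run
    t = np.arange(n_vols)
    # Simulate a voxel that responds 40% of the way through each sweep cycle
    response = np.cos(2 * np.pi * n_cycles * t / n_vols - 2 * np.pi * 0.4)
    print(best_frequency(response, n_cycles, sweep_freqs=(250, 8000)))  # ~1000 Hz
```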

    Is human auditory cortex organization compatible with the monkey model? Contrary evidence from ultra-high-field functional and structural MRI

    It is commonly assumed that the human auditory cortex is organized similarly to that of macaque monkeys, where the primary region, or “core,” is elongated parallel to the tonotopic axis (the main direction of the tonotopic gradients) and subdivided across this axis into up to 3 distinct areas (A1, R, and RT) with separate, mirror-symmetric tonotopic gradients. This assumption, however, has not been tested until now. Here, we used high-resolution ultra-high-field (7 T) magnetic resonance imaging (MRI) to delineate the human core and map tonotopy in 24 individual hemispheres. In each hemisphere, we assessed tonotopic gradients using principled, quantitative analysis methods and delineated the core using 2 independent (functional and structural) MRI criteria. Our results indicate that, contrary to macaques, the human core is elongated perpendicular rather than parallel to the main tonotopic axis, and that this axis contains no more than 2 mirror-reversed gradients within the core region. Previously suggested homologies between these gradients and areas A1 and R in macaques were not supported. Our findings suggest fundamental differences in auditory cortex organization between humans and macaques.

    Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation

    Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and of prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model that provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing.
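    The following is a deliberately simplified sketch of the general idea of feedback-controlled integration time scales, not the authors' actual model: two leaky integrator stages are chained, and the effective time constant of the lower stage is shortened when the incoming evidence departs from the current estimate, so the system tracks changes quickly but integrates over a long window when the input is stable. All parameters are assumed.

```python
# Highly simplified sketch (assumed form and parameters, not the authors' model)
# of a hierarchy of integrators whose time scales adapt to changes in the input.
import numpy as np

def hierarchical_pitch(evidence, tau_fast=0.01, tau_slow=0.1, dt=0.001):
    """evidence: per-sample instantaneous pitch estimates (Hz).
    Returns the top-stage pitch estimate over time."""
    stage1 = stage2 = float(evidence[0])
    out = []
    for x in evidence:
        # Feedback rule (assumed): when the evidence departs from the current
        # estimate, shorten the lower stage's time constant so it re-tracks quickly;
        # when the input is stable, integrate over a longer window.
        change = abs(x - stage2) / max(abs(stage2), 1e-9)
        tau1 = tau_slow - (tau_slow - tau_fast) * min(1.0, 10.0 * change)
        stage1 += dt / tau1 * (x - stage1)             # lower, feedback-controlled stage
        stage2 += dt / tau_slow * (stage1 - stage2)    # higher, slower stage
        out.append(stage2)
    return np.array(out)

if __name__ == "__main__":
    # 200 ms of evidence for a 200-Hz pitch that steps to 250 Hz halfway through
    evidence = np.concatenate([np.full(100, 200.0), np.full(100, 250.0)])
    estimate = hierarchical_pitch(evidence)
    print(estimate[[0, 99, 120, 199]])  # stable, then gradually re-converging
```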