
    Common principles in the lateralization of auditory cortex structure and function for vocal communication in primates and rodents

    This review summarizes recent findings on the lateralization of communicative sound processing in the auditory cortex (AC) of humans, non-human primates and rodents. Functional imaging in humans has demonstrated a left hemispheric preference for some acoustic features of speech, but it is unclear to what degree this is caused by bottom-up acoustic feature selectivity or top-down modulation from language areas. Although non-human primates show a less pronounced functional lateralization in AC, the properties of AC fields and behavioural asymmetries are qualitatively similar. Rodent studies demonstrate microstructural circuits that might underlie bottom-up acoustic feature selectivity in both hemispheres. Functionally, the left AC in the mouse appears to be specifically tuned to communication calls, whereas the right AC may have a more 'generalist' role. Rodents also show anatomical AC lateralization, such as differences in size and connectivity. Several of these functional and anatomical characteristics are also lateralized in human AC. Thus, complex vocal communication processing shares common features among rodents and primates. We argue that a synthesis of results from humans, non-human primates and rodents is necessary to identify the neural circuitry of vocal communication processing. However, data from different species and methods are often difficult to compare. Recent advances may enable better integration of methods across species. Efforts to standardize data formats and analysis tools would benefit comparative research and enable synergies between psychological and biological research in the area of vocal communication processing.

    Neural substrates and models of omission responses and predictive processes

    Predictive coding theories argue that deviance detection phenomena, such as mismatch responses and omission responses, are generated by predictive processes with possibly overlapping neural substrates. Molecular imaging and electrophysiology studies of mismatch responses and corollary discharge in the rodent model have allowed the development of mechanistic and computational models of these phenomena. These models enable translation between human and non-human animal research and help to uncover fundamental features of change-processing microcircuitry in the neocortex. This microcircuitry is characterized by stimulus-specific adaptation and feedforward inhibition of stimulus-selective populations of pyramidal neurons and interneurons, with specific contributions from different interneuron types. The overlap of the substrates of different types of responses to deviant stimuli remains to be understood. Omission responses, which are observed both in corollary discharge and mismatch response protocols in humans, are underutilized in animal research and may be pivotal in uncovering the substrates of predictive processes. Omission studies comprise a range of methods centered on the withholding of an expected stimulus. This review aims to provide an overview of omission protocols and showcase their potential to integrate and complement the different models and procedures employed to study prediction and deviance detection. This approach may reveal the biological foundations of core concepts of predictive coding, and allow an empirical test of the framework's promise to unify theoretical models of attention and perception.

    Is it tonotopy after all?

    In this functional MRI study, the frequency-dependent localization of acoustically evoked BOLD responses within the human auditory cortex was investigated. A blocked design was employed, consisting of periods of tonal stimulation (random frequency modulations with center frequencies of 0.25, 0.5, 4.0, and 8.0 kHz) and resting periods during which only the ambient scanner noise was audible. Multiple frequency-dependent activation sites were reliably demonstrated on the surface of the auditory cortex. The individual gyral pattern of the superior temporal plane (STP), especially the anatomy of Heschl's gyrus (HG), was found to be the major source of interindividual variability. Accounting for this variability by tracking the responsiveness to the four stimulus frequencies along individual Heschl's gyri yielded medio-lateral gradients, with responsiveness to high frequencies medially and low frequencies laterally. It is, however, argued that, in light of electrophysiological and cytoarchitectonical studies in humans and in nonhuman primates, the multiple frequency-dependent activation sites found in the present study, as well as in other recent fMRI investigations, are no direct indication of tonotopic organization of cytoarchitectonical areas. An alternative interpretation is that the activation sites correspond to different cortical fields, the topological organization of which cannot be resolved with the current spatial resolution of fMRI. On this interpretation, the detected frequency selectivity of different cortical areas arises from an excess of neurons engaged in the processing of different acoustic features, which are associated with different frequency bands. Differences in the response properties of medial compared to lateral and frontal compared to occipital portions of HG strongly support this notion.

    EEG Correlates of Learning From Speech Presented in Environmental Noise

    How the human brain retains relevant vocal information while suppressing irrelevant sounds is one of the ongoing challenges in cognitive neuroscience. Knowledge of the underlying mechanisms of this ability can be used to identify whether a person is distracted while listening to a target speech, especially in a learning context. This paper investigates the neural correlates of learning from speech presented in a noisy environment, using an ecologically valid learning context and electroencephalography (EEG). To this end, the following listening tasks were performed while 64-channel EEG signals were recorded: (1) attentive listening to lectures in background sound, (2) attentive listening to the background sound presented alone, and (3) inattentive listening to the background sound. For the first task, 13 lectures of 5 min in length, embedded in different types of realistic background noise, were presented to participants who were asked to focus on the lectures. As background noise, multi-talker babble, continuous highway, and fluctuating traffic sounds were used. After the second task, a written exam was taken to quantify the amount of information that participants had acquired and retained from the lectures. In addition to various power spectrum-based EEG features in different frequency bands, the peak frequency and long-range temporal correlations (LRTC) of alpha-band activity were estimated. To reduce these dimensions, a principal component analysis (PCA) was applied across the different listening conditions, resulting in the feature combinations that discriminate most between listening conditions and persons. Linear mixed-effect modeling was used to explain the origin of the extracted principal components, showing their dependence on listening condition and type of background sound.
Following this unsupervised step, a supervised analysis was performed to explain the link between the exam results and the EEG principal component scores, using both linear fixed and mixed-effect modeling. Results suggest that the ability to learn from speech presented in environmental noise can be predicted better by several components over specific brain regions than by knowing the background noise type. These components were linked to deterioration in attention, speech envelope following, decreased focusing during listening, cognitive prediction error, and specific inhibition mechanisms.
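The two-step pipeline described above (an unsupervised PCA over EEG features, followed by a mixed-effects model of the component scores) can be sketched in Python. The data here are synthetic stand-ins, and the feature counts and component numbers are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the EEG feature matrix:
# rows = (participant, listening condition) observations,
# columns = spectral power, alpha peak frequency, LRTC features, etc.
n_subjects, n_conditions, n_features = 20, 3, 12
X = rng.normal(size=(n_subjects * n_conditions, n_features))

# Unsupervised step: reduce the feature space to a few principal components.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)

df = pd.DataFrame({
    "pc1": scores[:, 0],
    "subject": np.repeat(np.arange(n_subjects), n_conditions),
    "condition": np.tile(np.arange(n_conditions), n_subjects),
})

# Supervised step: a linear mixed-effects model with a random intercept per
# participant, relating component scores to listening condition.
model = smf.mixedlm("pc1 ~ C(condition)", df, groups=df["subject"]).fit()
print(model.params)
```

In the actual study, a further model of the same form would relate the exam scores to the component scores; the random intercept per participant is what makes between-person differences separable from condition effects.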

    Sound Localization in Single-Sided Deaf Participants Provided With a Cochlear Implant

    Spatial hearing is crucial in real life but deteriorates in participants with severe sensorineural hearing loss or single-sided deafness. This ability can potentially be improved with a unilateral cochlear implant (CI). The present study investigated measures of sound localization in participants with single-sided deafness provided with a CI. Sound localization was measured separately at eight loudspeaker positions (4°, 30°, 60°, and 90° on the CI side and on the normal-hearing side). Low- and high-frequency noise bursts were used in the tests to investigate possible differences in the processing of interaural time and level differences. Data were compared to those of normal-hearing adults aged between 20 and 83 years. In addition, the benefit of the CI for speech understanding in noise was compared to the localization ability. Fifteen out of 18 participants were able to localize signals on the CI side and on the normal-hearing side, although performance was highly variable across participants. Three participants always pointed to the normal-hearing side, irrespective of the location of the signal. The comparison with control data showed that participants had particular difficulties localizing sounds at frontal locations and on the CI side. In contrast to most previous results, participants were able to localize low-frequency signals, although they localized high-frequency signals more accurately. Speech understanding in noise was better with the CI compared to testing without the CI, but only at a position where the CI also improved sound localization. Our data suggest that a CI can, to a large extent, restore localization in participants with single-sided deafness. Difficulties may remain at frontal locations and on the CI side. However, speech understanding in noise improves when wearing the CI. The treatment with a CI in these participants might provide real-world benefits, such as improved orientation in traffic and speech understanding in difficult listening situations.

    Pitch Processing Sites in the Human Auditory Brain

    Lateral Heschl's gyrus (HG), a subdivision of the human auditory cortex, is commonly believed to represent a general “pitch center,” responding selectively to the pitch of sounds irrespective of their spectral characteristics. However, most neuroimaging investigations have used only one specialized pitch-evoking stimulus: iterated-ripple noise (IRN). The present study used a novel experimental design in which a range of different pitch-evoking stimuli were presented to the same listeners. Pitch sites were identified by searching for voxels that responded well to the whole range of pitch-evoking stimuli. The first result suggested that parts of the planum temporale are more relevant for pitch processing than lateral HG. In some listeners, pitch responses occurred elsewhere, such as the temporo-parieto-occipital junction or prefrontal cortex. The second result demonstrated a different pattern of response to the IRN and raises the possibility that features of IRN unrelated to pitch might have contributed to the earlier results. In conclusion, it seems premature to assign special status to lateral HG solely on the basis of neuroactivation patterns. Further work should consider the functional roles of these multiple pitch processing sites within the proposed network.

    Histological basis of laminar MRI patterns in high resolution images of fixed human auditory cortex

    Functional magnetic resonance imaging (fMRI) studies of the auditory region of the temporal lobe would benefit from the availability of image contrast that allowed direct identification of the primary auditory cortex, as this region cannot be accurately located using gyral landmarks alone. Previous work has suggested that the primary area can be identified in magnetic resonance (MR) images because of its relatively high myelin content. However, MR images are also affected by the iron content of the tissue, and in this study we sought to confirm that different MR image contrasts did correlate with the myelin content of the grey matter and were not primarily affected by iron content, as is the case in the primary visual and somatosensory areas. By imaging blocks of fixed post-mortem cortex in a 7 Tesla scanner and then sectioning them for histological staining, we assessed the relative contribution of myelin and iron to the grey matter contrast in the auditory region. Evaluating the image contrast in T2*-weighted images and quantitative R2* maps showed a reasonably high correlation between the myelin density of the grey matter and the intensity of the MR images. The correlation with T1-weighted phase-sensitive inversion recovery (PSIR) images was better than with the previous two image types, and there were clearly differentiated borders between adjacent cortical areas in these images. A significant amount of iron was present in the auditory region, but it did not seem to contribute to the laminar pattern of the cortical grey matter in MR images. Similar levels of iron were present in the grey and white matter, and although iron was present in fibres within the grey matter, these fibres were fairly uniformly distributed across the cortex.
Thus, we conclude that T1- and T2*-weighted imaging sequences do demonstrate the relatively high myelin levels that are characteristic of the deep layers of primary auditory cortex, allowing it and some of the surrounding areas to be reliably distinguished.
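The core quantitative step in this kind of validation study is a simple correlation between paired measurements: stained-section myelin density and MR signal intensity sampled from matched cortical locations. A minimal sketch with synthetic data (the values and the assumed linear relation are illustrative, not the study's measurements):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical paired samples across cortical patches:
# optical density of a myelin stain vs. quantitative R2* from the MR map.
myelin_density = rng.uniform(0.2, 0.8, size=30)
r2_star = 40.0 + 25.0 * myelin_density + rng.normal(0.0, 2.0, size=30)

# A high Pearson correlation would support myelin, rather than iron,
# as the dominant source of grey-matter contrast.
r, p = pearsonr(myelin_density, r2_star)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```

The same comparison would be repeated per contrast type (T2*-weighted, R2*, PSIR) to rank which MR contrast tracks the histology best.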