79 research outputs found

    Common principles in the lateralization of auditory cortex structure and function for vocal communication in primates and rodents

    This review summarizes recent findings on the lateralization of communicative sound processing in the auditory cortex (AC) of humans, non-human primates and rodents. Functional imaging in humans has demonstrated a left-hemispheric preference for some acoustic features of speech, but it is unclear to what degree this is caused by bottom-up acoustic feature selectivity or top-down modulation from language areas. Although non-human primates show a less pronounced functional lateralization in AC, the properties of AC fields and behavioural asymmetries are qualitatively similar. Rodent studies demonstrate microstructural circuits that might underlie bottom-up acoustic feature selectivity in both hemispheres. Functionally, the left AC in the mouse appears to be specifically tuned to communication calls, whereas the right AC may have a more 'generalist' role. Rodents also show anatomical AC lateralization, such as differences in size and connectivity. Several of these functional and anatomical characteristics are also lateralized in human AC. Thus, complex vocal communication processing shares common features among rodents and primates. We argue that a synthesis of results from humans, non-human primates and rodents is necessary to identify the neural circuitry of vocal communication processing. However, data from different species and methods are often difficult to compare. Recent advances may enable better integration of methods across species. Efforts to standardize data formats and analysis tools would benefit comparative research and enable synergies between psychological and biological research in the area of vocal communication processing.

    Neural substrates and models of omission responses and predictive processes

    Predictive coding theories argue that deviance detection phenomena, such as mismatch responses and omission responses, are generated by predictive processes with possibly overlapping neural substrates. Molecular imaging and electrophysiology studies of mismatch responses and corollary discharge in the rodent model allowed the development of mechanistic and computational models of these phenomena. These models enable translation between human and non-human animal research and help to uncover fundamental features of change-processing microcircuitry in the neocortex. This microcircuitry is characterized by stimulus-specific adaptation and feedforward inhibition of stimulus-selective populations of pyramidal neurons and interneurons, with specific contributions from different interneuron types. The overlap of the substrates of different types of responses to deviant stimuli remains to be understood. Omission responses, which are observed both in corollary discharge and mismatch response protocols in humans, are underutilized in animal research and may be pivotal in uncovering the substrates of predictive processes. Omission studies comprise a range of methods centered on the withholding of an expected stimulus. This review aims to provide an overview of omission protocols and showcase their potential to integrate and complement the different models and procedures employed to study prediction and deviance detection. This approach may reveal the biological foundations of core concepts of predictive coding, and allow an empirical test of the framework's promise to unify theoretical models of attention and perception.
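    The stimulus-specific adaptation (SSA) mentioned above can be illustrated with a minimal toy model: responses to a repeated "standard" stimulus shrink as that stimulus-specific channel adapts, while a rare "deviant" recovers a large response. This sketch is purely illustrative; the function names, parameters, and recovery dynamics are assumptions, not taken from the reviewed studies.

    ```python
    def ssa_responses(sequence, tau=0.5, gain=1.0):
        """Return one response per stimulus in `sequence`; each stimulus
        identity adapts independently and partially recovers over time."""
        adaptation = {}  # per-stimulus adaptation level in [0, 1)
        responses = []
        for stim in sequence:
            a = adaptation.get(stim, 0.0)
            responses.append(gain * (1.0 - a))  # adapted response
            # all channels slowly recover, then the presented one adapts further
            adaptation = {s: v * (1.0 - tau * 0.2) for s, v in adaptation.items()}
            adaptation[stim] = a + tau * (1.0 - a)
        return responses

    # Oddball sequence: nine standards ('A') followed by one deviant ('B')
    seq = ['A'] * 9 + ['B']
    resp = ssa_responses(seq)
    ```

    In this toy setup the final deviant response exceeds the response to the late standards, which is the signature contrast exploited by mismatch-response protocols.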

    Is it tonotopy after all?

    In this functional MRI study the frequency-dependent localization of acoustically evoked BOLD responses within the human auditory cortex was investigated. A blocked design was employed, consisting of periods of tonal stimulation (random frequency modulations with center frequencies 0.25, 0.5, 4.0, and 8.0 kHz) and resting periods during which only the ambient scanner noise was audible. Multiple frequency-dependent activation sites were reliably demonstrated on the surface of the auditory cortex. The individual gyral pattern of the superior temporal plane (STP), especially the anatomy of Heschl's gyrus (HG), was found to be the major source of interindividual variability. Accounting for this variability by tracking the frequency responsiveness to the four stimulus frequencies along individual Heschl's gyri yielded medio-lateral gradients of responsiveness, with high frequencies represented medially and low frequencies laterally. It is, however, argued that with regard to the results of electrophysiological and cytoarchitectonic studies in humans and in nonhuman primates, the multiple frequency-dependent activation sites found in the present study, as well as in other recent fMRI investigations, are no direct indication of tonotopic organization of cytoarchitectonic areas. An alternative interpretation is that the activation sites correspond to different cortical fields, the topological organization of which cannot be resolved with the current spatial resolution of fMRI. On this view, the detected frequency selectivity of different cortical areas arises from an excess of neurons engaged in the processing of different acoustic features, which are associated with different frequency bands. Differences in the response properties of medial compared to lateral and frontal compared to occipital portions of HG strongly support this notion.

    Orienting asymmetries and lateralized processing of sounds in humans

    Background: Lateralized processing of speech is a well-studied phenomenon in humans. Both anatomical and neurophysiological studies support the view that nonhuman primates and other animal species also reveal hemispheric differences in areas involved in sound processing. In recent years, an increasing number of studies on a range of taxa have employed an orienting paradigm to investigate lateralized acoustic processing. In this paradigm, sounds are played directly from behind and the direction of turn is recorded. This assay rests on the assumption that a hemispheric asymmetry in processing is coupled to an orienting bias towards the contralateral side. To examine this largely untested assumption, speech stimuli as well as artificial sounds were presented to 224 right-handed human subjects shopping in supermarkets in Germany and in the UK. To verify the lateralized processing of the speech stimuli, we additionally assessed the brain activation in response to presentation of the different stimuli using functional magnetic resonance imaging (fMRI). Results: In the naturalistic behavioural experiments, there was no difference in orienting behaviour in relation to the stimulus material (speech, artificial sounds). Contrary to our predictions, subjects revealed a significant left bias, irrespective of the sound category. This left bias was slightly but not significantly stronger in German subjects. The fMRI experiments confirmed that the speech stimuli evoked a significantly left-lateralized activation in BA44 compared to the artificial sounds. Conclusion: These findings suggest that in adult humans, orienting biases are not necessarily coupled with lateralized processing of acoustic stimuli. Our results, as well as the inconsistent orienting biases found in different animal species, suggest that the orienting assay should be used with caution. Apparently, attention biases, experience, and experimental conditions may all affect head-turning responses. Because of the complexity of the interaction of factors, the use of the orienting assay to determine lateralized processing of sound stimuli is discouraged.

    Hemispheric Specialization in Dogs for Processing Different Acoustic Stimuli

    Considerable experimental evidence shows that functional cerebral asymmetries are widespread in animals. Activity of the right cerebral hemisphere has been associated with responses to novel stimuli and the expression of intense emotions, such as aggression, escape behaviour and fear. The left hemisphere uses learned patterns and responds to familiar stimuli. Although such lateralization has been studied mainly for visual responses, there is evidence in primates that auditory perception is lateralized and that vocal communication depends on differential processing by the hemispheres. The aim of the present work was to investigate whether dogs use different hemispheres to process different acoustic stimuli by presenting them with playbacks of a thunderstorm and their species-typical vocalizations. The results revealed that dogs usually process their species-typical vocalizations using the left hemisphere and the thunderstorm sounds using the right hemisphere. Nevertheless, conspecific vocalizations are not always processed by the left hemisphere, since the right hemisphere is used for processing vocalizations when they elicit intense emotion, including fear. These findings suggest that the specialisation of the left hemisphere for intraspecific communication is more ancient than previously thought, and so is the specialisation of the right hemisphere for intense emotions.

    Cortical Plasticity Induced by Short-Term Multimodal Musical Rhythm Training

    Performing music is a multimodal experience involving the visual, auditory, and somatosensory modalities as well as the motor system. Therefore, musical training is an excellent model to study multimodal brain plasticity. Indeed, we have previously shown that short-term piano practice increases the magnetoencephalographic (MEG) response to melodic material in novice players. Here we investigate the impact of piano training using a rhythm-focused exercise on responses to rhythmic musical material. Musical training with non-musicians was conducted over a period of two weeks. One group (sensorimotor-auditory, SA) learned to play a piano sequence with a distinct musical rhythm; another group (auditory, A) listened to, and evaluated the rhythmic accuracy of, the performances of the SA-group. Training-induced cortical plasticity was evaluated using MEG, comparing the mismatch negativity (MMN) in response to occasional rhythmic deviants in a repeating rhythm pattern before and after training. The SA-group showed a significantly greater enlargement of MMN and P2 to deviants after training compared to the A-group. The training-induced increase of the rhythm MMN was bilaterally expressed, in contrast to our previous finding where the MMN for deviants in the pitch domain showed a larger right than left increase. The results indicate that when auditory experience is strictly controlled during training, involvement of the sensorimotor system, and perhaps the increased attentional resources needed to produce rhythms, leads to more robust plastic changes in the auditory cortex compared to when rhythms are simply attended to in the auditory domain in the absence of motor production.
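    The MMN described above is conventionally quantified as the deviant-minus-standard difference wave, averaged over trials. The sketch below illustrates that computation on synthetic data; the sampling rate, epoch length, the 100-250 ms analysis window, and the injected negativity at ~150 ms are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sfreq, n_times = 250, 125              # 250 Hz sampling, 0.5 s epochs
    t = np.arange(n_times) / sfreq

    def make_epochs(n_trials, mmn_amp):
        """Synthetic single-channel epochs: Gaussian-shaped negativity
        around 150 ms (for deviants) plus trial-by-trial noise."""
        evoked = -mmn_amp * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))
        return evoked + rng.normal(0, 0.5, size=(n_trials, n_times))

    standards = make_epochs(200, mmn_amp=0.0)
    deviants = make_epochs(50, mmn_amp=2.0)

    # MMN: deviant-minus-standard difference wave, peak negativity in window
    difference = deviants.mean(axis=0) - standards.mean(axis=0)
    window = (t >= 0.1) & (t <= 0.25)
    mmn_amplitude = difference[window].min()
    ```

    Comparing `mmn_amplitude` before and after training, per group, is the contrast the study's pre/post design rests on.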

    Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening

    Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitively demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of the emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing.

    Differential activity in Heschl's gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality

    Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear if reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory deprivation. Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex). Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls. Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole. Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's gyrus.

    Probing Real Sensory Worlds of Receivers with Unsupervised Clustering

    The task of an organism to extract information about the external environment from sensory signals is based entirely on the analysis of ongoing afferent spike activity provided by the sense organs. We investigate the processing of auditory stimuli by an acoustic interneuron of insects. In contrast to most previous work, we do this by using stimuli and neurophysiological recordings directly in the nocturnal tropical rainforest, where the insect communicates. Unlike in typical recordings in soundproof laboratories, strong environmental noise from multiple sound sources interferes with the perception of acoustic signals in these realistic scenarios. We apply a recently developed unsupervised machine learning algorithm based on probabilistic inference to find frequently occurring firing patterns in the response of the acoustic interneuron. We can thus ask how much information the central nervous system of the receiver can extract from bursts without ever being told which type and which variants of bursts are characteristic for particular stimuli. Our results show that the reliability of burst coding in the time domain is so high that identical stimuli lead to extremely similar spike pattern responses, even for different preparations on different dates, and even if one of the preparations is recorded outdoors and the other one in the soundproof lab. Simultaneous recordings in two preparations exposed to the same acoustic environment reveal that characteristics of burst patterns are largely preserved among individuals of the same species. Our study shows that burst coding can provide a reliable mechanism for acoustic insects to classify and discriminate signals under very noisy real-world conditions. This gives new insights into the neural mechanisms potentially used by bushcrickets to discriminate conspecific songs from sounds of predators in similar carrier frequency bands.
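    The idea of grouping bursts by their firing patterns without labels can be illustrated with a much simpler stand-in for the probabilistic-inference algorithm used in the study: represent each burst by its inter-spike-interval (ISI) vector and cluster with k-means. Everything here (the two synthetic burst types, their ISI profiles, the minimal k-means) is an illustrative assumption, not the authors' method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two hypothetical burst types with distinct characteristic ISI profiles (ms)
    type_a = rng.normal(loc=[3.0, 4.0, 6.0], scale=0.3, size=(40, 3))
    type_b = rng.normal(loc=[8.0, 9.0, 12.0], scale=0.3, size=(40, 3))
    bursts = np.vstack([type_a, type_b])

    def kmeans(X, k, n_iter=50):
        """Minimal k-means: assigns each burst (row of X) to one of k clusters.
        Initialized deterministically with the first and last rows as centers."""
        centers = X[np.array([0, len(X) - 1])][:k]
        for _ in range(n_iter):
            dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
            labels = dists.argmin(axis=1)
            centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        return labels

    labels = kmeans(bursts, k=2)
    ```

    With well-separated ISI profiles, the clustering recovers the two burst types without ever seeing stimulus labels, which is the spirit of the unsupervised approach described in the abstract.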