
    Orienting asymmetries in dogs’ responses to different communicatory components of human speech

    It is well established that in human speech perception the left hemisphere (LH) of the brain is specialized for processing intelligible phonemic (segmental) content (e.g., [1–3]), whereas the right hemisphere (RH) is more sensitive to prosodic (suprasegmental) cues [4, 5]. Despite evidence that a range of mammal species show LH specialization when processing conspecific vocalizations [6], the presence of hemispheric biases in domesticated animals' responses to the communicative components of human speech has never been investigated. Human speech is familiar and relevant to domestic dogs (Canis familiaris), who are known to perceive both segmental phonemic cues [7–10] and suprasegmental speaker-related [11, 12] and emotional [13] prosodic cues. Using the head-orienting paradigm, we presented dogs with manipulated speech and tones differing in segmental or suprasegmental content and recorded their orienting responses. We found that dogs showed a significant LH bias when presented with a familiar spoken command in which the salience of meaningful phonemic (segmental) cues was artificially increased, but a significant RH bias in response to commands in which the salience of intonational or speaker-related (suprasegmental) vocal cues was increased. Our results provide insights into mechanisms of interspecific vocal perception in a domesticated mammal and suggest that dogs may share with human listeners ancestral or convergent hemispheric specializations for processing the different functional communicative components of speech.

    Comprehending auditory speech: previous and potential contributions of functional MRI

    Functional neuroimaging revolutionised the study of human language in the late twentieth century, allowing researchers to investigate its underlying cognitive processes in the intact brain. Here, we review how functional MRI (fMRI) in particular has contributed to our understanding of speech comprehension, with a focus on studies of intelligibility. We highlight the use of carefully controlled acoustic stimuli to reveal the underlying hierarchical organisation of speech processing systems and cortical (a)symmetries, and discuss the contributions of novel design and analysis techniques to the contextualisation of perisylvian regions within wider speech processing networks. Within this, we outline the methodological challenges of fMRI as a technique for investigating speech and describe the innovations that have overcome or mitigated these difficulties. Focussing on multivariate approaches to fMRI, we highlight how these techniques have allowed both local neural representations and broader-scale brain systems to be described.

    Common principles in the lateralization of auditory cortex structure and function for vocal communication in primates and rodents

    This review summarizes recent findings on the lateralization of communicative sound processing in the auditory cortex (AC) of humans, non-human primates and rodents. Functional imaging in humans has demonstrated a left hemispheric preference for some acoustic features of speech, but it is unclear to what degree this is caused by bottom-up acoustic feature selectivity or top-down modulation from language areas. Although non-human primates show a less pronounced functional lateralization in AC, the properties of AC fields and behavioural asymmetries are qualitatively similar. Rodent studies demonstrate microstructural circuits that might underlie bottom-up acoustic feature selectivity in both hemispheres. Functionally, the left AC in the mouse appears to be specifically tuned to communication calls, whereas the right AC may have a more 'generalist' role. Rodents also show anatomical AC lateralization, such as differences in size and connectivity. Several of these functional and anatomical characteristics are also lateralized in human AC. Thus, complex vocal communication processing shares common features among rodents and primates. We argue that a synthesis of results from humans, non-human primates and rodents is necessary to identify the neural circuitry of vocal communication processing. However, data from different species and methods are often difficult to compare. Recent advances may enable better integration of methods across species. Efforts to standardize data formats and analysis tools would benefit comparative research and enable synergies between psychological and biological research in the area of vocal communication processing.

    Repetition enhancement to voice identities in the dog brain

    In the human speech signal, cues of speech sounds and voice identities are conflated, but they are processed separately in the human brain. The processing of speech sounds and voice identities is typically performed by non-primary auditory regions in humans and non-human primates. Additionally, these processes exhibit functional asymmetry in humans, indicating the involvement of distinct mechanisms. Behavioural studies indicate analogous side biases in dogs, but neural evidence for this functional dissociation is missing. In two experiments, using an fMRI adaptation paradigm, we presented awake dogs with natural human speech that varied either in segmental (change in speech sound) or in suprasegmental (change in voice identity) content. In auditory regions, we found a repetition enhancement effect for voice identity processing in a secondary auditory region, the caudal ectosylvian gyrus. The same region did not show repetition effects for speech sounds, nor did the primary auditory cortex exhibit sensitivity to changes in either the segmental or the suprasegmental content. Furthermore, we found no evidence for functional asymmetry in the processing of either speech sounds or voice identities. Our results in dogs corroborate previous human and non-human primate evidence on the role of secondary auditory regions in the processing of suprasegmental cues, suggesting similar neural sensitivity to the identity of the vocalizer across the mammalian order.

    You talkin' to me? Communicative talker gaze activates left-lateralized superior temporal cortex during perception of degraded speech.

    Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin and Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze; further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task.

    Language experience impacts brain activation for spoken and signed language in infancy: Insights from unimodal and bimodal bilinguals

    Recent neuroimaging studies suggest that monolingual infants activate a left-lateralised fronto-temporal brain network in response to spoken language, which is similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months old): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, while sign language elicited activation in the right temporo-parietal area. A significant difference in brain lateralisation was observed between groups. Activation in the posterior temporal region was not lateralised in monolinguals or bimodal bilinguals, but was right-lateralised in response to both language modalities in unimodal bilinguals. This suggests that experience of two spoken languages influences brain activation for sign language when it is experienced for the first time. Multivariate pattern analyses (MVPA) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language, and that unimodal bilingual experience has a greater impact on early brain lateralisation than bimodal bilingual experience.
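
    The MVPA result above (classifying spoken- versus signed-language activation patterns, with accuracy tested against chance) follows a standard recipe: a cross-validated linear classifier scored against a permutation-based null distribution. The sketch below illustrates that recipe with scikit-learn; the data, shapes and labels are hypothetical placeholders, not the study's actual pipeline.

        # Minimal, hypothetical MVPA sketch: cross-validated linear
        # classification of spoken vs. signed language from distributed
        # activation patterns, with a permutation test for significance.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import LeaveOneOut, permutation_test_score

        rng = np.random.default_rng(0)
        n_trials, n_channels = 40, 22                # assumed trials x fNIRS channels
        X = rng.normal(size=(n_trials, n_channels))  # placeholder activation patterns
        y = np.repeat([0, 1], n_trials // 2)         # 0 = spoken, 1 = signed language

        clf = make_pipeline(StandardScaler(), LinearSVC())
        score, _, p_value = permutation_test_score(
            clf, X, y, cv=LeaveOneOut(), n_permutations=1000, scoring="accuracy"
        )
        print(f"proportion correct = {score:.2f}, permutation p = {p_value:.3f}")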

    Processing of language-specific stimuli in different language groups: an EEG study

    Two almost identical EEG experiments, conducted about one month apart, examined how the brain processes language-specific stimuli in Estonian (n = 15, aged 19-27 years) and Russian (n = 15, aged 18-27 years) native speakers. The stimuli were based on Estonian quantity distinctions, which have no structural counterpart for Russian speakers. Two linguistic stimulus sets (SADA, SAGI) and one physically similar tone stimulus set were used; stimuli differed from one another in duration and tonal change. During the EEG recording, participants watched a silent movie while the auditory stimuli were presented through headphones in a mismatch negativity (MMN) paradigm. A speech intelligibility test was administered at both sessions, and a self-report questionnaire was completed before testing. The tone stimuli elicited a more persistent, larger-amplitude MMN wave in both language groups, while the linguistic stimuli elicited a more pronounced MMN response among Estonian native speakers. The study lends modest support to previous findings, in that Estonians used both duration and pitch cues to discriminate quantities. Only a few conditions elicited an MMN among Russian native speakers, and it remained unclear whether this activity was driven by the duration cue, the pitch cue, or both. No consistent lateralization effect was found, nor any relationship with possible background factors (language abilities, musicality, language experience and, for Russian native speakers, time spent in an Estonian-language environment).
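
    The MMN itself is quantified simply: the averaged response to deviant stimuli minus the averaged response to standards, with the peak of the resulting difference wave read off in a typical 100-250 ms window. Below is a minimal sketch of that computation on synthetic data; every value is a placeholder, not a recording from this study.

        import numpy as np

        fs = 500                                  # assumed sampling rate (Hz)
        t = np.arange(-0.1, 0.4, 1 / fs)          # epoch from -100 to 400 ms

        rng = np.random.default_rng(0)
        standard_epochs = rng.normal(0, 1, (200, t.size))  # trials x samples
        deviant_epochs = rng.normal(0, 1, (40, t.size))
        # inject a toy negativity around 150 ms into the deviant trials
        deviant_epochs -= 2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))

        # MMN = deviant-minus-standard difference wave
        difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
        window = (t >= 0.10) & (t <= 0.25)        # typical MMN latency window
        mmn_amplitude = difference_wave[window].min()
        mmn_latency = t[window][difference_wave[window].argmin()]
        print(f"MMN peak: {mmn_amplitude:.2f} uV at {mmn_latency * 1e3:.0f} ms")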

    The causal role of left and right superior temporal gyri in speech perception in noise: a transcranial magnetic stimulation study

    Successful perception of speech in everyday listening conditions requires effective listening strategies to overcome common acoustic distortions, such as background noise. Convergent evidence from neuroimaging and clinical studies identifies activation within the temporal lobes as key to successful speech perception. However, current neurobiological models disagree on whether the left temporal lobe alone is sufficient for successful speech perception or whether bilateral processing is required. We addressed this issue by using TMS to selectively disrupt processing in either the left or the right superior temporal gyrus (STG) of healthy participants. Participants repeated keywords from sentences presented in background noise in a speech reception threshold task while receiving online repetitive TMS separately to the left STG, the right STG, or the vertex, or while receiving no TMS. Results show an equal drop in performance following TMS to either the left or the right STG during the task. A separate group of participants performed a visual discrimination threshold task to control for the confounding side effects of TMS. Results show no effect of TMS on the control task, supporting the notion that the results of Experiment 1 can be attributed to modulation of cortical functioning in the STG rather than to side effects associated with online TMS. These results indicate that successful speech perception in everyday listening conditions requires both the left and the right STG, and thus have ramifications for our understanding of the neural organization of spoken language processing.
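
    A speech reception threshold task of the kind used above typically adapts the signal-to-noise ratio trial by trial toward a criterion accuracy. Below is a minimal 1-up/1-down staircase sketch; the step size, starting SNR, scoring rule and threshold estimator are illustrative assumptions, not the procedure reported in the study.

        import random

        def run_srt_staircase(score_trial, n_trials=30, start_snr=0.0, step=2.0):
            """score_trial(snr) -> True if the keywords were repeated correctly."""
            snr, track = start_snr, []
            for _ in range(n_trials):
                correct = score_trial(snr)
                track.append(snr)
                # 1-up/1-down: harder after a hit, easier after a miss;
                # this rule converges on the 50%-correct point
                snr += -step if correct else step
            return sum(track[-10:]) / 10          # crude SRT estimate from late trials

        # toy listener whose true 50% threshold sits at -4 dB SNR
        random.seed(0)
        srt = run_srt_staircase(lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 4) / 2)))
        print(f"estimated SRT = {srt:.1f} dB SNR")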

    Towards a cognitive neuroscience of prosody perception and its modulation by alexithymia

    This dissertation examines which network in the human brain is involved in the perception of prosody and whether activity within this network is modulated by the personality trait alexithymia. The first four chapters reveal that a bihemispheric network consisting of Heschl's gyrus, the middle superior temporal gyrus, the posterior superior temporal gyrus and the pars opercularis of the inferior frontal gyrus is involved in the perception of emotional prosody. Furthermore, relative right-hemispheric specialization can be demonstrated for emotional prosody perception, whereas no hemispheric specialization is found for linguistic prosody perception. Moreover, hemispheric specialization for emotional prosody perception seems to be driven by hemispheric specialization for non-prosody-specific acoustic dimensions of the speech signal, rather than for abstract emotional processing. Additionally, automaticity of processing can be demonstrated for emotional prosody, particularly for anger, but not for emotional music. Finally, alexithymia does indeed modulate activity within the emotional prosody perception network, particularly at relatively early components of the emotional prosody perception pathway. This dissertation is of interest to neurolinguists, (neuro-)phoneticians, psychologists, cognitive neuroscientists, comparative biologists and neurologists specialized in aphasia.