9,667 research outputs found

    Similarities in face and voice cerebral processing

    In this short paper I illustrate, through a few selected examples, several compelling similarities in the functional organization of face and voice cerebral processing: (1) the presence of cortical areas selective to face or voice stimuli, also observed in non-human primates and causally related to perception; (2) the coding of face or voice identity using a “norm-based” scheme; (3) personality inferences from faces and voices mapping onto the same Trustworthiness–Dominance “social space”.
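
    The “norm-based” identity coding mentioned in (2) can be illustrated with a minimal sketch (an assumption-laden toy example, not the paper's model): each face or voice is treated as a point in a feature space, and an identity is coded by its deviation from the population average (the “norm”). All feature values and names below are hypothetical.

        import numpy as np

        def norm_based_code(features):
            """Code each stimulus as its deviation from the population norm.

            features: (n_stimuli, n_features) array of hypothetical face or voice
            measurements (e.g., pitch and formant values, or face-shape coordinates).
            """
            norm = features.mean(axis=0)   # the average "prototype" face/voice
            return features - norm         # identity = direction and distance from the norm

        # Toy example: distinctiveness grows with distance from the norm,
        # the core prediction of norm-based (as opposed to exemplar-based) coding.
        rng = np.random.default_rng(0)
        voices = rng.normal(size=(5, 3))   # 5 hypothetical voices, 3 features each
        distinctiveness = np.linalg.norm(norm_based_code(voices), axis=1)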

    People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus

    The superior temporal sulcus (STS) has been implicated in a number of studies, including those investigating face perception, voice perception, and face–voice integration. However, the nature of the STS preference for these ‘social stimuli’ remains unclear, as does the location within the STS of specific types of information processing. The aim of this study was to directly examine the properties of the STS in terms of its selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas preferring both faces and voices (i.e., ‘people-selective’ regions) and audiovisual regions that specifically integrate person-related information. Results highlighted a ‘people-selective, heteromodal’ region in the trunk of the right STS that was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people as compared to objects. These results point towards a dedicated role of the STS as a ‘social-information processing’ centre.

    Explaining Schizophrenia: Auditory Verbal Hallucination and Self‐Monitoring

    Does the self‐monitoring account, a dominant account of the positive symptoms of schizophrenia, explain auditory verbal hallucination? In this essay, I argue that the account fails to answer crucial questions any explanation of auditory verbal hallucination must address. Where the account does provide a plausible answer, I make the case for an alternative explanation: auditory verbal hallucination is not the result of a failed control mechanism, namely failed self‐monitoring, but rather of the persistent automaticity of auditory experience of a voice. My argument emphasizes the importance of careful examination of phenomenology as providing substantive constraints on causal models of the positive symptoms in schizophrenia.

    Investigating the Neural Correlates of Voice versus Speech-Sound Directed Information in Pre-School Children

    Studies in sleeping newborns and infants suggest that the superior temporal sulcus (STS) is involved in speech processing soon after birth. Speech processing also implicitly requires analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, owing to the technical and practical challenges of neuroimaging young children, evidence on the neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2–6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. The fMRI results reveal common brain regions responsible for voice-specific and speech-sound-specific processing of spoken object words, including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound-specific processing predominantly activates the anterior part of the right-hemispheric STS. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right STS as a temporal voice area and indicates that this brain region is already specialized, and functions similarly to that of adults, by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain, which may further our understanding of the neuronal mechanisms of speech-specific processing in children with developmental disorders such as autism or specific language impairment.

    Cerebral processing of voice gender studied using a continuous carryover fMRI design

    Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate two stages of cerebral processing during voice gender categorization. Using voice morphing together with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex, including the anterior part of the temporal voice areas in the right hemisphere, responded primarily to the acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, operating in tandem with the prefrontal cortex in voice gender perception.
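
    The carryover logic can be sketched in a few lines (hypothetical morph values, not the authors' analysis pipeline): each stimulus is a position along a male–female morph continuum, and the regressor of interest is the acoustic distance between each stimulus and the one heard just before it; adaptation predicts a larger response when that distance is larger.

        import numpy as np

        # Hypothetical stimulus sequence: positions along a 0 (male) .. 1 (female)
        # voice-morph continuum, presented in a serially counterbalanced order.
        morph_positions = np.array([0.0, 0.75, 0.25, 1.0, 0.5, 0.0, 1.0, 0.25])

        # Carryover regressor: absolute distance from the previously heard stimulus
        # (set to 0 for the first trial, which has no predecessor).
        distance_from_previous = np.abs(
            np.diff(morph_positions, prepend=morph_positions[0])
        )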

    Voice and speech perception in autism: a systematic review

    Autism spectrum disorders (ASD) are characterized by persistent impairments in social communication and interaction, and by restricted, repetitive behavior. The original description of autism by Kanner (1943) already emphasized the presence of emotional impairments (self-absorbed, emotionally cold, distanced, and retracted). However, little research has focused on the auditory perception of vocal emotional cues; audio-visual comprehension has more commonly been explored instead. Like faces, voices play an important role in the social interaction contexts in which individuals with ASD show impairments. The aim of the current systematic review was to integrate evidence from behavioral and neurobiological studies for a more comprehensive understanding of voice processing abnormalities in ASD. Among the different types of information that the human voice may provide, we hypothesize that individuals with ASD show particular deficits in processing vocal affect information. The relationship between impairments in processing vocal stimuli and disrupted Theory of Mind in autism is discussed. Moreover, because ASD are characterized by deficits in social reciprocity, we further discuss the abnormal oxytocin system in individuals with ASD as a possible biological marker for abnormal vocal affect processing and social interaction skills in the ASD population.

    Mental simulations in comprehension of direct versus indirect speech quotations

    In human communication, direct speech (e.g., Mary said: ‘I’m hungry’) coincides with vivid paralinguistic demonstrations of the reported speech acts, whereas indirect speech (e.g., Mary said [that] she was hungry) provides mere descriptions of what was said. Hence, direct speech is usually more vivid and perceptually engaging than indirect speech. This thesis explores how this vividness distinction between the two reporting styles shapes language comprehension. Using functional magnetic resonance imaging (fMRI), we found that in both silent reading and listening, direct speech elicited higher brain activity in the voice-selective areas of the auditory cortex than indirect speech, consistent with the intuition of an ‘inner voice’ experience during comprehension of direct speech. In follow-up behavioural investigations, we demonstrated that this ‘inner voice’ experience can be characterised in terms of modulations of speaking rate, reflected both in articulation (oral reading) and in eye-movement patterns (silent reading). Moreover, we observed context-concordant modulations of pitch and loudness in oral reading, but not straightforwardly in silent reading. Finally, we obtained preliminary results showing that, in addition to reported speakers’ voices, their facial expressions may also be encoded during silent reading of direct speech but not indirect speech. Together, the results show that individuals are more likely to mentally simulate or imagine reported speakers’ voices, and perhaps also their facial expressions, during comprehension of direct as opposed to indirect speech, indicating a more vivid representation of the former. The findings are in line with the demonstration hypothesis of direct speech (Clark & Gerrig, 1990) and with embodied theories of language comprehension (e.g., Barsalou, 1999; Zwaan, 2004), suggesting that sensory experiences with pragmatically distinct reporting styles underlie language comprehension.

    The effect of female voice on verbal processing

    Previous studies have suggested that female voices may impede verbal processing: for example, words were remembered less well and lexical decisions were slower when spoken by a female speaker. The current study tried to replicate this gender effect in an auditory semantic/associative priming task that excluded any effects of speaker variability, and extended previous research by examining the role of two voice features important in perceived gender: pitch and formant frequencies. Additionally, listener gender was included in the experimental design. Contrary to previous findings, the results show no evidence that lexical decisions to a target word are slower when it is spoken by a female speaker than by a male speaker, for either female or male listeners. Additionally, the semantic/associative priming effect was not affected by speaker gender, nor did the female speakers' mean pitch or formants predict the priming effect. At the behavioural level, the current study thus found no evidence for a gender effect in a semantic/associative priming task.
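
    As a rough illustration of how such a priming effect can be quantified (a sketch with made-up reaction times; the column names and values are hypothetical and not taken from the study), the effect is typically the mean lexical-decision reaction time for unrelated prime–target pairs minus that for related pairs, computed per speaker gender:

        import pandas as pd

        # Hypothetical trial-level lexical-decision data (reaction times in ms).
        trials = pd.DataFrame({
            "rt_ms":      [600, 610, 630, 640, 605, 615, 635, 645],
            "prime_type": ["related", "related", "unrelated", "unrelated"] * 2,
            "speaker":    ["female"] * 4 + ["male"] * 4,
        })

        # Priming effect = mean RT(unrelated) - mean RT(related), per speaker gender.
        means = trials.groupby(["speaker", "prime_type"])["rt_ms"].mean().unstack()
        priming_effect = means["unrelated"] - means["related"]
        print(priming_effect)  # comparable values for female and male speakers
                               # would indicate no speaker-gender modulation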