    Behavioural and neural insights into the recognition and motivational salience of familiar voice identities

    The majority of voices encountered in everyday life belong to people we know, such as close friends, relatives, or romantic partners. However, research to date has overlooked this type of familiarity when investigating voice identity perception. This thesis aimed to address this gap in the literature through a detailed investigation of voice perception across different types of familiarity: personally familiar voices, famous voices, and lab-trained voices. The experimental chapters of the thesis cover two broad research topics: 1) measuring the recognition and representation of personally familiar voice identities in comparison with lab-trained identities, and 2) investigating motivation and reward in relation to hearing personally valued voices compared with unfamiliar voice identities. In the first of these, the extent of human voice recognition capabilities was explored using the personally familiar voices of romantic partners. The perceptual benefits of personal familiarity for voice and speech perception were examined, as well as how voice identity representations are formed through exposure to new voice identities. Personally familiar voices yielded highly robust representations in the face of perceptual challenges, greatly exceeding those found for lab-trained voices of varying levels of familiarity. Conclusions are drawn about the relevance of the amount and type of exposure for speaker recognition, the expertise we have with certain voices, and the framing of familiarity as a continuum rather than a binary categorisation. The second topic used the voices of famous singers, with their “super-fans” as listeners, to probe reward and motivational responses to hearing these valued voices in behavioural and neuroimaging experiments. In an effort-based decision-making task, listeners worked harder, as evidenced by faster reaction times, to hear their musical idol compared with less valued voices, and the neural correlates of these effects are reported and examined.
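
    The thesis abstract above does not include analysis code; the following is a minimal, hypothetical sketch of the kind of reaction-time comparison it describes, assuming per-listener mean reaction times for an "idol" and a "less valued" voice condition (all names and values are invented for illustration).

        # Illustrative sketch only: paired comparison of reaction times for
        # responses that earn a chance to hear the musical idol vs. a less
        # valued voice. Data and sample size are hypothetical.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_listeners = 24  # hypothetical sample size

        # Hypothetical mean reaction times (seconds) per listener and condition
        rt_idol = rng.normal(loc=0.55, scale=0.08, size=n_listeners)
        rt_less_valued = rng.normal(loc=0.62, scale=0.08, size=n_listeners)

        # Faster responses for the idol condition appear as a negative mean
        # difference in a paired t-test.
        t_stat, p_value = stats.ttest_rel(rt_idol, rt_less_valued)
        print(f"mean RT difference = {np.mean(rt_idol - rt_less_valued):.3f} s, "
              f"t({n_listeners - 1}) = {t_stat:.2f}, p = {p_value:.3f}")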

    Visual mechanisms for voice‐identity recognition flexibly adjust to auditory noise level

    Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so-called 'face-benefit' is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face-benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face-sensitive regions while participants recognised the identity of auditory-only speakers (previously learned by face) in high (SNR -4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face-benefit in both noise levels, for most participants (16 of 21). In high-noise, the recognition of face-learned speakers engaged the right posterior superior temporal sulcus motion-sensitive face area (pSTS-mFA), a region implicated in the processing of dynamic facial cues. The face-benefit in high-noise also correlated positively with increased functional connectivity between this region and voice-sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face-benefit. In low-noise, the face-benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS-mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice-identity recognition in auditory-only listening conditions.
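
    As a rough, hypothetical illustration of how the behavioural face-benefit described above might be quantified per participant and noise level (the study's actual pipeline is not reproduced here), one can take the difference in recognition accuracy between face-learned and control speakers; all numbers below are simulated.

        # Illustrative sketch only: per-participant face-benefit scores,
        # defined here as recognition accuracy for face-learned speakers minus
        # accuracy for control speakers, computed separately per noise level.
        import numpy as np

        rng = np.random.default_rng(1)
        n_participants = 21  # matches the sample size reported in the abstract

        # Simulated proportion-correct scores per condition and noise level
        acc = {
            ("face_learned", "high_noise"): rng.uniform(0.50, 0.90, n_participants),
            ("control", "high_noise"): rng.uniform(0.40, 0.80, n_participants),
            ("face_learned", "low_noise"): rng.uniform(0.60, 0.95, n_participants),
            ("control", "low_noise"): rng.uniform(0.55, 0.90, n_participants),
        }

        for noise in ("high_noise", "low_noise"):
            benefit = acc[("face_learned", noise)] - acc[("control", noise)]
            n_with_benefit = int(np.sum(benefit > 0))
            print(f"{noise}: mean face-benefit = {benefit.mean():.3f}, "
                  f"{n_with_benefit}/{n_participants} participants show a benefit")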

    Functional near infrared spectroscopy (fNIRS) to assess cognitive function in infants in rural Africa

    Cortical mapping of cognitive function during infancy is poorly understood in low-income countries due to the lack of transportable neuroimaging methods. We have successfully piloted functional near infrared spectroscopy (fNIRS) as a neuroimaging tool in rural Gambia. Four- to eight-month-old infants watched videos of Gambian adults perform social movements, while haemodynamic responses were recorded using fNIRS. We found distinct regions of the posterior superior temporal and inferior frontal cortex that evidenced either visual-social activation or vocally selective activation (vocal > non-vocal). The patterns of selective cortical activation in Gambian infants replicated those observed within similar aged infants in the UK. These are the first reported data on the measurement of localized functional brain activity in young infants in Africa and demonstrate the potential that fNIRS offers for field-based neuroimaging research of cognitive function in resource-poor rural communities.
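
    The abstract does not show the fNIRS analysis itself; below is a minimal, hypothetical sketch of a channel-wise vocal > non-vocal contrast of the sort described, using simulated oxyhaemoglobin responses (channel count, sample size, and values are invented).

        # Illustrative sketch only: paired t-test per fNIRS channel comparing
        # mean oxyhaemoglobin (HbO2) responses to vocal vs. non-vocal sounds.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n_infants, n_channels = 18, 20  # hypothetical sample and array layout

        # Simulated mean HbO2 responses (micromolar) per infant and channel
        hbo_vocal = rng.normal(0.20, 0.10, size=(n_infants, n_channels))
        hbo_nonvocal = rng.normal(0.10, 0.10, size=(n_infants, n_channels))

        # Channels with reliably larger responses to vocal than non-vocal
        # sounds would count as vocally selective in this toy analysis.
        t_vals, p_vals = stats.ttest_rel(hbo_vocal, hbo_nonvocal, axis=0)
        selective = np.flatnonzero(p_vals < 0.05)
        print("vocally selective channels (uncorrected p < .05):", selective.tolist())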

    Auditory communication in domestic dogs: vocal signalling in the extended social environment of a companion animal

    Domestic dogs produce a range of vocalisations, including barks, growls, and whimpers, which are shared with other canid species. The source–filter model of vocal production can be used as a theoretical and applied framework to explain how and why the acoustic properties of some vocalisations are constrained by physical characteristics of the caller, whereas others are more dynamic, influenced by transient states such as arousal or motivation. This chapter thus reviews how and why particular call types are produced to transmit specific types of information, and how such information may be perceived by receivers. As domestication is thought to have caused a divergence in the vocal behaviour of dogs as compared to the ancestral wolf, evidence of both dog–human and human–dog communication is considered. Overall, it is clear that domestic dogs have the potential to acoustically broadcast a range of information, which is available to conspecific and human receivers. Moreover, dogs are highly attentive to human speech and are able to extract speaker identity, emotional state, and even some types of semantic information.
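
    The chapter itself is a review rather than a methods paper, but as a hypothetical illustration of the source–filter logic mentioned above, the sketch below estimates fundamental frequency, a source-related parameter, from a synthetic harmonic signal using a simple autocorrelation method (sampling rate, F0, and signal are invented).

        # Illustrative sketch only: autocorrelation-based F0 estimation on a
        # synthetic harmonic "vocalisation".
        import numpy as np

        fs = 16000                      # sampling rate in Hz
        t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of signal
        f0_true = 220.0                 # hypothetical fundamental frequency
        # Synthetic harmonic source: sum of the first four harmonics
        signal = sum(np.sin(2 * np.pi * f0_true * k * t) / k for k in range(1, 5))

        # Autocorrelation; restrict the lag search to a plausible F0 range
        ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
        min_lag = int(fs / 600)         # ignore candidates above 600 Hz
        max_lag = int(fs / 60)          # ignore candidates below 60 Hz
        best_lag = np.argmax(ac[min_lag:max_lag]) + min_lag
        print(f"estimated F0 = {fs / best_lag:.1f} Hz (true F0 = {f0_true} Hz)")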

    Reduced neural sensitivity to social stimuli in infants at risk for autism

    In the hope of discovering early markers of autism, attention has recently turned to the study of infants at risk owing to being the younger siblings of children with autism. Because the condition is highly heritable, later-born siblings of diagnosed children are at substantially higher risk for developing autism or the broader autism phenotype than the general population. Currently, there are no strong predictors of autism in early infancy and diagnosis is not reliable until around 3 years of age. Because indicators of brain functioning may be sensitive predictors, and atypical social interactions are characteristic of the syndrome, we examined whether temporal lobe specialization for processing visual and auditory social stimuli during infancy differs in infants at risk. In a functional near-infrared spectroscopy study, infants aged 4–6 months at risk for autism showed less selective neural responses to social stimuli (auditory and visual) than low-risk controls. These group differences could not be attributed to overall levels of attention, developmental stage or chronological age. Our results provide the first demonstration of specific differences in localizable brain function within the first 6 months of life in a group of infants at risk for autism. Further, these differences closely resemble known patterns of neural atypicality in children and adults with autism. Future work will determine whether these differences in infant neural responses to social stimuli predict either later autism or the broader autism phenotype frequently seen in unaffected family members.
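
    The abstract notes that the group difference could not be attributed to attention, developmental stage or chronological age; as a hypothetical illustration of one way to adjust a group comparison for such a covariate, the sketch below regresses a simulated selectivity index on group membership and age (all values are invented, and this is not the study's analysis).

        # Illustrative sketch only: ordinary least-squares comparison of a
        # neural selectivity index between at-risk and low-risk infants,
        # adjusting for chronological age.
        import numpy as np

        rng = np.random.default_rng(3)
        n_per_group = 15  # hypothetical group size

        group = np.r_[np.ones(n_per_group), np.zeros(n_per_group)]  # 1 = at risk
        age_days = rng.uniform(120, 180, size=2 * n_per_group)      # roughly 4-6 months
        # Simulated selectivity index (social minus non-social response)
        selectivity = (0.30 - 0.15 * group + 0.001 * age_days
                       + rng.normal(0, 0.10, 2 * n_per_group))

        # Design matrix: intercept, group indicator, age covariate
        X = np.column_stack([np.ones_like(group), group, age_days])
        beta, *_ = np.linalg.lstsq(X, selectivity, rcond=None)
        print(f"age-adjusted group effect (at risk vs. low risk): {beta[1]:.3f}")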

    The acoustic bases of human voice identity processing in dogs

    Speech carries identity-diagnostic acoustic cues that help individuals recognize each other during vocal–social interactions. In humans, fundamental frequency, formant dispersion and harmonics-to-noise ratio serve as characteristics along which speakers can be reliably separated. The ability to infer a speaker’s identity is also adaptive for members of other species (like companion animals) for whom humans (as owners) are relevant. The acoustic bases of speaker recognition in non-humans are unknown. Here, we tested whether dogs can recognize their owner’s voice and whether they rely on the same acoustic parameters for such recognition as humans use to discriminate speakers. Stimuli were pre-recorded sentences spoken by the owner and control persons, played through loudspeakers placed behind two non-transparent screens (with each screen hiding a person). We investigated the association between acoustic distance of speakers (examined along several dimensions relevant in intraspecific voice identification) and dogs’ behavior. Dogs chose their owner’s voice more often than those of the control persons, suggesting that they can identify it. Choosing success and time spent looking in the direction of the owner’s voice were positively associated, showing that looking time is an index of the ease of choice. Acoustic distances between speakers in mean fundamental frequency and jitter were positively associated with looking time, indicating that the shorter the acoustic distance between speakers with regard to these parameters, the harder the decision. So, dogs use these cues to discriminate their owner’s voice from unfamiliar voices. These findings reveal that dogs use some but probably not all acoustic parameters that humans use to identify speakers. Although dogs can detect fine changes in speech, their perceptual system may not be fully attuned to identity-diagnostic cues in the human voice.
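
    The study's own analysis is not reproduced here; as a hypothetical illustration of the relationship described above, the sketch below relates a simple acoustic distance between the owner's and a control voice (along mean F0 and jitter) to simulated looking times. All values are invented; the built-in positive association simply mirrors the direction reported in the abstract.

        # Illustrative sketch only: correlate a two-parameter acoustic distance
        # between speakers with dogs' looking times toward the owner's voice.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n_trials = 28  # hypothetical number of dog-owner pairs

        # Simulated absolute differences between the two speakers per trial
        delta_f0 = np.abs(rng.normal(0, 30.0, n_trials))      # mean F0 difference (Hz)
        delta_jitter = np.abs(rng.normal(0, 0.5, n_trials))   # jitter difference (%)

        def zscore(x):
            return (x - x.mean()) / x.std()

        # Simple acoustic distance: Euclidean distance over z-scored parameters
        distance = np.sqrt(zscore(delta_f0) ** 2 + zscore(delta_jitter) ** 2)

        # Simulated looking times (s), built to show a positive association
        looking_time = 2.0 + 0.5 * distance + rng.normal(0, 0.5, n_trials)

        rho, p = stats.spearmanr(distance, looking_time)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")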

    What Do We Experience When Listening to a Familiar Language?

    What do we systematically experience when hearing an utterance in a familiar language? A popular and intuitive answer has it that we experience understanding an utterance or what the speaker said or communicated by uttering a sentence. Understanding a meaning conveyed by the speaker is an important element of linguistic communication that might be experienced in such cases. However, in this paper I argue that two other elements that typically accompany the production of spoken linguistic utterances should be carefully considered when we address the question of what is systematically experienced when we listen to utterances in a familiar language. First, when we listen to a familiar language we register various prosodic phenomena that speakers routinely produce. Second, we typically register stable vocal characteristics of speakers, such as pitch, tempo or accent, that are often systematically related to various properties of the speaker. Thus, the answer to the question of what we typically experience when listening to a familiar language is likely to be a complex one. Dedicated attention is needed to understand the nature and scope of phenomenology that pertains to linguistic communication. This paper lays some groundwork for that project.