
    EXPRESS: Instrumental learning in social interactions: trait learning from faces and voices

    Recent research suggests that reinforcement learning may underlie trait formation in social interactions with faces (Hackel, Doll, & Amodio, 2015; Hackel, Mende-Siedlecki, & Amodio, 2020). The current study investigated whether the same learning mechanisms could be engaged for trait learning from voices. On each trial of a training phase, participants (N = 192) chose from pairs of human or slot machine targets that varied in (1) the reward value and (2) the generosity of their payouts. Targets were either auditory (voices or tones; Experiment 1) or visual (faces or icons; Experiment 2), and were presented sequentially before payout feedback. A test phase measured participant choice behaviour, and a post-test recorded their target preference ratings. For auditory targets, we found no effect of reward or generosity on target choices, but saw higher preference ratings for generous humans and slot machines. For visual targets, participants learned about both generosity and reward, but generosity was prioritised in the human condition. These findings demonstrate that (1) reinforcement learning of trait information with visual stimuli remains intact even when sequential presentation introduces a delay in feedback, and (2) learning about traits and reward in such paradigms is weakened when auditory stimuli are used.
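
    As a minimal sketch of the kind of learning mechanism this paradigm probes (a Rescorla-Wagner-style delta rule with softmax choice; the payout schedule, parameters, and variable names here are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.1           # learning rate (assumed)
beta = 3.0            # softmax inverse temperature (assumed)
values = np.zeros(4)  # one learned value per target (toy setup)

for trial in range(200):
    left, right = rng.choice(len(values), size=2, replace=False)
    # Softmax choice between the two presented targets
    p_left = 1.0 / (1.0 + np.exp(-beta * (values[left] - values[right])))
    chosen = left if rng.random() < p_left else right
    # Toy payout schedule: the first two targets are "better" on average
    payout = rng.normal(loc=1.0 if chosen < 2 else 0.5)
    # Delta rule: move the chosen target's value toward the obtained payout
    values[chosen] += alpha * (payout - values[chosen])
```

    In models of this kind, separate value estimates for monetary reward and for generosity could be maintained and updated in parallel, which is one way the observed prioritisation of generosity over reward could be expressed.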

    Convergence in voice fundamental frequency during synchronous speech

    Joint speech behaviours where speakers produce speech in unison are found in a variety of everyday settings, and have clinical relevance as a temporary fluency-enhancing technique for people who stutter. It is currently unknown whether such synchronisation of speech timing between two speakers is also accompanied by alignment in their vocal characteristics, for example in acoustic measures such as pitch. The current study investigated this by testing whether convergence in voice fundamental frequency (F0) between speakers could be demonstrated during synchronous speech. Sixty participants across two online experiments were audio recorded whilst reading a series of sentences, first on their own, and then in synchrony with another speaker (the accompanist) in a number of between-subject conditions. Experiment 1 demonstrated significant convergence in participants’ F0 to a pre-recorded accompanist voice, in the form of both upward (high F0 accompanist condition) and downward (low and extra-low F0 accompanist conditions) changes in F0. Experiment 2 demonstrated that such convergence was not seen during a visual synchronous speech condition, in which participants spoke in synchrony with silent video recordings of the accompanist. An audiovisual condition in which participants were able to both see and hear the accompanist in pre-recorded videos did not result in greater convergence in F0 compared to synchronisation with the pre-recorded voice alone. These findings suggest the need for models of speech motor control to incorporate interactions between self- and other-speech feedback during speech production, and suggest a novel hypothesis for the mechanisms underlying the fluency-enhancing effects of synchronous speech in people who stutter.
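
    A minimal sketch of how F0 convergence of this kind could be quantified, assuming the pYIN tracker in librosa (file names are hypothetical; any standard F0 estimator, e.g. Praat, would serve equally well):

```python
import numpy as np
import librosa

def median_f0(path):
    """Median voiced F0 (Hz) of a recording, estimated with pYIN."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    return np.nanmedian(f0[voiced]) if voiced.any() else np.nan

solo = median_f0("participant_solo.wav")    # baseline: reading alone
sync = median_f0("participant_sync.wav")    # reading in synchrony
accomp = median_f0("accompanist.wav")       # the accompanist recording

# Positive values indicate movement toward the accompanist's F0
convergence = abs(solo - accomp) - abs(sync - accomp)
```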

    A little more conversation, a little less action: Candidate roles for motor cortex in speech perception

    The motor theory of speech perception assumes that activation of the motor system is essential in the perception of speech. However, deficits in speech perception and comprehension do not arise from damage that is restricted to the motor cortex, few functional imaging studies reveal activity in motor cortex during speech perception, and the motor cortex is strongly activated by many different sound categories. Here, we evaluate alternative roles for the motor cortex in spoken communication and suggest a specific role in sensorimotor processing in conversation. We argue that motor cortex activation is essential in joint speech, particularly for the timing of turn-taking.

    A dual larynx motor networks hypothesis

    Humans are vocal modulators par excellence. This ability is supported in part by the dual representation of the laryngeal muscles in the motor cortex. Movement, however, is not the product of motor cortex alone but of a broader motor network. This network consists of brain regions that contain somatotopic maps that parallel the organization in motor cortex. We therefore present a novel hypothesis that the dual laryngeal representation is repeated throughout the broader motor network. In support of the hypothesis, we review existing literature that demonstrates the existence of network-wide somatotopy and present initial evidence for the hypothesis's plausibility. Understanding how this uniquely human phenotype in motor cortex interacts with broader brain networks is an important step toward understanding how humans evolved the ability to speak. We further suggest that this system may provide a means to study how individual components of the nervous system evolved within the context of neuronal networks. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.

    Highly Accurate and Robust Identity Perception From Personally Familiar Voices

    Previous research suggests that familiarity with a voice can afford benefits for voice and speech perception. However, even familiar voice perception has been reported to be error-prone, especially in the face of challenges such as reduced verbal cues and acoustic distortions. It has been hypothesised that such findings may arise due to listeners not being “familiar enough” with the voices used in laboratory studies, and thus being inexperienced with their full vocal repertoire. By extension, voice perception based on highly familiar voices – acquired via substantial, naturalistic experience – should therefore be more robust than voice perception from less familiar voices. We investigated this proposal by contrasting voice perception of personally-familiar voices (participants’ romantic partners) versus lab-trained voices in challenging experimental tasks. Specifically, we tested how differences in familiarity may affect voice identity perception from non-verbal vocalisations and acoustically-modulated speech. Large benefits for the personally-familiar voices over the less familiar, lab-trained voices were found for identity recognition, with listeners displaying highly accurate yet more conservative recognition of personally familiar voices. However, no familiar-voice benefits were found for speech comprehension against background noise. Our findings suggest that listeners have fine-tuned representations of highly familiar voices that result in more robust and accurate voice recognition despite challenging listening contexts, yet these advantages may not always extend to speech perception. Our study therefore highlights that familiarity is indeed a continuum, with identity perception for personally-familiar voices being highly accurate.
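
    The reported pattern of “highly accurate yet more conservative” recognition maps onto the signal detection distinction between sensitivity (d′) and response criterion (c). A minimal sketch of how both could be computed from recognition counts (the log-linear correction is one common convention, assumed here rather than taken from the paper):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from recognition counts,
    with a log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # higher = better discrimination
    criterion = -0.5 * (z_hit + z_fa)  # more positive = more conservative
    return d_prime, criterion

# e.g. a listener who is accurate but reluctant to say "that's my partner":
print(sdt_measures(hits=40, misses=10, false_alarms=2, correct_rejections=48))
```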

    Comparing unfamiliar voice and face identity perception using identity sorting tasks.

    Identity sorting tasks, in which participants sort multiple naturally varying stimuli of usually two identities into perceived identities, have recently gained popularity in voice and face processing research. In both modalities, participants who are unfamiliar with the identities tend to perceive multiple stimuli of the same identity as different people and thus fail to "tell people together." These similarities across modalities suggest that modality-general mechanisms may underpin sorting behaviour. In this study, participants completed a voice sorting and a face sorting task. Taking an individual differences approach, we asked whether participants' performance on voice and face sorting of unfamiliar identities is correlated. Participants additionally completed a voice discrimination (Bangor Voice Matching Test) and a face discrimination task (Glasgow Face Matching Test). Using these tasks, we tested whether performance on sorting related to explicit identity discrimination. Performance on voice sorting and face sorting tasks was correlated, suggesting that common modality-general processes underpin these tasks. However, no significant correlations were found between sorting and discrimination performance, with the exception of significant relationships for performance on "same identity" trials with "telling people together" for voices and faces. Overall, however, the reported relationships were relatively weak, suggesting the presence of additional modality-specific and task-specific processes.
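
    As an illustration of how a sorting response can be scored for the two error types, here is one plausible convention (the data layout and identity labels below are hypothetical, not the paper's exact analysis):

```python
from collections import defaultdict

# One sorting response: stimulus label -> pile the participant made.
# True identity is encoded in the (hypothetical) label prefix "A"/"B".
response = {"A1": 1, "A2": 1, "A3": 2, "B1": 3, "B2": 3, "B3": 1}

piles_per_identity = defaultdict(set)
identities_per_pile = defaultdict(set)
for stim, pile in response.items():
    piles_per_identity[stim[0]].add(pile)
    identities_per_pile[pile].add(stim[0])

# Failing to "tell people together": one identity split over several piles
telling_together_errors = sum(len(p) - 1 for p in piles_per_identity.values())
# Failing to "tell people apart": piles mixing more than one identity
telling_apart_errors = sum(1 for ids in identities_per_pile.values() if len(ids) > 1)

print(telling_together_errors, telling_apart_errors)  # -> 2 1
```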

    The Role of Sensory Feedback in Developmental Stuttering: A Review

    Developmental stuttering is a neurodevelopmental disorder that severely affects speech fluency. Multiple lines of evidence point to a role of sensory feedback in the disorder; this has led to a number of theories proposing different disruptions to the use of sensory feedback during speech motor control in people who stutter. The purpose of this review was to bring together evidence from studies using altered auditory feedback paradigms with people who stutter, in order to evaluate the predictions of these different theories. This review highlights converging evidence for particular patterns of differences in the responses of people who stutter to feedback perturbations. The implications for hypotheses on the nature of the disruption to sensorimotor control of speech in the disorder are discussed, with reference to neurocomputational models of speech control (predominantly, the DIVA model; Guenther et al., 2006; Tourville et al., 2008). While some consistent patterns are emerging from this evidence, it is clear that more work in this area is needed with developmental samples in particular, in order to tease apart differences related to symptom onset from those related to compensatory strategies that develop with experience of stuttering.
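
    To make the feedback-control framing concrete, here is a deliberately toy one-dimensional controller (explicitly not the DIVA model; the gain parameter and perturbation schedule are illustrative assumptions) showing how a reduced feedback gain yields slower, weaker compensation to a mid-trial feedback shift:

```python
import numpy as np

def produce_with_feedback(target, n_steps=50, perturbation=0.5, gain=0.8):
    """Toy feedback controller: the produced value is corrected on each
    step by the mismatch between the target and the (possibly shifted)
    auditory feedback. Not a model from any specific study."""
    state = target
    output = np.empty(n_steps)
    output[0] = state
    for t in range(1, n_steps):
        heard = state + (perturbation if t > n_steps // 2 else 0.0)
        error = target - heard          # mismatch in the feedback channel
        state = state + gain * error    # corrective update
        output[t] = state
    return output

typical = produce_with_feedback(target=1.0, gain=0.8)
reduced = produce_with_feedback(target=1.0, gain=0.3)  # weaker compensation
```

    Differences between groups in the size or speed of such compensatory responses are the kind of pattern the reviewed perturbation studies compare.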

    Comprehending auditory speech: previous and potential contributions of functional MRI

    Functional neuroimaging revolutionised the study of human language in the late twentieth century, allowing researchers to investigate its underlying cognitive processes in the intact brain. Here, we review how functional MRI (fMRI) in particular has contributed to our understanding of speech comprehension, with a focus on studies of intelligibility. We highlight the use of carefully controlled acoustic stimuli to reveal the underlying hierarchical organisation of speech processing systems and cortical (a)symmetries, and discuss the contributions of novel design and analysis techniques to the contextualisation of perisylvian regions within wider speech processing networks. Within this, we outline the methodological challenges of fMRI as a technique for investigating speech and describe the innovations that have overcome or mitigated these difficulties. Focussing on multivariate approaches to fMRI, we highlight how these techniques have allowed both local neural representations and broader scale brain systems to be described.
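
    As a minimal sketch of the multivariate (decoding) approach described here, assuming scikit-learn and synthetic stand-in data rather than real voxel patterns:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic single-trial "voxel patterns": 80 trials x 200 voxels, with a
# weak effect separating two conditions (e.g. intelligible vs degraded speech)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5   # condition effect in a subset of voxels

# Cross-validated decoding: above-chance accuracy implies the local
# patterns carry information about the condition
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")  # chance = 0.50
```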