Laminar mixing of heterogeneous axisymmetric coaxial confined jets: Final report
Laminar mixing of heterogeneous axisymmetrical coaxial confined jets for application to nuclear rocket propulsion
Comparing unfamiliar voice and face identity perception using identity sorting tasks
Identity sorting tasks, in which participants sort multiple naturally varying stimuli of usually two identities into perceived identities, have recently gained popularity in voice and face processing research. In both modalities, participants who are unfamiliar with the identities tend to perceive multiple stimuli of the same identity as different people and thus fail to "tell people together." These similarities across modalities suggest that modality-general mechanisms may underpin sorting behaviour. In this study, participants completed a voice sorting and a face sorting task. Taking an individual differences approach, we asked whether participants' performance on voice and face sorting of unfamiliar identities is correlated. Participants additionally completed a voice discrimination (Bangor Voice Matching Test) and a face discrimination task (Glasgow Face Matching Test). Using these tasks, we tested whether performance on sorting related to explicit identity discrimination. Performance on voice sorting and face sorting tasks was correlated, suggesting that common modality-general processes underpin these tasks. However, no significant correlations were found between sorting and discrimination performance, with the exception of significant relationships for performance on "same identity" trials with "telling people together" for voices and faces. Overall, any reported relationships were however relatively weak, suggesting the presence of additional modality-specific and task-specific processes
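The individual-differences approach described above boils down to correlating per-participant scores on the two sorting tasks. A minimal sketch, using simulated hypothetical scores (not the study's data) in which both tasks share a modality-general component:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

# Simulated sorting accuracy: both tasks draw on a shared "modality-general" ability.
shared_ability = rng.normal(0.0, 1.0, n)
voice_sorting = 0.70 + 0.05 * shared_ability + rng.normal(0.0, 0.03, n)
face_sorting = 0.72 + 0.05 * shared_ability + rng.normal(0.0, 0.03, n)

# Pearson correlation between the two tasks across participants.
r = np.corrcoef(voice_sorting, face_sorting)[0, 1]
print(f"voice-face sorting correlation: r = {r:.2f}")
```

Because the simulated shared component dominates the task-specific noise, the correlation comes out clearly positive, mirroring the reported voice-face sorting relationship.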
Highly accurate and robust identity perception from personally familiar voices
Previous research suggests that familiarity with a voice can afford benefits for voice and speech perception. However, even familiar voice perception has been reported to be error-prone in previous research, especially in the face of challenges such as reduced verbal cues and acoustic distortions. It has been hypothesised that such findings may arise due to listeners not being "familiar enough" with the voices used in laboratory studies, and thus being inexperienced with their full vocal repertoire. By extension, voice perception based on highly familiar voices - acquired via substantial, naturalistic experience - should therefore be more robust than voice perception from less familiar voices. We investigated this proposal by contrasting voice perception of personally-familiar voices (participants' romantic partners) versus lab-trained voices in challenging experimental tasks. Specifically, we tested how differences in familiarity may affect voice identity perception from non-verbal vocalisations and acoustically-modulated speech. Large benefits for the personally-familiar voice over less familiar, lab-trained voices were found for identity recognition, with listeners displaying both highly accurate yet more conservative recognition of personally familiar voices. However, no familiar-voice benefits were found for speech comprehension against background noise. Our findings suggest that listeners have fine-tuned representations of highly familiar voices that result in more robust and accurate voice recognition despite challenging listening contexts, yet these advantages may not always extend to speech perception. Our study therefore highlights that familiarity is indeed a continuum, with identity perception for personally-familiar voices being highly accurate
How many voices did you hear? Natural variability disrupts identity perception from unfamiliar voices
Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within-person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need to not only "tell people apart" (perceiving exemplars from two different speakers as separate identities) but also "tell people together" (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within-person variability affects voice identity perception. Using voices from a popular TV show, listeners, who were either familiar or unfamiliar with this show, sorted naturally varying voice clips from two speakers into clusters to represent perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker to be different identities. These findings point towards a selective failure in "telling people together". Our study highlights within-person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re-evaluation of theoretical models to account for natural variability during identity perception
The influence of perceived vocal traits on trusting behaviours in an economic game.
When presented with voices, we make rapid, automatic judgements of social traits such as trustworthiness, and such judgements are highly consistent across listeners. However, it remains unclear whether voice-based first impressions actually influence behaviour towards a voice's owner, and, if they do, whether and how they interact over time with the voice owner's observed actions to further influence the listener's behaviour. This study used an investment game paradigm to investigate (1) whether voices judged to differ in relevant social traits accrued different levels of investment and/or (2) whether first impressions of the voices interacted with the behaviour of their apparent owners to influence investments over time. Results show that participants were responding to their partner's behaviour. Crucially, however, there were no effects of voice. These findings suggest that, at least under some conditions, social traits perceived from the voice alone may not influence trusting behaviours in the context of a virtual interaction
The effects of high variability training on voice identity learning.
High variability training has been shown to benefit the learning of new face identities. In three experiments, we investigated whether this is also the case for voice identity learning. In Experiment 1a, we contrasted high variability training sets - which included stimuli extracted from a number of different recording sessions, speaking environments and speaking styles - with low variability stimulus sets that only included a single speaking style (read speech) extracted from one recording session (see Ritchie & Burton, 2017 for faces). Listeners were tested on an old/new recognition task using read sentences (i.e. test materials fully overlapped with the low variability training stimuli) and we found a high variability disadvantage. In Experiment 1b, listeners were trained in a similar way, however, now there was no overlap in speaking style or recording session between training sets and test stimuli. Here, we found a high variability advantage. In Experiment 2, variability was manipulated in terms of the number of unique items as opposed to number of unique speaking styles. Here, we contrasted the high variability training sets used in Experiment 1a with low variability training sets that included the same breadth of styles, but fewer unique items; instead, individual items were repeated (see Murphy, Ipser, Gaigg, & Cook, 2015 for faces). We found only weak evidence for a high variability advantage, which could be explained by stimulus-specific effects. We propose that high variability advantages may be particularly pronounced when listeners are required to generalise from trained stimuli to different-sounding, previously unheard stimuli. We discuss these findings in the context of mechanisms thought to underpin advantages for high variability training
Perceptual prioritization of self-associated voices
Information associated with the self is prioritized relative to information associated with others and is therefore processed more quickly and accurately. Across three experiments, we examined whether a new externally-generated voice could become associated with the self and thus be prioritized in perception. In the first experiment, participants learned associations between three unfamiliar voices and three identities (self, friend, stranger). Participants then made speeded judgements of whether voice-identity pairs were correctly matched, or not. A clear self-prioritization effect was found, with participants showing quicker and more accurate responses to the newly self-associated voice relative to either the friend- or stranger-voice. In two further experiments, we tested whether this prioritization effect increased if the self-voice was gender-matched to the identity of the participant (Experiment 2) or if the self-voice was chosen by the participant (Experiment 3). Gender-matching did not significantly influence prioritization; the self-voice was similarly prioritized when it matched the gender identity of the listener as when it did not. However, we observed that choosing the self-voice did interact with prioritization (Experiment 3); the self-voice became more prominent, via lesser prioritization of the other identities, when the self-voice was chosen relative to when it was not. Our findings have implications for the design and selection of individuated synthetic voices used for assistive communication devices, suggesting that agency in choosing a new vocal identity may modulate the distinctiveness of that voice relative to others
Similar representations of emotions across faces and voices
Emotions are a vital component of social communication, carried across a range of modalities and via different perceptual signals such as specific muscle contractions in the face and in the upper respiratory system. Previous studies have found that emotion recognition impairments after brain damage depend on the modality of presentation: recognition from faces may be impaired whilst recognition from voices remains preserved, and vice versa. On the other hand, there is also evidence for shared neural activation during emotion processing in both modalities. In a behavioural study, we investigated whether there are shared representations in the recognition of emotions from faces and voices. We used a within-subjects design in which participants rated the intensity of facial expressions and non-verbal vocalisations for each of the six basic emotion labels. For each participant and each modality, we then computed a representation matrix with the intensity ratings of each emotion. These matrices allowed us to examine the patterns of confusions between emotions and to characterise the representations of emotions within each modality. We then compared the representations across modalities by computing the correlations of the representation matrices across faces and voices. We found highly correlated matrices across modalities, which suggest similar representations of emotions across faces and voices. We also showed that these results could not be explained by commonalities between low-level visual and acoustic properties of the stimuli. We thus propose that there are similar or shared coding mechanisms for emotions which may act independently of modality, despite their distinct perceptual inputs. This research was supported by an ESRC 1+3 PhD studentship to Lisa Kuhn (ES/I90042X/1)
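The matrix-correlation step described above can be sketched as follows. The data here are hypothetical (random ratings with a shared diagonal, i.e. each emotion rated most intense on its own label), and the 6x6 layout and Pearson correlation of the flattened matrices are assumptions for illustration, not the study's exact analysis:

```python
import numpy as np

# Hypothetical 6x6 representation matrices: rows = expressed emotion,
# columns = rated emotion label; cells = mean intensity rating (0-1).
emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
rng = np.random.default_rng(1)

# Shared structure: strong diagonal (correct label), mild off-diagonal confusions.
base = np.eye(6) * 0.8 + 0.1
faces = base + rng.uniform(0.0, 0.05, (6, 6))
voices = base + rng.uniform(0.0, 0.05, (6, 6))

# Correlate the flattened matrices across modalities (Pearson r).
r = np.corrcoef(faces.ravel(), voices.ravel())[0, 1]
print(f"cross-modal matrix correlation: r = {r:.2f}")
```

A high r here reflects that the two matrices share their confusion structure, which is the sense in which highly correlated matrices indicate similar emotion representations across faces and voices.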
- …