
    Listeners’ perceptions of the certainty and honesty of a speaker are associated with a common prosodic signature

    The success of human cooperation crucially depends on mechanisms enabling individuals to detect unreliability in their conspecifics. Yet, how such epistemic vigilance is achieved from naturalistic sensory inputs remains unclear. Here we show that listeners’ perceptions of the certainty and honesty of other speakers from their speech are based on a common prosodic signature. Using a data-driven method, we separately decode the prosodic features driving listeners’ perceptions of a speaker’s certainty and honesty across pitch, duration and loudness. We find that these two kinds of judgments rely on a common prosodic signature that is perceived independently from individuals’ conceptual knowledge and native language. Finally, we show that listeners extract this prosodic signature automatically, and that this impacts the way they memorize spoken words. These findings shed light on a unique auditory adaptation that enables human listeners to quickly detect and react to unreliability during linguistic interactions
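
    The abstract describes a data-driven decoding of the prosodic features (pitch, duration and loudness) that drive listeners’ judgments of certainty and honesty. As an illustration only, and not the authors’ pipeline, the sketch below shows one common way to extract summary measures of those three dimensions from a single recording with librosa; the input file and analysis parameters are assumptions made for the example.

    ```python
    # Illustrative sketch only: summary measures of the three prosodic dimensions named
    # in the abstract (pitch, duration, loudness). The input file and parameter choices
    # are assumptions for the example, not the study's actual pipeline.
    import numpy as np
    import librosa

    def prosodic_features(path):
        y, sr = librosa.load(path, sr=None)  # keep the native sampling rate

        # Pitch: frame-wise fundamental frequency (F0) from the pYIN tracker
        f0, voiced_flag, voiced_prob = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
        )
        mean_f0 = float(np.nanmean(f0))  # average F0 over voiced frames

        # Duration: total length of the utterance in seconds
        duration = librosa.get_duration(y=y, sr=sr)

        # Loudness: frame-wise root-mean-square energy, summarised by its mean
        mean_rms = float(librosa.feature.rms(y=y)[0].mean())

        return {"mean_f0_hz": mean_f0, "duration_s": duration, "mean_rms": mean_rms}

    print(prosodic_features("utterance.wav"))  # hypothetical input file
    ```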

    Pragmatics, Prosody, and Social Skills of School-Age Children with Language-Learning Differences

    Social skills are an important aspect of child development that continues to have influences in adolescence and adulthood (Hart, Olsen, Robinson, & Mandleco, 1997). Interacting in a social world requires an integration of many abilities, including social skills and an emotional understanding of oneself and others. Children who have difficulties interpreting social cues (e.g., identifying basic emotions and responding to cues in speech) face immediate and progressive consequences in both academics and social life. Children with typical language skills successfully interact with peers and follow the social rules of different environments (e.g., playing at school vs. playing at home). In contrast, children with language impairments struggle to use social skills, which results in negative experiences in peer interactions (Horowitz, Jansson, Ljungberg, & Hedenbro, 2006). This study explored the social profiles of second-grade children with a range of language abilities (i.e., children with low and high levels of language skill) as they interpreted emotions in speech and narrative tasks. Multiple informants (i.e., parents, teachers, a speech-language pathologist, and peers) evaluated social skills from different perspectives. A multi-interactional approach explained children’s social-emotional development from three theoretical perspectives: pragmatics, cognition, and emotional understanding. Forty-one second-grade children completed a battery of tests that evaluated cognitive measures, language ability, and social skills. Each participant completed three experimental tasks (perception, imitation, and narrative) that examined how children process emotional cues in speech and narratives. A sociometric classification profiled children’s social skills and peer relationships. Results indicated that children across the range of language abilities (i.e., children with low and high levels of language skill) processed emotional cues in speech. Four acoustic patterns were significantly related to how children differentiate emotions in speech. Additionally, language ability was a significant factor in the ability to infer emotions in narratives and to judge social skills. Children with high language scores were better liked by peers and received better ratings on the teacher questionnaires. This study provides preliminary evidence that children with low and high levels of language ability are able to interpret emotional cues in speech but differ in the ability to infer emotions in narratives
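
    The abstract reports that four acoustic patterns were significantly related to how children differentiate emotions in speech, without detailing the statistics. A minimal, hypothetical sketch of one such analysis, correlating a single acoustic cue of the stimuli with children’s identification accuracy, is given below; the cue, the numbers and the variable names are invented placeholders, not the study’s data.

    ```python
    # Hypothetical sketch: does an acoustic cue of the stimuli (here, F0 range) relate to
    # how accurately children identify the intended emotion? All values are invented.
    from scipy.stats import pearsonr

    f0_range_semitones = [2.1, 4.8, 6.3, 3.0, 7.5, 5.2, 1.8, 6.9]       # per-stimulus cue
    identification_accuracy = [0.45, 0.62, 0.80, 0.51, 0.88, 0.70, 0.40, 0.84]

    r, p = pearsonr(f0_range_semitones, identification_accuracy)
    print(f"r = {r:.2f}, p = {p:.3f}")  # a significant r would mirror the reported relation
    ```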

    The Production of Emotional Prosody in Varying Severities of Apraxia of Speech

    One speaker with mild AOS, one with moderate AOS, and one control speaker were asked to produce utterances with different emotional intent. In Experiment 1, the three subjects were asked to produce sentences with a happy, sad, or neutral intent through a repetition task. In Experiment 2, the three subjects were asked to produce sentences with either a happy or sad intent through a picture elicitation task. Paired t-tests on the acoustic analyses of each subject's utterances revealed significant differences in F0, duration, and intensity between the happy and sad sentences of the control speaker. There were no significant differences in the acoustic characteristics of the productions of the AOS speakers, suggesting that the AOS subjects were unable to volitionally produce the acoustic parameters that help convey emotion. Two more experiments were designed to determine whether naïve listeners could hear the acoustic cues that signal emotion in all three speakers. In Experiment 3, naïve listeners were asked to identify the sentences produced in Experiment 1 as happy, sad, or neutral. In Experiment 4, naïve listeners were asked to identify the sentences produced in Experiment 2 as either happy or sad. Chi-square findings revealed that the naïve listeners were able to identify the emotional differences of the control speaker and that the correct identification was not due to chance. The naïve listeners could not distinguish between the emotional utterances of the mild or moderate AOS speakers. Higher percentages of correct identification for certain sentences over others were artifacts attributed either to chance (the naïve listeners were guessing) or to a response strategy (when in doubt, the naïve listeners chose neutral or sad). The findings from Experiments 3 and 4 corroborate the acoustic findings from Experiments 1 and 2. In addition to the four structured experiments, spontaneous samples of happy, sad, and neutral utterances were collected and compared to the sentences produced in Experiments 1 and 2. Comparisons between the elicited and spontaneous sentences indicated that the moderate AOS subject was able to produce variations of F0 and duration similar to those that would be produced by normal speakers conveying emotion (Banse & Scherer, 1996; Lieberman & Michaels, 1962; Scherer, 1988). The mild AOS subject was unable to produce prosodic differences between happy and sad emotion. This study found that although these AOS subjects were unable to produce acoustic parameters that signal emotion during elicited speech, they were able to produce somewhat more variation in the acoustic properties of F0 and duration, especially the moderate AOS speaker. However, no meaningful variation pattern that would convey emotion (such as that seen in the control subject) was found. These findings suggest that the AOS subjects probably convey emotion non-verbally (e.g., facial expression, muscle tension, body language)
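
    The abstract names the two statistics used: paired t-tests comparing acoustic measures of matched happy and sad sentences, and chi-square tests on how often listeners identified the intended emotion. A hedged sketch of both with scipy follows; every number is an invented example, not data from the study.

    ```python
    # Sketch of the two analyses described in the abstract, with made-up example data.
    from scipy.stats import ttest_rel, chisquare

    # Paired t-test: the same sentences' mean F0 (Hz) spoken with happy vs. sad intent
    happy_f0 = [232, 245, 228, 251, 239, 248]
    sad_f0 = [198, 205, 192, 210, 201, 207]
    t, p = ttest_rel(happy_f0, sad_f0)
    print(f"paired t-test on F0: t = {t:.2f}, p = {p:.4f}")

    # Chi-square: listener choices for 60 "happy" sentences vs. chance over three options
    observed = [41, 11, 8]    # chosen happy / sad / neutral
    expected = [20, 20, 20]
    chi2, p = chisquare(observed, f_exp=expected)
    print(f"chi-square vs. chance: chi2 = {chi2:.2f}, p = {p:.4f}")
    ```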

    Irony in a second language: exploring the comprehension of Japanese speakers of English

    This thesis focuses on the extent to which non-native speakers of English understand potentially ironic utterances in a similar way to native speakers. Barbe (1995: 4) sees irony as one of ‘the final obstacles before achieving near native-speaker fluency.’ This assumption is supported by the findings of earlier studies (Bouton 1999; Lee 2002; Manowong 2011; Yamanaka 2003), which assumed a Gricean framework that sees irony as communicating the ‘opposite of what is said’ (Grice 1975, 1978). This thesis instead adopts the relevance-theoretic account of irony as echoic (Sperber and Wilson 1995; Wilson and Sperber 2012), arguing that previous work suffers from both problematic theoretical assumptions and flawed experimental methods. The thesis reports the findings of two experiments designed to examine similarities and differences between the responses of non-native speakers of English (here Japanese speakers) and native speakers, and how similar or different the effects of prosody are for these groups. The first experiment, conducted via an online survey, produced surprising results, suggesting that Japanese speakers can respond to potentially ironical utterances similarly to native speakers. The second experiment, focusing on the effects of prosody, compared the groups with regard to response trends. Three prosodic contours were used in this study, labelled ‘basic’ (a kind of default, unmarked tone), ‘deadpan’ (with a narrower pitch range), and ‘exaggerated’ (with a wider pitch range). The results indicated that Japanese participants could perceive English prosodic structure in ways similar to native speakers and were affected by prosodic contours in similar ways. The results also suggested that Japanese participants were affected less strongly by ‘exaggerated’ intonation and slightly more strongly by ‘deadpan’ tones. These findings suggest that a relevance-theoretic framework provides the means to carry out fuller investigations than those carried out previously and to develop a more systematic explanation of the understanding of irony in a second language

    It’s how you said it and what I heard: a comparison of motivational and emotional tone of voice

    Previous research has viewed motivational and emotional vocal expressions as the same (e.g., Meyer & Turner, 2006; Fontaine & Scherer, 2013), but until now no direct comparison of these types of prosody has been available. Building on the new motivational prosody literature (e.g., Weinstein, Zougkou & Paulmann, 2014; 2018), this series of studies was the first to explore the differences and similarities between these forms of prosody. Initially, contextually valid sentences were intoned in angry, joyful, supportive, and controlling tones of voice by trained speakers; these recordings were then acoustically analysed. Results revealed that each state was intoned with a different acoustic profile. Subsequently, exemplars were validated in a forced-choice categorisation study and acoustics were extracted again. Results confirmed that each state was communicated with a different configuration of vocal cues, indicating that emotional and motivational states do not share the same prosodic profiles. In a final study, the time-course processing of these constructs was investigated using an event-related potential (ERP) approach. Findings suggest that emotional and motivational prosody share similar processing time-courses and neural resources. Weak evidence indicated possible deviations in processing but was not strong enough to support any conclusions. Taken together, the results of this investigation suggest that emotional and motivational prosody are likely distinct constructs. We conclude that these constructs differ on an encoding level and that different vocal cues potentially lead to their effective recognition, but that they are similar with respect to how they are processed in the brain. Implications, limitations and directions for future research are discussed
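
    The abstract reports that each of the four states (angry, joyful, supportive, controlling) was intoned with a distinct acoustic profile, later validated in a forced-choice categorisation study. One way to illustrate the idea of separable acoustic profiles, though not the thesis’s actual analysis, is to check whether a simple classifier can recover the intended state from per-utterance acoustic features; the feature matrix below is a placeholder.

    ```python
    # Illustrative sketch only: if a trivial classifier separates the four states from
    # per-utterance acoustic features, the states carry distinct acoustic profiles.
    # Feature values are placeholders, not measurements from the thesis.
    import numpy as np
    from sklearn.neighbors import NearestCentroid
    from sklearn.model_selection import cross_val_score

    # Columns: mean F0 (Hz), F0 range (semitones), mean intensity (dB), speech rate (syll/s)
    X = np.array([
        [260, 9.0, 72, 5.8], [255, 8.5, 71, 5.9],    # angry
        [240, 10.5, 68, 5.2], [236, 11.0, 67, 5.4],  # joyful
        [200, 5.0, 62, 4.3], [205, 5.5, 63, 4.1],    # supportive
        [215, 4.0, 66, 4.9], [218, 4.5, 67, 5.0],    # controlling
    ])
    y = ["angry", "angry", "joyful", "joyful",
         "supportive", "supportive", "controlling", "controlling"]

    scores = cross_val_score(NearestCentroid(), X, y, cv=2)
    print("mean cross-validated accuracy:", scores.mean())  # above chance => distinct profiles
    ```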

    Proceedings of the VIIth GSCP International Conference

    The 7th International Conference of the Gruppo di Studi sulla Comunicazione Parlata, dedicated to the memory of Claire Blanche-Benveniste, chose as its main theme Speech and Corpora. The wide international origin of the 235 authors, from 21 countries and 95 institutions, led to papers on many different languages. The 89 papers of this volume reflect the themes of the conference: spoken corpora compilation and annotation, together with the related technological fields; the relation between prosody and pragmatics; speech pathologies; and various papers on phonetics, speech and linguistic analysis, pragmatics, and sociolinguistics. Many papers are also dedicated to speech and second-language studies. The online publication with FUP allows direct access to the sound and video linked to papers (when downloaded)