
    The development of emotion recognition from facial expressions and non-linguistic vocalizations during childhood

    Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non-linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4- to 11-year-olds and adults. Eighty-eight 4- to 11-year-olds and 21 adults participated. Participants viewed or listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%); the vocal stimuli were non-linguistic tones. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4- to 5-, 6- to 9-, and 10- to 11-year-olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than the older groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory than facial emotion processing. The results have important implications for programmes aiming to improve children's socio-emotional competence.
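    The abstract's core measures are identification accuracy and response bias per emotion and modality. Below is a minimal Python sketch of how such scores could be computed from trial-level data; the data layout, column names, and the bias definition (label choice rate relative to chance) are illustrative assumptions, not the authors' actual analysis.

```python
import pandas as pd

# Toy trial-level data; column names and values are illustrative only.
trials = pd.DataFrame({
    "age_group": ["4-5", "4-5", "6-9", "adult"],
    "modality":  ["face", "voice", "face", "voice"],
    "emotion":   ["angry", "sad", "happy", "sad"],   # stimulus emotion
    "response":  ["angry", "happy", "happy", "sad"], # label the child chose
})

trials["correct"] = trials["emotion"] == trials["response"]

# Accuracy: proportion of correct identifications per group/modality/emotion.
accuracy = trials.groupby(["age_group", "modality", "emotion"])["correct"].mean()

# Response bias: how often each label is chosen relative to chance
# (1/3 for three emotions), regardless of the stimulus actually shown.
n_labels = trials["response"].nunique()
bias = (trials.groupby(["age_group", "modality"])["response"]
              .value_counts(normalize=True) - 1 / n_labels)

print(accuracy, bias, sep="\n\n")
```

    A signal-detection measure (e.g., criterion c per emotion) would be a common alternative operationalization of bias; the chance-relative choice rate above is just the simplest version.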

    Intonation development from five to thirteen

    Research undertaken to date suggests that important developments in the understanding and use of intonation may take place after the age of 5;0. The present study aims to provide a more comprehensive account of these developments. A specially designed battery of prosodic tasks was administered to four groups of thirty children from London (U.K.), with mean ages of 5;6, 8;7, 10;10 and 13;9. The tasks tap comprehension and production of functional aspects of intonation, in four communicative areas: CHUNKING (i.e. prosodic phrasing), AFFECT, INTERACTION and FOCUS. Results indicate that there is considerable variability among children within each age band on most tasks. The ability to produce intonation functionally is largely established in five-year-olds, though some specific functional contrasts are not mastered until C.A. 8;7. Aspects of intonation comprehension continue to develop up to C.A. 10;10, correlating with measures of expressive and receptive language development.

    Wordsworth's Aeneid and the influence of its eighteenth-century predecessors

    William Wordsworth's attempt at translating Virgil's Aeneid reached as far as Book 4, and mostly survives in manuscript drafts. The literary influences behind it can be illuminated through the poet's correspondence, and analysed more fully by tracing verbal echoes and other resonances in his translation. Despite the hostility he expressed towards Dryden and Pope, the foremost translators of the previous age, Wordsworth followed them in using heroic couplets, and, as has previously been argued, his translation draws increasingly on Dryden's Aeneis the further he advanced with his project. But Wordsworth owes an equally large debt, hitherto unrecognized, to the eighteenth-century blank verse renderings by Joseph Trapp and others, who anticipated many of his supposed stylistic innovations.

    Prosodic development in European Portuguese from childhood to adulthood

    We describe the European Portuguese version of a test of prosodic abilities originally developed for English: the Profiling Elements of Prosody in Speech-Communication (Peppé & McCann, 2003). Using this test, we examined the development of several components of European Portuguese prosody between 5 and 20 years of age (N = 131). Results showed prosodic performance improving with age: 5-year-olds reached adultlike performance in the affective prosodic tasks; 7-year-olds mastered the ability to discriminate and produce short prosodic items, as well as the ability to understand question versus declarative intonation; 8-year-olds mastered the ability to discriminate long prosodic items; 9-year-olds mastered the ability to produce question versus declarative intonation, as well as the ability to identify focus; 10- to 11-year-olds mastered the ability to produce long prosodic items; 14- to 15-year-olds mastered the ability to comprehend and produce syntactically ambiguous utterances disambiguated by prosody; and 18- to 20-year-olds mastered the ability to produce focus. Cross-linguistic comparisons showed that linguistic form–meaning relations do not necessarily develop at the same pace across languages. Some prosodic contrasts are hard to achieve for younger Portuguese-speaking children, namely, the production of chunking and focus.

    Infant Preferences for Two Properties of Infant-Directed Speech

    This study examined preferences for prosodic and structural properties of infant-directed speech (IDS) in 20 infants, 11 girls and 9 boys, ages 0;11;3 to 0;13;0 (mean age 0;11;28). It was hypothesized that year-old infants would demonstrate a preference for infant-directed structure (IS) over adult-directed structure (AS) regardless of prosody, and that infants would demonstrate no preference for either infant-directed prosody (IP) or adult-directed prosody (AP) regardless of structure. Listening times to passages were compared across infants for four conditions: IS/IP, IS/AP, AS/IP, and AS/AP. Results indicate a non-significant but noticeable trend toward a preference for infant-directed structure. In addition, weak correlations were found between vocabulary size and strength of preference for adult-directed prosody, and between age and strength of preference for adult-directed prosody. A non-significant but noticeable interaction was found between prosody, structure, and vocabulary. Overall, infants appear to prefer listening to infant-directed structure over adult-directed structure; more advanced language learners show a stronger preference for adult-directed prosody than do their less advanced age-mates; older infants show a stronger preference for adult-directed prosody than do younger infants; and preference for infant-directed structure (but not infant-directed prosody) depends on vocabulary level.
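    For readers wanting to see the shape of such an analysis, here is a minimal sketch of a structure-preference score (infant-directed minus adult-directed listening time) correlated with vocabulary size; the table layout, column names, and toy values are assumptions, not the study's data.

```python
import pandas as pd

# Toy listening-time data for the four conditions; values are illustrative.
data = pd.DataFrame({
    "infant":    [1, 1, 1, 1, 2, 2, 2, 2],
    "condition": ["IS/IP", "IS/AP", "AS/IP", "AS/AP"] * 2,
    "listen_ms": [9200, 8700, 7600, 7100, 8800, 9100, 7900, 7500],
    "vocab":     [12, 12, 12, 12, 40, 40, 40, 40],  # vocabulary size
})

data["structure"] = data["condition"].str[:2]  # IS vs. AS

# Structure preference: mean IS listening time minus mean AS listening time.
by_structure = data.pivot_table(index="infant", columns="structure",
                                values="listen_ms", aggfunc="mean")
is_pref = by_structure["IS"] - by_structure["AS"]

# Correlate the preference score with vocabulary size across infants.
vocab = data.groupby("infant")["vocab"].first()
print(is_pref.corr(vocab))
```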

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
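    The windowed gaze analysis is straightforward to express in code. The sketch below buckets fixations into the study's three time windows and totals looking time to prosody-congruent versus incongruent faces; the record format, column names, and values are assumptions, not the authors' pipeline.

```python
import pandas as pd

# The study's three analysis windows (ms from array onset).
WINDOWS = [("0-1250", 0, 1250), ("1250-2500", 1250, 2500),
           ("2500-5000", 2500, 5000)]

def window_of(onset_ms):
    """Return the name of the time window containing a fixation onset."""
    for name, lo, hi in WINDOWS:
        if lo <= onset_ms < hi:
            return name
    return None

# Toy fixation records; column names and values are illustrative only.
fixations = pd.DataFrame({
    "onset_ms":      [100, 900, 1400, 3000],
    "duration_ms":   [300, 250, 600, 800],
    "face_emotion":  ["fear", "anger", "fear", "happy"],
    "voice_emotion": ["fear", "fear", "fear", "fear"],
})

fixations["window"] = fixations["onset_ms"].map(window_of)
fixations["congruent"] = fixations["face_emotion"] == fixations["voice_emotion"]

# Total look duration and look count per window, split by congruency with
# the heard prosody; the congruency effect is the congruent advantage.
looks = (fixations.groupby(["window", "congruent"])["duration_ms"]
                  .agg(["sum", "count"]))
print(looks)
```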

    Neural pathways for visual speech perception

    This paper examines two questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

    Second language speech production: investigating linguistic correlates of comprehensibility and accentedness for learners at different ability levels

    The current project aimed to investigate the potentially different linguistic correlates of comprehensibility (i.e., ease of understanding) and accentedness (i.e., linguistic nativelikeness) in adult second language (L2) learners’ extemporaneous speech production. Timed picture descriptions from 120 beginner, intermediate, and advanced Japanese learners of English were analyzed using native-speaker global judgments of comprehensibility and accentedness, and then submitted to segmental, prosodic, temporal, lexical, and grammatical analyses. Results showed that comprehensibility was related to all linguistic domains, whereas accentedness was strongly tied to pronunciation (specifically segmentals) rather than lexical and grammatical domains. In particular, linguistic correlates of L2 comprehensibility and accentedness were found to vary by learners’ proficiency levels. In terms of comprehensibility, optimal rate of speech, appropriate and rich vocabulary use, and adequate and varied prosody were important for beginner to intermediate levels, whereas segmental accuracy, good prosody, and correct grammar featured strongly for intermediate to advanced levels. For accentedness, grammatical complexity was a feature of intermediate to high-level performance, whereas segmental and prosodic variables were essential to accentedness across all levels. These findings suggest that syllabi tailored to learners’ proficiency level (beginner, intermediate, or advanced) and learning goal (comprehensibility or nativelike accent) would be advantageous for the teaching of L2 speaking.
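    As a rough illustration of relating linguistic measures to the two global judgments, the sketch below computes a simple correlational profile per outcome; the specific measures, the rating scale, and the values are placeholder assumptions, and the original study's analyses were more extensive.

```python
import pandas as pd

# Toy per-speaker measures and mean ratings; names and values are illustrative.
speakers = pd.DataFrame({
    "speech_rate":       [2.1, 3.4, 2.8, 3.9],      # syllables per second
    "segmental_errors":  [0.30, 0.12, 0.22, 0.05],  # error rate
    "lexical_richness":  [0.41, 0.62, 0.55, 0.71],  # type/token-style index
    "comprehensibility": [4.2, 7.1, 5.8, 8.3],      # mean rater judgment
    "accentedness":      [3.0, 5.5, 4.1, 7.2],      # mean rater judgment
})

predictors = ["speech_rate", "segmental_errors", "lexical_richness"]

# Correlational profile: which linguistic measures track which judgment?
for outcome in ["comprehensibility", "accentedness"]:
    print(outcome)
    print(speakers[predictors].corrwith(speakers[outcome]), "\n")
```

    Splitting the sample by proficiency band before profiling, as the abstract describes, would amount to running the same loop within each group.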

    Explicit and implicit aptitude effects on second language speech learning: scrutinizing segmental and suprasegmental sensitivity and performance via behavioural and neurophysiological measures

    The current study examines the role of cognitive and perceptual individual differences (i.e., aptitude) in second language (L2) pronunciation learning, when L2 learners’ varied experiential backgrounds are controlled for. A total of 48 Chinese learners of English in the UK were assessed for their sensitivity to segmental and suprasegmental aspects of speech in explicit and implicit modes via behavioural (language/music aptitude tests) and neurophysiological (electroencephalography) measures. Subsequently, the participants’ aptitude profiles were compared to the segmental and suprasegmental dimensions of their L2 pronunciation proficiency, analyzed through rater judgements and acoustic measurements. According to the results, the participants’ segmental attainment was associated not only with explicit aptitude (phonemic coding) but also with implicit aptitude (enhanced neural encoding of spectral peaks). While the participants’ suprasegmental attainment was linked to explicit aptitude (rhythmic imagery) to some degree, it was primarily influenced by the quality and quantity of their most recent L2 learning experience.