    Intonation words in initial intentional communication of Mandarin-speaking children

    Intonation words play a very important role in early childhood language development and serve as a crucial entry point for studying children’s language acquisition. Drawing on a natural conversation corpus, this paper examines the intentional communication scenes of five Mandarin-speaking children before the age of 1;05 (17 months). We found that the children produced a limited yet high-frequency set of intonation words such as “啊 [a], 哎 [æ], 欸 [ε], 嗯 [ən], 呃 [ə], eng [əŋ], 哦 [o], and 咦 [i].” These intonation words do not express the children’s emotional attitudes toward propositions or events; rather, they are used within the frameworks of imperative, declarative, and interrogative intents. The children employ non-verbal, multimodal means such as pointing, gesturing, and facial expressions to actively convey or receive commands, provide or receive information, and ask or respond to questions. The data suggest that the function of intonation words is essentially equivalent to that of holophrases, marking the initial stage of syntactic acquisition, a milestone in early syntactic development. Based on the cross-linguistic universality of intonation word acquisition and its continuity with pre-linguistic intentional vocalizations, this paper proposes that children’s syntax is initiated by the prosodic features of intonation. The paper also contends that intonation words, as the initial form of human vocal language in individual development, extend naturally from early babbling, emotional vocalizations, or vocal expressions used to convey intentions. They do not originate from spontaneous gesturing, which appears to have no necessary evolutionary relationship with the body postures that chimpanzees use to convey intentions, contrary to what existing research has suggested. Human vocal language and non-verbal multimodal means are two parallel and non-contradictory forms of communication, with no apparent evidence of the former inheriting from the latter.

    Emotion resonance and divergence: a semiotic analysis of music and sound in 'The Lost Thing', an animated short film, and 'Elizabeth', a film trailer

    The contributions that music and sound make to interpersonal meaning in film narratives may differ from, or resemble, the meanings made by language and image, and dynamic interactions between several modalities may generate new story messages. Such interpretive potentials of music and voice sound in motion pictures are rarely considered in social semiotic investigations of intermodality. This paper therefore shares two semiotic studies of distinct and combined music, English speech, and image systems in an animated short film and a promotional film trailer. The paper considers the impact of music and voice sound on interpretations of film narrative meanings. A music system relevant to the analysis of filmic emotion is proposed. Examples show how music and intonation contribute meaning to lexical, visual, and gestural elements of the cinematic spaces. Also described are relations of divergence and resonance between emotion types in various couplings of music, intonation, words, and images across story phases. The research is relevant to educational knowledge about sound and to semiotic studies of multimodality.

    Empathic Agent Technology (EAT)

    A new view on empathic agents is introduced, named Empathic Agent Technology (EAT). It incorporates speech analysis, which provides an indication of the amount of tension present in a speaker. It is founded on an indirect physiological measure of experienced stress, defined as the variability of the fundamental frequency of the human voice. A thorough review of the literature on which EAT is founded is provided. In addition, the complete processing line of this measure is introduced. Hence, the first generally applicable, completely automated technique is introduced that enables the development of truly empathic agents.
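    The abstract defines the stress measure as the variability of the voice's fundamental frequency (F0). A minimal sketch of the variability step, assuming a per-frame F0 track has already been produced by some pitch tracker (the tracker itself and the zero-marking of unvoiced frames are assumptions, not details from the abstract):

    ```python
    # Hedged sketch: stress proxy = variability of fundamental frequency (F0).
    # Assumes f0_track holds per-frame F0 estimates in Hz from an external
    # pitch tracker, with unvoiced frames marked as 0.
    from statistics import mean, stdev

    def f0_variability(f0_track):
        """Coefficient of variation of voiced F0 estimates (higher = more tension)."""
        voiced = [f for f in f0_track if f > 0]
        if len(voiced) < 2:
            return 0.0  # not enough voiced frames to measure variability
        return stdev(voiced) / mean(voiced)

    # Illustrative contours (synthetic): a flat contour vs. a more variable one.
    calm = [120, 121, 119, 120, 0, 120, 122]
    tense = [110, 150, 95, 170, 0, 130, 180]
    print(f0_variability(calm) < f0_variability(tense))  # True
    ```

    Normalizing the standard deviation by the mean makes the measure comparable across speakers with different baseline pitch; this normalization is one reasonable choice, not necessarily the one used in the paper.
    
    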

    17 ways to say yes: Toward nuanced tone of voice in AAC and speech technology

    People with complex communication needs who use speech-generating devices have very little expressive control over their tone of voice. Despite its importance in human interaction, however, the issue of tone of voice remains all but absent from AAC research and development. In this paper, we describe three interdisciplinary projects, past, present, and future: the critical design collection Six Speaking Chairs has provoked deeper discussion and inspired a social model of tone of voice; the speculative concept Speech Hedge illustrates challenges and opportunities in designing more expressive user interfaces; and the pilot project Tonetable could enable participatory research and seed a research network around tone of voice. We speculate that more radical interactions might expand the frontiers of AAC and disrupt speech technology as a whole.

    RRL: A Rich Representation Language for the Description of Agent Behaviour in NECA

    In this paper, we describe the Rich Representation Language (RRL), which is used in the NECA system. The NECA system generates interactions between two or more animated characters. The RRL is a formal framework for representing the information that is exchanged at the interfaces between the various NECA system modules.