
    The psychology of gestures and gesture-like movements in non-human primates

    Research into the gestural communication of nonhuman primates is often inspired by an interest in the evolutionary roots of human language. The focus on intentionally used behaviors is central to this approach, which aims to investigate the cognitive mechanisms characterizing gesture use in monkeys and apes. This chapter describes some of the key characteristics that are important in this context and discusses the evidence behind the claim that the gestures of nonhuman primates represent intentionally and flexibly used means of communication. The chapter first provides a brief introduction to what primates are and how a gesture is defined, before describing the psychological approach to gestural communication in more detail, with a focus on the cognitive mechanisms underlying gesture use in nonhuman primates.

    Increasing the expressiveness for virtual agents. Autonomous generation of speech and gesture for spatial description tasks

    Bergmann K, Kopp S. Increasing the expressiveness for virtual agents. Autonomous generation of speech and gesture for spatial description tasks. In: Decker KS, Sichman JS, Sierra C, Castelfranchi C, eds. Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2009). Ann Arbor, MI: IFAAMAS; 2009: 361-368. Embodied conversational agents are required to express themselves convincingly and autonomously. Based on an empirical study of spatial descriptions of landmarks in direction-giving, we present a model that allows virtual agents to automatically generate coordinated language and iconic gestures, i.e., to select their content and derive their form. Our model simulates the interplay between these two modes of expressiveness on two levels. First, two kinds of knowledge representation (propositional and imagistic) are utilized to capture the modality-specific contents and processes of content planning. Second, specific planners are integrated to carry out the formulation of concrete verbal and gestural behavior. A probabilistic approach to gesture formulation is presented that incorporates multiple contextual factors as well as idiosyncratic patterns in the mapping of visuo-spatial referent properties onto gesture morphology. Results from a prototype implementation are described.
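
    To make the probabilistic formulation step concrete, here is a minimal sketch in Python. The feature names, gesture techniques, and probabilities are illustrative assumptions, not values from Bergmann and Kopp's model; the sketch only shows how a shape-conditioned distribution can be weighted by an idiosyncratic speaker bias before sampling a gesture form.

import random

# Illustrative conditional distribution over gesture techniques given a
# visuo-spatial property of the referent (its dominant shape). All
# values here are invented for the sketch.
TECHNIQUE_GIVEN_SHAPE = {
    "round":     {"drawing": 0.5, "shaping": 0.3, "posturing": 0.2},
    "elongated": {"drawing": 0.6, "shaping": 0.2, "posturing": 0.2},
}

# Hypothetical idiosyncratic speaker bias, multiplied into the distribution.
SPEAKER_BIAS = {"drawing": 1.2, "shaping": 1.0, "posturing": 0.8}

def choose_technique(shape: str) -> str:
    """Sample a gesture technique for a referent shape, weighting the
    shape-conditioned distribution by the speaker's idiosyncratic bias."""
    dist = TECHNIQUE_GIVEN_SHAPE[shape]
    weighted = {t: p * SPEAKER_BIAS[t] for t, p in dist.items()}
    total = sum(weighted.values())
    r, acc = random.random() * total, 0.0
    for technique, w in weighted.items():
        acc += w
        if r <= acc:
            return technique
    return technique  # numerical edge case: return the last technique

print(choose_technique("round"))  # e.g. "drawing"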

    Gesture meaning needs speech meaning to denote – A case of speech-gesture meaning interaction

    We address a so far untreated issue in debates about linguistic interaction, namely a particular multi-modal dimension of meaning dependence. We argue that the shape interpretation of speech-accompanying iconic gestures depends on their co-occurring speech. Since there is no prototypical solution for modeling such a dependence, we offer an approach that computes a gesture's meaning as a function of its speech context.
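
    A minimal sketch of the dependence the abstract argues for: the same gesture form receives different interpretations depending on the co-occurring speech. The lexicon, referents, and readings below are invented for illustration and do not reproduce the authors' formal account.

# Underspecified gesture meaning: a circular trajectory can depict
# different object attributes depending on what the speech is about.
# All entries are hypothetical examples.
GESTURE_READINGS = {
    "circular_trajectory": {
        "window": "round outline",
        "church": "dome-shaped roof",
        "roundabout": "circular road layout",
    }
}

def gesture_meaning(gesture_form: str, speech_referent: str) -> str:
    """Return the gesture's interpretation as a function of the speech
    context; fall back to the bare form if speech gives no anchor."""
    readings = GESTURE_READINGS.get(gesture_form, {})
    return readings.get(speech_referent, f"unresolved: {gesture_form}")

print(gesture_meaning("circular_trajectory", "church"))  # "dome-shaped roof"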

    Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications

    LĂŒcking A, Bergmann K, Hahn F, Kopp S, Rieser H. Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications. Journal on Multimodal User Interfaces. 2013;7(1-2):5-18. Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. Then we go into some of the projects carried out using SaGA, demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, on individual and contextual parameters influencing gesture production, and on gestures’ functions for dialogue structure. Speech-gesture interfaces have been established extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitutes a research line we focus on.
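
    As an illustration of the kind of primary and secondary data such a corpus aligns, here is a hypothetical record structure in Python. The field names and example values are assumptions made for this sketch; SaGA's actual annotation scheme is considerably richer (gesture phases, handedness, reliability codings, etc.).

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class GestureAnnotation:
    start_ms: int          # gesture stroke onset in the video
    end_ms: int            # gesture stroke offset
    gesture_type: str      # e.g. "iconic", "deictic"
    morphology: Dict[str, str] = field(default_factory=dict)  # handshape, movement, ...

@dataclass
class AlignedUnit:
    dialogue_id: str
    speech: str            # transcribed speech span
    speech_start_ms: int
    speech_end_ms: int
    gesture: Optional[GestureAnnotation] = None  # None for speech-only spans

# Hypothetical example record, not an actual SaGA entry.
unit = AlignedUnit(
    dialogue_id="V5",
    speech="the church with the round tower",
    speech_start_ms=12030,
    speech_end_ms=13840,
    gesture=GestureAnnotation(12400, 13100, "iconic",
                              {"handshape": "C", "movement": "circular"}),
)
print(unit.gesture.gesture_type)  # "iconic"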

    Using the Journalistic Metaphor to Design User Interfaces That Explain Sensor Data

    Facilitating general access to data from sensor networks (including traffic, hydrology and other domains) increases their utility. In this paper we argue that the journalistic metaphor can be effectively used to automatically generate multimedia presentations that help non-expert users analyze and understand sensor data. The journalistic layout and style are familiar to most users. Furthermore, the journalistic approach of ordering information from most general to most specific helps users obtain a high-level understanding while giving them the freedom to choose the depth of analysis they want to pursue. We describe the general characteristics and architectural requirements of an interactive intelligent user interface for exploring sensor data that uses the journalistic metaphor. We also describe our experience in developing this interface in real-world domains (e.g., hydrology).
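
    The core of the journalistic ordering can be shown with a small sketch: presentation items sorted from most general to most specific, the "inverted pyramid" that lets users stop at whatever depth they need. The items and specificity levels below are invented for illustration, not taken from the paper.

# Hypothetical presentation items with an assumed specificity score
# (0 = most general).
ITEMS = [
    {"headline": "River level above seasonal average", "specificity": 0},
    {"headline": "Station A: 3.2 m at 06:00", "specificity": 2},
    {"headline": "Upstream basin trending upward", "specificity": 1},
    {"headline": "Station A hourly readings (table)", "specificity": 3},
]

def inverted_pyramid(items):
    """Order presentation items from most general to most specific."""
    return sorted(items, key=lambda it: it["specificity"])

for item in inverted_pyramid(ITEMS):
    print(item["headline"])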

    Combining Text and Graphics for Interactive Exploration of Behavior Datasets

    Modern sensor technologies and simulators applied to large and complex dynamic systems (such as road traffic networks, sets of river channels, etc.) produce large amounts of behavior data that are difficult for users to interpret and analyze. Software tools that generate presentations combining text and graphics can help users understand these data. In this paper we describe the results of our research on automatic multimedia presentation generation (including text, graphics, maps, images, etc.) for interactive exploration of behavior datasets. We designed a novel user interface that combines automatically generated textual and graphical resources. We describe the general knowledge-based design of our presentation generation tool. We also present applications that we developed to validate the method, and a comparison with related work.
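
    One step such a knowledge-based generator must perform is media allocation: deciding whether a given data item is best rendered as text, a chart, or a map. The rules below are an illustrative sketch under assumed item properties, not the authors' actual knowledge base.

def choose_medium(item: dict) -> str:
    """Pick a presentation medium from simple properties of the item.
    The property names (geospatial, is_time_series, n_points) are
    assumptions for this sketch."""
    if item.get("geospatial"):
        return "map"
    if item.get("is_time_series") and item.get("n_points", 0) > 20:
        return "chart"
    return "text"

# Hypothetical behavior-data items from a hydrology scenario.
observations = [
    {"name": "flow at gauge 12", "is_time_series": True, "n_points": 96},
    {"name": "sensor status", "is_time_series": False},
    {"name": "affected road segments", "geospatial": True},
]
for obs in observations:
    print(obs["name"], "->", choose_medium(obs))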

    Multi-modal meaning – An empirically-founded process algebra approach

    Humans communicate with different modalities. We offer an account of multi-modal meaning coordination, taking speech-gesture meaning coordination as a prototypical case. We argue that temporal synchrony (plus prosody) does not determine how to coordinate speech meaning and gesture meaning. Challenging cases are asynchrony and broadcasting, which we illustrate with empirical data. We propose that a process algebra account satisfies the desiderata. It models gesture and speech as independent but concurrent processes that can communicate flexibly with each other and exchange the same information more than once. The account utilizes the psi-calculus, allowing for agents, input/output channels, concurrent processes, and the transport of data in the form of typed lambda terms. A multi-modal meaning is produced by integrating speech meaning and gesture meaning into one semantic package. Two cases of meaning coordination are handled in some detail: the asynchrony between gesture and speech, and the broadcasting of gesture meaning across several dialogue contributions. The account can be generalized to other cases of multi-modal meaning.
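
    The process-algebra picture can be sketched with ordinary concurrency primitives: speech and gesture as independent concurrent processes exchanging messages over a channel, with an integration step that packages both meanings. The sketch below uses Python's asyncio queues in place of psi-calculus channels; the timings and message contents (including the repeated broadcast of the gesture meaning) are illustrative assumptions, not the authors' formalism.

import asyncio

async def speech_process(channel: asyncio.Queue):
    await asyncio.sleep(0.02)                    # speech lags the gesture here
    await channel.put(("speech", "the church"))

async def gesture_process(channel: asyncio.Queue):
    # Broadcasting: the same gesture meaning is sent more than once, so it
    # remains available across several dialogue contributions.
    for _ in range(2):
        await channel.put(("gesture", "dome-shaped"))
        await asyncio.sleep(0.01)

async def integrate(channel: asyncio.Queue) -> str:
    """Wait until both modalities have contributed, then package them
    into one multimodal meaning."""
    parts = {}
    while not {"speech", "gesture"} <= parts.keys():
        modality, meaning = await channel.get()
        parts[modality] = meaning
    return f"{parts['speech']} ({parts['gesture']})"

async def main():
    channel: asyncio.Queue = asyncio.Queue()
    speech = asyncio.create_task(speech_process(channel))
    gesture = asyncio.create_task(gesture_process(channel))
    meaning = await integrate(channel)
    await asyncio.gather(speech, gesture)
    print(meaning)  # "the church (dome-shaped)"

asyncio.run(main())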

    Using parameterised semantics for speech-gesture integration

    Klein U, Rieser H, Hahn F, Lawler I. Using parameterised semantics for speech-gesture integration. Presented at: Investigating Semantics - Empirical and Philosophical Approaches; Bochum.