12 research outputs found
Situierte Generierung deiktischer Objektreferenz in der multimodalen Mensch-Maschine-Interaktion
Kranstedt A. Situierte Generierung deiktischer Objektreferenz in der multimodalen Mensch-Maschine-Interaktion. DISKI 313. Berlin: Aka; 2008.
Incremental generation of multimodal deixis referring to objects
Kranstedt A, Wachsmuth I. Incremental generation of multimodal deixis referring to objects. In: Proceedings of the 10th European Workshop on Natural Language Generation (ENLG-05). 2005: 75-82. This paper describes an approach for the generation of multimodal deixis to be uttered by an anthropomorphic agent in virtual reality. The proposed algorithm integrates pointing and definite description; in doing so, the context-dependent discriminatory power of the gesture determines the content selection for the verbal constituent. The concept of a pointing cone is used to model the region singled out by a pointing gesture and to distinguish two referential functions called object-pointing and region-pointing.
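The pointing cone invites a simple geometric reading. The following Python sketch is purely illustrative and not the authors' algorithm: it models the cone by an apex (hand position), a direction, and an aperture half-angle, and classifies a pointing act as object-pointing when exactly one candidate referent falls inside the cone and as region-pointing otherwise; all names and the angle threshold are assumptions made for this example.

    # Illustrative sketch of a pointing cone; not taken from the cited paper.
    import math

    def in_cone(apex, direction, half_angle_deg, obj):
        """True if obj lies inside the cone given by apex, direction and half-angle."""
        v = [obj[i] - apex[i] for i in range(3)]
        nv = math.sqrt(sum(c * c for c in v))
        nd = math.sqrt(sum(c * c for c in direction))
        if nv == 0.0 or nd == 0.0:
            return True
        cos_angle = sum(v[i] * direction[i] for i in range(3)) / (nv * nd)
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= half_angle_deg

    def classify_pointing(apex, direction, half_angle_deg, candidates):
        """Object-pointing if the cone singles out one referent, region-pointing otherwise."""
        hits = [o for o in candidates if in_cone(apex, direction, half_angle_deg, o)]
        return ("object-pointing" if len(hits) == 1 else "region-pointing", hits)

Under such a reading, a verbal description accompanying a region-pointing act would only need to discriminate among the objects inside the cone, which matches the idea that the gesture's discriminatory power determines the content selection for the verbal constituent.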
Towards a cognitively motivated processing of turn-taking signals for the embodied conversational agent Max
Leßmann N, Kranstedt A, Wachsmuth I. Towards a cognitively motivated processing of turn-taking signals for the embodied conversational agent Max. In: Proceedings Workshop Embodied Conversational Agents: Balanced Perception and Action. New York: IEEE Computer Society; 2004: 57-64. Max is a human-size conversational agent that employs synthetic speech, gesture, gaze, and facial display to act in cooperative construction tasks taking place in immersive virtual reality. In the mixed-initiative dialogs involved in our research scenario, turn-taking abilities and dialog competences play a crucial role for Max to appear as a convincing multimodal communication partner. How these abilities rely on Max’s perception of the user and, in particular, how turn-taking signals are handled in the agent’s cognitive architecture is the focus of this paper.
Situated generation of multimodal deixis in task-oriented dialogue
Kranstedt A, Wachsmuth I. Situated generation of multimodal deixis in task-oriented dialogue. In: Belz A, Evans R, Piwek P, eds. Extended abstracts of posters presented at the Third International Conference on Natural Language Generation. Technical Report No. ITRI-04-01. Brighton, UK: University of Brighton; 2004: 20-23. This poster describes ongoing work concerning the generation of multimodal utterances, animated and visualized with the anthropomorphic agent Max. Max is a conversational agent that collaborates in cooperative construction tasks taking place in immersive virtual reality, realized in a three-sided CAVE-like installation. Max is able to produce synchronized output involving synthetic speech, facial display, and gesture from descriptions of their surface form [Kopp and Wachsmuth, 2004]. Focusing on deixis, it is shown here how the influence of situational characteristics in face-to-face conversation can be accounted for in the automatic generation of such descriptions in multimodal dialogue.
MURML: A Multimodal Utterance Representation Markup Language for Conversational Agents
Kranstedt A, Kopp S, Wachsmuth I. MURML: A Multimodal Utterance Representation Markup Language for Conversational Agents. In: AAMAS'02 Workshop Embodied conversational agents - let's specify and evaluate them! 2002. This paper presents work on an artificial anthropomorphic agent with multimodal interaction abilities. It focuses on the development of a markup language, MURML, that bridges between the planning and the animation tasks in the production of multimodal utterances. This hierarchically structured notation provides flexible means of describing gestures in a form-based way and of explicitly expressing their relations to accompanying speech.
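MURML is an XML notation; since the abstract only outlines its idea, the following Python/ElementTree snippet is a hypothetical sketch of such a hierarchical, form-based structure, in which a time marker in the speech string anchors an affiliated gesture described by its form features. The tag and attribute names are invented for illustration and do not reproduce the actual MURML schema.

    # Hypothetical MURML-like structure; element names are illustrative only.
    import xml.etree.ElementTree as ET

    utterance = ET.Element("utterance")
    speech = ET.SubElement(utterance, "specification")
    speech.text = "This "
    marker = ET.SubElement(speech, "time", id="t1")   # anchor point in the speech stream
    marker.tail = "bar belongs over there."

    gesture = ET.SubElement(utterance, "gesture", affiliate="t1")  # gesture tied to marker t1
    stroke = ET.SubElement(gesture, "stroke")
    ET.SubElement(stroke, "handshape", value="index-extended")     # form-based description
    ET.SubElement(stroke, "direction", value="toward-referent")

    print(ET.tostring(utterance, encoding="unicode"))

The point of such a notation is that the planning stage can state what the hands should look like and when, relative to the accompanying speech, while leaving the details of the animation to the realization engine; this is the bridging role the paper attributes to MURML.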
Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer
Pfeiffer T, Kranstedt A, Lücking A. Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer. In: Müller S, Zachmann G, eds. Dritter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR. Berichte aus der Informatik. Aachen: Shaker; 2006: 61-72. The empirical study of situated natural human communication depends on the acquisition and analysis of extensive data. The modalities through which humans can express themselves differ widely, and the representations in which the collected data can be made available for analysis are correspondingly heterogeneous. For a study of pointing behaviour in object reference we developed IADE, a framework for recording, analysing, and simulating speech-gesture data. By employing techniques from interactive VR, IADE allows the synchronized recording of motion, video, and audio data and supports an iterative analysis process of the acquired data through convenient integrated revisualizations and simulations. IADE thus constitutes a decisive advance for our experimental methodology in linguistics.
Deictic object reference in task-oriented dialogue
Kranstedt A, Lücking A, Pfeiffer T, Rieser H, Wachsmuth I. Deictic object reference in task-oriented dialogue. In: Rickheit G, Wachsmuth I, eds. Situated Communication. Berlin: Mouton de Gruyter; 2006: 155-208. This chapter presents an original approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction, integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we take steps to classify the role of pointing in deictic expressions and to model the focussed area of pointing gestures, the so-called pointing cone. This pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.