Collaborating on Referring Expressions
This paper presents a computational model of how conversational participants
collaborate in order to make a referring action successful. The model is based
on the view of language as goal-directed behavior. We propose that the content
of a referring expression can be accounted for by the planning paradigm. Not
only does this approach allow the processes of building referring expressions
and identifying their referents to be captured by plan construction and plan
inference, it also allows us to account for how participants clarify a
referring expression by using meta-actions that reason about and manipulate the
plan derivation that corresponds to the referring expression. To account for
how clarification goals arise and how inferred clarification plans affect the
agent, we propose that the agents are in a certain state of mind, and that this
state includes an intention to achieve the goal of referring and a plan that
the agents are currently considering. It is this mental state that sanctions
the adoption of goals and the acceptance of inferred plans, and so acts as a
link between understanding and generation. Comment: 32 pages, 2 figures, to appear in Computational Linguistics 21-
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction, and motivation towards fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis both of recent as well as of future research on human-robot
communication. Then, the ten desiderata are examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
How Do I Address You? Modelling addressing behavior based on an analysis of multi-modal corpora of conversational discourse
Addressing is a special kind of referring, so principles of multi-modal referring expression generation are also basic to the generation of address terms and addressing gestures for conversational agents. What makes addressing special is the different role that the referent has in the interaction: second person rather than object. Based on an analysis of addressing behaviour in multi-party face-to-face conversations (meetings, TV discussions, and theatre plays), we present the outlines of a model for generating multi-modal verbal and non-verbal addressing behaviour for agents in multi-party interactions.
Salience and pointing in multimodal reference
Pointing combined with verbal referring is one of the most paradigmatic human multimodal behaviours. The aim of this paper is foundational: to uncover the central notions that are required for a computational model of human-generated multimodal referring acts. The paper draws on existing work on the generation of referring expressions and shows that in order to extend that work with pointing, the notion of salience needs to play a pivotal role. The paper investigates the role of salience in the generation of referring expressions and introduces a distinction between two opposing approaches: salience-first and salience-last accounts. The paper then argues that these differ not only in computational efficiency, as has been pointed out previously, but also make incompatible empirical predictions. The second half of the paper shows how a salience-first account meshes nicely with a range of existing empirical findings on multimodal reference. A novel account of the circumstances under which speakers choose to point is proposed that directly links salience with pointing. Finally, a multidimensional model of salience is proposed to flesh this model out.
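The salience-first idea described in this abstract can be pictured with a small sketch (the toy domain, attribute names, and threshold below are our own illustration, not the paper's model): the generator first restricts the distractor set to sufficiently salient objects, then selects distinguishing attributes, and falls back on pointing only when the description alone cannot single out the referent.

```python
# A toy sketch of a salience-first referring act generator.
# The domain, attribute names, and the threshold are illustrative only.

def salience_first_refer(target, domain, salience, threshold=0.5):
    """Return (description, point) for referring to `target`.

    domain: dict mapping object id -> dict of attribute/value pairs
    salience: dict mapping object id -> salience score in [0, 1]
    """
    # Step 1: restrict distractors to sufficiently salient objects.
    distractors = {o for o in domain
                   if o != target and salience[o] >= threshold}
    # Step 2: greedily add attributes that rule out remaining distractors.
    description = {}
    for attr, value in domain[target].items():
        ruled_out = {o for o in distractors
                     if domain[o].get(attr) != value}
        if ruled_out:
            description[attr] = value
            distractors -= ruled_out
        if not distractors:
            break
    # Step 3: point only when the description is not yet distinguishing.
    point = bool(distractors)
    return description, point

domain = {
    "d1": {"type": "mug", "colour": "red"},
    "d2": {"type": "mug", "colour": "blue"},
    "d3": {"type": "mug", "colour": "red"},
}
salience = {"d1": 0.9, "d2": 0.8, "d3": 0.2}
desc, point = salience_first_refer("d1", domain, salience)
```

In this toy example "the red mug" suffices without pointing, because the other red mug (d3) is below the salience threshold and never enters the distractor set; a salience-last account would have had to distinguish d1 from d3 as well.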
A generic architecture and dialogue model for multimodal interaction
This paper presents a generic architecture and a dialogue model for multimodal interaction. Architecture and model are transparent and have been used for different task domains. In this paper the emphasis is on their use for the navigation task in a virtual environment. The dialogue model is based on the information state approach and the recognition of dialogue acts. We explain how pairs of backward- and forward-looking tags and the preference rules of the dialogue act determiner together determine the structure of the dialogues that can be handled by the system. The system's action selection mechanism and the problem of reference resolution are discussed in detail.
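A minimal sketch of the kind of dialogue act determiner this abstract describes (the tag names, matching heuristics, and preference order below are invented for illustration, not the system's actual rules): each utterance receives a backward-looking tag relating it to the previous turn and a forward-looking tag constraining what may follow, and preference rules select among competing candidate pairs.

```python
# Toy dialogue act determiner: each utterance gets a backward-looking
# tag (relating it to the previous turn) and a forward-looking tag
# (constraining what may come next). Preference rules pick one pair
# when several candidates match. All tags and rules are illustrative.

PREFERENCE = ["answer", "accept", "instruct", "query", "statement"]

def candidate_acts(utterance, prev_forward):
    """Return candidate (backward, forward) tag pairs for an utterance."""
    text = utterance.lower()
    cands = []
    if prev_forward == "query" and ("yes" in text or "no" in text):
        cands.append(("answer", "statement"))
    if text.endswith("?"):
        cands.append(("query", "query"))  # forward tag: an answer is expected
    if text.startswith(("go", "turn", "stop")):
        cands.append(("instruct", "statement"))
    cands.append(("statement", "statement"))  # fallback reading
    return cands

def determine_act(utterance, prev_forward):
    """Pick the most preferred backward tag among the candidates."""
    cands = candidate_acts(utterance, prev_forward)
    return min(cands, key=lambda pair: PREFERENCE.index(pair[0]))

# The tag pairs determine dialogue structure: a "query" forward tag on
# the previous turn makes "answer" the preferred reading of this one.
act = determine_act("yes, the left corridor", prev_forward="query")
```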
Collaboration on reference to objects that are not mutually known
In conversation, a person sometimes has to refer to an object that is not
previously known to the other participant. We present a plan-based model of how
agents collaborate on reference of this sort. In making a reference, an agent
uses the most salient attributes of the referent. In understanding a reference,
an agent determines his confidence in its adequacy as a means of identifying
the referent. To collaborate, the agents use judgment, suggestion, and
elaboration moves to refashion an inadequate referring expression. Comment: 6
pages, to appear in the proceedings of COLING-94, LaTeX (now uses
fullname.sty, fullname.bst)
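The confidence judgment this abstract describes can be pictured with a crude sketch (the scoring and the move names below are our own simplification of the judgment, suggestion, and elaboration moves, not the authors' implementation): the hearer checks how many known objects a description fits and chooses a refashioning move when confidence is low.

```python
# Toy model of judging a referring expression's adequacy: the hearer
# counts how many known objects match the description, and confidence
# is high only when exactly one does. Scoring and move names are
# illustrative simplifications, not the paper's actual model.

def matches(obj, description):
    """True if `obj` carries every attribute/value in `description`."""
    return all(obj.get(attr) == val for attr, val in description.items())

def judge(description, known_objects):
    """Return (confidence, move) for an incoming referring expression."""
    hits = [o for o in known_objects if matches(o, description)]
    if len(hits) == 1:
        return "high", "accept"
    if len(hits) > 1:
        # Ambiguous: ask the speaker to elaborate with more attributes.
        return "low", "request-elaboration"
    # No match at all: suggest the closest candidate instead.
    return "low", "suggest-alternative"

known = [
    {"type": "building", "colour": "grey", "height": "tall"},
    {"type": "building", "colour": "grey", "height": "short"},
]
confidence, move = judge({"type": "building", "colour": "grey"}, known)
```

Here "the grey building" matches two known objects, so the hearer's confidence is low and an elaboration move is triggered rather than acceptance.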
A Flexible pragmatics-driven language generator for animated agents
This paper describes the NECA MNLG, a fully implemented Multimodal Natural Language Generation module. The MNLG is deployed as part of the NECA system, which generates dialogues between animated agents. The generation module supports the seamless integration of full grammar rules, templates and canned text. The generator takes input which allows for the specification of syntactic, semantic and pragmatic constraints on the output.
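The three levels of output specification mentioned here (full grammar rules, templates, canned text) can be pictured with a small sketch; the interfaces, names, and fallback order below are invented for illustration and do not reflect the NECA MNLG's actual API.

```python
# Toy generator mixing canned text, templates, and a (stub) grammar
# rule, tried in that order. Names and interfaces are invented for
# illustration; they do not reflect the NECA MNLG.

CANNED = {"greet": "Hello there!"}
TEMPLATES = {"inform-price": "The {item} costs {price} euros."}

def realize_with_grammar(act, args):
    # Stand-in for full grammar-based realization: here just a plain
    # subject-verb-object clause assembled from the arguments.
    return f"{args['agent']} {args['verb']}s {args['object']}."

def generate(act, args=None):
    """Realize a dialogue act, preferring canned text, then templates,
    then grammar-based generation."""
    args = args or {}
    if act in CANNED:
        return CANNED[act]
    if act in TEMPLATES:
        return TEMPLATES[act].format(**args)
    return realize_with_grammar(act, args)

out = generate("inform-price", {"item": "ticket", "price": 12})
```

The point of the layering is that fixed phrases stay cheap and fluent while novel content still gets full grammatical treatment; in this sketch, `generate("greet")` returns the canned string untouched.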
Reference and Gestures in Dialogue Generation: Three Studies with Embodied Conversational Agents
This paper reports on three studies into social presence cues that were carried out in the context of the NECA (Net-environment for Embodied Emotional Conversational Agents) project and the EPOCH network. The first study concerns the generation of referring expressions. We adapted an existing algorithm for generating referring expressions such that it could run according to an egocentric and a neutral strategy. In an evaluation study, we found that the two strategies were correlated with the perceived friendliness of the speaker. In the second and third studies, we evaluated the gestures that were generated by the NECA system. In this paper, we briefly summarize the most salient results of these two studies. They concern the effect of gestures on the perceived quality of speech and on information retention.
From Monologue to Dialogue: Natural Language Generation in OVIS
This paper describes how a language generation system that was originally designed for monologue generation has been adapted for use in the OVIS spoken dialogue system. To meet the requirement that in a dialogue the system's utterances should make up a single, coherent dialogue turn, several modifications had to be made to the system. The paper also discusses the influence of dialogue context on information status, and its consequences for the generation of referring expressions and accentuation.