ANGELICA: choice of output modality in an embodied agent
The ANGELICA project addresses the problem of modality choice in information presentation by embodied, humanlike agents. The output modalities available to such agents include both language and various nonverbal signals such as pointing and gesturing. For each piece of information to be presented by the agent it must be decided whether it should be expressed using language, a nonverbal signal, or both. In the ANGELICA project a model of the different factors influencing this choice will be developed and integrated in a natural language generation system. The application domain is the presentation of route descriptions by an embodied agent in a 3D environment. Evaluation and testing form an integral part of the project. In particular, we will investigate the effect of different modality choices on the effectiveness and naturalness of the generated presentations and on the user's perception of the agent's personality
From Monologue to Dialogue: Natural Language Generation in OVIS
This paper describes how a language generation system that was originally designed for monologue generation has been adapted for use in the OVIS spoken dialogue system. To meet the requirement that in a dialogue, the system's utterances should make up a single, coherent dialogue turn, several modifications had to be made to the system. The paper also discusses the influence of dialogue context on information status, and its consequences for the generation of referring expressions and accentuation
Cueing the Virtual Storyteller: Analysis of cue phrase usage in fairy tales
An existing taxonomy of Dutch cue phrases, designed for use in story generation, was validated by analysing cue phrase usage in a corpus of classical fairy tales. The analysis led to some adaptations of the original taxonomy
Using Out-of-Character Reasoning to Combine Storytelling and Education in a Serious Game
To reconcile storytelling and educational meta-goals in the context of a serious game, we propose to make use of out-of-character reasoning in virtual agents. We will implement these agents in a serious game of our design, which will focus on social interaction in conflict scenarios with the meta-goal of improving social awareness of users. The agents will use out-of-character reasoning to manage conflicts by assuming different in-character personalities or by planning to take specific actions based on interaction with the users. In-character reasoning is responsible for the storytelling concerns of character believability and consistency. These are not endangered by out-of-character reasoning, as it takes in-character information into account when making decisions
Generating Instructions in a 3D Game Environment: Efficiency or Entertainment?
The GIVE Challenge was designed for the evaluation of natural language generation (NLG) systems. It involved the automatic generation of instructions for users in a 3D environment. In this paper we introduce two NLG systems that we developed for this challenge. One system focused on generating optimally helpful instructions while the other focused on entertainment. We used the data gathered in the Challenge to compare the efficiency and entertainment value of both systems. We found a clear difference in efficiency, but were unable to prove that one system was more entertaining than the other. This could be explained by the fact that the set-up and evaluation methods of the GIVE Challenge were not aimed at entertainment
The Virtual Storyteller: story generation by simulation
The Virtual Storyteller is a multi-agent framework that generates stories based on a concept called emergent narrative. In this paper, we describe the motivation and approach of the Virtual Storyteller, and give an overview of the computational processes involved in the story generation process. We also discuss some of the challenges posed by our chosen approach
Natural Language Generation for dialogue: system survey
Many natural language dialogue systems make use of 'canned text' for output generation. This approach may be sufficient for dialogues in restricted domains where system utterances are short and simple and use fixed expressions (e.g., slot filling dialogues in the ticket reservation or travel information domain); but for more sophisticated dialogues (e.g., tutoring dialogues) a more advanced generation method is required. In such dialogues, the system utterances should be produced in a context-sensitive fashion, for instance by pronominalising anaphoric references, and by using more or less elaborate wording depending on the state of the dialogue, the expertise of the user, etc. In the case of spoken dialogues, it is very useful if the natural language generation component can provide information that is relevant for determining the prosody of the speech output. Similarly, for use in embodied agents it is useful if the generation component can provide information about the facial and body movements that should accompany the language being produced by the agent. Clearly, it will be extremely difficult to achieve all this using simple string manipulation, so a more flexible and context-sensitive generation method is required. This report discusses some of the possibilities for the sophisticated generation of system utterances in a dialogue system. The basic assumption is that this task is performed by a separate language generation module, which takes as its input a message specification produced by a dialogue planner and transforms this message into an expression in natural language. Part I of this report provides a general discussion of different methods for performing this task, and outlines some requirements on language generation systems that might be used for this purpose. Part II gives an overview of publicly available language generation systems, and discusses to what extent they meet the previously stated requirements
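The contrast between canned text and context-sensitive generation can be sketched as follows. This is a minimal illustrative example, not code from any of the systems surveyed; the function names and the travel-information wording are hypothetical:

```python
# Hypothetical sketch: canned text vs. context-sensitive output generation.
# A canned/template approach ignores dialogue context; a context-sensitive
# one can, for instance, pronominalise a referent that is already salient.

def canned_response(destination: str, time: str) -> str:
    # Fixed wording, regardless of what was said before in the dialogue.
    return f"The train to {destination} departs at {time}."

def context_sensitive_response(destination: str, time: str,
                               mentioned: set) -> str:
    # Use a pronoun for an anaphoric reference to an already-mentioned entity.
    if destination in mentioned:
        ref = "It"
    else:
        ref = f"The train to {destination}"
        mentioned.add(destination)
    return f"{ref} departs at {time}."

mentioned: set = set()
print(context_sensitive_response("Amsterdam", "10:15", mentioned))
# First mention: full description ("The train to Amsterdam departs at 10:15.")
print(context_sensitive_response("Amsterdam", "10:30", mentioned))
# Follow-up turn: pronominalised ("It departs at 10:30.")
```

A real generation module would of course track salience, user expertise, and prosodic markup rather than a bare set of mentioned entities; the sketch only shows why string templates alone cannot deliver context-sensitive output.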
GoalGetter: predicting contrastive accent in data-to-speech generation
This paper addresses the problem of predicting contrastive accent in spoken language generation. The common strategy of accenting 'new' and deaccenting 'old' information is not sufficient to achieve correct accentuation: generation of contrastive accent is required as well. I will discuss a few approaches to the prediction of contrastive accent, and propose a practical solution which avoids the problems these approaches are faced with. These issues are discussed in the context of GoalGetter, a data-to-speech system which generates spoken reports of football matches on the basis of tabular information
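The given/new accentuation strategy and the contrastive exception to it can be illustrated with a small sketch. This is a simplified, hypothetical illustration rather than GoalGetter's actual accent-prediction algorithm; the word lists and the notion of a contrast set are assumptions for the example:

```python
# Hypothetical simplification of accent assignment: accent 'new' words,
# deaccent 'given' ones, but re-accent a given word when it stands in
# contrast to an alternative (the contrastive-accent case discussed above).

def assign_accents(words, given, contrast_set):
    """Return (word, accented) pairs: a word is accented if new or contrastive."""
    return [(w, w not in given or w in contrast_set) for w in words]

# Second sentence of a match report: every word has been mentioned before
# ('given'), yet 'second' must still be accented because it contrasts
# with the earlier 'first' (half).
words = ["he", "scored", "in", "the", "second", "half"]
given = set(words) | {"first"}
result = assign_accents(words, given, contrast_set={"first", "second"})
print([w for w, accented in result if accented])
# ['second']
```

A pure given/new rule would leave the whole sentence deaccented here, which is exactly the failure the contrastive-accent mechanism is meant to repair.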
Which way to turn? Guide orientation in virtual way finding
In this paper we describe an experiment aimed at determining the most effective and natural orientation of a virtual guide that gives route directions in a 3D virtual environment. We hypothesized that, due to the presence of mirrored gestures, having the route provider directly face the route seeker would result in a less effective and less natural route description than having the route provider adapt his orientation to that of the route seeker. To compare the effectiveness of the different orientations, after having received a route description the participants in our experiment had to ‘virtually’ traverse the route using prerecorded route segments. The results showed no difference in effectiveness between the two orientations, but suggested that the orientation where the speaker directly faces the route seeker is more natural