161 research outputs found

    Integrating Discourse Markers into a Pipelined Natural Language Generation Architecture

    Pipelined Natural Language Generation (NLG) systems have grown increasingly complex as architectural modules have been added to support language functionalities such as referring expressions, lexical choice, and revision. This has given rise to discussions about the relative placement of these new modules in the overall architecture.
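
    As a hedged illustration of the kind of pipeline at issue, the sketch below wires a content planner, a dedicated discourse-marker module, and a surface realiser in sequence; the module boundaries, data structures, and marker lexicon are invented for illustration and are not taken from the paper.

        # Minimal sketch of a pipelined NLG architecture with a discourse-marker
        # module placed between content planning and realisation. All names and
        # data shapes here are illustrative assumptions, not the paper's design.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Message:
            proposition: str                 # abstract content to express
            relation: Optional[str] = None   # rhetorical relation to the previous message
            marker: Optional[str] = None     # discourse marker chosen for the relation

        def plan_content(goal: str) -> list:
            # Content planner: map a communicative goal to ordered messages.
            return [Message("the pump failed"),
                    Message("the line was shut down", relation="cause")]

        def insert_discourse_markers(messages: list) -> list:
            # The discourse-marker module: pick a marker from the rhetorical relation.
            markers = {"cause": "as a result", "contrast": "however"}
            for m in messages:
                if m.relation:
                    m.marker = markers.get(m.relation)
            return messages

        def realise(messages: list) -> str:
            # Surface realiser: linearise the messages into text.
            parts = [(f"{m.marker}, " if m.marker else "") + m.proposition
                     for m in messages]
            return ". ".join(p[0].upper() + p[1:] for p in parts) + "."

        print(realise(insert_discourse_markers(plan_content("explain the shutdown"))))
        # -> The pump failed. As a result, the line was shut down.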

    Fully generated scripted dialogue for embodied agents

    This paper presents the NECA approach to the generation of dialogues between Embodied Conversational Agents (ECAs). This approach consists of the automated construction of an abstract script for an entire dialogue (cast in terms of dialogue acts), which is incrementally enhanced by a series of modules and finally "performed", by means of text, speech, and body language, by a cast of ECAs. The approach makes it possible to automatically produce a large variety of highly expressive dialogues, some of whose essential properties are under the control of a user. The paper discusses the advantages and disadvantages of NECA's approach to Fully Generated Scripted Dialogue (FGSD) and explains the main techniques used in the two demonstrators that were built. The paper can be read as a survey of issues and techniques in the construction of ECAs, focusing on the generation of behaviour (i.e., on information presentation) rather than on interpretation.
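
    A minimal sketch of the "generate the whole script, then enhance it" idea follows; the dialogue-act inventory, module names, and fields are assumptions made for illustration, not the NECA implementation.

        # Sketch of Fully Generated Scripted Dialogue: an abstract script of
        # dialogue acts is built for the entire dialogue up front, then
        # incrementally enhanced by a series of modules. Illustrative only.
        from dataclasses import dataclass

        @dataclass
        class DialogueAct:
            speaker: str
            act: str           # e.g. "greet", "inform"
            content: str
            text: str = ""     # filled in by the text-realisation module
            gesture: str = ""  # filled in by the body-language module

        def write_script() -> list:
            # Stage 1: the abstract script, cast purely in terms of dialogue acts.
            return [DialogueAct("AgentA", "greet", "hello"),
                    DialogueAct("AgentB", "inform", "car_is_fuel_efficient")]

        def realise_text(script: list) -> list:
            # Stage 2: one module enhances the script with surface text.
            templates = {"greet": "Hello!",
                         "inform": "This car is very fuel efficient."}
            for turn in script:
                turn.text = templates.get(turn.act, turn.content)
            return script

        def add_gestures(script: list) -> list:
            # Stage 3: a later module adds body language keyed to the act.
            gestures = {"greet": "wave", "inform": "point"}
            for turn in script:
                turn.gesture = gestures.get(turn.act, "idle")
            return script

        for turn in add_gestures(realise_text(write_script())):
            print(f"{turn.speaker} [{turn.gesture}]: {turn.text}")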

    Fourteenth Biennial Status Report: March 2017 - February 2019


    XS, S, M, L: Creative Text Generators of Different Scales

    Creative text generation projects of different sizes (in terms of lines of code and length of development time) are described. “Extra-small,” “small,” “medium,” and “large” projects are discussed as participating in the practice of creative computing in different ways, and the different ways in which these projects have circulated and are being used in the community of practice are identified. While large-scale projects have clearly been important in advancing creative text generation, the argument presented here is that the other types of projects are also valuable, and that they are undervalued (particularly in computer science and strongly related fields) by current structures of higher education and academic communication – structures which could be changed.
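
    To make the "extra-small" end of the scale concrete, here is a sketch of a complete creative text generator in a handful of lines; the vocabulary is invented, and the program stands in for the genre rather than for any specific project from the paper.

        # An "extra-small" creative text generator: a few lines of code with
        # open-ended output. Vocabulary invented for illustration.
        import random

        adjectives = ["silent", "electric", "forgotten", "endless"]
        nouns = ["harbour", "archive", "signal", "garden"]

        for _ in range(5):
            print(f"the {random.choice(adjectives)} {random.choice(nouns)}")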

    Natural language generation in the LOLITA system: an engineering approach

    Natural Language Generation (NLG) is the automatic generation of Natural Language (NL) by computer in order to meet communicative goals. One aim of NL processing (NLP) is to allow more natural communication with a computer and, since communication is a two-way process, an NL system should be able to produce as well as interpret NL text. This research concerns the design and implementation of an NLG module for the LOLITA system. LOLITA (Large scale, Object-based, Linguistic Interactor, Translator and Analyser) is a general-purpose base NLP system which performs core NLP tasks and upon which prototype NL applications have been built. As part of this encompassing project, this research shares some of its properties and methodological assumptions: the LOLITA generator has been built following Natural Language Engineering principles, uses LOLITA's SemNet representation as input, and is implemented in the functional programming language Haskell. As in other generation systems, the adopted solution utilises a two-component architecture. However, in order to avoid problems which occur at the interface between traditional planning and realisation modules (known as the generation gap), the distribution of tasks between the planner and plan-realiser is different: the plan-realiser, in the absence of detailed planning instructions, must perform some tasks (such as the selection and ordering of content) which are more traditionally performed by a planner. This work largely concerns the development of the plan-realiser and its interface with the planner. Another aspect of the solution is the use of Abstract Transformations, which act on the SemNet input before realisation, leading to an increased ability to create paraphrases. The research has led to a practical working solution which has greatly increased the power of the LOLITA system. The research also investigates how NLG systems can be evaluated, and the advantages and disadvantages of using a functional language for the generation task.
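
    The division of labour described above can be sketched schematically as follows; the original plan-realiser operates over LOLITA's SemNet graph in Haskell, whereas the node structure, salience weighting, and Python rendering below are stand-ins chosen purely for illustration.

        # Schematic sketch of a coarse planner plus a plan-realiser that itself
        # selects and orders content, narrowing the "generation gap".
        # Data structures are invented stand-ins, not LOLITA's SemNet.
        from dataclasses import dataclass

        @dataclass
        class Node:
            fact: str
            salience: float  # how central the fact is to the communicative goal

        def planner(goal: str) -> dict:
            # The planner issues only a coarse directive; no detailed ordering.
            return {"goal": goal, "max_facts": 2}

        def plan_realiser(plan: dict, semnet: list) -> str:
            # Without detailed instructions, the realiser selects and orders
            # content itself before producing surface text.
            chosen = sorted(semnet, key=lambda n: n.salience,
                            reverse=True)[:plan["max_facts"]]
            return " ".join(f"{n.fact}." for n in chosen)

        net = [Node("The reactor overheated", 0.9),
               Node("Maintenance was overdue", 0.7),
               Node("The canteen serves coffee", 0.1)]
        print(plan_realiser(planner("report the incident"), net))
        # -> The reactor overheated. Maintenance was overdue.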

    Understanding stories via event sequence modeling

    Understanding stories, i.e. sequences of events, is a crucial yet challenging natural language understanding (NLU) problem which requires dealing with multiple aspects of semantics, including actions, entities, and emotions, as well as background knowledge. In this thesis, towards the goal of building an NLU system that can model what has happened in stories and predict what will happen next, we contribute on three fronts: first, we investigate the optimal way to model events in text; second, we study how to model a sequence of events while balancing generality and specificity; third, we improve event sequence modeling by jointly modeling semantic information and incorporating background knowledge. Each of these three research problems poses both conceptual and computational challenges. For event extraction, we find that Semantic Role Labeling (SRL) signals can serve as good intermediate representations for events, giving us the ability to reliably identify events with minimal supervision. In addition, since it is important to resolve co-referring entities for extracted events, we make improvements to an existing co-reference resolution system. To model event sequences, we start by studying within-document event co-reference (the simplest flow of events), and then extend the models to two other, more natural kinds of event sequence, along with discourse phenomena, while abstracting over the specific mentions of predicates and entities. We further identify problems with the basic event sequence models, which fail to capture multiple semantic aspects and background knowledge. We then improve our system by jointly modeling frames, entities, and sentiments, yielding joint representations of all these semantic aspects, while at the same time incorporating explicit background knowledge acquired from other corpora as well as from human experience. For all tasks, we evaluate the developed algorithms and models on benchmark datasets and achieve better performance than other highly competitive methods.
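
    The first two steps of this pipeline can be sketched as follows; the SRL frames are hand-written stand-ins for the output of a trained SRL model, and the bigram model is a deliberately minimal stand-in for the event sequence models developed in the thesis.

        # Sketch: treat SRL predicate-argument structures as event
        # representations, abstract over specific mentions, then model the
        # event sequence with bigram statistics. Toy data, illustrative only.
        from collections import Counter, defaultdict

        # Hand-written stand-in for SRL output: one frame per clause, in order.
        frames = [
            {"predicate": "enter", "ARG0": "John", "ARG1": "restaurant"},
            {"predicate": "order", "ARG0": "John", "ARG1": "meal"},
            {"predicate": "eat",   "ARG0": "John", "ARG1": "meal"},
            {"predicate": "pay",   "ARG0": "John", "ARG1": "bill"},
        ]

        def to_event(frame: dict) -> str:
            # Abstract over specific mentions: keep the predicate, type the arguments.
            return f"{frame['predicate']}(PERSON, THING)"

        events = [to_event(f) for f in frames]

        # Minimal event-sequence model: counts of adjacent event pairs.
        bigrams = defaultdict(Counter)
        for prev, nxt in zip(events, events[1:]):
            bigrams[prev][nxt] += 1

        def predict_next(event: str):
            following = bigrams.get(event)
            return following.most_common(1)[0][0] if following else None

        print(predict_next("order(PERSON, THING)"))  # -> eat(PERSON, THING)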