
    Vagueness in referring expressions of quantity: effects on the audience

    NLG systems that generate natural language text from numerical input data must decide between alternative surface linguistic forms for the natural language output. When using referring expressions to identify numerical quantities, the system must decide between vague and crisp surface forms of the referring expression. Ideally, the system would be equipped with heuristics that it could use to make these decisions in the way that best suits the audience; however, there is currently little empirical data to draw on concerning the differential audience benefits of vague and crisp surface forms. In this paper we describe a series of experiments that investigate whether different surface forms affect the audience's cognitive load in different ways. We estimate cognitive load by measuring the response latencies in a forced choice referent identification task in which we vary the surface form of the referring expression that constitutes the instruction in the task. We find that the pattern of audience responses across the series of experiments provides little support for the cost reduction hypothesis that vague surface forms should place fewer cognitive demands on the audience than crisp surface forms; instead the results support the view that referring expressions that contain numerals are more taxing for the audience than referring expressions that use natural language quantifiers, at least in the context of a forced choice referent identification task. We offer this work as an initial foray into the provision of heuristics to augment NLG systems with audience-sensitivity.
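
    As a minimal illustration of the kind of latency comparison this methodology suggests (not taken from the paper), the sketch below contrasts a vague and a numeral condition with an independent-samples t-test; the latency values and condition labels are invented.

# A minimal sketch, assuming invented per-trial response latencies (ms)
# for two conditions; not the paper's actual data or analysis code.
from scipy.stats import ttest_ind

latencies_vague = [812, 790, 845, 803, 778, 821]      # e.g. "most of the dots"
latencies_numeral = [901, 956, 934, 889, 912, 948]    # e.g. "more than 17 dots"

t_stat, p_value = ttest_ind(latencies_numeral, latencies_vague)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A reliably higher mean latency in the numeral condition would be
# consistent with numerals being more taxing than quantifiers.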

    The utility of vagueness: does it lie elsewhere?

    Much of everyday language is vague, yet standard game-theoretic models do not find any utility of vagueness in cooperative situations. We report a novel experiment, the fourth in a series that aims to discern the utility of vagueness from the utility of other factors that come together with vagueness. We argue that the results support a view of vagueness on which the benefits associated with vague terms are due to other influences that vagueness brings with it, rather than to vagueness itself.

    Real vs. template-based natural language generation: A false opposition?

    This paper challenges the received wisdom that template-based approaches to the generation of language are necessarily inferior to other approaches as regards their maintainability, linguistic well-foundedness and quality of output. Some recent NLG systems that call themselves 'template-based' will illustrate our claim.
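
    To make concrete what 'template-based' means here, the following is a minimal sketch of a fixed-template generator; the weather domain and field names are invented for illustration and are not taken from the systems discussed.

# A minimal sketch of template-based generation, assuming an invented
# weather record; not the method of any particular system cited above.
TEMPLATE = "On {day}, expect {condition} with a high of {max_temp} degrees."

def realise(record: dict) -> str:
    """Fill the fixed surface template with values from the input record."""
    return TEMPLATE.format(**record)

print(realise({"day": "Tuesday", "condition": "light rain", "max_temp": 14}))
# -> On Tuesday, expect light rain with a high of 14 degrees.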

    A text reassembling approach to natural language generation

    Recent years have seen a number of proposals for performing Natural Language Generation (NLG) based in large part on statistical techniques. Despite having many attractive features, we argue that these existing approaches nonetheless have some important drawbacks, sometimes because the approach in question is not fully statistical (i.e., relies on a certain amount of handcrafting), sometimes because the approach in question lacks transparency. Focussing on some of the key NLG tasks (namely Content Selection, Lexical Choice, and Linguistic Realisation), we propose a novel approach, called the Text Reassembling approach to NLG (TRG), which approaches the ideal of a purely statistical approach very closely, and which is at the same time highly transparent. We evaluate the TRG approach and discuss how TRG may be extended to deal with other NLG tasks, such as Document Structuring and Aggregation. We discuss the strengths and limitations of TRG, concluding that the method may hold particular promise for domain experts who want to build an NLG system despite having little expertise in linguistics and NLG.
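
    The following loose sketch illustrates the general idea of reusing human-authored corpus text for generation; it is not the authors' TRG algorithm, and the corpus, annotations, and matching rule are invented.

# A loose sketch of corpus-reuse generation, assuming an invented
# annotated corpus; this is NOT the TRG method described above.
CORPUS = [
    ({"trend": "rising", "field": "temperature"},
     "Temperatures will continue to rise through the week."),
    ({"trend": "falling", "field": "temperature"},
     "Temperatures are expected to fall over the coming days."),
    ({"trend": "rising", "field": "rainfall"},
     "Rainfall totals are likely to increase."),
]

def generate(facts: dict) -> str:
    """Return the stored sentence whose annotations best match the input facts."""
    best = max(CORPUS, key=lambda item: sum(item[0].get(k) == v for k, v in facts.items()))
    return best[1]

print(generate({"trend": "rising", "field": "temperature"}))
# -> Temperatures will continue to rise through the week.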

    Why be Vague?

    …