
    Experiments with discourse-level choices and readability

    This paper reports on pilot experiments that are being used, together with corpus analysis, in the development of a Natural Language Generation (NLG) system, GIRL (Generator for Individual Reading Levels). GIRL generates reports for individuals after a literacy assessment. We tested GIRL's output on adult learner readers and good readers. Our aim was to find out whether the choices the system makes at the discourse level have an impact on readability. Our preliminary results indicate that such choices do indeed appear to be important for learner readers. These will be investigated further in future larger-scale experiments. Ultimately we intend to use the results to develop a mechanism that makes discourse-level choices that are appropriate for individuals' reading skills.
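    To illustrate the kind of discourse-level choice at issue, the sketch below (in Python, with invented sentences; it is not GIRL's actual rule set) shows two alternative realisations of the same reason relation: one long sentence with a subordinating connective, or two shorter sentences joined by an explicit cue phrase.

        # Hypothetical discourse-level choice for a "reason" relation.
        # The sentences are invented; they only illustrate the kind of
        # alternative a generator such as GIRL has to choose between.
        nucleus = "You did well on the reading task"
        satellite = "you answered most of the questions correctly"

        # Variant A: one sentence, subordinating connective.
        one_sentence = f"{nucleus} because {satellite}."

        # Variant B: two shorter sentences with an explicit cue phrase.
        two_sentences = f"{nucleus}. This is because {satellite}."

        print(one_sentence)
        print(two_sentences)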

    Language choice models for microplanning and readability

    This paper describes the construction of language choice models for the microplanning of discourse relations in a Natural Language Generation system that attempts to generate appropriate texts for users with varying levels of literacy. The models consist of constraint satisfaction problem graphs derived from the results of a corpus analysis. The corpus that the models are based on was written for good readers. We adapted the models for poor readers by allowing certain constraints to be tightened, based on psycholinguistic evidence. We describe how the design of the microplanner is evolving. We discuss the compromises involved in generating more readable textual output and the implications of our design for NLG architectures. Finally, we describe plans for future work.
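    A rough sketch of how such a constraint-based choice model could be encoded is given below. The variables, domains and constraints are invented for illustration; the paper's actual constraint satisfaction problem graphs are derived from corpus analysis and are considerably richer.

        from itertools import product

        # Hypothetical choice variables for realising a single "reason" relation.
        DOMAINS = {
            "connective": ["because", "since", "as a result"],
            "order": ["nucleus_first", "satellite_first"],
            "split": [False, True],  # one sentence or two
        }

        # Corpus-style hard constraints (invented here for illustration).
        def corpus_constraints(choice):
            # "as a result" only works when the cause (satellite) comes first.
            if choice["connective"] == "as a result" and choice["order"] != "satellite_first":
                return False
            # A two-sentence realisation needs a connective that can start a sentence.
            if choice["split"] and choice["connective"] == "since":
                return False
            return True

        # A constraint that can be "tightened" for poor readers, e.g. on
        # psycholinguistic grounds: always split into two sentences and
        # avoid the less frequent connective "since".
        def poor_reader_constraints(choice):
            return choice["split"] and choice["connective"] != "since"

        def solutions(tighten_for_poor_readers=False):
            names = list(DOMAINS)
            for values in product(*DOMAINS.values()):
                choice = dict(zip(names, values))
                if not corpus_constraints(choice):
                    continue
                if tighten_for_poor_readers and not poor_reader_constraints(choice):
                    continue
                yield choice

        print(len(list(solutions())))                               # full choice space
        print(len(list(solutions(tighten_for_poor_readers=True))))  # tightened space

    In this sketch, tightening constraints only shrinks the space of admissible realisations; the underlying model is unchanged.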

    Generating readable texts for readers with low basic skills

    Most NLG systems generate texts for readers with good reading ability, but SkillSum adapts its output for readers with poor literacy. Evaluation with low-skilled readers confirms that SkillSum's knowledge-based microplanning choices enhance readability. We also discuss future readability improvements.

    A corpus analysis of discourse relations for Natural Language Generation

    We are developing a Natural Language Generation (NLG) system that generates texts tailored to the reading ability of individual readers. As part of building the system, GIRL (Generator for Individual Reading Levels), we carried out an analysis of the RST Discourse Treebank Corpus to find out how human writers linguistically realise discourse relations. The goal of the analysis was (a) to create a model of the choices that need to be made when realising discourse relations, and (b) to understand how these choices were typically made for “normal” readers, for a variety of discourse relations. We present our results for the discourse relations concession, condition, elaboration-additional, evaluation, example, reason and restatement. We discuss the results and how they were used in GIRL.
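    One simple way such a choice model might be represented is a table of per-relation frequencies for each realisation decision, as in the hypothetical sketch below (the counts are invented and are not the paper's RST Discourse Treebank results).

        # Hypothetical per-relation frequencies for two realisation choices.
        # The numbers are invented, not the corpus analysis reported in the paper.
        CHOICE_MODEL = {
            "condition": {
                "connective": {"if": 90, "unless": 7, "when": 3},
                "split_into_two_sentences": {False: 85, True: 15},
            },
            "concession": {
                "connective": {"but": 55, "although": 30, "though": 15},
                "split_into_two_sentences": {False: 60, True: 40},
            },
        }

        def most_common(relation, choice):
            """Return the option writers used most often for 'normal' readers."""
            counts = CHOICE_MODEL[relation][choice]
            return max(counts, key=counts.get)

        print(most_common("condition", "connective"))                 # -> "if"
        print(most_common("concession", "split_into_two_sentences"))  # -> False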

    Vocabulary and reading comprehension: instructional effects

    Bibliography: leaves 32-34. Supported by the National Institute of Education.

    Comprehensibility and the basic structures of dialogue

    The study of what makes utterances difficult or easy to understand is one of the central topics of research in comprehension. It is both theoretically attractive and useful in practice. The more we know about difficulties in understanding, the more we know about understanding; and the better we grasp the typical problems of understanding in certain types of discourse and for certain recipients, the better we can overcome these problems and advise the people whose job it is to do so. It is therefore not surprising that comprehensibility has been the object of much reflection as far back as the days of classical rhetoric, and that it is a center of lively interest in several present-day scientific disciplines, ranging from artificial intelligence and educational psychology to linguistics.

    Generating basic skills reports for low-skilled readers

    We describe SkillSum, a Natural Language Generation (NLG) system that generates a personalised feedback report for someone who has just completed a screening assessment of their basic literacy and numeracy skills. Because many SkillSum users have limited literacy, the generated reports must be easily comprehended by people with limited reading skills; this is the most novel aspect of SkillSum, and the focus of this paper. We used two approaches to maximise readability. First, for determining content and structure (document planning), we did not explicitly model readability, but rather followed a pragmatic approach of repeatedly revising content and structure following pilot experiments and interviews with domain experts. Second, for choosing linguistic expressions (microplanning), we attempted to formulate explicitly the choices that enhanced readability, using a constraints approach and preference rules; our constraints were based on corpus analysis and our preference rules on psycholinguistic findings. Evaluation of the SkillSum system was twofold: it compared the usefulness of NLG technology to that of canned text output, and it assessed the effectiveness of the readability model. Results showed that NLG was more effective than canned text at enhancing users' knowledge of their skills, and also suggested that the empirical 'revise based on experiments and interviews' approach made a substantial contribution to readability, as did our explicit, psycholinguistically inspired models of readability choices.
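    The 'constraints plus preference rules' pattern described above might look roughly like the following sketch, in which invented rules stand in for SkillSum's corpus-based constraints and psycholinguistically motivated preferences.

        # Candidate realisations of one message, assumed to come from an
        # upstream microplanner; the strings and rules below are illustrative only.
        candidates = [
            "Your reading is good, since you answered nearly all of the questions correctly.",
            "Your reading is good because you answered nearly all of the questions correctly.",
            "Your reading is good. You answered most of the questions correctly.",
        ]

        # Hard constraint (corpus-based in SkillSum; invented here):
        # avoid the connective "since" for expressing reason.
        def satisfies_constraints(text):
            return " since " not in text

        # Preference rules (psycholinguistically motivated in SkillSum;
        # invented here): prefer shorter sentences and more frequent words.
        # A lower score is better.
        def preference_score(text):
            sentences = [s for s in text.split(".") if s.strip()]
            longest = max(len(s.split()) for s in sentences)
            rare_word_penalty = 5 if "nearly" in text else 0  # toy frequency rule
            return longest + rare_word_penalty

        viable = [c for c in candidates if satisfies_constraints(c)]
        print(min(viable, key=preference_score))  # the two-sentence variant wins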