Experiments with discourse-level choices and readability
This paper reports on pilot experiments that are being used, together with corpus analysis, in the development of a Natural Language Generation (NLG) system, GIRL (Generator for Individual Reading Levels). GIRL generates reports for individuals after a literacy assessment.
We tested GIRL's output on adult learner readers and good readers. Our aim was to find out whether the choices the system makes at the discourse level have an impact on readability. Our preliminary results indicate that such choices are indeed important for learner readers; they will be investigated further in future larger-scale experiments. Ultimately we intend to use the results to develop a mechanism that makes discourse-level choices appropriate to an individual's reading skills.
Language choice models for microplanning and readability
This paper describes the construction of language choice models for the microplanning of discourse relations in a Natural Language Generation system that attempts to generate appropriate texts for users with varying levels of literacy. The models consist of constraint satisfaction problem graphs that have been derived from the results of a corpus analysis. The corpus that the models are based on was written for good readers. We adapted the models for poor readers by allowing certain constraints to be tightened, based on psycholinguistic evidence. We describe how the design of the microplanner is evolving. We discuss the compromises involved in generating more readable textual output and the implications of our design for NLG architectures. Finally, we describe plans for future work.
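As an illustration of what such a choice model might look like in code, here is a minimal constraint-satisfaction sketch for realising a single discourse relation. The choice points (connective, position, packaging), their domains, and the constraints are invented for illustration; they are not the actual GIRL/SkillSum models.

```python
# Minimal CSP sketch: enumerate realisation choices for one discourse
# relation and filter by constraints. All domains/constraints are
# hypothetical examples, not the published models.
from itertools import product

DOMAINS = {
    "connective": ["if", "provided that", "unless"],
    "position":   ["before_nucleus", "after_nucleus"],
    "packaging":  ["one_sentence", "two_sentences"],
}

def corpus_constraints(a):
    """Hard constraints, assumed to come from a corpus analysis."""
    # Assume "unless" was only observed with the satellite after the nucleus.
    if a["connective"] == "unless" and a["position"] == "before_nucleus":
        return False
    # Assume two-sentence packaging requires nucleus-first ordering.
    if a["packaging"] == "two_sentences" and a["position"] == "before_nucleus":
        return False
    return True

def solutions(domains, constraint):
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if constraint(assignment):
            yield assignment

# "Tightening" a constraint for poor readers: restrict to the most
# frequent connective (an assumed psycholinguistic preference).
def tightened(a):
    return corpus_constraints(a) and a["connective"] == "if"

print(list(solutions(DOMAINS, corpus_constraints)))  # choices for good readers
print(list(solutions(DOMAINS, tightened)))           # tightened for poor readers
```

Tightening is modelled here simply as an extra predicate layered onto the corpus constraints, which keeps the good-reader and poor-reader models sharing one solution space.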
Generating readable texts for readers with low basic skills
Most NLG systems generate texts for readers with good reading ability, but SkillSum adapts its output for readers with poor literacy. Evaluation with low-skilled readers confirms that SkillSum's knowledge-based microplanning choices enhance readability. We also discuss future readability improvements.
A corpus analysis of discourse relations for Natural Language Generation
We are developing a Natural Language Generation (NLG) system that generates texts tailored to the reading ability of individual readers. As part of building the system, GIRL (Generator for Individual Reading Levels), we carried out an analysis of the RST Discourse Treebank Corpus to find out how human writers linguistically realise discourse relations. The goal of the analysis was (a) to create a model of the choices that need to be made when realising discourse relations, and (b) to understand how these choices were typically made for "normal" readers, for a variety of discourse relations. We present our results for the discourse relations concession, condition, elaboration additional, evaluation, example, reason and restatement. We discuss the results and how they were used in GIRL.
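To make the idea of a corpus-derived choice model concrete, the following toy sketch tabulates realisation choices for discourse relations from annotated observations. Only the relation names follow the abstract; the records, fields and counts are invented.

```python
# Illustrative only: counting how a discourse relation was realised in an
# RST-annotated corpus. Each record is (relation, connective,
# satellite_position, same_sentence) -- all values are made up.
from collections import Counter

observations = [
    ("condition",  "if",       "before_nucleus", True),
    ("condition",  "if",       "after_nucleus",  True),
    ("concession", "although", "before_nucleus", True),
    ("concession", "but",      "after_nucleus",  True),
    ("reason",     "because",  "after_nucleus",  True),
]

def choice_model(relation):
    """Distribution of realisation choices observed for one relation."""
    rows = [obs[1:] for obs in observations if obs[0] == relation]
    return Counter(rows)

print(choice_model("condition"))
# Counter({('if', 'before_nucleus', True): 1, ('if', 'after_nucleus', True): 1})
```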
Reading errors made by skilled and unskilled readers: evaluating a system that generates reports for people with poor literacy
This study evaluates a natural language generation system that generates literacy assessment reports, with the aim of producing more readable documents. Prior research assessed comprehension and reading speed on modified documents. Here, we investigate whether individuals make fewer reading errors after the system modifies documents to make them more readable. Preliminary results show that poor readers make more errors than good readers. The full paper will describe readers' error rates on documents modified for readability.
Vocabulary and reading comprehension: instructional effects
Bibliography: leaves 32-34. Supported by the National Institute of Education.
Generating Feedback Reports for Adults Taking Basic Skills Tests
SkillSum is an Artificial Intelligence (AI) and Natural Language Generation (NLG) system that produces short feedback reports for people taking online tests that check their basic literacy and numeracy skills. In this paper, we describe the SkillSum system and application, focusing on three challenges that we believe are important for many systems that generate feedback reports from Web-based tests: choosing content based on very limited data, generating appropriate texts for people with varied levels of literacy and knowledge, and integrating the Web-based system with existing assessment and support procedures.
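The first challenge, choosing content from very limited data, might be addressed with simple threshold-based content-selection rules, as in this hedged sketch. The message keys and thresholds are assumptions for illustration, not SkillSum's actual rules.

```python
# Hypothetical schema-style content selection for a feedback report when
# the only available input is a test score. Keys/thresholds are invented.
def select_messages(score: int, max_score: int) -> list[str]:
    messages = ["intro", "your_score"]
    ratio = score / max_score
    if ratio >= 0.8:
        messages.append("skills_ok")
    else:
        # Assumed: weaker results trigger a gap message and a suggestion.
        messages += ["skills_gap", "suggest_course"]
    messages.append("closing")
    return messages

print(select_messages(17, 20))
# ['intro', 'your_score', 'skills_ok', 'closing']
```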
Comprehensibility and the basic structures of dialogue
The study of what makes utterances difficult or easy to understand is one of the central topics of research in comprehension. It is both theoretically attractive and useful in practice. The more we know about difficulties in understanding, the more we know about understanding. And the better we grasp the typical problems of understanding in certain types of discourse and for certain recipients, the better we can overcome these problems and advise people whose job it is to do so. It is therefore not surprising that comprehensibility has been an object of reflection as far back as the days of classical rhetoric, and that it is a center of lively interest in several present-day scientific disciplines, ranging from artificial intelligence and educational psychology to linguistics.
Generating basic skills reports for low-skilled readers
We describe SkillSum, a Natural Language Generation (NLG) system that generates a personalised feedback report for someone who has just completed a screening assessment of their basic literacy and numeracy skills. Because many SkillSum users have limited literacy, the generated reports must be easily comprehended by people with limited reading skills; this is the most novel aspect of SkillSum, and the focus of this paper. We used two approaches to maximise readability. First, for determining content and structure (document planning), we did not explicitly model readability, but rather followed a pragmatic approach of repeatedly revising content and structure following pilot experiments and interviews with domain experts. Second, for choosing linguistic expressions (microplanning), we attempted to formulate explicitly the choices that enhance readability, using a constraints approach and preference rules; our constraints were based on corpus analysis and our preference rules on psycholinguistic findings. Evaluation of the SkillSum system was twofold: it compared the usefulness of NLG technology to that of canned text output, and it assessed the effectiveness of the readability model. Results showed that NLG was more effective than canned text at enhancing users' knowledge of their skills, and also suggested that the empirical 'revise based on experiments and interviews' approach contributed substantially to readability, in addition to our explicit, psycholinguistically inspired models of readability choices.
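The constraints-plus-preference-rules approach to microplanning could be sketched as follows: hard constraints prune impossible realisations, then a preference function ranks the survivors by estimated readability. The features and weights below are illustrative assumptions, not SkillSum's published rules.

```python
# Hypothetical preference rules ranking realisation choices that survived
# the hard constraints. Lower cost = assumed easier to read.
def readability_cost(choice: dict) -> int:
    cost = 0
    if choice["packaging"] == "one_sentence":
        cost += 2   # assumed: long sentences are harder for weak readers
    if choice["connective"] not in ("if", "but", "because"):
        cost += 3   # assumed: prefer frequent, familiar connectives
    if choice["position"] == "before_nucleus":
        cost += 1   # assumed: prefer nucleus-first ordering
    return cost

candidates = [
    {"packaging": "one_sentence",  "connective": "provided that", "position": "after_nucleus"},
    {"packaging": "two_sentences", "connective": "if",            "position": "after_nucleus"},
]
best = min(candidates, key=readability_cost)
print(best)  # the two-sentence realisation with 'if' wins (cost 0 vs 5)
```

Separating hard constraints from soft preferences in this way lets the same solution space serve good and poor readers, with only the ranking (or tightening) changing between reader groups.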