
    STARC: Structured Annotations for Reading Comprehension

    We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions. Our framework introduces a principled structure for the answer choices and ties them to textual span annotations. The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English. We use this dataset to demonstrate that STARC can be leveraged for a key new application for the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments. We further show that it enables in-depth analyses and comparisons between machine and human reading comprehension behavior, including error distributions and guessing ability. Our experiments also reveal that the standard multiple choice dataset in NLP, RACE, is limited in its ability to measure reading comprehension. 47% of its questions can be guessed by machines without accessing the passage, and 18% are unanimously judged by humans as not having a unique correct answer. OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance.

    Comment: ACL 2020. OneStopQA dataset, STARC guidelines and human experiments data are available at https://github.com/berzak/onestop-q
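    The no-passage guessing probe mentioned in the abstract can be sketched as follows. This is an illustrative toy only: the paper evaluates trained models with the passage field ablated, whereas here the "guesser" is a simple longest-choice heuristic, and the data structures and function names are assumptions, not the authors' code.

    ```python
    def guess_without_passage(choices):
        """Pick an answer using only the answer choices (longest-choice
        heuristic) -- the passage is never consulted."""
        return max(range(len(choices)), key=lambda i: len(choices[i]))

    def guessing_accuracy(items):
        """Fraction of multiple-choice items the passage-blind baseline
        answers correctly; each item has 'choices' and a gold 'label'."""
        correct = sum(guess_without_passage(it["choices"]) == it["label"]
                      for it in items)
        return correct / len(items)

    if __name__ == "__main__":
        # Hypothetical toy items for illustration.
        toy_items = [
            {"choices": ["Paris", "The capital city of France",
                         "Rome", "Oslo"], "label": 1},
            {"choices": ["Yes", "No",
                         "It depends on several unstated factors",
                         "Maybe"], "label": 0},
        ]
        print(guessing_accuracy(toy_items))  # prints 0.5 on this toy set
    ```

    A high score from such a probe indicates that questions leak their answers through the choices alone, which is the kind of artifact the 47% RACE figure quantifies.
    
    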

    MOSAIC: A Model for Technologically Enhanced Educational Linguistics


    A process-oriented language for describing aspects of reading comprehension

    Includes bibliographical references (p. 36-38). The research described herein was supported in part by the National Institute of Education under Contract No. MS-NIE-C-400-76-011.

    Propositional Content and Interpretation in Expository Text

    What is it about expository prose that makes it harder to follow than most spoken language? SPEECH we acquire naturally, regardless of instruction. Skill in the production and comprehension of written language, which we will call TEXT, takes years to achieve along with academic instruction, and even then success is too often incomplete. Additionally, the gap between production and comprehension seems far wider for text than for speech. It should be apparent that we are discussing here not the prose of personal letters or newspaper advertisements, but the kinds of complex expository prose found most commonly in academic texts, in the more prestigious newspapers and magazines, and in legal, medical, and business writing for nonspecialist readers. Of course, the assumed readership is not really nonspecialist. Students in a high school social studies class or a college sociology course are assumed to have achieved some appropriate level of sophistication both in the subject matter and in the language generally used to communicate it. But, leaving aside specialized content and vocabulary, what other factors might be involved? What properties of text make it harder to process than speech?