
    Looking Under the Hood: Tools for Diagnosing your Question Answering Engine

    In this paper we analyze two question answering tasks: the TREC-8 question answering task and a set of reading comprehension exams. First, we show that Q/A systems perform better when there are multiple answer opportunities per question. Next, we analyze common approaches to two subproblems: term overlap for answer sentence identification, and answer typing for short answer extraction. We present general tools for analyzing the strengths and limitations of techniques for these subproblems. Our results quantify the limitations of both term overlap and answer typing to distinguish between competing answer candidates. Comment: Revision of paper appearing in the Proceedings of the Workshop on Open-Domain Question Answering
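
    The term-overlap subproblem this abstract analyzes lends itself to a compact illustration. Below is a minimal sketch of such a baseline, assuming a simple alphanumeric tokenizer, a small stopword list, and question-normalized scoring; these choices are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a term-overlap baseline for answer sentence
# identification. The tokenizer, stopword list, and normalization
# here are illustrative assumptions, not the paper's setup.
import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "is", "was",
             "what", "who", "when", "where", "how", "did", "does"}

def content_terms(text):
    """Lowercase, tokenize on alphanumeric runs, and drop stopwords."""
    return {t for t in re.findall(r"[a-z0-9]+", text.lower())
            if t not in STOPWORDS}

def overlap_score(question, sentence):
    """Fraction of the question's content terms found in the sentence."""
    q, s = content_terms(question), content_terms(sentence)
    return len(q & s) / len(q) if q else 0.0

def best_sentence(question, sentences):
    """Pick the candidate sentence with the highest overlap score."""
    return max(sentences, key=lambda s: overlap_score(question, s))

# Ties between competing candidates are exactly the failure mode the
# paper quantifies: many sentences can share the same overlap score.
```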

    A process-oriented language for describing aspects of reading comprehension

    Includes bibliographical references (p. 36-38). The research described herein was supported in part by the National Institute of Education under Contract No. MS-NIE-C-400-76-011.

    Measure for Measure: A Critical Consumers' Guide to Reading Comprehension Assessments for Adolescents

    A companion report to Carnegie's Time to Act, this guide analyzes and rates commonly used reading comprehension tests for various elements and purposes. It outlines trends in types of questions, the emphasis placed on critical thinking, and screening or diagnostic functions.

    Teaching Machines to Read and Comprehend

    Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large-scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large-scale supervised reading comprehension data. This allows us to develop a class of attention-based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure. Comment: Appears in: Advances in Neural Information Processing Systems 28 (NIPS 2015). 14 pages, 13 figures
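
    The data-creation methodology this abstract refers to pairs documents with cloze-style queries: a summary sentence becomes a question by masking an entity, which the reader must recover from the document. Here is a hedged sketch of that construction step; the placeholder token, field names, and entity handling are my own assumptions, not the paper's exact pipeline.

```python
# Sketch of cloze-style (context, query, answer) construction:
# mask one entity in a summary sentence and ask the system to
# recover it from the full document. Details here are assumptions.

def make_cloze_example(document, summary_sentence, entities,
                       placeholder="@placeholder"):
    """Mask the first listed entity that occurs in the summary
    sentence, yielding a supervised reading comprehension example."""
    for entity in entities:
        if entity in summary_sentence:
            query = summary_sentence.replace(entity, placeholder)
            return {"context": document, "query": query, "answer": entity}
    return None  # no maskable entity; skip this document/summary pair

example = make_cloze_example(
    document="The CEO of Acme, Jane Doe, announced record profits ...",
    summary_sentence="Jane Doe announces record profits at Acme",
    entities=["Jane Doe", "Acme"],
)
# example["query"] == "@placeholder announces record profits at Acme"
```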

    Crowdsourcing Multiple Choice Science Questions

    We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams. Comment: accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
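
    The distractor-suggestion step this abstract mentions can be illustrated with a toy ranker: propose corpus terms that are plausible alternatives to the correct answer but not the answer itself. The string-based similarity below is a stand-in assumption; the paper's actual suggestions come from a model trained on existing questions and domain text.

```python
# Hedged sketch of model-assisted distractor suggestion: rank candidate
# terms from a domain corpus by similarity to the gold answer, so the
# suggested distractors are plausible but distinct. The string-similarity
# measure is an illustrative stand-in, not the paper's learned model.
import difflib

def suggest_distractors(answer, candidate_terms, k=3):
    """Return the k corpus terms most similar to the answer, excluding it."""
    pool = [t for t in candidate_terms if t.lower() != answer.lower()]
    pool.sort(key=lambda t: difflib.SequenceMatcher(
        None, answer.lower(), t.lower()).ratio(), reverse=True)
    return pool[:k]

print(suggest_distractors(
    "mitochondria",
    ["mitosis", "chloroplast", "membrane", "mitochondrion", "ribosome"]))
```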

    Writing to Read: Evidence for How Writing Can Improve Reading

    Analyzes studies showing that writing about reading material enhances reading comprehension, that writing instruction strengthens reading skills, and that increased writing leads to improved reading. Outlines recommended writing practices and how to implement them.