
    Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering

    We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1,326 elementary-level science facts. Roughly 6,000 questions probe an understanding of these facts and their application to novel situations. This requires combining an open book fact (e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of armor is made of metal) obtained from other sources. While existing QA datasets over documents or knowledge bases, being generally self-contained, focus on linguistic understanding, OpenBookQA probes a deeper understanding of both the topic (in the context of common knowledge) and the language it is expressed in. Human performance on OpenBookQA is close to 92%, but many state-of-the-art pre-trained QA methods perform surprisingly poorly, worse than several simple neural baselines we develop. Our oracle experiments, designed to circumvent the knowledge retrieval bottleneck, demonstrate the value of both the open book and additional facts. We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance. Comment: Published as a conference long paper at EMNLP 2018.
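    The two-hop combination described above (an open-book fact plus a common-knowledge assertion jointly supporting an answer) can be sketched with a deliberately simple scorer. The token-overlap heuristic, the example question, and the answer choices below are illustrative assumptions, not the paper's baselines or data.

```python
# Minimal sketch of the OpenBookQA-style two-hop idea: an answer choice is
# preferred when an open-book fact and a common-knowledge assertion together
# cover the question and the choice. Plain token overlap stands in for a
# real retrieval/scoring model and is an illustrative assumption.

def tokens(text):
    return set(text.lower().replace("?", "").replace(".", "").split())

def score_choice(question, choice, fact, common_knowledge):
    """Fraction of question+choice tokens covered by fact + common knowledge."""
    support = tokens(fact) | tokens(common_knowledge)
    query = tokens(question) | tokens(choice)
    return len(query & support) / len(query)

# Hypothetical example mirroring the title question.
question = "Which of these objects would conduct electricity?"
choices = ["a suit of armor", "a wooden spoon", "a cotton glove"]
fact = "metals conduct electricity"                    # open-book fact
common_knowledge = "a suit of armor is made of metal"  # outside knowledge

print(max(choices, key=lambda c: score_choice(question, c, fact, common_knowledge)))
# -> a suit of armor
```

    As the abstract notes, the hard part in practice is retrieving the right fact and the right piece of common knowledge, not the final scoring step.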

    Answering Count Questions with Structured Answers from Text

    In this work we address the challenging case of answering count queries in web search, such as "number of songs by John Lennon". Prior methods merely answer these with a single, and sometimes puzzling, number, or return a ranked list of text snippets with different numbers. This paper proposes a methodology for answering count queries with inference, contextualization, and explanatory evidence. Unlike previous systems, our method infers final answers from multiple observations, supports semantic qualifiers for the counts, and provides evidence by enumerating representative instances. Experiments with a wide variety of queries, including existing benchmarks, show the benefits of our method and the influence of specific parameter settings. Our code, data, and an interactive system demonstration are publicly available at https://github.com/ghoshs/CoQEx and https://nlcounqer.mpi-inf.mpg.de/
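    The inference step, deriving one final answer from multiple diverging count observations rather than trusting a single snippet, can be illustrated with a small aggregator. The confidence-weighted median below is a common robust choice and an assumption for illustration; the abstract does not state that CoQEx aggregates exactly this way, and the snippet counts are invented.

```python
# Sketch: aggregate noisy count observations extracted from different text
# snippets into a single answer. A confidence-weighted median is robust to
# outlier snippets; it is an illustrative choice, not necessarily the exact
# CoQEx procedure. All numbers below are invented.

def weighted_median(observations):
    """observations: list of (count, weight) pairs; weight ~ extraction confidence."""
    observations = sorted(observations)
    total = sum(weight for _, weight in observations)
    cumulative = 0.0
    for count, weight in observations:
        cumulative += weight
        if cumulative >= total / 2:
            return count

# Five hypothetical snippet counts for "number of songs by John Lennon".
snippet_counts = [(72, 0.9), (180, 0.4), (73, 0.8), (227, 0.3), (72, 0.7)]
print(weighted_median(snippet_counts))  # -> 72
```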

    Answering Count Queries with Explanatory Evidence

    A challenging case in web search and question answering is count queries, such as "number of songs by John Lennon". Prior methods merely answer these with a single, and sometimes puzzling, number, or return a ranked list of text snippets with different numbers. This paper proposes a methodology for answering count queries with inference, contextualization, and explanatory evidence. Unlike previous systems, our method infers final answers from multiple observations, supports semantic qualifiers for the counts, and provides evidence by enumerating representative instances. Experiments with a wide variety of queries show the benefits of our method. To promote further research on this underexplored topic, we release an annotated dataset of 5k queries with 200k relevant text spans. Comment: Version published at SIGIR 2022.
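    The semantic qualifiers mentioned in both abstracts capture how a stated number relates to the true count (exact, lower bound, upper bound, approximate). A small rule-based tagger over the count's surrounding text gives the flavor; the cue lists, label set, and examples below are illustrative assumptions, not the released system's actual rules.

```python
import re

# Sketch: tag a count mention with a semantic qualifier based on cue words
# in its context. Cue lists and labels are illustrative assumptions; the
# released system may use different rules or a learned model.

QUALIFIER_CUES = {
    "lower_bound": ("more than", "over", "at least", "upwards of"),
    "upper_bound": ("fewer than", "under", "at most", "up to"),
    "approximate": ("about", "around", "roughly", "approximately"),
}

def qualify_count(span):
    """Return (qualifier, count) for a text span containing a number, else None."""
    match = re.search(r"\d[\d,]*", span)
    if not match:
        return None
    count = int(match.group().replace(",", ""))
    lowered = span.lower()
    for qualifier, cues in QUALIFIER_CUES.items():
        if any(cue in lowered for cue in cues):
            return qualifier, count
    return "exact", count

print(qualify_count("Lennon wrote more than 70 songs with McCartney."))   # ('lower_bound', 70)
print(qualify_count("He released roughly 180 recordings in his career.")) # ('approximate', 180)
```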

    EFL University Students' Cognitive Processing of Spoken Academic Discourse as Evidenced by Lecture Notes

    This paper presents an empirical investigation of the role of EFL university students’ bottom-up and top-down processing in academic listening. EFL student notes from a Tourism lecture were analysed according to their test answerability, that is, the test questions they helped answer. The true-false questions administered to check comprehension were classed as a) “supporting”, if they asked for specific facts, examples, or ideas supporting the main concepts in the lecture and therefore required accurate bottom-up processing, or b) “main”, if they referred to more general, essential, and recurrent concepts in the lecture, which students could mainly try to answer by resorting to top-down processing. The student notes were also classed as “main” or “supporting”, depending on the type of question they helped answer. Two paired t-tests, conducted on the T/F scores and the answerability scores respectively, revealed that both high-proficiency and low-proficiency students were weaker in bottom-up than in top-down processing. This paper also provides evidence of some common EFL auditory processing difficulties: word beginnings, word endings, number of syllables, and function words.
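    The paired design reported above (each student contributes one bottom-up score and one top-down score, so the two samples are linked) maps directly onto a paired t-test. The sketch below shows the mechanics with invented score vectors; these are not the study's data.

```python
from scipy import stats

# Paired t-test sketch: each student has one "supporting" (bottom-up) score
# and one "main" (top-down) score, so the samples are paired. The numbers
# are invented solely to illustrate the procedure.

bottom_up = [4, 5, 3, 6, 4, 5, 3, 4, 5, 4]  # "supporting"-question scores
top_down  = [6, 7, 5, 7, 6, 6, 5, 6, 7, 6]  # "main"-question scores

t_stat, p_value = stats.ttest_rel(bottom_up, top_down)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A significant negative t here indicates weaker bottom-up than top-down performance.
```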