    Modeling Task Effects in Human Reading with Neural Attention

    Humans read by making a sequence of fixations and saccades. They often skip words, without apparent detriment to understanding. We offer a novel explanation for skipping: readers optimize a tradeoff between performing a language-related task and fixating as few words as possible. We propose a neural architecture that combines an attention module (deciding whether to skip words) and a task module (memorizing the input). We show that our model predicts human skipping behavior, while also modeling reading times well, even though it skips 40% of the input. A key prediction of our model is that different reading tasks should result in different skipping behaviors. We confirm this prediction in an eye-tracking experiment in which participants answered questions about a text. We are able to capture these experimental results using our model, replacing the memorization module with a task module that performs neural question answering.
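
    The skip-versus-fixate tradeoff lends itself to a compact sketch. Below is a minimal PyTorch illustration, not the authors' implementation: a per-word gate plays the role of the attention module, an LSTM stands in for the task module, and the loss trades a placeholder task objective against the number of fixated words. Soft gates replace the paper's hard skipping decisions for differentiability; all module names and sizes are assumptions.

    # Minimal sketch (not the authors' code) of the skip/attend idea.
    import torch
    import torch.nn as nn

    class SkipReader(nn.Module):
        def __init__(self, vocab_size=1000, emb_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.gate = nn.Linear(emb_dim, 1)   # attention module: fixate vs. skip
            self.task = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

        def forward(self, tokens):
            emb = self.embed(tokens)                 # (batch, seq, emb_dim)
            p_fix = torch.sigmoid(self.gate(emb))    # fixation probability per word
            gated = emb * p_fix                      # skipped words contribute ~0
            out, _ = self.task(gated)                # task module reads fixated words
            return out, p_fix

    # Tradeoff loss: task performance vs. number of fixations.
    model = SkipReader()
    tokens = torch.randint(0, 1000, (2, 10))         # toy batch of token ids
    out, p_fix = model(tokens)
    task_loss = out.pow(2).mean()                    # placeholder task objective
    loss = task_loss + 0.1 * p_fix.mean()            # penalize fixating many words
    loss.backward()

    In the paper's formulation the skip decision is discrete, so training would need a gradient estimator such as REINFORCE; the soft gate above is the simplest differentiable stand-in.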

    How much should you ask? On the question structure in QA systems

    Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems show that it is possible to ask questions in natural language. However, users are still accustomed to query-like systems, where they type in keywords to search for an answer. In this study we examine which parts of a question are essential for obtaining a valid answer. To do so, we use LIME, a framework that explains predictions by local approximation. We find that QA models largely disregard grammar and natural-language form: a state-of-the-art model can answer correctly even when 'asked' with only a few words that have high LIME coefficients. To our knowledge, this is the first time a QA model has been explained with LIME.
    Comment: Accepted to the Analyzing and Interpreting Neural Networks for NLP workshop at EMNLP 2018.
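
    As a concrete illustration of the method, here is a minimal sketch using the `lime` Python package (not the paper's code). LIME expects a classifier, so the QA model is wrapped as a binary "answered correctly" classifier; `toy_qa_proba` is a hypothetical stand-in for a real QA model such as the one probed in the paper.

    # Minimal sketch: explaining which question words matter, via LIME.
    import numpy as np
    from lime.lime_text import LimeTextExplainer

    def toy_qa_proba(questions):
        """Return [P(wrong), P(correct)] for each perturbed question.
        Hypothetical stand-in: pretends the model only needs the
        keywords 'capital' and 'France' to answer correctly."""
        probs = []
        for q in questions:
            score = sum(w in q.lower() for w in ("capital", "france")) / 2.0
            probs.append([1.0 - score, score])
        return np.array(probs)

    explainer = LimeTextExplainer(class_names=["wrong", "correct"])
    exp = explainer.explain_instance(
        "What is the capital of France?",  # question LIME will perturb
        toy_qa_proba,                      # wrapped QA model
        num_features=6,                    # top words by coefficient
    )
    # Words with the highest coefficients are the ones the model relies
    # on; the paper finds that a few such keywords suffice for an answer.
    print(exp.as_list())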