
    Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives

    This paper tackles the problem of reading comprehension over long narratives, where documents easily span thousands of tokens. We propose a curriculum learning (CL) based Pointer-Generator framework for reading/sampling over large documents, enabling diverse training of the neural model based on the notion of alternating contextual difficulty. This can be interpreted as a form of domain randomization and/or generative pretraining during training. To this end, the Pointer-Generator softens the requirement of having the answer within the context, enabling us to construct diverse training samples for learning. Additionally, we propose a new Introspective Alignment Layer (IAL), which reasons over decomposed alignments using block-based self-attention. We evaluate our proposed method on the NarrativeQA reading comprehension benchmark, achieving state-of-the-art performance and improving over existing baselines by 51% (relative) on BLEU-4 and 17% (relative) on Rouge-L. Extensive ablations confirm the effectiveness of our proposed IAL and CL components. Comment: Accepted to ACL 201
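The "block-based self-attention" used in the IAL can be illustrated with a minimal sketch: instead of letting every token attend to all n tokens (O(n²) cost over a long narrative), the sequence is split into fixed-size blocks and attention is computed within each block only. The function below is a hypothetical NumPy illustration of that idea, not the paper's actual layer (which also reasons over decomposed alignments); names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block_self_attention(x, block_size):
    """Self-attention applied independently within fixed-size blocks.

    Splitting n tokens into n/block_size blocks reduces the attention
    cost from O(n^2) to O(n * block_size) -- the reason block-based
    attention is attractive for long documents.
    """
    n, d = x.shape
    assert n % block_size == 0, "pad the sequence to a multiple of block_size"
    out = np.empty_like(x)
    for start in range(0, n, block_size):
        block = x[start:start + block_size]        # (block_size, d)
        scores = block @ block.T / np.sqrt(d)      # within-block scores only
        out[start:start + block_size] = softmax(scores) @ block
    return out

# toy example: 8 token vectors of width 4, attended in blocks of 4
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = block_self_attention(x, block_size=4)
```

Because each block attends only to itself, perturbing tokens in one block leaves the outputs of every other block unchanged, which is what makes the computation embarrassingly parallel across blocks.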

    Technology For Improving Early Reading In Multilingual Settings: Evidence From Rural South Africa

    In September 2015, the United Nations ratified 17 Sustainable Development Goals (SDGs), including a central goal to improve the quality of learning and attain universal literacy. As part of this effort, the UN and other funding agencies see technology as a major enabling tool for achieving the SDGs. However, little evidence exists concerning major claims about the success of particular interventions, especially in developing countries. An additional barrier to achieving the SDGs for education is the limited understanding of how learning occurs in ways that promote successful transfer of reading skills in linguistically diverse settings. This research investigates the impact of a computer-based early grade reading intervention for improving literacy outcomes in rural South Africa. Results show that learners in intervention schools performed significantly better on mother tongue reading fluency measures, as well as comprehension. Further, this study identified a pair of measures by which mother tongue decoding skills significantly improved the ability to predict transfer of skills to English. The findings indicate that teaching literacy through guided and contextualized digital material can support development of early reading skills. However, more research is needed to enhance sustainability of the treatment effect over time. The results further demonstrate the importance of establishing baseline reading skills in a mother tongue language for improving transfer of literacy skills to English.

    Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks

    An important goal of computer vision is to build systems that learn visual representations over time that can be applied to many tasks. In this paper, we investigate a vision-language embedding as a core representation and show that it leads to better cross-task transfer than standard multi-task learning. In particular, the task of visual recognition is aligned to the task of visual question answering by forcing each to use the same word-region embeddings. We show this leads to greater inductive transfer from recognition to VQA than standard multi-task learning. Visual recognition also improves, especially for categories that have relatively few recognition training labels but appear often in the VQA setting. Thus, our paper takes a small step towards creating more general vision systems by showing the benefit of interpretable, flexible, and trainable core representations. Comment: Accepted in ICCV 2017. The arxiv version has an extra analysis on correlation with human attention.
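The core idea of sharing word-region embeddings across tasks can be sketched as projecting image-region features and word embeddings into one joint space, so that both recognition and VQA score the same region-word similarities. The snippet below is a hedged illustration under assumed dimensions and randomly initialized projections (`W_img`, `W_txt` stand in for learned parameters); it is not the paper's architecture.

```python
import numpy as np

def normalize(v, axis=-1):
    # unit-normalize vectors so dot products become cosine similarities
    return v / np.linalg.norm(v, axis=axis, keepdims=True)

rng = np.random.default_rng(1)
d_img, d_txt, d_joint = 6, 5, 4          # assumed feature sizes

# projections into the shared word-region space (learned in practice)
W_img = rng.normal(size=(d_img, d_joint))
W_txt = rng.normal(size=(d_txt, d_joint))

regions = rng.normal(size=(3, d_img))    # e.g. 3 detected image regions
words = rng.normal(size=(7, d_txt))      # e.g. 7 vocabulary words

# every region and every word land in the same space, so one similarity
# matrix serves both recognition and VQA
r = normalize(regions @ W_img)
w = normalize(words @ W_txt)
sims = r @ w.T                            # (3, 7) region-word alignment scores
```

Because both tasks read off the same `sims`-style alignments, gradients from the label-rich task shape the embedding that the label-poor task reuses, which is the inductive-transfer effect the abstract describes.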