
    Learning Structured Text Representations

    In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias, we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluation across different tasks and datasets shows that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.
    Comment: change to one-based indexing, published in Transactions of the Association for Computational Linguistics (TACL), https://transacl.org/ojs/index.php/tacl/article/view/1185/28
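    The key ingredient described above is a differentiable non-projective parsing step: sentence-level scores are converted into marginal probabilities over dependency trees, which can then act as attention weights. One standard way to make such marginals differentiable is the matrix-tree theorem (Koo et al., 2007). The following is a minimal sketch of that idea under those assumptions, with hypothetical score tensors and function names; it is not the authors' released code.

    import torch

    def tree_attention(edge_scores, root_scores):
        # edge_scores[h, m]: score for head sentence h governing modifier m, shape (n, n)
        # root_scores[m]:    score for sentence m being the tree root, shape (n,)
        n = edge_scores.size(0)
        A = torch.exp(edge_scores) * (1.0 - torch.eye(n))   # edge weights, self-loops removed
        r = torch.exp(root_scores)
        L = torch.diag(A.sum(dim=0)) - A                     # graph Laplacian
        L_hat = torch.cat([r.unsqueeze(0), L[1:]], dim=0)    # first row <- root weights (Koo et al., 2007)
        log_Z = torch.logdet(L_hat)                          # log-partition over non-projective trees
        # Exponential-family identity: d log Z / d score = marginal probability of that edge
        edge_marg, root_marg = torch.autograd.grad(
            log_Z, [edge_scores, root_scores], create_graph=True)
        return edge_marg, root_marg

    # Toy check: marginals over incoming heads (plus the root) sum to 1 for every sentence
    n = 4
    edge_scores = torch.randn(n, n, requires_grad=True)
    root_scores = torch.randn(n, requires_grad=True)
    edge_marg, root_marg = tree_attention(edge_scores, root_scores)
    print(edge_marg.sum(dim=0) + root_marg)                  # approximately a vector of ones

    Because the marginals are obtained by differentiating the log-partition function, gradients flow through them, so they can weight a downstream attention pooling and be trained end to end.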

    Mapping Instructions and Visual Observations to Actions with Reinforcement Learning

    We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
    Comment: In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
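    In the contextual bandit setting described above, the learning signal is single-step: the agent samples an action given the current instruction and observation, receives an immediate (shaped) reward, and is updated from that reward alone, with no bootstrapping over future steps. A minimal sketch of such a contextual-bandit policy-gradient update, using a hypothetical policy network and a placeholder reward function rather than the paper's actual model, might look as follows.

    import torch
    import torch.nn as nn

    class Policy(nn.Module):
        # Hypothetical policy over a fixed action set, conditioned on an encoded context
        def __init__(self, ctx_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(ctx_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_actions))
        def forward(self, ctx):
            return torch.distributions.Categorical(logits=self.net(ctx))

    def bandit_update(policy, optimizer, ctx, reward_fn):
        dist = policy(ctx)
        action = dist.sample()
        r = reward_fn(action)                       # shaped reward: task reward + shaping terms
        loss = -(r * dist.log_prob(action)).mean()  # REINFORCE on the immediate reward only
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return action, r

    # Toy usage: the context stands in for an encoded (instruction, observation) pair,
    # and the reward function is a placeholder with a constant shaping bonus.
    policy = Policy(ctx_dim=32, n_actions=5)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    ctx = torch.randn(8, 32)
    reward_fn = lambda a: (a == 0).float() + 0.1
    bandit_update(policy, opt, ctx, reward_fn)

    The choice of shaping terms is where the different forms of supervision mentioned in the abstract would enter; the constant bonus above is only a stand-in.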

    Master of Science

    Representations in the form of concept maps have been shown to benefit learners. However, previous research examined the influence of these representations on learning in well-structured environments. Additionally, increasing the activity of students in learning environments has been shown to yield gains in learning, known as the generation effect. The current study extends the literature by examining the influence that generative activities and concept map representations have on an ill-structured reasoning process, namely thinking like a lawyer. Pre- and posttests targeting factual knowledge, recall, and transfer were used to assess learning, while verbal protocols were used to examine the learning processes of participants. Results were mixed. Representation and activity had no effect on factual knowledge, recall, or near-transfer measures. Verbal protocol results showed that students in the concept map condition who studied a static representation produced a higher proportion of deep utterances during problem solving than those who generated their own representation. The opposite was true for students in the text list condition: those who generated their text list representation during study produced a higher proportion of deep utterances during problem solving than those who studied a static list. Thus, careful consideration of topical materials and learning environments is necessary to determine whether concept maps and generation effects will encourage deeper comprehension in learners.