
    Reinforcement Learning based NLP

    In the field of Natural Language Processing (NLP), reinforcement learning (RL) has drawn attention as a viable method for training models. In RL-based NLP, an agent is trained to interact with a linguistic environment in order to carry out a given task, learning from feedback in the form of rewards or penalties. This method has been applied effectively to a variety of language problems, including text summarization, dialogue systems, and machine translation. Two common methods in RL-based NLP are sequence-to-sequence reinforcement learning and deep reinforcement learning: sequence-to-sequence RL trains a model to generate a sequence of words or characters that most closely matches a target sequence, while deep RL trains a neural network to discover the optimal policy for a language task. RL-based NLP has demonstrated promising results and attained state-of-the-art performance on several language challenges. Open issues remain, such as the need for more effective exploration strategies, data scarcity, and sample efficiency. In summary, RL-based NLP represents a promising line of inquiry for future NLP research. The approach outperforms more established NLP strategies on a variety of language problems and has the added benefit of being able to improve over time with user feedback. To further enhance the effectiveness of RL-based NLP and broaden its applicability to real-world settings, future research should concentrate on resolving the difficulties associated with the approach.
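
    The sequence-to-sequence RL idea described above can be made concrete with a small REINFORCE-style training loop: sample a sequence from a policy network, score it against a target sequence, and scale the sequence log-likelihood by that reward. The sketch below is illustrative only; the toy vocabulary, the PolicyNet module, and the overlap-based reward are assumptions for this example, not taken from the paper.

```python
# Minimal REINFORCE sketch for sequence generation (illustrative only).
# PolicyNet, VOCAB_SIZE, and the overlap reward are hypothetical choices.
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN, MAX_LEN = 100, 64, 10

class PolicyNet(nn.Module):
    """LSTM policy that emits one token distribution per step."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.lstm = nn.LSTMCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, token, state):
        h, c = self.lstm(self.embed(token), state)
        return self.out(h), (h, c)

def reward(generated, target):
    # Toy reward: fraction of positions that match the target sequence.
    return sum(g == t for g, t in zip(generated, target)) / len(target)

policy = PolicyNet()
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
target = [3, 17, 42, 8, 25, 3, 17, 42, 8, 25]  # hypothetical goal sequence

for step in range(100):
    token = torch.zeros(1, dtype=torch.long)  # BOS token = 0
    state = (torch.zeros(1, HIDDEN), torch.zeros(1, HIDDEN))
    log_probs, generated = [], []
    for _ in range(MAX_LEN):
        logits, state = policy(token, state)
        dist = torch.distributions.Categorical(logits=logits)
        token = dist.sample()
        log_probs.append(dist.log_prob(token))
        generated.append(token.item())
    # REINFORCE: scale the sequence log-likelihood by the scalar reward.
    loss = -reward(generated, target) * torch.stack(log_probs).sum()
    optim.zero_grad()
    loss.backward()
    optim.step()
```

    The same loop structure carries over to real tasks by swapping the toy reward for a task metric (e.g., ROUGE for summarization or BLEU for translation), which is precisely what makes RL attractive for optimizing non-differentiable objectives.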

    Grounding Language for Transfer in Deep Reinforcement Learning

    In this paper, we explore the utilization of natural language to drive transfer for reinforcement learning (RL). Despite the widespread application of deep RL techniques, learning generalized policy representations that work across domains remains a challenging problem. We demonstrate that textual descriptions of environments provide a compact intermediate channel to facilitate effective policy transfer. Specifically, by learning to ground the meaning of text to the dynamics of the environment, such as transitions and rewards, an autonomous agent can effectively bootstrap policy learning on a new domain given its description. We employ a model-based RL approach consisting of a differentiable planning module, a model-free component, and a factorized state representation to effectively use entity descriptions. Our model outperforms prior work in both transfer and multi-task scenarios across a variety of environments. For instance, we achieve up to 14% and 11.5% absolute improvement over previously existing models in terms of average and initial rewards, respectively.
    Comment: JAIR 2018
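
    As a rough illustration of what "grounding text to dynamics" can mean in practice, the sketch below conditions a transition-and-reward predictor on an embedded entity description, so that a new domain's description alone can seed dynamics estimates before any interaction. All module names and dimensions here are hypothetical; the paper's actual architecture (a differentiable planner plus a model-free component) is considerably more involved.

```python
# Illustrative sketch: predict reward and next-state features from the
# current state plus an embedded textual description of an entity.
# GroundedDynamics and all sizes are assumptions, not the paper's code.
import torch
import torch.nn as nn

class GroundedDynamics(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=32, state_dim=16):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, text_dim)  # bag-of-words text encoder
        self.trunk = nn.Linear(state_dim + text_dim, 64)
        self.next_state = nn.Linear(64, state_dim)  # transition head
        self.reward = nn.Linear(64, 1)              # reward head

    def forward(self, state, description_tokens):
        text = self.embed(description_tokens)  # ground the description
        h = torch.relu(self.trunk(torch.cat([state, text], dim=-1)))
        return self.next_state(h), self.reward(h)

# On a new domain, the agent can bootstrap by predicting dynamics from
# the (previously unseen) entity descriptions.
model = GroundedDynamics()
state = torch.randn(1, 16)
tokens = torch.tensor([[5, 42, 7]])  # hypothetical tokenized description
pred_next, pred_reward = model(state, tokens)
```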

    Towards Solving Text-based Games by Producing Adaptive Action Spaces

    To solve a text-based game, an agent needs to formulate valid text commands for a given context and find the ones that lead to success. Recent attempts at solving text-based games with deep reinforcement learning have focused on the latter, i.e., learning to act optimally when valid actions are known in advance. In this work, we propose to tackle the first task and train a model that generates the set of all valid commands for a given context. We evaluate three generative models on a dataset generated with TextWorld. The best model can generate valid commands that were unseen at training time and achieves a high F1 score on the test set.
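
    Since the model produces a set of commands per context, the natural evaluation is set-level F1 against the gold set of valid commands, as the abstract's metric suggests. Below is a minimal sketch of that computation; the command strings are hypothetical examples, not drawn from the TextWorld dataset.

```python
# Sketch of set-level evaluation: compare generated commands against the
# gold set of valid commands and compute F1. Example strings are made up.
def command_f1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                          # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {"open door", "take key", "go north"}
predicted = {"open door", "take key", "eat key"}  # one spurious command
print(command_f1(predicted, gold))                # ~0.667
```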