4 research outputs found

    Stable deep reinforcement learning method by predicting uncertainty in rewards as a subtask

    In recent years, a variety of tasks have been accomplished by deep reinforcement learning (DRL). However, when applying DRL to tasks in a real-world environment, designing an appropriate reward is difficult. Rewards obtained via actual hardware sensors may include noise, misinterpretation, or failed observations. The learning instability caused by these unstable signals is a problem that remains to be solved in DRL. In this work, we propose an approach that extends existing DRL models by adding a subtask that directly estimates the variance contained in the reward signal. The model then takes the feature map learned by the subtask in the critic network and feeds it to the actor network. This enables stable learning that is robust to the effects of potential noise. The results of experiments in the Atari game domain with unstable reward signals show that our method stabilizes training convergence. We also discuss the extensibility of the model by visualizing feature maps. This approach has the potential to make DRL more practical for use in noisy, real-world scenarios.
    Comment: Published as a conference paper at ICONIP 202
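
    A minimal sketch of the architecture this abstract describes, assuming a PyTorch actor-critic setup: the critic branch carries an auxiliary head that predicts the variance of the reward signal, and the feature map learned by that subtask is also fed into the actor. The class and layer names (`VarianceAwareActorCritic`, `reward_var_head`) and the layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class VarianceAwareActorCritic(nn.Module):
    """Actor-critic with a reward-variance subtask on the critic branch (illustrative)."""

    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        # Shared observation encoder
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Critic branch: features used by both the value head and the variance subtask
        self.critic_feat = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.value_head = nn.Linear(hidden, 1)
        self.reward_var_head = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())
        # Actor consumes the encoder output plus the subtask's feature map
        self.actor = nn.Sequential(
            nn.Linear(hidden * 2, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs):
        z = self.encoder(obs)
        feat = self.critic_feat(z)               # feature map learned alongside the subtask
        value = self.value_head(feat)            # state-value estimate
        reward_var = self.reward_var_head(feat)  # predicted variance of the reward signal
        logits = self.actor(torch.cat([z, feat], dim=-1))
        return logits, value, reward_var
```

    In training, the variance head would be fit with an auxiliary loss (for example, a Gaussian negative log-likelihood of observed rewards) alongside the usual actor and critic losses.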

    Efficient reinforcement learning through variance reduction and trajectory synthesis

    Reinforcement learning is a general and unified framework that has proven promising for many important AI applications, such as robotics and self-driving vehicles. However, current reinforcement learning algorithms suffer from large variance and sampling inefficiency, which lead to slow convergence and unstable performance. In this thesis, we alleviate both problems. To reduce the large variance, we combine variance-reduced optimization with deep Q-learning. To improve sampling efficiency, we propose a novel framework that integrates self-imitation learning with a trajectory synthesis procedure. Our approaches, which are flexible and can be extended to many tasks, prove their effectiveness through experiments in Atari and MuJoCo environments.
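
    As one way to read "variance-reduced optimization with deep Q-learning", the sketch below applies a generic SVRG-style gradient correction to a Q-learning minibatch loss; the thesis's actual algorithm may differ, and the helper names and the `loss_fn`/`full_grad` conventions here are assumptions.

```python
def svrg_corrected_grads(q_net, snapshot_net, full_grad, batch, loss_fn):
    """Sketch of an SVRG-style variance-reduced gradient for deep Q-learning.

    `loss_fn(net, batch)` is assumed to return the scalar TD loss for `net` on
    `batch`; `full_grad` is a list of tensors (one per parameter of `q_net`)
    holding the reference gradient computed with `snapshot_net` over a larger
    reference buffer.
    """
    # Minibatch gradient at the current parameters
    q_net.zero_grad()
    loss_fn(q_net, batch).backward()
    g_cur = [p.grad.detach().clone() for p in q_net.parameters()]

    # Minibatch gradient at the (periodically refreshed) snapshot parameters
    snapshot_net.zero_grad()
    loss_fn(snapshot_net, batch).backward()
    g_snap = [p.grad.detach().clone() for p in snapshot_net.parameters()]

    # SVRG correction: current gradient minus snapshot gradient plus reference gradient
    return [gc - gs + gf for gc, gs, gf in zip(g_cur, g_snap, full_grad)]
```

    In use, `snapshot_net` would be refreshed with `copy.deepcopy(q_net)` every few thousand updates, `full_grad` recomputed over the reference buffer at the same time, and the corrected gradients written back to each parameter's `.grad` before an optimizer step.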