Bayesian learning for policy search in trajectory control of a planar manipulator

Abstract

Applying learning algorithms to robotics and control problems with highly nonlinear dynamics, in order to obtain a plausible control policy in a continuous state space, is expected to greatly facilitate the design process. Policy search methods in Reinforcement Learning (RL), such as policy gradient, have recently succeeded in coping with such complex systems. Nevertheless, they converge slowly and are prone to getting stuck in local optima. To alleviate this, a Bayesian inference method based on Markov Chain Monte Carlo (MCMC), utilizing a multiplicative reward function, is proposed. This study compares eNAC, a popular gradient-based RL method, with the proposed Bayesian learning method on the trajectory control of a complex model of a 2-DOF planar manipulator. The results obtained for convergence speed and time-response performance illustrate that the proposed MCMC algorithm is well suited to complex problems in robotics.
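To give a flavor of the approach described above, the following is a minimal, illustrative sketch of MCMC-based policy search: a Metropolis–Hastings sampler over policy parameters in which the (positive) reward is treated as an unnormalized target density, so high-reward policies are visited more often. The toy quadratic reward, the parameter dimension, and the proposal step size are assumptions for illustration only and are not taken from the paper, which applies its method to the trajectory-tracking return of a 2-DOF planar manipulator.

```python
import numpy as np

rng = np.random.default_rng(0)


def reward(theta):
    # Toy surrogate reward, peaked at theta = [1.0, -0.5]; a stand-in
    # for the trajectory-tracking return of the manipulator (assumed).
    target = np.array([1.0, -0.5])
    return np.exp(-np.sum((theta - target) ** 2))


def mcmc_policy_search(n_iters=2000, step=0.2):
    """Metropolis-Hastings over policy parameters theta.

    The positive reward is treated as an unnormalized target density,
    so the chain concentrates on high-reward policies. This is an
    illustrative sketch, not the paper's exact algorithm.
    """
    theta = np.zeros(2)                 # initial policy parameters
    r = reward(theta)
    best_theta, best_r = theta.copy(), r
    for _ in range(n_iters):
        # Symmetric Gaussian random-walk proposal.
        proposal = theta + step * rng.standard_normal(theta.shape)
        r_prop = reward(proposal)
        # Accept with probability min(1, r_prop / r).
        if rng.random() < r_prop / r:
            theta, r = proposal, r_prop
            if r > best_r:
                best_theta, best_r = theta.copy(), r
    return best_theta, best_r
```

Because acceptance depends only on the reward ratio, the sampler needs no gradient of the return, which is one reason such methods can avoid the local optima that trap gradient-based algorithms like eNAC.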

