    Scaled gradient descent learning rate - Reinforcement learning with light-seeking robot

    Abstract: Adaptive behaviour through machine learning is challenging in many real-world applications such as robotics. This is because learning has to be rapid enough to be performed in real time and to avoid damage to the robot. Models using linear function approximation are interesting in such tasks because they offer rapid learning and have small memory and processing requirements. Adalines are a simple model for gradient descent learning with linear function approximation. However, the performance of gradient descent learning, even with a linear model, depends greatly on identifying a good value for the learning rate. In this paper it is shown that the learning rate should be scaled as a function of the current input values. A scaled learning rate makes it possible to avoid weight oscillations without slowing down learning. The advantages of using the scaled learning rate are illustrated using a robot that learns to navigate towards a light source. This light-seeking robot performs a Reinforcement Learning task, where the robot collects training samples by exploring the environment, i.e. taking actions and learning from their results by a trial-and-error procedure.
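    The idea of scaling the learning rate by the current input can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes the common normalized-LMS-style rule in which the base step size is divided by the squared norm of the input vector, so that large inputs do not cause weight oscillations while small inputs still produce useful updates. The function name and parameters (`eta`, `eps`) are illustrative.

    ```python
    import numpy as np

    def adaline_update(w, x, target, eta=0.5, eps=1e-8):
        """One Adaline gradient-descent step with an input-scaled learning rate.

        w      -- current weight vector
        x      -- current input vector
        target -- desired output for this input
        eta    -- base learning rate (illustrative default)
        eps    -- small constant to avoid division by zero
        """
        y = w @ x                        # linear prediction
        error = target - y               # prediction error
        eta_scaled = eta / (eps + x @ x) # scale the step by the squared input norm
        return w + eta_scaled * error * x

    # With eta = 1, a single scaled update drives the error on the
    # current sample (almost) to zero, regardless of the input's magnitude.
    w = np.zeros(2)
    x = np.array([1.0, 2.0])
    w = adaline_update(w, x, target=3.0, eta=1.0)
    ```

    With a fixed (unscaled) learning rate, the same base step that is stable for small inputs can overshoot and oscillate when the input values are large; dividing by the squared input norm makes the effective step size self-adjusting.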