Toward Real-Time Decentralized Reinforcement Learning using Finite Support Basis Functions
This paper addresses the design and implementation of complex Reinforcement
Learning (RL) behaviors where multi-dimensional action spaces are involved, as
well as the need to execute the behaviors in real-time using robotic platforms
with limited computational resources and training times. For this purpose, we
propose the use of decentralized RL in combination with finite-support basis
functions as alternatives to Gaussian RBFs, in order to alleviate the effects
of the curse of dimensionality in the action and state spaces, respectively, and to
reduce the computation time. As a testbed, an RL-based controller for the
in-walk kick in NAO robots, a challenging and critical problem for soccer
robotics, is used. The reported experiments show empirically that our solution
saves up to 99.94% of execution time and 98.82% of memory consumption during
execution, without diminishing performance compared to classical approaches.

Comment: Accepted at the RoboCup Symposium 2017. The final version will be
published by Springer.
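The abstract's efficiency argument rests on replacing Gaussian RBFs with basis functions of finite support. A minimal sketch of why that helps, assuming a raised-cosine kernel as a hypothetical stand-in for the paper's basis (the abstract does not name the exact functions): a Gaussian is strictly positive everywhere, so every basis function contributes to every evaluation, while a compactly supported basis is exactly zero away from its center, so only a handful of nearby centers ever need to be evaluated.

```python
import math

def gaussian_rbf(x, centers, width):
    # Gaussian RBF: strictly positive for every center, so all N basis
    # functions contribute (however negligibly) to every evaluation.
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def raised_cosine(x, centers, width):
    # Hypothetical finite-support basis: exactly zero outside |x - c| < width,
    # so only the few centers near x produce nonzero activations.
    out = []
    for c in centers:
        d = abs(x - c)
        out.append(0.5 * (1 + math.cos(math.pi * d / width)) if d < width else 0.0)
    return out

centers = [i / 49 for i in range(50)]   # 50 evenly spaced centers on [0, 1]
g = gaussian_rbf(0.37, centers, 0.02)
r = raised_cosine(0.37, centers, 0.05)

print(sum(v != 0.0 for v in g))  # 50: every Gaussian fires
print(sum(v != 0.0 for v in r))  # 5: only centers inside the support fire
```

With finite support, the nonzero activations can be found by indexing into the grid of centers, so per-step cost and the number of weights touched scale with the support width rather than with the total number of basis functions, which is consistent with the execution-time and memory savings the abstract reports.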
Neural Dynamic Movement Primitives -- a survey
One of the most important challenges in robotics is producing accurate
trajectories and controlling their dynamic parameters so that the robots can
perform different tasks. The ability to provide such motion control is closely
related to how such movements are encoded. Advances in deep learning have had a
strong repercussion on the development of novel approaches to Dynamic Movement
Primitives. In this work, we survey the scientific literature on Neural Dynamic
Movement Primitives, complementing existing surveys on Dynamic Movement
Primitives.
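For context on what the surveyed neural variants extend, a minimal sketch of the classical DMP transformation system (Ijspeert-style spring-damper plus a learned forcing term; the specific neural formulations covered by the survey are not given in the abstract). Here the forcing term is left at zero, so the rollout reduces to a critically damped attractor that converges to the goal; in the neural approaches, that term is produced by a network:

```python
import math

def rollout_dmp(y0, g, tau=1.0, alpha_z=25.0, beta_z=6.25, dt=0.001, steps=2000):
    # Classical DMP transformation system:
    #   tau * dz/dt = alpha_z * (beta_z * (g - y) - z) + f(x)
    #   tau * dy/dt = z
    # with a canonical phase x decaying from 1 toward 0.
    y, z = y0, 0.0
    x = 1.0
    alpha_x = 4.0
    for _ in range(steps):
        f = 0.0  # forcing term; learned (e.g., by a neural network) in practice
        z += dt * (alpha_z * (beta_z * (g - y) - z) + f) / tau
        y += dt * z / tau
        x += dt * (-alpha_x * x) / tau
    return y

print(round(rollout_dmp(0.0, 1.0), 3))  # converges to the goal g = 1.0
```

With beta_z = alpha_z / 4 the unforced system is critically damped, which is the usual choice: the trajectory reaches the goal without overshoot, and the learned forcing term shapes the path taken along the way.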