Fixed-Horizon Temporal Difference Methods for Stable Reinforcement Learning
We explore fixed-horizon temporal difference (TD) methods, reinforcement
learning algorithms for a new kind of value function that predicts the sum of
rewards over a fixed number of future time steps. To learn the value
function for horizon h, these algorithms bootstrap from the value function
for horizon h-1, or some shorter horizon. Because no value function
bootstraps from itself, fixed-horizon methods are immune to the stability
problems that plague other off-policy TD methods using function approximation
(also known as "the deadly triad"). Although fixed-horizon methods require the
storage of additional value functions, this gives the agent additional
predictive power, while the added complexity can be substantially reduced via
parallel updates, shared weights, and n-step bootstrapping. We show how to
use fixed-horizon value functions to solve reinforcement learning problems
competitively with methods such as Q-learning that learn conventional value
functions. We also prove convergence of fixed-horizon temporal difference
methods with linear and general function approximation. Taken together, our
results establish fixed-horizon TD methods as a viable new way of avoiding the
stability problems of the deadly triad.
Comment: AAAI 2020
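The bootstrapping structure described above is compact enough to sketch in a few lines. Below is a minimal tabular fixed-horizon TD(0) sketch on a small random-walk task; the horizon count H, the step size, the episode count, and the environment itself are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of fixed-horizon TD(0): V[h][s] estimates the expected sum
# of the next h rewards from state s, and each V[h] bootstraps only from
# V[h-1], so no value function ever bootstraps from itself.
import numpy as np

N_STATES, H, ALPHA, EPISODES = 7, 5, 0.1, 500  # illustrative constants
rng = np.random.default_rng(0)

V = np.zeros((H + 1, N_STATES))  # V[0] is identically zero by definition

for _ in range(EPISODES):
    s = N_STATES // 2                       # start mid-chain; ends are terminal
    done = False
    while not done:
        s_next = s + rng.choice([-1, 1])    # uniform random policy
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        done = s_next in (0, N_STATES - 1)
        for h in range(1, H + 1):
            # Fixed-horizon TD target: one reward plus the (h-1)-horizon
            # value of the successor state (zero once the episode ends).
            target = r + (0.0 if done else V[h - 1][s_next])
            V[h][s] += ALPHA * (target - V[h][s])
        s = s_next

print(V[H])  # H-step value estimates for each state
```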
A vision-guided parallel parking system for a mobile robot using approximate policy iteration
Reinforcement Learning (RL) methods enable autonomous robots to learn skills from scratch by interacting with the environment. However, the learning process can be very time-consuming. This paper focuses on accelerating reinforcement learning on a mobile robot in an unknown environment. The presented algorithm is based on approximate policy iteration with a continuous state space and a fixed number of actions. The action-value function is represented by a weighted combination of basis functions.
Furthermore, a complexity analysis is provided to show that the implemented approach is guaranteed to converge to an optimal policy in less computational time.
A parallel parking task is selected for testing purposes. In the experiments, the efficiency of the proposed approach is demonstrated and analyzed through a set of simulated and real robot experiments, with comparisons drawn against two well-known algorithms (Dyna-Q and Q-learning).
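The representation described here, a linear action-value function over basis functions with a fixed discrete action set, improved by approximate policy iteration, can be sketched as follows. The Gaussian radial-basis features, the toy 1-D dynamics, and the LSTDQ-style least-squares evaluation step are assumptions chosen for illustration, not the authors' exact design.

```python
# Sketch of approximate policy iteration with a linear Q-function:
# Q(s, a) = w . phi(s, a), with Gaussian RBF features per discrete action.
import numpy as np

N_ACTIONS, N_CENTERS, GAMMA = 3, 10, 0.95   # illustrative constants
centers = np.linspace(-1.0, 1.0, N_CENTERS)  # RBF centers over a 1-D state
K = N_ACTIONS * N_CENTERS
rng = np.random.default_rng(0)

def phi(s, a):
    """Basis vector: Gaussian RBFs over the state, block-indexed by action."""
    feats = np.exp(-((s - centers) ** 2) / 0.1)
    out = np.zeros(K)
    out[a * N_CENTERS:(a + 1) * N_CENTERS] = feats
    return out

def greedy(w, s):
    """Greedy policy improvement over the fixed, discrete action set."""
    return max(range(N_ACTIONS), key=lambda a: w @ phi(s, a))

def lstdq(samples, w, gamma=GAMMA):
    """Policy evaluation (LSTDQ-style): solve A w' = b for the weights of
    the action-value function of the current greedy policy."""
    A = 1e-6 * np.eye(K)          # small ridge term keeps A invertible
    b = np.zeros(K)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, greedy(w, s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

# Toy stand-in dynamics for the robot: actions nudge the state left, not at
# all, or right; reward peaks at the origin. Purely illustrative.
def step(s, a):
    s_next = np.clip(s + 0.1 * (a - 1) + 0.01 * rng.standard_normal(), -1, 1)
    return s_next, float(-abs(s_next))

samples, s = [], rng.uniform(-1, 1)
for _ in range(2000):
    a = int(rng.integers(N_ACTIONS))        # exploratory data collection
    s_next, r = step(s, a)
    samples.append((s, a, r, s_next))
    s = s_next

w = np.zeros(K)
for _ in range(10):                          # alternate evaluation/improvement
    w = lstdq(samples, w)
print(greedy(w, 0.5))                        # greedy action steers toward 0
```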