Learning to Race through Coordinate Descent Bayesian Optimisation
In the automation of many kinds of processes, the observable outcome can
often be described as the combined effect of an entire sequence of actions, or
controls, applied throughout its execution. In these cases, strategies to
optimise control policies for individual stages of the process might not be
applicable, and instead the whole policy might have to be optimised at once. On
the other hand, the cost to evaluate the policy's performance might also be
high, being desirable that a solution can be found with as few interactions as
possible with the real system. We consider the problem of optimising control
policies to allow a robot to complete a given race track within a minimum
amount of time. We assume that the robot has no prior information about the
track or its own dynamical model beyond an initial valid driving example.
Localisation is used only to monitor the robot and to indicate its position
along the track's centre axis. We propose a method for finding
a policy that minimises the time per lap while keeping the vehicle on the track
using a Bayesian optimisation (BO) approach over a reproducing kernel Hilbert
space. We apply an algorithm that searches high-dimensional policy-parameter
spaces more efficiently with BO by iterating over each dimension individually,
in a sequential, coordinate-descent-like scheme. Experiments demonstrate the
performance of the algorithm against other methods in a simulated car racing
environment.
Comment: Accepted as a conference paper for the 2018 IEEE International
Conference on Robotics and Automation (ICRA).
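To illustrate the coordinate-descent scheme the abstract describes, here is a
minimal sketch in Python: at each round a one-dimensional BO subproblem is
solved for a single policy parameter while the others stay fixed. It assumes a
generic Gaussian-process surrogate with expected improvement; the names
`objective`, `coordinate_descent_bo`, and `expected_improvement` are
illustrative placeholders, not the authors' code, and the paper's RKHS policy
representation is abstracted away behind the parameter vector.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best):
    # EI for minimisation: expected reduction below the incumbent best.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def coordinate_descent_bo(objective, x0, bounds, sweeps=5,
                          evals_per_dim=10, candidates=200):
    # objective: maps a full policy-parameter vector to a cost (e.g. lap time).
    # bounds: list of (low, high) pairs, one per policy parameter.
    x, best = np.array(x0, dtype=float), objective(x0)
    for _ in range(sweeps):
        for d in range(len(x)):
            # Solve a 1-D BO subproblem over dimension d, others held fixed.
            X, y = [], []
            for _ in range(evals_per_dim):
                if X:
                    gp = GaussianProcessRegressor(kernel=RBF()).fit(
                        np.array(X).reshape(-1, 1), y)
                    grid = np.linspace(*bounds[d], candidates).reshape(-1, 1)
                    mu, sigma = gp.predict(grid, return_std=True)
                    x_d = grid[np.argmax(expected_improvement(mu, sigma, min(y))), 0]
                else:
                    x_d = np.random.uniform(*bounds[d])
                trial = x.copy()
                trial[d] = x_d
                f = objective(trial)
                X.append(x_d)
                y.append(f)
                if f < best:
                    best, x = f, trial
    return x, best

The appeal of the coordinate-wise decomposition is that each inner BO problem
is one-dimensional, so the surrogate and acquisition optimisation stay cheap
even when the full policy has many parameters.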
Fingerprint Policy Optimisation for Robust Reinforcement Learning
Policy gradient methods ignore the potential value of adjusting environment
variables: unobservable state features that are randomly determined by the
environment in a physical setting, but are controllable in a simulator. This
can lead to slow learning, or convergence to suboptimal policies, if the
environment variable has a large impact on the transition dynamics. In this
paper, we present fingerprint policy optimisation (FPO), which finds a policy
that is optimal in expectation across the distribution of environment
variables. The central idea is to use Bayesian optimisation (BO) to actively
select the distribution of the environment variable that maximises the
improvement generated by each iteration of the policy gradient method. To make
this BO practical, we contribute two easy-to-compute low-dimensional
fingerprints of the current policy. Our experiments show that FPO can
efficiently learn policies that are robust to significant rare events, which
are unlikely to be observable under random sampling, but are key to learning
good policies.
Comment: ICML 2018.
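The following is a minimal sketch of the outer loop the abstract describes: BO
proposes the environment-variable distribution parameter psi expected to yield
the largest one-step improvement, conditioning on a low-dimensional fingerprint
of the current policy. The callables `grad_step`, `fingerprint`, and `evaluate`
are hypothetical stand-ins for the paper's components, not its actual API, and
a simple expected-improvement acquisition is assumed.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fpo(policy, psi_bounds, grad_step, fingerprint, evaluate,
        iterations=50, candidates=100):
    # grad_step(policy, psi): one policy-gradient update with environment
    #   variables drawn from the distribution parameterised by psi.
    # fingerprint(policy): low-dimensional summary of the current policy.
    # evaluate(policy): performance under the true environment distribution.
    X, y = [], []                     # BO data: (fingerprint, psi) -> improvement
    perf = evaluate(policy)
    for _ in range(iterations):
        fp = np.asarray(fingerprint(policy), dtype=float)
        if X:
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(
                np.array(X), y)
            cand = np.random.uniform(*psi_bounds, size=(candidates, 1))
            inputs = np.hstack([np.tile(fp, (candidates, 1)), cand])
            mu, sigma = gp.predict(inputs, return_std=True)
            sigma = np.maximum(sigma, 1e-9)
            z = (mu - max(y)) / sigma      # EI for maximising the improvement
            ei = (mu - max(y)) * norm.cdf(z) + sigma * norm.pdf(z)
            psi = cand[np.argmax(ei), 0]
        else:
            psi = np.random.uniform(*psi_bounds)
        policy = grad_step(policy, psi)
        new_perf = evaluate(policy)
        X.append(np.append(fp, psi))  # condition BO on the policy fingerprint
        y.append(new_perf - perf)
        perf = new_perf
    return policy

Conditioning the surrogate on the policy fingerprint is what keeps the BO
input low-dimensional: the fingerprint stands in for the full (typically very
high-dimensional) policy parameters when predicting how much a given psi will
help the next gradient step.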