Reward Shaping with Recurrent Neural Networks for Speeding up On-Line Policy Learning in Spoken Dialogue Systems
Statistical spoken dialogue systems have the attractive property of being
able to be optimised from data via interactions with real users. However, in the
reinforcement learning paradigm the dialogue manager (agent) often requires
significant time to explore the state-action space to learn to behave in a
desirable manner. This is a critical issue when the system is trained on-line
with real users, where learning is costly. Reward shaping is one
promising technique for addressing these concerns. Here we examine three
recurrent neural network (RNN) approaches for providing reward shaping
information in addition to the primary (task-orientated) environmental
feedback. These RNNs are trained on returns from dialogues generated by a
simulated user and attempt to diffuse the overall evaluation of the dialogue
back down to the turn level to guide the agent towards good behaviour faster.
In both simulated and real user scenarios these RNNs are shown to increase
policy learning speed. Importantly, they do not require prior knowledge of the
user's goal.
Comment: Accepted for publication in SigDial 2015
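The recipe this abstract describes is compact enough to sketch. The PyTorch code below is an illustration only, with assumed names, shapes, and regression target, not the paper's actual models (it compares three RNN variants): an RNN is trained on simulated-user dialogues to predict the dialogue-level return at every turn, and differences between consecutive per-turn predictions then serve as the shaping term added to the primary environmental reward.

```python
# Illustrative sketch of RNN-based reward shaping for dialogue policy
# learning. All dimensions and names are hypothetical; the idea shown is
# only the general one from the abstract: predict the dialogue's return
# at every turn, then shape with differences of consecutive predictions.
import torch
import torch.nn as nn

class TurnReturnRNN(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, turns):            # turns: (batch, T, feat_dim)
        h, _ = self.rnn(turns)           # (batch, T, hidden_dim)
        return self.head(h).squeeze(-1)  # per-turn return estimate (batch, T)

def train_step(model, opt, turns, dialogue_return):
    # Regress every turn's prediction onto the dialogue's overall return,
    # diffusing the dialogue-level evaluation down to the turn level.
    pred = model(turns)
    target = dialogue_return.unsqueeze(1).expand_as(pred)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def shaping_rewards(model, turns):
    # Temporal differences of the predictions give the turn-level shaping
    # term added to the primary (task-orientated) environmental reward.
    with torch.no_grad():
        v = model(turns)                 # (1, T)
    return v[:, 1:] - v[:, :-1]
```

Using differences of consecutive value-like predictions keeps the shaping signal close to the potential-based form F(s, s') = Φ(s') − Φ(s), which is the standard way to shape rewards without changing the optimal policy.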
Active Inverse Reward Design
Designers of AI agents often iterate on the reward function in a
trial-and-error process until they get the desired behavior, but this only
guarantees good behavior in the training environment. We propose structuring
this process as a series of queries asking the user to compare between
different reward functions. Thus we can actively select queries for maximum
informativeness about the true reward. In contrast to approaches asking the
designer for optimal behavior, this allows us to gather additional information
by eliciting preferences between suboptimal behaviors. After each query, we
need to update the posterior over the true reward function from observing the
proxy reward function chosen by the designer. The recently proposed Inverse
Reward Design (IRD) enables this. Our approach substantially outperforms IRD in
test environments. In particular, it can query the designer about
interpretable, linear reward functions and still infer non-linear ones.
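To make the query-selection loop concrete, here is a minimal NumPy sketch over a discrete hypothesis set of linear reward weights. The Boltzmann choice model and all names here are stand-in assumptions, not the actual IRD observation model or query types; the sketch shows only the two steps the abstract describes: actively picking the query with maximum expected information gain about the true reward, then updating the posterior from the proxy the designer chose.

```python
# Toy active-query loop over discrete reward hypotheses (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 4))                 # candidate true-reward weights
posterior = np.full(len(W), 1 / len(W))      # uniform prior over hypotheses

def choice_likelihood(query, w):
    # P(designer picks each proxy in `query` | true weights w): a Boltzmann
    # model over how well each proxy's induced behavior (feature vector phi)
    # scores under the true reward. A toy stand-in for the IRD model.
    scores = np.array([w @ phi for phi in query])
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()

def expected_info_gain(query, posterior):
    # Current entropy minus expected posterior entropy after the answer.
    H = -(posterior * np.log(posterior + 1e-12)).sum()
    probs = np.stack([choice_likelihood(query, w) for w in W])  # (|W|, |query|)
    p_answer = posterior @ probs
    eig = H
    for a, pa in enumerate(p_answer):
        post_a = posterior * probs[:, a] / (pa + 1e-12)
        H_a = -(post_a * np.log(post_a + 1e-12)).sum()
        eig -= pa * H_a
    return eig

# Greedily pick the most informative pair of proxy rewards, then do the
# Bayesian update given the (hypothetical) designer's choice.
queries = [rng.normal(size=(2, 4)) for _ in range(20)]
best = max(queries, key=lambda q: expected_info_gain(q, posterior))
answer = 0  # hypothetical: index of the proxy the designer chose
posterior = posterior * np.array([choice_likelihood(best, w)[answer] for w in W])
posterior /= posterior.sum()
```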
Deep reinforcement learning from human preferences
For sophisticated reinforcement learning (RL) systems to interact usefully
with real-world environments, we need to communicate complex goals to these
systems. In this work, we explore goals defined in terms of (non-expert) human
preferences between pairs of trajectory segments. We show that this approach
can effectively solve complex RL tasks without access to the reward function,
including Atari games and simulated robot locomotion, while providing feedback
on less than one percent of our agent's interactions with the environment. This
reduces the cost of human oversight far enough that it can be practically
applied to state-of-the-art RL systems. To demonstrate the flexibility of our
approach, we show that we can successfully train complex novel behaviors with
about an hour of human time. These behaviors and environments are considerably
more complex than any that have been previously learned from human feedback.
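The preference-learning core of this approach is small enough to sketch. Below is an illustrative PyTorch version of fitting a reward model to pairwise segment comparisons with a Bradley-Terry likelihood; the network, dimensions, and training details are assumptions, not the paper's exact setup. The learned per-step rewards would then be fed to a standard RL algorithm in place of the environment's reward.

```python
# Illustrative reward model trained from pairwise segment preferences
# (Bradley-Terry likelihood); shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def segment_return(self, seg):       # seg: (T, obs_dim)
        return self.net(seg).sum()       # sum of predicted per-step rewards

def preference_loss(model, seg_a, seg_b, pref_a: float):
    # P(a preferred over b) = exp(R_a) / (exp(R_a) + exp(R_b))
    #                       = sigmoid(R_a - R_b); fit with cross-entropy.
    ra, rb = model.segment_return(seg_a), model.segment_return(seg_b)
    p_a = torch.sigmoid(ra - rb)
    return -(pref_a * torch.log(p_a + 1e-8)
             + (1 - pref_a) * torch.log(1 - p_a + 1e-8))

# Usage: the human labels seg_a as preferred (pref_a=1.0); take one
# gradient step, then train the agent against the model's predictions.
model = RewardModel(obs_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_a, seg_b = torch.randn(25, 8), torch.randn(25, 8)
loss = preference_loss(model, seg_a, seg_b, pref_a=1.0)
opt.zero_grad(); loss.backward(); opt.step()
```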