Relative Importance Sampling For Off-Policy Actor-Critic in Deep Reinforcement Learning
Off-policy learning is less stable than on-policy learning in
reinforcement learning (RL). One source of this instability is the
discrepancy between the target (π) and behavior (b) policy
distributions. This discrepancy can be alleviated by employing a smooth
variant of importance sampling (IS), such as relative importance
sampling (RIS), which has a parameter that controls the degree of
smoothing. To cope with this instability, we present the first
relative importance sampling off-policy actor-critic (RIS-Off-PAC) model-free
algorithms in RL. In our method, the network yields a target policy (the
actor) and a value function (the critic) that assesses the current policy (π)
using samples drawn from the behavior policy. We use the action value
generated from the behavior policy, rather than from the target policy, in the
reward function to train our algorithm. We also use deep neural networks to
train both the actor and the critic. We evaluate our algorithm on a number of
OpenAI Gym benchmark problems and demonstrate performance better than or
comparable to several state-of-the-art RL baselines.
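The abstract does not give the exact functional form of the RIS weight. As a minimal sketch, one common smoothed ratio is the relative density-ratio form from the covariate-shift literature, w = π(a|s) / (β·π(a|s) + (1−β)·b(a|s)) with a smoothness parameter β ∈ [0, 1]; the symbol β and this formula are assumptions here, and the paper's definition may differ:

```python
def ris_weight(pi_prob, b_prob, beta=0.5):
    """Relative importance sampling weight (illustrative sketch only).

    Assumes the relative density-ratio form
        w = pi / (beta * pi + (1 - beta) * b),
    which is one common smoothed variant, not necessarily the
    paper's exact definition.
    beta = 0 recovers ordinary importance sampling pi / b;
    beta = 1 yields a constant weight of 1 (maximal smoothing).
    """
    return pi_prob / (beta * pi_prob + (1.0 - beta) * b_prob)

# Example: the behavior policy rarely takes an action the target policy favors.
pi_prob, b_prob = 0.9, 0.05
print(ris_weight(pi_prob, b_prob, beta=0.0))  # 18.0  -- ordinary IS, high variance
print(ris_weight(pi_prob, b_prob, beta=0.5))  # ~1.89 -- smoothed, bounded by 1/beta
```

Note that for β > 0 the weight is bounded above by 1/β, which is what tames the variance of plain importance sampling.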
Multiobjective Reinforcement Learning for Reconfigurable Adaptive Optimal Control of Manufacturing Processes
In industrial applications of adaptive optimal control, multiple
conflicting objectives often have to be considered. The weights (relative
importance) of the objectives are often not known at control design time and
can change with changing production conditions and requirements. In this work,
a novel model-free multiobjective reinforcement learning approach for adaptive
optimal control of manufacturing processes is proposed. The approach enables
sample-efficient learning across sequences of control configurations, each
given by a particular set of objective weights.
Comment: Conference preprint, ©2018 IEEE
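The abstract does not detail the algorithm; as a rough illustration of the reconfigurability idea, the sketch below keeps a vector-valued Q-function (one value per objective) so that changing the objective weights only changes the scalarization, not the learned values. All names, and the linear weighted-sum scalarization, are assumptions for illustration, not the paper's method:

```python
import numpy as np

# Hypothetical tabular sketch: vector-valued Q-learning for multiple objectives.
n_states, n_actions, n_objectives = 10, 4, 2
Q = np.zeros((n_states, n_actions, n_objectives))  # one value head per objective
alpha, gamma = 0.1, 0.99  # learning rate, discount factor

def act(s, weights):
    # Scalarize the per-objective Q-values with the current objective weights
    # and act greedily; new weights reuse the same learned Q-table.
    return int(np.argmax(Q[s] @ weights))

def update(s, a, reward_vec, s_next, weights):
    # Greedy next action under the current weights, then a vector TD update
    # that keeps each objective's value component separate.
    a_next = int(np.argmax(Q[s_next] @ weights))
    td_error = reward_vec + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * td_error

# Example step with a two-objective reward (e.g. throughput vs. energy cost).
update(s=0, a=1, reward_vec=np.array([1.0, -0.2]), s_next=3,
       weights=np.array([0.7, 0.3]))
```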