Relative Importance Sampling For Off-Policy Actor-Critic in Deep Reinforcement Learning
Off-policy learning is more unstable than on-policy learning in
reinforcement learning (RL). One source of this instability is the
discrepancy between the target (π) and behavior (b) policy
distributions. The discrepancy between the π and b distributions can be
alleviated by employing a smooth variant of importance sampling (IS),
such as relative importance sampling (RIS), which has a parameter that
controls the degree of smoothing. To cope with this instability, we
present the first relative importance sampling off-policy actor-critic
(RIS-Off-PAC) model-free algorithm in RL. In our method, the network
yields a target policy (the actor) and a value function (the critic)
that assesses the current policy (π) using samples drawn from the
behavior policy. We train our algorithm with the action value generated
from the behavior policy in the reward function, rather than from the
target policy. We also use deep neural networks for both the actor and
the critic.
critic. We evaluated our algorithm on a number of Open AI Gym benchmark
problems and demonstrate better or comparable performance to several
state-of-the-art RL baselines
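As a sketch of the smoothing idea the abstract describes, the following blends the target and behavior densities in the denominator of the IS ratio, which keeps the weights bounded. The symbol `beta` for the smoothness parameter and the specific blending form are assumptions; the abstract does not give the formula.

```python
import numpy as np

def ris_weights(pi_probs, b_probs, beta):
    """Relative importance sampling weights (a smooth variant of IS).

    Plain IS uses pi/b, which can explode when b(a|s) is small.
    Blending the densities bounds the weight by 1/beta:
      beta = 0 recovers ordinary IS, beta = 1 gives uniform weight 1.
    The parameter name `beta` is an assumption for illustration.
    """
    pi_probs = np.asarray(pi_probs, dtype=float)
    b_probs = np.asarray(b_probs, dtype=float)
    return pi_probs / (beta * pi_probs + (1.0 - beta) * b_probs)

# Probabilities of three actions under the target and behavior policies.
pi = np.array([0.5, 0.4, 0.1])
b = np.array([0.05, 0.5, 0.45])
print(pi / b)                   # plain IS ratio: first weight is 10x
print(ris_weights(pi, b, 0.2))  # smoothed weights, bounded by 1/0.2 = 5
```

The bound follows because the denominator is at least `beta * pi_probs`, so no single sample can dominate the update, which is the stability property the abstract appeals to.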
Deep Deterministic Portfolio Optimization
Can deep reinforcement learning algorithms be exploited as solvers for
optimal trading strategies? The aim of this work is to test reinforcement
learning algorithms on conceptually simple, but mathematically non-trivial,
trading environments. The environments are chosen such that an optimal or
close-to-optimal trading strategy is known. We study the deep deterministic
policy gradient algorithm and show that such a reinforcement learning agent can
successfully recover the essential features of the optimal trading strategies
and achieve close-to-optimal rewards.
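The core mechanism of the deep deterministic policy gradient algorithm the abstract tests is the chain rule through a deterministic actor. A minimal sketch on a hypothetical 1-D task (not the paper's trading environment): reward is -(a - s)^2, so the optimal linear policy mu(s) = w*s has w = 1, and gradient ascent through the known action-value recovers it.

```python
import numpy as np

# Deterministic policy gradient sketch: update the actor parameter w
# by the chain rule dJ/dw = dQ/da * dmu/dw, as in DDPG (here the
# critic's gradient is known analytically instead of learned).
rng = np.random.default_rng(0)
w = 0.0           # linear actor: mu(s) = w * s
lr = 0.1
for _ in range(200):
    s = rng.uniform(-1.0, 1.0)   # sample a state
    a = w * s                    # deterministic action
    dq_da = -2.0 * (a - s)       # gradient of Q(s, a) = -(a - s)^2 wrt a
    w += lr * dq_da * s          # chain rule: dQ/da * dmu/dw
print(round(w, 3))  # w converges toward the optimal value 1
```

A full DDPG agent would additionally learn the critic from replayed transitions and use target networks; this sketch isolates only the actor update that lets the method recover an optimal policy when one exists, mirroring the recovery result the abstract reports.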