Fourier Policy Gradients
We propose a new way of deriving policy gradient updates for reinforcement
learning. Our technique, based on Fourier analysis, recasts the integrals that
arise with expected policy gradients (EPG) as convolutions and turns them into
multiplications. The resulting analytical solutions allow us to capture the
low-variance benefits of EPG in a broad range of settings. For the critic, we treat
trigonometric and radial basis functions, two function families with the
universal approximation property. The choice of policy can be almost arbitrary,
including mixtures or hybrid continuous-discrete probability distributions.
Moreover, we derive a general family of sample-based estimators for stochastic
policy gradients, which unifies existing results on sample-based approximation.
We believe that this technique has the potential to shape the next generation
of policy gradient approaches, powered by analytical results.
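
As a concrete illustration of the mechanism this abstract describes, the sketch below numerically checks one closed form that the Fourier view yields: the expectation of a cosine critic feature under a Gaussian policy, which follows from the Gaussian characteristic function. The values and names are illustrative assumptions, not the paper's code.

import numpy as np

# Identity exploited by trigonometric critics under a Gaussian policy:
#   E_{a ~ N(mu, sigma^2)}[cos(w * a)] = exp(-sigma^2 * w^2 / 2) * cos(w * mu)
# The left-hand side is the kind of integral EPG must evaluate; the Fourier
# view replaces it with the closed form on the right.
rng = np.random.default_rng(0)
mu, sigma, w = 0.3, 0.7, 2.0

samples = rng.normal(mu, sigma, size=1_000_000)             # Monte Carlo baseline
mc = np.cos(w * samples).mean()

analytic = np.exp(-0.5 * sigma**2 * w**2) * np.cos(w * mu)  # closed form

print(f"Monte Carlo: {mc:.4f}, analytic: {analytic:.4f}")   # agree to ~3 decimals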
Deep Conservative Policy Iteration
Conservative Policy Iteration (CPI) is a founding algorithm of Approximate
Dynamic Programming (ADP). Its core principle is to stabilize greediness
through stochastic mixtures of consecutive policies. It comes with strong
theoretical guarantees, and inspired approaches in deep Reinforcement Learning
(RL). However, CPI itself has rarely been implemented, never with neural
networks, and only experimented on toy problems. In this paper, we show how CPI
can be practically combined with deep RL with discrete actions. We also
introduce adaptive mixture rates inspired by the theory. We experiment
thoroughly the resulting algorithm on the simple Cartpole problem, and validate
the proposed method on a representative subset of Atari games. Overall, this
work suggests that revisiting classic ADP may lead to improved and more stable
deep RL algorithms.
Comment: AAAI 2020 (long version)
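
The stabilizing mixture at the heart of CPI is easy to state in tabular form. The sketch below shows the update pi_{k+1} = (1 - alpha) * pi_k + alpha * greedy(Q^{pi_k}) for discrete actions; the deep version in the paper represents policies with neural networks, and the function and variable names here are illustrative assumptions.

import numpy as np

def cpi_mixture_step(pi: np.ndarray, q: np.ndarray, alpha: float) -> np.ndarray:
    """One CPI step: mix the current policy with the greedy policy w.r.t. Q^pi.

    pi: (num_states, num_actions) stochastic policy.
    q:  (num_states, num_actions) action values of pi.
    alpha: mixture rate (set adaptively in the paper).
    """
    greedy = np.zeros_like(pi)
    greedy[np.arange(pi.shape[0]), q.argmax(axis=1)] = 1.0  # greedy policy
    return (1.0 - alpha) * pi + alpha * greedy              # stochastic mixture

# Usage on a small random example: the result is still a valid distribution.
rng = np.random.default_rng(0)
pi = rng.dirichlet(np.ones(3), size=4)    # 4 states, 3 actions
q = rng.normal(size=(4, 3))
new_pi = cpi_mixture_step(pi, q, alpha=0.1)
assert np.allclose(new_pi.sum(axis=1), 1.0)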
IOB: Integrating Optimization Transfer and Behavior Transfer for Multi-Policy Reuse
Humans have the ability to reuse previously learned policies to solve new
tasks quickly, and reinforcement learning (RL) agents can do the same by
transferring knowledge from source policies to a related target task. Transfer
RL methods can reshape the policy optimization objective (optimization
transfer) or influence the behavior policy (behavior transfer) using source
policies. However, selecting the appropriate source policy with limited samples
to guide target policy learning has been a challenge. Previous methods
introduce additional components, such as hierarchical policies or estimations
of source policies' value functions, which can lead to non-stationary policy
optimization or heavy sampling costs, diminishing transfer effectiveness. To
address this challenge, we propose a novel transfer RL method that selects the
source policy without training extra components. Our method utilizes the Q
function in the actor-critic framework to guide policy selection, choosing the
source policy with the largest one-step improvement over the current target
policy. We integrate optimization transfer and behavior transfer (IOB) by
regularizing the learned policy to mimic the guidance policy and combining them
as the behavior policy. This integration significantly enhances transfer
effectiveness, surpasses state-of-the-art transfer RL baselines in benchmark
tasks, and improves final performance and knowledge transferability in
continual learning scenarios. Additionally, we show that our optimization
transfer technique is guaranteed to improve target policy learning.
Comment: 26 pages, 9 figures
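
The selection rule this abstract describes, picking the source policy whose proposed action the current critic values most, can be sketched in a few lines; every name below is an assumption for illustration, not the authors' interface.

from typing import Callable, Sequence
import numpy as np

Policy = Callable[[np.ndarray], np.ndarray]             # state -> action
QFunction = Callable[[np.ndarray, np.ndarray], float]   # (state, action) -> value

def select_guidance_policy(state: np.ndarray,
                           target: Policy,
                           sources: Sequence[Policy],
                           q: QFunction) -> Policy:
    """Pick the candidate whose proposed action the critic scores highest,
    i.e. the largest one-step improvement over the current target policy."""
    candidates = [target, *sources]
    scores = [q(state, pi(state)) for pi in candidates]
    return candidates[int(np.argmax(scores))]

Per the abstract, the learned policy is then regularized toward this guidance policy (optimization transfer) while the guidance policy is also mixed into the behavior policy (behavior transfer).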
Lexicographic Multi-Objective Reinforcement Learning
In this work we introduce reinforcement learning techniques for solving
lexicographic multi-objective problems. These are problems involving multiple
reward signals, in which the goal is to learn a policy that maximises the first
reward signal and, subject to this constraint, also maximises the second reward
signal, and so on. We present a family of both action-value and
policy gradient algorithms that can be used to solve such problems, and prove
that they converge to policies that are lexicographically optimal. We evaluate
the scalability and performance of these algorithms empirically, demonstrating
their practical applicability. As a more specific application, we show how our
algorithms can be used to impose safety constraints on the behaviour of an
agent, and compare their performance in this context with that of other
constrained reinforcement learning algorithms.
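
The lexicographic criterion itself has a compact greedy form over per-objective action values: keep only the actions that are (near-)optimal for the highest-priority objective, then break ties with the next objective, and so on. The sketch below is a minimal illustration under assumed names; the slack parameter is our assumption, not the paper's.

import numpy as np

def lexicographic_greedy(q_values: np.ndarray, slack: float = 1e-6) -> int:
    """q_values: (num_objectives, num_actions), rows ordered by priority.
    Returns an action that is lexicographically greedy up to the slack."""
    candidates = np.arange(q_values.shape[1])
    for q in q_values:                      # filter by each objective in order
        best = q[candidates].max()
        candidates = candidates[q[candidates] >= best - slack]
        if len(candidates) == 1:
            break
    return int(candidates[0])

# Usage: objective 0 ties actions 1 and 2; objective 1 then prefers action 2.
q = np.array([[0.0, 1.0, 1.0],
              [0.5, 0.2, 0.9]])
print(lexicographic_greedy(q))  # -> 2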