Discretizing Continuous Action Space for On-Policy Optimization
In this work, we show that discretizing action space for continuous control
is a simple yet powerful technique for on-policy optimization. The explosion in
the number of discrete actions can be efficiently addressed by a policy with a
factorized distribution across action dimensions. We show that the discrete
policy achieves significant performance gains with state-of-the-art on-policy
optimization algorithms (PPO, TRPO, ACKTR), especially on high-dimensional tasks
with complex dynamics. Additionally, we show that an ordinal parameterization
of the discrete distribution can introduce the inductive bias that encodes the
natural ordering between discrete actions. This ordinal architecture further
significantly improves the performance of PPO/TRPO.
Comment: Accepted at AAAI Conference on Artificial Intelligence (2020) in New York, NY, USA. An open source implementation can be found at https://github.com/robintyh1/onpolicybaseline
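
As a rough illustration of the factorized discrete policy described in the abstract, the sketch below discretizes each action dimension into a fixed number of bins and models one independent categorical distribution per dimension, so the number of logits grows linearly rather than exponentially in the action dimension. The PyTorch framing, the uniform bin grid, the network sizes, and the class name are assumptions for illustration, not the authors' released code (see the linked repository); the paper's ordinal parameterization would replace the plain categorical head used here.

import torch
import torch.nn as nn
from torch.distributions import Categorical

class FactorizedDiscretePolicy(nn.Module):
    """Hypothetical sketch: one categorical head per continuous action dimension."""
    def __init__(self, obs_dim, act_dim, num_bins=11, act_low=-1.0, act_high=1.0):
        super().__init__()
        self.act_dim, self.num_bins = act_dim, num_bins
        # act_dim * num_bins logits in total, instead of num_bins ** act_dim joint actions.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, act_dim * num_bins),
        )
        # Fixed grid that maps each bin index back to a continuous action value.
        self.register_buffer("bins", torch.linspace(act_low, act_high, num_bins))

    def distribution(self, obs):
        logits = self.net(obs).view(-1, self.act_dim, self.num_bins)
        return Categorical(logits=logits)  # independent categorical per dimension

    def act(self, obs):
        dist = self.distribution(obs)
        idx = dist.sample()                    # (batch, act_dim) bin indices
        log_prob = dist.log_prob(idx).sum(-1)  # factorized: per-dimension log-probs add
        action = self.bins[idx]                # map bin indices to continuous values
        return action, log_prob

The summed log-probability is what a PPO/TRPO-style objective would consume in place of the usual Gaussian log-density.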
Exploring Restart Distributions
We consider the generic approach of using an experience memory to help
exploration by adapting a restart distribution. That is, given the capacity to
reset the environment to states corresponding to the agent's past observations, we
help exploration by promoting faster state-space coverage via restarting the
agent from a more diverse set of initial states, as well as allowing it to
restart in states associated with significant past experiences. This approach
is compatible with both on-policy and off-policy methods. However, a caveat is
that altering the distribution of initial states could change the optimal
policies when searching within a restricted class of policies. To reduce this
unsought learning bias, we evaluate our approach in deep reinforcement learning,
which benefits from the high representational capacity of deep neural networks.
We instantiate three variants of our approach, each inspired by an idea in the
context of experience replay. Using these variants, we show that performance
gains can be achieved, especially in hard exploration problems.
Comment: RLDM 201
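
As a rough sketch of the restart idea, the snippet below keeps a bounded memory of states visited along past trajectories and, at the start of an episode, either restarts the agent from a remembered state or falls back to the environment's default initial-state distribution. The set_state hook, the uniform sampling, and the random-overwrite cap are assumptions for illustration; the paper's three variants instead adapt different experience-replay ideas for choosing which states to store and sample.

import random

class RestartDistribution:
    """Hypothetical sketch: restart episodes from states stored in an experience memory."""
    def __init__(self, capacity=10000, restart_prob=0.5):
        self.memory = []                 # states observed along past trajectories
        self.capacity = capacity
        self.restart_prob = restart_prob

    def store(self, state):
        # Keep the memory bounded; overwrite a random slot once it is full.
        if len(self.memory) < self.capacity:
            self.memory.append(state)
        else:
            self.memory[random.randrange(self.capacity)] = state

    def reset(self, env):
        # With probability restart_prob, restart from a remembered state;
        # otherwise use the environment's own initial-state distribution.
        if self.memory and random.random() < self.restart_prob:
            state = random.choice(self.memory)
            env.reset()
            env.set_state(state)         # assumed hook for setting the simulator state
            return state
        return env.reset()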
