A Logarithmic Barrier Method For Proximal Policy Optimization
Proximal policy optimization (PPO) has been proposed as a first-order
optimization method for reinforcement learning. Notably, PPO relies on an
exterior penalty method, and the minimizers of exterior penalty functions
approach feasibility only in the limit as the penalty parameter grows large.
As a result, PPO may suffer from low sampling efficiency. We therefore propose
a new surrogate objective based on an interior penalty method, the logarithmic
barrier, which avoids this defect of the exterior penalty approach. The
resulting method, which we call proximal policy optimization with barrier
method (PPO-B), retains nearly all the advantages of PPO, such as easy
implementation and good generalization. We conclude that PPO-B outperforms PPO
in terms of sampling efficiency: it achieves clearly better performance than
PPO on Atari and MuJoCo environments.
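The abstract does not spell out PPO-B's surrogate objective. As a minimal
sketch, assuming the interior penalty takes the common log-barrier form
mu * log(eps - |r - 1|) applied to the probability ratio
r = pi_theta(a|s) / pi_theta_old(a|s) (the function names and the barrier
coefficient mu below are illustrative, not taken from the paper), the contrast
with PPO's clipped objective might look like this:

    import torch

    def ppo_clip_surrogate(ratio, advantage, eps=0.2):
        # PPO's clipped surrogate (the exterior-penalty-style objective):
        # once the ratio leaves [1 - eps, 1 + eps], the clipped term is
        # selected and its gradient with respect to the ratio is zero.
        unclipped = ratio * advantage
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
        return torch.min(unclipped, clipped).mean()

    def log_barrier_surrogate(ratio, advantage, eps=0.2, mu=0.01):
        # Hypothetical interior-penalty surrogate: mu * log(eps - |r - 1|)
        # tends to -infinity as the ratio approaches the trust-region
        # boundary, so gradient ascent keeps iterates strictly feasible
        # rather than penalizing violations only after they occur.
        gap = eps - (ratio - 1.0).abs()
        gap = torch.clamp(gap, min=1e-8)  # guard: barrier undefined outside
        return (ratio * advantage + mu * torch.log(gap)).mean()

    # Toy check: near the boundary (ratio = 1.195, eps = 0.2) the barrier's
    # gradient turns negative, pulling the ratio back into the interior,
    # whereas the clipped objective's gradient drops to zero only after the
    # boundary has already been crossed.
    ratio = torch.tensor([1.195], requires_grad=True)
    advantage = torch.tensor([1.0])
    log_barrier_surrogate(ratio, advantage).backward()
    print(ratio.grad)  # ~tensor([-1.]): ascent direction points back inside

In standard interior-point methods the coefficient mu is annealed toward zero
across iterations so the barrier's bias vanishes; whether PPO-B does this is
not stated in the abstract.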