Adversarial Active Exploration for Inverse Dynamics Model Learning
We present adversarial active exploration for inverse dynamics model
learning, a simple yet effective learning scheme that incentivizes exploration
in an environment without any human intervention. Our framework consists of a
deep reinforcement learning (DRL) agent and an inverse dynamics model
contesting with each other. The DRL agent collects training samples for the inverse dynamics model with the objective of maximizing the model's prediction error. The inverse dynamics model, in turn, is trained on the samples the agent collects, and rewards the agent whenever the model fails to predict the action the agent actually took. In such
a competitive setting, the DRL agent learns to generate samples that the
inverse dynamics model fails to predict correctly, while the inverse dynamics
model learns to adapt to the challenging samples. We further propose a reward
structure that ensures the DRL agent collects only moderately hard samples, rather than overly hard ones that prevent the inverse model from predicting
effectively. We evaluate the effectiveness of our method on several robotic arm
and hand manipulation tasks against multiple baseline models. Experimental
results show that our method is comparable to models trained directly with expert demonstrations, and superior to the other baselines, even without any human priors.
Comment: Published as a conference paper at CoRL 2019
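The contest described above lends itself to a compact sketch. Below is a minimal PyTorch illustration of the scheme as the abstract describes it; the class and function names, the network sizes, and the threshold DELTA are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the adversarial scheme above (PyTorch). InverseModel,
# exploration_reward, and DELTA are illustrative assumptions, not the
# paper's exact formulation.
import torch
import torch.nn as nn

DELTA = 0.5  # assumed threshold separating "moderately" from "overly" hard samples

class InverseModel(nn.Module):
    """Predicts the action a_t that produced the transition (s_t, s_{t+1})."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))

def exploration_reward(pred_error: torch.Tensor, delta: float = DELTA) -> torch.Tensor:
    # One plausible shaping of the proposed reward structure: reward rises
    # with the inverse model's error up to delta, then falls, so the agent
    # is paid for moderately hard samples but not for hopelessly hard ones.
    return torch.where(pred_error <= delta, pred_error, 2 * delta - pred_error)

def training_step(inv_model: InverseModel, opt: torch.optim.Optimizer,
                  s: torch.Tensor, a: torch.Tensor,
                  s_next: torch.Tensor) -> torch.Tensor:
    # The inverse model adapts to the samples the agent collected ...
    pred_error = ((inv_model(s, s_next) - a) ** 2).mean(dim=-1)
    opt.zero_grad()
    pred_error.mean().backward()
    opt.step()
    # ... and the returned per-sample rewards drive the DRL agent's update
    # (e.g., PPO), which is omitted here.
    return exploration_reward(pred_error.detach())
```

The tent-shaped reward is one simple way to encode "moderately hard but not overly hard"; the paper's actual reward structure may differ.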
Generalization in Transfer Learning
Agents trained with deep reinforcement learning algorithms are capable of
performing highly complex tasks including locomotion in continuous
environments. We investigate transferring the learning acquired in one task to
a set of previously unseen tasks. Generalization and overfitting in deep
reinforcement learning are not commonly addressed in current transfer learning
research. Conducting a comparative analysis without an intermediate regularization step yields underperforming benchmarks and, because the assessments are rudimentary, inaccurate algorithm comparisons. In this study, we propose
regularization techniques in deep reinforcement learning for continuous control
through the application of sample elimination, early stopping and maximum
entropy regularized adversarial learning. First, we discuss the importance of including the training iteration number among the hyperparameters in deep transfer reinforcement learning. Because source-task performance is not indicative of the generalization capacity of the algorithm, we start by acknowledging the training iteration number as a hyperparameter. In line with
this, we introduce an additional step of resorting to earlier snapshots of
policy parameters to prevent overfitting to the source task. Then, to generate
robust policies, we discard the samples that lead to overfitting via a method
we call strict clipping. Furthermore, we increase the generalization capacity
in widely used transfer learning benchmarks by using maximum entropy
regularization, different critic methods, and curriculum learning in an
adversarial setup. Subsequently, we propose maximum entropy adversarial
reinforcement learning to increase domain randomization. Finally, we
evaluate the robustness of these methods on simulated robots in target
environments where the morphology of the robot, gravity, and tangential
friction coefficient of the environment are altered.
Comment: 23 pages, 36 figures
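Of the regularization techniques above, strict clipping is the most mechanical, and a brief sketch may make it concrete. One way to read "sample elimination via strict clipping" is as a PPO-style policy update that discards, rather than clips, transitions whose importance ratio leaves the trust region; the snippet below illustrates that reading. The function name strict_clip_loss and the bound EPS_STRICT are assumptions, and the paper's exact formulation may differ.

```python
# Hedged sketch of "strict clipping" as sample elimination in a PPO-style
# update: transitions whose importance ratio leaves the trust region are
# dropped outright instead of having their objective clipped. EPS_STRICT
# and the function name are assumptions, not the paper's exact definition.
import torch

EPS_STRICT = 0.2  # assumed bound on the importance ratio

def strict_clip_loss(new_logp: torch.Tensor, old_logp: torch.Tensor,
                     advantages: torch.Tensor,
                     eps: float = EPS_STRICT) -> torch.Tensor:
    ratio = torch.exp(new_logp - old_logp)
    # Keep only samples whose ratio stays inside [1 - eps, 1 + eps];
    # the rest are eliminated as likely to overfit the source task.
    keep = (ratio > 1.0 - eps) & (ratio < 1.0 + eps)
    if not keep.any():
        # No sample survives: contribute zero loss while keeping the graph intact.
        return (ratio * 0.0).sum()
    return -(ratio[keep] * advantages[keep]).mean()
```

The maximum entropy variants described in the abstract would additionally subtract a policy-entropy bonus from this loss, encouraging diverse behaviors under the adversarial, domain-randomized setup.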