Sample-Efficient Multi-Agent Reinforcement Learning with Demonstrations for Flocking Control
Flocking control is a significant problem in multi-agent systems such as
multi-agent unmanned aerial vehicles and multi-agent autonomous underwater
vehicles, where it enhances the cooperation and safety of the agents. In
contrast to traditional methods, multi-agent reinforcement learning (MARL)
solves the flocking control problem more flexibly. However, MARL-based methods
suffer from sample inefficiency, since they require a large number of
experiences to be collected through interaction between the agents and the
environment. We propose a novel method, Pretraining with Demonstrations for
MARL (PwD-MARL), which can utilize non-expert demonstrations collected in
advance with traditional methods to pretrain agents. During pretraining,
agents learn policies from the demonstrations via MARL and behavior cloning
simultaneously, while being prevented from overfitting to the demonstrations.
By pretraining with non-expert demonstrations, PwD-MARL gives online MARL a
warm start and thereby improves its sample efficiency. Experiments show that
PwD-MARL improves both sample efficiency and policy performance on the
flocking control problem, even with low-quality or scarce demonstrations.
Comment: Accepted by IEEE Vehicular Technology Conference (VTC) 2022-Fall
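The abstract does not spell out how the MARL and behavior-cloning objectives are combined during pretraining. As a rough illustration only, here is a minimal single-agent sketch that sums a TD loss and a cross-entropy BC loss over one demonstration batch (PyTorch; all names such as `QNet`, `pretrain_step`, and `bc_weight` are hypothetical, and the paper's safeguard against overfitting the demonstrations is not shown):

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small Q-network over a discrete action space (illustrative)."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def pretrain_step(q_net, target_q, optimizer, batch, gamma=0.99, bc_weight=0.5):
    obs, act, rew, next_obs, done = batch

    # Off-policy TD loss computed on demonstration transitions.
    q = q_net(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1 - done) * target_q(next_obs).max(dim=1).values
    td_loss = nn.functional.mse_loss(q, target)

    # Behavior-cloning loss: treat Q-values as logits and push the
    # greedy policy toward the demonstrated actions.
    bc_loss = nn.functional.cross_entropy(q_net(obs), act)

    loss = td_loss + bc_weight * bc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for a demonstration batch:
obs_dim, n_actions, batch_size = 8, 4, 32
q_net, target_q = QNet(obs_dim, n_actions), QNet(obs_dim, n_actions)
target_q.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
batch = (
    torch.randn(batch_size, obs_dim),            # observations
    torch.randint(0, n_actions, (batch_size,)),  # demonstrated actions
    torch.randn(batch_size),                     # rewards
    torch.randn(batch_size, obs_dim),            # next observations
    torch.zeros(batch_size),                     # done flags
)
print(pretrain_step(q_net, target_q, optimizer, batch))
```

After such a pretraining phase, the warm-started networks would be handed over to the usual online MARL loop, which is where the claimed sample-efficiency gains are measured.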
Coherent Soft Imitation Learning
Imitation learning methods seek to learn from an expert either through
behavioral cloning (BC) of the policy or inverse reinforcement learning (IRL)
of the reward. Such methods enable agents to learn complex tasks from humans
that are difficult to capture with hand-designed reward functions. Choosing BC
or IRL for imitation depends on the quality and state-action coverage of the
demonstrations, as well as on any additional access to the Markov decision process.
Hybrid strategies that combine BC and IRL are not common, as initial policy
optimization against inaccurate rewards diminishes the benefit of pretraining
the policy with BC. This work derives an imitation method that captures the
strengths of both BC and IRL. In the entropy-regularized ('soft') reinforcement
learning setting, we show that the behavior-cloned policy can be used as both
a shaped reward and a critic hypothesis space by inverting the regularized
policy update. This coherency facilitates fine-tuning cloned policies using the
reward estimate and additional interactions with the environment. This approach
conveniently achieves imitation learning through initial behavior cloning,
followed by refinement via RL with online or offline data sources. The
simplicity of the approach enables graceful scaling to high-dimensional and
vision-based tasks, with stable learning and minimal hyperparameter tuning, in
contrast to adversarial approaches.
Comment: 51 pages, 47 figures. DeepMind internship report
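For readers unfamiliar with the inversion the abstract refers to, the following is a hedged reconstruction in standard soft-RL notation (the temperature \(\alpha\) and prior policy \(\pi_0\) are assumed symbols, not necessarily the paper's; details may differ):

```latex
% Standard entropy-regularized ("soft") policy improvement:
\[
\pi(a \mid s) = \pi_0(a \mid s)\,
  \exp\!\left(\frac{Q(s,a) - V(s)}{\alpha}\right),
\qquad
V(s) = \alpha \log \sum_{a'} \pi_0(a' \mid s)\,
  \exp\!\left(\frac{Q(s,a')}{\alpha}\right).
\]
% Inverting this update with the behavior-cloned policy \pi_{BC} in
% place of \pi recovers a critic estimate up to the state value:
\[
\hat{Q}(s,a) = \alpha \log \frac{\pi_{BC}(a \mid s)}{\pi_0(a \mid s)} + V(s).
\]
```

Under this reading, the log-ratio of the cloned policy to the prior supplies both the shaped reward and the critic initialization that the abstract describes, which is then refined with soft RL on online or offline data.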