Synthetic Experience Replay
A key theme in the past decade has been that when large neural networks and
large datasets combine they can produce remarkable results. In deep
reinforcement learning (RL), this paradigm is commonly made possible through
experience replay, whereby a dataset of past experiences is used to train a
policy or value function. However, unlike in supervised or self-supervised
learning, an RL agent has to collect its own data, which is often limited.
Thus, it is challenging to reap the benefits of deep learning, and even small
neural networks can overfit at the start of training. In this work, we leverage
the tremendous recent progress in generative modeling and propose Synthetic
Experience Replay (SynthER), a diffusion-based approach to flexibly upsample an
agent's collected experience. We show that SynthER is an effective method for
training RL agents across offline and online settings, in both proprioceptive
and pixel-based environments. In offline settings, we observe drastic
improvements when upsampling small offline datasets and see that additional
synthetic data also allows us to effectively train larger networks.
Furthermore, SynthER enables online agents to train with a much higher
update-to-data ratio than before, leading to a significant increase in sample
efficiency, without any algorithmic changes. We believe that synthetic training
data could open the door to realizing the full potential of deep learning for
replay-based RL algorithms from limited data. Finally, we open-source our code
at https://github.com/conglu1997/SynthER.
Comment: Published at NeurIPS, 202
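The upsampling idea described above can be sketched in a few lines. This is a minimal stand-in, not the paper's implementation: a Gaussian perturbation of real transitions plays the role of the trained diffusion model, and the function name and noise scale are illustrative assumptions.

```python
import numpy as np

# Sketch: upsample a small replay buffer with synthetic transitions.
# A Gaussian perturbation stands in for the diffusion model that SynthER
# would actually train on the real transitions.
rng = np.random.default_rng(0)

def upsample(real_transitions, n_synthetic, noise_scale=0.05):
    """Return the real transitions plus n_synthetic generated ones."""
    idx = rng.integers(0, len(real_transitions), size=n_synthetic)
    base = real_transitions[idx]
    synthetic = base + noise_scale * rng.standard_normal(base.shape)
    return np.concatenate([real_transitions, synthetic], axis=0)

real = rng.standard_normal((100, 8))   # 100 transitions, 8-dim (s, a, r, s')
buffer = upsample(real, n_synthetic=900)
print(buffer.shape)  # (1000, 8): a 10x-larger buffer for the agent to train on
```

The agent then samples minibatches from the enlarged buffer exactly as it would from a normal replay buffer, which is why no algorithmic changes are needed downstream.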
Hierarchical Kickstarting for Skill Transfer in Reinforcement Learning
Practising and honing skills forms a fundamental component of how humans learn, yet artificial agents are rarely specifically trained to perform them. Instead, they are usually trained end-to-end, in the hope that useful skills will be implicitly learned in order to maximise the discounted return of some extrinsic reward function. In this paper, we investigate how skills can be incorporated into the training of reinforcement learning (RL) agents in complex environments with large state-action spaces and sparse rewards. To this end, we created SkillHack, a benchmark of tasks and associated skills based on the game of NetHack. We evaluate a number of baselines on this benchmark, as well as our own novel skill-based method, Hierarchical Kickstarting (HKS), which is shown to outperform all other evaluated methods. Our experiments show that learning with prior knowledge of useful skills can significantly improve the performance of agents on complex problems. We ultimately argue that utilising predefined skills provides a useful inductive bias for RL problems, especially those with large state-action spaces and sparse rewards.
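Kickstarting, which HKS builds on, adds a distillation term that pulls the student's action distribution toward a pretrained skill teacher's. The NumPy function below is an illustrative stand-in, assuming discrete action probabilities and a fixed weight; it is not the paper's implementation.

```python
import numpy as np

# Sketch of a kickstarting loss: RL loss plus a weighted KL term that
# distils a pretrained teacher's action distribution into the student.
# In practice the weight is typically annealed over training.
def kickstart_loss(rl_loss, student_probs, teacher_probs, weight):
    kl = np.sum(teacher_probs * np.log(teacher_probs / student_probs))
    return rl_loss + weight * kl
```

When the student already matches the teacher, the KL term vanishes and the loss reduces to the plain RL objective, so the auxiliary term only acts where the student's policy disagrees with the skill.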
Replay-Guided Adversarial Environment Design
Deep reinforcement learning (RL) agents may successfully generalize to new settings if trained on an appropriately diverse set of environment and task configurations. Unsupervised Environment Design (UED) is a promising self-supervised RL paradigm, wherein the free parameters of an underspecified environment are automatically adapted during training to the agent's capabilities, leading to the emergence of diverse training environments. Here, we cast Prioritized Level Replay (PLR), an empirically successful but theoretically unmotivated method that selectively samples randomly-generated training levels, as UED. We argue that by curating completely random levels, PLR, too, can generate novel and complex levels for effective training. This insight reveals a natural class of UED methods we call Dual Curriculum Design (DCD). Crucially, DCD includes both PLR and a popular UED algorithm, PAIRED, as special cases and inherits similar theoretical guarantees. This connection allows us to develop novel theory for PLR, providing a version with a robustness guarantee at Nash equilibria. Furthermore, our theory suggests a highly counterintuitive improvement to PLR: by stopping the agent from updating its policy on uncurated levels (training on less data), we can improve the convergence to Nash equilibria. Indeed, our experiments confirm that our new method, PLR⊥, obtains better results on a suite of out-of-distribution, zero-shot transfer tasks, in addition to demonstrating that PLR⊥ improves the performance of PAIRED, from which it inherited its theoretical framework.
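The counterintuitive fix can be sketched as a training loop in which the agent updates its policy only on curated (replayed) levels, while freshly generated levels are merely scored for the level buffer. The `policy_update` and `score_level` callables and the buffer layout are hypothetical interfaces, not the paper's code.

```python
import random

# Sketch of the PLR-perp idea: gradient steps happen only on replayed,
# curated levels; new random levels are evaluated (e.g. a regret estimate)
# and stored, but produce no policy update.
def plr_perp_step(policy_update, score_level, level_buffer, replay_prob=0.5):
    if level_buffer and random.random() < replay_prob:
        level = max(level_buffer, key=lambda l: l["score"])  # curated level
        policy_update(level)            # train only on replayed levels
        return "trained"
    level = {"id": random.random(), "score": 0.0}
    level["score"] = score_level(level)  # evaluate, do not train
    level_buffer.append(level)
    return "scored"
```

The asymmetry is the whole point: "training on less data" here means skipping gradient updates on uncurated levels, which is what yields the improved convergence the abstract describes.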
Evolving Curricula with Regret-Based Environment Design
Training generally-capable agents with reinforcement learning (RL) remains a significant challenge. A promising avenue for improving the robustness of RL agents is through the use of curricula. One such class of methods frames environment design as a game between a student and a teacher, using regret-based objectives to produce environment instantiations (or levels) at the frontier of the student agent's capabilities. These methods benefit from theoretical robustness guarantees at equilibrium, yet they often struggle to find effective levels in challenging design spaces in practice. By contrast, evolutionary approaches incrementally alter environment complexity, resulting in potentially open-ended learning, but often rely on domain-specific heuristics and vast amounts of computational resources. This work proposes harnessing the power of evolution in a principled, regret-based curriculum. Our approach, which we call Adversarially Compounding Complexity by Editing Levels (ACCEL), seeks to constantly produce levels at the frontier of an agent's capabilities, resulting in curricula that start simple but become increasingly complex. ACCEL maintains the theoretical benefits of prior regret-based methods, while providing significant empirical gains in a diverse set of environments. An interactive version of this paper is available at https://accelagent.github.io
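ACCEL's core loop can be sketched as edit-then-filter: pick a level at the student's frontier, apply a small mutation, and keep the result only if the student's estimated regret on it stays high. The level encoding (a list of wall positions) and `estimate_regret` are stand-ins for illustration, not the paper's implementation.

```python
import random

# Sketch of an ACCEL-style step: curricula start simple and compound in
# complexity because each kept level is a small edit of an already
# challenging one.
def accel_step(level_buffer, estimate_regret, threshold=0.5):
    parent = max(level_buffer, key=estimate_regret)  # level at the frontier
    child = parent + [random.randint(0, 99)]         # small edit: add a wall
    if estimate_regret(child) > threshold:           # keep only if still challenging
        level_buffer.append(child)
    return level_buffer
```

Because edits are local, the method needs no domain-specific generator: complexity emerges from repeatedly mutating whatever the student currently finds hard.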
Grounding Aleatoric Uncertainty for Unsupervised Environment Design
Adaptive curricula in reinforcement learning (RL) have proven effective for producing policies robust to discrepancies between the training and test environments. Recently, the Unsupervised Environment Design (UED) framework generalized RL curricula to generating sequences of entire environments, leading to new methods with robust minimax regret properties. Problematically, in partially-observable or stochastic settings, optimal policies may depend on the ground-truth distribution over aleatoric parameters of the environment in the intended deployment setting, while curriculum learning necessarily shifts the training distribution. We formalize this phenomenon as curriculum-induced covariate shift (CICS), and describe how its occurrence in aleatoric parameters can lead to suboptimal policies. Directly sampling these parameters from the ground-truth distribution avoids the issue, but thwarts curriculum learning. We propose SAMPLR, a minimax regret UED method that optimizes the ground-truth utility function, even when the underlying training data is biased due to CICS. We prove, and validate on challenging domains, that our approach preserves optimality under the ground-truth distribution, while promoting robustness across the full range of environment settings.
Learning General World Models in a Handful of Reward-Free Deployments
Building generally capable agents is a grand challenge for deep reinforcement learning (RL). To approach this challenge practically, we outline two key desiderata: 1) to facilitate generalization, exploration should be task agnostic; 2) to facilitate scalability, exploration policies should collect large quantities of data without costly centralized retraining. Combining these two properties, we introduce the reward-free deployment efficiency setting, a new paradigm for RL research. We then present CASCADE, a novel approach for self-supervised exploration in this new setting. CASCADE seeks to learn a world model by collecting data with a population of agents, using an information theoretic objective inspired by Bayesian Active Learning. CASCADE achieves this by specifically maximizing the diversity of trajectories sampled by the population through a novel cascading objective. We provide theoretical intuition for CASCADE which we show in a tabular setting improves upon naïve approaches that do not account for population diversity. We then demonstrate that CASCADE collects diverse task-agnostic datasets and learns agents that generalize zero-shot to novel, unseen downstream tasks on Atari, MiniGrid, Crafter and the DM Control Suite. Code and videos are available at https://ycxuyingchen.github.io/cascade/
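The cascading objective can be sketched as greedy diverse selection: each new explorer added to the population is the candidate whose trajectory differs most from those already chosen. Here trajectories are plain arrays and Euclidean distance stands in for the paper's information-theoretic objective; all names are illustrative.

```python
import numpy as np

# Sketch of a CASCADE-style cascading selection: build the exploration
# population one member at a time, each time picking the candidate whose
# trajectory is farthest from every trajectory already selected.
def cascade_select(candidates, k):
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max(
            (c for c in candidates if not any(c is s for s in chosen)),
            key=lambda c: min(np.linalg.norm(c - s) for s in chosen),
        )
        chosen.append(best)
    return chosen
```

Greedy max-min selection like this is a common surrogate for diversity objectives; the point is only that each addition is scored against the whole population collected so far, which is what "cascading" refers to.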
Bayesian Generational Population-Based Training
Reinforcement learning (RL) offers the potential for training generally
capable agents that can interact autonomously in the real world. However, one
key limitation is the brittleness of RL algorithms to core hyperparameters and
network architecture choice. Furthermore, non-stationarities such as evolving
training data and increased agent complexity mean that different
hyperparameters and architectures may be optimal at different points of
training. This motivates AutoRL, a class of methods seeking to automate these
design choices. One prominent class of AutoRL methods is Population-Based
Training (PBT), which has led to impressive performance in several large-scale
settings. In this paper, we introduce two new innovations in PBT-style methods.
First, we employ trust-region based Bayesian Optimization, enabling full
coverage of the high-dimensional mixed hyperparameter search space. Second, we
show that using a generational approach, we can also learn both architectures
and hyperparameters jointly on-the-fly in a single training run. Leveraging the
new highly parallelizable Brax physics engine, we show that these innovations
lead to large performance gains, significantly outperforming the tuned baseline
while learning entire configurations on the fly. Code is available at
https://github.com/xingchenwan/bgpbt.
Comment: AutoML Conference 2022. 10 pages, 4 figures, 3 tables (28 pages, 10 figures, 7 tables including references and appendices).
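The PBT skeleton that such methods build on can be sketched as an exploit/explore step: the worst population member copies the best member's weights, then perturbs its hyperparameters. BGPBT would propose new hyperparameters with trust-region Bayesian optimization instead; the random multiplicative perturbation below is a simple stand-in, and the member layout is illustrative.

```python
import random

# Sketch of a PBT exploit/explore step. "Exploit" copies the best member's
# weights into the worst member; "explore" perturbs the copied member's
# hyperparameters (here, just a learning rate).
def pbt_exploit_explore(population, perturb_scale=1.2):
    population.sort(key=lambda m: m["score"], reverse=True)
    best, worst = population[0], population[-1]
    worst["weights"] = dict(best["weights"])             # exploit: copy weights
    factor = perturb_scale if random.random() < 0.5 else 1 / perturb_scale
    worst["lr"] = best["lr"] * factor                    # explore: perturb hyperparam
    return population
```

Running this step periodically during training is what lets hyperparameters change over the course of a single run, which is the non-stationarity argument the abstract makes.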