Learning World Models with Identifiable Factorization
Extracting a stable and compact representation of the environment is crucial
for efficient reinforcement learning in high-dimensional, noisy, and
non-stationary environments. Different categories of information coexist in
such environments -- how to effectively extract and disentangle this
information remains a challenging problem. In this paper, we propose IFactor, a
general framework to model four distinct categories of latent state variables
that capture various aspects of information within the RL system, based on
their interactions with actions and rewards. Our analysis establishes
block-wise identifiability of these latent variables, which not only provides a
stable and compact representation but also reveals that all reward-relevant
factors are significant for policy learning. We further present a practical
approach to learning the world model with identifiable blocks, ensuring the
removal of redundancies while retaining minimal and sufficient information for
policy optimization. Experiments in synthetic worlds demonstrate that our
method accurately identifies the ground-truth latent variables, substantiating
our theoretical findings. Moreover, experiments in variants of the DeepMind
Control Suite and RoboDesk showcase the superior performance of our approach
over baselines.
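As a rough schematic of the factorization (notation ours, not the paper's),
the four categories split the latent state by whether each block is influenced
by the action and whether it affects the reward:

```latex
% Schematic four-way factorization (notation ours): superscripts mark
% action-influence (a vs. \bar{a}) and reward-relevance (r vs. \bar{r}).
s_t = \bigl(s_t^{ar},\, s_t^{a\bar{r}},\, s_t^{\bar{a}r},\, s_t^{\bar{a}\bar{r}}\bigr),
\qquad
r_t = g\bigl(s_t^{ar},\, s_t^{\bar{a}r},\, \epsilon_t\bigr),
\qquad
s_{t+1}^{a\,\cdot} \sim p\bigl(\cdot \mid s_t, a_t\bigr),
\quad
s_{t+1}^{\bar{a}\,\cdot} \sim p\bigl(\cdot \mid s_t\bigr).
```

Under this reading, only the reward-relevant blocks s^{ar} and s^{\bar{a}r}
need to be retained for policy optimization, which is what the identifiability
result licenses.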
Learning and Control of Dynamical Systems
Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to learning and control of dynamical systems where the underlying dynamics are unknown or only partially known a priori, and must be inferred from collected data. However, many of these classical results have focused on asymptotic guarantees, providing limited insight into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise.
In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of the operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation covers a broad range of dynamical systems, starting from fully observable linear dynamical systems to partially observable linear dynamical systems, and ultimately, nonlinear systems.
We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
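As a minimal illustration of the learning step in the simplest setting covered
(a generic least-squares sketch under our own toy assumptions, not the
thesis's FALCON method), a fully observable linear system
x_{t+1} = A x_t + B u_t + w_t can be identified from a single trajectory, with
estimation error shrinking as more transitions are collected:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth linear dynamics x_{t+1} = A x_t + B u_t + w_t (toy values,
# unknown to the learner).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
T, noise_std = 500, 0.05

# Collect a trajectory under random exploratory inputs.
xs, us = [np.zeros(2)], []
for t in range(T):
    u = rng.normal(size=1)
    w = noise_std * rng.normal(size=2)
    xs.append(A @ xs[-1] + B @ u + w)
    us.append(u)

# Least-squares estimate of [A B] from stacked regressors z_t = (x_t, u_t).
Z = np.hstack([np.array(xs[:-1]), np.array(us)])   # shape (T, 3)
Y = np.array(xs[1:])                               # shape (T, 2)
theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)      # theta.T approximates [A B]
A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]

print("A error:", np.linalg.norm(A_hat - A))
print("B error:", np.linalg.norm(B_hat - B))
```

Finite-sample analyses of the kind the thesis pursues quantify how this error
decays with T in the presence of the statistical noise w_t.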
Online Control Barrier Functions for Decentralized Multi-Agent Navigation
Control barrier functions (CBFs) enable guaranteed safe multi-agent
navigation in the continuous domain. The resulting navigation performance,
however, is highly sensitive to the underlying hyperparameters. Traditional
approaches consider fixed CBFs (where parameters are tuned a priori), and hence,
typically do not perform well in cluttered and highly dynamic environments:
conservative parameter values can lead to inefficient agent trajectories, or
even failure to reach goal positions, whereas aggressive parameter values can
lead to infeasible controls. To overcome these issues, in this paper, we
propose online CBFs, whereby hyperparameters are tuned in real time, as a
function of what agents perceive in their immediate neighborhood. Since the
explicit relationship between CBFs and navigation performance is hard to model,
we leverage reinforcement learning to learn CBF-tuning policies in a model-free
manner. Because we parameterize the policies with graph neural networks (GNNs),
we are able to synthesize decentralized agent controllers that adjust parameter
values locally, varying the degree of conservative and aggressive behaviors
across agents. Simulations as well as real-world experiments show that (i)
online CBFs are capable of solving navigation scenarios that are infeasible for
fixed CBFs, and (ii) that they improve navigation performance by adapting to
other agents and to changes in the environment.
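To make the role of the tuned hyperparameters concrete, here is a minimal
single-agent sketch (ours; the paper's setting is multi-agent with GNN
policies). For single-integrator dynamics x_dot = u and barrier h(x) >= 0, the
CBF condition grad_h(x) @ u >= -alpha * h(x) carves out a half-space of safe
controls, and the gain alpha is exactly the kind of parameter an online policy
could retune:

```python
import numpy as np

def cbf_filter(x, u_des, obstacle, radius, alpha):
    """Project u_des onto the safe half-space grad_h(x) @ u >= -alpha * h(x).

    h(x) = ||x - obstacle||^2 - radius^2 keeps the agent outside the obstacle;
    alpha is the tunable CBF gain (small = conservative, large = aggressive).
    """
    diff = x - obstacle
    h = diff @ diff - radius**2          # barrier value
    grad_h = 2.0 * diff                  # gradient of h (dynamics: x_dot = u)
    if grad_h @ u_des >= -alpha * h:     # desired control is already safe
        return u_des
    # Closed-form solution of the single-constraint CBF-QP:
    # min ||u - u_des||^2  s.t.  grad_h @ u >= -alpha * h
    correction = (-alpha * h - grad_h @ u_des) / (grad_h @ grad_h)
    return u_des + correction * grad_h

x = np.array([1.0, 0.0])
u_goal = np.array([-1.0, 0.0])           # heads straight at the obstacle
safe_u = cbf_filter(x, u_goal, obstacle=np.zeros(2), radius=0.5, alpha=1.0)
print(safe_u)
```

With a small alpha the filter brakes far from the obstacle; a large alpha
permits closer, more aggressive approaches, at the risk of infeasibility once
input limits enter the picture.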
Evaluating Architectural Safeguards for Uncertain AI Black-Box Components
Although tremendous progress has been made in Artificial Intelligence (AI), it entails new challenges. The growing complexity of learning tasks requires more complex AI components, which increasingly exhibit unreliable behaviour. In this book, we present a model-driven approach to model architectural safeguards for AI components and analyse their effect on the overall system reliability.
Some Supervision Required: Incorporating Oracle Policies in Reinforcement Learning via Epistemic Uncertainty Metrics
An inherent problem of reinforcement learning is performing exploration of an
environment through random actions, of which a large portion can be
unproductive. Instead, exploration can be improved by initializing the learning
policy with an existing (previously learned or hard-coded) oracle policy,
offline data, or demonstrations. In the case of using an oracle policy, it can
be unclear how best to incorporate the oracle policy's experience into the
learning policy in a way that maximizes learning sample efficiency. In this
paper, we propose a method termed Critic Confidence Guided Exploration (CCGE)
for incorporating such an oracle policy into standard actor-critic
reinforcement learning algorithms. More specifically, CCGE takes in the oracle
policy's actions as suggestions and incorporates this information into the
learning scheme when uncertainty is high, while ignoring it when the
uncertainty is low. CCGE is agnostic to methods of estimating uncertainty, and
we show that it is equally effective with two different techniques.
Empirically, we evaluate the effect of CCGE on various benchmark reinforcement
learning tasks, and show that this idea can lead to improved sample efficiency
and final performance. Furthermore, when evaluated on sparse reward
environments, CCGE is able to perform competitively against adjacent algorithms
that also leverage an oracle policy. Our experiments show that it is possible
to utilize uncertainty as a heuristic to guide exploration using an oracle in
reinforcement learning. We expect that this will inspire more research in this
direction, where various heuristics are used to determine the guidance
provided to the learner.
Comment: Under review at TML
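A minimal sketch of the gating logic (ours; the paper is agnostic to the
uncertainty estimator, so ensemble disagreement below is just one plausible
choice):

```python
import numpy as np

def select_action(critics, learner_action, oracle_action, state, threshold):
    """Uncertainty-gated action selection in the spirit of CCGE (sketch, ours).

    `critics` is an ensemble of Q-functions; their disagreement on the
    learner's action serves as an epistemic uncertainty estimate. When it
    exceeds `threshold`, defer to the oracle's suggestion.
    """
    q_values = np.array([q(state, learner_action) for q in critics])
    uncertainty = q_values.std()
    return oracle_action if uncertainty > threshold else learner_action

# Toy usage with two hypothetical linear critics that disagree.
critics = [lambda s, a: s @ a, lambda s, a: 0.5 * (s @ a) + 1.0]
s = np.array([1.0, 2.0])
a_learner, a_oracle = np.array([0.1, 0.1]), np.array([0.5, 0.0])
print(select_action(critics, a_learner, a_oracle, s, threshold=0.3))
```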
A Survey of Zero-shot Generalisation in Deep Reinforcement Learning
The study of zero-shot generalisation (ZSG) in deep Reinforcement Learning
(RL) aims to produce RL algorithms whose policies generalise well to novel
unseen situations at deployment time, avoiding overfitting to their training
environments. Tackling this is vital if we are to deploy reinforcement learning
algorithms in real-world scenarios, where the environment will be diverse,
dynamic and unpredictable. This survey is an overview of this nascent field. We
rely on a unifying formalism and terminology for discussing different ZSG
problems, building upon previous works. We go on to categorise existing
benchmarks for ZSG, as well as current methods for tackling these problems.
Finally, we provide a critical discussion of the current state of the field,
including recommendations for future work. Among other conclusions, we argue
that taking a purely procedural content generation approach to benchmark design
is not conducive to progress in ZSG, we suggest fast online adaptation and
tackling RL-specific problems as some areas for future work on methods for ZSG,
and we recommend building benchmarks in underexplored problem settings such as
offline RL ZSG and reward-function variation.
ContraBAR: Contrastive Bayes-Adaptive Deep RL
In meta reinforcement learning (meta RL), an agent seeks a Bayes-optimal
policy -- the optimal policy when facing an unknown task that is sampled from
some known task distribution. Previous approaches tackled this problem by
inferring a belief over task parameters, using variational inference methods.
Motivated by recent successes of contrastive learning approaches in RL, such as
contrastive predictive coding (CPC), we investigate whether contrastive methods
can be used for learning Bayes-optimal behavior. We begin by proving that
representations learned by CPC are indeed sufficient for Bayes optimality.
Based on this observation, we propose a simple meta RL algorithm that uses CPC
in lieu of variational belief inference. Our method, ContraBAR, achieves
comparable performance to state-of-the-art in domains with state-based
observations and circumvents the computational toll of future observation
reconstruction, enabling learning in domains with image-based observations. It
can also be combined with image augmentations for domain randomization and used
seamlessly in both online and offline meta RL settings.
Comment: ICML 2023. PyTorch code available at
https://github.com/ec2604/ContraBA
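For intuition, here is a minimal CPC-style InfoNCE objective of the kind the
method builds on (our simplification, not the released implementation): a
context embedding must score its own future encoding above in-batch negatives
under a bilinear critic.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(context, future, W):
    """CPC-style InfoNCE loss (sketch). Each context vector should score its
    own future (the positive) above the other futures in the batch (negatives).

    context: (B, d_c) summaries of the trajectory so far
    future:  (B, d_f) encodings of the actual next observations
    W:       (d_c, d_f) bilinear critic parameters
    """
    scores = context @ W @ future.T            # (B, B) logits; diagonal = positives
    labels = torch.arange(context.size(0))     # positive index for each row
    return F.cross_entropy(scores, labels)

B, d_c, d_f = 8, 16, 16
loss = info_nce_loss(torch.randn(B, d_c), torch.randn(B, d_f),
                     torch.randn(d_c, d_f, requires_grad=True))
loss.backward()
```

Because the loss only contrasts encodings, no pixel-level reconstruction of
future observations is ever required, which is what makes image-based domains
tractable.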
Future-Dependent Value-Based Off-Policy Evaluation in POMDPs
We study off-policy evaluation (OPE) for partially observable MDPs (POMDPs)
with general function approximation. Existing methods such as sequential
importance sampling estimators and fitted-Q evaluation suffer from the curse of
horizon in POMDPs. To circumvent this problem, we develop a novel model-free
OPE method by introducing future-dependent value functions that take future
proxies as inputs. Future-dependent value functions play similar roles as
classical value functions in fully-observable MDPs. We derive a new Bellman
equation for future-dependent value functions as conditional moment equations
that use history proxies as instrumental variables. We further propose a
minimax learning method to learn future-dependent value functions using the new
Bellman equation. We obtain a PAC result, which implies that our OPE estimator
is consistent as long as futures and histories contain sufficient information
about latent states and Bellman completeness holds. Finally, we extend our
methods to learning of dynamics and establish the connection between our
approach and the well-known spectral learning methods in POMDPs.
Comment: This paper was accepted at NeurIPS 202
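Schematically (notation ours), the new Bellman equation is a conditional
moment restriction in which the future-dependent value function V takes a
future proxy F_t as input while the history proxy H_t acts as the instrument,
and the minimax method can be read as fitting V against a class of test
functions of that instrument:

```latex
% Schematic (notation ours): V is the future-dependent value function,
% F_t a future proxy, H_t a history proxy used as instrumental variable.
\mathbb{E}\bigl[\, R_t + \gamma\, V(F_{t+1}) - V(F_t) \;\big|\; H_t \,\bigr] = 0,
\qquad
\widehat{V} \in \arg\min_{V \in \mathcal{V}} \; \max_{g \in \mathcal{G}} \;
\Bigl( \mathbb{E}\bigl[\, g(H_t)\bigl(R_t + \gamma\, V(F_{t+1}) - V(F_t)\bigr) \bigr] \Bigr)^{2}.
```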
Privileged Knowledge Distillation for Sim-to-Real Policy Generalization
Reinforcement Learning (RL) has recently achieved remarkable success in
robotic control. However, most RL methods operate in simulated environments
where privileged knowledge (e.g., dynamics, surroundings, terrains) is readily
available. Conversely, in real-world scenarios, robot agents usually rely
solely on local states (e.g., proprioceptive feedback of robot joints) to
select actions, leading to a significant sim-to-real gap. Existing methods
address this gap by either gradually reducing the reliance on privileged
knowledge or performing a two-stage policy imitation. However, we argue that
these methods are limited in their ability to fully leverage the privileged
knowledge, resulting in suboptimal performance. In this paper, we propose a
novel single-stage privileged knowledge distillation method called the
Historical Information Bottleneck (HIB) to narrow the sim-to-real gap. In
particular, HIB learns a privileged knowledge representation from historical
trajectories by capturing the underlying changeable dynamic information.
Theoretical analysis shows that the learned privileged knowledge representation
helps reduce the value discrepancy between the oracle and learned policies.
Empirical experiments on both simulated and real-world tasks demonstrate that
HIB yields improved generalizability compared to previous methods.
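A minimal sketch of an information-bottleneck-style objective consistent with
this description (ours, not the paper's exact loss): encode the history into a
stochastic latent, train it to predict the privileged signal available in
simulation, and penalize the KL to a standard normal prior to keep the
representation compact.

```python
import torch
import torch.nn as nn

class HistoryBottleneck(nn.Module):
    """Sketch of a history encoder with an information-bottleneck loss (ours)."""

    def __init__(self, hist_dim, latent_dim, priv_dim):
        super().__init__()
        self.encoder = nn.Linear(hist_dim, 2 * latent_dim)  # outputs (mu, log_var)
        self.decoder = nn.Linear(latent_dim, priv_dim)      # predicts privileged info

    def loss(self, history, privileged, beta=1e-3):
        mu, log_var = self.encoder(history).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization
        recon = (self.decoder(z) - privileged).pow(2).mean()    # fit priv. signal
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var).sum(-1).mean()
        return recon + beta * kl   # bottleneck: predict the target, stay near prior

model = HistoryBottleneck(hist_dim=32, latent_dim=8, priv_dim=4)
loss = model.loss(torch.randn(64, 32), torch.randn(64, 4))
loss.backward()
```

At deployment, only the history encoder is needed, so the policy can act from
local states alone while still benefiting from the privileged supervision seen
in simulation.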