Learning to Plan by Learning Rules

Abstract

Many environments involve rules to follow and tasks to complete; for example, a chef cooking a dish follows a recipe, and a person driving follows the rules of the road. People are naturally fluent with rules: we can learn rules efficiently; we can follow rules; we can interpret rules and explain them to others; and we can rapidly adjust to modified rules, such as a new recipe, without needing to relearn everything from scratch. By contrast, deep reinforcement learning (DRL) algorithms are ill-suited to learning policies in rule-based environments, as satisfying rules often involves executing lengthy tasks with sparse rewards. Furthermore, learned DRL policies are difficult, if not impossible, to interpret and are not composable. The aim of this thesis is to develop a reinforcement learning framework for rule-based environments that can efficiently learn policies that are interpretable, rule-satisfying, and composable. We achieve interpretability by representing rules as automata or Linear Temporal Logic (LTL) formulas in a hierarchical Markov Decision Process (MDP). We achieve satisfaction by planning over the hierarchical MDP using a modified version of value iteration. We achieve composability by building on a hierarchical reinforcement learning (HRL) framework called the options framework, in which low-level options can be composed arbitrarily. Lastly, we achieve data-efficient learning by integrating our HRL framework into a Bayesian model that can infer a distribution over LTL formulas given a low-level environment and a set of expert trajectories. We demonstrate the effectiveness of our approach via a number of rule-learning and planning experiments in both simulated and real-world environments.
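To make the general idea concrete, the following is a minimal sketch, not the thesis implementation, of planning over a rule expressed as an automaton: a hand-built acceptor (standing in for an LTL formula compiled to an automaton) is composed with a toy corridor MDP, and value iteration over the product rewards only transitions that satisfy the rule. The toy environment, the rule "eventually reach cell 4", and all names below are illustrative assumptions.

```python
# Value iteration over the product of a toy MDP and a rule automaton.
# Everything here is an illustrative assumption, not the thesis codebase.

# Toy 1-D corridor MDP: states 0..4, deterministic left/right moves.
ENV_STATES = range(5)
ACTIONS = {"left": -1, "right": +1}

# Two-state acceptor for "eventually reach cell 4":
# q = 0 means the rule is not yet satisfied, q = 1 is accepting and absorbing.
def automaton_step(q, env_state):
    return 1 if (q == 1 or env_state == 4) else 0

ACCEPTING = {1}
GAMMA = 0.95

def value_iteration(n_iters=100):
    # Product states are (environment state, automaton state) pairs.
    V = {(s, q): 0.0 for s in ENV_STATES for q in (0, 1)}
    for _ in range(n_iters):
        new_V = {}
        for (s, q) in V:
            best = 0.0
            for delta in ACTIONS.values():
                s2 = min(max(s + delta, 0), 4)   # clipped deterministic move
                q2 = automaton_step(q, s2)       # automaton tracks rule progress
                # Reward only when the transition newly satisfies the rule.
                r = 1.0 if (q2 in ACCEPTING and q not in ACCEPTING) else 0.0
                best = max(best, r + GAMMA * V[(s2, q2)])
            new_V[(s, q)] = best
        V = new_V
    return V

if __name__ == "__main__":
    V = value_iteration()
    # Value of each start cell when the rule is not yet satisfied.
    print({s: round(V[(s, 0)], 3) for s in ENV_STATES})
```

A greedy policy with respect to these product-state values heads toward cell 4 from any start cell, illustrating how tying reward to automaton progress turns a sparse rule-satisfaction objective into a plannable one.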
