
    Continuous Relaxation of Symbolic Planner for One-Shot Imitation Learning

    We address one-shot imitation learning, where the goal is to execute a previously unseen task based on a single demonstration. While there has been exciting progress in this direction, most approaches still require a few hundred tasks for meta-training, which limits their scalability. Our main contribution is to formulate one-shot imitation learning as a symbolic planning problem combined with a symbol grounding problem. This formulation disentangles policy execution from inter-task generalization and leads to better data efficiency. The key technical challenge is that symbol grounding is prone to error with limited training data, which leads to subsequent symbolic planning failures. We address this challenge by proposing a continuous relaxation of the discrete symbolic planner that plans directly on the probabilistic outputs of the symbol grounding model. Our continuous relaxation of the planner can still leverage the information contained in the probabilistic symbol grounding and significantly improves over the baseline planner on one-shot imitation learning tasks without large training data.
    Comment: IROS 201
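    The relaxation described above can be illustrated with a toy sketch (the symbol names, probabilities, and function names below are hypothetical, not taken from the paper): instead of thresholding the grounding model's probabilities into hard truth values before planning, a relaxed planner can score a plan's preconditions by their log-probabilities, so uncertain groundings still carry usable information.

    ```python
    import math

    # Hypothetical grounding-model output: probability that each symbol holds.
    grounding = {
        "on(block_a, table)": 0.95,
        "clear(block_a)": 0.55,
        "holding(nothing)": 0.90,
    }

    def hard_precondition_check(preconds, probs, threshold=0.5):
        # Discrete planner: threshold each probability into a boolean;
        # a near-threshold grounding error makes the whole plan fail.
        return all(probs[p] >= threshold for p in preconds)

    def soft_precondition_score(preconds, probs):
        # Continuous relaxation: sum of log-probabilities, so candidate
        # plans can be ranked even when the grounding is uncertain.
        return sum(math.log(probs[p]) for p in preconds)

    preconds = ["clear(block_a)", "holding(nothing)"]
    print(hard_precondition_check(preconds, grounding))
    print(soft_precondition_score(preconds, grounding))
    ```

    A planner using the soft score would prefer, among plans whose preconditions all pass the hard check, the one whose groundings are jointly most probable.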

    A Hybrid Architecture for Hierarchical Reinforcement Learning

    Autonomous robot systems operating in the real world have to be able to learn new tasks and environmental conditions without the need for an outside teacher. While reinforcement learning is a good formalism for this, its long learning times and need for extensive exploration often make it impracticable for on-line learning on complex systems. The hybrid architecture presented in this paper addresses this issue by applying reinforcement learning on top of an automatically derived abstract Discrete Event Dynamic System (DEDS) supervisor. This reduces the problem of policy acquisition to learning to coordinate a set of closed-loop control strategies in order to perform a given task. Besides dramatically reducing the complexity of the learning task, this framework also permits the incorporation of a priori knowledge and facilitates the inclusion of learned policies as actions in order to transfer skills to new task domains. To demonstrate the applicability of this approach, the architecture is used to learn locomotion gaits on a four-legged robot platform.
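    The core idea, learning to coordinate a small set of closed-loop controllers restricted by a supervisor, can be sketched with tabular Q-learning. Everything below is a toy illustration under stated assumptions: the mode names, controller names, and three-mode gait dynamics are hypothetical stand-ins for the real DEDS abstraction and robot.

    ```python
    import random

    random.seed(0)

    # Hypothetical abstraction: states are supervisor modes; actions are the
    # closed-loop controllers the DEDS supervisor permits in each mode.
    allowed = {
        0: ["lift_leg", "shift_weight"],
        1: ["place_leg", "shift_weight"],
        2: ["push_body"],
    }

    def step(state, action):
        # Toy deterministic dynamics standing in for the robot: the "correct"
        # controller advances the gait cycle; any other controller stalls.
        correct = {0: "lift_leg", 1: "place_leg", 2: "push_body"}
        if action == correct[state]:
            nxt = (state + 1) % 3
            return nxt, (1.0 if nxt == 0 else 0.0)  # reward on completing a cycle
        return state, -0.1

    # Tabular Q-learning over the abstract (mode, controller) space only --
    # far smaller than the robot's raw state-action space.
    Q = {(s, a): 0.0 for s in allowed for a in allowed[s]}
    alpha, gamma, eps = 0.5, 0.9, 0.1
    state = 0
    for _ in range(1000):
        acts = allowed[state]
        if random.random() < eps:
            action = random.choice(acts)          # explore
        else:
            action = max(acts, key=lambda a: Q[(state, a)])  # exploit
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in allowed[nxt])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

    policy = {s: max(allowed[s], key=lambda a: Q[(s, a)]) for s in allowed}
    print(policy)
    ```

    Because the supervisor already rules out unsafe or useless controllers in each mode, the learner only has to rank a handful of permitted actions per mode, which is what makes on-line learning tractable in this framework.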