Verifiable Learned Behaviors via Motion Primitive Composition: Applications to Scooping of Granular Media
A robotic behavior model that can reliably generate behaviors from natural
language inputs in real time would substantially expedite the adoption of
industrial robots due to enhanced system flexibility. To facilitate these
efforts, we construct a framework in which learned behaviors, created by a
natural language abstractor, are verifiable by construction. Leveraging recent
advancements in motion primitives and probabilistic verification, we construct
a natural-language behavior abstractor that generates behaviors by synthesizing
a directed graph over the provided motion primitives. If these component motion
primitives are constructed according to the criteria we specify, the resulting
behaviors are probabilistically verifiable. We demonstrate this verifiable
behavior generation capacity in both simulation on an exploration task and on
hardware with a robot scooping granular media.
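As a rough illustration of the verification idea, the sketch below composes motion primitives into a chained behavior and lower-bounds its success probability by the product of per-primitive probabilities, assuming independent, individually verified primitives. The names (MotionPrimitive, Behavior, verified_success_bound) and the numbers are hypothetical stand-ins, not the paper's actual API or criteria.

```python
from dataclasses import dataclass

@dataclass
class MotionPrimitive:
    name: str
    success_prob: float  # probabilistic verification result for this primitive

class Behavior:
    """A behavior: a directed chain (path) of component motion primitives."""
    def __init__(self, primitives):
        self.primitives = primitives

    def verified_success_bound(self) -> float:
        # Under an independence assumption, the product of the component
        # primitives' verified probabilities lower-bounds the behavior's
        # overall success probability.
        bound = 1.0
        for p in self.primitives:
            bound *= p.success_prob
        return bound

# Toy scooping behavior assembled from four hypothetical primitives.
scoop = Behavior([
    MotionPrimitive("approach", 0.99),
    MotionPrimitive("plunge", 0.97),
    MotionPrimitive("scoop", 0.95),
    MotionPrimitive("retract", 0.99),
])
print(f"verified success >= {scoop.verified_success_bound():.3f}")
```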
IIFL: Implicit Interactive Fleet Learning from Heterogeneous Human Supervisors
Imitation learning has been applied to a range of robotic tasks, but can
struggle when (1) robots encounter edge cases that are not represented in the
training data (distribution shift) or (2) the human demonstrations are
heterogeneous: taking different paths around an obstacle, for instance
(multimodality). Interactive fleet learning (IFL) mitigates distribution shift
by allowing robots to access remote human teleoperators during task execution
and learn from them over time, but is not equipped to handle multimodality.
Recent work proposes Implicit Behavior Cloning (IBC), which is able to
represent multimodal demonstrations using energy-based models (EBMs). In this
work, we propose addressing both multimodality and distribution shift with
Implicit Interactive Fleet Learning (IIFL), the first extension of implicit
policies to interactive imitation learning (including the single-robot,
single-human setting). IIFL quantifies uncertainty using a novel application of
Jeffreys divergence to EBMs. While IIFL is more computationally expensive than
explicit methods, results suggest that IIFL achieves 4.5x higher return on
human effort in simulation experiments and an 80% higher success rate in a
physical block pushing task over (Explicit) IFL, IBC, and other baselines when
human supervision is heterogeneous.
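To make the uncertainty mechanism concrete, here is a minimal sketch of Jeffreys divergence (the symmetrized KL divergence) between the action distributions induced by two energy-based models over a discretized action set. The toy energy functions, the ebm_probs helper, and the hand-off threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ebm_probs(energies: np.ndarray) -> np.ndarray:
    """Boltzmann distribution over candidate actions: p(a) proportional to exp(-E(a))."""
    z = np.exp(-energies - np.max(-energies))  # numerically stabilized softmax
    return z / z.sum()

def jeffreys_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Symmetrized KL divergence: J(P, Q) = KL(P||Q) + KL(Q||P)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Two hypothetical ensemble members score the same candidate actions;
# disagreement between their induced distributions signals uncertainty.
actions = np.linspace(-1.0, 1.0, 101)
e1 = 5.0 * (actions - 0.3) ** 2   # member 1 prefers actions near 0.3
e2 = 5.0 * (actions + 0.4) ** 2   # member 2 prefers actions near -0.4
uncertainty = jeffreys_divergence(ebm_probs(e1), ebm_probs(e2))

THRESHOLD = 1.0  # hypothetical hand-off threshold
print(f"Jeffreys divergence = {uncertainty:.2f}")
if uncertainty > THRESHOLD:
    print("high uncertainty: hand control to a remote human supervisor")
```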
Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision
Learning-based methods in robotics hold the promise of generalization, but
what can be done if a learned policy does not generalize to a new situation? In
principle, if an agent can at least evaluate its own success (i.e., with a
reward classifier that generalizes well even when the policy does not), it
could actively practice the task and finetune the policy in this situation. We
study this problem in the setting of industrial insertion tasks, such as
inserting connectors in sockets and setting screws. Existing algorithms rely on
precise localization of the connector or socket and carefully managed physical
setups, such as assembly lines, to succeed at the task. But in unstructured
environments such as homes or even some industrial settings, robots cannot rely
on precise localization and may be tasked with previously unseen connectors.
Offline reinforcement learning on a variety of connector insertion tasks is a
potential solution, but what if the robot is tasked with inserting a previously
unseen connector? In such a scenario, we still need methods that can
robustly solve such tasks with online practice. One of the main observations we
make in this work is that, with a suitable representation learning and domain
generalization approach, it can be significantly easier for the reward function
to generalize to a new but structurally similar task (e.g., inserting a new
type of connector) than for the policy. This means that a learned reward
function can be used to facilitate the finetuning of the robot's policy in
situations where the policy fails to generalize in zero shot, but the reward
function generalizes successfully. We show that such an approach can be
instantiated in the real world, pretrained on 50 different connectors, and
successfully finetuned to new connectors via the learned reward function.
Videos can be viewed at https://sites.google.com/view/learningonthejob.
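The following self-contained toy sketches the self-rewarding loop: a pretrained policy that mis-estimates a new socket's position practices online, labeling its own attempts with a reward classifier that, unlike the policy, generalizes to the new connector. Everything here (the 1-D insertion, reward_classifier, the Gaussian policy and its update rule) is a hypothetical stand-in for the paper's vision-based components.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_SOCKET = 0.7   # unseen connector: socket at an unfamiliar offset

def reward_classifier(final_pos: float) -> float:
    # Assumed to generalize across connectors: returns 1.0 on success.
    return float(abs(final_pos - TRUE_SOCKET) < 0.1)

# Pretrained policy: a Gaussian over target positions, initially centered
# on the wrong offset because this connector was never seen offline.
mean, std = 0.0, 0.3

for episode in range(200):
    attempt = rng.normal(mean, std)   # policy acts (fails zero-shot at first)
    r = reward_classifier(attempt)    # robot labels its own attempt, no human
    if r > 0:                         # finetune the policy toward successes
        mean += 0.5 * (attempt - mean)
        std = max(0.05, 0.95 * std)

print(f"finetuned policy mean = {mean:.2f} (socket at {TRUE_SOCKET})")
```

The design point being illustrated: the success signal (the reward classifier) transfers to the structurally similar new task even though the policy does not, so online practice alone is enough to close the gap.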