57 research outputs found
Safe and Efficient Exploration of Human Models During Human-Robot Interaction
Many collaborative human-robot tasks require the robot to stay safe and work
efficiently around humans. Since the robot can only stay safe with respect to
its own model of the human, we want the robot to learn a good model of the
human in order to act both safely and efficiently. This paper studies methods
that enable a robot to safely explore the space of a human-robot system to
improve the robot's model of the human, which will consequently allow the robot
to access a larger state space and work better with the human. In particular,
we introduce active exploration under the framework of energy-function based
safe control, investigate the effect of different active exploration
strategies, and finally analyze the effect of safe active exploration on both
analytical and neural network human models.
Comment: IROS 202
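To illustrate the idea of safe active exploration under energy-function based safe control, here is a minimal sketch: sample candidate controls, keep only those that force the energy function to decrease near the unsafe set, and pick the most informative survivor. The energy function, dynamics, uncertainty measure, and gains below are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
ETA = 0.1  # required decrease rate of the energy function (assumed)

def phi(x):
    # Energy function: positive when the state is unsafe (assumed form).
    return 1.0 - np.linalg.norm(x)

def grad_phi(x):
    return -x / np.linalg.norm(x)

def model_uncertainty(x, u):
    # Stand-in for the human model's predictive uncertainty; here we simply
    # prefer larger, more informative motions (a placeholder heuristic).
    return float(np.linalg.norm(u))

def explore(x, n_samples=256):
    """Safe active exploration: filter sampled controls by the energy-function
    constraint (phi_dot <= -ETA when phi >= 0), then maximize information."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
    if phi(x) >= 0:
        safe = [u for u in candidates if grad_phi(x) @ u <= -ETA]
    else:
        safe = list(candidates)
    assert safe, "no safe exploratory control found"
    return max(safe, key=lambda u: model_uncertainty(x, u))
```

The key point is the ordering: safety is a hard filter applied first, and the exploration objective only ranks the controls that remain.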
Safe Control Algorithms Using Energy Functions: A Unified Framework, Benchmark, and New Directions
Safe autonomy is important in many application domains, especially for
applications involving interactions with humans. Existing safe control
algorithms are similar to one another in that they all provide
control inputs to maintain a low value of an energy function that measures
safety. In different methods, the energy function is called a potential
function, a safety index, or a barrier function. The connections and relative
advantages among these methods remain unclear. This paper introduces a unified
framework to derive safe control laws using energy functions. We demonstrate
how to integrate existing controllers based on the potential field method, the
safe set algorithm, the barrier function method, and the sliding mode algorithm into this
unified framework. In addition to theoretical comparison, this paper also
introduces a benchmark which implements and compares existing methods on a
variety of problems with different system dynamics and interaction modes. Based
on the comparison results, a new method, called the sublevel safe set
algorithm, is derived under the unified framework by optimizing the
hyperparameters. The proposed algorithm achieves the best performance in terms
of safety and efficiency on the vast majority of benchmark tests.
Comment: This is the extended version of a paper submitted to the 58th Conference on Decision and Control. March, 2019; revised August, 201
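The common structure the abstract describes, a controller that keeps an energy function low, can be sketched as a minimal safety filter in the spirit of the safe set algorithm. The single-integrator dynamics, the distance-based safety index, and the gains are assumptions for illustration only.

```python
import numpy as np

D_MIN = 1.0  # required clearance from the obstacle (assumed)
ETA = 0.5    # required decrease rate of the energy function (assumed)

def safety_index(x, obstacle):
    """Energy function phi: positive when the state is unsafe."""
    return D_MIN - np.linalg.norm(x - obstacle)

def safe_control(x, u_ref, obstacle):
    """Project a reference control onto the half-space {u : grad_phi . u <= -ETA}
    whenever phi >= 0. Single-integrator dynamics x_dot = u are assumed, so
    phi_dot = grad_phi . u and the projection has a closed form."""
    phi = safety_index(x, obstacle)
    if phi < 0:
        return u_ref  # safely away from the boundary: pass the reference through
    grad = -(x - obstacle) / np.linalg.norm(x - obstacle)  # d(phi)/dx
    violation = grad @ u_ref + ETA
    if violation <= 0:
        return u_ref  # reference already decreases phi fast enough
    # Minimal-deviation correction onto the constraint boundary.
    return u_ref - violation * grad / (grad @ grad)
```

The different methods the abstract unifies vary mainly in how the energy function and the required decrease condition are chosen; the filtering structure above stays the same.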
Online Verification of Deep Neural Networks under Domain or Weight Shift
Although neural networks are widely used, it remains challenging to formally
verify the safety and robustness of neural networks in real-world applications.
Existing methods are designed to verify the network before deployment, which
limits them to relatively simple specifications and fixed networks. These methods
are not ready to be applied to real-world problems with complex and/or
dynamically changing specifications and networks. To effectively handle
dynamically changing specifications and networks, the verification needs to be
performed online when these changes take place. However, it is still
challenging to run existing verification algorithms online. Our key insight is
that we can leverage the temporal dependencies of these changes to accelerate
the verification process, e.g., by warm starting new online verification using
previously verified results. This paper establishes a novel framework for
scalable online verification to solve real-world verification problems with
dynamically changing specifications and/or networks, known as domain shift and
weight shift, respectively. We propose three types of techniques (branch
management, perturbation tolerance analysis, and incremental computation) to
accelerate the online verification of deep neural networks. Experiment results
show that our online verification algorithm is up to two orders of magnitude
faster than existing verification algorithms, and thus can scale to real-world
applications.
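The warm-starting idea can be illustrated with a toy example: verify an output bound of a one-layer ReLU network with interval bound propagation (IBP), cache each input branch's safety margin, and after a small weight shift re-verify only branches whose cached margin does not already tolerate the perturbation. The network, property, and tolerance rule are simplified assumptions, not the paper's actual techniques.

```python
import numpy as np

def ibp_upper(W, lo, hi):
    """Upper bound of sum(relu(W @ x)) over the box [lo, hi] via IBP."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    upper = Wp @ hi + Wn @ lo  # elementwise upper bound of W @ x on the box
    return float(np.maximum(upper, 0).sum())

def verify(W, branches, threshold, cache=None, w_change=0.0):
    """Check sum(relu(W @ x)) <= threshold on each input branch, reusing
    cached margins to skip branches unaffected by a small weight shift."""
    cache = cache or {}
    results, checked = {}, 0
    for i, (lo, hi) in enumerate(branches):
        margin = cache.get(i, -np.inf)
        # Assumed tolerance rule: a per-entry weight change of w_change can
        # raise the IBP bound by at most w_change * ||hi||_1 per output row.
        if margin > w_change * np.abs(hi).sum() * W.shape[0]:
            results[i] = (True, margin)  # previous certificate still valid
            continue
        checked += 1
        ub = ibp_upper(W, lo, hi)
        results[i] = (ub <= threshold, threshold - ub)
    return results, checked
```

After a small weight update, a second call with the cached margins re-verifies far fewer branches than verifying from scratch, which is the source of the speedup the abstract reports.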