Recovering from External Disturbances in Online Manipulation through State-Dependent Revertive Recovery Policies
Robots are increasingly entering uncertain and unstructured environments.
Within these, robots are bound to face unexpected external disturbances like
accidental human or tool collisions. Robots must develop the capacity to
respond to such unexpected events: not only identifying the sudden anomaly,
but also deciding how to handle it. In this work, we contribute a recovery
policy that allows a robot to recover from various anomalous scenarios across
different tasks and conditions in a consistent and robust fashion. The system
organizes tasks as a sequence of nodes composed of internal modules such as
motion generation and introspection. When an introspection module flags an
anomaly, the recovery strategy is triggered and reverts the task execution by
selecting a target node as a function of a state dependency chart. The new
skill allows the robot to overcome the effects of the external disturbance and
conclude the task. Our system recovers from accidental human and tool
collisions in a number of tasks. Notably, we test the robustness of the
recovery system by triggering anomalies at each node in the task graph,
showing robust recovery everywhere in the task. We also trigger multiple and
repeated anomalies at each node, showing that the recovery system
consistently recovers in the presence of strong and pervasive anomalous
conditions. Robust recovery systems
will be key enablers for long-term autonomy in robot systems. Supplemental info
including code, data, graphs, and result analysis can be found at [1]. Comment: 8 pages, 8 figures, 1 table
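The revertive recovery idea described above can be illustrated with a minimal sketch. The node names, the dependency chart, and the control loop below are hypothetical stand-ins for illustration, not the paper's actual system:

```python
# Hypothetical sketch of a state-dependent revertive recovery policy.
# Node names and the dependency chart are illustrative, not the paper's.

# Task organized as an ordered sequence of nodes (skills).
TASK = ["approach", "pick", "transport", "place"]

# State dependency chart: when an anomaly is flagged at a node,
# revert to the node whose preconditions still hold.
RECOVERY_TARGET = {
    "approach": "approach",   # restart the approach itself
    "pick": "approach",       # a disturbed pick requires re-approaching
    "transport": "pick",      # a dropped object requires re-picking
    "place": "transport",     # a disturbed place requires re-transporting
}

def run_task(anomaly_at=None, max_recoveries=5):
    """Execute nodes in order; on a flagged anomaly, revert per the chart."""
    executed, recoveries, i = [], 0, 0
    while i < len(TASK):
        node = TASK[i]
        executed.append(node)
        if node == anomaly_at and recoveries < max_recoveries:
            recoveries += 1
            anomaly_at = None                      # disturbance assumed transient here
            i = TASK.index(RECOVERY_TARGET[node])  # revert to the target node
        else:
            i += 1
    return executed

print(run_task(anomaly_at="transport"))
# ['approach', 'pick', 'transport', 'pick', 'transport', 'place']
```

Because the recovery target is a function of the disturbed node, the same policy handles an anomaly at any point in the task, which mirrors the per-node robustness tests described in the abstract.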
Fast, Robust, and Versatile Event Detection through HMM Belief State Gradient Measures
Event detection is a critical feature in data-driven systems as it assists
with the identification of nominal and anomalous behavior. Event detection is
increasingly relevant in robotics as robots operate with greater autonomy in
increasingly unstructured environments. In this work, we present an accurate,
robust, fast, and versatile measure for skill and anomaly identification. A
theoretical proof establishes the link between the derivative of the
log-likelihood of the HMM filtered belief state and the latest emission
probabilities. The key insight is the inverse relationship in which gradient
analysis is used for skill and anomaly identification. Our measure showed
better performance across all metrics than related state-of-the-art works. The
result is broadly applicable to domains that use HMMs for event detection. Comment: 8 pages, 7 figures, double column, IEEE conference format
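The measure can be sketched as follows. In the HMM forward filter, the per-step increment of the log-likelihood is the log normalizer of the recursion, which is driven by the latest emission probabilities; a sharp drop in this gradient flags an anomalous observation. The transition/emission matrices and the threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch (not the paper's implementation) of an HMM belief-state
# log-likelihood gradient measure for anomaly flagging.

A = np.array([[0.9, 0.1], [0.1, 0.9]])   # transition matrix (assumed)
B = np.array([[0.8, 0.2], [0.2, 0.8]])   # emission matrix (assumed)
pi = np.array([0.5, 0.5])                # initial state distribution

def loglik_gradient(observations):
    """Per-step log-likelihood increments from the HMM forward filter."""
    alpha = pi * B[:, observations[0]]
    grads = [np.log(alpha.sum())]
    alpha /= alpha.sum()
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        grads.append(np.log(alpha.sum()))  # log normalizer = LL increment
        alpha /= alpha.sum()
    return np.array(grads)

obs = [0, 0, 0, 0, 1, 0, 0]               # one unlikely emission at t = 4
g = loglik_gradient(obs)
anomalies = np.where(g < np.log(0.3))[0]  # threshold is illustrative
print(anomalies)                          # flags t = 4
```

The appeal of this gradient view is that it needs only the running filter, so skill and anomaly identification come at negligible extra cost on top of ordinary HMM inference.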
Intent Classification during Human-Robot Contact
Robots are used in many areas of industry and automation. Currently, human safety is
ensured through physical separation and safeguards. However, there is increasing interest
in allowing robots and humans to work in close proximity or on collaborative tasks. In
these cases, there is a need for the robot itself to recognize if a collision has occurred and
respond in a way which prevents further damage or harm. At the same time, there is
a need for robots to respond appropriately to intentional contact during interactive and
collaborative tasks.
This thesis proposes a classification-based approach for differentiating between several
intentional contact types, accidental contact, and no-contact situations. A dataset is
developed using the Franka Emika Panda robot arm. Several machine learning algorithms,
including Support Vector Machines, Convolutional Neural Networks, and Long Short-Term
Memory Networks, are applied and used to perform classification on this dataset.
First, Support Vector Machines were used to perform feature identification. Comparisons
were made between classification on raw sensor data and on data calculated from a robot
dynamic model, as well as between linear and nonlinear features. The results show that
very few features are needed to achieve the best results, and accuracy is highest when
combining raw data from sensors with model-based data. Accuracies of up to 87%
were achieved. Classification on a per-joint basis, compared to the whole arm, was also
tested and shown not to provide additional benefit.
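The SVM-based contact classification can be sketched with synthetic data. The feature construction, class labels, and kernel choice below are illustrative assumptions standing in for the thesis's real Panda sensor and dynamic-model features:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative sketch only: synthetic feature vectors stand in for the
# thesis's real sensor and model-based data from the Panda arm.
rng = np.random.default_rng(0)
CLASSES = ["no_contact", "accidental", "intentional"]  # assumed label set

def make_samples(n, mean, scale=0.5):
    """Synthetic 4-D feature vectors (e.g., joint-torque residuals)."""
    return rng.normal(mean, scale, size=(n, 4))

# Three well-separated clusters, one per contact class (hypothetical).
X = np.vstack([make_samples(100, m) for m in (0.0, 3.0, 1.5)])
y = np.repeat(np.arange(len(CLASSES)), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # nonlinear decision boundary
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

An RBF kernel is used here as one way to realize the nonlinear-feature comparison the thesis describes; on real contact data the raw and model-based features would be concatenated before training.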
Second, Convolutional Neural Networks and Long Short-Term Memory Networks were
evaluated for the classification task. A simulated dataset was generated and augmented
with noise for training the classifiers. Experiments show that additional simulated and
augmented data can improve accuracy in some cases, as well as reduce the amount of
real-world data required to train the networks. Accuracies of up to 93% and 84% were
achieved by the CNN and LSTM networks, respectively. The CNN achieved an accuracy of 87%
using all real data, and up to 93% using only 50% of the real data with simulated data
added to the training set, as well as with augmented data. The LSTM achieved an accuracy
of 75% using all real data, and nearly 80% using 75% of real data with augmented
simulation data.
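The noise-augmentation step can be sketched as follows. The noise level, scaling range, and trace dimensions are assumptions for illustration, not the thesis's settings:

```python
import numpy as np

# Minimal sketch of noise augmentation for simulated sensor traces:
# jitter each trace with Gaussian noise and random amplitude scaling
# to enlarge the training set (parameters assumed, not the thesis's).
rng = np.random.default_rng(42)

def augment(traces, copies=4, noise_std=0.05, scale_range=(0.9, 1.1)):
    """Return the original traces plus noisy, rescaled copies."""
    out = [traces]
    for _ in range(copies):
        scale = rng.uniform(*scale_range, size=(traces.shape[0], 1, 1))
        noise = rng.normal(0.0, noise_std, size=traces.shape)
        out.append(traces * scale + noise)
    return np.concatenate(out, axis=0)

sim = rng.normal(size=(10, 100, 7))  # 10 traces, 100 steps, 7 joints
aug = augment(sim)
print(aug.shape)                     # (50, 100, 7): 1 original + 4 copies
```

Augmenting in this way exposes the classifier to sensor-like variability that clean simulation lacks, which is one plausible mechanism for the reduced real-data requirement reported above.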