Safety Controller Synthesis for Collaborative Robots
In human-robot collaboration (HRC), software-based automatic safety
controllers (ASCs) are used in various forms (e.g. shutdown mechanisms,
emergency brakes, interlocks) to improve operational safety. Complex robotic
tasks and increasingly close human-robot interaction pose new challenges to ASC
developers and certification authorities. Key among these challenges is the
need to assure the correctness of ASCs under reasonably weak assumptions. To
address this need, we introduce and evaluate a tool-supported ASC synthesis
method for HRC in manufacturing. Our ASC synthesis is: (i) informed by the
manufacturing process, risk analysis, and regulations; (ii) formally verified
against correctness criteria; and (iii) selected from a design space of
feasible controllers according to a set of optimality criteria. The synthesised
ASC can detect the occurrence of hazards, move the process into a safe state,
and, in certain circumstances, return the process to an operational state from
which it can resume its original task
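The hazard-detect / safe-stop / resume behaviour described above can be sketched as a small state machine. This is an illustrative sketch only, not the paper's synthesised controller; all names and the single hazard signal are assumptions.

```python
# Illustrative sketch of the ASC behaviour described above: detect a hazard,
# move the process to a safe state, and resume once the hazard clears.
# Names and the single boolean hazard signal are hypothetical.

from enum import Enum, auto

class State(Enum):
    OPERATIONAL = auto()
    SAFE_STOP = auto()

class SafetyController:
    def __init__(self):
        self.state = State.OPERATIONAL

    def step(self, hazard_detected: bool) -> State:
        """One control cycle: transition based on the hazard signal."""
        if self.state is State.OPERATIONAL and hazard_detected:
            self.state = State.SAFE_STOP      # e.g. trigger emergency brake
        elif self.state is State.SAFE_STOP and not hazard_detected:
            self.state = State.OPERATIONAL    # return to the original task
        return self.state

ctrl = SafetyController()
print(ctrl.step(True).name)   # hazard appears: SAFE_STOP
print(ctrl.step(False).name)  # hazard clears: OPERATIONAL
```

A real ASC would of course be synthesised and verified against formal correctness criteria rather than hand-written, which is the point of the approach above.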
Sample Complexity of Adversarially Robust Linear Classification on Separated Data
We consider the sample complexity of learning with adversarial robustness.
Most prior theoretical results for this problem have considered a setting where
different classes in the data are close together or overlapping. Motivated by
some real applications, we consider, in contrast, the well-separated case where
there exists a classifier with perfect accuracy and robustness, and show that
the sample complexity narrates an entirely different story. Specifically, for
linear classifiers, we show a large class of well-separated distributions where
the expected robust loss of any algorithm is at least Ω(d/n),
whereas the max margin algorithm has expected standard loss O(1/n).
This shows a gap in the standard and robust losses that cannot be obtained via
prior techniques. Additionally, we present an algorithm that, given an instance
where the robustness radius is much smaller than the gap between the classes,
gives a solution with expected robust loss O(ln(n)/n). This shows that
for very well-separated data, convergence rates of O(ln(n)/n) are
achievable, which is not the case otherwise. Our results apply to robustness
measured in any ℓ_p norm with p ≥ 1 (including ℓ_∞)
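The notion of robust loss used above can be made concrete for a linear classifier. Under ℓ_∞ perturbations of radius r, a point is robustly classified iff its margin exceeds r times the ℓ_1 norm of the weight vector (the dual norm of ℓ_∞). The sketch below is illustrative and not from the paper:

```python
# Robust 0-1 loss of a linear classifier sign(<w, x> + b) under l_inf
# perturbations of radius r. A point (x, y) is robustly correct iff
# y * (<w, x> + b) > r * ||w||_1, since ||w||_1 is the dual norm of l_inf.
# Illustrative sketch; names and data are hypothetical.

def robust_loss(w, b, X, y, r):
    errors = 0
    for xi, yi in zip(X, y):
        margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
        if margin <= r * sum(abs(wj) for wj in w):
            errors += 1  # an adversary within radius r can flip this point
    return errors / len(X)

# Two well-separated 1-D points, classifier w = [1], b = 0 (margin 2 each).
X, y = [[2.0], [-2.0]], [1, -1]
print(robust_loss([1.0], 0.0, X, y, r=1.0))  # 0.0: margins exceed the radius
print(robust_loss([1.0], 0.0, X, y, r=3.0))  # 1.0: the radius swamps the gap
```

Setting r = 0 recovers the standard 0-1 loss, which is how the standard/robust gap discussed above can be measured on the same data.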
Verified synthesis of optimal safety controllers for human-robot collaboration
We present a tool-supported approach for the synthesis, verification and validation of the control software responsible for the safety of the human-robot interaction in manufacturing processes that use collaborative robots. In human-robot collaboration, software-based safety controllers are used to improve operational safety, e.g., by triggering shutdown mechanisms or emergency stops to avoid accidents. Complex robotic tasks and increasingly close human-robot interaction pose new challenges to controller developers and certification authorities. Key among these challenges is the need to assure the correctness of safety controllers under explicit (and preferably weak) assumptions. Our controller synthesis, verification and validation approach is informed by the process, risk analysis, and relevant safety regulations for the target application. Controllers are selected from a design space of feasible controllers according to a set of optimality criteria, are formally verified against correctness criteria, and are translated into executable code and validated in a digital twin. The resulting controller can detect the occurrence of hazards, move the process into a safe state, and, in certain circumstances, return the process to an operational state from which it can resume its original task. We show the effectiveness of our software engineering approach through a case study involving the development of a safety controller for a manufacturing work cell equipped with a collaborative robot
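The selection step described above (choosing from a design space of feasible controllers according to optimality criteria, with feasibility fixed by correctness criteria) can be sketched as filter-then-rank. This is a toy illustration under assumed criteria, not the paper's tooling:

```python
# Hypothetical sketch of the select-from-a-design-space idea above: keep only
# the candidate controllers satisfying the correctness criteria, then pick the
# best survivor under the optimality criteria. All criteria here are invented.

def synthesise(candidates, is_correct, cost):
    """Return the lowest-cost candidate satisfying correctness, or None."""
    feasible = [c for c in candidates if is_correct(c)]
    return min(feasible, key=cost, default=None)

# Toy design space: (stop_margin_m, cycle_time_s) pairs.
candidates = [(0.2, 1.0), (0.5, 0.4), (1.0, 0.2)]
correct = lambda c: c[0] >= 0.5   # e.g. must keep at least 0.5 m stop margin
cost = lambda c: c[1]             # e.g. minimise control cycle time
print(synthesise(candidates, correct, cost))  # -> (1.0, 0.2)
```

In the approach above, `is_correct` stands in for formal verification against the correctness criteria, and the chosen controller would then be translated to executable code and validated in the digital twin.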