
    Novel approaches for the safety of human-robot interaction

    In recent years there has been a concerted effort to address many of the safety issues associated with physical human-robot interaction (pHRI). However, a number of challenges remain. For personal robots, and those intended to operate in unstructured environments, the problem of safety is compounded. We believe that safety is a primary factor in the wide-scale adoption of personal robots, and until these issues are addressed, commercial enterprises are unlikely to invest heavily in their development.

    In this thesis we argue that traditional system design techniques fail to capture the complexities associated with dynamic environments. This is based on a careful analysis of current design processes, examining how effectively they identify hazards that may arise in the typical environments in which a personal robot may be required to operate. Based on this investigation, we show how a hazard checklist that highlights particular hazardous areas can be used to improve current hazard analysis techniques.

    A novel safety-driven control system architecture is presented, which attempts to address many of the weaknesses identified in designs found in the literature. The new architecture centres on safety, and the concept of a 'safety policy' is introduced. These safety policies are shown to be an effective way of describing safety systems as a set of rules that dictate how the system should behave in potentially hazardous situations.

    A safety analysis methodology is introduced, which integrates both our hazard analysis technique and the implementation of the safety layer of our control system. This methodology builds on traditional functional hazard analysis, with the addition of processes aimed at improving the safety of personal robots. This is achieved through a safety system developed during the hazard analysis stage. This safety system, called the safety protection system, is initially used to verify that the safety constraints identified during hazard analysis have been implemented appropriately. Subsequently, it serves as a high-level safety enforcer, governing the actions of the robot and preventing the control layer from performing unsafe operations.

    To demonstrate the effectiveness of the design, a series of experiments was conducted using both simulation environments and physical hardware. These experiments show that the safety-driven control system performs tasks safely while maintaining a high level of availability.
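
    The 'safety policy' and safety protection system described above can be pictured as a small rule set checked before any command from the control layer is executed. The sketch below is a minimal illustration only; the names (Rule, SafetyProtectionSystem), the STOP action, and the sensor fields are assumptions for illustration, not the thesis's actual design.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List

    @dataclass
    class Rule:
        """One safety rule: if the hazardous condition holds, substitute the safe action."""
        condition: Callable[[Dict[str, Any]], bool]  # evaluated on the current sensor state
        safe_action: str                             # command used when the rule triggers

    class SafetyProtectionSystem:
        """Governs the control layer: vetoes commands that violate the safety policy."""
        def __init__(self, policy: List[Rule]):
            self.policy = policy

        def filter(self, state: Dict[str, Any], requested_command: str) -> str:
            for rule in self.policy:
                if rule.condition(state):
                    return rule.safe_action   # override the control layer's command
            return requested_command          # no rule triggered: pass the command through

    # Hypothetical policy: stop whenever a person is closer than 0.5 m while the robot is moving.
    policy = [Rule(condition=lambda s: s["human_distance_m"] < 0.5 and s["speed_mps"] > 0.1,
                   safe_action="STOP")]
    sps = SafetyProtectionSystem(policy)
    print(sps.filter({"human_distance_m": 0.3, "speed_mps": 0.4}, "MOVE_FORWARD"))  # -> STOP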

    Safety monitoring for autonomous systems: interactive elicitation of safety rules

    An active safety monitor is an independent mechanism responsible for keeping the system in a safe state should a hazardous situation occur. It has observations (sensors) and interventions (actuators). Safety rules are synthesized from the results of a hazard analysis, using the SMOF (Safety MOnitoring Framework) tool, in order to identify which interventions to apply when an observation reaches a dangerous value. The safety rules enforce a safety property (the system remains in a safe state) and permissiveness properties, which ensure that the system can still perform its tasks. This work focuses on solving cases where the synthesis fails to return a set of safe and permissive rules. To assist the user in these cases, three new features are introduced and developed. The first addresses the diagnosis of why the rules fail to fulfill a permissiveness requirement. The second suggests candidate safety interventions to inject into the synthesis process. The third allows the permissiveness requirements to be tuned to a set of essential functionalities that must be maintained. The use of these features is discussed and illustrated on two industrial case studies: a manufacturing robot from KUKA and a maintenance robot from Sterela.
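
    As a rough illustration of the observation/intervention pairing described above, the sketch below fires an intervention when an observation reaches its warning value, with an empty result standing in for permissiveness (no restriction on the system's tasks). The class names, thresholds, and the "brake" intervention are assumptions for illustration; SMOF itself synthesizes such rules formally from the hazard analysis rather than hand-coding them like this.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Observation:
        name: str
        warning: float        # value at which an intervention must be applied
        catastrophic: float   # value the monitor must never let the system reach

    @dataclass
    class SafetyRule:
        observation: Observation
        intervention: str     # e.g. "brake" or "lock_arm"

    class SafetyMonitor:
        """Independent monitor: applies interventions near danger, stays permissive otherwise."""
        def __init__(self, rules: List[SafetyRule]):
            self.rules = rules

        def step(self, readings: Dict[str, float]) -> List[str]:
            triggered = []
            for rule in self.rules:
                if readings[rule.observation.name] >= rule.observation.warning:
                    triggered.append(rule.intervention)  # safety: act before the catastrophic value
            return triggered                             # empty list = no restriction (permissive)

    speed = Observation("speed", warning=1.0, catastrophic=2.0)
    monitor = SafetyMonitor([SafetyRule(speed, intervention="brake")])
    print(monitor.step({"speed": 1.3}))  # -> ['brake']
    print(monitor.step({"speed": 0.4}))  # -> []   (tasks proceed unrestricted)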