4,219 research outputs found

    Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers

    The success of the human-robot co-worker team in a flexible manufacturing environment where robots learn from demonstration relies heavily on the correct and safe operation of the robot. How this can be achieved is a challenge that requires addressing both technical and human-centric research questions. In this paper we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines. We then focus on robotic learning from demonstration, the challenges these techniques pose to safety assurance, and indicate opportunities to integrate safety considerations into algorithms "by design". Finally, from a human-centric perspective, we stipulate that, to achieve high levels of safety and ultimately trust, the robotic co-worker must meet the innate expectations of the humans it works with. It is our aim to stimulate a discussion focused on the safety aspects of human-in-the-loop robotics, and to foster multidisciplinary collaboration to address the research challenges identified.

    Formal verification of an autonomous personal robotic assistant

    Human–robot teams are likely to be used in a variety of situations wherever humans require the assistance of robotic systems. Obvious examples include healthcare and manufacturing, in which people need the assistance of machines to perform key tasks. It is essential for robots working in close proximity to people to be both safe and trustworthy. In this paper we examine formal verification of a high-level planner/scheduler for autonomous personal robotic assistants such as Care-O-bot™. We describe how a model of Care-O-bot and its environment was developed using Brahms, a multiagent workflow language. Formal verification was then carried out by translating this to the input language of an existing model checker. Finally we present some formal verification results and describe how these could be complemented by simulation-based testing and real-world end-user validation in order to increase the practical and perceived safety and trustworthiness of robotic assistants.
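
    To illustrate the kind of check a model checker performs once a high-level model such as a Brahms workflow has been translated into its input language, the following is a minimal sketch in Python; the states, transitions, and safety property are invented for illustration and are not the Care-O-bot model itself.

```python
# Illustrative sketch only: a tiny explicit-state safety check, loosely
# mirroring what a model checker does after a high-level model has been
# translated. The states, transitions, and property are invented.
from collections import deque

def successors(state):
    robot, person = state
    nxt = set()
    # The person may sit down or stand up at any time.
    for p in ("sitting", "standing"):
        nxt.add((robot, p))
    # Robot controller: only approach the sofa while the person is sitting.
    if person == "sitting":
        nxt.add(("sofa", person))
    nxt.add(("kitchen", person))
    return nxt

def safe(state):
    robot, person = state
    # Invented requirement: the robot must never be at the sofa while the
    # person is standing (collision risk).
    return not (robot == "sofa" and person == "standing")

def check(initial):
    """Breadth-first exploration of every reachable state."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return False, state          # counterexample found
        for s in successors(state):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return True, None

# Exhaustive search exposes an interleaving the controller misses: the
# person stands up after the robot has already reached the sofa.
print(check(("kitchen", "sitting")))     # -> (False, ('sofa', 'standing'))
```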

    Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

    Using Formal Methods for Autonomous Systems: Five Recipes for Formal Verification

    Formal Methods are mathematically-based techniques for software design and engineering, which enable the unambiguous description of and reasoning about a system's behaviour. Autonomous systems use software to make decisions without human control, are often embedded in a robotic system, are often safety-critical, and are increasingly being introduced into everyday settings. Autonomous systems need robust development and verification methods, but formal methods practitioners are often asked: Why use Formal Methods for Autonomous Systems? To answer this question, this position paper describes five recipes for formally verifying aspects of an autonomous system, collected from the literature. The recipes are examples of how Formal Methods can be an effective tool for the development and verification of autonomous systems. During design, they enable unambiguous description of requirements; in development, formal specifications can be verified against requirements; software components may be synthesised from verified specifications; and behaviour can be monitored at runtime and compared to its original specification. Modern Formal Methods often include highly automated tool support, which enables exhaustive checking of a system's state space. This paper argues that Formal Methods are a powerful tool for the repertoire of development techniques for safe autonomous systems, alongside other robust software engineering techniques. Comment: Accepted at Journal of Risk and Reliability.
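
    One of the recipes mentioned, monitoring behaviour at runtime and comparing it to its original specification, can be sketched in a few lines. The Python example below is a hedged illustration only; the bounded-response property and the event names are assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): a runtime monitor that checks a
# bounded-response requirement, "every 'request' is followed by an 'ack'
# within 3 steps", over a stream of observed events.
class BoundedResponseMonitor:
    def __init__(self, trigger, response, bound):
        self.trigger, self.response, self.bound = trigger, response, bound
        self.pending = []            # steps remaining for each open obligation

    def step(self, event):
        """Feed one observed event; return False on a violation."""
        self.pending = [n - 1 for n in self.pending]
        if event == self.response:
            self.pending.clear()     # all open obligations discharged
        if any(n < 0 for n in self.pending):
            return False             # some request was not answered in time
        if event == self.trigger:
            self.pending.append(self.bound)
        return True

monitor = BoundedResponseMonitor("request", "ack", bound=3)
trace = ["request", "idle", "ack", "request", "idle", "idle", "idle", "idle"]
# The first request is acknowledged in time; the second is not.
print([monitor.step(e) for e in trace])   # last entry is False
```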

    Research Priorities for Robust and Beneficial Artificial Intelligence

    Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial. Comment: This article gives examples of the type of research advocated by the open letter for robust & beneficial AI at http://futureoflife.org/ai-open-letter

    Formal Verification of Astronaut-Rover Teams for Planetary Surface Operations

    This paper describes an approach to assuring the reliability of autonomous systems for Astronaut-Rover (ASRO) teams using the formal verification of models in the Brahms multi-agent modelling language. Planetary surface rovers have proven essential to several manned and unmanned missions to the Moon and Mars. The first rovers were tele- or manually operated, but autonomous systems are increasingly being used to increase the effectiveness and range of rover operations on missions such as the NASA Mars Science Laboratory. It is anticipated that future manned missions to the Moon and Mars will use autonomous rovers to assist astronauts during extravehicular activity (EVA), including science, technical and construction operations. These ASRO teams have the potential to significantly increase the safety and efficiency of surface operations. We describe a new Brahms model in which an autonomous rover may perform several different activities including assisting an astronaut during EVA. These activities compete for the autonomous rover's "attention" and therefore the rover must decide which activity is currently the most important and engage in that activity. The Brahms model also includes an astronaut agent, which models an astronaut's predicted behaviour during an EVA. The rover must also respond to the astronaut's activities. We show how this Brahms model can be simulated using the Brahms integrated development environment. The model can then also be formally verified with respect to system requirements using the SPIN model checker, through automatic translation from Brahms to PROMELA (the input language for SPIN). We show that such formal verification can be used to determine that mission- and safety-critical operations are conducted correctly, and therefore increase the reliability of autonomous systems for planetary rovers in ASRO teams.
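
    The arbitration the abstract describes, where the rover selects the most important of several competing activities, can be sketched as simple priority-based selection. The activities, priorities, and world-state flags below are invented for illustration and are not taken from the Brahms model.

```python
# Illustrative sketch (activities and priorities are invented): the rover
# repeatedly selects the highest-priority activity whose triggering
# condition currently holds.
def select_activity(world, activities):
    eligible = [a for a in activities if a["condition"](world)]
    return max(eligible, key=lambda a: a["priority"], default=None)

activities = [
    {"name": "assist_astronaut", "priority": 3,
     "condition": lambda w: w["astronaut_requests_help"]},
    {"name": "monitor_astronaut", "priority": 2,
     "condition": lambda w: w["astronaut_on_eva"]},
    {"name": "science_survey",   "priority": 1,
     "condition": lambda w: True},               # always available fallback
]

world = {"astronaut_on_eva": True, "astronaut_requests_help": False}
print(select_activity(world, activities)["name"])   # -> monitor_astronaut

world["astronaut_requests_help"] = True
print(select_activity(world, activities)["name"])   # -> assist_astronaut
```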

    Data-Centric Governance

    Artificial intelligence (AI) governance is the body of standards and practices used to ensure that AI systems are deployed responsibly. Current AI governance approaches consist mainly of manual review and documentation processes. While such reviews are necessary for many systems, they are not sufficient to systematically address all potential harms, as they do not operationalize governance requirements for system engineering, behavior, and outcomes in a way that facilitates rigorous and reproducible evaluation. Modern AI systems are data-centric: they act on data, produce data, and are built through data engineering. The assurance of governance requirements must also be carried out in terms of data. This work explores the systematization of governance requirements via datasets and algorithmic evaluations. When applied throughout the product lifecycle, data-centric governance decreases time to deployment, increases solution quality, decreases deployment risks, and places the system in a continuous state of assured compliance with governance requirements. Comment: 26 pages, 13 figures.
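
    As a rough illustration of operationalizing a governance requirement as an algorithmic evaluation over data, the sketch below gates deployment on minimum scores over named evaluation datasets. The suite names, thresholds, and toy model are assumptions made for this example, not taken from the paper.

```python
# Illustrative sketch (datasets, metrics, and thresholds are invented): a
# governance requirement expressed as an algorithmic evaluation that must
# pass before deployment, so compliance is checked reproducibly on data.
def evaluate(model, dataset):
    correct = sum(model(x) == y for x, y in dataset)
    return correct / len(dataset)

def deployment_gate(model, governance_suites):
    """Each suite pairs a dataset with a minimum acceptable score."""
    report = {}
    for name, (dataset, threshold) in governance_suites.items():
        score = evaluate(model, dataset)
        report[name] = (score, score >= threshold)
    deployable = all(passed for _, passed in report.values())
    return deployable, report

# Toy model and governance datasets.
model = lambda x: x >= 0
suites = {
    "overall_accuracy": ([(1, True), (-1, False), (2, True)], 0.95),
    "edge_cases":       ([(0, True), (-0.1, False)], 1.0),
}
print(deployment_gate(model, suites))
```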

    Understanding, Assessing, and Mitigating Safety Risks in Artificial Intelligence Systems

    Prepared for: Naval Air Warfare Development Center (NAVAIR). Traditional software safety techniques rely on validating software against a deductively defined specification of how the software should behave in particular situations. In the case of AI systems, specifications are often implicit or inductively defined. Data-driven methods are subject to sampling error since practical datasets cannot provide exhaustive coverage of all possible events in a real physical environment. Traditional software verification and validation approaches may not apply directly to these novel systems, complicating the operation of systems safety analysis (such as that implemented in MIL-STD-882). However, AI offers advanced capabilities, and it is desirable to ensure the safety of systems that rely on these capabilities. When AI technology is deployed in a weapon system, robot, or planning system, unwanted events are possible. Several techniques can support the evaluation process for understanding the nature and likelihood of unwanted events in AI systems and making risk decisions on naval employment. This research considers the state of the art, evaluating which techniques are most likely to be employable, usable, and correct. Techniques include software analysis, simulation environments, and mathematical determinations. Naval Air Warfare Development Center; Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). Approved for public release. Distribution is unlimited.
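
    As one example of the kind of mathematical determination that can inform such risk decisions (not necessarily the report's own method), the sketch below computes the classical upper confidence bound on failure probability after a number of failure-free tests, which makes the effect of sampling error concrete.

```python
# Illustrative sketch, not taken from the report: after n independent,
# failure-free trials, the one-sided 95% upper confidence bound on the
# failure probability is 1 - 0.05**(1/n), approximated by the "rule of
# three" as 3/n.
def failure_rate_upper_bound(n_trials, confidence=0.95):
    """Smallest p such that (1 - p)**n_trials <= 1 - confidence."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

for n in (100, 1_000, 10_000):
    exact = failure_rate_upper_bound(n)
    print(f"n={n:>6}: exact bound {exact:.4%}, rule of three {3 / n:.4%}")
```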

    Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions

    © Springer International Publishing Switzerland 2016. Industries such as flexible manufacturing and home care will be transformed by the presence of robotic assistants. Assurance of safety and functional soundness for these robotic systems will require rigorous verification and validation. We propose testing in simulation using Coverage-Driven Verification (CDV) to guide the testing process in an automatic and systematic way. We use a two-tiered test generation approach, where abstract test sequences are computed first and then concretized (e.g., data and variables are instantiated), to reduce the complexity of the test generation problem. To demonstrate the effectiveness of our approach, we developed a testbench for robotic code, running in ROS-Gazebo, that implements an object handover as part of a human-robot interaction (HRI) task. Tests are generated to stimulate the robot's code in a realistic manner, through stimulating the human, environment, sensors, and actuators in simulation. We compare the merits of unconstrained, constrained and model-based test generation in achieving thorough exploration of the code under test, and interesting combinations of human-robot interactions. Our results show that CDV combined with systematic test generation achieves a very high degree of automation in simulation-based verification of control code for robots in HRI.
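
    The two-tiered test generation described above, abstract sequences first and concretization second, can be sketched as follows. The action names, parameter ranges, and coverage measure are invented for this illustration and do not reproduce the paper's testbench.

```python
# Illustrative sketch (actions, parameters, and coverage bins are invented):
# two-tiered test generation in the spirit of coverage-driven verification,
# where abstract action sequences are generated first and then concretized.
import itertools
import random

ABSTRACT_ACTIONS = ["approach", "offer_object", "human_gaze", "release"]

def abstract_sequences(length=3):
    """Tier 1: enumerate abstract test sequences (unconstrained here)."""
    return itertools.product(ABSTRACT_ACTIONS, repeat=length)

def concretize(sequence, rng):
    """Tier 2: instantiate concrete data for each abstract action."""
    return [{"action": a,
             "speed": round(rng.uniform(0.1, 0.5), 2),      # m/s
             "delay": round(rng.uniform(0.0, 2.0), 2)}      # s
            for a in sequence]

rng = random.Random(0)
covered = set()
concrete_tests = []
for seq in abstract_sequences():
    covered.update(zip(seq, seq[1:]))        # structural coverage: action pairs
    concrete_tests.append(concretize(seq, rng))

print(f"{len(concrete_tests)} concrete tests, "
      f"covering {len(covered)} of {len(ABSTRACT_ACTIONS) ** 2} action pairs")
```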