
    Towards the formal verification of human-agent-robot teamwork

The formal analysis of computational processes is by now a well-established field. In practical scenarios, however, the problem of how to formally verify interactions with humans remains open. This thesis addresses this problem through the use of the Brahms language. Our overall goal is to provide formal verification techniques for human-agent teamwork, particularly astronaut-robot teamwork on future space missions and human-robot interactions in health-care scenarios, modelled in Brahms.
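
As a toy illustration of the kind of check such verification performs (not Brahms itself, whose models are far richer), the sketch below exhaustively explores a hand-written human-robot state machine and confirms that an unsafe state is unreachable. All states, transitions, and the safety property are hypothetical.

```python
# Toy illustration only (not Brahms): exhaustively explore a hand-written
# human-robot interaction state machine and check that no reachable state
# violates a safety property. All states and transitions are hypothetical.

transitions = {
    ("idle", "waiting"):        [("approaching", "waiting")],
    ("approaching", "waiting"): [("assisting", "attended"), ("idle", "waiting")],
    ("assisting", "attended"):  [("idle", "waiting")],
}

def unsafe(state):
    robot, human = state
    # Hypothetical safety property: the robot never assists an unattended human.
    return robot == "assisting" and human == "waiting"

def verify(initial):
    """The reachable state space here is tiny, so brute force suffices."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        if unsafe(state):
            return False, state   # counterexample found
        frontier.extend(transitions.get(state, []))
    return True, None             # property holds on all reachable states

print(verify(("idle", "waiting")))  # -> (True, None)
```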

    Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers

The success of the human-robot co-worker team in a flexible manufacturing environment, where robots learn from demonstration, relies heavily on the correct and safe operation of the robot. How this can be achieved is a challenge that requires addressing both technical and human-centric research questions. In this paper we discuss the state of the art in safety assurance, existing and emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines. We then focus on robotic learning from demonstration, the challenges these techniques pose to safety assurance, and opportunities to integrate safety considerations into algorithms "by design". Finally, from a human-centric perspective, we stipulate that, to achieve high levels of safety and ultimately trust, the robotic co-worker must meet the innate expectations of the humans it works with. Our aim is to stimulate a discussion focused on the safety aspects of human-in-the-loop robotics and to foster multidisciplinary collaboration to address the research challenges identified.
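
One concrete reading of safety "by design" in learning from demonstration, sketched below, is to sanitize demonstrations against a safety envelope before they ever reach the learner. The workspace bounds and speed limit are hypothetical assumptions, not taken from the paper.

```python
# Hedged sketch: filter demonstrated waypoints against a safety envelope
# before training. WORKSPACE bounds and MAX_SPEED are hypothetical.

WORKSPACE = {"x": (0.0, 1.0), "y": (0.0, 1.0), "z": (0.1, 0.8)}  # metres
MAX_SPEED = 0.25  # m/s, an assumed collaborative-operation speed limit

def within_envelope(p):
    return all(lo <= p[axis] <= hi for axis, (lo, hi) in WORKSPACE.items())

def sanitize_demo(waypoints, dt):
    """Drop demonstrated waypoints that leave the envelope or move too fast."""
    safe = [waypoints[0]] if within_envelope(waypoints[0]) else []
    for prev, cur in zip(waypoints, waypoints[1:]):
        # Crude per-axis speed bound between consecutive samples.
        speed = max(abs(cur[a] - prev[a]) for a in WORKSPACE) / dt
        if within_envelope(cur) and speed <= MAX_SPEED:
            safe.append(cur)
    return safe

demo = [{"x": 0.20, "y": 0.20, "z": 0.30},
        {"x": 0.22, "y": 0.20, "z": 0.30},
        {"x": 0.90, "y": 0.90, "z": 0.90}]  # last point leaves the envelope
print(len(sanitize_demo(demo, dt=0.1)))    # -> 2
```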

    It's Time to Rethink Levels of Automation for Self-Driving Vehicles

Discusses issues involving the automation of self-driving vehicles. Reports on the technology of self-driving or autonomous automobiles. Examines the extent to which these vehicles serve the public interest, as well as the level of consumer confidence in driving them. Suggests that self-driving cars could be a transformative technology in both good and bad ways. The important questions are not about when they will arrive but where, for whom, and in what forms they will appear. If we want a clearer sense of the possibilities of automated vehicle systems, we need to broaden our gaze [3]. Rather than emphasizing the autonomy of self-driving vehicles, we should instead be talking about their conditionality. We need to know the circumstances in which different systems could have an impact on our lives. Self-driving vehicle systems will serve different purposes and take on different shapes in different places. A schema for innovation that points in one direction and says nothing about the desirability of the destination makes for a poor roadmap.

    Social-aware robot navigation in urban environments

In this paper we present a novel robot navigation approach based on the so-called Social Force Model (SFM). First, we construct a graph map with a set of destinations that completely describes the navigation environment. Second, we propose a robot navigation algorithm, called social-aware navigation, which is mainly driven by the social forces centered at the robot. Third, we use an MCMC Metropolis-Hastings algorithm to learn the parameter values of the method. Finally, the validation of the model is accomplished through an extensive set of simulations and real-life experiments.
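
A minimal sketch of one SFM integration step is given below. The goal-attraction and exponential pedestrian-repulsion terms follow the standard SFM form; the parameter names and values (desired_speed, relax_time, A, B) are illustrative stand-ins for the quantities the paper learns via MCMC Metropolis-Hastings.

```python
import numpy as np

# Minimal sketch of one Social Force Model (SFM) step for a point robot.
# Parameters are illustrative, not the paper's learned values.

def social_force(pos, vel, goal, people, desired_speed=1.0,
                 relax_time=0.5, A=2.0, B=1.0):
    """Total force on the robot: attraction to the goal plus repulsion."""
    direction = (goal - pos) / np.linalg.norm(goal - pos)
    f_goal = (desired_speed * direction - vel) / relax_time
    f_people = np.zeros(2)
    for p in people:
        diff = pos - p
        dist = np.linalg.norm(diff)
        f_people += A * np.exp(-dist / B) * diff / dist
    return f_goal + f_people

# Euler integration of the robot state (unit mass assumed).
dt = 0.1
pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
goal, people = np.array([5.0, 0.0]), [np.array([2.5, 0.3])]
for _ in range(100):
    vel = vel + dt * social_force(pos, vel, goal, people)
    pos = pos + dt * vel
print(pos)  # the robot ends near the goal, having skirted the pedestrian
```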

    Communicative Capabilities of Agents for the Collaboration in a Human-Agent Team

Coordination is an essential ingredient of human-agent teamwork. It requires team members to share knowledge in order to establish common ground and mutual awareness among them. In this paper, we propose a behavioral architecture, C2BDI, that enhances knowledge sharing using natural language communication between team members. We define collaborative conversation protocols that provide proactive behavior to agents for coordination between team members. We have applied this architecture to a real scenario in a collaborative virtual environment for training. Our solution enables users to coordinate with other team members.
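
The sketch below shows one way such a collaborative conversation protocol can be encoded as a finite-state machine over speech acts; the states and moves are illustrative assumptions, not the C2BDI specification.

```python
# Hedged sketch of a grounding-style conversation protocol as a
# finite-state machine. States and speech acts are hypothetical.

PROTOCOL = {
    ("start",      ("agent", "inform")):      "informed",
    ("informed",   ("human", "acknowledge")): "grounded",
    ("informed",   ("human", "clarify")):     "clarifying",
    ("clarifying", ("agent", "inform")):      "informed",
}

def run_dialogue(moves):
    """Advance the protocol; reject any move not allowed in the current state."""
    state = "start"
    for speaker, act in moves:
        key = (state, (speaker, act))
        if key not in PROTOCOL:
            raise ValueError(f"{speaker} may not '{act}' in state '{state}'")
        state = PROTOCOL[key]
    return state

# A dialogue that establishes common ground:
print(run_dialogue([("agent", "inform"), ("human", "clarify"),
                    ("agent", "inform"), ("human", "acknowledge")]))
# -> "grounded"
```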

    Technical Report: Analysis of Intervention Modes in Human-In-The-Loop (HITL) Teleoperation With Autonomous Ground Vehicle Systems

Fully autonomous systems are human-out-of-the-loop systems that single-handedly determine the right course of action when given an autonomous task. In future visions of operating networks of fully autonomous self-driving ground or aerial vehicles, humans are nonetheless expected to retain some kind of remote, instantaneous intervention role; "Human-on-the-Loop" (HOTL) telemonitoring and "Human-in-the-Loop" (HITL) telemanipulation are expected to establish a desired level of trust in AVs while they interact with a highly dynamic urban or aerial environment. Many studies envision a future with fully autonomous self-driving vehicles (FA-SDVs) at increasing penetration levels in mixed traffic. However, effective management of FA-SDVs in real-world use cases under highly uncertain conditions has not been examined sufficiently in the literature. This report aims to close this gap by covering the teleoperation collaboration modes between two intelligent agents: human telesupervisors (HTSs) and FA-SDVs.
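
As a minimal sketch of how such intervention modes might be arbitrated, a supervisor could escalate from full autonomy through HOTL monitoring to HITL telemanipulation as the vehicle's self-reported confidence drops. The thresholds and confidence signal below are hypothetical, not the report's taxonomy.

```python
from enum import Enum

# Hedged sketch: escalate intervention mode as confidence drops.
# Thresholds and the confidence signal are hypothetical.

class Mode(Enum):
    AUTONOMOUS = "full autonomy, human out of the loop"
    HOTL = "human on the loop: telemonitoring only"
    HITL = "human in the loop: telemanipulation"

def select_mode(confidence, hotl_threshold=0.8, hitl_threshold=0.5):
    if confidence >= hotl_threshold:
        return Mode.AUTONOMOUS
    if confidence >= hitl_threshold:
        return Mode.HOTL
    return Mode.HITL

for c in (0.95, 0.7, 0.3):
    print(c, "->", select_mode(c).name)
```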

    It's (Not) Your Fault! Blame and Trust Repair in Human-Agent Cooperation

Buchholz V, Kulms P, Kopp S. It's (Not) Your Fault! Blame and Trust Repair in Human-Agent Cooperation. Kognitive Systeme. 2017;2017(1).
In cooperative settings the success of the team is interlinked with the performance of the individual members. Thus, there must be a way to address problems and mistakes made by team members. A common means in human-human interaction is the attribution of blame. Yet, it is not clear how blame attributions affect cooperation between humans and intelligent virtual agents, or the overall perception of the agent. As a first step towards answering these questions, a study on cooperative human-agent interaction was conducted. The study was designed to investigate the effects of two different blaming strategies used by the agent in response to an alleged goal-achievement failure: self-blame (the agent blames itself) followed by an apology, versus other-blame (the agent blames the user). The results indicate that the combination of blame and trust repair enables a successful continuation of the cooperation without loss of trust and likeability.