
    Evaluating a human-robot interface for exploration missions

    The research reported in this paper concerns the design, implementation, and experimental evaluation of a Human-Robot Interface for stationary remote operators, implemented on a PC. The GUI design and functionality are described, and an Autonomy Management Model is presented and explained. We conducted a user evaluation comprising two sets of experiments, which are described and whose resulting data are analyzed. The conclusions offer insight into the most important usability concerns regarding the operator's situational awareness. The scalability of the interface is also studied experimentally.

    Teams organization and performance analysis in autonomous human-robot teams

    This paper proposes a theory of human control of robot teams based on how people coordinate across different task allocations. Our current work focuses on domains such as foraging, in which robots perform largely independent tasks. The present study addresses the interaction between automation and the organization of human teams in controlling large robot teams performing an Urban Search and Rescue (USAR) task. We identify three subtasks: perceptual search (visual search for victims), assistance (teleoperation to assist a robot), and navigation (path planning and coordination). For the studies reported here, navigation was selected for automation because it involves weak dependencies among robots, making it more complex, and because an earlier experiment showed it to be the most difficult subtask. This paper reports an extended analysis of two conditions from a larger four-condition study. In these two "shared pool" conditions, twenty-four simulated robots were controlled by teams of two participants. Sixty paid participants (30 teams) were recruited to perform the shared-pool tasks, in which participants shared control of the 24 UGVs and viewed the same screens. Teams in the manual-control condition issued waypoints to navigate their robots; in the autonomy condition, robots generated their own waypoints using distributed path planning. We identify three self-organizing team strategies in the shared-pool conditions: joint control, in which operators share full authority over the robots; mixed control, in which one operator takes primary control while the other acts as an assistant; and split control, in which operators divide the robots, each controlling a sub-team. Automating path planning improved system performance, and the effects of team organization favored operator teams who shared authority over the pool of robots. © 2010 ACM
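The three shared-pool team organizations described above could be sketched as robot-to-operator assignments. The function below is an illustrative assumption, not the study's implementation; the strategy names follow the abstract, everything else is invented for clarity:

```python
def assign_robots(robot_ids, operators, strategy="split"):
    """Sketch of the three self-organizing shared-pool strategies.

    - "joint": every operator shares authority over every robot
    - "mixed": one primary controller; the others act as assistants
    - "split": robots are partitioned into disjoint per-operator sub-teams
    """
    if strategy == "joint":
        return {op: list(robot_ids) for op in operators}
    if strategy == "mixed":
        primary, *assistants = operators
        out = {primary: list(robot_ids)}
        out.update({a: [] for a in assistants})  # assistants help on demand
        return out
    if strategy == "split":
        # round-robin partition into disjoint sub-teams
        return {op: robot_ids[i::len(operators)] for i, op in enumerate(operators)}
    raise ValueError(f"unknown strategy: {strategy}")
```

With 24 robots and two operators, "split" yields two sub-teams of 12, while "joint" gives both operators the full pool.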

    Asynchronous displays for multi-UV search tasks

    Synchronous video has long been the preferred mode for controlling remote robots, with asynchronous control used only when unavoidable, as in interplanetary robotics. We identify two basic problems in controlling multiple robots through synchronous displays: operator overload and information fusion. Synchronous displays from multiple robots can easily overwhelm an operator who must search video for targets: if targets are plentiful, the operator will likely miss targets that enter and leave unattended views while dealing with others already noticed. The related fusion problem arises because the robots' fields of view may overlap, forcing the operator to reconcile different views from different perspectives and to form an awareness of the environment by "piecing them together". We have conducted a series of experiments investigating the suitability of asynchronous displays for multi-UV search. Our first experiments involved static panoramas: operators selected locations at which robots halted and panned their cameras to capture a record of what could be seen from each location. A subsequent experiment investigated the hypothesis that the relative performance of the panoramic display would improve as the number of robots increased, causing greater overload and fusion problems. In the subsequent Image Queue system we used automated path planning and also automated the selection of imagery for presentation through a greedy selection of non-overlapping views. A fourth set of experiments used the SUAVE display, an asynchronous variant of the picture-in-picture technique for video from multiple UAVs. The panoramic displays, which addressed only the overload problem, led to performance similar to synchronous video, while the Image Queue and SUAVE displays, which addressed fusion as well, improved performance on a number of measures.
    In this paper we review our experiences in designing and testing asynchronous displays and discuss challenges to their use, including tracking dynamic targets. © 2012 by the American Institute of Aeronautics and Astronautics, Inc
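The greedy selection of non-overlapping views mentioned for the Image Queue system might be sketched as follows. The grid-cell coverage model and all names are illustrative assumptions, not the authors' implementation:

```python
def select_views(candidates, max_views):
    """Greedily pick camera views that maximize newly covered area.

    candidates: dict mapping view id -> set of covered grid cells
    (the grid-cell coverage model is an assumption for illustration).
    Returns the ordered queue of selected view ids.
    """
    covered = set()
    queue = []
    for _ in range(max_views):
        # choose the view contributing the most not-yet-covered cells
        best = max(candidates, key=lambda v: len(candidates[v] - covered), default=None)
        if best is None or not (candidates[best] - covered):
            break  # remaining views only overlap what is already covered
        queue.append(best)
        covered |= candidates.pop(best)
    return queue
```

This prefers views that add new coverage and discards views fully contained in what the operator has already seen, which is one plausible reading of "non-overlapping".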

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors that affect human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed-initiative systems combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions: how to combine human preferences with automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the human's cognitive states will be. We explore open challenges that hamper the development of effective flexible autonomy. We then highlight the potential benefits of using system-modelling techniques in HSI by illustrating how they give HSI designers an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission-success metrics.
    Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
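The question of how to combine human preferences with automation recommendations could, in the simplest case, be a weighted blend snapped to a discrete autonomy level. The weighting rule, parameter names, and level set below are illustrative assumptions, not a scheme from the paper:

```python
def select_autonomy_level(human_pref, auto_rec, automation_weight, levels=(0, 1, 2, 3)):
    """Blend the operator's preferred autonomy level with the automation's
    recommendation, weighted by automation_weight in [0, 1] (e.g. an
    estimate of how much the automation should currently be trusted).
    """
    blended = automation_weight * auto_rec + (1.0 - automation_weight) * human_pref
    # snap the continuous blend to the nearest discrete autonomy level
    return min(levels, key=lambda lvl: abs(lvl - blended))
```

A real mixed-initiative system would make `automation_weight` depend on mission state and operator workload rather than treating it as a fixed constant; this sketch only shows the combination step.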

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that applies a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel; a base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot sensors operate autonomously within the limits of the tasks assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface must ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers operating there.

    Improving situation awareness of a single human operator interacting with multiple unmanned vehicles: first results

    In the context of the supervision of one or several unmanned vehicles by a human operator, the design of a suitable user interface is a major challenge. Therefore, within an existing experimental set-up composed of a ground station and heterogeneous unmanned ground and air vehicles, we aim to redesign the human-robot interactions to improve the operator's situation awareness. We base our new design on a classical user-centered approach.

    Integrating Affective Expressions into Robot-Assisted Search and Rescue to Improve Human-Robot Communication

    Unexplained or ambiguous behaviours of rescue robots can lead to inefficient collaboration between humans and robots in robot-assisted SAR teams. To date, rescue robots lack the ability to interact with humans on a social level, which is believed to be an essential ability for improving the quality of interactions. This thesis research proposes to bring affective robot expressions into the SAR context to give rescue robots social capabilities. The first experiment, presented in Chapter 3, investigates whether there is consensus in mapping emotions to messages/situations in Urban Search and Rescue (USAR) scenarios, where the efficiency and effectiveness of interactions are crucial to success. We studied mappings between 10 specific messages, presented in two different communication styles and reflecting common situations that might happen during search and rescue missions, and the emotions exhibited by robots in those situations. The data were obtained through a Mechanical Turk study with 78 participants. The findings support the feasibility of using emotions as an additional communication channel to improve multi-modal human-robot interaction for urban search and rescue robots, and suggest that these mappings are robust, i.e., not affected by the robot's communication style. The second experiment was also conducted on Amazon Mechanical Turk, with 223 participants. We used Affect Control Theory (ACT) as a method for deriving the mappings between situations and emotions (similar to those in the first experiment) and as an alternative method for obtaining mappings that can be adjusted for different emotion sets (Chapter 4). The results suggested consistency between the two methods in the choice of emotions for a robot to show in different situations, indicating the feasibility of using emotions as an additional modality in SAR robots.
    After validating the feasibility of bringing emotions into the SAR context based on the findings from the first two experiments, we created affective expressions based on the Evaluation, Potency, and Activity (EPA) dimensions of ACT using LED lights on a rescue robot called Husky. We evaluated the effect of the emotions on rescue workers' situational awareness through an online Amazon Mechanical Turk study with 151 participants (Chapter 5). The findings indicated that participants who saw Husky with affective expressions (conveyed through lights) perceived the situation in the disaster scene more accurately than participants who saw videos of the Husky robot without affective lights. In other words, Husky with affective lights improved participants' situational awareness.
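Mapping the EPA dimensions of ACT onto LED colours could be done with a simple linear encoding. The mapping below is an illustrative assumption for how such an encoding might look, not the thesis's actual scheme; the [-4.3, 4.3] range is the scale commonly used in ACT work:

```python
def epa_to_rgb(evaluation, potency, activity):
    """Map EPA ratings (each assumed in [-4.3, 4.3]) to an RGB triple
    for an affective LED display. The channel assignment below
    (activity -> red, evaluation -> green, potency -> blue) is an
    arbitrary illustrative choice.
    """
    def scale(x):
        # linearly normalise an EPA value from [-4.3, 4.3] to 0..255
        return int(round((x + 4.3) / 8.6 * 255))
    return (scale(activity), scale(evaluation), scale(potency))
```

An extreme positive state such as (4.3, 4.3, 4.3) maps to full white, and an extreme negative state to black; intermediate EPA profiles produce distinguishable hues.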