17 research outputs found

    Human factors of semi-autonomous robots for urban search and rescue

    During major disasters or other emergencies, Urban Search and Rescue (USAR) teams are responsible for extricating casualties safely from collapsed urban structures. The rescue work is dangerous due to possible further collapse, fire, dust, or electricity hazards. Sometimes the necessary precautions and checks can last several hours before it is safe for rescuers to start the search for survivors. Remote-controlled rescue robots provide the opportunity to support human rescuers by searching the site for trapped casualties while the rescuers remain in a safe place. The research reported in this thesis aimed to understand how robot behaviour and interface design can be applied to utilise the benefits of robot autonomy, and how to inform future human-robot collaborative systems. The data were analysed in the context of USAR missions using semi-autonomous remote-controlled robot systems. The research focussed on the influence of robot feedback, robot reliability, task complexity, and transparency, and examined how these factors affect trust, workload, and performance. The overall goal of the research was to make the life of rescuers safer and to enhance their performance in helping others in distress. Data obtained from the studies conducted for this thesis showed that semi-autonomous robot reliability is still the most dominant factor influencing trust, workload, and team performance. A robot with explanatory feedback was perceived as more competent, more efficient, and less prone to malfunction, and the explanatory feedback was perceived as a clearer type of communication than concise robot feedback. Higher levels of robot transparency were perceived as more trustworthy; however, this effect was limited to single items on the trust questionnaire and further investigation is necessary. Neither explanatory feedback from the robot nor robot transparency increased team performance or mediated workload levels. Task complexity mainly influenced human-robot team performance and the participants' control allocation strategy: participants allowed the robot to find more targets and missed more robot errors in the high-complexity conditions than in the low-complexity conditions, and found more targets manually in the low-complexity tasks. In addition, the research showed that recording the observed robot performance (the performance of the robot that was witnessed by the participant) can help to identify the cause of contradictory results: participants might not have noticed some of the robot's mistakes and were therefore unable to distinguish between the robot reliability levels. Furthermore, the research provided a foundation of knowledge regarding the real-world application of USAR in the United Kingdom. This included collecting knowledge via an autoethnographic approach about working processes, command structures, currently used technical equipment, and the attitudes of rescuers towards robots. Recommendations about robot behaviour and interface design were also collected throughout the research. The recommendations made in the thesis include considering the overall outcome (mission performance) and the perceived usefulness of the system in order to support the uptake of the technology in real-world applications. In addition, autonomous features might not be appropriate in all USAR applications.
When semi-autonomous robot trials were compared to entirely manual operation, only the robot with an average of 97% reliability significantly increased team performance and reduced the time needed to complete the USAR scenario compared to the manually operated robot. Unfortunately, such high robot success levels do not yet exist in practice. This research has contributed to our understanding of the factors influencing human-robot collaboration in USAR operations, and provided guidance for the next generation of autonomous robots.
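
The distinction between the robot's programmed reliability and the reliability a participant actually experienced can be made concrete. As an illustrative formalisation only (the notation is chosen here and is not taken from the thesis), let $A$ be the robot actions a participant witnessed, $E$ the errors the robot made among them, and $E_{\text{noticed}}$ the errors the participant actually detected:

$$
r_{\text{programmed}} = 1 - \frac{E}{A}, \qquad r_{\text{observed}} = 1 - \frac{E_{\text{noticed}}}{A}.
$$

Whenever $E_{\text{noticed}} < E$, the observed reliability exceeds the programmed one, which is consistent with participants being unable to distinguish between the manipulated reliability levels.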

    Opportunistic communication schemes for unmanned vehicles in urban search and rescue

    In urban search and rescue (USAR) operations, rescuers face a considerable amount of danger, and the use of mobile robots can alleviate this issue. Coordinating the search effort is made more difficult by the communication problems typically faced in these environments, where communication is often restricted. With small numbers of robots, it is necessary to break communication links in order to explore the entire environment. The robots can therefore be viewed as a fragmented ad hoc network, relying on opportunistic contact in order to share data. To minimise overheads when exchanging data, a novel data-exchange algorithm has been created that maintains the propagation speed of flooding while reducing its overheads. Since the rescue workers outside the structure need to know the location of any victims, the task of finding their locations has two parts: 1) locating the victims (Search Time), and 2) getting this data outside the structure (Delay Time). Communication with the outside is assumed to be performed by a static robot designated as the Command Station. Since it is unlikely that there will be sufficient robots to provide full communications coverage of the area, robots that discover victims face the difficult decision of whether to continue searching or return with the victim data. We investigate a variety of search techniques and examine how biological foraging models can help to streamline the search process. We have also implemented an opportunistic network to ensure that data are shared whenever robots come within line of sight of each other or of the Command Station. We examine this trade-off between continuing the search and communicating the results.
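
The data-sharing idea can be pictured with a small sketch. This is not the thesis's actual algorithm; it is a minimal, hypothetical summary-vector exchange (all identifiers are invented here) in which two robots that come into line of sight transfer only the victim records the other side is missing, so repeated contacts spread data as fast as flooding without re-sending records both sides already hold:

```python
from dataclasses import dataclass, field


@dataclass
class VictimRecord:
    record_id: str       # unique ID, e.g. "<robot name>-<sequence number>"
    location: tuple      # estimated (x, y) position inside the structure
    timestamp: float     # time at which the victim was detected


@dataclass
class RobotNode:
    name: str
    store: dict = field(default_factory=dict)   # record_id -> VictimRecord

    def summary_vector(self) -> set:
        """IDs of all victim records this robot currently carries."""
        return set(self.store)

    def exchange(self, other: "RobotNode") -> int:
        """On line-of-sight contact, transfer only the records the peer lacks.

        Returns the number of records moved in both directions; a repeat
        contact with no new discoveries transfers nothing.
        """
        missing_here = other.summary_vector() - self.summary_vector()
        missing_there = self.summary_vector() - other.summary_vector()
        for rid in missing_here:
            self.store[rid] = other.store[rid]
        for rid in missing_there:
            other.store[rid] = self.store[rid]
        return len(missing_here) + len(missing_there)


# Example: robot A meets robot B and only the two unknown records cross the link.
a = RobotNode("A", {"A-1": VictimRecord("A-1", (3.0, 7.0), 12.0)})
b = RobotNode("B", {"B-1": VictimRecord("B-1", (9.0, 2.0), 15.5)})
print(a.exchange(b))   # -> 2
print(a.exchange(b))   # -> 0, nothing new to send
```

Under these assumptions, a robot returning to the Command Station would run the same exchange against the station's store, which is what the Delay Time component above would measure.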

    Self–organised multi agent system for search and rescue operations

    Autonomous multi-agent systems perform inadequately in time-critical missions because they tend to exhaustively explore each location of the field in a single phase without selecting a pertinent strategy. This research aims to solve this problem by introducing a hierarchy of exploration strategies: agents explore an unknown search terrain with complex topology in multiple predefined stages, performing the pertinent strategy depending on their previous observations. Exploration inside unknown, cluttered, and confined environments is one of the main challenges for search and rescue robots inside collapsed buildings. In this regard, we introduce a novel exploration algorithm for multi-agent systems that is able to perform a fast, fair, and thorough search as well as resolve multi-agent traffic congestion. Our simulations have been performed on different test environments in which the complexity of the search field is defined by the fractal dimension of Brownian movements. The exploration stages are depicted as the arenas defined by the National Institute of Standards and Technology (NIST), which introduced three scenarios of progressive difficulty: yellow, orange, and red. The main focus of this research is the red arena, which has the least structure and poses the greatest challenges to robot nimbleness.
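
The work quantifies field complexity via the fractal dimension of Brownian movements. As a generic illustration of the fractal-dimension idea only (not the paper's actual procedure, and with all identifiers invented here), the sketch below applies the standard box-counting estimate to a binary occupancy grid:

```python
# Hedged illustration: estimate the fractal dimension of a 2-D occupancy map
# by box counting, i.e. the slope of log N(s) versus log(1/s), where N(s) is
# the number of s-by-s boxes containing at least one occupied cell.

import numpy as np


def box_counting_dimension(occupied: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """occupied: 2-D boolean array, True where the map contains obstacles."""
    counts = []
    for s in sizes:
        # Pad so the grid divides evenly into s x s boxes.
        h = int(np.ceil(occupied.shape[0] / s)) * s
        w = int(np.ceil(occupied.shape[1] / s)) * s
        padded = np.zeros((h, w), dtype=bool)
        padded[:occupied.shape[0], :occupied.shape[1]] = occupied
        # Count boxes that contain at least one occupied cell.
        boxes = padded.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Slope of the log-log fit is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope


# Example: a dense random clutter map has dimension close to 2,
# while a sparse line-like structure falls closer to 1.
rng = np.random.default_rng(0)
grid = rng.random((128, 128)) < 0.3
print(round(box_counting_dimension(grid), 2))
```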

    INTEGRATION OF MULTIPLE UNMANNED SYSTEMS IN AN URBAN SEARCH AND RESCUE ENVIRONMENT

    In view of the local, regional, and global security trends over the past decade, the threats of disaster to the populace inhabiting urbanized areas are real and there is a need for increased vigilance. There can be multiple causes of urban disaster: natural disasters, terrorist attacks, and urban warfare are all viable. This thesis focused on the event in which an urban search and rescue operation is required in the aftermath of terrorist activity. Systems engineering techniques were utilized to analyze the problem space and to suggest a plausible solution. Application of unmanned vehicles in the scenario enhanced the reconnaissance, intelligence, and surveillance capabilities of the responding forces while limiting the exposure risk of personnel. One of the many challenges facing unmanned systems in a cluttered environment is the capability to rapidly generate reactive obstacle-avoidance trajectories. A direct method of the calculus of variations was applied so that the unmanned platforms could achieve mission objectives collaboratively and perform real-time trajectory optimization for collision-free flight. Dynamic models were created to enable simulated operations within the thesis design scenario. Experiments conducted in an indoor lab verified the unmanned systems' ability to avoid obstacles and carry out collaborative missions successfully.
    http://archive.org/details/integrationofmul1094532805
    Civilian, Defence Science and Technology Agency, Singapore
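
The direct-method approach mentioned above can be summarised compactly. As an illustrative formulation only (the symbols are chosen here, and the thesis's exact cost terms and constraints may differ), the trajectory is parameterised by a finite coefficient vector $\mathbf{c}$, which turns the variational problem into a nonlinear program:

$$
\min_{\mathbf{c}} \; J(\mathbf{c}) = \int_{t_0}^{t_f} L\big(\mathbf{x}(t;\mathbf{c}), \dot{\mathbf{x}}(t;\mathbf{c})\big)\, dt
\quad \text{subject to} \quad
\mathbf{x}(t_0;\mathbf{c}) = \mathbf{x}_0, \;\;
\mathbf{x}(t_f;\mathbf{c}) = \mathbf{x}_f, \;\;
\lVert \mathbf{x}(t;\mathbf{c}) - \mathbf{o}_j \rVert \ge r_j \;\; \forall j,\, t,
$$

where $\mathbf{x}(t;\mathbf{c})$ is the parameterised flight path, $L$ is a running cost (e.g. time or control effort), and each obstacle $j$ must be kept at a clearance radius $r_j$ from its centre $\mathbf{o}_j$. Because only the finitely many coefficients in $\mathbf{c}$ are optimised, the problem can be re-solved quickly enough for reactive, real-time obstacle avoidance.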

    Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    In the modern information age, the quantity and complexity of spatiotemporal data are increasing both rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data will result in information clutter and overload, as well as a high cognitive load for users of these systems. To meet future safety-critical situations, enhance time-critical decision-making missions in dynamic environments, and support the easy and effective managing, browsing, and searching of spatiotemporal data, we propose an asynchronous, scalable, and comprehensive method for organizing, displaying, and interacting with spatiotemporal data that allows operators to navigate through the spatiotemporal information rather than through the environments being examined, and to maintain all necessary global and local situation awareness. To empirically prove the viability of our approach, we developed the Event-Lens system, which generates asynchronous prioritized images to provide the operator with a manageable, comprehensive view of the information collected by multiple sensors. A user study and interaction-mode experiments were designed and conducted. The Event-Lens system was found to have a consistent advantage in multiple moving-target marking-task performance measures. It was also found that participants' attentional control, spatial ability, and action video gaming experience affected their overall performance.
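
The asynchronous, prioritized presentation of sensor imagery can be pictured as a priority queue of events. The following is only a minimal, hypothetical sketch (class and field names are invented here, not the Event-Lens API): sensor feeds push events with a priority, and the operator display pops the most urgent event that has not gone stale, rather than streaming every frame from every feed:

```python
import heapq
import itertools
import time


class EventLensQueue:
    def __init__(self, max_age_s: float = 30.0):
        self._heap = []                  # entries: (priority, timestamp, seq, event)
        self._seq = itertools.count()    # tie-breaker so equal priorities stay FIFO
        self.max_age_s = max_age_s

    def push(self, priority: int, sensor_id: str, snapshot) -> None:
        """Lower priority value = more urgent (e.g. 0 for a suspected victim)."""
        heapq.heappush(self._heap,
                       (priority, time.time(), next(self._seq),
                        {"sensor": sensor_id, "snapshot": snapshot}))

    def next_for_display(self):
        """Return the most urgent non-stale event, or None if nothing remains."""
        now = time.time()
        while self._heap:
            priority, ts, _, event = heapq.heappop(self._heap)
            if now - ts <= self.max_age_s:
                return priority, event
        return None


# Example: a possible victim detection outranks routine imagery from another feed.
q = EventLensQueue()
q.push(5, "uav-2", "routine corridor image")
q.push(0, "ugv-1", "possible victim detected")
print(q.next_for_display())   # -> (0, {'sensor': 'ugv-1', ...})
```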

    Collaborative Multi-Robot Search and Rescue: Planning, Coordination, Perception, and Active Vision

    Search and rescue (SAR) operations can benefit significantly from the support of autonomous or teleoperated robots and multi-robot systems. These can aid in mapping and situational assessment, monitoring and surveillance, establishing communication networks, or searching for victims. This paper provides a review of multi-robot systems supporting SAR operations, with system-level considerations and a focus on the algorithmic perspectives of multi-robot coordination and perception. This is, to the best of our knowledge, the first survey paper to cover (i) heterogeneous SAR robots in different environments and (ii) active perception in multi-robot systems, while (iii) giving two complementary points of view from the multi-agent perception and control perspectives. We also discuss the most significant open research questions: shared autonomy, sim-to-real transferability of existing methods, awareness of victims' conditions, coordination and interoperability in heterogeneous multi-robot systems, and active perception. The different topics in the survey are put in the context of the challenges and constraints that various types of robots (ground, aerial, surface, or underwater) encounter in different SAR environments (maritime, urban, wilderness, or other post-disaster scenarios). The objective of this survey is to serve as an entry point to the various aspects of multi-robot SAR systems for researchers in both the machine learning and control fields, by giving a global overview of the main approaches being taken in the SAR robotics area.

    General Concepts for Human Supervision of Autonomous Robot Teams

    For many dangerous, dirty, or dull tasks, such as those in search and rescue missions, the deployment of autonomous teams of robots can be beneficial for several reasons. First, robots can replace humans in the workspace. Second, autonomous robots reduce the workload of a human compared to teleoperated robots, so multiple robots can in principle be supervised by a single human. Third, teams of robots allow distributed operation in time and space. This thesis investigates concepts for efficiently enabling a human to supervise and support an autonomous robot team, as common concepts for the teleoperation of robots do not apply because of the high mental workload. The goal is to find a way between the two extremes of full autonomy and pure teleoperation by allowing the robots' level of autonomy to be adapted to the current situation and the needs of the human supervisor. The methods presented in this thesis make use of the complementary strengths of humans and robots, letting the robots do what they are good at while the human supports the robots in situations that correspond to human strengths. To enable this type of collaboration between a human and a robot team, the human needs adequate knowledge about the current state of the robots, the environment, and the mission. For this purpose, the concept of situation overview (SO) has been developed in this thesis, which is composed of two components: robot SO and mission SO. Robot SO includes information about the state and activities of each single robot in the team, while mission SO deals with the progress of the mission and the cooperation between the robots. To obtain SO, a new event-based communication concept is presented in this thesis that allows the robots to aggregate information into discrete events using methods from complex event processing. The quality and quantity of the events that are actually sent to the supervisor can be adapted at runtime by defining positive and negative policies for sending (or not sending) events that fulfil specific criteria. This reduces the required communication bandwidth compared to sending all available data. Based on SO, the supervisor is enabled to interact efficiently with the robot team. Interactions can be initiated either by the human or by the robots. The developed concept for robot-initiated interactions is based on queries that allow the robots to transfer decisions to another process or to the supervisor. Various modes for answering the queries, ranging from fully autonomous to purely human decisions, allow the robots' level of autonomy to be adapted at runtime. Human-initiated interactions are limited to high-level commands, whereas interactions on the action level (e.g., teleoperation) are avoided, to account for the specific strengths of humans and robots. These commands can in principle be applied to quite general classes of task allocation methods for autonomous robot teams, e.g., in terms of specific restrictions, which are introduced into the system as constraints. In that way, the desired allocations emerge implicitly from the introduced constraints, and the task allocation method does not need to be aware of the human supervisor in the loop. This method is applicable to different task allocation approaches, e.g., instantaneous or time-extended task assignments, and centralized or distributed algorithms.
The presented methods are evaluated in a number of different experiments with physical and simulated scenarios from urban search and rescue as well as robot soccer, and during robot competitions. The results show that with these methods a human supervisor can significantly improve the performance of the robot team.
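
The positive/negative event policy idea lends itself to a small illustration. The sketch below is not the thesis's implementation; it is a minimal, hypothetical filter (all identifiers are invented here, and the choice that positive policies override negative ones is an assumption) showing how events that fulfil certain criteria can be forced through or suppressed at runtime to trade situation overview against communication bandwidth:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Event:
    robot: str
    category: str      # e.g. "battery", "victim_found", "task_started"
    severity: int      # 0 = routine info ... 3 = critical
    payload: dict = field(default_factory=dict)


Policy = Callable[[Event], bool]


@dataclass
class EventFilter:
    positive: List[Policy] = field(default_factory=list)  # force sending
    negative: List[Policy] = field(default_factory=list)  # suppress sending

    def should_send(self, event: Event) -> bool:
        # Assumed precedence: a matching positive policy always wins;
        # otherwise an event is sent unless a negative policy suppresses it.
        if any(p(event) for p in self.positive):
            return True
        if any(n(event) for n in self.negative):
            return False
        return True


# Example: always forward victim reports, drop routine low-severity telemetry.
filt = EventFilter(
    positive=[lambda e: e.category == "victim_found"],
    negative=[lambda e: e.severity == 0],
)
print(filt.should_send(Event("ugv-3", "victim_found", 0)))   # True
print(filt.should_send(Event("uav-1", "battery", 0)))        # False
```

Both policy lists could be edited by the supervisor during the mission, which is how the quality and quantity of transmitted events would be adapted at runtime under the assumptions of this sketch.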